{"samples": [{"_id": {"$oid": "69092ad56ff4d31845e5a52d"}, "filepath": "data/2510.11296v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992742706764627, "type": "Poster", "name": "$\\Delta \\mathrm{Energy}$: Optimizing Energy Change During Vision-Language Alignment Improves both OOD Detection and OOD Generalization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116579", "abstract": "Recent approaches for vision-language models (VLMs) have shown remarkable success in achieving fast downstream adaptation. When applied to real-world downstream tasks, VLMs inevitably encounter both the in-distribution (ID) data and out-of-distribution (OOD) data. The OOD datasets often include both covariate shifts (e.g., known classes with changes in image styles) and semantic shifts (e.g., test-time unseen classes). This highlights the importance of improving VLMs' generalization ability to covariate-shifted OOD data, while effectively detecting open-set semantic-shifted OOD classes. In this paper, inspired by the substantial energy change observed in closed-set data when re-aligning vision-language modalities\u2014specifically by directly reducing the maximum cosine similarity to a low value\u2014we introduce a novel OOD score, named $\\Delta\\mathrm{Energy}$. $\\Delta\\mathrm{Energy}$ significantly outperforms the vanilla energy-based OOD score and provides a more reliable approach for OOD detection. Furthermore, $\\Delta\\mathrm{Energy}$ can simultaneously improve OOD generalization under covariate shifts, which is achieved by lower-bound maximization for $\\Delta\\mathrm{Energy}$ (termed EBM). EBM is theoretically proven to not only enhance OOD detection but also yields a domain-consistent Hessian, which serves as a strong indicator for OOD generalization. Based on this finding, we developed a unified fine-tuning framework that allows for improving VLMs' robustness in both OOD generalization and OOD detection. 
Extensive experiments on challenging OOD detection and generalization benchmarks demonstrate the superiority of our method, outperforming recent approaches by 10\\%\u201325\\% in AUROC.", "arxiv_id": "2510.11296v2", "arxiv_authors": ["Lin Zhu", "Yifeng Yang", "Xinbing Wang", "Qinying Gu", "Nanyang Ye"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0bf"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.443Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1173920, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a52e"}, "filepath": "data/2510.18637v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995852852275053, "type": "Poster", "name": "$\\epsilon$-Seg: Sparsely Supervised Semantic Segmentation of Microscopy Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115837", "abstract": "Semantic segmentation of electron microscopy (EM) images of biological samples remains a challenge in the life sciences. EM data captures details of biological structures, sometimes with such complexity that even human observers can find it overwhelming. Here we introduce $\\epsilon$-Seg, a method based on hierarchical variational autoencoders (HVAEs), employing center-region masking, sparse label contrastive learning (CL), a Gaussian Mixture Model (GMM) prior, and clustering-free label prediction. Center-region masking and the inpainting loss encourage the model to learn robust and representative embeddings to distinguish the desired classes, even if training labels are sparse ($0.05$\\% of the total image data or less). Additionally, we propose an entropy-based loss that can improve segmentation quality when fewer training labels are available (i.e. on $0.0025$\\% of the data). For optimal performance, we employ CL and a GMM prior to shape the latent space of the HVAE such that encoded input patches tend to cluster w.r.t. the semantic classes we wish to distinguish. Finally, instead of clustering latent embeddings for semantic segmentation, we propose a semantic segmentation head composed of MLP and FiLM layers to directly predict class labels from latent embeddings. We show empirical results of $\\epsilon$-Seg and baseline methods on $2$ dense EM datasets of biological tissues and demonstrate the applicability of our method also on fluorescence microscopy data. 
Our results show that $\\epsilon$-Seg is capable of achieving competitive semi-supervised segmentation results on complex biological image data, even if only limited amounts of training labels are available.", "arxiv_id": "2510.18637v1", "arxiv_authors": ["Sheida Rahnamai Kordasiabi", "Damian Dalle Nogare", "Florian Jug"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.444Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1051878, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a52f"}, "filepath": "data/2410.06126v4.png", "tags": [], "_media_type": "image", "_rand": 0.9997212046787198, "type": "Poster", "name": "$\\mathcal{X}^2$-DFD: A framework for e$\\mathcal{X}$plainable and e$\\mathcal{X}$tendable Deepfake Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115622", "abstract": "This paper proposes **$\\mathcal{X}^2$-DFD**, an **e$\\mathcal{X}$plainable** and **e$\\mathcal{X}$tendable** framework based on multimodal large-language models (MLLMs) for deepfake detection, consisting of three key stages. The first stage, *Model Feature Assessment*, systematically evaluates the detectability of forgery-related features for the MLLM, generating a prioritized ranking of features based on their intrinsic importance to the model. The second stage, *Explainable Dataset Construction*, consists of two key modules: *Strong Feature Strengthening*, which is designed to enhance the model\u2019s existing detection and explanation capabilities by reinforcing its well-learned features, and *Weak Feature Supplementing*, which addresses gaps by integrating specific feature detectors (e.g., low-level artifact analyzers) to compensate for the MLLM\u2019s limitations. The third stage, Fine-tuning and Inference, involves fine-tuning the MLLM on the constructed dataset and deploying it for final detection and explanation. By integrating these three stages, our approach enhances the MLLM's strengths while supplementing its weaknesses, ultimately improving both the detectability and explainability. Extensive experiments and ablations, followed by a comprehensive human study, validate the improved performance of our approach compared to the original MLLMs. More encouragingly, our framework is designed to be plug-and-play, allowing it to seamlessly integrate with future more advanced MLLMs and specific feature detectors, leading to continual improvement and extension to face the challenges of rapidly evolving deepfakes.", "arxiv_id": "2410.06126v4", "arxiv_authors": ["Yize Chen", "Zhiyuan Yan", "Guangliang Cheng", "Kangran Zhao", "Siwei Lyu", "Baoyuan Wu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.445Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1085914, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a530"}, "filepath": "data/2505.17423v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990210096294726, "type": "Poster", "name": "$\\mathtt{VIBE}$: Video-to-Text Information Bottleneck 
Evaluation for TL;DR", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119324", "abstract": "Many decision-making tasks, where both accuracy and efficiency matter, still require human supervision. For example, tasks like traffic officers reviewing hour-long dashcam footage or researchers screening conference videos can benefit from concise summaries that reduce cognitive load and save time. Yet current vision-language models (VLMs) often produce verbose, redundant outputs that hinder task performance. Existing video caption evaluation depends on costly human annotations and overlooks the summaries' utility in downstream tasks. We address these gaps with $\\underline{\\textbf{V}}$ideo-to-text $\\underline{\\textbf{I}}$nformation $\\underline{\\textbf{B}}$ottleneck $\\underline{\\textbf{E}}$valuation (VIBE), an annotation-free method that scores VLM outputs using two metrics: $\\textit{grounding}$ (how well the summary aligns with visual content) and $\\textit{utility}$ (how informative it is for the task). VIBE selects from randomly sampled VLM outputs by ranking them according to the two scores to support effective human decision-making. Human studies on $\\texttt{LearningPaper24}$, $\\texttt{SUTD-TrafficQA}$, and $\\texttt{LongVideoBench}$ show that summaries selected by VIBE consistently improve performance\u2014boosting task accuracy by up to $61.23$% and reducing response time by $75.77$% compared to naive VLM summaries or raw video.", "arxiv_id": "2505.17423v3", "arxiv_authors": ["Shenghui Chen", "Po-han Li", "Sandeep Chinchali", "Ufuk Topcu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.445Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1084858, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a531"}, "filepath": "data/2510.11321v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999324268064901, "type": "Poster", "name": "$\\textit{HiMaCon:}$ Discovering Hierarchical Manipulation Concepts from Unlabeled Multi-Modal Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120127", "abstract": "Effective generalization in robotic manipulation requires representations that capture invariant patterns of interaction across environments and tasks. We present a self-supervised framework for learning hierarchical manipulation concepts that encode these invariant patterns through cross-modal sensory correlations and multi-level temporal abstractions without requiring human annotation. Our approach combines a cross-modal correlation network that identifies persistent patterns across sensory modalities with a multi-horizon predictor that organizes representations hierarchically across temporal scales. Manipulation concepts learned through this dual structure enable policies to focus on transferable relational patterns while maintaining awareness of both immediate actions and longer-term goals. Empirical evaluation across simulated benchmarks and real-world deployments demonstrates significant performance improvements with our concept-enhanced policies. Analysis reveals that the learned concepts resemble human-interpretable manipulation primitives despite receiving no semantic supervision. 
This work advances both the understanding of representation learning for manipulation and provides a practical approach to enhancing robotic performance in complex scenarios.", "arxiv_id": "2510.11321v1", "arxiv_authors": ["Ruizhe Liu", "Pei Zhou", "Qian Luo", "Li Sun", "Jun Cen", "Yibing Song", "Yanchao Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c3"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.445Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1753488, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a532"}, "filepath": "data/2508.04016v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991526144207575, "type": "Poster", "name": "$\\text{S}^2$Q-VDiT: Accurate Quantized Video Diffusion Transformer with Salient Data and Sparse Token Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116949", "abstract": "Diffusion transformers have emerged as the mainstream paradigm for video generation models. However, the use of up to billions of parameters incurs significant computational costs. Quantization offers a promising solution by reducing memory usage and accelerating inference. Nonetheless, we observe that the joint modeling of spatial and temporal information in video diffusion models (V-DMs) leads to extremely long token sequences, which introduces high calibration variance and learning challenges. To address these issues, we propose **$\\text{S}^2$Q-VDiT**, a post-training quantization framework for V-DMs that leverages **S**alient data and **S**parse token distillation. During the calibration phase, we identify that quantization performance is highly sensitive to the choice of calibration data. To mitigate this, we introduce *Hessian-aware Salient Data Selection*, which constructs high-quality calibration datasets by considering both diffusion and quantization characteristics unique to V-DMs. To tackle the learning challenges, we further analyze the sparse attention patterns inherent in V-DMs. Based on this observation, we propose *Attention-guided Sparse Token Distillation*, which exploits token-wise attention distributions to emphasize tokens that are more influential to the model's output. 
Under W4A6 quantization, $\\text{S}^2$Q-VDiT achieves lossless performance while delivering $3.9\\times$ model compression and $1.3\\times$ inference acceleration.", "arxiv_id": "2508.04016v3", "arxiv_authors": ["Weilun Feng", "Haotong Qin", "Chuanguang Yang", "Xiangqi Li", "Han Yang", "Yuqi Li", "Zhulin An", "Libo Huang", "Michele Magno", "Yongjun Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.445Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3607096, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a533"}, "filepath": "data/2503.16422v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999047895086989, "type": "Poster", "name": "1000+ FPS 4D Gaussian Splatting for Dynamic Scene Rendering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117408", "abstract": "4D Gaussian Splatting (4DGS) has recently gained considerable attention as a method for reconstructing dynamic scenes. Despite achieving superior quality, 4DGS typically requires substantial storage and suffers from slow rendering speed. In this work, we delve into these issues and identify two key sources of temporal redundancy. (Q1) \\textbf{Short-Lifespan Gaussians}: 4DGS uses a large portion of Gaussians with short temporal span to represent scene dynamics, leading to an excessive number of Gaussians. (Q2) \\textbf{Inactive Gaussians}: When rendering, only a small subset of Gaussians contributes to each frame. Despite this, all Gaussians are processed during rasterization, resulting in redundant computation overhead. To address these redundancies, we present \\textbf{4DGS-1K}, which runs at over 1000 FPS on modern GPUs. For Q1, we introduce the Spatial-Temporal Variation Score, a new pruning criterion that effectively removes short-lifespan Gaussians while encouraging 4DGS to capture scene dynamics using Gaussians with longer temporal spans. For Q2, we store a mask for active Gaussians across consecutive frames, significantly reducing redundant computations in rendering. Compared to vanilla 4DGS, our method achieves a $41\\times$ reduction in storage and $9\\times$ faster rasterization speed on complex dynamic scenes, while maintaining comparable visual quality.", "arxiv_id": "2503.16422v1", "arxiv_authors": ["Yuheng Yuan", "Qiuhong Shen", "Xingyi Yang", "Xinchao Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.445Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2081120, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a534"}, "filepath": "data/2505.16969v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999946683847589, "type": "Poster", "name": "3D Equivariant Visuomotor Policy Learning via Spherical Projection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116373", "abstract": "Equivariant models have recently been shown to improve the data efficiency of diffusion policy by a significant margin. 
However, prior work that explored this direction focused primarily on point cloud inputs generated by multiple cameras fixed in the workspace. This type of point cloud input is not compatible with the now-common setting where the primary input modality is an eye-in-hand RGB camera like a GoPro. This paper closes this gap by incorporating into the diffusion policy model a process that projects features from the 2D RGB camera image onto a sphere. This enables us to reason about symmetries in $\\mathrm{SO}(3)$ without explicitly reconstructing a point cloud. We perform extensive experiments in both simulation and the real world that demonstrate that our method consistently outperforms strong baselines in terms of both performance and sample efficiency. Our work is the first $\\mathrm{SO}(3)$-equivariant policy learning framework for robotic manipulation that works using only monocular RGB inputs.", "arxiv_id": "2505.16969v2", "arxiv_authors": ["Boce Hu", "Dian Wang", "David Klee", "Heng Tian", "Xupeng Zhu", "Haojie Huang", "Robert Platt", "Robin Walters"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c6"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.445Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1642421, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a535"}, "filepath": "data/2509.16423v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990649146615459, "type": "Poster", "name": "3D Gaussian Flats: Hybrid 2D/3D Photometric Scene Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115491", "abstract": "Recent advances in radiance fields and novel view synthesis enable creation of realistic digital twins from photographs. However, current methods struggle with flat, texture-less surfaces, creating uneven and semi-transparent reconstructions, due to an ill-conditioned photometric reconstruction objective. Surface reconstruction methods solve this issue but sacrifice visual quality. We propose a novel hybrid 2D/3D representation that jointly optimizes constrained planar (2D) Gaussians for modeling flat surfaces and freeform (3D) Gaussians for the rest of the scene. Our end-to-end approach dynamically detects and refines planar regions, improving both visual fidelity and geometric accuracy. 
It achieves state-of-the-art depth estimation on ScanNet++ and ScanNetv2, and excels at mesh extraction without overfitting to a specific camera model, showing its effectiveness in producing high-quality reconstruction of indoor scenes.", "arxiv_id": "2509.16423v2", "arxiv_authors": ["Maria Taktasheva", "Lily Goli", "Alessandro Fiorini", "Zhen Li", "Daniel Rebain", "Andrea Tagliasacchi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.445Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3412535, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a536"}, "filepath": "data/2505.22657v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993877683493658, "type": "Poster", "name": "3DLLM-Mem: Long-Term Spatial-Temporal Memory for Embodied 3D Large Language Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115898", "abstract": "Humans excel at performing complex tasks by leveraging long-term memory across temporal and spatial experiences. In contrast, current Large Language Models (LLMs) struggle to effectively plan and act in dynamic, multi-room 3D environments. We posit that part of this limitation is due to the lack of proper 3D spatial-temporal memory modeling in LLMs. To address this, we first introduce 3DMem-Bench, a comprehensive benchmark comprising over 26,000 trajectories and 2,892 embodied tasks, question-answering and captioning, designed to evaluate an agent's ability to reason over long-term memory in 3D environments. Second, we propose 3DLLM-Mem, a novel dynamic memory management and fusion model for embodied spatial-temporal reasoning and actions in LLMs. Our model uses working memory tokens, which represent current observations, as queries to selectively attend to and fuse the most useful spatial and temporal features from episodic memory, which stores past observations and interactions. 
Our approach allows the agent to focus on task-relevant information while maintaining memory efficiency in complex, long-horizon environments. Experimental results demonstrate that 3DLLM-Mem achieves state-of-the-art performance across various tasks, outperforming the strongest baselines by 16.5\\% in success rate on 3DMem-Bench's most challenging in-the-wild embodied tasks.", "arxiv_id": "2505.22657v1", "arxiv_authors": ["Wenbo Hu", "Yining Hong", "Yanjun Wang", "Leison Gao", "Zibu Wei", "Xingcheng Yao", "Nanyun Peng", "Yonatan Bitton", "Idan Szpektor", "Kai-Wei Chang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.451Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3403910, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a537"}, "filepath": "data/2503.18853v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993862587753921, "type": "Poster", "name": "3D-OTT: Texture Transfer for 3D Objects from a Single Reference Image", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119398", "abstract": "Image-based 3D texture transfer from a single 2D reference image enables practical customization of 3D object appearances with minimal manual effort. Adapted 2D editing and text-driven 3D editing approaches can serve this purpose. However, 2D editing typically involves frame-by-frame manipulation, often resulting in inconsistencies across views, while text-driven 3D editing struggles to preserve texture characteristics from reference images. To tackle these challenges, we introduce 3D-OTT, a 3D Object Texture Transfer method based on a single reference image, integrating: 1) progressive generation, 2) view-consistency gradient guidance, and 3) prompt-tuned gradient guidance. To ensure view consistency, progressive generation starts by transferring texture from the reference image and gradually propagates it to adjacent views. View-consistency gradient guidance further reinforces coherence by conditioning the generation model on feature differences between consistent and inconsistent outputs. To preserve texture characteristics, prompt-tuning-based gradient guidance learns a token that describes differences between original and reference textures, guiding the transfer for faithful texture preservation across views. Overall, 3D-OTT combines these strategies to achieve effective texture transfer while maintaining structural coherence across viewpoints. Extensive qualitative and quantitative evaluations confirm that our three components enable convincing and effective 2D-to-3D texture transfer. Code will be available upon acceptance.", "arxiv_id": "2503.18853v2", "arxiv_authors": ["Xiao Cao", "Beibei Lin", "Bo Wang", "Zhiyong Huang", "Robby T. 
Tan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0c9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.453Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4067209, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a538"}, "filepath": "data/2506.11147v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990633909061304, "type": "Poster", "name": "3D-RAD: A Comprehensive 3D Radiology Med-VQA Dataset with Multi-Temporal Analysis and Diverse Diagnostic Tasks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121618", "abstract": "Medical Visual Question Answering (Med-VQA) holds significant potential for clinical decision support, yet existing efforts primarily focus on 2D imaging with limited task diversity. This paper presents 3D-RAD, a large-scale dataset designed to advance 3D Med-VQA using radiology CT scans. The 3D-RAD dataset encompasses six diverse VQA tasks: anomaly detection, image observation, medical computation, existence detection, static temporal diagnosis, and longitudinal temporal diagnosis. It supports both open- and closed-ended questions while introducing complex reasoning challenges, including computational tasks and multi-stage temporal analysis, to enable comprehensive benchmarking. Extensive evaluations demonstrate that existing vision-language models (VLMs), especially medical VLMs exhibit limited generalization, particularly in multi-temporal tasks, underscoring the challenges of real-world 3D diagnostic reasoning. To drive future advancements, we release a high-quality training set 3D-RAD-T of 136,195 expert-aligned samples, showing that fine-tuning on this dataset could significantly enhance model performance. Our dataset and code are publicly available at https://github.com/Tang-xiaoxiao/M3D-RAD, aiming to catalyze multimodal medical AI research and establish a robust foundation for 3D medical visual understanding.", "arxiv_id": "2506.11147v1", "arxiv_authors": ["Xiaotang Gai", "Jiaxiang Liu", "Yichen Li", "Zijie Meng", "Jian Wu", "Zuozhu Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0ca"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061003, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a539"}, "filepath": "data/2505.13061v4.png", "tags": [], "_media_type": "image", "_rand": 0.9992982110904808, "type": "Poster", "name": "3D Visual Illusion Depth Estimation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115511", "abstract": "3D visual illusion is a perceptual phenomenon where a two-dimensional plane is manipulated to simulate three-dimensional spatial relationships, making a flat artwork or object look three-dimensional in the human visual system. In this paper, we reveal that the machine visual system is also seriously fooled by 3D visual illusions, including monocular and binocular depth estimation. 
In order to explore and analyze the impact of 3D visual illusion on depth estimation, we collect a large dataset containing almost 3k scenes and 200k images to train and evaluate SOTA monocular and binocular depth estimation methods. We also propose a 3D visual illusion depth estimation framework that uses common sense from the vision language model to adaptively fuse depth from binocular disparity and monocular depth. Experiments show that SOTA monocular, binocular, and multi-view depth estimation approaches are all fooled by various 3D visual illusions, while our method achieves SOTA performance.", "arxiv_id": "2505.13061v4", "arxiv_authors": ["Chengtang Yao", "Zhidan Liu", "Jiaxi Zeng", "Lidong Yu", "Yuwei Wu", "Yunde Jia"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0cb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1085598, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a53a"}, "filepath": "data/2509.17513v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997445612581606, "type": "Poster", "name": "4DGCPro: Efficient Hierarchical 4D Gaussian Compression for Progressive Volumetric Video Streaming", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118452", "abstract": "Achieving seamless viewing of high-fidelity volumetric video, comparable to 2D video experiences, remains an open challenge. Existing volumetric video compression methods either lack the flexibility to adjust quality and bitrate within a single model for efficient streaming across diverse networks and devices, or struggle with real-time decoding and rendering on lightweight mobile platforms. To address these challenges, we introduce 4DGCPro, a novel hierarchical 4D Gaussian compression framework that facilitates real-time mobile decoding and high-quality rendering via progressive volumetric video streaming in a single bitstream. Specifically, we propose a perceptually-weighted and compression-friendly hierarchical 4D Gaussian representation with motion-aware adaptive grouping to reduce temporal redundancy, preserve coherence, and enable scalable multi-level detail streaming. Furthermore, we present an end-to-end entropy-optimized training scheme, which incorporates layer-wise rate-distortion (RD) supervision and attribute-specific entropy modeling for efficient bitstream generation. 
Extensive experiments show that 4DGCPro enables flexible quality and variable bitrate within a single model, achieving real-time decoding and rendering on mobile devices while outperforming existing methods in RD performance across multiple datasets.", "arxiv_id": "2509.17513v2", "arxiv_authors": ["Zihan Zheng", "Zhenlong Wu", "Houqiang Zhong", "Yuan Tian", "Ning Cao", "Lan Xu", "Jiangchao Yao", "Xiaoyun Zhang", "Qiang Hu", "Wenjun Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0cc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2328938, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a53b"}, "filepath": "data/2506.08015v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997715981965208, "type": "Poster", "name": "4DGT: Learning a 4D Gaussian Transformer Using Real-World Monocular Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115879", "abstract": "We propose 4DGT, a 4D Gaussian-based Transformer model for dynamic scene reconstruction, trained entirely on real-world monocular posed videos. Using 4D Gaussian as an inductive bias, 4DGT unifies static and dynamic components, enabling the modeling of complex, time-varying environments with varying object lifespans. We proposed a novel density control strategy in training, which enables our 4DGT to handle longer space-time input. Our model processes 64 consecutive posed frames in a rolling-window fashion, predicting consistent 4D Gaussians in the scene. Unlike optimization-based methods, 4DGT performs purely feed-forward inference, reducing reconstruction time from hours to seconds and scaling effectively to long video sequences. Trained only on large-scale monocular posed video datasets, 4DGT can outperform prior Gaussian-based networks significantly in real-world videos and achieve on-par accuracy with optimization-based methods on cross-domain videos.", "arxiv_id": "2506.08015v1", "arxiv_authors": ["Zhen Xu", "Zhengqin Li", "Zhao Dong", "Xiaowei Zhou", "Richard Newcombe", "Zhaoyang Lv"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0cd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3674792, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a53c"}, "filepath": "data/2506.18890v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996259442667409, "type": "Poster", "name": "4D-LRM: Large Space-Time Reconstruction Model From and To Any View at Any Time", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120016", "abstract": "Can we scale 4D pretraining to learn a general space-time representation that reconstructs an object from a few views at some times to any view at any time? We introduce 4D-LRM, the first large-scale 4D reconstruction model that takes input from unconstrained views and timestamps and renders arbitrary novel view-time combinations. 
Unlike prior 4D approaches---optimization-based, geometry-based, or generative---that struggle with efficiency, generalization, or faithfulness, 4D-LRM learns a unified space-time representation and directly predicts per-pixel 4D Gaussian primitives from posed image tokens across time, enabling fast, high-quality rendering at, in principle, infinite frame rate. 4D-LRM generalizes to novel objects, interpolates across time, and handles diverse camera setups. It reconstructs 24-frame sequences in 1.5 seconds on a single A100 GPU. Our results demonstrate that scaling spatiotemporal pretraining enables accurate and efficient 4D reconstruction.", "arxiv_id": "2506.18890v1", "arxiv_authors": ["Ziqiao Ma", "Xuweiyi Chen", "Shoubin Yu", "Sai Bi", "Kai Zhang", "Chen Ziwen", "Sihan Xu", "Jianing Yang", "Zexiang Xu", "Kalyan Sunkavalli", "Mohit Bansal", "Joyce Chai", "Hao Tan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0ce"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1880967, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a53d"}, "filepath": "data/2506.22242v1.png", "tags": [], "_media_type": "image", "_rand": 0.999999593084745, "type": "Poster", "name": "4D-VLA: Spatiotemporal Vision-Language-Action Pretraining with Cross-Scene Calibration", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115166", "abstract": "Leveraging diverse robotic data for pretraining remains a critical challenge. Existing methods typically model the dataset\u2019s action distribution using simple observations as inputs. However, these inputs are often incomplete, resulting in a dispersed conditional action distribution\u2014an issue we refer to as coordinate system chaos and state chaos. This inconsistency significantly hampers pretraining efficiency. To address this, we propose 4D-VLA, a novel approach that effectively integrates 4D information into the input to mitigate these sources of chaos. Our model introduces depth and temporal information into visual features with sequential RGB-D inputs, aligning the coordinate systems of the robot and the scene. This alignment endows the model with strong spatiotemporal reasoning capabilities while minimizing training overhead. Additionally, we introduce Memory bank sampling, a frame sampling strategy designed to extract informative frames from historical images, further improving effectiveness and efficiency. Experimental results demonstrate that our pretraining method and architectural components substantially enhance model performance. In both simulated and real-world experiments, our model achieves a significant increase in success rate over OpenVLA. To further assess spatial perception and generalization to novel views, we introduce MV-Bench, a multi-view simulation benchmark. 
Our model consistently outperforms existing methods, demonstrating stronger spatial understanding and adaptability.", "arxiv_id": "2506.22242v1", "arxiv_authors": ["Jiahui Zhang", "Yurui Chen", "Yueming Xu", "Ze Huang", "Yanpeng Zhou", "Yu-Jie Yuan", "Xinyue Cai", "Guowei Huang", "Xingyue Quan", "Hang Xu", "Li Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0cf"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1990854, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a53e"}, "filepath": "data/2507.07105v1.png", "tags": [], "_media_type": "image", "_rand": 0.999550599274987, "type": "Poster", "name": "4KAgent: Agentic Any Image to 4K Super-Resolution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118816", "abstract": "We present 4KAgent, a unified agentic super-resolution generalist system designed to universally upscale any image to 4K resolution. Our system can transform images from extremely low resolutions with severe degradations, for example, highly distorted inputs at $256\\times 256$, into crystal clear, high-quality 4K outputs. 4KAgent comprises three core components: (1) Profiling, a module that customizes the 4KAgent pipeline based on bespoke use cases; (2) A Perception Agent, which leverages vision-language models alongside image quality assessment experts to analyze the input image and make a tailored restoration plan; and (3) A Restoration Agent, which executes the plan, following a recursive execution-reflection paradigm, guided by a quality-driven mixture-of-expert policy to select the optimal output for each step. Additionally, 4KAgent embeds a specialized face restoration pipeline, significantly enhancing facial details in portrait and selfie photos. We rigorously evaluate our 4KAgent across 12 distinct task categories encompassing a total of 26 diverse benchmarks, setting new state-of-the-art on a broad spectrum of imaging domains. Our evaluations cover natural images, portrait photos, AI-generated content, satellite imagery, fluorescence microscopy, and medical imaging, demonstrating superior performance in terms of both perceptual (e.g., NIQE, MUSIQ) and fidelity (e.g., PSNR) metrics. By establishing a novel agentic paradigm for low-level vision tasks, we aim to catalyze broader interest and innovation within vision-centric autonomous agents across diverse research communities.", "arxiv_id": "2507.07105v1", "arxiv_authors": ["Yushen Zuo", "Qi Zheng", "Mingyang Wu", "Xinrui Jiang", "Renjie Li", "Jian Wang", "Yide Zhang", "Gengchen Mai", "Lihong V. 
Wang", "James Zou", "Xiaoyu Wang", "Ming-Hsuan Yang", "Zhengzhong Tu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 7758633, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a53f"}, "filepath": "data/2505.21962v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996268456359831, "type": "Poster", "name": "A2Seek: Towards Reasoning-Centric Benchmark for Aerial Anomaly Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121562", "abstract": "While unmanned aerial vehicles (UAVs) offer wide-area, high-altitude coverage for anomaly detection, they face challenges such as dynamic viewpoints, scale variations, and complex scenes. Existing datasets and methods, mainly designed for fixed ground-level views, struggle to adapt to these conditions, leading to significant performance drops in drone-view scenarios.To bridge this gap, we introduce A2Seek (Aerial Anomaly Seek), a large-scale, reasoning-centric benchmark dataset for aerial anomaly understanding. This dataset covers various scenarios and environmental conditions, providing high-resolution real-world aerial videos with detailed annotations, including anomaly categories, frame-level timestamps, region-level bounding boxes, and natural language explanations for causal reasoning. Building on this dataset, we propose A2Seek-R1, a novel reasoning framework that generalizes R1-style strategies to aerial anomaly understanding, enabling a deeper understanding of \"Where\" anomalies occur and \"Why\" they happen in aerial frames.To this end, A2Seek-R1 first employs a graph-of-thought (GoT)-guided supervised fine-tuning approach to activate the model's latent reasoning capabilities on A2Seek. Then, we introduce Aerial Group Relative Policy Optimization (A-GRPO) to design rule-based reward functions tailored to aerial scenarios. 
Furthermore, we propose a novel \"seeking\" mechanism that simulates UAV flight behavior by directing the model's attention to informative regions. Extensive experiments demonstrate that A2Seek-R1 achieves up to a 22.04% improvement in AP for prediction accuracy and a 13.9% gain in mIoU for anomaly localization, exhibiting strong generalization across complex environments and out-of-distribution scenarios.", "arxiv_id": "2505.21962v1", "arxiv_authors": ["Mengjingcheng Mo", "Xinyang Tong", "Jiaxu Leng", "Mingpi Tan", "Jiankang Zheng", "Yiran Liu", "Haosheng Chen", "Ji Gan", "Weisheng Li", "Xinbo Gao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1138479, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a540"}, "filepath": "data/2411.19628v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990387074005447, "type": "Poster", "name": "Accelerating Multimodal Large Language Models via Dynamic Visual-Token Exit and the Empirical Findings", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118110", "abstract": "In this paper, we study the visual redundancy problem of multimodal large language models (MLLMs) from the perspective of attention behaviors. Via extensive empirical experiments, we observe and conclude three main inference stages of MLLMs: (i) Early fusion between tokens is first accomplished quickly. (ii) Intra-modality modeling then comes into play. (iii) Multimodal reasoning resumes and lasts until the end of inference. In particular, we reveal that visual tokens will stop contributing to reasoning when the text tokens receive enough image information. Based on this observation, we propose an effective method to improve the efficiency of MLLMs, termed dynamic visual-token exit (DyVTE), which is orthogonal but collaborative to previous token-wise visual compression methods. To validate the efficacy of DyVTE, we apply it to a set of MLLMs, including LLaVA, VILA, EAGLE and InternVL. The experimental results not only show the effectiveness of our DyVTE in improving MLLMs' efficiency, e.g., DyVTE reduces the computation overhead of LLaVA-1.5 by up to 45.7% without performance drop, but also reveal a general pattern across multiple MLLMs, well facilitating the in-depth analysis of MLLMs. 
Our code is anonymously released at https://anonymous.4open.science/r/AnonymousDyVTE-26AB/.", "arxiv_id": "2411.19628v2", "arxiv_authors": ["Qiong Wu", "Wenhao Lin", "Yiyi Zhou", "Weihao Ye", "Zhanpeng Zen", "Xiaoshuai Sun", "Rongrong Ji"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1192657, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a541"}, "filepath": "data/2507.17511v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995671728374167, "type": "Poster", "name": "Accelerating Parallel Diffusion Model Serving with Residual Compression", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116432", "abstract": "Diffusion models produce realistic images and videos but require substantial computational resources, necessitating multi-accelerator parallelism for real-time deployment. However, parallel inference introduces significant communication overhead from exchanging large activations between devices, limiting efficiency and scalability. We present CompactFusion, a compression framework that significantly reduces communication while preserving generation quality. Our key observation is that diffusion activations exhibit strong temporal redundancy\u2014adjacent steps produce highly similar activations, saturating bandwidth with near-duplicate data carrying little new information. To address this inefficiency, we seek a more compact representation that encodes only the essential information. CompactFusion achieves this via Residual Compression that transmits only compressed residuals (step-wise activation differences). Based on empirical analysis and theoretical justification, we show that it effectively removes redundant data, enabling substantial data reduction while maintaining high fidelity. We also integrate lightweight error feedback to prevent error accumulation. CompactFusion establishes a new paradigm for parallel diffusion inference, delivering lower latency and significantly higher generation quality than prior methods. On 4$\\times$L20, it achieves $3.0\\times$ speedup while greatly improving fidelity. It also uniquely supports communication-heavy strategies like sequence parallelism on slow networks, achieving $6.7\\times$ speedup over prior overlap-based method. CompactFusion applies broadly across diffusion models and parallel settings, and integrates easily without requiring pipeline rework. 
Portable implementation demonstrated on xDiT is publicly available at https://anonymous.4open.science/r/CompactFusion.", "arxiv_id": "2507.17511v1", "arxiv_authors": ["Jiajun Luo", "Yicheng Xiao", "Jianru Xu", "Yangxiu You", "Rongwei Lu", "Chen Tang", "Jingyan Jiang", "Zhi Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2912364, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a542"}, "filepath": "data/2505.18875v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990755581490378, "type": "Poster", "name": "Accelerating Video Diffusion Transformers with Sparse Attention via Semantic-Aware Permutation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117598", "abstract": "Diffusion Transformers (DiTs) are essential for video generation but suffer from significant latency due to the quadratic complexity of attention. By computing only critical tokens, sparse attention reduces computational costs and offers a promising acceleration approach. However, we identify that existing methods fail to approach optimal generation quality under the same computation budget for two reasons: (1) Inaccurate critical token identification: current methods cluster tokens based on position rather than semantics, leading to imprecise aggregated representations. (2) Excessive computation waste: critical tokens are scattered among non-critical ones, leading to wasted computation on GPUs, which are optimized for processing contiguous tokens. In this paper, we propose SAPAttn, a training-free framework that maximizes identification accuracy and minimizes computation waste, achieving a Pareto frontier trade-off between generation quality and efficiency. The core of SAPAttn is semantic-aware permutation, which clusters and reorders tokens based on semantic similarity using k-means. This approach ensures both a precise cluster representation, improving identification accuracy, and a densified layout of critical tokens, enabling efficient computation without padding. 
Additionally, SAPAttn integrates Top-p dynamic budget control and customized kernel implementations, achieving up to $2.30\\times$ and $1.89\\times$ speedup while maintaining a PSNR of up to $30$ and $26$ on HunyuanVideo and Wan 2.1, respectively.", "arxiv_id": "2505.18875v3", "arxiv_authors": ["Shuo Yang", "Haocheng Xi", "Yilong Zhao", "Muyang Li", "Jintao Zhang", "Han Cai", "Yujun Lin", "Xiuyu Li", "Chenfeng Xu", "Kelly Peng", "Jianfei Chen", "Song Han", "Kurt Keutzer", "Ion Stoica"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.454Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3313804, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a543"}, "filepath": "data/2510.22260v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995336686183084, "type": "Poster", "name": "Accident Anticipation via Temporal Occurrence Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119711", "abstract": "Driving accident anticipation aims to predict potential collisions in real time, enabling timely alarms to enhance road safety. Existing methods typically predict frame-level anomaly scores as risk indicators. However, these approaches suffer from inconsistent supervision signals because driving risks evolve progressively rather than abruptly, and risk assessment inherently involves human subjectivity. To address this limitation, we propose a novel paradigm that directly predicts the probability of an accident occurring at multiple future timestamps (0.1s\u20132.0s), offering more precise supervision and improved interpretability. Our framework employs a snippet encoder to capture spatiotemporal dynamics and a Transformer-based decoder to simultaneously estimate accident probabilities across different time steps. Furthermore, we introduce a refined evaluation protocol that measures recall rate and Time-to-Accident (TTA) only under acceptable false alarm rates, ensuring practical applicability in the real world. Experiments demonstrate that our method achieves superior performance in both recall and TTA, validating its effectiveness for real-world accident anticipation.", "arxiv_id": "2510.22260v1", "arxiv_authors": ["Tianhao Zhao", "Yiyang Zou", "Zihao Mao", "Peilun Xiao", "Yulin Huang", "Hongda Yang", "Yuxuan Li", "Qun Li", "Guobin Wu", "Yutian Lin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 943062, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a544"}, "filepath": "data/2510.20348v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999727390239495, "type": "Poster", "name": "AccuQuant: Simulating Multiple Denoising Steps for Quantizing Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118264", "abstract": "We present in this paper a novel post-training quantization (PTQ) method, dubbed AccuQuant, for diffusion models. 
We show analytically and empirically that quantization errors for diffusion models are accumulated over denoising steps in a sampling process. To alleviate the error accumulation problem, AccuQuant minimizes the discrepancies between outputs of a full-precision diffusion model and its quantized version within a couple of denoising steps. That is, it simulates multiple denoising steps of a diffusion sampling process explicitly for quantization, accounting for the accumulated errors over multiple denoising steps, which is in contrast to previous approaches to imitating a training process of diffusion models, namely, minimizing the discrepancies independently for each step. We also present an efficient implementation technique for AccuQuant, together with a novel objective, which reduces the memory complexity significantly from $\\mathcal{O}(n)$ to $\\mathcal{O}(1)$, where $n$ is the number of denoising steps. We demonstrate the efficacy and efficiency of AccuQuant across various tasks and diffusion models on standard benchmarks.", "arxiv_id": "2510.20348v1", "arxiv_authors": ["Seunghoon Lee", "Jeongwoo Choi", "Byunggwan Son", "Jaehyeon Moon", "Jeimin Jeon", "Bumsub Ham"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1053501, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a545"}, "filepath": "data/2507.01961v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994787948883582, "type": "Poster", "name": "AC-DiT: Adaptive Coordination Diffusion Transformer for Mobile Manipulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115843", "abstract": "Recently, mobile manipulation has attracted increasing attention for enabling language-conditioned robotic control in household tasks. However, existing methods still face challenges in coordinating mobile base and manipulator, primarily due to two limitations. On the one hand, they fail to explicitly model the influence of the mobile base on manipulator control, which easily leads to error accumulation under high degrees of freedom. On the other hand, they treat the entire mobile manipulation process with the same visual observation modality (e.g., either all 2D or all 3D), overlooking the distinct multimodal perception requirements at different stages during mobile manipulation. To address this, we propose the Adaptive Coordination Diffusion Transformer (AC-DiT), which enhances mobile base and manipulator coordination for end-to-end mobile manipulation. First, since the motion of the mobile base directly influences the manipulator's actions, we introduce a mobility-to-body conditioning mechanism that guides the model to first extract base motion representations, which are then used as context prior for predicting whole-body actions. 
This enables whole-body control that accounts for the potential impact of the mobile base\u2019s motion. Second, to meet the perception requirements at different stages of mobile manipulation, we design a perception-aware multimodal conditioning strategy that dynamically adjusts the fusion weights between various 2D visual images and 3D point clouds, yielding visual features tailored to the current perceptual needs. This allows the model to, for example, adaptively rely more on 2D inputs when semantic information is crucial for action prediction, while placing greater emphasis on 3D geometric information when precise spatial understanding is required. We empirically validate AC-DiT through extensive experiments on both simulated and real-world mobile manipulation tasks, demonstrating superior performance compared to existing methods.", "arxiv_id": "2507.01961v3", "arxiv_authors": ["Sixiang Chen", "Jiaming Liu", "Siyuan Qian", "Han Jiang", "Lily Li", "Renrui Zhang", "Zhuoyang Liu", "Chenyang Gu", "Chengkai Hou", "Pengwei Wang", "Zhongyuan Wang", "Shanghang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d7"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1031661, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a546"}, "filepath": "data/2507.01372v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992855819920102, "type": "Poster", "name": "Active Measurement: Efficient Estimation at Scale", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116156", "abstract": "AI has the potential to transform scientific discovery by analyzing vast datasets with little human effort. However, current workflows often do not provide the accuracy or statistical guarantees that are needed. We introduce \\emph{active measurement}, a human-in-the-loop AI framework for scientific measurement. An AI model is used to predict measurements for individual units, which are then sampled for human labeling using importance sampling. With each new set of human labels, the AI model is improved and an unbiased Monte Carlo estimate of the total measurement is refined. Active measurement can provide precise estimates even with an imperfect AI model, and requires little human effort when the AI model is very accurate. 
We derive novel estimators, weighting schemes, and confidence intervals, and show that active measurement reduces estimation error compared to alternatives in several measurement tasks.", "arxiv_id": "2507.01372v1", "arxiv_authors": ["Max Hamilton", "Jinlin Lai", "Wenlong Zhao", "Subhransu Maji", "Daniel Sheldon"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1034896, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a547"}, "filepath": "data/2506.06630v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994815824318858, "type": "Poster", "name": "Active Test-time Vision-Language Navigation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119128", "abstract": "Vision-Language Navigation (VLN) policies trained on offline datasets often exhibit degraded task performance when deployed in unfamiliar navigation environments at test time, where agents are typically evaluated without access to external interaction or feedback. Entropy minimization has emerged as a practical solution for reducing prediction uncertainty at test time; however, it can suffer from accumulated errors, as agents may become overconfident in incorrect actions without sufficient contextual grounding. To tackle these challenges, we introduce ATENA (Active TEst-time Navigation Agent), a test-time active learning framework that enables a practical human-robot interaction via episodic feedback on uncertain navigation outcomes. In particular, ATENA learns to increase certainty in successful episodes and decrease it in failed ones, improving uncertainty calibration. Here, we propose mixture entropy optimization, where entropy is obtained from a combination of the action and pseudo-expert distributions\u2014a hypothetical action distribution assuming the agent's selected action to be optimal\u2014controlling both prediction confidence and action preference. In addition, we propose a self-active learning strategy that enables an agent to evaluate its navigation outcomes based on confident predictions. As a result, the agent stays actively engaged throughout all iterations, leading to well-grounded and adaptive decision-making. 
Extensive evaluations on challenging VLN benchmarks\u2014REVERIE, R2R, and R2R-CE\u2014demonstrate that ATENA successfully overcomes distributional shifts at test time, outperforming the compared baseline methods across various settings.", "arxiv_id": "2506.06630v1", "arxiv_authors": ["Heeju Ko", "Sungjune Kim", "Gyeongrok Oh", "Jeongyoon Yoon", "Honglak Lee", "Sujin Jang", "Seungryong Kim", "Sangpil Kim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0d9"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1063977, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a548"}, "filepath": "data/2509.25822v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993590829297184, "type": "Poster", "name": "Act to See, See to Act: Diffusion-Driven Perception-Action Interplay for Adaptive Policies", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118213", "abstract": "Existing imitation learning methods decouple perception and action, which overlooks the causal reciprocity between sensory representations and action execution that humans naturally leverage for adaptive behaviors. To bridge this gap, we introduce Action-Guided Diffusion Policy (DP-AG), a unified representation learning that explicitly models a dynamic interplay between perception and action through probabilistic latent dynamics. DP-AG encodes latent observations into a Gaussian posterior via variational inference and evolves them using an action-guided SDE, where the Vector\u2013Jacobian Product (VJP) of the diffusion policy's noise predictions serves as a structured stochastic force driving latent updates. To promote bidirectional learning between perception and action, we introduce a cycle-consistent contrastive loss that organizes the gradient flow of the noise predictor into a coherent perception\u2013action loop, enforcing mutually consistent transitions in both latent updates and action refinements. Theoretically, we derive a variational lower bound for the action-guided SDE, and prove that the contrastive objective enhances continuity in both latent and action trajectories. Empirically, DP-AG significantly outperforms state-of-the-art methods across simulation benchmarks and real-world UR5 manipulation tasks. 
As a result, our DP-AG offers a promising step toward bridging biological adaptability and artificial policy learning.", "arxiv_id": "2509.25822v3", "arxiv_authors": ["Jing Wang", "Weiting Peng", "Jing Tang", "Zeyu Gong", "Xihua Wang", "Bo Tao", "Li Cheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0da"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1105766, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a549"}, "filepath": "data/2510.23285v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993363722351304, "type": "Poster", "name": "Adaptive Stochastic Coefficients for Accelerating Diffusion Sampling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119065", "abstract": "Diffusion-based generative processes, grounded in differential equation solving, frequently require striking a balance between computational speed and output quality. Our theoretical investigation of prevalent solving approaches - ordinary differential equations (ODE) and stochastic differential equations (SDE) solvers - uncovers distinct limitations: ODE solvers exhibit irreducible gradient error accumulation from deterministic path dependence, while SDE methods suffer amplified discretization errors when step counts are reduced. Building upon this insight, we introduce AdaSDE, a novel single-step SDE solver that aims to unify the efficiency of ODEs with the error resilience of SDEs. At the core of our design is a learnable parameter obtained through lightweight tuning, which dynamically regulates the error correction strength to accelerate diffusion sampling. Notably, our framework can be integrated with numerous solvers to enhance their capabilities through lightweight parameter tuning. Extensive experiments demonstrate state-of-the-art performance: At 5 NFE, AdaSDE achieves FID scores of 4.79 on CIFAR-10, 8.91 on FFHQ 64\u00d764 and 6.96 on LSUN Bedroom.", "arxiv_id": "2510.23285v1", "arxiv_authors": ["Ruoyu Wang", "Beier Zhu", "Junzhi Li", "Liangyu Yuan", "Chi Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0db"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1086874, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a54a"}, "filepath": "data/2506.13589v2.png", "tags": [], "_media_type": "image", "_rand": 0.999553645832672, "type": "Poster", "name": "AdaVideoRAG: Omni-Contextual Adaptive Retrieval-Augmented for Efficient Long Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119056", "abstract": "Multimodal Large Language Models (MLLMs) have demonstrated excellent performance in video understanding but suffer from degraded effectiveness when processing long videos due to fixed-length contexts and weaknesses in modeling long-term dependencies. 
Retrieval-Augmented Generation (RAG) technology can mitigate these limitations through dynamic knowledge expansion, but existing RAG schemes for video understanding employ fixed retrieval paradigms that use uniform structures regardless of input query difficulty. This introduces redundant computational overhead and latency (\\textit{e.g.}, complex graph traversal operations) for simple queries (\\textit{e.g.}, frame-level object recognition) while potentially causing critical information loss due to insufficient retrieval granularity for multi-hop reasoning. Such single-step retrieval mechanisms severely constrain the model's balance between resource efficiency and cognitive depth. To address this, we first propose a novel AdaVideoRAG framework for long-video understanding, which uses a lightweight intent classifier to dynamically and adaptively allocate appropriate retrieval schemes\u2014ranging from the simplest to the most sophisticated\u2014for different video understanding tasks based on query complexity. We introduce an Omni-Knowledge Indexing module to extract valuable information from multi-modal signals for context modeling and build corresponding databases, \\textit{i.e.}, a text base from clip captions, ASR, and OCR; a visual base; and a graph for deep semantic understanding. This enables hierarchical knowledge access, integration, and generation from naive retrieval to graph retrieval, achieving an optimal balance between resource consumption and video understanding capabilities. Finally, we construct the HiVU benchmark for deep understanding evaluation. Extensive experiments show that our framework enhances the overall efficiency and accuracy of Video-QA for long videos and can be seamlessly integrated with existing MLLMs via lightweight API calls, establishing a new paradigm for adaptive retrieval augmentation in video analysis. Codes will be open-sourced soon.", "arxiv_id": "2506.13589v2", "arxiv_authors": ["Zhucun Xue", "Jiangning Zhang", "Xurong Xie", "Yuxuan Cai", "Yong Liu", "Xiangtai Li", "Dacheng Tao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0dc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070676, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a54b"}, "filepath": "data/2510.08625v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996844347808215, "type": "Poster", "name": "Adjusting Initial Noise to Mitigate Memorization in Text-to-Image Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119951", "abstract": "Despite their impressive generative capabilities, text-to-image diffusion models often memorize and replicate training data, prompting serious concerns over privacy and copyright. Recent work has attributed this memorization to an attraction basin\u2014a region where applying classifier-free guidance (CFG) steers the denoising trajectory toward memorized outputs\u2014and has proposed deferring CFG application until the denoising trajectory escapes this basin. However, such delays often result in non-memorized images that are poorly aligned with the input prompts, highlighting the need to promote earlier escape so that CFG can be applied sooner in the denoising process. 
In this work, we show that the initial noise sample plays a crucial role in determining when this escape occurs. We empirically observe that different initial samples lead to varying escape times. Building on this insight, we propose two mitigation strategies that adjust the initial noise\u2014either collectively or individually\u2014to find and utilize initial samples that encourage earlier basin escape. These approaches significantly reduce memorization while preserving image-text alignment.", "arxiv_id": "2510.08625v1", "arxiv_authors": ["Hyeonggeun Han", "Sehwan Kim", "Hyungjun Joo", "Sangwoo Hong", "Jungwoo Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0dd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1001917, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a54c"}, "filepath": "data/2506.15980v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992517630158937, "type": "Poster", "name": "Advanced Sign Language Video Generation with Compressed and Quantized Multi-Condition Tokenization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119805", "abstract": "Sign Language Video Generation (SLVG) seeks to generate identity-preserving sign language videos from spoken language texts. Existing methods primarily rely on a single coarse condition (e.g., skeleton sequences) as the intermediary to bridge the translation model and the video generation model, which limits both the naturalness and expressiveness of the generated videos. To overcome these limitations, we propose SignViP, a novel SLVG framework that incorporates multiple fine-grained conditions for improved generation fidelity. Rather than directly translating error-prone high-dimensional conditions, SignViP adopts a discrete tokenization paradigm to integrate and represent fine-grained conditions (i.e., fine-grained poses and 3D hands). SignViP contains three core components. (1) Sign Video Diffusion Model is jointly trained with a multi-condition encoder to learn continuous embeddings that encapsulate fine-grained motion and appearance. (2) Finite Scalar Quantization (FSQ) Autoencoder is further trained to compress and quantize these embeddings into discrete tokens for compact representation of the conditions. (3) Multi-Condition Token Translator is trained to translate spoken language text to discrete multi-condition tokens. During inference, Multi-Condition Token Translator first translates the spoken language text into discrete multi-condition tokens. These tokens are then decoded to continuous embeddings by FSQ Autoencoder, which are subsequently injected into Sign Video Diffusion Model to guide video generation. 
Experimental results show that SignViP achieves state-of-the-art performance across metrics, including video quality, temporal coherence, and semantic fidelity.", "arxiv_id": "2506.15980v1", "arxiv_authors": ["Cong Wang", "Zexuan Deng", "Zhiwei Jiang", "Fei Shen", "Yafeng Yin", "Shiwei Gan", "Zifeng Cheng", "Shiping Ge", "Qing Gu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0de"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1040631, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a54d"}, "filepath": "data/2509.16645v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991491977949596, "type": "Poster", "name": "AdvEDM: Fine-grained Adversarial Attack against VLM-based Embodied Decision-Making Systems", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116436", "abstract": "Vision-Language Models (VLMs), with their strong reasoning and planning capabilities, are widely used in embodied decision-making (EDM) tasks such as autonomous driving and robotic manipulation. Recent research has increasingly explored adversarial attacks on VLMs to reveal their vulnerabilities. However, these attacks either rely on overly strong assumptions, requiring full knowledge of the victim VLM, which is impractical for attacking VLM-based EDM systems, or exhibit limited effectiveness. The latter stems from disrupting most semantic information in the image, which leads to a misalignment between the perception and the task context defined by system prompts. This inconsistency interrupts the VLM's reasoning process, resulting in invalid outputs that fail to affect interactions in the physical world. To this end, we propose a fine-grained adversarial attack framework, AdvEDM, which modifies the VLM's perception of only a few key objects while preserving the semantics of the remaining regions. This attack effectively reduces conflicts with the task context, making VLMs output valid but incorrect decisions and affecting the actions of entities, thus posing a more substantial safety threat in the physical world. We design two variants based on this framework, AdvEDM-R and AdvEDM-A, which respectively remove the semantics of a specific object from the image and add the semantics of a new object into the image. 
The experimental results in both general scenarios and EDM tasks demonstrate fine-grained control and excellent attack performance.", "arxiv_id": "2509.16645v1", "arxiv_authors": ["Yichen Wang", "Hangtao Zhang", "Hewen Pan", "Ziqi Zhou", "Xianlong Wang", "Peijin Guo", "Lulu Xue", "Shengshan Hu", "Minghui Li", "Leo Yu Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0df"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.455Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1084172, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a54e"}, "filepath": "data/2505.21494v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990813249020766, "type": "Poster", "name": "Adversarial Attacks against Closed-Source MLLMs via Feature Optimal Alignment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116337", "abstract": "Multimodal large language models (MLLMs) remain vulnerable to transferable adversarial examples. While existing methods typically achieve targeted attacks by aligning global features\u2014such as CLIP\u2019s [CLS] token\u2014between adversarial and target samples, they often overlook the rich local information encoded in patch tokens. This leads to suboptimal alignment and limited transferability, particularly for closed-source models. To address this limitation, we propose a targeted transferable adversarial attack method based on feature optimal alignment, called FOA-Attack, to improve adversarial transfer capability. Specifically, at the global level, we introduce a global feature loss based on cosine similarity to align the coarse-grained features of adversarial samples with those of target samples. At the local level, given the rich local representations within Transformers, we leverage clustering techniques to extract compact local patterns to alleviate redundant local features. We then formulate local feature alignment between adversarial and target samples as an optimal transport (OT) problem and propose a local clustering optimal transport loss to refine fine-grained feature alignment. Additionally, we propose a dynamic ensemble model weighting strategy to adaptively balance the influence of multiple models during adversarial example generation, thereby further improving transferability. 
Extensive experiments across various models demonstrate the superiority of the proposed method, outperforming state-of-the-art methods, especially in transferring to closed-source MLLMs.", "arxiv_id": "2505.21494v1", "arxiv_authors": ["Xiaojun Jia", "Sensen Gao", "Simeng Qin", "Tianyu Pang", "Chao Du", "Yihao Huang", "Xinfeng Li", "Yiming Li", "Bo Li", "Yang Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1049483, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a54f"}, "filepath": "data/2504.14305v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990399389300596, "type": "Poster", "name": "Adversarial Locomotion and Motion Imitation for Humanoid Policy Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116642", "abstract": "Humans exhibit diverse and expressive whole-body movements. However, attaining human-like whole-body coordination in humanoid robots remains challenging, as conventional approaches that mimic whole-body motions often neglect the distinct roles of upper and lower body. This oversight leads to computationally intensive policy learning and frequently causes robot instability and falls during real-world execution. To address these issues, we propose Adversarial Locomotion and Motion Imitation (ALMI), a novel framework that enables adversarial policy learning between upper and lower body. Specifically, the lower body aims to provide robust locomotion capabilities to follow velocity commands while the upper body tracks various motions. Conversely, the upper-body policy ensures effective motion tracking when the robot executes velocity-based movements. Through iterative updates, these policies achieve coordinated whole-body control, which can be extended to loco-manipulation tasks with teleoperation systems. Extensive experiments demonstrate that our method achieves robust locomotion and precise motion tracking in both simulation and on the full-size Unitree H1-2 robot. Additionally, we release a large-scale whole-body motion control dataset featuring high-quality episodic trajectories from MuJoCo simulations. 
The project page is https://almi-humanoid.github.io.", "arxiv_id": "2504.14305v3", "arxiv_authors": ["Jiyuan Shi", "Xinzhe Liu", "Dewei Wang", "Ouyang Lu", "S\u00f6ren Schwertfeger", "Chi Zhang", "Fuchun Sun", "Chenjia Bai", "Xuelong Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e1"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1029128, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a550"}, "filepath": "data/2503.10635v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998305254279273, "type": "Poster", "name": "A Frustratingly Simple Yet Highly Effective Attack Baseline: Over 90% Success Rate Against the Strong Black-box Models of GPT-4.5/4o/o1", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119497", "abstract": "Despite promising performance on open-source large vision-language models (LVLMs), transfer-based targeted attacks often fail against black-box commercial closed-source LVLMs. Analyzing failed adversarial perturbations reveals that the learned perturbations typically originate from a uniform distribution and lack clear semantic details, resulting in unintended responses. This critical absence of semantic information leads commercial LVLMs to either ignore the perturbation entirely or misinterpret its embedded semantics, thereby causing the attack to fail. To overcome these issues, we propose to refine semantic clarity by encoding explicit semantic details within local regions, thus ensuring interoperability and capturing finer-grained features, and by concentrating modifications on semantically rich areas rather than applying them uniformly. To achieve this, we propose *a simple yet highly effective baseline*: at each optimization step, the adversarial image is cropped randomly by a controlled aspect ratio and scale, resized, and then aligned with the target image in the embedding space. While the na\\\"ive source-target matching method has been utilized before in the literature, we are the first to provide a tight analysis, which establishes a close connection between perturbation optimization and semantics. Experimental results confirm our hypothesis. Our adversarial examples crafted with local-aggregated perturbations focused on crucial regions exhibit surprisingly good transferability to commercial LVLMs, including GPT-4.5, GPT-4o, Gemini-2.0-flash, Claude-3.5/3.7-sonnet, and even reasoning models like o1, Claude-3.7-thinking and Gemini-2.0-flash-thinking. Our approach achieves success rates exceeding 90\\% on GPT-4.5, 4o, and o1, significantly outperforming all prior state-of-the-art attack methods. 
Our code and optimized adversarial examples are available in the supplementary materials.", "arxiv_id": "2503.10635v2", "arxiv_authors": ["Zhaoyi Li", "Xiaohan Zhao", "Dong-Dong Wu", "Jiacheng Cui", "Zhiqiang Shen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1570414, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a551"}, "filepath": "data/2506.16371v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992554510761751, "type": "Poster", "name": "AGC-Drive: A Large-Scale Dataset for Real-World Aerial-Ground Collaboration in Driving Scenarios", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121689", "abstract": "By sharing information across multiple agents, collaborative perception helps autonomous vehicles mitigate occlusions and improve overall perception accuracy. However, most previous work focuses on vehicle-to-vehicle and vehicle-to-infrastructure collaboration, with limited attention to aerial perspectives provided by UAVs, which uniquely offer dynamic, top-down views to alleviate occlusions and monitor large-scale interactive environments. A major reason for this is the lack of high-quality datasets for aerial-ground collaborative scenarios. To bridge this gap, we present AGC-Drive, the first large-scale real-world dataset for Aerial-Ground Cooperative 3D perception. The data collection platform consists of two vehicles, each equipped with five cameras and one LiDAR sensor, and one UAV carrying a forward-facing camera and a LiDAR sensor, enabling comprehensive multi-view and multi-agent perception. Consisting of approximately 120k LiDAR frames and 440k images, the dataset covers 14 diverse real-world driving scenarios, including urban roundabouts, highway tunnels, and on/off ramps. Notably, 19.5% of the data comprises dynamic interaction events, including vehicle cut-ins, cut-outs, and frequent lane changes. AGC-Drive contains 400 scenes, each with approximately 100 frames and fully annotated 3D bounding boxes covering 13 object categories. We provide benchmarks for two 3D perception tasks: vehicle-to-vehicle collaborative perception and vehicle-to-UAV collaborative perception. Additionally, we release an open-source toolkit, including spatiotemporal alignment verification tools, multi-agent visualization systems, and collaborative annotation utilities. 
The dataset and code are available at https://github.com/PercepX/AGC-Drive.", "arxiv_id": "2506.16371v2", "arxiv_authors": ["Yunhao Hou", "Bochao Zou", "Min Zhang", "Ran Chen", "Shangdong Yang", "Yanmei Zhang", "Junbao Zhuo", "Siheng Chen", "Jiansheng Chen", "Huimin Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1080337, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a552"}, "filepath": "data/2505.13043v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994731906805563, "type": "Poster", "name": "A Generalized Label Shift Perspective for Cross-Domain Gaze Estimation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116912", "abstract": "Aiming to generalize the well-trained gaze estimation model to new target domains, Cross-domain Gaze Estimation (CDGE) is developed for real-world application scenarios. Existing CDGE methods typically extract domain-invariant features to mitigate domain shift in feature space, which is proven insufficient by Generalized Label Shift (GLS) theory. In this paper, we introduce a novel GLS perspective to CDGE and model the cross-domain problem as a combination of label shift and conditional shift. A GLS correction framework is presented and a feasible realization is proposed, in which an importance reweighting strategy based on a truncated Gaussian distribution is introduced to overcome the continuity challenges in label shift correction. To embed the reweighted source distribution into conditional invariant learning, we further derive a probability-aware estimation of the conditional operator discrepancy. Extensive experiments on standard CDGE tasks with different backbone models validate the superior cross-domain generalization capability and broad model applicability of the proposed method.", "arxiv_id": "2505.13043v1", "arxiv_authors": ["Hao-Ran Yang", "Xiaohui Chen", "Chuan-Xian Ren"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 927391, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a553"}, "filepath": "data/2504.10568v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996046587574853, "type": "Poster", "name": "AgMMU: A Comprehensive Agricultural Multimodal Understanding Benchmark", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121696", "abstract": "We present **AgMMU**, a challenging real\u2011world benchmark for evaluating and advancing vision-language models (VLMs) in the knowledge\u2011intensive domain of agriculture. Unlike prior datasets that rely on crowdsourced prompts, AgMMU is distilled from 116,231 authentic dialogues between everyday growers and USDA-authorized Cooperative Extension experts. 
Through a three\u2011stage pipeline of automated knowledge extraction, QA generation, and human verification, we construct (i) AgMMU, an evaluation set of 746 multiple\u2011choice questions (MCQs) and 746 open\u2011ended questions (OEQs), and (ii) AgBase, a development corpus of 57,079 multimodal facts covering five high-stakes agricultural topics: insect identification, species identification, disease categorization, symptom description, and management instruction. AgMMU has three key advantages: - **Authentic \\& Expert\u2011Verified**: All facts, images, and answers originate from real farmer and gardener inquiries answered by credentialed specialists, ensuring high\u2011fidelity agricultural knowledge. - **Complete Development Suite**: AgMMU uniquely couples a dual\u2011format evaluation benchmark (MCQ and OEQ) with AgBase, a large\u2011scale training set, enabling both rigorous assessment and targeted improvement of VLMs. - **Knowledge\u2011intensive Challenge**: Our tasks demand the synergy of nuanced visual perception and domain expertise, exposing fundamental limitations of current general\u2011purpose models and charting a path toward robust, application\u2011ready agricultural AI. Benchmarking 12 leading VLMs reveals pronounced gaps in fine\u2011grained perception and factual grounding. Open\u2011sourced models trail behind proprietary ones by a wide margin. Simple fine\u2011tuning on AgBase boosts open-sourced model performance on challenging OEQs by up to 11.6\\% on average, narrowing this gap and also motivating future research to propose better strategies in knowledge extraction and distillation from AgBase. We hope AgMMU stimulates research on domain\u2011specific knowledge integration and trustworthy decision support in agriculture AI development.", "arxiv_id": "2504.10568v2", "arxiv_authors": ["Aruna Gauba", "Irene Pi", "Yunze Man", "Ziqi Pang", "Vikram S. Adve", "Yu-Xiong Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061180, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a554"}, "filepath": "data/2509.16421v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998734993087283, "type": "Poster", "name": "Aha! - Predicting What Matters Next: Online Highlight Detection Without Looking Ahead", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119707", "abstract": "Real-time understanding of continuous video streams is essential for intelligent agents operating in high-stakes environments, including autonomous vehicles, surveillance drones, and disaster response robots. Yet, most existing video understanding and highlight detection methods assume access to the entire video during inference, making them unsuitable for online or streaming scenarios. In particular, current models optimize for offline summarization, failing to support step-by-step reasoning needed for real-time decision-making. We introduce Aha!, an autoregressive highlight detection framework that predicts the relevance of each video frame against a task described in natural language. Without accessing future video frames, Aha! 
utilizes a multimodal language-vision model and lightweight, decoupled heads trained on a large, curated dataset of human-centric video labels. To enable scalability, we adopt a fixed-size SinkCache mechanism that achieves constant memory usage across infinite-length streams without degrading performance on standard benchmarks. This encourages the hidden representation to capture high-level task objectives, enabling effective frame-level rankings for informativeness, relevance, and uncertainty with respect to the natural language task. Aha! achieves state-of-the-art performance on highlight detection benchmarks, surpassing prior full-context and video-language models by +5.5\\% on TVSum and +8.3\\% on Mr. HiSum in mAP. We explore Aha!\u2019s potential for real-world robotics applications given a task-oriented natural language input and a continuous, robot-centric video. Both experiments demonstrate Aha!'s potential effectiveness as a real-time reasoning module for downstream planning and long-horizon understanding.", "arxiv_id": "2509.16421v2", "arxiv_authors": ["Aiden Chang", "Celso De Melo", "Stephanie M. Lukin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 995293, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a555"}, "filepath": "data/2507.00583v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991268663186491, "type": "Poster", "name": "AI-Generated Video Detection via Perceptual Straightening", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118520", "abstract": "The rapid advancement of generative AI enables highly realistic synthetic video, posing significant challenges for content authentication and raising urgent concerns about misuse. Existing detection methods often struggle with generalization and capturing subtle temporal inconsistencies. We propose $ReStraV$ ($Re$presentation $Stra$ightening for $V$ideo), a novel approach to distinguish natural from AI-generated videos. Inspired by the ``perceptual straightening'' hypothesis\u2014which suggests real-world video trajectories become more straight in the neural representation domain\u2014we analyze deviations from this expected geometric property. Using a pre-trained self-supervised vision transformer (DINOv2), we quantify the temporal curvature and stepwise distance in the model's representation domain. We aggregate statistical and signal descriptors of these measures for each video and train a classifier. Our analysis shows that AI-generated videos exhibit significantly different curvature and distance patterns compared to real videos. A lightweight classifier achieves state-of-the-art detection performance (e.g., $97.17$ % accuracy and $98.63$ % AUROC on the VidProM benchmark), substantially outperforming existing image- and video-based methods. ReStraV is computationally efficient, offering a low-cost and effective detection solution. 
This work provides new insights into using neural representation geometry for AI-generated video detection.", "arxiv_id": "2507.00583v2", "arxiv_authors": ["Christian Intern\u00f2", "Robert Geirhos", "Markus Olhofer", "Sunny Liu", "Barbara Hammer", "David Klindt"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1694283, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a556"}, "filepath": "data/2505.19297v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994611290718928, "type": "Poster", "name": "Alchemist: Turning Public Text-to-Image Data into Generative Gold", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121494", "abstract": "Pre-training equips text-to-image (T2I) models with broad world knowledge, but this alone is often insufficient to achieve high aesthetic quality and alignment. Consequently, supervised fine-tuning (SFT) is crucial for further refinement. However, its effectiveness highly depends on the quality of the fine-tuning dataset. Existing public SFT datasets frequently target narrow domains (e.g., anime or specific art styles), and the creation of high-quality, general-purpose SFT datasets remains a significant challenge. Current curation methods are often costly and struggle to identify truly impactful samples. This challenge is further complicated by the scarcity of public general-purpose datasets, as leading models often rely on large, proprietary, and poorly documented internal data, hindering broader research progress. This paper introduces a novel methodology for creating general-purpose SFT datasets by leveraging a pre-trained generative model as an estimator of high-impact training samples. We apply this methodology to construct and release Alchemist, a compact (3,350 samples) yet highly effective SFT dataset. Experiments demonstrate that Alchemist substantially improves the generative quality of five public T2I models while preserving diversity and style. Additionally, we release the fine-tuned models' weights to the public.", "arxiv_id": "2505.19297v1", "arxiv_authors": ["Valerii Startsev", "Alexander Ustyuzhanin", "Alexey Kirillov", "Dmitry Baranchuk", "Sergey Kastryulin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6632145, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a557"}, "filepath": "data/2510.22673v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999725712747667, "type": "Poster", "name": "Alias-Free ViT: Fractional Shift Invariance via Linear Attention", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118064", "abstract": "Transformers have emerged as a competitive alternative to convnets in vision tasks, yet they lack the architectural inductive bias of convnets, which may hinder their potential performance. 
Specifically, Vision Transformers (ViTs) are not translation\u2011invariant and are more sensitive to minor image translations than standard convnets. Previous studies have shown, however, that convnets are also not perfectly shift\u2011invariant, due to aliasing in down\u2011sampling and non\u2011linear layers. Consequently, anti\u2011aliasing approaches have been proposed to certify convnets' translation robustness. Building on this line of work, we propose an Alias\u2011Free ViT, which combines two main components. First, it uses alias-free down\u2011sampling and non\u2011linearities. Second, it uses linear cross\u2011covariance attention that is shift\u2011invariant to both integer and fractional translations. Our model maintains competitive performance in image classification and outperforms similar\u2011sized models in terms of robustness to adversarial translations.", "arxiv_id": "2510.22673v1", "arxiv_authors": ["Hagay Michaeli", "Daniel Soudry"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0e9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 982447, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a558"}, "filepath": "data/2509.17088v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994828196297044, "type": "Poster", "name": "AlignedGen: Aligning Style Across Generated Images", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117223", "abstract": "Diffusion-based generative models struggle to maintain high style consistency across generated images via text description. Although several style-aligned image generation methods have been proposed to address this issue, they exhibit suboptimal performance and are primarily built upon the U-Net architecture, limiting their compatibility with MM-DiT diffusion models like Flux, which has emerged as a predominant model in the field of image generation. To address these limitations, we propose $\\textit{\\textbf{AlignedGen}}$, a novel training-free style-aligned image generation method for Flux to significantly enhance style consistency across generated images. Specifically, AlignedGen incorporates two key components to achieve this: Shifted Position Embedding (ShiftPE) and Selective Shared Attention (SSA) layer. ShiftPE alleviates the text controllability degradation observed in prior methods when applied to Flux through its non-overlapping position indices design, while SSA further enhances style consistency across images. In addition, our method can be seamlessly integrated with various controllable generation technologies (e.g., subject-driven generation, depth control), demonstrating broad applicability across diverse scenarios. 
Extensive experimental results validate that our method effectively enhances style consistency across generated images while maintaining favorable text controllability.", "arxiv_id": "2509.17088v1", "arxiv_authors": ["Jiexuan Zhang", "Yiheng Du", "Qian Wang", "Weiqi Li", "Yu Gu", "Jian Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0ea"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2915212, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a559"}, "filepath": "data/2503.08250v4.png", "tags": [], "_media_type": "image", "_rand": 0.9993733008987145, "type": "Poster", "name": "Aligning Text to Image in Diffusion Models is Easier Than You Think", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117814", "abstract": "While recent advancements in generative modeling have significantly improved text-image alignment, some residual misalignment between text and image representations still remains. Some approaches address this issue by fine-tuning models via preference optimization and similar techniques, which require tailored datasets. Orthogonal to these methods, we revisit the challenge from the perspective of representation alignment\u2014an approach that has gained popularity with the success of REPresentation Alignment (REPA). We first argue that conventional text-to-image (T2I) diffusion models, typically trained on paired image and text data (i.e., positive pairs) by minimizing score matching or flow matching losses, are suboptimal from the standpoint of representation alignment. Instead, a better alignment can be achieved through contrastive learning that leverages the existing dataset as both positive and negative pairs. To enable efficient alignment with pretrained models, we propose SoftREPA\u2014a lightweight contrastive fine-tuning strategy that leverages soft text tokens for representation alignment. This approach improves alignment with minimal computational overhead by adding fewer than 1M trainable parameters to the pretrained model. Our theoretical analysis demonstrates that our method explicitly increases the mutual information between text and image representations, leading to enhanced semantic consistency. 
Experimental results across text-to-image generation and text-guided image editing tasks validate the effectiveness of our approach in improving the semantic consistency of T2I generative models.", "arxiv_id": "2503.08250v4", "arxiv_authors": ["Jaa-Yeon Lee", "Byunghee Cha", "Jeongsol Kim", "Jong Chul Ye"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0eb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.456Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6173370, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a55a"}, "filepath": "data/2506.14603v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991482684758781, "type": "Poster", "name": "Align Your Flow: Scaling Continuous-Time Flow Map Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115909", "abstract": "Diffusion- and flow-based models have emerged as state-of-the-art generative modeling approaches, but they require many sampling steps. Consistency models can distill these models into efficient one-step generators; however, unlike flow- and diffusion-based methods, their performance inevitably degrades when increasing the number of steps, which we show both analytically and empirically. Flow maps generalize these approaches by connecting any two noise levels in a single step and remain effective across all step counts. In this paper, we introduce two new continuous-time objectives for training flow maps, along with additional novel training techniques, generalizing existing consistency and flow matching objectives. We further demonstrate that autoguidance can improve performance, using a low-quality model for guidance during distillation, and an additional boost can be achieved by adversarial finetuning, with minimal loss in sample diversity. We extensively validate our flow map models, called *Align Your Flow*, on challenging image generation benchmarks and achieve state-of-the-art few-step generation performance on both ImageNet 64x64 and 512x512, using small and efficient neural networks. Finally, we show text-to-image flow map models that outperform all existing non-adversarially trained few-step samplers in text-conditioned synthesis.", "arxiv_id": "2506.14603v1", "arxiv_authors": ["Amirmojtaba Sabour", "Sanja Fidler", "Karsten Kreis"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0ec"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5258729, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a55b"}, "filepath": "data/2503.07561v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997357225972057, "type": "Poster", "name": "Alligat0R: Pre-Training through Covisibility Segmentation for Relative Camera Pose Regression", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115161", "abstract": "Pre-training techniques have greatly advanced computer vision, with CroCo\u2019s cross-view completion approach yielding impressive results in tasks like 3D reconstruction and pose regression. 
However, cross-view completion is ill-posed in non-covisible regions, limiting its effectiveness. We introduce Alligat0R, a novel pre-training approach that replaces cross-view learning with a covisibility segmentation task. Our method predicts whether each pixel in one image is covisible in the second image, occluded, or outside the field of view, making the pre-training effective in both covisible and non-covisible regions, and providing interpretable predictions. To support this, we present Cub3, a large-scale dataset with 5M image pairs and dense covisibility annotations derived from the nuScenes and ScanNet datasets. Cub3 includes diverse scenarios with varying degrees of overlap. The experiments show that our novel pre-training method Alligat0R significantly outperforms CroCo in relative pose regression. Alligat0R and Cub3 will be made publicly available.", "arxiv_id": "2503.07561v1", "arxiv_authors": ["Thibaut Loiseau", "Guillaume Bourmaud", "Vincent Lepetit"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0ed"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2424632, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a55c"}, "filepath": "data/2505.21817v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996348352681615, "type": "Poster", "name": "ALTER: All-in-One Layer Pruning and Temporal Expert Routing for Efficient Diffusion Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120357", "abstract": "Diffusion models have demonstrated exceptional capabilities in generating high-fidelity images. However, their iterative denoising process results in significant computational overhead during inference, limiting their practical deployment in resource-constrained environments. Existing acceleration methods often adopt uniform strategies that fail to capture the temporal variations during diffusion generation, while the commonly adopted sequential $\\textit{pruning-then-fine-tuning strategy}$ suffers from sub-optimality due to the misalignment between pruning decisions made on pretrained weights and the model\u2019s final parameters. To address these limitations, we introduce $\\textbf{ALTER}$: $\\textbf{A}$ll-in-One $\\textbf{L}$ayer Pruning and $\\textbf{T}$emporal $\\textbf{E}$xpert $\\textbf{R}$outing, a unified framework that transforms diffusion models into a mixture of efficient temporal experts. ALTER achieves a single-stage optimization that unifies layer pruning, expert routing, and model fine-tuning by employing a trainable hypernetwork, which dynamically generates layer pruning decisions and manages timestep routing to specialized, pruned expert sub-networks throughout the ongoing fine-tuning of the UNet. This unified co-optimization strategy enables significant efficiency gains while preserving high generative quality. 
Specifically, ALTER achieves the same level of visual fidelity as the original 50-step Stable Diffusion v2.1 model while utilizing only 25.9\\% of its total MACs with just 20 inference steps and delivering a 3.64$\\times$ speedup through 35\\% sparsity.", "arxiv_id": "2505.21817v1", "arxiv_authors": ["Xiaomeng Yang", "Lei Lu", "Qihui Fan", "Changdi Yang", "Juyi Lin", "Yanzhi Wang", "Xuan Zhang", "Shangqian Gao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0ee"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060765, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a55d"}, "filepath": "data/2505.16495v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993446070408432, "type": "Poster", "name": "ALTo: Adaptive-Length Tokenizer for Autoregressive Mask Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116479", "abstract": "While humans effortlessly draw visual objects and shapes by adaptively allocating attention based on their complexity, existing multimodal large language models (MLLMs) remain constrained by rigid token representations. Bridging this gap, we propose ALTo, an adaptive length tokenizer for autoregressive mask generation. To achieve this, a novel token length predictor is designed, along with a length regularization term and a differentiable token chunking strategy. We further build ALToLLM that seamlessly integrates ALTo into an MLLM. Preferences on the trade-off between mask quality and efficiency are implemented via group relative policy optimization (GRPO). Experiments demonstrate that ALToLLM achieves state-of-the-art performance with adaptive token cost on popular segmentation benchmarks. Code and models will be released.", "arxiv_id": "2505.16495v1", "arxiv_authors": ["Lingfeng Wang", "Hualing Lin", "Senda Chen", "Tao Wang", "Changxu Cheng", "Yangyang Zhong", "Dong Zheng", "Wuyue Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0ef"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4149693, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a55e"}, "filepath": "data/2506.10038v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991424037362632, "type": "Poster", "name": "Ambient Diffusion Omni: Training Good Models with Bad Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118464", "abstract": "We show how to use low-quality, synthetic, and out-of-distribution images to improve the quality of a diffusion model. Typically, diffusion models are trained on curated datasets that emerge from highly filtered data pools from the Web and other sources. We show that there is immense value in the lower-quality images that are often discarded. We present Ambient Diffusion Omni, a simple, principled framework to train diffusion models that can extract signal from arbitrary images during training. Our framework exploits two properties of natural images -- spectral power law decay and locality. 
We first validate our framework by successfully training diffusion models with images synthetically corrupted by Gaussian blur, JPEG compression, and motion blur. We use our framework to achieve state-of-the-art ImageNet FID and we show significant improvements in both image quality and diversity for text-to-image generative modeling. The core insight is that noise dampens the initial skew between the desired high-quality distribution and the mixed distribution we actually observe. We provide rigorous theoretical justification for our approach by analyzing the trade-off between learning from biased data versus limited unbiased data across diffusion times.", "arxiv_id": "2506.10038v1", "arxiv_authors": ["Giannis Daras", "Adrian Rodriguez-Munoz", "Adam Klivans", "Antonio Torralba", "Constantinos Daskalakis"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f0"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 928923, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a55f"}, "filepath": "data/2505.17316v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997531119719528, "type": "Poster", "name": "Analyzing Fine-Grained Alignment and Enhancing Vision Understanding in Multimodal Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118225", "abstract": "Achieving better alignment between vision embeddings and Large Language Models (LLMs) is crucial for enhancing the abilities of Multimodal LLMs (MLLMs), particularly for recent models that rely on powerful pretrained vision encoders and LLMs. A common approach to connect the pretrained vision encoder and LLM is through a projector applied after the vision encoder. However, the projector is often trained to enable the LLM to generate captions, and hence the mechanism by which LLMs understand each vision token remains unclear. In this work, we first investigate the role of the projector in compressing vision embeddings and aligning them with word embeddings. We show that the projector significantly compresses visual information, removing redundant details while preserving essential elements necessary for the LLM to understand visual content. We then examine patch-level alignment---the alignment between each vision patch and its corresponding semantic words---and propose a $\\textit{multi-semantic alignment hypothesis}$. Our analysis indicates that the projector trained by caption loss improves patch-level alignment but only to a limited extent, resulting in weak and coarse alignment. To address this issue, we propose $\\textit{patch-aligned training}$ to efficiently enhance patch-level alignment. Our experiments show that patch-aligned training (1) achieves stronger compression capability and improved patch-level alignment, enabling the MLLM to generate higher-quality captions, (2) improves the MLLM's performance by 16% on referring expression grounding tasks, 4% on question-answering tasks, and 3% on modern instruction-following benchmarks when using the same supervised fine-tuning (SFT) setting. 
The proposed method can be easily extended to other multimodal models.", "arxiv_id": "2505.17316v1", "arxiv_authors": ["Jiachen Jiang", "Jinxin Zhou", "Bo Peng", "Xia Ning", "Zhihui Zhu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1013234, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a560"}, "filepath": "data/2506.09538v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996592097056498, "type": "Poster", "name": "AngleRoCL: Angle-Robust Concept Learning for Physically View-Invariant Adversarial Patches", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117272", "abstract": "Cutting-edge works have demonstrated that text-to-image (T2I) diffusion models can generate adversarial patches that mislead state-of-the-art object detectors in the physical world, revealing detectors' vulnerabilities and risks. However, these methods neglect the adversarial patches' attack effectiveness when observed from different views in the physical world (\\ie, angle robustness of the adversarial patches). In this paper, for the first time, we study the angle robustness of generated patches comprehensively, revealing the angle-robust issues of existing works and demonstrating that input texts affect the angle robustness of generated patches significantly. Motivated by the studies, we introduce Angle-Robust Concept Learning (AngleRoCL), a novel approach that learns a generalizable concept (\\ie, specialized text embeddings in implementation) representing the capability of generating angle-robust patches. The learned concept can be incorporated into text prompts and guides T2I models to generate patches with their attack effectiveness inherently resistant to viewpoint variations. Through extensive simulation and physical-world experiments across multiple observation views, we demonstrate that AngleRoCL significantly enhances the angle robustness of generated patches compared to baseline methods. Our patches maintain high attack success rates even under challenging viewing conditions, with an average improvement of xxx in attack effectiveness across multiple angles. 
This research advances the understanding of physically angle-robust patches and provides insights into the relationship between textual concepts and physical properties in T2I-generated contents.", "arxiv_id": "2506.09538v1", "arxiv_authors": ["Wenjun Ji", "Yuxiang Fu", "Luyang Ying", "Deng-Ping Fan", "Yuyi Wang", "Ming-Ming Cheng", "Ivor Tsang", "Qing Guo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1033727, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a561"}, "filepath": "data/2506.11252v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995678038628728, "type": "Poster", "name": "Anti-Aliased 2D Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119938", "abstract": "2D Gaussian Splatting (2DGS) has recently emerged as a promising method for novel view synthesis and surface reconstruction, offering better view-consistency and geometric accuracy than volumetric 3DGS. However, 2DGS suffers from severe aliasing artifacts when rendering at different sampling rates than those used during training, limiting its practical applications in scenarios requiring camera zoom or varying fields of view. We identify that these artifacts stem from two key limitations: the lack of frequency constraints in the representation and an ineffective screen-space clamping approach. To address these issues, we present AA-2DGS, an antialiased formulation of 2D Gaussian Splatting that maintains its geometric benefits while significantly enhancing rendering quality across different scales. Our method introduces a world space flat smoothing kernel that constrains the frequency content of 2D Gaussian primitives based on the maximal sampling frequency from training views, effectively eliminating high-frequency artifacts when zooming in. Additionally, we derive a novel object space Mip filter by leveraging an affine approximation of the ray-splat intersection mapping, which allows us to efficiently apply proper anti-aliasing directly in the local space of each splat.", "arxiv_id": "2506.11252v1", "arxiv_authors": ["Mae Younes", "Adnane Boukhayma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f3"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6979344, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a562"}, "filepath": "data/2505.02830v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990400796705712, "type": "Poster", "name": "AOR: Anatomical Ontology-Guided Reasoning for Medical Large Multimodal Model in Chest X-Ray Interpretation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118045", "abstract": "Chest X-rays (CXRs) are the most frequently performed imaging examinations in clinical settings. Recent advancements in Medical Large Multimodal Models (MLMMs) have enabled automated CXR interpretation, improving diagnostic accuracy and efficiency. 
However, despite their strong visual understanding, current MLMMs still face two major challenges: (1) Insufficient region-level understanding and interaction, and (2) Limited accuracy and interpretability due to single-step prediction. In this paper, we address these challenges by empowering MLMMs with anatomy-centric reasoning capabilities to enhance their interactivity and explainability. Specifically, we propose an Anatomical Ontology-Guided Reasoning (AOR) framework that accommodates both textual and optional visual prompts, centered on region-level information to enable multimodal multi-step reasoning. We also develop AOR-Instruction, a large instruction dataset for MLMMs training, under the guidance of expert physicians. Our experiments demonstrate AOR's superior performance in both Visual Question Answering (VQA) and report generation tasks. Code and data are available at: https://anonymous.4open.science/r/AOR-48C7/.", "arxiv_id": "2505.02830v1", "arxiv_authors": ["Qingqiu Li", "Zihang Cui", "Seongsu Bae", "Jilan Xu", "Runtian Yuan", "Yuejie Zhang", "Rui Feng", "Quanli Shen", "Xiaobo Zhang", "Junjun He", "Shujun Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1142841, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a563"}, "filepath": "data/2509.08104v1.png", "tags": [], "_media_type": "image", "_rand": 0.999295441967004, "type": "Poster", "name": "APML: Adaptive Probabilistic Matching Loss for Robust 3D Point Cloud Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118183", "abstract": "Training deep learning models for point cloud prediction tasks such as shape completion and generation depends critically on loss functions that measure discrepancies between predicted and ground-truth point sets. Commonly used functions such as Chamfer Distance (CD), HyperCD, and InfoCD rely on nearest-neighbor assignments, which often induce many-to-one correspondences, leading to point congestion in dense regions and poor coverage in sparse regions. These losses also involve non-differentiable operations due to index selection, which may affect gradient-based optimization. Earth Mover Distance (EMD) enforces one-to-one correspondences and captures structural similarity more effectively, but its cubic computational complexity limits its practical use. We propose the Adaptive Probabilistic Matching Loss (APML), a fully differentiable approximation of one-to-one matching that leverages Sinkhorn iterations on a temperature-scaled similarity matrix derived from pairwise distances. We analytically compute the temperature to guarantee a minimum assignment probability, eliminating manual tuning. APML achieves near-quadratic runtime, comparable to Chamfer-based losses, and avoids non-differentiable operations. 
When integrated into state-of-the-art architectures (PoinTr, PCNNet) on ShapeNet benchmarks and on a spatio\u2011temporal Transformer (CSI2PC) that \\textit{generates} 3\u2011D human point clouds from WiFi\u2011CSI measurements, APM loss yields faster convergence, superior spatial distribution, especially in low-density regions, and improved or on-par quantitative performance without additional hyperparameter search. The code is available at: https://github.com/apm-loss/apml.", "arxiv_id": "2509.08104v1", "arxiv_authors": ["Sasan Sharifipour", "Constantino \u00c1lvarez Casado", "Mohammad Sabokrou", "Miguel Bordallo L\u00f3pez"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2708708, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a564"}, "filepath": "data/2505.13431v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999289344523377, "type": "Poster", "name": "A Practical Guide for Incorporating Symmetry in Diffusion Policy", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116965", "abstract": "Recently, equivariant neural networks for policy learning have shown promising improvements in sample efficiency and generalization, however, their wide adoption faces substantial barriers due to implementation complexity. Equivariant architectures typically require specialized mathematical formulations and custom network design, posing significant challenges when integrating with modern policy frameworks like diffusion-based models. In this paper, we explore a number of straightforward and practical approaches to incorporate symmetry benefits into diffusion policies without the overhead of full equivariant designs. Specifically, we investigate (i) invariant representations via relative trajectory actions and eye-in-hand perception, (ii) integrating equivariant vision encoders, and (iii) symmetric feature extraction with pretrained encoders using Frame Averaging. We first prove that combining eye-in-hand perception with relative or delta action parameterization yields inherent SE(3)-invariance, thus improving policy generalization. We then perform a systematic experimental study on those design choices for integrating symmetry in diffusion policies, and conclude that an invariant representation with equivariant feature extraction significantly improves the policy performance. 
Our method achieves performance on par with or exceeding fully equivariant architectures while greatly simplifying implementation.", "arxiv_id": "2505.13431v2", "arxiv_authors": ["Dian Wang", "Boce Hu", "Shuran Song", "Robin Walters", "Robert Platt"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f6"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1042324, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a565"}, "filepath": "data/2503.22346v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990354081927343, "type": "Poster", "name": "ArchCAD-400K: A Large-Scale CAD drawings Dataset and New Baseline for Panoptic Symbol Spotting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115808", "abstract": "Recognizing symbols in architectural CAD drawings is critical for various advanced engineering applications. In this paper, we propose a novel CAD data annotation engine that leverages intrinsic attributes from systematically archived CAD drawings to automatically generate high-quality annotations, thus significantly reducing manual labeling efforts. Utilizing this engine, we construct ArchCAD-400K, a large-scale CAD dataset consisting of 413,062 chunks from 5538 highly standardized drawings, making it over 26 times larger than the largest existing CAD dataset. ArchCAD-400K boasts an extended drawing diversity and broader categories, offering line-grained annotations. Furthermore, we present a new baseline model for panoptic symbol spotting, termed Dual-Pathway Symbol Spotter (DPSS). It incorporates an adaptive fusion module to enhance primitive features with complementary image features, achieving state-of-the-art performance and enhanced robustness. Extensive experiments validate the effectiveness of DPSS, demonstrating the value of ArchCAD-400K and its potential to drive innovation in architectural design and construction.", "arxiv_id": "2503.22346v2", "arxiv_authors": ["Ruifeng Luo", "Zhengjie Liu", "Tianxiao Cheng", "Jie Wang", "Tongjie Wang", "Xingguang Wei", "Haomin Wang", "YanPeng Li", "Fu Chai", "Fei Cheng", "Shenglong Ye", "Wenhai Wang", "Yanting Zhang", "Yu Qiao", "Hongjie Zhang", "Xianzhong Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.457Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061242, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a566"}, "filepath": "data/2506.02093v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991382664287294, "type": "Poster", "name": "Are Pixel-Wise Metrics Reliable for Computerized Tomography Reconstruction?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118574", "abstract": "Widely adopted evaluation metrics for sparse-view CT reconstruction---such as Structural Similarity Index Measure and Peak Signal-to-Noise Ratio---prioritize pixel-wise fidelity but often fail to capture the completeness of critical anatomical structures, particularly small or thin regions that are easily missed. 
To address this limitation, we propose a suite of novel anatomy-aware evaluation metrics designed to assess structural completeness across anatomical structures, including large organs, small organs, intestines, and vessels. Building on these metrics, we introduce CARE, a Completeness-Aware Reconstruction Enhancement framework that incorporates structural penalties during training to encourage anatomical preservation of significant regions. CARE is model-agnostic and can be seamlessly integrated into both analytical reconstruction methods and modern learning-based methods, such as Neural Radiance Fields and Gaussian Splatting. When applied to these methods, CARE substantially improves structural completeness in reconstructed CT scans, yielding performance gains of up to +32\\% for large organs, +22\\% for small organs, +40\\% for intestines, and +36\\% for vessels. Code has been attached as supplementary material for peer review and will be made publicly available.", "arxiv_id": "2506.02093v2", "arxiv_authors": ["Tianyu Lin", "Xinran Li", "Chuntung Zhuang", "Qi Chen", "Yuanhao Cai", "Kai Ding", "Alan L. Yuille", "Zongwei Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f8"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2177492, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a567"}, "filepath": "data/2510.20803v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992322582408042, "type": "Poster", "name": "ARGenSeg: Image Segmentation with Autoregressive Image Generation Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115738", "abstract": "We propose a novel AutoRegressive Generation-based paradigm for image Segmentation (ARGenSeg), achieving multimodal understanding and pixel-level perception within a unified framework. 
Prior works integrating image segmentation into multimodal large language models (MLLMs) typically employ either boundary point representations or dedicated segmentation heads. These methods rely on discrete representations or semantic prompts fed into task-specific decoders, which limits the ability of the MLLM to capture fine-grained visual details. To address these challenges, we introduce a segmentation framework for MLLM based on image generation, which naturally produces dense masks for target objects. We leverage MLLM to output visual tokens and detokenize them into images using a universal VQ-VAE, making the segmentation fully dependent on the pixel-level understanding of the MLLM. To reduce inference latency, we employ a next-scale-prediction strategy to generate required visual tokens in parallel. Extensive experiments demonstrate that our method surpasses prior state-of-the-art approaches on multiple segmentation datasets with a remarkable boost in inference speed, while maintaining strong understanding capabilities.", "arxiv_id": "2510.20803v1", "arxiv_authors": ["Xiaolong Wang", "Lixiang Ru", "Ziyuan Huang", "Kaixiang Ji", "Dandan Zheng", "Jingdong Chen", "Jun Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0f9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4476852, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a568"}, "filepath": "data/2509.20824v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998829640983481, "type": "Poster", "name": "ARMesh: Autoregressive Mesh Generation via Next-Level-of-Detail Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115211", "abstract": "Directly generating 3D meshes, the default representation for 3D shapes in the graphics industry, using auto-regressive (AR) models has become popular these days, thanks to their sharpness, compactness in the generated results, and ability to represent various types of surfaces. However, AR mesh generative models typically construct meshes face by face in lexicographic order, which does not effectively capture the underlying geometry in a manner consistent with human perception. Inspired by 2D models that progressively refine images, such as the prevailing next-scale prediction AR models, we propose generating meshes auto-regressively in a progressive coarse-to-fine manner. Specifically, we view mesh simplification algorithms, which gradually merge mesh faces to build simpler meshes, as a natural fine-to-coarse process. Therefore, we develop a transformer-based AR model to approximate the reverse process of a generalized mesh simplification algorithm in the order of level-of-detail, constructing meshes initially from a single point and gradually adding geometric details through local remeshing, where the topology is not predefined and is alterable. 
Our ablation studies and experiments show that this novel progressive mesh generation approach not only leads to improved mesh quality but also enables applications such as mesh refinement and editing.", "arxiv_id": "2509.20824v1", "arxiv_authors": ["Jiabao Lei", "Kewei Shi", "Zhihao Liang", "Kui Jia"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0fa"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2836810, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a569"}, "filepath": "data/2506.06962v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996505207527241, "type": "Poster", "name": "AR-RAG: Autoregressive Retrieval Augmentation for Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116365", "abstract": "We introduce Autoregressive Retrieval Augmentation (AR-RAG), a novel paradigm that enhances image generation by autoregressively incorporating k-nearest neighbor retrievals at the patch level.Unlike prior methods that perform a single, static retrieval before generation and condition the entire generation on fixed reference images, AR-RAG performs context-aware retrievals at each generation step, using prior-generated patches as queries to retrieve and incorporate the most relevant patch-level visual references, enabling the model to respond to evolving generation needs while avoiding limitations (e.g., over-copying, stylistic bias, etc.) prevalent in existing methods. To realize AR-RAG, we propose two parallel frameworks: (1) Distribution-Augmentation in Decoding (DAiD), a training-free plug-and-use decoding strategy that directly merges the distribution of model-predicted patches with the distribution of retrieved patches, and (2) Feature-Augmentation in Decoding (FAiD), a parameter-efficient fine-tuning method that progressively smooths the features of retrieved patches via multi-scale convolution operations and leverages them to augment the image generation process. We validate the effectiveness of AR-RAG on widely adopted benchmarks, including Midjourney-30K, GenEval and DPG-Bench, demonstrating significant performance gains over state-of-the-art image generation models.", "arxiv_id": "2506.06962v3", "arxiv_authors": ["Jingyuan Qi", "Zhiyang Xu", "Qifan Wang", "Lifu Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0fb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1973779, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a56a"}, "filepath": "data/2506.21724v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990758279284557, "type": "Poster", "name": "Asymmetric Dual Self-Distillation for 3D Self-Supervised Representation Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119465", "abstract": "Learning semantically meaningful representations from unstructured 3D point clouds remains a central challenge in computer vision, especially in the absence of large-scale labeled datasets. 
While masked point modeling (MPM) is widely used in self-supervised 3D learning, its reconstruction-based objective can limit its ability to capture high-level semantics. We propose AsymDSD, an Asymmetric Dual Self-Distillation framework that unifies masked modeling and invariance learning through prediction in the latent space rather than the input space. AsymDSD builds on a joint embedding architecture and introduces several key design choices: an efficient asymmetric setup, disabling attention between masked queries to prevent shape leakage, multi-mask sampling, and a point cloud adaptation of multi-crop. AsymDSD achieves state-of-the-art results on ScanObjectNN (90.53\\%) and further improves to 93.72\\% when pretrained on 930k shapes, surpassing prior methods.", "arxiv_id": "2506.21724v1", "arxiv_authors": ["Remco F. Leijenaar", "Hamidreza Kasaei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0fc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1140309, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a56b"}, "filepath": "data/2507.17657v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993025716075105, "type": "Poster", "name": "Attention (as Discrete-Time Markov) Chains", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115284", "abstract": "We introduce a new interpretation of the attention matrix as a discrete-time Markov chain. Our interpretation sheds light on common operations involving attention scores such as selection, summation, and averaging in a unified framework. It further extends them by considering indirect attention, propagated through the Markov chain, as opposed to previous studies that only model immediate effects. Our main observation is that tokens corresponding to semantically similar regions form a set of metastable states, where the attention clusters, while noisy attention scores tend to disperse. Metastable states and their prevalence can be easily computed through simple matrix multiplication and eigenanalysis, respectively. Using these lightweight tools, we demonstrate state-of-the-art zero-shot segmentation. Lastly, we define TokenRank---the steady state vector of the Markov chain, which measures global token importance. We demonstrate that using it brings improvements in unconditional image generation. We believe our framework offers a fresh view of how tokens are being attended in modern visual transformers.", "arxiv_id": "2507.17657v2", "arxiv_authors": ["Yotam Erel", "Olaf D\u00fcnkel", "Rishabh Dabral", "Vladislav Golyanik", "Christian Theobalt", "Amit H. Bermano"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0fd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1584596, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a56c"}, "filepath": "data/2505.19911v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992378264817714, "type": "Poster", "name": "Attention! 
Your Vision Language Model Could Be Maliciously Manipulated", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119984", "abstract": "Large Vision-Language Models (VLMs) have achieved remarkable success in understanding complex real-world scenarios and supporting data-driven decision-making processes. However, VLMs exhibit significant vulnerability to adversarial examples, either text or image, which can lead to various adversarial outcomes, e.g., jailbreaking, hijacking, and hallucination, etc. In this work, we empirically and theoretically demonstrate that VLMs are particularly susceptible to image-based adversarial examples, where imperceptible perturbations can precisely manipulate each output token. To this end, we propose a novel attack called Vision-language model Manipulation Attack (VMA), which integrates first-order and second-order momentum optimization techniques with a differentiable transformation mechanism to effectively optimize the adversarial perturbation. Notably, VMA can be a double-edged sword: it can be leveraged to implement various attacks, such as jailbreaking, hijacking, privacy breaches, Denial-of-Service, and the generation of sponge examples, etc., while simultaneously enabling the injection of watermarks for copyright protection. Extensive empirical evaluations substantiate the efficacy and generalizability of VMA across diverse scenarios and datasets.", "arxiv_id": "2505.19911v1", "arxiv_authors": ["Xiaosen Wang", "Shaokang Wang", "Zhijin Ge", "Yuyang Luo", "Shudong Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0fe"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1098187, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a56d"}, "filepath": "data/2506.08003v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998608168813468, "type": "Poster", "name": "Audio-Sync Video Generation with Multi-Stream Temporal Control", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120270", "abstract": "Audio is inherently temporal and closely synchronized with the visual world, making it a naturally aligned and expressive control signal for controllable video generation (e.g., movies). Beyond control, directly translating audio into video is essential for understanding and visualizing rich audio narratives (e.g., Podcasts or historical recordings). However, existing approaches fall short in generating high-quality videos with precise audio-visual synchronization, especially across diverse and complex audio types. In this work, we introduce MTV, a versatile framework for audio-sync video generation. MTV explicitly separates audio into speech, effects, and music tracks, enabling disentangled control over lip motion, event timing, and visual mood, respectively\u2014resulting in fine-grained and semantically aligned video generation. To support the framework, we additionally present DEMIX, a dataset comprising high-quality cinematic videos and demixed audio tracks. 
DEMIX is structured into five overlapping subsets, enabling scalable multi-stage training for diverse generation scenarios. Extensive experiments demonstrate that MTV achieves state-of-the-art performance across six standard metrics spanning video quality, text-video consistency, and audio-video alignment.", "arxiv_id": "2506.08003v1", "arxiv_authors": ["Shuchen Weng", "Haojie Zheng", "Zheng Chang", "Si Li", "Boxin Shi", "Xinlong Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a0ff"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1080870, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a56e"}, "filepath": "data/2505.19858v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993576522036306, "type": "Poster", "name": "A Unified Solution to Video Fusion: From Multi-Frame Learning to Benchmarking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116108", "abstract": "The real world is dynamic, yet most image fusion methods process static frames independently, ignoring temporal correlations in videos and leading to flickering and temporal inconsistency. To address this, we propose Unified Video Fusion (UniVF), a novel framework that leverages multi-frame learning and optical flow-based feature warping for informative, temporally coherent video fusion. To support its development, we also introduce Video Fusion Benchmark (VF-Bench), the first comprehensive benchmark covering four video fusion tasks: multi-exposure, multi-focus, infrared-visible, and medical fusion. VF-Bench provides high-quality, well-aligned video pairs obtained through synthetic data generation and rigorous curation from existing datasets, with a unified evaluation protocol that jointly assesses the spatial quality and temporal consistency of video fusion. Extensive experiments show that UniVF achieves state-of-the-art results across all tasks on VF-Bench. Both the code and the dataset will be made publicly available.", "arxiv_id": "2505.19858v2", "arxiv_authors": ["Zixiang Zhao", "Haowen Bai", "Bingxin Ke", "Yukun Cui", "Lilun Deng", "Yulun Zhang", "Kai Zhang", "Konrad Schindler"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a100"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5928962, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a56f"}, "filepath": "data/2506.11430v3.png", "tags": [], "_media_type": "image", "_rand": 0.999099062379421, "type": "Poster", "name": "Auto-Connect: Connectivity-Preserving RigFormer with Direct Preference Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115497", "abstract": "We introduce Auto-Connect, a novel approach for automatic rigging that explicitly preserves skeletal connectivity through a connectivity-preserving tokenization scheme. 
Unlike previous methods that predict bone positions represented as two joints or first predict points before determining connectivity, our method employs special tokens to define endpoints for each joint's children and for each hierarchical layer, effectively automating connectivity relationships. This approach significantly enhances topological accuracy by integrating connectivity information directly into the prediction framework. To further guarantee high-quality topology, we implement a topology-aware reward function that quantifies topological correctness, which is then utilized in a post-training phase through reward-guided Direct Preference Optimization. Additionally, we incorporate implicit geodesic features for latent top-$k$ bone selection, which substantially improves skinning quality. By leveraging geodesic distance information within the model's latent space, our approach intelligently determines the most influential bones for each vertex, effectively mitigating common skinning artifacts. This combination of connectivity-preserving tokenization, reward-guided fine-tuning, and geodesic-aware bone selection enables our model to consistently generate more anatomically plausible skeletal structures with superior deformation properties.", "arxiv_id": "2506.11430v3", "arxiv_authors": ["Jingfeng Guo", "Jian Liu", "Jinnan Chen", "Shiwei Mao", "Changrong Hu", "Puhua Jiang", "Junlin Yu", "Jing Xu", "Qi Liu", "Lixin Xu", "Zhuo Chen", "Chunchao Guo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a101"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1020234, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a570"}, "filepath": "data/2509.15031v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999351517950196, "type": "Poster", "name": "AutoEdit: Automatic Hyperparameter Tuning for Image Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116702", "abstract": "Recent advances in diffusion models have revolutionized text-guided image editing, yet existing editing methods face critical challenges in hyperparameter identification. To achieve reasonable editing performance, these methods often require the user to brute-force tune multiple interdependent hyperparameters, such as inversion timesteps and attention modification, \\textit{etc.} This process incurs high computational costs due to the huge hyperparameter search space. We cast the search for optimal editing hyperparameters as a sequential decision-making task within the diffusion denoising process. Specifically, we propose a reinforcement learning framework, which establishes a Markov Decision Process that dynamically adjusts hyperparameters across denoising steps, integrating editing objectives into a reward function. The method achieves time efficiency through proximal policy optimization while maintaining optimal hyperparameter configurations. 
Experiments demonstrate significant reduction in search time and computational overhead compared to existing brute-force approaches, advancing the practical deployment of a diffusion-based image editing framework in the real world.", "arxiv_id": "2509.15031v2", "arxiv_authors": ["Chau Pham", "Quan Dao", "Mahesh Bhosale", "Yunjie Tian", "Dimitris Metaxas", "David Doermann"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a102"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 957777, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a571"}, "filepath": "data/2510.21704v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991021733298198, "type": "Poster", "name": "Automated Detection of Visual Attribute Reliance with a Self-Reflective Agent", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118510", "abstract": "When a vision model performs image recognition, which visual attributes drive its predictions? Detecting unintended use of specific visual features is critical for ensuring model robustness, preventing overfitting, and avoiding spurious correlations. We introduce an automated framework for detecting these dependencies in trained vision models. At the core of our method is a self-reflective agent that systematically generates and tests hypotheses about the unintended visual attributes that a model may rely on. This process is iterative: the agent refines its hypotheses based on experimental outcomes and uses a self-evaluation protocol to assess whether its findings accurately explain model behavior. If inconsistencies are detected, the agent self-reflects over its findings and triggers a new cycle of experimentation. We evaluate our approach on a novel benchmark of 130 models designed to exhibit diverse visual attribute dependencies across 18 categories. Our results show that the agent's performance consistently improves with self-reflection, with a significant performance increase over non-reflective baselines. We further demonstrate that the agent identifies real-world visual attribute dependencies in state-of-the-art models, including CLIP's vision encoder and the YOLOv8 object detector.", "arxiv_id": "2510.21704v1", "arxiv_authors": ["Christy Li", "Josep Lopez Camu\u00f1as", "Jake Thomas Touchet", "Jacob Andreas", "Agata Lapedriza", "Antonio Torralba", "Tamar Rott Shaham"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a103"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1138842, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a572"}, "filepath": "data/2311.16515v4.png", "tags": [], "_media_type": "image", "_rand": 0.9990100702564103, "type": "Poster", "name": "Automatic Synthetic Data and Fine-grained Adaptive Feature Alignment for Composed Person Retrieval", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115728", "abstract": "Person retrieval has attracted rising attention. 
Existing methods are mainly divided into two retrieval modes, namely image-only and text-only. However, they are unable to make full use of the available information and struggle to meet diverse application requirements. To address the above limitations, we propose a new Composed Person Retrieval (CPR) task, which combines visual and textual queries to identify individuals of interest from large-scale person image databases. Nevertheless, the foremost difficulty of the CPR task is the lack of available annotated datasets. Therefore, we first introduce a scalable automatic data synthesis pipeline, which decomposes complex multimodal data generation into the creation of textual quadruples followed by identity-consistent image synthesis using fine-tuned generative models. Meanwhile, a multimodal filtering method is designed to ensure the resulting SynCPR dataset retains 1.15 million high-quality and fully synthetic triplets. Additionally, to improve the representation of composed person queries, we propose a novel Fine-grained Adaptive Feature Alignment (FAFA) framework through fine-grained dynamic alignment and masked feature reasoning. Moreover, for objective evaluation, we manually annotate the Image-Text Composed Person Retrieval (ITCPR) test set. Extensive experiments demonstrate the effectiveness of the SynCPR dataset and the superiority of the proposed FAFA framework when compared with the state-of-the-art methods. All code will be open-sourced.", "arxiv_id": "2311.16515v4", "arxiv_authors": ["Delong Liu", "Haiwen Li", "Zhaohui Hou", "Zhicheng Zhao", "Fei Su", "Yuan Dong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a104"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.458Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2921510, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a573"}, "filepath": "data/2510.05061v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998692990396322, "type": "Poster", "name": "Automaton Constrained Q-Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119224", "abstract": "Real-world robotic tasks often require agents to achieve sequences of goals while respecting time-varying safety constraints. However, standard Reinforcement Learning (RL) paradigms are fundamentally limited in these settings. A natural approach to these problems is to combine RL with Linear-time Temporal Logic (LTL), a formal language for specifying complex, temporally extended tasks and safety constraints. Yet, existing RL methods for LTL objectives exhibit poor empirical performance in complex and continuous environments. As a result, no scalable methods support both temporally ordered goals and safety simultaneously, making them ill-suited for realistic robotics scenarios. We propose Automaton Constrained Q-Learning (ACQL), an algorithm that addresses this gap by combining goal-conditioned value learning with automaton-guided reinforcement. ACQL supports most LTL task specifications and leverages their automaton representation to explicitly encode stage-wise goal progression and both stationary and non-stationary safety constraints. 
We show that ACQL outperforms existing methods across a range of continuous control tasks, including cases where prior methods fail to satisfy either goal-reaching or safety constraints. We further validate its real-world applicability by deploying ACQL on a 6-DOF robotic arm performing a goal-reaching task in a cluttered, cabinet-like space with safety constraints. Our results demonstrate that ACQL is a robust and scalable solution for learning robotic behaviors according to rich temporal specifications.", "arxiv_id": "2510.05061v1", "arxiv_authors": ["Anastasios Manganaris", "Vittorio Giammarino", "Ahmed H. Qureshi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a105"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.459Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1054420, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a574"}, "filepath": "data/2507.13346v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997573349178527, "type": "Poster", "name": "AutoPartGen: Autoregressive 3D Part Generation and Discovery", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116264", "abstract": "We introduce AutoPartGen, a model that generates objects composed of 3D parts in an autoregressive manner. This model can take as input an image of an object, 2D masks of the object's parts, or an existing 3D object, and generate a corresponding compositional 3D reconstruction. Our approach builds upon 3DShape2VecSet, a recent latent 3D representation with powerful geometric expressiveness. We observe that this latent space exhibits strong compositional properties, making it particularly well-suited for part-based generation tasks. Specifically, AutoPartGen generates object parts autoregressively, predicting one part at a time while conditioning on previously generated parts and additional inputs, such as 2D images, masks, or 3D objects. This process continues until the model decides that all parts have been generated, thus determining automatically the type and number of parts. The resulting parts can be seamlessly assembled into coherent objects or scenes without requiring additional optimization. We evaluate both the overall 3D generation capabilities and the part-level generation quality of AutoPartGen, demonstrating that it achieves state-of-the-art performance in 3D part generation.", "arxiv_id": "2507.13346v2", "arxiv_authors": ["Minghao Chen", "Jianyuan Wang", "Roman Shapovalov", "Tom Monnier", "Hyunyoung Jung", "Dilin Wang", "Rakesh Ranjan", "Iro Laina", "Andrea Vedaldi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a106"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.459Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2987613, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a575"}, "filepath": "data/2506.09350v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993043241154326, "type": "Poster", "name": "Autoregressive Adversarial Post-Training for Real-Time Interactive Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116297", "abstract": "Existing 
large-scale video generation models are computationally intensive, preventing adoption in real-time and interactive applications. In this work, we propose autoregressive adversarial post-training (AAPT) to turn a pre-trained latent video diffusion model into a real-time, interactive, streaming video generator. Our model autoregressively generates one latent frame at a time using a single neural function evaluation (1NFE). The model can stream the result to the user in real time and receive interactive responses as control to generate the next latent frame. Unlike existing approaches, our method explores adversarial training as an effective paradigm for autoregressive generation. This allows us to design a more efficient architecture for one-step generation and to train the model in a student-forcing way to mitigate error accumulation. The adversarial approach also enables us to train the model for long-duration generation while fully utilizing the KV cache. As a result, our 8B model achieves real-time, 24fps, nonstop, streaming video generation at 736x416 resolution on a single H100, or 1280x720 on 8xH100 up to a minute long (1440 frames).", "arxiv_id": "2506.09350v2", "arxiv_authors": ["Shanchuan Lin", "Ceyuan Yang", "Hao He", "Jianwen Jiang", "Yuxi Ren", "Xin Xia", "Yang Zhao", "Xuefeng Xiao", "Lu Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a107"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.459Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1044101, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a576"}, "filepath": "data/2506.13757v1.png", "tags": [], "_media_type": "image", "_rand": 0.999574621450819, "type": "Poster", "name": "AutoVLA: Vision-Language-Action Model for End-to-End Autonomous Driving with Adaptive Reasoning and Reinforcement Fine-Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120167", "abstract": "Recent advancements in Vision-Language-Action (VLA) models have shown promise for end-to-end autonomous driving by leveraging world knowledge and reasoning capabilities. However, current VLA models often struggle with physically infeasible action outputs, complex model structures, and unnecessarily long reasoning. In this paper, we propose AutoVLA, a novel VLA framework that unifies reasoning and action generation within a single autoregressive generation model. AutoVLA performs semantic reasoning and trajectory planning directly from raw visual inputs and language instructions. We tokenize continuous trajectories into discrete, feasible actions, enabling direct integration into the language model. For training, we employ supervised fine-tuning to equip the model with dual thinking modes: fast thinking (trajectory-only) and slow thinking (enhanced with chain-of-thought reasoning). To further enhance planning performance and efficiency, we introduce a reinforcement fine-tuning method based on Group Relative Policy Optimization (GRPO), reducing unnecessary reasoning in straightforward scenarios. Extensive experiments across real-world and simulated datasets and benchmarks, including nuPlan, nuScenes, Waymo, and CARLA, demonstrate the competitive performance of AutoVLA in both open-loop and closed-loop settings. 
Qualitative results further showcase the adaptive reasoning and accurate planning capabilities of AutoVLA in diverse scenarios. We will release the code, model weights, and datasets to facilitate future research in the field.", "arxiv_id": "2506.13757v1", "arxiv_authors": ["Zewei Zhou", "Tianhui Cai", "Seth Z. Zhao", "Yun Zhang", "Zhiyu Huang", "Bolei Zhou", "Jiaqi Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a108"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.467Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1085610, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a577"}, "filepath": "data/2505.11886v4.png", "tags": [], "_media_type": "image", "_rand": 0.9993844211783981, "type": "Poster", "name": "Aux-Think: Exploring Reasoning Strategies for Data-Efficient Vision-Language Navigation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115414", "abstract": "Vision-Language Navigation is a critical task for developing embodied agents that can follow natural language instructions to navigate in complex real-world environments. Recent advances by finetuning large pretrained models have significantly improved generalization and instruction grounding compared to traditional approaches. However, the role of reasoning strategies in navigation\u2014an action-centric, long-horizon task\u2014remains underexplored, despite Chain-of-Thought reasoning's demonstrated success in static tasks like question answering and visual reasoning. To address this gap, we conduct the first systematic evaluation of reasoning strategies for VLN, including No-Think (direct action prediction), Pre-Think (reason before action), and Post-Think (reason after action). Surprisingly, our findings reveal the Inference-time Reasoning Collapse issue, where inference-time reasoning degrades navigation accuracy, highlighting the challenges of integrating reasoning into VLN. Based on this insight, we propose Aux-Think, a framework that trains models to internalize structured reasoning patterns through CoT supervision during training, while preserving No-Think inference for efficient action prediction. To support this framework, we release R2R-CoT-320k, a large-scale Chain-of-Thought annotated dataset. 
Empirically, Aux-Think significantly reduces training effort without compromising performance.", "arxiv_id": "2505.11886v4", "arxiv_authors": ["Shuo Wang", "Yongcai Wang", "Wanting Li", "Xudong Cai", "Yucheng Wang", "Maiyue Chen", "Kaihui Wang", "Zhizhong Su", "Deying Li", "Zhaoxin Fan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a109"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.467Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 956444, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a578"}, "filepath": "data/2505.20862v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995753451515118, "type": "Poster", "name": "AVCD: Mitigating Hallucinations in Audio-Visual Large Language Models through Contrastive Decoding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119986", "abstract": "Hallucination remains a major challenge in multimodal large language models (MLLMs). To address this, various contrastive decoding (CD) methods have been proposed that contrasts original logits with hallucinated logits generated from perturbed inputs. While CD has shown promise in vision-language models (VLMs), it is not well-suited for AV-LLMs, where hallucinations often emerge from both unimodal and cross-modal combinations involving audio, video, and language. These intricate interactions call for a more adaptive and modality-aware decoding strategy. In this paper, we propose Audio-Visual Contrastive Decoding (AVCD)\u2014a novel, training-free decoding framework designed to model trimodal interactions and suppress modality-induced hallucinations in AV-LLMs. Unlike previous CD methods in VLMs that corrupt a fixed modality, AVCD leverages attention distributions to dynamically identify less dominant modalities and applies attentive masking to generate perturbed output logits. To support CD in a trimodal setting, we also reformulate the original CD framework to jointly handle audio, visual, and textual inputs. Finally, to improve efficiency, we introduce entropy-guided adaptive decoding, which selectively skips unnecessary decoding steps based on the model\u2019s confidence in its predictions. Extensive experiments demonstrate that AVCD consistently outperforms existing decoding methods. 
In particular, on the AVHBench dataset, it improves accuracy by 6% for VideoLLaMA2 and 11% for Video-SALMONN, demonstrating strong robustness and generalizability.", "arxiv_id": "2505.20862v2", "arxiv_authors": ["Chaeyoung Jung", "Youngjoon Jang", "Joon Son Chung"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a10a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.467Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1112010, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a579"}, "filepath": "data/2510.21111v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997038735700791, "type": "Poster", "name": "AVR: Active Visual Reasoning for Multimodal Large Language Models in Physical Environments", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119450", "abstract": "Visual reasoning in multimodal large language models (MLLMs) has primarily been studied in static, fully observable settings, limiting their effectiveness in real-world environments where information is often incomplete due to occlusion or limited field of view. Humans, in contrast, actively explore and interact with their environment\u2014moving, examining, and manipulating objects\u2014to gather information through a closed-loop process integrating perception, reasoning, and action. Inspired by this human capability, we introduce the Active Visual Reasoning (AVR) task, extending visual reasoning to partially observable, interactive environments. AVR requires agents to: (1) actively acquire information via sequential physical actions, (2) integrate observations across multiple steps for coherent reasoning, and (3) dynamically adjust decisions based on evolving visual feedback. To rigorously evaluate AVR, we introduce CLEVR-AVR, a simulation benchmark featuring multi-round interactive environments designed to assess both reasoning correctness and information-gathering efficiency. We present AVR-152k, a large-scale dataset that offers rich Chain-of-Thought (CoT) annotations detailing iterative reasoning for uncertainty identification, action-conditioned information gain prediction, and information-maximizing action selection, crucial for training agents in a higher-order Markov Decision Process. Building on this, we develop PhysVLM-AVR, an MLLM achieving state-of-the-art performance on CLEVR-AVR, embodied reasoning (OpenEQA, RoboVQA), and passive visual reasoning (GeoMath, Geometry30K). 
Our analysis also reveals that current embodied MLLMs, despite detecting information incompleteness, struggle to actively acquire and integrate new information through interaction, highlighting a fundamental gap in active reasoning capabilities.", "arxiv_id": "2510.21111v1", "arxiv_authors": ["Weijie Zhou", "Xuantang Xiong", "Yi Peng", "Manli Tao", "Chaoyang Zhao", "Honghui Dong", "Ming Tang", "Jinqiao Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a10b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.467Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1054299, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a57a"}, "filepath": "data/2509.15497v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998179781010827, "type": "Poster", "name": "Backdoor Mitigation via Invertible Pruning Masks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115412", "abstract": "Model pruning has gained traction as a promising defense strategy against backdoor attacks in deep learning. However, existing pruning-based approaches often fall short in accurately identifying and removing the specific parameters responsible for inducing backdoor behaviors. Despite the dominance of fine-tuning-based defenses in recent literature, largely due to their superior performance, pruning remains a compelling alternative, offering greater interpretability and improved robustness in low-data regimes. In this paper, we propose a novel pruning approach featuring a learned \\emph{selection} mechanism to identify parameters critical to both main and backdoor tasks, along with an \\emph{invertible} pruning mask designed to simultaneously achieve two complementary goals: eliminating the backdoor task while preserving it through the inverse mask. We formulate this as a bi-level optimization problem that jointly learns selection variables, a sparse invertible mask, and sample-specific backdoor perturbations derived from clean data. The inner problem synthesizes candidate triggers using the inverse mask, while the outer problem refines the mask to suppress backdoor behavior without impairing clean-task accuracy. Extensive experiments demonstrate that our approach outperforms existing pruning-based backdoor mitigation approaches, maintains strong performance under limited data conditions, and achieves competitive results compared to state-of-the-art fine-tuning approaches. 
Notably, the proposed approach is particularly effective in restoring correct predictions for compromised samples after successful backdoor mitigation.", "arxiv_id": "2509.15497v2", "arxiv_authors": ["Kealan Dunnett", "Reza Arablouei", "Dimity Miller", "Volkan Dedeoglu", "Raja Jurdak"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a10c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.467Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1025428, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a57b"}, "filepath": "data/2510.21366v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995158104399371, "type": "Poster", "name": "BADiff: Bandwidth Adaptive Diffusion Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116460", "abstract": "In this work, we propose a novel framework to enable diffusion models to adapt their generation quality based on real-time network bandwidth constraints. Traditional diffusion models produce high-fidelity images by performing a fixed number of denoising steps, regardless of downstream transmission limitations. However, in practical cloud-to-device scenarios, limited bandwidth often necessitates heavy compression, leading to loss of fine textures and wasted computation. To address this, we introduce a joint end-to-end training strategy where the diffusion model is conditioned on a target quality level derived from the available bandwidth. During training, the model learns to adaptively modulate the denoising process, enabling early-stop sampling that maintains perceptual quality appropriate to the target transmission condition. Our method requires minimal architectural changes and leverages a lightweight quality embedding to guide the denoising trajectory. Experimental results demonstrate that our approach significantly improves the visual fidelity of bandwidth-adapted generations compared to naive early-stopping, offering a promising solution for efficient image delivery in bandwidth-constrained environments.", "arxiv_id": "2510.21366v1", "arxiv_authors": ["Xi Zhang", "Hanwei Zhu", "Yan Zhong", "Jiamang Wang", "Weisi Lin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a10d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061342, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a57c"}, "filepath": "data/2505.22038v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995233888398319, "type": "Poster", "name": "Balanced Token Pruning: Accelerating Vision Language Models Beyond Local Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115558", "abstract": "Large Vision-Language Models (LVLMs) have shown impressive performance across multi-modal tasks by encoding images into thousands of tokens. However, the large number of image tokens results in significant computational overhead, and the use of dynamic high-resolution inputs further increases this burden. 
Previous approaches have attempted to reduce the number of image tokens through token pruning, typically by selecting tokens based on attention scores or image token diversity. Through empirical studies, we observe that existing methods often overlook the joint impact of pruning on both the current layer\u2019s output (local) and the outputs of subsequent layers (global), leading to suboptimal pruning decisions. To address this challenge, we propose Balanced Token Pruning (BTP), a plug-and-play method for pruning vision tokens. Specifically, our method utilizes a small calibration set to divide the pruning process into multiple stages. In the early stages, token pruning emphasizes their impact on downstream layers, whereas in the deeper stages, the focus shifts to their influence on outputs within the current layer. Extensive experiments across various LVLMs demonstrate the broad effectiveness of our approach on multiple benchmarks. Our source code is publicly available at https://anonymous.4open.science/r/BTP-EE00TY89U/.", "arxiv_id": "2505.22038v2", "arxiv_authors": ["Kaiyuan Li", "Xiaoyue Chen", "Chen Gao", "Yong Li", "Xinlei Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a10e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1064077, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a57d"}, "filepath": "data/2506.06072v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994189883137246, "type": "Poster", "name": "BEAST: Efficient Tokenization of B-Splines Encoded Action Sequences for Imitation Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115779", "abstract": "We present the B-spline Encoded Action Sequence Tokenizer(BEAST), a novel action tokenizer that encodes action sequences into compact discrete or continuous tokens using B-splines. In contrast to existing action tokenizers based on vector quantization or byte pair encoding, BEAST requires no separate tokenizer training and consistently produces tokens of uniform length, enabling fast action sequence generation via parallel decoding. Leveraging our B-spline formulation, BEAST inherently ensures generating smooth trajectories without discontinuities between adjacent segments. We extensively evaluate BEAST by integrating it with three distinct model architectures: a Variational Autoencoder (VAE) with continuous tokens, a decoder-only Transformer with discrete tokens, and Florence-2, a pretrained Vision-Language Model with an encoder-decoder architecture, demonstrating BEAST's compatibility and scalability with large pretrained models. We evaluate BEAST across three established benchmarks consisting of 166 simulated tasks and on three distinct robot settings with a total of 8 real-world tasks. 
Experimental results demonstrate that BEAST (i) significantly reduces both training and inference computational costs, and (ii) consistently generates smooth, high-frequency control signals suitable for continuous control tasks while (iii) reliably achieves competitive task success rates compared to state-of-the-art methods.", "arxiv_id": "2506.06072v2", "arxiv_authors": ["Hongyi Zhou", "Weiran Liao", "Xi Huang", "Yucheng Tang", "Fabian Otto", "Xiaogang Jia", "Xinkai Jiang", "Simon Hilber", "Ge Li", "Qian Wang", "\u00d6mer Erdin\u00e7 Ya\u011fmurlu", "Nils Blank", "Moritz Reuss", "Rudolf Lioutikov"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a10f"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1055179, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a57e"}, "filepath": "data/2506.06271v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998349984000182, "type": "Poster", "name": "BecomingLit: Relightable Gaussian Avatars with Hybrid Neural Shading", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116917", "abstract": "We introduce *BecomingLit*, a novel method for reconstructing relightable, high-resolution head avatars that can be rendered from novel viewpoints at interactive rates. Therefore, we propose a new low-cost light stage capture setup, tailored specifically towards capturing faces. Using this setup, we collect a novel dataset consisting of diverse multi-view sequences of numerous subjects under varying illumination conditions and facial expressions. By leveraging our new dataset, we introduce a new relightable avatar representation based on 3D Gaussian primitives that we animate with a parametric head model and an expression-dependent dynamics module. We propose a new hybrid neural shading approach, combining a neural diffuse BRDF with an analytical specular term. Our method reconstructs disentangled materials from our dynamic light stage recordings and enables all-frequency relighting of our avatars with both point lights and environment maps. In addition, our avatars can easily be animated and controlled from monocular videos. 
We validate our approach in extensive experiments on our dataset, where we consistently outperform existing state-of-the-art methods in relighting and reenactment by a significant margin.", "arxiv_id": "2506.06271v1", "arxiv_authors": ["Jonathan Schmidt", "Simon Giebenhain", "Matthias Niessner"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a110"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2495152, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a57f"}, "filepath": "data/2506.06487v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998590305939571, "type": "Poster", "name": "BeliefMapNav: 3D Voxel-Based Belief Map for Zero-Shot Object Navigation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119733", "abstract": "Zero-shot object navigation (ZSON) allows robots to find target objects in unfamiliar environments using natural language instructions, without relying on pre-built maps or task-specific training. Recent general-purpose models, such as large language models (LLMs) and vision-language models (VLMs), equip agents with semantic reasoning abilities to estimate target object locations in a zero-shot manner. However, these models often greedily select the next goal without maintaining a global understanding of the environment and are fundamentally limited in the spatial reasoning necessary for effective navigation. To overcome these limitations, we propose a novel 3D voxel-based belief map that estimates the target\u2019s prior presence distribution within a voxelized 3D space. This approach enables agents to integrate semantic priors from LLMs and visual embeddings with hierarchical spatial structure, alongside real-time observations, to build a comprehensive 3D global posterior belief of the target\u2019s location. Building on this 3D voxel map, we introduce BeliefMapNav, an efficient navigation system with two key advantages: i) grounding LLM semantic reasoning within the 3D hierarchical semantics voxel space for precise target position estimation, and ii) integrating sequential path planning to enable efficient global navigation decisions. Experiments on HM3D, MP3D, and HSSD benchmarks show that BeliefMapNav achieves state-of-the-art (SOTA) Success Rate (SR) and Success weighted by Path Length (SPL), with a notable **46.4\\%** SPL improvement over the previous best SR method, validating its effectiveness and efficiency. 
We will release the code of BeliefMapNav.", "arxiv_id": "2506.06487v1", "arxiv_authors": ["Zibo Zhou", "Yue Hu", "Lingkai Zhang", "Zonglin Li", "Siheng Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a111"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1062211, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a580"}, "filepath": "data/2510.22443v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997848272171699, "type": "Poster", "name": "Benchmarking Egocentric Multimodal Goal Inference for Assistive Wearable Agents", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121655", "abstract": "There has recently been a surge of interest in Wearable Assistant Agents: agents embodied in a wearable form factor such as smart glasses, who can take actions toward a user\u2019s stated goal \u2014 a high-level language-expressed command such as \u201cwhere did I leave my keys?\u201d, \u201cText Alice I will be late\u201d, or \u201cWhat\u2019s the weather in Cancun?\u201d. In this work, we consider the complementary problem of eliminating the effort required to interact with such an agent by proactively inferring the user\u2019s goal from multimodal contextual observations. As vision-language models (VLMs) hold strong potential to ultimately solve this problem, our work focuses on creating a strong benchmark to measure progress toward this end. Given the limited prior work in this area, establishing the benchmark required collecting a novel multimodal goal-inference dataset; our dataset comprises ~30 hours of data from 363 participants across 3,482 recordings, featuring ground-truth reference goals alongside accompanying visual, audio, digital, and longitudinal contextual observations. We ran a human predictability study, where we found that humans set a strong baseline that comprises a de facto upper bound on model performance: they show multiple choice question (MCQ) accuracy of 93%, with the best VLM achieving about 84% accuracy. However, MCQ assesses discrimination, not the model\u2019s ultimate task of generating the goal through open-ended text generation. Through a meta-evaluation, we find that a VLM judging the generated goals is as good as a human judge if it has access to a human-authored script of the video or a correct reference goal. Finally, we evaluate several families of modern vision-language models on the benchmark, showing that larger models have a significant performance advantage, but are still far from being practically useful, as they produce relevant goals only ~57% of the time. The best-performing smaller models\u2014whose size makes them better suited to wearable applications\u2014perform significantly worse than their counterparts, generating ~49% accuracy on the benchmark. 
Through a modality ablation, we show that models benefit from extra information in relevant modalities with minimal performance degradation from irrelevant modalities, but don\u2019t gain as much when noisy modalities are included (e.g., in the case of digital context when most of the app state is irrelevant).", "arxiv_id": "2510.22443v1", "arxiv_authors": ["Vijay Veerabadran", "Fanyi Xiao", "Nitin Kamra", "Pedro Matias", "Joy Chen", "Caley Drooff", "Brett D Roads", "Riley Williams", "Ethan Henderson", "Xuanyi Zhao", "Kevin Carlberg", "Joseph Tighe", "Karl Ridgeway"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a112"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1287886, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a581"}, "filepath": "data/2510.20639v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996913911760243, "type": "Poster", "name": "Better Tokens for Better 3D: Advancing Vision-Language Modeling in 3D Medical Imaging", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116459", "abstract": "Recent progress in vision-language modeling for 3D medical imaging has been fueled by large-scale computed tomography (CT) corpora with paired free-text reports, stronger architectures, and powerful pretrained models. This has enabled applications such as automated report generation and text-conditioned 3D image synthesis. Yet, current approaches struggle with high-resolution, long-sequence volumes: contrastive pretraining often yields vision encoders that are misaligned with clinical language, and slice-wise tokenization blurs fine anatomy, reducing diagnostic performance on downstream tasks. We introduce BTB3D (Better Tokens for Better 3D), a causal convolutional encoder-decoder that unifies 2D and 3D training and inference while producing compact, frequency-aware volumetric tokens. A three-stage training curriculum enables (i) local reconstruction, (ii) overlapping-window tiling, and (iii) long-context decoder refinement, during which the model learns from short slice excerpts yet generalizes to scans exceeding $300$ slices without additional memory overhead. BTB3D sets a new state-of-the-art on two key tasks: it improves BLEU scores and increases clinical F1 by 40\\% over CT2Rep, CT-CHAT, and Merlin for report generation; and it reduces FID by 75\\% and halves FVD compared to GenerateCT and MedSyn for text-to-CT synthesis, producing anatomically consistent $512\\times512\\times241$ volumes. 
These results confirm that precise three-dimensional tokenization, rather than larger language backbones alone, is essential for scalable vision-language modeling in 3D medical imaging.", "arxiv_id": "2510.20639v1", "arxiv_authors": ["Ibrahim Ethem Hamamci", "Sezgin Er", "Suprosanna Shit", "Hadrien Reynaud", "Dong Yang", "Pengfei Guo", "Marc Edgar", "Daguang Xu", "Bernhard Kainz", "Bjoern Menze"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a113"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1048843, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a582"}, "filepath": "data/2502.09080v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996128596642435, "type": "Poster", "name": "BevSplat: Resolving Height Ambiguity via Feature-Based Gaussian Primitives for Weakly-Supervised Cross-View Localization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118781", "abstract": "This paper addresses the problem of weakly supervised cross-view localization, where the goal is to estimate the pose of a ground camera relative to a satellite image with noisy ground truth annotations. A common approach to bridge the cross-view domain gap for pose estimation is Bird\u2019s-Eye View (BEV) synthesis. However, existing methods struggle with height ambiguity due to the lack of depth information in ground images and satellite height maps. Previous solutions either assume a flat ground plane or rely on complex models, such as cross-view transformers. We propose BevSplat, a novel method that resolves height ambiguity by using feature-based Gaussian primitives. Each pixel in the ground image is represented by a 3D Gaussian with semantic and spatial features, which are synthesized into a BEV feature map for relative pose estimation. We validate our method on the widely used KITTI and VIGOR datasets, which include both pinhole and panoramic query images. Experimental results show that BevSplat significantly improves localization accuracy over prior approaches.", "arxiv_id": "2502.09080v3", "arxiv_authors": ["Qiwei Wang", "Shaoxun Wu", "Yujiao Shi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a114"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1057961, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a583"}, "filepath": "data/2506.10967v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999279187074855, "type": "Poster", "name": "Beyond Attention or Similarity: Maximizing Conditional Diversity for Token Pruning in MLLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119383", "abstract": "In multimodal large language models (MLLMs), the length of input visual tokens is often significantly greater than that of their textual counterparts, leading to a high inference cost. Many works aim to address this issue by removing redundant visual tokens. 
However, current approaches either rely on attention-based pruning, which retains numerous duplicate tokens, or use similarity-based pruning, overlooking instruction relevance, consequently causing suboptimal performance. In this paper, we go beyond attention or similarity by proposing a novel visual token pruning method named **CDPruner**, which maximizes the conditional diversity of retained tokens. We first define the conditional similarity between visual tokens conditioned on the instruction, and then reformulate the token pruning problem with a determinantal point process (DPP) to maximize the conditional diversity of the selected subset. The proposed CDPruner is training-free and model-agnostic, allowing easy application to various MLLMs. Extensive experiments across diverse MLLMs show that CDPruner establishes a new state-of-the-art on various vision-language benchmarks. By maximizing conditional diversity through DPP, the selected subset better represents the input images while closely adhering to user instructions, thereby preserving strong performance even with high reduction ratios. When applied to LLaVA, CDPruner reduces FLOPs by **95\\%** and CUDA latency by **78\\%**, while maintaining **94\\%** of the original accuracy. Our code will be released.", "arxiv_id": "2506.10967v2", "arxiv_authors": ["Qizhe Zhang", "Mengzhen Liu", "Lichen Li", "Ming Lu", "Yuan Zhang", "Junwen Pan", "Qi She", "Shanghang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a115"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1087210, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a584"}, "filepath": "data/2505.14705v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996829838836427, "type": "Poster", "name": "Beyond Modality Collapse: Representation Blending for Multimodal Dataset Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117473", "abstract": "Multimodal Dataset Distillation (MDD) seeks to condense large-scale image-text datasets into compact surrogates while retaining their effectiveness for cross-modal learning. Despite recent progress, existing MDD approaches often suffer from ***Modality Collapse***, characterized by over-concentrated intra-modal representations and an enlarged distributional gap across modalities. In this paper, for the first time, we identify this issue as stemming from a fundamental conflict between the over-compression behavior inherent in dataset distillation and the cross-modal supervision imposed by contrastive objectives. To alleviate modality collapse, we introduce **RepBlend**, a novel MDD framework that weakens overdominant cross-modal supervision via representation blending, thereby significantly enhancing intra-modal diversity. Additionally, we observe that current MDD methods impose asymmetric supervision across modalities, resulting in biased optimization. 
To address this, we propose symmetric projection trajectory matching, which synchronizes the optimization dynamics using modality-specific projection heads, thereby promoting balanced supervision and enhancing cross-modal alignment. Experiments on Flickr-30K and MS-COCO show that RepBlend consistently outperforms prior state-of-the-art MDD methods, achieving significant gains in retrieval performance (e.g., +9.4 IR@10, +6.3 TR@10 under the 100-pair setting) and offering up to a 6.7$\\times$ distillation speedup.", "arxiv_id": "2505.14705v1", "arxiv_authors": ["Xin Zhang", "Ziruo Zhang", "Jiawei Du", "Zuozhu Liu", "Joey Tianyi Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a116"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1057949, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a585"}, "filepath": "data/2510.04838v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998175918849137, "type": "Poster", "name": "Beyond Random: Automatic Inner-loop Optimization in Dataset Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116998", "abstract": "The growing demand for efficient deep learning has positioned dataset distillation as a pivotal technique for compressing training datasets while preserving model performance. However, existing inner-loop optimization methods for dataset distillation typically rely on random truncation strategies, which lack flexibility and often yield suboptimal results. In this work, we observe that neural networks exhibit distinct learning dynamics across different training stages\u2014early, middle, and late\u2014making random truncation ineffective. To address this limitation, we propose Automatic Truncated Backpropagation Through Time (AT-BPTT), a novel framework that dynamically adapts both truncation positions and window sizes according to intrinsic gradient behavior. AT-BPTT introduces three key components: (1) a probabilistic mechanism for stage-aware timestep selection, (2) an adaptive window sizing strategy based on gradient variation, and (3) a low-rank Hessian approximation to reduce computational overhead. Extensive experiments on CIFAR-10, CIFAR-100, Tiny-ImageNet, and ImageNet-1K show that AT-BPTT achieves state-of-the-art performance, improving accuracy by an average of 6.16\\% over baseline methods. 
Moreover, our approach accelerates inner-loop optimization by 3.9\u00d7 while reducing memory cost by 63\\%.", "arxiv_id": "2510.04838v1", "arxiv_authors": ["Muquan Li", "Hang Gou", "Dongyang Zhang", "Shuang Liang", "Xiurui Xie", "Deqiang Ouyang", "Ke Qin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a117"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1059806, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a586"}, "filepath": "data/2412.06639v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993610937294833, "type": "Poster", "name": "Beyond Scalars: Concept-Based Alignment Analysis in Vision Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116322", "abstract": "Measuring the alignment between representations lets us understand similarities between the feature spaces of different models, such as Vision Transformers trained under diverse paradigms. However, traditional measures for representational alignment yield only scalar values that obscure how these spaces agree in terms of learned features. To address this, we combine alignment analysis with concept discovery, allowing a fine-grained breakdown of alignment into individual concepts. This approach reveals both universal concepts across models and each representation\u2019s internal concept structure. We introduce a new definition of concepts as non-linear manifolds, hypothesizing that they better capture the geometry of the feature space. A sanity check demonstrates the advantage of this manifold-based definition over linear baselines for concept-based alignment. Finally, our alignment analysis of four different ViTs shows that increased supervision tends to reduce semantic organization in learned representations.", "arxiv_id": "2412.06639v1", "arxiv_authors": ["Johanna Vielhaben", "Dilyara Bareeva", "Jim Berend", "Wojciech Samek", "Nils Strodthoff"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a118"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.468Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2118591, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a587"}, "filepath": "data/2510.04770v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994092075624305, "type": "Poster", "name": "Beyond the Seen: Bounded Distribution Estimation for Open-Vocabulary Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119391", "abstract": "Open-vocabulary learning requires modeling the data distribution in open environments, which consists of both seen-class and unseen-class data. Existing methods estimate the distribution in open environments using seen-class data, where the absence of unseen classes makes the estimation error inherently unidentifiable. Intuitively, learning beyond the seen classes is crucial for distribution estimation to bound the estimation error. We theoretically demonstrate that the distribution can be effectively estimated by generating unseen-class data, through which the estimation error is upper-bounded. 
Building on this theoretical insight, we propose a novel open-vocabulary learning method, which generates unseen-class data for estimating the distribution in open environments. The method consists of a class-domain-wise data generation pipeline and a distribution alignment algorithm. The data generation pipeline generates unseen-class data under the guidance of a hierarchical semantic tree and domain information inferred from the seen-class data, facilitating accurate distribution estimation. With the generated data, the distribution alignment algorithm estimates and maximizes the posterior probability to enhance generalization in open-vocabulary learning. Extensive experiments on 11 datasets demonstrate that our method outperforms baseline approaches by up to 14%, highlighting its effectiveness and superiority.", "arxiv_id": "2510.04770v1", "arxiv_authors": ["Xiaomeng Fan", "Yuchuan Mao", "Zhi Gao", "Yuwei Wu", "Jin Chen", "Yunde Jia"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a119"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 968828, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a588"}, "filepath": "data/2503.16424v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990707892458522, "type": "Poster", "name": "B\u00e9zier Splatting for Fast and Differentiable Vector Graphics Rendering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117178", "abstract": "Differentiable vector graphics (VGs) are widely used in image vectorization and vector synthesis, but existing representations are costly to optimize and struggle to achieve high-quality rendering results for high-resolution images. This work introduces a new differentiable VG representation, dubbed B\u00e9zier Splatting, that enables fast yet high-fidelity VG rasterization. B\u00e9zier Splatting samples 2D Gaussians along B\u00e9zier curves, which naturally provide positional gradients at object boundaries. Thanks to the efficient splatting-based differentiable rasterizer, B\u00e9zier Splatting is 30\u00d7 and 150\u00d7 faster per forward and backward rasterization step for open curves compared to DiffVG. Additionally, we introduce an adaptive pruning and densification strategy that dynamically adjusts the spatial distribution of curves to escape local minima, further improving VG quality. Furthermore, our new VG representation supports conversion to standard XML-based SVG format, enhancing interoperability with existing VG tools and pipelines. 
Experimental results show that B\u00e9zier Splatting significantly outperforms existing methods with better visual fidelity and significant optimization speedup.", "arxiv_id": "2503.16424v3", "arxiv_authors": ["Xi Liu", "Chaoyi Zhou", "Nanxuan Zhao", "Siyu Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a11a"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2829194, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a589"}, "filepath": "data/2506.09485v1.png", "tags": [], "_media_type": "image", "_rand": 0.999898052049585, "type": "Poster", "name": "Bidirectional Motion Transformer for Safety-Critical Traffic Scenario Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117225", "abstract": "Scenario-based testing is essential for validating the performance of autonomous driving (AD) systems. However, such testing is limited by the scarcity of long-tailed, safety-critical scenarios in existing datasets collected in the real world. To tackle the data issue, we propose the Adv-BMT framework, which augments real-world scenarios with diverse and realistic adversarial interactions. The core component of Adv-BMT is a bidirectional motion transformer (BMT) model to perform inverse traffic motion predictions, which takes the last frame of the scenario as input and reconstruct the traffic in the inverse of chronological order, till the initial time step. The Adv-BMT framework is a two-stage pipeline: it first conducts adversarial initializations and then inverse motion predictions. Different from previous work, we do not need any collision data for pretraining and are still able to generate realistic and diverse collision interactions. Our experimental results validate the quality of generated collision scenarios by Adv-BMT: training in our augmented dataset would reduce episode collision rates by 20\\% compared to previous work. The code will be made available.", "arxiv_id": "2506.09485v1", "arxiv_authors": ["Yuxin Liu", "Zhenghao Peng", "Xuanhao Cui", "Bolei Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a11b"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1005498, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a58a"}, "filepath": "data/2508.05954v1.png", "tags": [], "_media_type": "image", "_rand": 0.999915168770458, "type": "Poster", "name": "Bifrost-1: Bridging Multimodal LLMs and Diffusion Models with Patch-level CLIP Latents", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115089", "abstract": "There is growing interest in integrating high-fidelity visual synthesis capabilities into large language models (LLMs) without compromising their strong reasoning capabilities. Existing methods that directly train LLMs or bridge LLMs and diffusion models usually suffer from costly training since the backbone LLMs have not seen image representations during pretraining. 
We present Bifrost-1, a unified framework that bridges pretrained multimodal LLMs (MLLMs) and diffusion models using patch-level CLIP image embeddings as latent variables, which are natively aligned with the MLLM\u2019s CLIP visual encoder. These patch-level image embeddings are integrated into the diffusion model with a lightweight adaptation of its ControlNet. To retain the original multimodal reasoning capabilities of MLLMs, we equip the MLLM with a visual generation branch initialized from the original MLLM parameters when predicting the patch-level image embeddings. By seamlessly integrating pretrained MLLMs and diffusion models with patch-level CLIP latents, our framework enables high-fidelity controllable image generation with significant training efficiency. Our experiments demonstrate that Bifrost-1 achieves comparable or better performance than previous methods in terms of visual fidelity and multimodal understanding, with substantially lower compute during training. We also provide comprehensive ablation studies showing the effectiveness of our design choices. Code, technical details and additional experiment results are included in the supplementary materials.", "arxiv_id": "2508.05954v1", "arxiv_authors": ["Han Lin", "Jaemin Cho", "Amir Zadeh", "Chuan Li", "Mohit Bansal"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a11c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1110781, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a58b"}, "filepath": "data/2505.18132v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993145153208898, "type": "Poster", "name": "BiggerGait: Unlocking Gait Recognition with Layer-wise Representations from Large Vision Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118541", "abstract": "Large vision models (LVM) based gait recognition has achieved impressive performance.However, existing LVM-based approaches may overemphasize gait priors while neglecting the intrinsic value of LVM itself, particularly the rich, distinct representations across its multi-layers. 
To adequately unlock LVM's potential, this work investigates the impact of layer-wise representations on downstream recognition tasks. Our analysis reveals that LVM's intermediate layers offer complementary properties across tasks; integrating them yields an impressive improvement even without rich, well-designed gait priors. Building on this insight, we propose a simple and universal baseline for LVM-based gait recognition, termed BiggerGait. Comprehensive evaluations on CCPG, CASIA-B*, SUSTech1K, and CCGR_MINI validate the superiority of BiggerGait across both within- and cross-domain tasks, establishing it as a simple yet practical baseline for gait representation learning. All the models and code will be publicly available.", "arxiv_id": "2505.18132v3", "arxiv_authors": ["Dingqiang Ye", "Chao Fan", "Zhanbo Huang", "Chengwen Luo", "Jianqiang Li", "Shiqi Yu", "Xiaoming Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a11d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1069643, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a58c"}, "filepath": "data/2510.18650v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996717102032973, "type": "Poster", "name": "Binary Quadratic Quantization: Beyond First-Order Quantization for Real-Valued Matrix Compression", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119877", "abstract": "This paper proposes a novel matrix quantization method, Binary Quadratic Quantization (BQQ). In contrast to conventional first-order quantization approaches\u2014such as uniform quantization and binary coding quantization\u2014that approximate real-valued matrices via linear combinations of binary bases, BQQ leverages the expressive power of binary quadratic expressions while maintaining an extremely compact data format. We validate our approach with two experiments: a matrix compression benchmark and post-training quantization (PTQ) on pretrained Vision Transformer-based models. Experimental results demonstrate that BQQ consistently achieves a better trade-off between memory efficiency and reconstruction error than conventional methods for compressing diverse matrix data. 
It also delivers strong PTQ performance, even though we neither target state-of-the-art PTQ accuracy under tight memory constraints nor rely on PTQ-specific binary matrix optimization. For example, our proposed method outperforms the state-of-the-art PTQ method by up to 2.0\\% and 59.1\\% on the ImageNet dataset under the calibration-based and data-free scenarios, respectively, with quantization equivalent to 2 bits. These findings highlight the surprising effectiveness of binary quadratic expressions for efficient matrix approximation and neural network compression.", "arxiv_id": "2510.18650v1", "arxiv_authors": ["Kyo Kuroki", "Yasuyuki Okoshi", "Thiem Van Chu", "Kazushi Kawamura", "Masato Motomura"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a11e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 974961, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a58d"}, "filepath": "data/2505.23883v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995616977403204, "type": "Poster", "name": "BioCLIP-XL: Emergent Properties from Scaling Hierarchical Contrastive Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115146", "abstract": "Foundation models trained at scale exhibit remarkable emergent behaviors, learning new capabilities beyond their initial training objectives. We find such emergent behaviors in biological vision models via large-scale contrastive vision-language training. To achieve this, we first curate TreeOfLife-200M, comprising 214 million images of living organisms, the largest and most diverse biological organism image dataset to date. We then train BioCLIP-XL on TreeOfLife-200M to distinguish different species. Despite the narrow training objective, BioCLIP-XL yields extraordinary accuracy when applied to various biological visual tasks such as habitat classification and trait prediction. We identify emergent properties in the learned embedding space of BioCLIP-XL. At the inter-species level, the embedding distribution of different species aligns closely with functional and ecological meanings (e.g., beak sizes and habitats). At the intra-species level, instead of being diminished, the intra-species variations (e.g., life stages and sexes) are preserved and better separated in subspaces orthogonal to inter-species distinctions. We provide formal proof and analyses to explain why hierarchical supervision and contrastive objectives encourage these emergent properties. Crucially, our results reveal that these properties become increasingly significant with larger-scale training data, leading to a biologically meaningful embedding space.", "arxiv_id": "2505.23883v2", "arxiv_authors": ["Jianyang Gu", "Samuel Stevens", "Elizabeth G Campolongo", "Matthew J Thompson", "Net Zhang", "Jiaman Wu", "Andrei Kopanev", "Zheda Mai", "Alexander E. 
White", "James Balhoff", "Wasila Dahdul", "Daniel Rubenstein", "Hilmar Lapp", "Tanya Berger-Wolf", "Wei-Lun Chao", "Yu Su"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a11f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2290895, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a58e"}, "filepath": "data/2507.00469v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992358988678481, "type": "Poster", "name": "Bisecle: Binding and Separation in Continual Learning for Video Language Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116080", "abstract": "Frontier vision-language models (VLMs) have made remarkable improvements in video understanding tasks. However, real-world videos typically exist as continuously evolving data streams (e.g., dynamic scenes captured by wearable glasses), necessitating models to continually adapt to shifting data distributions and novel scenarios. Considering the prohibitive computational costs of fine-tuning models on new tasks, usually, a small subset of parameters is updated while the bulk of the model remains frozen. This poses new challenges to existing continual learning frameworks in the context of large multimodal foundation models, i.e., catastrophic forgetting and update conflict. While the foundation models struggle with parameter-efficient continual learning, the hippocampus in the human brain has evolved highly efficient mechanisms for memory formation and consolidation. Inspired by the rapid **Bi**nding and pattern **se**paration mechanisms in the hippocampus, in this work, we propose **Bisecle** for video-language **c**ontinual **le**arning, where a multi-directional supervision module is used to capture more cross-modal relationships and a contrastive prompt learning scheme is designed to isolate task-specific knowledge to facilitate efficient memory storage. Binding and separation processes further strengthen the ability of VLMs to retain complex experiences, enabling robust and efficient continual learning in video understanding tasks. We perform a thorough evaluation of the proposed Bisecle, demonstrating its ability to mitigate forgetting and enhance cross-task generalization on several VideoQA benchmarks.", "arxiv_id": "2507.00469v1", "arxiv_authors": ["Yue Tan", "Xiaoqian Hu", "Hao Xue", "Celso De Melo", "Flora D. Salim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a120"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 903153, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a58f"}, "filepath": "data/2506.21209v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998950184493933, "type": "Poster", "name": "BitMark for Infinity: Watermarking Bitwise Autoregressive Image Generative Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117685", "abstract": "State-of-the-art text-to-image models like Infinity generate photorealistic images at an unprecedented speed. 
These models operate in a bitwise autoregressive manner over a discrete set of tokens that is practically infinite in size. However, their impressive generative power comes with a growing risk: as their outputs increasingly populate the Internet, they are likely to be scraped and reused as training data, potentially by the very same models. This phenomenon has been shown to lead to model collapse, where repeated training on generated content, especially from the models' own previous versions, causes a gradual degradation in performance. A promising mitigation strategy is watermarking, which embeds human-imperceptible yet detectable signals into generated images, enabling the identification of generated content. In this work, we introduce BitMark, a robust bitwise watermarking framework for Infinity. Our method embeds a watermark directly at the bit level of the token stream across multiple scales (also referred to as resolutions) during Infinity's image generation process. Our bitwise watermark subtly influences the bits to preserve visual fidelity and generation speed while remaining robust against a spectrum of removal techniques. Furthermore, it exhibits high radioactivity, i.e., when watermarked generated images are used to train another image generative model, this second model's outputs will also carry the watermark. The radioactive traces remain detectable even when only fine-tuning diffusion or image autoregressive models on images watermarked with our BitMark. Overall, our approach provides a principled step toward preventing model collapse in image generative models by enabling reliable detection of generated outputs.", "arxiv_id": "2506.21209v1", "arxiv_authors": ["Louis Kerner", "Michel Meintz", "Bihe Zhao", "Franziska Boenisch", "Adam Dziedzic"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a121"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1100297, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a590"}, "filepath": "data/2510.09361v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994592612538499, "type": "Poster", "name": "BLINK-Twice: You see, but do you observe? A Reasoning Benchmark on Visual Perception", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121522", "abstract": "Recently, Multimodal Large Language Models (MLLMs) have made rapid progress, particularly in enhancing their reasoning capabilities. However, existing reasoning benchmarks still primarily assess language-based reasoning, often treating visual input as replaceable context. To address this gap, we introduce BLINK-Twice, a vision-centric reasoning benchmark grounded in challenging perceptual tasks. Instead of relying on external knowledge, our tasks require models to reason from visual content alone, shifting the focus from language-based to image-grounded reasoning. Compared to prior perception benchmarks, it moves beyond shallow perception (\"see\") and requires fine-grained observation and analytical reasoning (\"observe\"). 
BLINK-Twice integrates three core components: seven types of visual challenges for testing visual reasoning, natural adversarial image pairs that enforce reliance on visual content, and annotated reasoning chains for fine-grained evaluation of the reasoning process rather than final answers alone. We evaluate 20 leading MLLMs, including 12 foundation models and 8 reasoning-enhanced models. BLINK-Twice poses a significant challenge to current models. While existing reasoning strategies in the language space, such as chain-of-thought or self-criticism, can improve performance, they often result in unstable and redundant reasoning. We observe that repeated image observation improves performance across models, and active visual interaction, as demonstrated by models like o3, highlights the need for a new paradigm for vision reasoning. The dataset is publicly available at https://huggingface.co/datasets/PicoTrex/BLINK-Twice.", "arxiv_id": "2510.09361v1", "arxiv_authors": ["Junyan Ye", "Dongzhi Jiang", "Jun He", "Baichuan Zhou", "Zilong Huang", "Zhiyuan Yan", "Hongsheng Li", "Conghui He", "Weijia Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a122"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1086751, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a591"}, "filepath": "data/2510.21167v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997762189799114, "type": "Poster", "name": "Blockwise Flow Matching: Improving Flow Matching Models For Efficient High-Quality Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118395", "abstract": "Recently, Flow Matching models have pushed the boundaries of high-fidelity data generation across a wide range of domains. They typically employ a single large network to learn the entire generative trajectory from noise to data. Despite their effectiveness, this design struggles to capture distinct signal characteristics across timesteps simultaneously and incurs substantial inference costs due to the iterative evaluation of the entire model. To address these limitations, we propose Blockwise Flow Matching (BFM), a novel framework that partitions the generative trajectory into multiple temporal segments, each modeled by smaller but specialized velocity blocks. This blockwise design enables each block to specialize effectively in its designated interval, improving inference efficiency and sample quality. To further enhance generation fidelity, we introduce a Semantic Feature Guidance module that explicitly conditions velocity blocks on semantically rich features aligned with pretrained representations. Additionally, we propose a lightweight Feature Residual Approximation strategy that preserves semantic quality while significantly reducing inference cost. Extensive experiments on ImageNet 256$\\times$256 demonstrate that BFM establishes a substantially improved Pareto frontier over existing Flow Matching methods, achieving 2.1$\\times$ to 4.9$\\times$ accelerations in inference complexity at comparable generation performance.", "arxiv_id": "2510.21167v1", "arxiv_authors": ["Dogyun Park", "Taehoon Lee", "Minseok Joo", "Hyunwoo J. 
Kim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a123"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1011186, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a592"}, "filepath": "data/2501.01015v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995997229410292, "type": "Poster", "name": "Boosting Adversarial Transferability with Spatial Adversarial Alignment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115672", "abstract": "Deep neural networks are vulnerable to adversarial examples that exhibit transferability across various models. Numerous approaches are proposed to enhance the transferability of adversarial examples, including advanced optimization, data augmentation, and model modifications. However, these methods still show limited transferability, particularly in cross-architecture scenarios, such as from CNN to ViT. To achieve high transferability, we propose a technique termed Spatial Adversarial Alignment (SAA), which employs an alignment loss and leverages a witness model to fine-tune the surrogate model. Specifically, SAA consists of two key parts: spatial-aware alignment and adversarial-aware alignment. First, we minimize the divergences of features between the two models in both global and local regions, facilitating spatial alignment. Second, we introduce a self-adversarial strategy that leverages adversarial examples to impose further constraints, aligning features from an adversarial perspective. Through this alignment, the surrogate model is trained to concentrate on the common features extracted by the witness model. This facilitates adversarial attacks on these shared features, thereby yielding perturbations that exhibit enhanced transferability. Extensive experiments on various architectures on ImageNet show that aligned surrogate models based on SAA can provide more transferable adversarial examples, especially in cross-architecture attacks. Our source code is available in the Supplementary Materials.", "arxiv_id": "2501.01015v1", "arxiv_authors": ["Zhaoyu Chen", "Haijing Guo", "Kaixun Jiang", "Jiyuan Fu", "Xinyu Zhou", "Dingkang Yang", "Hao Tang", "Bo Li", "Wenqiang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a124"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.469Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1624974, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a593"}, "filepath": "data/2504.16064v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994610729108987, "type": "Poster", "name": "Boosting Generative Image Modeling via Joint Image-Feature Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116596", "abstract": "Latent diffusion models (LDMs) dominate high-quality image generation, yet integrating representation learning with generative modeling remains a challenge.
We introduce a novel generative image modeling framework that seamlessly bridges this gap by leveraging a diffusion model to jointly model low-level image latents (from a variational autoencoder) and high-level semantic features (from a pretrained self-supervised encoder like DINO). Our latent-semantic diffusion approach learns to generate coherent image-feature pairs from pure noise, significantly enhancing both generative quality and training efficiency, all while requiring only minimal modifications to standard Diffusion Transformer architectures. By eliminating the need for complex distillation objectives, our unified design simplifies training and unlocks a powerful new inference strategy: Representation Guidance, which leverages learned semantics to steer and refine image generation. Evaluated in both conditional and unconditional settings, our method delivers substantial improvements in image quality and training convergence speed, establishing a new direction for representation-aware generative modeling.", "arxiv_id": "2504.16064v2", "arxiv_authors": ["Theodoros Kouzelis", "Efstathios Karypidis", "Ioannis Kakogeorgiou", "Spyros Gidaris", "Nikos Komodakis"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a125"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2105213, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a594"}, "filepath": "data/2412.03565v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991592592114571, "type": "Poster", "name": "Boosting Multimodal Instance Understanding via Explicit Visual Prompt Instruction Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119587", "abstract": "Large Multimodal Models (LMMs) have made significant breakthroughs with the advancement of instruction tuning. However, while existing models can understand images and videos at a holistic level, they still struggle with instance-level understanding that requires a more fine-grained comprehension and alignment. Instance-level understanding is crucial for LMMs, as it focuses on the specific elements that we are most interested in. Excitingly, existing works find that the state-of-the-art LMMs exhibit strong instance understanding capabilities when provided with explicit visual cues. Motivated by this, we proposed Inst-IT, a solution to enhance LMMs in Instance understanding via explicit visual prompt Instruction Tuning for instance guidance. Inst-IT consists of a benchmark to diagnose multimodal instance-level understanding, a large-scale instruction-tuning dataset, and a continuous instruction-tuning training paradigm to effectively enhance spatial-temporal instance understanding capabilities of existing LMMs. Experimental results show that, enhanced by Inst-IT, our models not only achieve outstanding performance on Inst-IT-Bench and other instance understanding benchmarks, but also demonstrate significant improvements across various generic image and video understanding benchmarks. 
This highlights that our method not only boosts instance-level understanding but also strengthens the overall capabilities of generic image and video comprehension.", "arxiv_id": "2412.03565v1", "arxiv_authors": ["Wujian Peng", "Lingchen Meng", "Yitong Chen", "Yiweng Xie", "Yang Liu", "Tao Gui", "Hang Xu", "Xipeng Qiu", "Zuxuan Wu", "Yu-Gang Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a126"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1939116, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a595"}, "filepath": "data/2506.00874v1.png", "tags": [], "_media_type": "image", "_rand": 0.999049545774816, "type": "Poster", "name": "Breaking Latent Prior Bias in Detectors for Generalizable AIGC Image Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115689", "abstract": "Current AIGC detectors often achieve near-perfect accuracy on images produced by the same generator used for training but struggle to generalize to outputs from unseen generators. We trace this failure in part to latent prior bias: detectors learn shortcuts tied to patterns stemming from the initial noise vector rather than learning robust generative artifacts. To address this, we propose \\textbf{On-Manifold Adversarial Training (OMAT)}: by optimizing the initial latent noise of diffusion models under fixed conditioning, we generate \\emph{on-manifold} adversarial examples that remain on the generator\u2019s output manifold\u2014unlike pixel-space attacks, which introduce off-manifold perturbations that the generator itself cannot reproduce and that can obscure the true discriminative artifacts. To test against state-of-the-art generative models, we introduce GenImage++, a test-only benchmark of outputs from advanced generators (Flux.1, SD3) with extended prompts and diverse styles. We apply our adversarial-training paradigm to ResNet50 and CLIP baselines and evaluate across existing AIGC forensic benchmarks and recent challenge datasets. Extensive experiments show that adversarially trained detectors significantly improve cross-generator performance without any network redesign. 
Our findings on latent-prior bias offer valuable insights for future dataset construction and detector evaluation, guiding the development of more robust and generalizable AIGC forensic methodologies.", "arxiv_id": "2506.00874v1", "arxiv_authors": ["Yue Zhou", "Xinan He", "KaiQing Lin", "Bin Fan", "Feng Ding", "Bin Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a127"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1038912, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a596"}, "filepath": "data/2505.11293v2.png", "tags": [], "_media_type": "image", "_rand": 0.999808084047619, "type": "Poster", "name": "Breaking the Batch Barrier (B3) of Contrastive Learning via Smart Batch Mining", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118493", "abstract": "Contrastive learning (CL) is a prevalent technique for training embedding models, which pulls semantically similar examples (positives) closer in the representation space while pushing dissimilar ones (negatives) further apart. A key source of negatives is 'in-batch' examples, i.e., positives from other examples in the batch. The effectiveness of such models is hence strongly influenced by the size and quality of training batches. In this work, we propose 'Breaking the Batch Barrier' (B3), a novel batch construction strategy designed to curate high-quality batches for CL. Our approach begins by using a pretrained teacher embedding model to rank all examples in the dataset, from which a sparse similarity graph is constructed. A community detection algorithm is then applied to this graph to identify clusters of examples that serve as strong negatives for one another. The clusters are then used to construct batches that are rich in in-batch negatives. Empirical results on the MMEB multimodal embedding benchmark (36 tasks) demonstrate that our method sets a new state of the art, outperforming previous best methods by +1.3 and +2.9 points at the 7B and 2B model scales, respectively. Notably, models trained with B3 surpass existing state-of-the-art results even with a batch size as small as 64, which is 4\u201316\u00d7 smaller than that required by other methods.", "arxiv_id": "2505.11293v2", "arxiv_authors": ["Raghuveer Thirukovalluru", "Rui Meng", "Ye Liu", "Karthikeyan K", "Mingyi Su", "Ping Nie", "Semih Yavuz", "Yingbo Zhou", "Wenhu Chen", "Bhuwan Dhingra"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a128"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 988833, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a597"}, "filepath": "data/2509.17955v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994820785190963, "type": "Poster", "name": "Breaking the Discretization Barrier of Continuous Physics Simulation Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115234", "abstract": "The modeling of complicated time-evolving physical dynamics from partial observations is a long-standing challenge.
Particularly, observations can be sparsely distributed in a seemingly random or unstructured manner, making it difficult to capture highly nonlinear features in a variety of scientific and engineering problems. However, existing data-driven approaches are often constrained by fixed spatial and temporal discretization. While some researchers attempt to achieve spatio-temporal continuity by designing novel strategies, they either overly rely on traditional numerical methods or fail to truly overcome the limitations imposed by discretization. To address these limitations, we propose CoPS, a purely data-driven method, to effectively model continuous physics simulation from partial observations. Specifically, we employ a multiplicative filter network to fuse and encode spatial information with the corresponding observations. Then we customize geometric grids and use a message-passing mechanism to map features from the original spatial domain to the customized grids. Subsequently, CoPS models continuous-time dynamics by designing multi-scale graph ODEs, while introducing a Markov-based neural auto-correction module to assist and constrain the continuous extrapolations. Comprehensive experiments demonstrate that CoPS advances the state-of-the-art methods in space-time continuous modeling across various scenarios. Our codes are available at~\\url{https://anonymous.4open.science/r/CoPS-F625}.", "arxiv_id": "2509.17955v2", "arxiv_authors": ["Fan Xu", "Hao Wu", "Nan Wang", "Lilan Peng", "Kun Wang", "Wei Gong", "Xibin Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a129"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1072150, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a598"}, "filepath": "data/2506.07961v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999220100845531, "type": "Poster", "name": "BridgeVLA: Input-Output Alignment for Efficient 3D Manipulation Learning with Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116823", "abstract": "Recently, leveraging pre-trained vision-language models (VLMs) for building vision-language-action (VLA) models has emerged as a promising approach to effective robot manipulation learning. However, only a few methods incorporate 3D signals into VLMs for action prediction, and they do not fully leverage the spatial structure inherent in 3D data, leading to low sample efficiency. In this paper, we introduce BridgeVLA, a novel 3D VLA model that (1) projects 3D inputs to multiple 2D images, ensuring input alignment with the VLM backbone, and (2) utilizes 2D heatmaps for action prediction, unifying the input and output spaces within a consistent 2D image space. In addition, we propose a scalable pre-training method that equips the VLM backbone with the capability to predict 2D heatmaps before downstream policy learning. Extensive experiments show the proposed method is able to learn 3D manipulation efficiently and effectively. BridgeVLA surpasses the state-of-the-art baseline method in RLBench, achieving a significantly higher success rate (88.2% vs 81.4%), and in COLOSSEUM, demonstrating a substantially lower success rate drop (3.6% vs 15.6%). 
In real-robot experiments, BridgeVLA outperforms the state-of-the-art baseline method by 32% on average, and is able to generalize robustly in multiple out-of-distribution settings, including visual disturbance and unseen instructions. Remarkably, it is able to achieve a success rate of 96.8% on 10+ tasks with only 3 trajectories per task, highlighting its extraordinary sample efficiency. Videos can be found in https://anonymous1219-create.github.io/BridgeVLA_Web/.", "arxiv_id": "2506.07961v2", "arxiv_authors": ["Peiyan Li", "Yixiang Chen", "Hongtao Wu", "Xiao Ma", "Xiangnan Wu", "Yan Huang", "Liang Wang", "Tao Kong", "Tieniu Tan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a12a"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1234332, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a599"}, "filepath": "data/2510.21356v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995904153952805, "type": "Poster", "name": "Bridging Gaze and VLMs through Attention Regularization for Egocentric Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120280", "abstract": "Eye gaze offers valuable cues about attention, short-term intent, and future actions, making it a powerful signal for modeling egocentric behavior. In this work, we propose a gaze-regularized framework that enhances VLMs for two key egocentric understanding tasks: fine-grained future event prediction and current activity understanding.Unlike prior approaches that rely solely on visual inputs or use gaze as an auxiliary input signal , our method uses gaze only during training. We introduce a gaze-regularized attention mechanism that aligns model focus with human visual gaze. This design is flexible and modular, allowing it to generalize across multiple VLM architectures that utilize attention.Experimental results show that our approach improves semantic prediction scores by up to 11 $\\%$ for future event prediction and around 7 $\\%$ for current activity understanding, compared to the corresponding baseline models trained without gaze regularization. These results highlight the value of gaze-guided training in improving the accuracy and robustness of egocentric VLMs. 
Overall, this work establishes a foundation for using human gaze to enhance the predictive capabilities of VLMs in real-world scenarios like assistive robots and human-machine collaboration.", "arxiv_id": "2510.21356v1", "arxiv_authors": ["Anupam Pani", "Yanchao Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a12b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1101471, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a59a"}, "filepath": "data/2505.15438v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991508393668639, "type": "Poster", "name": "Bridging Sign and Spoken Languages: Pseudo Gloss Generation for Sign Language Translation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115986", "abstract": "Sign Language Translation (SLT) aims to map sign language videos to spoken language text. A common approach relies on gloss annotations as an intermediate representation, decomposing SLT into two sub-tasks: video-to-gloss recognition and gloss-to-text translation. While effective, this paradigm depends on expert-annotated gloss labels, which are costly and rarely available in existing datasets, limiting its scalability. To address this challenge, we propose a gloss-free pseudo gloss generation framework that eliminates the need for human-annotated glosses while preserving the structured intermediate representation.Specifically, we prompt a Large Language Model (LLM) with a few example text-gloss pairs using in-context learning to produce draft sign glosses from spoken language text. 
To enhance the correspondence between LLM-generated pseudo glosses and the sign sequences in video, we correct the ordering in the pseudo glosses for better alignment via a weakly supervised learning process. This reordering facilitates the incorporation of auxiliary alignment objectives, and allows for the use of efficient supervision via a Connectionist Temporal Classification (CTC) loss. We train our SLT model\u2014consisting of a vision encoder and a translator\u2014through a three-stage pipeline, which progressively narrows the modality gap between sign language and spoken language. Despite its simplicity, our approach outperforms previous state-of-the-art gloss-free frameworks on two SLT benchmarks and achieves competitive results compared to gloss-based methods.", "arxiv_id": "2505.15438v1", "arxiv_authors": ["Jianyuan Guo", "Peike Li", "Trevor Cohn"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a12c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1028895, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a59b"}, "filepath": "data/2510.21412v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994828798063251, "type": "Poster", "name": "Bridging the gap to real-world language-grounded visual concept learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118837", "abstract": "Human intelligence effortlessly interprets visual scenes along a rich spectrum of semantic dimensions. However, existing approaches for language-grounded visual concept learning are limited to a few predefined primitive axes such as color and shape, and are explored in synthetic datasets. In this work, we propose a scalable framework that adaptively identifies image-related concept axes and grounds visual concepts along these axes in real-world scenes. Leveraging a pretrained vision-language model with our simple universal prompting strategy, our framework identifies diverse image-related axes without requiring any prior knowledge. Our universal concept encoder then adaptively binds visual features to the discovered axes without introducing additional model parameters per concept. 
To ground visual concepts along discovered axes, we maximize the compositional consistency of concept representations, which ensures each axis to be independently manipulated without affecting other axes.We demonstrate the effectiveness of our framework on CelebA-HQ and AFHQ datasets, achieving superior editing capabilities across diverse concepts and strong compositional generalization compared to existing visual concept learning method and text-based editing methods.", "arxiv_id": "2510.21412v1", "arxiv_authors": ["Whie Jung", "Semin Kim", "Junee Kim", "Seunghoon Hong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a12d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1117991, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a59c"}, "filepath": "data/2506.04970v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994009027479949, "type": "Poster", "name": "Bringing SAM to new heights: leveraging elevation data for tree crown segmentation from drone imagery", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120185", "abstract": "Information on trees at the individual level is crucial for monitoring forest ecosystems and planning forest management. Current monitoring methods involve ground measurements, requiring extensive cost, time and labour. Advances in drone remote sensing and computer vision offer great potential for mapping individual trees from aerial imagery at broad-scale. Large pre-trained vision models, such as the Segment Anything Model (SAM), represent a particularly compelling choice given limited labeled data. In this work, we compare methods leveraging SAM for the task of automatic tree crown instance segmentation in high resolution drone imagery in three use cases: 1) boreal plantations, 2) temperate forests, and 3) tropical forests. We also look into integrating elevation data into models, in the form of Digital Surface Model (DSM) information, which can readily be obtained at no additional cost from RGB drone imagery. We present BalSAM, a model leveraging SAM and DSM information, which shows potential over other methods, particularly in the context of plantations. We find that methods using SAM out-of-the-box do not outperform a custom Mask R-CNN, even with well-designed prompts. 
However, efficiently tuning SAM further and integrating DSM information are both promising avenues for tree crown instance segmentation models.", "arxiv_id": "2506.04970v1", "arxiv_authors": ["M\u00e9lisande Teng", "Arthur Ouaknine", "Etienne Lalibert\u00e9", "Yoshua Bengio", "David Rolnick", "Hugo Larochelle"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a12e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1007340, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a59d"}, "filepath": "data/2509.15566v4.png", "tags": [], "_media_type": "image", "_rand": 0.9990620242945014, "type": "Poster", "name": "BTL-UI: Blink-Think-Link Reasoning Model for GUI Agent", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119419", "abstract": "In the field of AI-driven human-GUI interaction automation, while rapid advances in multimodal large language models and reinforcement fine-tuning techniques have yielded remarkable progress, a fundamental challenge persists: their interaction logic significantly deviates from natural human-GUI communication patterns. To fill this gap, we propose ``Blink-Think-Link'' (BTL), a brain-inspired framework for human-GUI interaction that mimics the human cognitive process between users and graphical interfaces. The system decomposes interactions into three biologically plausible phases: (1) \\textbf{Blink} - rapid detection and attention to relevant screen areas, analogous to saccadic eye movements; (2) \\textbf{Think} - higher-level reasoning and decision-making, mirroring cognitive planning; and (3) \\textbf{Link} - generation of executable commands for precise motor control, emulating human action selection mechanisms. Additionally, we introduce two key technical innovations for the BTL framework: (1) Blink Data Generation - an automated annotation pipeline specifically optimized for blink data, and (2) BTL Reward \u2013 the first rule-based reward mechanism that enables reinforcement learning driven by both process and outcome. Building upon this framework, we develop a GUI agent model named BTL-UI, which demonstrates consistent state-of-the-art performance across both static GUI understanding and dynamic interaction tasks in comprehensive benchmarks. These results provide conclusive empirical validation of the framework's efficacy in developing advanced GUI Agents. 
We will soon release the relevant data and models.", "arxiv_id": "2509.15566v4", "arxiv_authors": ["Shaojie Zhang", "Ruoceng Zhang", "Pei Fu", "Shaokang Wang", "Jiahui Yang", "Xin Du", "Shiqi Cui", "Bin Qin", "Ying Huang", "Zhenbo Luo", "Jian Luan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a12f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1058707, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a59e"}, "filepath": "data/2510.09996v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992077892241149, "type": "Poster", "name": "BurstDeflicker: A Benchmark Dataset for Flicker Removal in Dynamic Scenes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121576", "abstract": "Flicker artifacts in short-exposure images are caused by the interplay between the row-wise exposure mechanism of rolling shutter cameras and the temporal intensity variations of alternating current (AC)-powered lighting. These artifacts typically appear as uneven brightness distribution across the image, forming noticeable dark bands. Beyond compromising image quality, this structured noise also affects high-level tasks, such as object detection and tracking, where reliable lighting is crucial. Despite the prevalence of flicker, the lack of a large-scale, realistic dataset has been a significant barrier to advancing research in flicker removal. To address this issue, we present BurstDeflicker, a robust and scalable benchmark constructed using three complementary data acquisition strategies. First, we develop a Retinex-based synthesis pipeline that redefines the goal of flicker removal and enables controllable manipulation of key flicker-related attributes (e.g., intensity, area, and frequency), thereby facilitating the generation of diverse flicker patterns. Second, we capture 4,000 real-world flicker images from different scenes, which help the model better understand the spatial and temporal characteristics of real flicker artifacts and generalize more effectively to wild scenarios. Finally, due to the non-repeatable nature of dynamic scenes, we propose a green-screen method to incorporate motion into image pairs while preserving real flicker degradation. Comprehensive experiments demonstrate the effectiveness of our dataset and its potential to advance research in flicker removal. 
The code and dataset are available in the supplementary materials.", "arxiv_id": "2510.09996v1", "arxiv_authors": ["Lishen Qu", "Zhihao Liu", "Shihao Zhou", "Yaqi Luo", "Jie Liang", "Hui Zeng", "Lei Zhang", "Jufeng Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a130"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1066728, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a59f"}, "filepath": "data/2505.19713v3.png", "tags": [], "_media_type": "image", "_rand": 0.999443399867792, "type": "Poster", "name": "CAD-Coder: Text-to-CAD Generation with Chain-of-Thought and Geometric Reward", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118098", "abstract": "In this work, we introduce CAD-Coder, a novel framework that reformulates text-to-CAD as the generation of CadQuery scripts\u2014a Python-based, parametric CAD language.This representation enables direct geometric validation, a richer modeling vocabulary, and seamless integration with existing LLMs. To further enhance code validity and geometric fidelity, we propose a two-stage learning pipeline: (1) supervised fine-tuning on paired text\u2013CadQuery data, and (2) reinforcement learning with Group Reward Policy Optimization (GRPO), guided by a CAD-specific reward comprising both a geometric reward (Chamfer Distance) and a format reward.We also introduce a chain-of-thought (CoT) planning process to improve model reasoning, and construct a large-scale, high-quality dataset of 110K text\u2013CadQuery\u20133D model triplets and 1.5K CoT samples via an automated pipeline. Extensive experiments demonstrate that CAD-Coder enables LLMs to generate diverse, valid, and complex CAD models directly from natural language, advancing the state of the art of text-to-CAD generation and geometric reasoning.", "arxiv_id": "2505.19713v3", "arxiv_authors": ["Yandong Guan", "Xilin Wang", "Ximing Xing", "Jing Zhang", "Dong Xu", "Qian Yu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a131"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.470Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 994836, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a0"}, "filepath": "data/2509.15459v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993991556473513, "type": "Poster", "name": "CAGE: Continuity-Aware edGE Network Unlocks Robust Floorplan Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117912", "abstract": "We present CAGE (Continuity-Aware edGE) network, an end-to-end framework for reconstructing vector floorplans directly from point-cloud density maps. Traditional corner-based polygon representations are highly sensitive to noise and incomplete observations, often resulting in fragmented or implausible layouts. Recent line grouping methods leverage structural cues to improve robustness but still struggle to recover fine geometric details. 
To address these limitations, we propose a native edge-centric formulation, modeling each wall segment as a directed, geometrically continuous edge. This representation enables inference of coherent floorplan structures, ensuring watertight, topologically valid room boundaries while improving robustness and reducing artifacts. Towards this design, we develop a dual-query transformer decoder that integrates perturbed and latent queries within a denoising framework, which not only stabilizes optimization but also accelerates convergence. Extensive experiments on Structured3D and SceneCAD show that CAGE achieves state-of-the-art performance, with F1 scores of 99.1% (rooms), 91.7% (corners), and 89.3% (angles). The method also demonstrates strong cross-dataset generalization, underscoring the efficacy of our architectural innovations. Code and pretrained models will be released upon acceptance.", "arxiv_id": "2509.15459v2", "arxiv_authors": ["Yiyi Liu", "Chunyang Liu", "Bohan Wang", "Weiqin Jiao", "Bojian Wu", "Lubin Fan", "Yuwei Chen", "Fashuai Li", "Biao Xiong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a132"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1161780, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a1"}, "filepath": "data/2509.19731v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997582750144135, "type": "Poster", "name": "CAMILA: Context-Aware Masking for Image Editing with Language Alignment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119101", "abstract": "Text-guided image editing has been allowing users to transform and synthesize images through natural language instructions, offering considerable flexibility. However, most existing image editing models naively attempt to follow all user instructions, even if those instructions are inherently infeasible or contradictory, often resulting in nonsensical output. To address these challenges, we propose a context-aware method for image editing named as CAMILA (Context-Aware Masking for Image Editing with Language Alignment). CAMILA is designed to validate the contextual coherence between instructions and the image, ensuring that only relevant edits are applied to the designated regions while ignoring non-executable instructions. For comprehensive evaluation of this new method, we constructed datasets for both single- and multi-instruction image editing, incorporating the presence of infeasible requests. 
Our method achieves better performance and higher semantic alignment than state-of-the-art models, demonstrating its effectiveness in handling complex instruction challenges while preserving image integrity.", "arxiv_id": "2509.19731v2", "arxiv_authors": ["Hyunseung Kim", "Chiho Choi", "Srikanth Malla", "Sai Prahladh Padmanabhan", "Saurabh Bagchi", "Joon Hee Choi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a133"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060905, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a2"}, "filepath": "data/2510.17626v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990923192567325, "type": "Poster", "name": "CaMiT: A Time-Aware Car Model Dataset for Classification and Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121609", "abstract": "AI systems must adapt to the evolving visual landscape, especially in domains where object appearance shifts over time. While prior work on time-aware vision models has primarily addressed commonsense-level categories, we introduce Car Models in Time (CaMiT). This fine-grained dataset captures the temporal evolution of this representative subset of technological artifacts. CaMiT includes 787K labeled samples of 190 car models (2007\u20132023) and 5.1M unlabeled samples (2005\u20132023), supporting supervised and self-supervised learning. We show that static pretraining on in-domain data achieves competitive performance with large-scale generalist models, offering a more resource-efficient solution. However, accuracy degrades when testing a year's models backward and forward in time. To address this, we evaluate CaMiT in a time-incremental classification setting, a realistic continual learning scenario with emerging, evolving, and disappearing classes. We investigate two mitigation strategies: time-incremental pretraining, which updates the backbone model, and time-incremental classifier learning, which updates the final classification layer, with positive results in both cases. Finally, we introduce time-aware image generation by consistently using temporal metadata during training. Results indicate improved realism compared to standard generation. 
CaMiT provides a rich resource for exploring temporal adaptation in a fine-grained visual context for discriminative and generative AI systems.", "arxiv_id": "2510.17626v2", "arxiv_authors": ["Fr\u00e9d\u00e9ric LIN", "Biruk Abere Ambaw", "Adrian Popescu", "Hejer Ammar", "Romaric Audigier", "Herv\u00e9 Le Borgne"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a134"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1358979, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a3"}, "filepath": "data/2502.17821v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994516852485366, "type": "Poster", "name": "CAML: Collaborative Auxiliary Modality Learning for Multi-Agent Systems", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118277", "abstract": "Multi-modal learning has become a crucial technique for improving the performance of machine learning applications across domains such as autonomous driving, robotics, and perception systems. However, in certain scenarios, particularly in resource-constrained environments, some modalities available during training may be absent during inference. While existing frameworks effectively utilize multiple data sources during training and enable inference with reduced modalities, they are primarily designed for single-agent settings. This poses a critical limitation in dynamic environments such as connected autonomous vehicles (CAV), where incomplete data coverage can lead to decision-making blind spots. Conversely, some works explore multi-agent collaboration but without addressing missing modality at test time. To overcome these limitations, we propose Collaborative Auxiliary Modality Learning (CAML), a novel multi-modal multi-agent framework that enables agents to collaborate and share multi-modal data during training, while allowing inference with reduced modalities during testing. Experimental results in collaborative decision-making for CAV in accident-prone scenarios demonstrate that CAML achieves up to a 58.1% improvement in accident detection. Additionally, we validate CAML on real-world aerial-ground robot data for collaborative semantic segmentation, achieving up to a 10.6% improvement in mIoU.", "arxiv_id": "2502.17821v2", "arxiv_authors": ["Rui Liu", "Yu Shen", "Peng Gao", "Pratap Tokekar", "Ming Lin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a135"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 985954, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a4"}, "filepath": "data/2503.19730v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993220287465359, "type": "Poster", "name": "CamSAM2: Segment Anything Accurately in Camouflaged Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117583", "abstract": "Video camouflaged object segmentation (VCOS), aiming at segmenting camouflaged objects that seamlessly blend into their environment, is a fundamental vision task with various real-world applications. 
With the release of SAM2, video segmentation has witnessed significant progress. However, SAM2's capability of segmenting camouflaged videos is suboptimal, especially when given simple prompts such as point and box. To address the problem, we propose Camouflaged SAM2 (CamSAM2), which enhances SAM2's ability to handle camouflaged scenes without modifying SAM2's parameters. Specifically, we introduce a decamouflaged token to provide the flexibility of feature adjustment for VCOS. To make full use of fine-grained and high-resolution features from the current frame and previous frames, we propose implicit object-aware fusion (IOF) and explicit object-aware fusion (EOF) modules, respectively. Object prototype generation (OPG) is introduced to abstract and memorize object prototypes with informative details using high-quality features from previous frames. Extensive experiments are conducted to validate the effectiveness of our approach. While CamSAM2 only adds negligible learnable parameters to SAM2, it substantially outperforms SAM2 on three VCOS datasets, especially achieving 12.2 mDice gains with click prompt on MoCA-Mask and 19.6 mDice gains with mask prompt on SUN-SEG-Hard, with Hiera-T as the backbone. The code will be released.", "arxiv_id": "2503.19730v2", "arxiv_authors": ["Yuli Zhou", "Guolei Sun", "Yawei Li", "Yuqian Fu", "Luca Benini", "Ender Konukoglu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a136"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2505775, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a5"}, "filepath": "data/2505.12207v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996760831230038, "type": "Poster", "name": "Can Large Multimodal Models Understand Agricultural Scenes? Benchmarking with AgroMind", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121372", "abstract": "Large Multimodal Models (LMMs) has demonstrated capabilities across various domains, but comprehensive benchmarks for agricultural remote sensing (RS) remain scarce. Existing benchmarks designed for agricultural RS scenarios exhibit notable limitations, primarily in terms of insufficient scene diversity in the dataset and oversimplified task design. To bridge this gap, we introduce AgroMind, a comprehensive agricultural remote sensing benchmark covering four task dimensions: spatial perception, object understanding, scene understanding, and scene reasoning, with a total of 13 task types, ranging from crop identification and health monitoring to environmental analysis. We curate a high-quality evaluation set by integrating eight public datasets and one private farmland plot dataset, containing 25,026 QA pairs and 15,556 images. The pipeline begins with multi-source data preprocessing, including collection, format standardization, and annotation refinement. We then generate a diverse set of agriculturally relevant questions through the systematic definition of tasks. Finally, we employ LMMs for inference, generating responses, and performing detailed examinations. We evaluated 18 open-source LMMs and 3 closed-source models on AgroMind. 
Experiments reveal significant performance gaps, particularly in spatial reasoning and fine-grained recognition, it is notable that human performance lags behind several leading LMMs. By establishing a standardized evaluation framework for agricultural RS, AgroMind reveals the limitations of LMMs in domain knowledge and highlights critical challenges for future work. Data and code can be accessed at https://rssysu.github.io/AgroMind/.", "arxiv_id": "2505.12207v3", "arxiv_authors": ["Qingmei Li", "Yang Zhang", "Zurong Mai", "Yuhang Chen", "Shuohong Lou", "Henglian Huang", "Jiarui Zhang", "Zhiwei Zhang", "Yibin Wen", "Weijia Li", "Haohuan Fu", "Jianxi Huang", "Juepeng Zheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a137"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2993386, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a6"}, "filepath": "data/2505.22441v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994478470317473, "type": "Poster", "name": "Can NeRFs See without Cameras?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119765", "abstract": "Neural Radiance Fields (NeRFs) have been remarkably successful at synthesizing novel views of 3D scenes by optimizing a volumetric scene function. This scene function models how optical rays bring color information from a 3D object to the camera pixels. Radio frequency (RF) or audio signals can also be viewed as a vehicle for delivering information about the environment to a sensor. However, unlike camera pixels, an RF/audio sensor receives a mixture of signals that contain many environmental reflections (also called \u201cmultipath\u201d). Is it still possible to infer the environment using such multipath signals? We show that with redesign, NeRFs can be taught to learn from multipath signals, and thereby \u201csee\u201d the environment. As a grounding application, we aim to infer the indoor floorplan of a home from sparse WiFi measurements made at multiple locations inside the home. 
Although a difficult inverse problem, our implicitly learnt floorplans look promising, and enables forward applications, such as indoor signal prediction and basic ray tracing.", "arxiv_id": "2505.22441v2", "arxiv_authors": ["Chaitanya Amballa", "Sattwik Basu", "Yu-Lin Wei", "Zhijian Yang", "Mehmet Ergezer", "Romit Roy Choudhury"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a138"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1196201, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a7"}, "filepath": "data/2502.14914v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999036564694065, "type": "Poster", "name": "CAPability: A Comprehensive Visual Caption Benchmark for Evaluating Both Correctness and Thoroughness", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121398", "abstract": "Visual captioning benchmarks have become outdated with the emergence of modern multimodal large language models (MLLMs), as the brief ground-truth sentences and traditional metrics fail to assess detailed captions effectively. While recent benchmarks attempt to address this by focusing on keyword extraction or object-centric evaluation, they remain limited to vague-view or object-view analyses and incomplete visual element coverage. In this paper, we introduce CAPability, a comprehensive multi-view benchmark for evaluating visual captioning across 12 dimensions spanning six critical views. We curate nearly 11K human-annotated images and videos with visual element annotations to evaluate the generated captions. CAPability stably assesses both the correctness and thoroughness of captions with \\textit{precision} and \\textit{hit} metrics. By converting annotations to QA pairs, we further introduce a heuristic metric, \\textit{know but cannot tell} ($K\\bar{T}$), indicating a significant performance gap between QA and caption capabilities. Our work provides a holistic analysis of MLLMs' captioning abilities, as we identify their strengths and weaknesses across various dimensions, guiding future research to enhance specific aspects of their capabilities.", "arxiv_id": "2502.14914v3", "arxiv_authors": ["Zhihang Liu", "Chen-Wei Xie", "Bin Wen", "Feiwu Yu", "Jixuan Chen", "Pandeng Li", "Boqiang Zhang", "Nianzu Yang", "Yinglu Li", "Zuan Gao", "Yun Zheng", "Hongtao Xie"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a139"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1768275, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a8"}, "filepath": "data/2505.21538v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998278966827665, "type": "Poster", "name": "Caption This, Reason That: VLMs Caught in the Middle", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116234", "abstract": "Vision-Language Models (VLMs) have shown remarkable progress in visual understanding in recent years. 
Yet, they still lag behind human capabilities in specific visual tasks such as counting or relational reasoning. To understand the underlying limitations, we adopt methodologies from cognitive science, analyzing VLM performance along core cognitive axes: Perception, Attention, and Memory. Using a suite of tasks targeting these abilities, we evaluate state-of-the-art VLMs, including GPT-4o. Our analysis reveals distinct cognitive profiles: while advanced models approach ceiling performance on some tasks (e.g. category identification), a significant gap persists, particularly in tasks requiring spatial understanding or selective attention. Investigating the source of these failures and potential methods for improvement, we employ a vision-text decoupling analysis, finding that models struggling with direct visual reasoning show marked improvement when reasoning over their own generated text captions. These experiments reveal a strong need for improved VLM CoT abilities, even in models that consistently exceed human performance. Furthermore, we demonstrate the potential of targeted fine-tuning on composite visual reasoning tasks and show that fine-tuning smaller VLMs substantially improves core cognitive abilities. While this improvement does not translate to large enhancements on challenging, out-of-distribution benchmarks, we show broadly that VLM performance on our datasets strongly correlates with performance on these other benchmarks. Our work provides a detailed analysis of VLM cognitive strengths and weaknesses and identifies key bottlenecks in simultaneous perception and reasoning while also providing an effective and simple solution.", "arxiv_id": "2505.21538v1", "arxiv_authors": ["Zihan Weng", "Lucas Gomez", "Taylor Whittington Webb", "Pouya Bashivan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a13a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 914649, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5a9"}, "filepath": "data/2509.19300v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992379006729942, "type": "Poster", "name": "CAR: Condition-Aware Reparameterization Aligns Source and Target for Better Flow Matching", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116543", "abstract": "Conditional generative modeling aims to learn a conditional data distribution from samples containing data-condition pairs. For this, diffusion and flow-based methods have attained compelling results. These methods use a learned (flow) model to transport an initial standard Gaussian noise that ignores the condition to the conditional data distribution. The model is hence required to learn both mass transport \\emph{and} conditional injection. To ease the demand on the model, we propose \\emph{Condition-Aware Reparameterization} (CAR)--a lightweight, learned \\emph{shift} that conditions the source, the target, or both distributions. By relocating these distributions, CAR shortens the probability path the model must learn, leading to faster training in practice. On low-dimensional synthetic data, we visualize and quantify the effects of CAR. 
On higher-dimensional natural image data (ImageNet-256), we show that adding CAR to SiT-XL/2 reduces FID from 2.07 to 1.68, while introducing less than \\(0.6\\%\\) additional parameters.", "arxiv_id": "2509.19300v2", "arxiv_authors": ["Chen Chen", "Pengsheng Guo", "Liangchen Song", "Jiasen Lu", "Rui Qian", "Xinze Wang", "Tsu-Jui Fu", "Wei Liu", "Yinfei Yang", "Alex Schwing"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a13b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1028968, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5aa"}, "filepath": "data/2510.04312v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995052490743502, "type": "Poster", "name": "Care-PD: A Multi-Site Anonymized Clinical Dataset for Parkinson\u2019s Disease Gait Assessment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121554", "abstract": "Objective gait assessment in Parkinson\u2019s Disease (PD) is limited by the absence of large, diverse, and clinically annotated motion datasets. We introduce Care-PD, the largest publicly available archive of 3D mesh gait data for PD, and the first multi-site collection spanning 9 cohorts from 8 clinical centers. All recordings (RGB video or motion capture) are converted into anonymized SMPL meshes via a harmonized preprocessing pipeline. Care-PD supports two key benchmarks: supervised clinical score prediction (estimating Unified Parkinson\u2019s Disease Rating Scale, UPDRS, gait scores) and unsupervised motion pretext tasks (2D-to-3D keypoint lifting and full-body 3D reconstruction). Clinical prediction is evaluated under four generalization protocols: within-dataset, cross-dataset, leave-one-dataset-out, and multi-dataset in-domain adaptation.To assess clinical relevance, we compare state-of-the-art motion encoders with a traditional gait-feature baseline, finding that encoders consistently outperform handcrafted features. Pretraining on Care-PD reduces MPJPE (from 60.8mm to 7.5mm) and boosts PD severity macro-F1 by 17\\%, underscoring the value of clinically curated, diverse training data. Care-PD and all benchmark code are released for non-commercial research (Code, Data).", "arxiv_id": "2510.04312v1", "arxiv_authors": ["Vida Adeli", "Ivan Klabucar", "Javad Rajabi", "Benjamin Filtjens", "Soroush Mehraban", "Diwei Wang", "Hyewon Seo", "Trung-Hieu Hoang", "Minh N. 
Do", "Candice Muller", "Claudia Oliveira", "Daniel Boari Coelho", "Pieter Ginis", "Moran Gilat", "Alice Nieuwboer", "Joke Spildooren", "Lucas Mckay", "Hyeokhyen Kwon", "Gari Clifford", "Christine Esper", "Stewart Factor", "Imari Genias", "Amirhossein Dadashzadeh", "Leia Shum", "Alan Whone", "Majid Mirmehdi", "Andrea Iaboni", "Babak Taati"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a13c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.471Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1103078, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ab"}, "filepath": "data/2501.03120v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995606626681282, "type": "Poster", "name": "CAT: Content-Adaptive Image Tokenization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117055", "abstract": "Most existing image tokenizers encode images into a fixed number of tokens or patches, overlooking the inherent variability in image complexity and introducing unnecessary computate overhead for simpler images. To address this, we propose Content-Adaptive Tokenizer (CAT), which dynamically adjusts representation capacity based on the image content and encodes simpler images into fewer tokens. We design (1) a caption-based evaluation system that leverages LLMs to predict content complexity and determine the optimal compression ratio for an image, and (2) a novel nested VAE architecture that performs variable-rate compression in a single model.Trained on images with varying complexity, CAT achieves an average of 15% reduction in rFID across seven detail-rich datasets containing text, humans, and complex textures. On natural image datasets like ImageNet and COCO, it reduces token usage by 18% while maintaining high-fidelity reconstructions. We further evaluate CAT on two downstream tasks. For image classification, CAT consistently improves top-1 accuracy across five datasets spanning diverse domains. For image generation, it boosts training throughput by 23% on ImageNet, leading to more efficient learning and improved FIDs over fixed-token baselines.", "arxiv_id": "2501.03120v1", "arxiv_authors": ["Junhong Shen", "Kushal Tirumala", "Michihiro Yasunaga", "Ishan Misra", "Luke Zettlemoyer", "Lili Yu", "Chunting Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a13d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1415788, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ac"}, "filepath": "data/2505.17590v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990599691459596, "type": "Poster", "name": "CGS-GAN: 3D Consistent Gaussian Splatting GANs for High Resolution Human Head Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118694", "abstract": "Recently, 3D GANs based on 3D Gaussian splatting have been proposed for high quality synthesis of human heads. However, existing methods stabilize training and enhance rendering quality from steep viewpoints by conditioning the random latent vector on the current camera position. 
This compromises 3D consistency, as we observe significant identity changes when re-synthesizing the 3D head with each camera shift. Conversely, fixing the camera to a single viewpoint yields high-quality renderings for that perspective but results in poor performance for novel views. Removing view-conditioning typically destabilizes GAN training, often causing the training to collapse. In response to these challenges, we introduce CGS-GAN, a novel 3D Gaussian Splatting GAN framework that enables stable training and high-quality 3D-consistent synthesis of human heads without relying on view-conditioning. To ensure training stability, we introduce a multi-view regularization technique that enhances generator convergence with minimal computational overhead. Additionally, we adapt the conditional loss used in existing 3D Gaussian splatting GANs and propose a generator architecture designed to not only stabilize training but also facilitate efficient rendering and straightforward scaling, enabling output resolutions up to $2048^2$. To evaluate the capabilities of CGS-GAN, we curate a new dataset derived from FFHQ. This dataset enables very high resolutions, focuses on larger portions of the human head, reduces view-dependent artifacts for improved 3D consistency, and excludes images where subjects are obscured by hands or other objects. As a result, our approach achieves very high rendering quality, supported by competitive FID scores, while ensuring consistent 3D scene generation.", "arxiv_id": "2505.17590v2", "arxiv_authors": ["Florian Barthel", "Wieland Morgenstern", "Paul Hinzer", "Anna Hilsmann", "Peter Eisert"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a13e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3554072, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ad"}, "filepath": "data/2506.09990v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991498607227248, "type": "Poster", "name": "Chain-of-Action: Trajectory Autoregressive Modeling for Robotic Manipulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116620", "abstract": "We present Chain-of-Action (CoA), a novel visuo-motor policy paradigm built upon Trajectory Autoregressive Modeling. Unlike conventional approaches that predict next step action(s) forward, CoA generates an entire trajectory by explicit backward reasoning with task-specific goals through an action-level Chain-of-Thought (CoT) process. This process is unified within a single autoregressive structure: (1) the first token corresponds to a stable keyframe action that encodes the task-specific goals; and (2) subsequent action tokens are generated autoregressively, conditioned on the initial keyframe and previously predicted actions. This backward action reasoning enforces a global-to-local structure, allowing each local action to be tightly constrained by the final goal. To further realize the action reasoning structure, CoA incorporates four complementary designs: continuous action token representation; dynamic stopping for variable-length trajectory generation; reverse temporal ensemble; and multi-token prediction to balance action chunk modeling with global structure. 
As a result, CoA gives strong spatial generalization capabilities while preserving the flexibility and simplicity of a visuo-motor policy. Empirically, we observe CoA achieves the state-of-the-art performance across 60 RLBench tasks and 8 real-world manipulation tasks.", "arxiv_id": "2506.09990v1", "arxiv_authors": ["Wenbo Zhang", "Tianrun Hu", "Yanyuan Qiao", "Hanbo Zhang", "Yuchu Qin", "Yang Li", "Jiajun Liu", "Tao Kong", "Lingqiao Liu", "Xiao Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a13f"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 994816, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ae"}, "filepath": "data/2505.18600v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996235884982365, "type": "Poster", "name": "Chain-of-Zoom: Extreme Super-Resolution via Scale Autoregression and Preference Alignment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118846", "abstract": "Modern single-image super-resolution (SISR) models deliver photo-realistic results at the scale factors on which they are trained, but collapse when asked to magnify far beyond that regime. We address this scalability bottleneck with Chain-of-Zoom (CoZ), a model-agnostic framework that factorizes SISR into an autoregressive chain of intermediate scale-states with multi-scale-aware prompts. CoZ repeatedly re-uses a backbone SR model, decomposing the conditional probability into tractable sub-problems to achieve extreme resolutions without additional training. Because visual cues diminish at high magnifications, we augment each zoom step with multi-scale-aware text prompts generated by a vision-language model (VLM). The prompt extractor itself is fine-tuned using Generalized Reward Policy Optimization (GRPO) with a critic VLM, aligning text guidance towards human preference. Experiments show that a standard $4\\times$ diffusion SR model wrapped in CoZ attains beyond $256\\times$ enlargement with high perceptual quality and fidelity.", "arxiv_id": "2505.18600v2", "arxiv_authors": ["Bryan Sangwoo Kim", "Jeongsol Kim", "Jong Chul Ye"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a140"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5666782, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5af"}, "filepath": "data/2503.19331v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991573976324961, "type": "Poster", "name": "ChA-MAEViT: Unifying Channel-Aware Masked Autoencoders and Multi-Channel Vision Transformers for Improved Cross-Channel Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118668", "abstract": "Prior work using Masked Autoencoders (MAEs) typically relies on random patch masking based on the assumption that images have significant redundancies across different channels, allowing for the reconstruction of masked content using cross-channel correlations. 
However, this assumption does not hold in Multi-Channel Imaging (MCI), where channels may provide complementary information with minimal feature overlap. Thus, these MAEs primarily learn local structures within individual channels from patch reconstruction, failing to fully leverage cross-channel interactions and limiting their MCI effectiveness. In this paper, we present ChA-MAEViT, an MAE-based method that enhances feature learning across MCI channels via four key strategies: (1) dynamic channel-patch masking, which compels the model to reconstruct missing channels in addition to masked patches, thereby enhancing cross-channel dependencies and improving robustness to varying channel configurations; (2) memory tokens, which serve as long-term memory aids to promote information sharing across channels, addressing the challenges of reconstructing structurally diverse channels; (3) hybrid token fusion module, which merges fine-grained patch tokens with a global class token to capture richer representations; and (4) Channel-Aware Decoder, a lightweight decoder that utilizes channel tokens to effectively reconstruct image patches. Experiments on satellite and microscopy datasets, CHAMMI, JUMP-CP, and So2Sat, show that ChA-MAEViT significantly outperforms state-of-the-art MCI-ViTs by 3.0-21.5%, highlighting the importance of cross-channel interactions in MCI.", "arxiv_id": "2503.19331v3", "arxiv_authors": ["Chau Pham", "Juan C. Caicedo", "Bryan A. Plummer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a141"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1107296, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b0"}, "filepath": "data/2510.23589v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993150130846424, "type": "Poster", "name": "ChangeIn: A Benchmark for Self-Calibration of Dynamic Intrinsics of Video Cameras", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121582", "abstract": "Accurately tracking camera intrinsics is crucial for achieving 3D understanding from 2D video. However, most 3D algorithms assume that camera intrinsics stay constant throughout a video, which is often not true for many real-world in-the-wild videos. A major obstacle in this field is a lack of dynamic camera intrinsics benchmarks--existing benchmarks typically offer limited diversity in scene content and intrinsics variation, and none provide per-frame intrinsic changes for consecutive video frames. In this paper, we present ChangeIn, a real-world benchmark that provides per-frame ground truth intrinsics annotations for videos with dynamic intrinsics. Compared to prior benchmarks, ChangeIn captures a wider range of intrinsic variations and scene diversity, featuring 143K+ annotated frames from 386 high-resolution indoor and outdoor videos with dynamic camera intrinsics. To ensure accurate per-frame intrinsics, we build a comprehensive look-up table of calibration experiments and extend the Kalibr toolbox to improve its accuracy and robustness.
Using our benchmark, we evaluate existing baseline methods for predicting camera intrinsics and find that most struggle to achieve accurate predictions on videos with dynamic intrinsics.", "arxiv_id": "2510.23589v1", "arxiv_authors": ["Erich Liang", "Roma Bhattacharjee", "Sreemanti Dey", "Rafael Moschopoulos", "Caitlin Wang", "Michel Liao", "Grace Tan", "Andrew Wang", "Karhan Kayan", "Stamatis Alexandropoulos", "Jia Deng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a142"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1018940, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b1"}, "filepath": "data/2505.19076v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993776923173426, "type": "Poster", "name": "ChartSketcher: Reasoning with Multimodal Feedback and Reflection for Chart Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119332", "abstract": "Charts are high-density visualization carriers for complex data, serving as a crucial medium for information extraction and analysis. Automated chart understanding poses significant challenges to existing multimodal large language models (MLLMs) due to the need for precise and complex visual reasoning. Current step-by-step reasoning models primarily focus on text-based logical reasoning for chart understanding. However, they struggle to refine or correct their reasoning when errors stem from flawed visual understanding, as they lack the ability to leverage multimodal interaction for deeper comprehension. Inspired by human cognitive behavior, we propose ChartSketcher, a multimodal feedback-driven step-by-step reasoning method designed to address these limitations. ChartSketcher is a chart understanding model that employs Sketch-CoT, enabling MLLMs to annotate intermediate reasoning steps directly onto charts using a programmatic sketching library, iteratively feeding these visual annotations back into the reasoning process. This mechanism enables the model to visually ground its reasoning and refine its understanding over multiple steps. We employ a two-stage training strategy: a cold start phase to learn sketch-based reasoning patterns, followed by off-policy reinforcement learning to enhance reflection and generalization. 
Experiments demonstrate that ChartSketcher achieves promising performance on chart understanding benchmarks and general vision tasks, providing an interactive and interpretable approach to chart comprehension.", "arxiv_id": "2505.19076v1", "arxiv_authors": ["Muye Huang", "Lingling Zhang", "Jie Ma", "Han Lai", "Fangzhi Xu", "Yifei Li", "Wenjun Wu", "Yaqiang Wu", "Jun Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a143"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1029214, "mime_type": "image/png", "width": 4134, "height": 5847, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b2"}, "filepath": "data/2509.08502v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997548777164771, "type": "Poster", "name": "Chirality in Action: Time-Aware Video Representation Learning by Latent Straightening", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116636", "abstract": "Our objective is to develop compact video representations that are sensitive to visual change over time. To measure such time-sensitivity, we introduce a new task: chiral action recognition, where one needs to distinguish between a pair of temporally opposite actions, such as \u201copening vs. closing a door\", \u201capproaching vs. moving away from something\", \u201cfolding vs. unfolding paper\", etc. Such actions (i) occur frequently in everyday life, (ii) require understanding of simple visual change over time (in object state, size, spatial position, count . . . ), and (iii) are known to be poorly represented by many video embeddings. Our goal is to build time aware video representations which offer linear separability between these chiral pairs. To that end, we propose a self-supervised adaptation recipe to inject time-sensitivity into a sequence of frozen image features. Our model is based on an auto-encoder with a latent space with inductive bias inspired by perceptual straightening. We show that this results in a compact but time-sensitive video representation for the proposed task across three datasets: Something-Something, EPIC-Kitchens, and Charade. 
Our method (i) outperforms much larger video models pre-trained on large-scale video datasets, and (ii) leads to an improvement in classification performance on standard benchmarks when combined with these existing models.", "arxiv_id": "2509.08502v2", "arxiv_authors": ["Piyush Bagad", "Andrew Zisserman"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a144"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1100771, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b3"}, "filepath": "data/2506.16962v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996316849062493, "type": "Poster", "name": "Chiron-o1: Igniting Multimodal Large Language Models towards Generalizable Medical Reasoning via Mentor-Intern Collaborative Search", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116739", "abstract": "Multimodal large language models (MLLMs) have begun to demonstrate robust reasoning capabilities on general tasks, yet their application in the medical domain remains in its early stages. Constructing chain-of-thought (CoT) training data is essential for bolstering the reasoning abilities of medical MLLMs. However, existing approaches exhibit a deficiency in offering a comprehensive framework for searching and evaluating effective reasoning paths towards critical diagnosis. To address this challenge, we propose Mentor-Intern Collaborative Search (MICS), a novel reasoning-path searching scheme to generate rigorous and effective medical CoT data. MICS first leverages mentor models to initialize the reasoning, one step at a time, then prompts each intern model to continue the thinking along those initiated paths, and finally selects the optimal reasoning path according to the overall reasoning performance of multiple intern models. The reasoning performance is determined by an MICS-Score, which assesses the quality of generated reasoning paths. Eventually, we construct MMRP, a multi-task medical reasoning dataset with ranked difficulty, and Chiron-o1, a new medical MLLM devised via a curriculum learning strategy, with robust visual question-answering and generalizable reasoning capabilities. Extensive experiments demonstrate that Chiron-o1, trained on our CoT dataset constructed using MICS, achieves state-of-the-art performance across a list of medical visual question answering and reasoning benchmarks. 
Our model and code will be publicly available.", "arxiv_id": "2506.16962v2", "arxiv_authors": ["Haoran Sun", "Yankai Jiang", "Wenjie Lou", "Yujie Zhang", "Wenjie Li", "Lilong Wang", "Mianxin Liu", "Lei Liu", "Xiaosong Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a145"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1030363, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b4"}, "filepath": "data/2411.18145v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999111068736726, "type": "Poster", "name": "CHOICE: Benchmarking the Remote Sensing Capabilities of Large Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121749", "abstract": "The rapid advancement of Large Vision-Language Models (VLMs), both general-domain models and those specifically tailored for remote sensing, has demonstrated exceptional perception and reasoning capabilities in Earth observation tasks. However, a benchmark for systematically evaluating their capabilities in this domain is still lacking. To bridge this gap, we propose CHOICE, an extensive benchmark designed to objectively evaluate the hierarchical remote sensing capabilities of VLMs. Focusing on 2 primary capability dimensions essential to remote sensing: perception and reasoning, we further categorize 6 secondary dimensions and 23 leaf tasks to ensure a well-rounded assessment coverage. CHOICE guarantees the quality of all 10,507 problems through a rigorous process of data collection from 50 globally distributed cities, question construction and quality control. The newly curated data and the format of multiple-choice questions with definitive answers allow for an objective and straightforward performance assessment. Our evaluation of 3 proprietary and 21 open-source VLMs highlights their critical limitations within this specialized context. We hope that CHOICE will serve as a valuable resource and offer deeper insights into the challenges and potential of VLMs in the field of remote sensing. Code and dataset are available at [this https URL](https://github.com/ShawnAn-WHU/CHOICE).", "arxiv_id": "2411.18145v3", "arxiv_authors": ["Xiao An", "Jiaxing Sun", "Zihan Gui", "Wei He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a146"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083756, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b5"}, "filepath": "data/2505.15145v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999596793229284, "type": "Poster", "name": "CineTechBench: A Benchmark for Cinematographic Technique Understanding and Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121706", "abstract": "Cinematography is a cornerstone of film production and appreciation, shaping mood, emotion, and narrative through visual elements such as camera movement, shot composition, and lighting. 
Despite recent progress in multimodal large language models (MLLMs) and video generation models, the capacity of current models to grasp and reproduce cinematographic techniques remains largely uncharted, hindered by the scarcity of expert-annotated data. To bridge this gap, we present CineTechBench, a pioneering benchmark founded on precise, manual annotation by seasoned cinematography experts across key cinematography dimensions. Our benchmark covers seven essential aspects\u2014shot scale, shot angle, composition, camera movement, lighting, color, and focal length\u2014and includes over 600 annotated movie images and 120 movie clips with clear cinematographic techniques. For the understanding task, we design question\u2013answer pairs and annotated descriptions to assess MLLMs\u2019 ability to interpret and explain cinematographic techniques. For the generation task, we assess advanced video generation models on their capacity to reconstruct cinema-quality camera movements given conditions such as textual prompts or keyframes. We conduct a large-scale evaluation on 15+ MLLMs and 5+ video generation models. Our results offer insights into the limitations of current models and future directions for cinematography understanding and generation in automatic film production and appreciation. The code and benchmark can be accessed at \\url{https://github.com/PRIS-CV/CineTechBench}.", "arxiv_id": "2505.15145v1", "arxiv_authors": ["Xinran Wang", "Songyu Xu", "Xiangxuan Shan", "Yuxuan Zhang", "Muxi Diao", "Xueyan Duan", "Yanhua Huang", "Kongming Liang", "Zhanyu Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a147"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6748001, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b6"}, "filepath": "data/2510.12150v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998588948522451, "type": "Poster", "name": "Class-aware Domain Knowledge Fusion and Fission for Continual Test-Time Adaptation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119068", "abstract": "Continual Test-Time Adaptation (CTTA) aims to quickly fine-tune the model during the test phase so that it can adapt to multiple unknown downstream domain distributions without pre-acquiring downstream domain data. To this end, existing advanced CTTA methods mainly reduce the catastrophic forgetting of historical knowledge caused by irregular switching of downstream domain data by restoring the initial model or reusing historical models. However, these methods are usually accompanied by seriously insufficient learning of new knowledge and interference from potentially harmful historical knowledge, resulting in severe performance degradation. To this end, we propose a class-aware domain Knowledge Fusion and Fission method for continual test-time adaptation, called KFF, which adaptively expands and merges class-aware domain knowledge in old and new domains according to the test-time data from different domains, where discriminative historical knowledge can be dynamically accumulated.
Specifically, considering the huge domain gap within streaming data, a domain Knowledge FIssion (KFI) module is designed to adaptively separate new domain knowledge from a paired class-aware domain prompt pool, alleviating the impact of negative knowledge brought by old domains that are distinct from the current domain. Besides, to avoid the cumulative computation and storage overheads from continuously fissioning new knowledge, a domain Knowledge FUsion (KFU) module is further designed to merge the fissioned new knowledge into the existing knowledge pool with minimal cost, where a greedy knowledge dynamic merging strategy is designed to improve the compatibility of new and old knowledge while keeping the computational efficiency.", "arxiv_id": "2510.12150v1", "arxiv_authors": ["Jiahuan Zhou", "Chao Zhu", "Zhenyu Cui", "Zichen Liu", "Xu Zou", "Gang Hua"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a148"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.472Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1071732, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b7"}, "filepath": "data/2507.08776v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993863776714297, "type": "Poster", "name": "CLiFT: Compressive Light-Field Tokens for Compute Efficient and Adaptive Neural Rendering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118484", "abstract": "This paper proposes a neural rendering approach that represents a scene as "compressed light-field tokens (CLiFTs)", retaining rich appearance and geometric information of a scene. CLiFT enables compute-efficient rendering by compressed tokens, while being capable of changing the number of tokens to represent a scene or render a novel view with one trained network. Concretely, given a set of images, a multi-view encoder tokenizes the images with the camera poses. Latent-space K-means selects a reduced set of rays as cluster centroids using the tokens. The multi-view ``condenser'' compresses the information of all the tokens into the centroid tokens to construct CLiFTs. At test time, given a target view and a compute budget (i.e., the number of CLiFTs), the system collects the specified number of nearby tokens and synthesizes a novel view using a compute-adaptive renderer trained to handle a variable number of tokens.
Extensive experiments on RealEstate10K and DL3DV datasets quantitatively and qualitatively validate our approach, achieving significant data reduction with comparable rendering quality and the highest overall rendering score, while providing trade-offs of data size, rendering quality, and rendering speed.", "arxiv_id": "2507.08776v2", "arxiv_authors": ["Zhengqing Wang", "Yuefan Wu", "Jiacheng Chen", "Fuyang Zhang", "Yasutaka Furukawa"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a149"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1038956, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b8"}, "filepath": "data/2505.22854v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994171549123079, "type": "Poster", "name": "CLIPGaussian: Universal and Multimodal Style Transfer Based on Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116356", "abstract": "Gaussian Splatting (GS) has recently emerged as an efficient representation for rendering 3D scenes from 2D images and has been extended to images, videos, and dynamic 4D content. However, applying style transfer to GS-based representations, especially beyond simple color changes, remains challenging. In this work, we introduce CLIPGaussians, the first unified style transfer framework that supports text- and image-guided stylization across multiple modalities: 2D images, videos, 3D objects, and 4D scenes. Our method operates directly on Gaussian primitives and integrates into existing GS pipelines as a plug-in module, without requiring large generative models or retraining from scratch. The CLIPGaussians approach enables joint optimization of color and geometry in 3D and 4D settings, and achieves temporal coherence in videos, while preserving model size. We demonstrate superior style fidelity and consistency across all tasks, validating CLIPGaussians as a universal and efficient solution for multimodal style transfer.", "arxiv_id": "2505.22854v1", "arxiv_authors": ["Kornel Howil", "Joanna Waczy\u0144ska", "Piotr Borycki", "Tadeusz Dziarmaga", "Marcin Mazur", "Przemys\u0142aw Spurek"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a14a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2700220, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5b9"}, "filepath": "data/2508.02329v4.png", "tags": [], "_media_type": "image", "_rand": 0.9992136594374582, "type": "Poster", "name": "CLIP-IN: Enhancing Fine-Grained Visual Understanding in CLIP via Instruction-Editing Data and Long Captions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118953", "abstract": "Despite the success of Vision-Language Models (VLMs) like CLIP in aligning vision and language, their proficiency in detailed, fine-grained visual comprehension remains a key challenge. We present CLIP-IN, a novel framework that bolsters CLIP's fine-grained perception through two core innovations.
Firstly, we leverage instruction-editing datasets, originally designed for image manipulation, as a unique source of hard negative image-text pairs. Coupled with a symmetric hard negative contrastive loss, this enables the model to effectively distinguish subtle visual-semantic differences. Secondly, CLIP-IN incorporates long descriptive captions, utilizing rotary positional encodings to capture rich semantic context often missed by standard CLIP. Our experiments demonstrate that CLIP-IN achieves substantial gains on the MMVP benchmark and various fine-grained visual recognition tasks, without compromising robust zero-shot performance on broader classification and retrieval tasks. Critically, integrating CLIP-IN's visual representations into Multimodal Large Language Models significantly reduces visual hallucinations and enhances reasoning abilities. This work underscores the considerable potential of synergizing targeted, instruction-based contrastive learning with comprehensive descriptive information to elevate the fine-grained understanding of VLMs.", "arxiv_id": "2508.02329v4", "arxiv_authors": ["Ziteng Wang", "Siqi Yang", "Limeng Qiao", "Lin Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a14b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 969653, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ba"}, "filepath": "data/2507.14312v2.png", "tags": [], "_media_type": "image", "_rand": 0.999735884242305, "type": "Poster", "name": "CLIPTTA: Robust Contrastive Vision-Language Test-Time Adaptation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115350", "abstract": "Vision-language models (VLMs) like CLIP exhibit strong zero-shot capabilities but often fail to generalize under distribution shifts. Test-time adaptation (TTA) allows models to update at inference time without labeled data, typically via entropy minimization. However, this objective is fundamentally misaligned with the contrastive image-text training of VLMs, limiting adaptation performance and introducing failure modes such as pseudo-label drift and class collapse. We propose CLIPTTA, a new gradient-based TTA method for vision-language models that leverages a soft contrastive loss aligned with CLIP\u2019s pre-training objective. We provide a theoretical analysis of CLIPTTA\u2019s gradients, showing how its batch-aware design mitigates the risk of collapse. We further extend CLIPTTA to the open-set setting, where both in-distribution (ID) and out-of-distribution (OOD) samples are encountered, using an Outlier Contrastive Exposure (OCE) loss to improve OOD detection. 
Evaluated on 75 datasets spanning diverse distribution shifts, CLIPTTA consistently outperforms entropy-based objectives and is highly competitive with state-of-the-art TTA methods, outperforming them on a large number of datasets and exhibiting more stable performance across diverse shifts.", "arxiv_id": "2507.14312v2", "arxiv_authors": ["Marc Lafon", "Gustavo Adolfo Vargas Hakim", "Cl\u00e9ment Rambour", "Christian Desrosier", "Nicolas Thome"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a14c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1016590, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5bb"}, "filepath": "data/2510.20685v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996528423770519, "type": "Poster", "name": "C-NAV: Continual Object Navigation with Dual-Path Anti-Forgetting and Adaptive Experience Selection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117919", "abstract": "Embodied agents are expected to perform object navigation in dynamic, open-world environments. However, existing approaches typically rely on static trajectories and a fixed set of object categories during training, overlooking the real-world requirement for continual adaptation to evolving scenarios. To facilitate related studies, we introduce the continual object navigation benchmark, which requires agents to acquire navigation skills for new object categories while avoiding catastrophic forgetting of previously learned knowledge. To tackle this challenge, we propose C-Nav, a continual visual navigation framework that integrates two key innovations: (1) A dual-path anti-forgetting mechanism, which comprises feature distillation that aligns multi-modal inputs into a consistent representation space to ensure representation consistency, and feature replay that retains temporal features within the action decoder to ensure policy consistency. (2) An adaptive sampling strategy that selects diverse and informative experiences, thereby reducing redundancy and minimizing memory overhead. Extensive experiments across multiple model architectures demonstrate that C-Nav consistently outperforms existing approaches, achieving superior performance even compared to baselines with full trajectory retention, while significantly lowering memory requirements. 
The benchmark and code will be publicly available.", "arxiv_id": "2510.20685v1", "arxiv_authors": ["Ming-Ming Yu", "Fei Zhu", "Wenzhuo Liu", "Yirong Yang", "Qunbo Wang", "Wenjun Wu", "Jing Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a14d"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1051711, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5bc"}, "filepath": "data/2502.02589v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996205481472018, "type": "Poster", "name": "COCONut-PanCap: Joint Panoptic Segmentation and Grounded Captions for Fine-Grained Understanding and Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121708", "abstract": "This paper introduces the COCONut-PanCap dataset, created to enhance panoptic segmentation and grounded image captioning. Building upon the COCO dataset with advanced COCONut panoptic masks, this dataset aims to overcome limitations in existing image-text datasets that often lack detailed, scene-comprehensive descriptions. The COCONut-PanCap dataset incorporates fine-grained, region-level captions grounded in panoptic segmentation masks, ensuring consistency and improving the detail of generated captions. Through human-edited, densely annotated descriptions, COCONut-PanCap supports improved training of vision-language models (VLMs) for image understanding and generative models for text-to-image tasks. Experimental results demonstrate that COCONut-PanCap significantly boosts performance across understanding and generation tasks, offering complementary benefits to large-scale datasets. This dataset sets a new benchmark for evaluating models on joint panoptic segmentation and grounded captioning tasks, addressing the need for high-quality, detailed image-text annotations in multi-modal learning.", "arxiv_id": "2502.02589v1", "arxiv_authors": ["Xueqing Deng", "Qihang Yu", "Ali Athar", "Chenglin Yang", "Linjie Yang", "Xiaojie Jin", "Xiaohui Shen", "Liang-Chieh Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a14e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4649148, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5bd"}, "filepath": "data/2505.21437v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999619649118718, "type": "Poster", "name": "CoDA: Coordinated Diffusion Noise Optimization for Whole-Body Manipulation of Articulated Objects", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117350", "abstract": "Synthesizing whole-body manipulation of articulated objects, including body motion, hand motion, and object motion, is a critical yet challenging task with broad applications in virtual humans and robotics. The core challenges are twofold. First, achieving realistic whole-body motion requires tight coordination between the hands and the rest of the body, as their movements are interdependent during manipulation.
Second, articulated object manipulation typically involves high degrees of freedom and demands higher precision, often requiring the fingers to be placed at specific regions to actuate movable parts. To address these challenges, we propose a novel coordinated diffusion noise optimization framework. Specifically, we perform noise-space optimization over three specialized diffusion models for the body, left hand, and right hand, each trained on its own motion dataset to improve generalization. Coordination naturally emerges through gradient flow along the human kinematic chain, allowing the global body posture to adapt in response to hand motion objectives with high fidelity. To further enhance precision in hand-object interaction, we adopt a unified representation based on basis point sets (BPS), where end-effector positions are encoded as distances to the same BPS used for object geometry. This unified representation captures fine-grained spatial relationships between the hand and articulated object parts, and the resulting trajectories serve as targets to guide the optimization of diffusion noise, producing highly accurate interaction motion. We conduct extensive experiments demonstrating that our method outperforms existing approaches in motion quality and physical plausibility, and enables various capabilities such as object pose control, simultaneous walking and manipulation, and whole-body generation from hand-only data. The code will be released for reproducibility.", "arxiv_id": "2505.21437v1", "arxiv_authors": ["Huaijin Pi", "Zhi Cen", "Zhiyang Dou", "Taku Komura"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a14f"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2253161, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5be"}, "filepath": "data/2505.16524v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991475562755892, "type": "Poster", "name": "CodeMerge: Codebook-Guided Model Merging for Robust Test-Time Adaptation in Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119523", "abstract": "Maintaining robust 3D perception under dynamic and unpredictable test-time conditions remains a critical challenge for autonomous driving systems. Existing test-time adaptation (TTA) methods often fail in high-variance tasks like 3D object detection due to unstable optimization and sharp minima. While recent model merging strategies based on linear mode connectivity (LMC) offer improved stability by interpolating between fine-tuned checkpoints, they are computationally expensive, requiring repeated checkpoint access and multiple forward passes. In this paper, we introduce CodeMerge, a lightweight and scalable model merging framework that bypasses these limitations by operating in a compact latent space. Instead of loading full models, CodeMerge represents each checkpoint with a low-dimensional fingerprint derived from the source model\u2019s penultimate features and constructs a key-value codebook. We compute merging coefficients using regularized leverage scores on these fingerprints, enabling efficient model composition without compromising adaptation quality.
Our method achieves strong performance across challenging benchmarks, improving end-to-end 3D detection by 14.9\\% NDS on nuScenes-C and LiDAR-based detection by over 7.6\\% mAP on nuScenes-to-KITTI, while benefiting downstream tasks such as online mapping, motion prediction and planning even without training. Code and pretrained models are released in the supplementary material.", "arxiv_id": "2505.16524v1", "arxiv_authors": ["Huitong Yang", "Zhuoxiao Chen", "Fengyi Zhang", "Zi Huang", "Yadan Luo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a150"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1093571, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5bf"}, "filepath": "data/2510.17847v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997709663164712, "type": "Poster", "name": "CoDIO: Efficient Data Selection for Visual Instruction Tuning via Coupled Importance-Diversity Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115955", "abstract": "Multimodal large language models (MLLMs) rely heavily on instruction tuning to align vision and language capabilities, yet the computational cost of training on large-scale datasets remains a major bottleneck. Existing data selection methods aim to mitigate this by selecting important and diverse subsets, but they often suffer from two critical drawbacks: high computational overhead from processing the entire dataset and suboptimal data selection due to separate treatment of importance and diversity. We introduce CoIDO, a novel dual-objective framework that jointly optimizes data importance and diversity to overcome these challenges. Unlike existing approaches that require costly evaluations across the whole dataset, CoIDO employs a lightweight plug-in scorer. This scorer is trained on just a small random sample of data to learn the distribution of the candidate set, drastically reducing computational demands. By leveraging a homoscedastic uncertainty-based formulation, CoIDO effectively balances importance and diversity during training, enabling the scorer to assign CoIDO scores to all data points. This unified scoring approach allows for direct ranking and selection of the most valuable subsets \u2014 completely bypassing the need for specialized algorithms. In our experiments, we trained the CoIDO Scorer using only 20% of randomly sampled data. Once trained, CoIDO was applied to the entire dataset to select a 20% subset for instruction tuning. On the widely-used LLaVA-1.5-7B model across ten downstream tasks, this selected subset achieved an impressive 98.2% of the performance of full-data fine-tuning, on average. Moreover, CoIDO outperforms all competitors in terms of both efficiency (lowest training FLOPs) and aggregated accuracy.
Our code is available at: https://anonymous.4open.science/r/CoIDO", "arxiv_id": "2510.17847v1", "arxiv_authors": ["Yichen Yan", "Ming Zhong", "Qi Zhu", "Xiaoling Gu", "Jinpeng Chen", "Huan Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a151"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1086038, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c0"}, "filepath": "data/2509.22010v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999322182330799, "type": "Poster", "name": "CoFFT: Chain of Foresight-Focus Thought for Visual Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119460", "abstract": "Despite significant advances in Vision Language Models (VLMs), they remain constrained by the complexity and redundancy of visual input. When images contain large amounts of irrelevant information, VLMs are susceptible to interference, thus generating excessive task-irrelevant reasoning processes or even hallucinations. This limitation stems from their inability to discover and process the required regions during reasoning precisely. To address this limitation, we present the Chain of Foresight-Focus Thought (CoFFT), a novel training-free approach that enhances VLMs' visual reasoning by emulating human visual cognition. Each Foresight-Focus Thought consists of three stages: (1) Diverse Sample Generation: generates diverse reasoning samples to explore potential reasoning paths, where each sample contains several reasoning steps; (2) Dual Foresight Decoding: rigorously evaluates these samples based on both visual focus and reasoning progression, adding the first step of the optimal sample to the reasoning process; (3) Visual Focus Adjustment: precisely adjusts visual focus toward regions most beneficial for future reasoning, before returning to stage (1) to generate subsequent reasoning samples until reaching the final answer. These stages function iteratively, creating an interdependent cycle where reasoning guides visual focus and visual focus informs subsequent reasoning. Empirical results across multiple benchmarks using Qwen2.5-VL, InternVL-2.5, and Llava-Next demonstrate consistent performance improvements of 3.1-5.8\\% with a controllable increase in computational overhead.", "arxiv_id": "2509.22010v3", "arxiv_authors": ["Xinyu Zhang", "Yuxuan Dong", "Lingling Zhang", "Chengyou Jia", "Zhuohang Dang", "Basura Fernando", "Jun Liu", "Mike Zheng Shou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a152"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1068154, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c1"}, "filepath": "data/2508.21046v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992884515447582, "type": "Poster", "name": "CogVLA: Cognition-Aligned Vision-Language-Action Models via Instruction-Driven Routing & Sparsification", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119023", "abstract": "Recent Vision-Language-Action (VLA) models built on
pre-trained Vision-Language Models (VLMs) require extensive post-training, resulting in high computational overhead that limits scalability and deployment. Existing sparsification strategies\u2014such as Mixture-of-Depths, layer skipping, and early exit\u2014fall short by neglecting the semantic coupling across vision-language-action modalities, and focusing narrowly on intra-LLM computation while overlooking end-to-end coherence from perception to control. To address these challenges, we propose **CogVLA**, a Cognition-Aligned Vision-Language-Action framework that leverages instruction-driven routing and sparsification to improve both efficiency and performance. CogVLA draws inspiration from human multimodal coordination and introduces a 3-stage progressive architecture. 1) **Encoder-FiLM based Aggregation Routing (EFA-Routing)** injects instruction information into the vision encoder to selectively aggregate and compress dual-stream visual tokens, forming an instruction-aware latent representation. 2) Building upon this compact visual encoding, **LLM-FiLM based Pruning Routing (LFP-Routing)** introduces action intent into the language model by pruning instruction-irrelevant visually grounded tokens, thereby achieving token-level sparsity. 3) To ensure that compressed perception inputs can still support accurate and coherent action generation, we introduce **V\u2011L\u2011A Coupled Attention (CAtten)**, which combines causal vision-language attention with bidirectional action parallel decoding. Extensive experiments on the LIBERO benchmark and real-world robotic tasks demonstrate that CogVLA achieves state-of-the-art performance with success rates of 97.4\\% and 70.0\\%, respectively, while reducing training costs by 2.5$\\times$ and decreasing inference latency by 2.8$\\times$ compared to OpenVLA.", "arxiv_id": "2508.21046v2", "arxiv_authors": ["Wei Li", "Renshan Zhang", "Rui Shao", "Jie He", "Liqiang Nie"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a153"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4363331, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c2"}, "filepath": "data/2509.24741v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995621315004319, "type": "Poster", "name": "Collaborating Vision, Depth, and Thermal Signals for Multi-Modal Tracking: Dataset and Algorithm", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121786", "abstract": "Existing multi-modal object tracking approaches primarily focus on dual-modal paradigms, such as RGB-Depth or RGB-Thermal, yet remain challenged in complex scenarios due to limited input modalities. To address this gap, this work introduces a novel multi-modal tracking task that leverages three complementary modalities, including visible RGB, Depth (D), and Thermal Infrared (TIR), aiming to enhance robustness in complex scenarios. To support this task, we construct a new multi-modal tracking dataset, coined RGBDT500, which consists of 500 videos with synchronised frames across the three modalities. Each frame provides spatially aligned RGB, depth, and thermal infrared images with precise object bounding box annotations. Furthermore, we propose a novel multi-modal tracker, dubbed RDTTrack.
RDTTrack integrates tri-modal information for robust tracking by leveraging a pretrained RGB-only tracking model and prompt learning techniques. Specifically, RDTTrack fuses thermal infrared and depth modalities under a proposed orthogonal projection constraint, then integrates them with RGB signals as prompts for the pre-trained foundation tracking model, effectively harmonising tri-modal complementary cues. The experimental results demonstrate the effectiveness and advantages of the proposed method, showing significant improvements over existing dual-modal approaches in terms of tracking accuracy and robustness in complex scenarios.", "arxiv_id": "2509.24741v1", "arxiv_authors": ["Xue-Feng Zhu", "Tianyang Xu", "Yifan Pan", "Jinjie Gu", "Xi Li", "Jiwen Lu", "Xiao-Jun Wu", "Josef Kittler"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a154"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.473Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1039498, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c3"}, "filepath": "data/2504.10514v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997647609540263, "type": "Poster", "name": "ColorBench: Can VLMs See and Understand the Colorful World? A Comprehensive Benchmark for Color Perception, Reasoning, and Robustness", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121433", "abstract": "Color plays an important role in human perception and usually provides critical clues in visual reasoning. However, it is unclear whether and how vision-language models (VLMs) can perceive, understand, and leverage color as humans. This paper introduces ColorBench, an innovative benchmark meticulously crafted to assess the capabilities of VLMs in color understanding, including color perception, reasoning, and robustness. By curating a suite of diverse test scenarios, with grounding in real applications, ColorBench evaluates how these models perceive colors, infer meanings from color-based cues, and maintain consistent performance under varying color transformations. Through an extensive evaluation of 32 VLMs with varying language models and vision encoders, our paper reveals some undiscovered findings: (i) The scaling law (larger models are better) still holds on ColorBench, while the language model plays a more important role than the vision encoder. (ii) However, the performance gaps across models are relatively small, indicating that color understanding has been largely neglected by existing VLMs. (iii) CoT reasoning improves color understanding accuracies and robustness, though they are vision-centric tasks. (iv) Color clues are indeed leveraged by VLMs on ColorBench but they can also mislead models in some tasks. These findings highlight the critical limitations of current VLMs and underscore the need to enhance color comprehension.
Our ColorBench can serve as a foundational tool for advancing the study of human-level color understanding of multimodal AI.", "arxiv_id": "2504.10514v2", "arxiv_authors": ["Yijun Liang", "Ming Li", "Chenrui Fan", "Ziyue Li", "Dang Nguyen", "Kwesi Cobbina", "Shweta Bhardwaj", "Jiuhai Chen", "Fuxiao Liu", "Tianyi Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a155"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1073477, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c4"}, "filepath": "data/2503.19034v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994559485251465, "type": "Poster", "name": "Color Conditional Generation with Sliced Wasserstein Guidance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115823", "abstract": "We propose SW-Guidance, a training-free approach for image generation conditioned on the color distribution of a reference image. While it is possible to generate an image with fixed colors by first creating an image from a text prompt and then applying a color style transfer method, this approach often results in semantically meaningless colors in the generated image. Our method solves this problem by modifying the sampling process of a diffusion model to incorporate the differentiable Sliced 1-Wasserstein distance between the color distribution of the generated image and the reference palette. Our method outperforms state-of-the-art techniques for color-conditional generation in terms of color similarity to the reference, producing images that not only match the reference colors but also maintain semantic coherence with the original text prompt. Our source code is available at https://anonymous.4open.science/r/sw-guidance-3E7D.", "arxiv_id": "2503.19034v1", "arxiv_authors": ["Alexander Lobashev", "Maria Larchenko", "Dmitry Guskov"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a156"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 8540007, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c5"}, "filepath": "data/2506.13260v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997253974699456, "type": "Poster", "name": "COME: Adding Scene-Centric Forecasting Control to Occupancy World Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119126", "abstract": "World models are critical for autonomous driving to simulate environmental dynamics and generate synthetic data.Existing methods struggle to disentangle ego-vehicle motion (perspective shifts) from scene evolvement (agent interactions), leading to suboptimal predictions. Instead, we propose to separate environmental changes from ego-motion by leveraging the scene-centric coordinate systems. In this paper, we introduce COME: a framework that integrates scene-centric forecasting Control into the Occupancy world ModEl. 
Specifically, COME first generates ego-irrelevant, spatially consistent future features through a scene-centric prediction branch, which are then converted into scene conditions using a tailored ControlNet. These condition features are subsequently injected into the occupancy world model, enabling more accurate and controllable future occupancy predictions. Experimental results on the nuScenes-Occ3D dataset show that COME achieves consistent and significant improvements over state-of-the-art (SOTA) methods across diverse configurations, including different input sources (ground-truth, camera-based, fusion-based occupancy) and prediction horizons (3s and 8s). For example, under the same settings, COME achieves a 26.3% better mIoU than DOME and a 23.7% better mIoU than UniScene. These results highlight the efficacy of disentangled representation learning in enhancing spatio-temporal prediction fidelity for world models. Code will be released.", "arxiv_id": "2506.13260v1", "arxiv_authors": ["Yining Shi", "Kun Jiang", "Qiang Meng", "Ke Wang", "Jiabao Wang", "Wenchao Sun", "Tuopu Wen", "Mengmeng Yang", "Diange Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a157"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1096736, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c6"}, "filepath": "data/2506.16685v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997885439105372, "type": "Poster", "name": "Compliant Residual DAgger: Improving Real-World Contact-Rich Manipulation with Human Corrections", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117065", "abstract": "We address key challenges in Dataset Aggregation (DAgger) for real-world contact-rich manipulation: how to collect informative human correction data and how to effectively update policies with this new data. We introduce Compliant Residual DAgger (CR-DAgger), which contains two novel components: 1) a Compliant Intervention Interface that leverages compliance control, allowing humans to provide gentle, accurate delta action corrections without interrupting the ongoing robot policy execution; and 2) a Compliant Residual Policy formulation that learns from human corrections while incorporating force feedback and force control. Our system significantly enhances performance on precise contact-rich manipulation tasks using minimal correction data, improving base policy success rates by over 50\\% on two challenging tasks (book flipping and belt assembly) while outperforming both retraining-from-scratch and finetuning approaches. 
Through extensive real-world experiments, we provide practical guidance for implementing effective DAgger in real-world robot learning tasks.", "arxiv_id": "2506.16685v2", "arxiv_authors": ["Xiaomeng Xu", "Yifan Hou", "Zeyi Liu", "Shuran Song"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a158"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3277968, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c7"}, "filepath": "data/2507.12318v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998441735181933, "type": "Poster", "name": "Compositional Discrete Latent Code for High Fidelity, Productive Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120203", "abstract": "We argue that diffusion models' success in modeling complex distributions comes, for the most part, from their conditioning. This paper investigates the representation used to condition diffusion models from the perspective that ideal representations should improve modeling the data distribution, be easy to generate, and be compositional to allow generalizing outside the training distribution. We introduce Discrete Latent Code (DLC), an image representation derived from Simplicial Embeddings trained with a self-supervised learning objective. DLCs are sequences of discrete tokens, as opposed to the standard continuous image embeddings. They are easy to generate and their compositionality enables sampling of novel images beyond the training distribution. Diffusion models trained with DLCs improve generation fidelity, establishing a new state-of-the-art for unconditional image generation on ImageNet. Additionally, we show that composing DLCs allows the image generator to produce interesting out-of-distribution samples that coherently combine the semantics of images in diverse ways. Finally, we showcase how DLCs can enable text-to-image generation by leveraging large-scale pretrained language models. Using only 9M image-caption pairs, we efficiently finetune a text diffusion model to generate novel DLCs that produce samples outside of the data distribution used to train the image generator.", "arxiv_id": "2507.12318v2", "arxiv_authors": ["Samuel Lavoie", "Michael Noukhovitch", "Aaron Courville"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a159"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1164705, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c8"}, "filepath": "data/2505.15450v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990297428605299, "type": "Poster", "name": "Comprehensive Assessment and Analysis for NSFW Content Erasure in Text-to-Image Diffusion models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121540", "abstract": "Text-to-image diffusion models have gained widespread application across various domains, demonstrating remarkable creative potential. 
However, the strong generalization capabilities of diffusion models can inadvertently lead to the generation of not-safe-for-work (NSFW) content, posing significant risks to their safe deployment. While several concept erasure methods have been proposed to mitigate the issue associated with NSFW content, a comprehensive evaluation of their effectiveness across various scenarios remains absent. To bridge this gap, we introduce a full-pipeline toolkit specifically designed for concept erasure and conduct the first systematic study of NSFW concept erasure methods. By examining the interplay between the underlying mechanisms and empirical observations, we provide in-depth insights and practical guidance for the effective application of concept erasure methods in various real-world scenarios, with the aim of advancing the understanding of content safety in diffusion models and establishing a solid foundation for future research and development in this critical area. We publicly release our code at https://anonymous.4open.science/r/ErasureBenchmark-7BBB to provide an open platform for further exploration and research.", "arxiv_id": "2505.15450v2", "arxiv_authors": ["Die Chen", "Zhiwen Li", "Cen Chen", "Yuexiang Xie", "Xiaodan Li", "Jinyan Ye", "Yingda Chen", "Yaliang Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a15a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 912056, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5c9"}, "filepath": "data/2503.21757v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994793013991221, "type": "Poster", "name": "Compress & Cache: Vision token compression for efficient generation and retrieval", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116153", "abstract": "This work aims to compress the vision tokens of an LVLM into a representation that is simultaneously suitable for (a) generative and (b) discriminative tasks, (c) is nearly lossless, and (d) storage-efficient. To this end, we propose C&C, a novel compression method that leverages the LVLM itself for task-agnostic visual token compression. Unlike prior methods that perform token reduction on-the-fly, our approach offloads computation to a dedicated, upfront indexing stage, effectively decoupling compression from generation. This enables learning more powerful representations for generation during inference. At the core of C&C is a ``double-forward pass'' training strategy. During the first forward pass, the LLM (of the LVLM) creates a bottleneck by compressing the dense visual tokens into a few summary tokens. Subsequently, the second forward pass processes the language instruction(s) alongside the summary tokens, used as a direct replacement for the image ones. The training of C&C is guided by two key losses: an autoregressive loss applied after the second pass that provides a direct optimization objective for reconstructing the original information flow, and a contrastive loss applied after the first pass to bolster the representational strength of the summary tokens, particularly for discriminative tasks. Moreover, we propose stage-specific adapters for further enhancing performance. C&C produces highly informative compressed representations. 
An in-depth ablation study confirms the efficacy of our approach. For generative tasks, we achieve a 2x higher compression rate without compromising capabilities, setting a new state-of-the-art. For discriminative tasks, we establish new state-of-the-art results on image retrieval and compositionality benchmarks.", "arxiv_id": "2503.21757v1", "arxiv_authors": ["Adrian Bulat", "Yassine Ouali", "Georgios Tzimiropoulos"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a15b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1733158, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ca"}, "filepath": "data/2510.23607v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990284325865093, "type": "Poster", "name": "Concerto: Joint 2D-3D Self-Supervised Learning Emerges Spatial Representations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116839", "abstract": "Humans learn abstract concepts through multisensory synergy, and once formed, such representations can often be recalled from a single modality. Inspired by this principle, we introduce Concerto, a minimalist simulation of human concept learning for spatial cognition, combining 3D intra-modal self-distillation with 2D-3D cross-modal joint embedding. Despite its simplicity, Concerto learns more coherent and informative spatial features, as demonstrated by zero-shot visualizations. It outperforms both standalone SOTA 2D and 3D self-supervised models by 14.2% and 4.8%, respectively, as well as their feature concatenation, in linear probing for 3D scene perception. With full fine-tuning, Concerto sets new SOTA results across multiple scene understanding benchmarks (e.g., 80.7% mIoU on ScanNet). We further present a variant of Concerto tailored for video-lifted point cloud spatial understanding, and a translator that linearly projects Concerto representations into CLIP\u2019s language space, enabling open-world perception. These results highlight that Concerto emerges spatial representations with superior fine-grained geometric and semantic consistency. Code and weights will be released.", "arxiv_id": "2510.23607v1", "arxiv_authors": ["Yujia Zhang", "Xiaoyang Wu", "Yixing Lao", "Chengyao Wang", "Zhuotao Tian", "Naiyan Wang", "Hengshuang Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a15c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6796208, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5cb"}, "filepath": "data/2505.16862v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998743860533026, "type": "Poster", "name": "Conditional Panoramic Image Generation via Masked Autoregressive Modeling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120131", "abstract": "Recent progress in panoramic image generation has underscored two critical limitations in existing approaches. 
First, most methods are built upon diffusion models, which are inherently ill-suited for equirectangular projection (ERP) panoramas due to the violation of the independently and identically distributed (i.i.d.) Gaussian noise assumption caused by their spherical mapping. Second, these methods often treat text-conditioned generation (text-to-panorama) and image-conditioned generation (panorama outpainting) as separate tasks, relying on distinct architectures and task-specific data. In this work, we propose a unified framework, Panoramic AutoRegressive model (PAR), which leverages masked autoregressive modeling to address these challenges. PAR avoids the i.i.d. assumption constraint and integrates text and image conditioning into a cohesive architecture, enabling seamless generation across tasks. To address the inherent discontinuity in existing generative models, we introduce circular padding to enhance spatial coherence and propose a consistency alignment strategy to improve generation quality. Extensive experiments demonstrate competitive performance in text-to-panorama generation and panorama outpainting tasks while showcasing promising scalability and generalization capabilities. Code and models will be available.", "arxiv_id": "2505.16862v1", "arxiv_authors": ["Chaoyang Wang", "Xiangtai Li", "Lu Qi", "Xiaofan Lin", "Jinbin Bai", "Qianyu Zhou", "Yunhai Tong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a15d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3881665, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5cc"}, "filepath": "data/2510.04564v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992228961149642, "type": "Poster", "name": "Conditional Representation Learning for Customized Tasks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119073", "abstract": "Conventional representation learning methods learn a universal representation that primarily captures dominant semantics, which may not always align with customized downstream tasks. For instance, in animal habitat analysis, researchers prioritize scene-related features, whereas universal embeddings emphasize categorical semantics, leading to suboptimal results. As a solution, existing approaches resort to supervised fine-tuning, which, however, incurs high computational and annotation costs. In this paper, we propose Conditional Representation Learning (CRL), aiming to extract representations tailored to arbitrary user-specified criteria. Specifically, we reveal that the semantics of a space are determined by its basis, thereby enabling a set of descriptive words to form the basis for a customized feature space. Building upon this insight, given a user-specified criterion, CRL first employs a large language model (LLM) to generate descriptive texts to construct the semantic basis, then projects the image representation into this conditional feature space leveraging a vision-language model (VLM). The transformed representation better captures semantics for the specific criterion, which could be utilized for customized tasks. Extensive experiments on customized downstream classification and retrieval demonstrate the superiority and generality of the proposed CRL. 
The code will be released.", "arxiv_id": "2510.04564v1", "arxiv_authors": ["Honglin Liu", "Chao Sun", "Peng Hu", "Yunfan Li", "Xi Peng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a15e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1051147, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5cd"}, "filepath": "data/2505.11123v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991912166103577, "type": "Poster", "name": "Conditioning Matters: Training Diffusion Policies is Faster Than You Think", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115968", "abstract": "Diffusion policies have emerged as a mainstream paradigm for building vision-language-action (VLA) models. Although they demonstrate strong robot control capabilities, their training efficiency remains suboptimal. In this work, we identify a fundamental challenge in conditional diffusion policy training: when generative conditions are hard to distinguish, the training objective degenerates into modeling the marginal action distribution, a phenomenon we term loss collapse. To overcome this, we propose Cocos, a simple yet general solution that modifies the source distribution in the conditional flow matching to be condition-dependent. By anchoring the source distribution around semantics extracted from condition inputs, Cocos encourages stronger condition integration and prevents the loss collapse. We provide theoretical justification and extensive empirical results across simulation and real-world benchmarks. Our method achieves faster convergence and higher success rates than existing approaches, matching the performance of large-scale pre-trained VLAs using significantly fewer gradient steps and parameters. Cocos is lightweight, easy to implement, and compatible with diverse policy architectures, offering a general-purpose improvement to diffusion policy training.", "arxiv_id": "2505.11123v1", "arxiv_authors": ["Zibin Dong", "Yicheng Liu", "Yinchuan Li", "Hang Zhao", "Jianye Hao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a15f"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1458706, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ce"}, "filepath": "data/2506.09612v4.png", "tags": [], "_media_type": "image", "_rand": 0.9991354696539558, "type": "Poster", "name": "Consistent Story Generation: Unlocking the Potential of Zigzag Sampling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115906", "abstract": "Text-to-image generation models have made significant progress in producing high-quality images from textual descriptions, yet they continue to struggle with maintaining subject consistency across multiple images, a fundamental requirement for visual storytelling. 
Existing methods attempt to address this by either fine-tuning models on large-scale story visualization datasets, which is resource-intensive, or by using training-free techniques that share information across generations, which still yield limited success. In this paper, we introduce a novel training-free sampling strategy called Zigzag Sampling with Asymmetric Prompts and Visual Sharing to enhance subject consistency in visual story generation. Our approach proposes a zigzag sampling mechanism that alternates between asymmetric prompts to retain subject characteristics, while a visual sharing module transfers visual cues across generated images to further enforce consistency. Experimental results, based on both quantitative metrics and qualitative evaluations, demonstrate that our method significantly outperforms previous approaches in generating coherent and consistent visual stories.", "arxiv_id": "2506.09612v4", "arxiv_authors": ["Mingxiao Li", "Mang Ning", "Marie-Francine Moens"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a160"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1042476, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5cf"}, "filepath": "data/2507.04725v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992190739640027, "type": "Poster", "name": "Consistent Supervised-Unsupervised Alignment for Generalized Category Discovery", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118414", "abstract": "Generalized Category Discovery (GCD) focuses on classifying known categories while simultaneously discovering novel categories from unlabeled data. However, previous GCD methods face challenges due to inconsistent optimization objectives and category confusion. This leads to feature overlap and ultimately hinders performance on novel categories. To address these issues, we propose the Neural Collapse-inspired Generalized Category Discovery (NC-GCD) framework. By pre-assigning and fixing Equiangular Tight Frame (ETF) prototypes, our method ensures an optimal geometric structure and a consistent optimization objective for both known and novel categories. We introduce a Consistent ETF Alignment Loss that unifies supervised and unsupervised ETF alignment and enhances category separability. Additionally, a Semantic Consistency Matcher (SCM) is designed to maintain stable and consistent label assignments across clustering iterations. 
Our method significantly enhances novel category accuracy, demonstrating its effectiveness.", "arxiv_id": "2507.04725v1", "arxiv_authors": ["Jizhou Han", "Shaokun Wang", "Yuhang He", "Chenhao Ding", "Qiang Wang", "Xinyuan Gao", "SongLin Dong", "Yihong Gong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a161"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.474Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2197999, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d0"}, "filepath": "data/2412.00580v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994904483252086, "type": "Poster", "name": "Continuous Concepts Removal in Text-to-image Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115202", "abstract": "Text-to-image diffusion models have shown an impressive ability to generate high-quality images from input textual descriptions/prompts. However, concerns have been raised about the potential for these models to create content that infringes on copyrights or depicts disturbing subject matter. Removing specific concepts from these models is a promising solution to this issue. However, existing methods for concept removal do not work well in practical but challenging scenarios where concepts need to be continuously removed. Specifically, these methods lead to poor alignment between the text prompts and the generated image after the continuous removal process. To address this issue, we propose a novel concept removal approach called CCRT that includes a designed knowledge distillation paradigm. CCRT constrains the text-image alignment behavior during the continuous concept removal process by using a set of text prompts. These prompts are generated through our genetic algorithm, which employs a designed fuzzing strategy. 
To evaluate the effectiveness of CCRT, we conduct extensive experiments involving the removal of various concepts, algorithmic metrics, and human studies. The results demonstrate that CCRT can effectively remove the targeted concepts from the model in a continuous manner while maintaining high image generation quality (e.g., text-image alignment). The code of CCRT is available at https://anonymous.4open.science/r/CCRT-F3EE.", "arxiv_id": "2412.00580v2", "arxiv_authors": ["Tingxu Han", "Weisong Sun", "Yanrong Hu", "Chunrong Fang", "Yonglong Zhang", "Shiqing Ma", "Tao Zheng", "Zhenyu Chen", "Zhenting Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a162"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1509817, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d1"}, "filepath": "data/2505.11816v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996856284260536, "type": "Poster", "name": "Continuous Subspace Optimization for Continual Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116567", "abstract": "Continual learning aims to learn multiple tasks sequentially while preserving prior knowledge, but faces the challenge of catastrophic forgetting when acquiring new knowledge. Recently, approaches leveraging pre-trained models have gained increasing popularity to mitigate this issue, due to the strong generalization ability of foundation models. To adjust pre-trained models for new tasks, existing methods usually employ low-rank adaptation, which restricts parameter updates to a fixed low-rank subspace. However, constraining the optimization space inherently compromises the model's learning capacity, resulting in inferior performance. To address this limitation, we propose Continuous Subspace Optimization for Continual Learning (CoSO) to fine-tune the model in a series of subspaces rather than a single one. These sequential subspaces are dynamically determined through the singular value decomposition of gradients. CoSO updates the model by projecting gradients into these subspaces, ensuring memory-efficient optimization. To mitigate forgetting, the optimization subspaces of each task are set to be orthogonal to the historical task subspace. During task learning, CoSO maintains a task-specific component that captures the critical update directions associated with the current task. Upon completing a task, this component is used to update the historical task subspace, laying the groundwork for subsequent learning. 
Extensive experiments on multiple datasets demonstrate that CoSO significantly outperforms state-of-the-art methods, especially in challenging scenarios with long task sequences.", "arxiv_id": "2505.11816v1", "arxiv_authors": ["Quan Cheng", "Yuanyu Wan", "Lingyu Wu", "Chenping Hou", "Lijun Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a163"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1046512, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d2"}, "filepath": "data/2503.23356v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995250600286378, "type": "Poster", "name": "ControlFusion: A Controllable Image Fusion Network with Language-Vision Degradation Prompts", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117275", "abstract": "Current image fusion methods struggle with real-world composite degradations and lack the flexibility to accommodate user-specific needs. To address this, we propose ControlFusion, a controllable fusion network guided by language-vision prompts that adaptively mitigates composite degradations. On the one hand, we construct a degraded imaging model based on physical mechanisms, such as the Retinex theory and atmospheric scattering principle, to simulate composite degradations and provide a data foundation for addressing realistic degradations. On the other hand, we devise a prompt-modulated restoration and fusion network that dynamically enhances features according to degradation prompts, enabling adaptability to varying degradation levels. To support user-specific preferences in visual quality, a text encoder is incorporated to embed user-defined degradation types and levels as degradation prompts. Moreover, a spatial-frequency collaborative visual adapter is designed to autonomously perceive degradations from source images, thereby reducing complete reliance on user instructions. Extensive experiments demonstrate that ControlFusion outperforms SOTA fusion methods in fusion quality and degradation handling, particularly under real-world and compound degradations.", "arxiv_id": "2503.23356v2", "arxiv_authors": ["Linfeng Tang", "Yeda Wang", "Zhanchuan Cai", "Junjun Jiang", "Jiayi Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a164"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1054490, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d3"}, "filepath": "data/2506.03119v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992995675054366, "type": "Poster", "name": "Controllable Human-centric Keyframe Interpolation with Generative Prior", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118268", "abstract": "Existing interpolation methods use pre\u2011trained video diffusion priors to generate intermediate frames between sparsely sampled keyframes. 
In the absence of 3D geometric guidance, these methods struggle to produce plausible results for complex, articulated human motions and offer limited control over the synthesized dynamics. In this paper, we introduce PoseFuse3D Keyframe Interpolator (PoseFuse3D-KI), a novel framework that integrates 3D human guidance signals into the diffusion process for Controllable Human-centric Keyframe Interpolation (CHKI). To provide rich spatial and structural cues for interpolation, our PoseFuse3D, a 3D\u2011informed control model, features a novel SMPL\u2011X encoder that encodes and aggregates 3D geometry and shape into the 2D latent conditioning space, alongside a fusion network that integrates these 3D cues with 2D pose embeddings. For evaluation, we build CHKI-Video, a new dataset annotated with both 2D poses and 3D SMPL\u2011X parameters. We show that PoseFuse3D-KI consistently outperforms state-of-the-art baselines on CHKI-Video, achieving a 9\\% improvement in PSNR and a 38\\% reduction in LPIPS. Comprehensive ablations demonstrate that our PoseFuse3D model improves interpolation fidelity.", "arxiv_id": "2506.03119v1", "arxiv_authors": ["Zujin Guo", "Size Wu", "Zhongang Cai", "Wei Li", "Chen Change Loy"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a165"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1053134, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d4"}, "filepath": "data/2505.21665v1.png", "tags": [], "_media_type": "image", "_rand": 0.999944241871534, "type": "Poster", "name": "Convergent Functions, Divergent Forms", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119410", "abstract": "We introduce LOKI, a compute-efficient framework for co-designing morphologies and control policies that generalize across unseen tasks. Inspired by biological adaptation\u2014where animals quickly adjust to morphological changes\u2014our method overcomes the inefficiencies of traditional evolutionary and quality-diversity algorithms. We propose learning convergent functions: shared control policies trained across clusters of morphologically similar designs in a learned latent space, drastically reducing the training cost per design. Simultaneously, we promote divergent forms by replacing mutation with dynamic local search, enabling broader exploration and preventing premature convergence. The policy reuse allows us to explore $\\sim780\\times$ more designs using 78\\% fewer simulation steps and 40\\% less compute per design. Local competition paired with a broader search results in a diverse set of high-performing final morphologies. Using the UNIMAL design space and a flat-terrain locomotion task, LOKI discovers a rich variety of designs\u2014ranging from quadrupeds to crabs, bipedals, and spinners\u2014far more diverse than those produced by prior work. These morphologies also transfer better to unseen downstream tasks in agility, stability, and manipulation domains (e.g. $2 \\times$ higher reward on bump and push box incline tasks). 
Overall, our approach produces designs that are both diverse and adaptable, with substantially greater sample efficiency than existing co-design methods.", "arxiv_id": "2505.21665v1", "arxiv_authors": ["Hyeonseong Jeon", "Ainaz Eftekhar", "Aaron Walsman", "Kuo-Hao Zeng", "Ali Farhadi", "Ranjay Krishna"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a166"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1518916, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d5"}, "filepath": "data/2509.19245v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999643512095626, "type": "Poster", "name": "ConViS-Bench: Estimating Video Similarity Through Semantic Concepts", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121680", "abstract": "What does it mean for two videos to be similar? Videos may appear similar when judged by the actions they depict, yet entirely different if evaluated based on the locations where they were filmed. While humans naturally compare videos by taking different aspects into account, this ability has not been thoroughly studied and presents a challenge for models that often depend on broad global similarity scores. Large Multimodal Models (LMMs) with video understanding capabilities open new opportunities for leveraging natural language in comparative video tasks. We introduce Concept-based Video Similarity estimation (ConViS), a novel task that compares pairs of videos by computing interpretable similarity scores across a predefined set of key semantic concepts. ConViS allows for human-like reasoning about video similarity and enables new applications such as concept-conditioned video retrieval. To support this task, we also introduce ConViS-Bench, a new benchmark comprising carefully annotated video pairs spanning multiple domains. Each pair comes with concept-level similarity scores and textual descriptions of both differences and similarities. Additionally, we benchmark several state-of-the-art models on ConViS, providing insights into their alignment with human judgments. Our results reveal significant performance differences on ConViS, indicating that some concepts present greater challenges for estimating video similarity. 
We believe that ConViS-Bench will serve as a valuable resource for advancing research in language-driven video understanding.", "arxiv_id": "2509.19245v1", "arxiv_authors": ["Benedetta Liberatori", "Alessandro Conti", "Lorenzo Vaquero", "Yiming Wang", "Elisa Ricci", "Paolo Rota"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a167"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2217954, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d6"}, "filepath": "data/2412.06740v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998114180491708, "type": "Poster", "name": "Convolution Goes Higher-Order: A Biologically Inspired Mechanism Empowers Image Classification", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115531", "abstract": "We propose a novel enhancement to Convolutional Neural Networks (CNNs) by incorporating learnable higher-order convolutions inspired by nonlinear biological visual processing. Our model extends the classical convolution operator using a Volterra-like expansion to capture multiplicative interactions observed in biological vision. Through extensive evaluation on standard benchmarks and synthetic datasets, we demonstrate that our architecture consistently outperforms traditional CNN baselines, achieving optimal performance with 3rd/4th order expansions. Systematic perturbation analysis and Representational Similarity Analysis reveal that different orders of convolution process distinct aspects of visual information, aligning with the statistical properties of natural images. This biologically-inspired approach offers both improved performance and deeper insights into visual information processing.", "arxiv_id": "2412.06740v1", "arxiv_authors": ["Simone Azeglio", "Olivier Marre", "Peter Neri", "Ulisse Ferrari"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a168"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 951746, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d7"}, "filepath": "data/2510.23495v1.png", "tags": [], "_media_type": "image", "_rand": 0.999326343625516, "type": "Poster", "name": "COOPERA: Continual Open-Ended Human-Robot Assistance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115330", "abstract": "To understand and collaborate with humans, robots must account for individual human traits, habits, and activities over time. However, most robotic assistants lack these abilities, as they primarily focus on predefined tasks in structured environments and lack a human model to learn from. This work introduces COOPERA, a novel framework for COntinual, OPen-Ended human-Robot Assistance, where simulated humans, driven by psychological traits and long-term intentions, interact with robots in complex environments. By integrating continuous human feedback, our framework, for the first time, enables the study of long-term, open-ended human-robot collaboration (HRC) in different collaborative tasks across various time-scales. 
Within COOPERA, we introduce a benchmark and an approach to personalize the robot's collaborative actions by learning human traits and context-dependent intents. Experiments validate the extent to which our simulated humans reflect realistic human behaviors and demonstrate the value of inferring and personalizing to human intents for open-ended and long-term HRC.", "arxiv_id": "2510.23495v1", "arxiv_authors": ["Chenyang Ma", "Kai Lu", "Ruta Desai", "Xavier Puig", "Andrew Markham", "Niki Trigoni"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a169"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 999128, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d8"}, "filepath": "data/2507.10449v1.png", "tags": [], "_media_type": "image", "_rand": 0.999533979099285, "type": "Poster", "name": "CoralVQA: A Large-Scale Visual Question Answering Dataset for Coral Reef Image Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121508", "abstract": "Coral reefs are vital yet vulnerable ecosystems that require continuous monitoring to support conservation. While coral reef images provide essential information in coral monitoring, interpreting such images remains challenging due to the need for domain expertise. Visual Question Answering (VQA), powered by Large Vision-Language Models (LVLMs), has great potential in user-friendly interaction with coral reef images. However, applying VQA to coral imagery demands a dedicated dataset that addresses two key challenges: domain-specific annotations and multidimensional questions. In this work, we introduce CoralVQA, the first large-scale VQA dataset for coral reef analysis. It contains 12,805 real-world coral images from 67 coral genera collected from 3 oceans, along with 277,653 question-answer pairs that comprehensively assess ecological and health-related conditions. To construct this dataset, we develop a semi-automatic data construction pipeline in collaboration with marine biologists to ensure both scalability and professional-grade data quality. CoralVQA presents novel challenges and provides a comprehensive benchmark for studying vision-language reasoning in the context of coral reef images. By evaluating several state-of-the-art LVLMs, we reveal key limitations and opportunities. 
These insights form a foundation for future LVLM development, with a particular emphasis on supporting coral conservation efforts.", "arxiv_id": "2507.10449v1", "arxiv_authors": ["Hongyong Han", "Wei Wang", "Gaowei Zhang", "Mingjie Li", "Yi Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a16a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1113492, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5d9"}, "filepath": "data/2505.17534v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998939802127603, "type": "Poster", "name": "Co-Reinforcement Learning for Unified Multimodal Understanding and Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117287", "abstract": "This paper presents a pioneering exploration of reinforcement learning (RL) via group relative policy optimization for unified multimodal large language models (ULMs), aimed at simultaneously reinforcing generation and understanding capabilities. Through systematic pilot studies, we uncover the significant potential of ULMs to enable the synergistic co-evolution of dual capabilities within a shared policy optimization framework. Building on this insight, we introduce \\textbf{CoRL}, a co-reinforcement learning framework comprising a unified RL stage for joint optimization and a refined RL stage for task-specific enhancement. With the proposed CoRL, our resulting model, \\textbf{ULM-R1}, achieves average improvements of \\textbf{7\\%} on three text-to-image generation datasets and \\textbf{23\\%} on nine multimodal understanding benchmarks. These results demonstrate the effectiveness of CoRL and highlight the substantial benefit of reinforcement learning in facilitating cross-task synergy and optimization for ULMs.", "arxiv_id": "2505.17534v2", "arxiv_authors": ["Jingjing Jiang", "Chongjie Si", "Jun Luo", "Hanwang Zhang", "Chao Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a16b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1095218, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5da"}, "filepath": "data/2510.20238v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993332536061086, "type": "Poster", "name": "COS3D: Collaborative Open-Vocabulary 3D Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119686", "abstract": "Open-vocabulary 3D segmentation is a fundamental yet challenging task, requiring a mutual understanding of both segmentation and language. However, existing Gaussian-splatting-based methods rely either on a single 3D language field, leading to inferior segmentation, or on pre-computed class-agnostic segmentations, suffering from error accumulation. To address these limitations, we present COS3D, a new collaborative prompt-segmentation framework that contributes to effectively integrating complementary language and segmentation cues throughout its entire pipeline. 
We first introduce the new concept of collaborative field, comprising an instance field and a language field, as the cornerstone for collaboration. During training, to effectively construct the collaborative field, our key idea is to capture the intrinsic relationship between the instance field and the language field through a novel instance-to-language feature mapping and an efficient two-stage training strategy. During inference, to bridge distinct characteristics of the two fields, we further design an adaptive language-to-instance prompt refinement, promoting high-quality prompt-segmentation inference. Extensive experiments not only demonstrate COS3D's leading performance over existing methods on two widely-used benchmarks but also show its high potential for various applications, i.e., novel image-based 3D segmentation, hierarchical segmentation, and robotics.", "arxiv_id": "2510.20238v1", "arxiv_authors": ["Runsong Zhu", "Ka-Hei Hui", "Zhengzhe Liu", "Qianyi Wu", "Weiliang Tang", "Shi Qiu", "Pheng-Ann Heng", "Chi-Wing Fu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a16c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1123548, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5db"}, "filepath": "data/2507.04451v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996889837810339, "type": "Poster", "name": "CoT-lized Diffusion: Let's Reinforce T2I Generation Step-by-step", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119569", "abstract": "Current text-to-image (T2I) generation models struggle to align spatial composition with the input text, especially in complex scenes. Even layout-based approaches yield suboptimal spatial control, as their generation process is decoupled from layout planning, making it difficult to refine the layout during synthesis. We present CoT-Diff, a framework that brings step-by-step CoT-style reasoning into T2I generation by tightly integrating Multimodal Large Language Model (MLLM)-driven 3D layout planning with the diffusion process. CoT-Diff enables layout-aware reasoning inline within a single diffusion round: at each denoising step, the MLLM evaluates intermediate predictions, dynamically updates the 3D scene layout, and continuously guides the generation process. The updated layout is converted into semantic conditions and depth maps, which are fused into the diffusion model via a condition-aware attention mechanism, enabling precise spatial control and semantic injection. 
Experiments on 3D Scene benchmarks show that CoT-Diff significantly improves spatial alignment and compositional fidelity, and outperforms the state-of-the-art method by 34.7% in complex scene spatial accuracy, thereby validating the effectiveness of this entangled generation paradigm.", "arxiv_id": "2507.04451v1", "arxiv_authors": ["Zheyuan Liu", "Munan Ning", "Qihui Zhang", "Shuo Yang", "Zhongrui Wang", "Yiwei Yang", "Xianzhe Xu", "Yibing Song", "Weihua Chen", "Fan Wang", "Li Yuan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a16d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.475Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2988523, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5dc"}, "filepath": "data/2510.18583v1.png", "tags": [], "_media_type": "image", "_rand": 0.999140385264758, "type": "Poster", "name": "CovMatch: Cross-Covariance Guided Multimodal Dataset Distillation with Trainable Text Encoder", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120122", "abstract": "Multimodal dataset distillation aims to synthesize a small set of image-text pairs that enables efficient training of large-scale vision-language models. While dataset distillation has shown promise in unimodal tasks, extending it to multimodal contrastive learning presents key challenges: learning cross-modal alignment and managing the high computational cost of large encoders. Prior approaches address scalability by freezing the text encoder and updating only the image encoder and text projection layer. However, we find this severely limits semantic alignment and becomes a bottleneck for performance scaling. We propose CovMatch, a scalable dataset distillation framework that aligns the cross-covariance of real and synthetic features while regularizing feature distributions within each modality. Unlike prior approaches, CovMatch enables joint optimization of both encoders, leading to stronger cross-modal alignment and improved performance. Evaluated on Flickr30K and COCO, CovMatch outperforms state-of-the-art multimodal distillation methods and achieves up to 6.8\\% absolute gains in retrieval accuracy using only 500 synthetic pairs.", "arxiv_id": "2510.18583v1", "arxiv_authors": ["Yongmin Lee", "Hye Won Chung"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a16e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1063891, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5dd"}, "filepath": "data/2505.20510v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996140141774583, "type": "Poster", "name": "CPathAgent: An Agent-based Foundation Model for Interpretable High-Resolution Pathology Image Analysis Mimicking Pathologists' Diagnostic Logic", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117525", "abstract": "Recent advances in computational pathology have led to the emergence of numerous foundation models. 
However, these approaches fail to replicate the diagnostic process of pathologists, as they either simply rely on general-purpose encoders with multi-instance learning for classification or directly apply multimodal models to generate reports from images. A significant limitation is their inability to emulate the diagnostic logic employed by pathologists, who systematically examine slides at low magnification for overview before progressively zooming in on suspicious regions to formulate comprehensive diagnoses. To address this gap, we introduce CPathAgent, an innovative agent-based model that mimics pathologists' reasoning processes by autonomously executing zoom-in/out and navigation operations across pathology images based on observed visual features. To achieve this, we develop a multi-stage training strategy unifying patch-level, region-level, and whole-slide capabilities within a single model, which is essential for mimicking pathologists, who require understanding and reasoning capabilities across all three scales. This approach generates substantially more detailed and interpretable diagnostic reports compared to existing methods, particularly for huge region understanding. Additionally, we construct an expert-validated PathMMU-HR\u00b2, the first benchmark for huge region analysis, a critical intermediate scale between patches and whole slides, as diagnosticians typically examine several key regions rather than entire slides at once. Extensive experiments demonstrate that CPathAgent consistently outperforms existing approaches across three scales of benchmarks, validating the effectiveness of our agent-based diagnostic approach and highlighting a promising direction for the future development of computational pathology.", "arxiv_id": "2505.20510v1", "arxiv_authors": ["Yuxuan Sun", "Yixuan Si", "Chenglu Zhu", "Kai Zhang", "Zhongyi Shui", "Bowen Ding", "Tao Lin", "Lin Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a16f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1000617, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5de"}, "filepath": "data/2503.18430v4.png", "tags": [], "_media_type": "image", "_rand": 0.9996930445684982, "type": "Poster", "name": "CQ-DINO: Mitigating Gradient Dilution via Category Queries for Vast Vocabulary Object Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116878", "abstract": "With the exponential growth of data, traditional object detection methods are increasingly struggling to handle vast vocabulary object detection tasks effectively. We analyze two key limitations of classification-based detectors: positive gradient dilution, where rare positive categories receive insufficient learning signals, and hard negative gradient dilution, where discriminative gradients are overwhelmed by numerous easy negatives. To address these challenges, we propose CQ-DINO, a category query-based object detection framework that reformulates classification as a contrastive task between object queries and learnable category queries. 
Our method introduces image-guided query selection, which reduces the negative space by adaptively retrieving top-K relevant categories per image via cross-attention, thereby rebalancing gradient distributions and facilitating implicit hard example mining. Furthermore, CQ-DINO flexibly integrates explicit hierarchical category relationships in structured datasets (e.g., V3Det) or learns implicit category correlations via self-attention in generic datasets (e.g., COCO). Experiments demonstrate that CQ-DINO achieves superior performance on the challenging V3Det benchmark (surpassing previous methods by 2.1% AP) while maintaining competitiveness in COCO. Our work provides a scalable solution for real-world detection systems requiring wide category coverage. The code is available in the supplemental material.", "arxiv_id": "2503.18430v4", "arxiv_authors": ["Zhichao Sun", "Huazhang Hu", "Yidong Ma", "Gang Liu", "Yibo Chen", "Xu Tang", "Yao Hu", "Yongchao Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a170"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1086842, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5df"}, "filepath": "data/2504.05306v1.png", "tags": [], "_media_type": "image", "_rand": 0.999518917607882, "type": "Poster", "name": "CREA: A Collaborative Multi-Agent Framework for Creative Image Editing and Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117635", "abstract": "Creativity in AI imagery remains a fundamental challenge, requiring not only the generation of visually compelling content but also the capacity to add novel, expressive, and artistically rich transformations to images. Unlike conventional editing tasks that rely on direct prompt-based modifications, creative image editing demands an autonomous, iterative approach that balances originality, coherence, and artistic intent. To address this, we introduce CREA, a novel multi-agent collaborative framework that mimics the human creative process. Our framework leverages a team of specialized AI agents who dynamically collaborate to conceptualize, generate, critique, and enhance images. Through extensive qualitative and quantitative evaluations, we demonstrate that CREA significantly outperforms state-of-the-art methods in diversity, semantic alignment, and creative transformation. 
To the best of our knowledge, this is the first work to introduce the task of creative editing.", "arxiv_id": "2504.05306v1", "arxiv_authors": ["Kavana Venkatesh", "Connor Dunlop", "Pinar Yanardag"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a171"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3948402, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e0"}, "filepath": "data/2506.00568v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998870183766959, "type": "Poster", "name": "CReFT-CAD: Boosting Orthographic Projection Reasoning for CAD via Reinforcement Fine-Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119008", "abstract": "Computer-Aided Design (CAD) plays a pivotal role in industrial manufacturing. Orthographic projection reasoning underpins the entire CAD workflow, encompassing design, manufacturing, and simulation. However, prevailing deep\u2010learning approaches employ standard 3D reconstruction pipelines as an alternative, which often introduce imprecise dimensions and limit the parametric editability required for CAD workflows. Many researchers adopt the vision\u2013language models (VLMs) method, particularly supervised fine-tuning (SFT), to tackle CAD-related challenges. SFT shows promise but often devolves into pattern memorization, yielding poor out\u2010of\u2010distribution performance on complex reasoning tasks. To address these gaps, we introduce CReFT-CAD, a two-stage fine-tuning paradigm that first employs a curriculum\u2010driven reinforcement learning stage with difficulty\u2010aware rewards to build reasoning ability steadily, and then applies supervised post-tuning to hone instruction following and semantic extraction. Complementing this, we release TriView2CAD, the first large-scale, open-source benchmark for orthographic projection reasoning, comprising 200,000 synthetic and 3,000 real-world orthographic projections with precise dimension annotations and six interoperable data modalities. 
We benchmark leading VLMs on orthographic projection reasoning and demonstrate that CReFT-CAD substantially improves reasoning accuracy and out-of-distribution generalizability in real-world scenarios, offering valuable insights for advancing CAD reasoning research.", "arxiv_id": "2506.00568v2", "arxiv_authors": ["Ke Niu", "Zhuofan Chen", "Haiyang Yu", "Yuwen Chen", "Teng Fu", "Mengyang Zhao", "Bin Li", "Xiangyang Xue"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a172"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1054339, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e1"}, "filepath": "data/2507.10013v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991695763198856, "type": "Poster", "name": "Cross-modal Associations in Vision and Language Models: Revisiting the bouba-kiki effect", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116707", "abstract": "Recent advances in multimodal models have raised questions about whether vision-and-language models (VLMs) integrate cross-modal information in ways that reflect human cognition. One well-studied test case in this domain is the bouba-kiki effect, where humans reliably associate pseudowords like \"bouba\" with round shapes and \"kiki\" with jagged ones. Given the mixed evidence found in prior studies for this effect in VLMs, we present a comprehensive re-evaluation focused on two variants of CLIP, ResNet and Vision Transformer (ViT), given their centrality in many state-of-the-art VLMs. We apply two complementary methods closely modelled after human experiments: a prompt-based evaluation that uses probabilities as model preference, and we use Grad-CAM as a novel way to interpret visual attention in shape-word matching tasks. Our findings show that these models do not consistently exhibit the bouba-kiki effect. While ResNet shows a preference for round shapes, overall performance across both models lacks the expected associations. Moreover, direct comparison with prior human data on the same task shows that the models' responses fall markedly short of the robust, modality-integrated behaviour characteristic of human cognition. 
These results contribute to the ongoing debate about the extent to which VLMs truly understand cross-modal concepts, highlighting limitations in their internal representations and alignment with human intuitions.", "arxiv_id": "2507.10013v2", "arxiv_authors": ["Tom Kouwenhoven", "Kiana Shahrasbi", "Tessa Verhoef"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a173"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1066553, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e2"}, "filepath": "data/2505.14707v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996843877078138, "type": "Poster", "name": "CrypticBio: A Large Multimodal Dataset for Visually Confusing Biodiversity", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121654", "abstract": "We present CrypticBio, the largest publicly available multimodal dataset of visually confusing species, specifically curated to support the development of AI models in the context of biodiversity applications. Visually confusing or cryptic species are groups of two or more taxa that are nearly indistinguishable based on visual characteristics alone. While much existing work addresses taxonomic identification in a broad sense, datasets that directly address the morphological confusion of cryptic species are small, manually curated, and target only a single taxon. Thus, the challenge of identifying such subtle differences in a wide range of taxa remains unaddressed. Curated from real-world trends in species misidentification among community annotators of iNaturalist, CrypticBio contains 52K unique cryptic groups spanning 67K species, represented in 166 million images. Rich research-grade image annotations\u2014including scientific, multicultural, and multilingual species terminology, hierarchical taxonomy, spatiotemporal context, and associated cryptic groups\u2014address multimodal AI in biodiversity research. For easy dataset curation, we provide an open-source pipeline CrypticBio-Curate. The multimodal nature of the dataset beyond vision-language arises from the integration of geographical and temporal data as complementary cues to identifying cryptic species. To highlight the importance of the dataset, we benchmark a suite of state-of-the-art foundation models across CrypticBio subsets of common, unseen, endangered, and invasive species, and demonstrate the substantial impact of geographical context on vision-language zero-shot learning for cryptic species. By introducing CrypticBio, we aim to catalyze progress toward real-world-ready biodiversity AI models capable of handling the nuanced challenges of species ambiguity. 
The data and the code are publicly available at https://georgianagmanolache.github.io/crypticbio.", "arxiv_id": "2505.14707v1", "arxiv_authors": ["Georgiana Manolache", "Gerard Schouten", "Joaquin Vanschoren"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a174"}, "_cls": "Classification", "tags": [], "label": "cs.MM"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1077559, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e3"}, "filepath": "data/2408.16766v2.png", "tags": [], "_media_type": "image", "_rand": 0.999473136448151, "type": "Poster", "name": "CSGO: Content-Style Composition in Text-to-Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116165", "abstract": "The advancement of image style transfer has been fundamentally constrained by the absence of large-scale, high-quality datasets with explicit content-style-stylized supervision. Existing methods predominantly adopt training-free paradigms (e.g., image inversion), which limit controllability and generalization due to the lack of structured triplet data. To bridge this gap, we design a scalable and automated pipeline that constructs and purifies high-fidelity content-style-stylized image triplets. Leveraging this pipeline, we introduce IMAGStyle\u2014the first large-scale dataset of its kind, containing 210K diverse and precisely aligned triplets for style transfer research. Empowered by IMAGStyle, we propose CSGO, a unified, end-to-end trainable framework that decouples content and style representations via independent feature injection. CSGO jointly supports image-driven style transfer, text-driven stylized generation, and text-editing-driven stylized synthesis within a single architecture. Extensive experiments show that CSGO achieves state-of-the-art controllability and fidelity, demonstrating the critical role of structured synthetic data in unlocking robust and generalizable style transfer.", "arxiv_id": "2408.16766v2", "arxiv_authors": ["Peng Xing", "Haofan Wang", "Yanpeng Sun", "Qixun Wang", "Xu Bai", "Hao Ai", "Renyuan Huang", "Zechao Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a175"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5391188, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e4"}, "filepath": "data/2505.12677v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990733633875435, "type": "Poster", "name": "CURE: Concept Unlearning via Orthogonal Representation Editing in Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115013", "abstract": "As Text-to-Image models continue to evolve, so does the risk of generating unsafe, copyrighted, or privacy-violating content. Existing safety interventions - ranging from training data curation and model fine-tuning to inference-time filtering and guidance - often suffer from incomplete concept removal, susceptibility to jail-breaking, computational inefficiency, or collateral damage to unrelated capabilities. 
In this paper, we introduce CURE, a training-free concept unlearning framework that operates directly in the weight space of pre-trained diffusion models, enabling fast, interpretable, and highly specific suppression of undesired concepts. At the core of our method is the Spectral Eraser, a closed-form, orthogonal projection module that identifies discriminative subspaces using Singular Value Decomposition over token embeddings associated with the concepts to forget and retain. Intuitively, the Spectral Eraser identifies and isolates features unique to the undesired concept while preserving safe attributes. This operator is then applied in a single step update to yield an edited model in which the target concept is effectively unlearned - without retraining, supervision, or iterative optimization. To balance the trade-off between filtering toxicity and preserving unrelated concepts, we further introduce an Expansion Mechanism for spectral regularization which selectively modulates singular vectors based on their relative significance to control the strength of forgetting. All the processes above are in closed-form, guaranteeing extremely efficient erasure in only $2$ seconds. Benchmarking against prior approaches, CURE achieves a more efficient and thorough removal for targeted artistic styles, objects, identities, or explicit content, with minor damage to original generation ability and demonstrates enhanced robustness against red-teaming. Code will be released.", "arxiv_id": "2505.12677v2", "arxiv_authors": ["Shristi Das Biswas", "Arani Roy", "Kaushik Roy"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a176"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4825699, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e5"}, "filepath": "data/2505.18087v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992601755809201, "type": "Poster", "name": "CXReasonBench: A Benchmark for Evaluating Structured Diagnostic Reasoning in Chest X-rays", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121386", "abstract": "Recent progress in Large Vision-Language Models (LVLMs) has enabled promising applications in medical tasks, such as report generation and visual question answering. However, existing benchmarks focus mainly on the final diagnostic answer, offering limited insight into whether models engage in clinically meaningful reasoning. To address this, we present CheXStruct and CXReasonBench, a structured pipeline and benchmark built on the publicly available MIMIC-CXR-JPG dataset. CheXStruct automatically derives a sequence of intermediate reasoning steps directly from chest X-rays, such as segmenting anatomical regions, deriving anatomical landmarks and diagnostic measurements, computing diagnostic indices, and applying clinical thresholds. 
CXReasonBench leverages this pipeline to evaluate whether models can perform clinically valid reasoning steps and to what extent they can learn from structured guidance, enabling fine-grained and transparent assessment of diagnostic reasoning. The benchmark comprises 18,988 QA pairs across 12 diagnostic tasks and 1,200 cases, each paired with up to 4 visual inputs, and supports multi-path, multi-stage evaluation including visual grounding via anatomical region selection and diagnostic measurements. Even the strongest of 10 evaluated LVLMs struggle with structured reasoning and generalization, often failing to link abstract knowledge with anatomically grounded visual interpretation.", "arxiv_id": "2505.18087v1", "arxiv_authors": ["Hyungyung Lee", "Geon Choi", "Jung-Oh Lee", "Hangyul Yoon", "Hyuk Gi Hong", "Edward Choi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a177"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1131334, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e6"}, "filepath": "data/2510.13245v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990350141757623, "type": "Poster", "name": "CymbaDiff: Structured Spatial Diffusion for Sketch-based 3D Semantic Urban Scene Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116666", "abstract": "Outdoor 3D semantic scene generation produces realistic and semantically rich environments for applications such as urban simulation and autonomous driving. However, advances in this direction are constrained by the absence of publicly available, well-annotated datasets. We introduce SketchSem3D, the first large\u2011scale benchmark for generating 3D outdoor semantic scenes from abstract freehand sketches and pseudo\u2011labeled annotations of satellite images. SketchSem3D includes two subsets, Sketch-based SemanticKITTI and Sketch-based KITTI-360 (containing LiDAR voxels along with their corresponding sketches and annotated satellite images), to enable standardized, rigorous, and diverse evaluations. We also propose Cylinder Mamba Diffusion (CymbaDiff) that significantly enhances spatial coherence in outdoor 3D scene generation. CymbaDiff imposes structured spatial ordering, explicitly captures cylindrical continuity and vertical hierarchy, and preserves both physical neighborhood relationships and global context within the generated scenes. Extensive experiments on SketchSem3D demonstrate that CymbaDiff achieves superior semantic consistency, spatial realism, and cross-dataset generalization. An anonymous download link for SketchSem3D is available here. 
We will make the benchmark and code public.", "arxiv_id": "2510.13245v2", "arxiv_authors": ["Li Liang", "Bo Miao", "Xinyu Wang", "Naveed Akhtar", "Jordan Vice", "Ajmal Mian"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a178"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1170066, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e7"}, "filepath": "data/2503.20815v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999705019531244, "type": "Poster", "name": "D2SA: Dual-Stage Distribution and Slice Adaptation for Efficient Test-Time Adaptation in MRI Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116601", "abstract": "Variations in Magnetic resonance imaging (MRI) scanners and acquisition protocols cause distribution shifts that degrade reconstruction performance on unseen data. Test-time adaptation (TTA) offers a promising solution to address these discrepancies. However, previous single-shot TTA approaches are inefficient due to repeated training and suboptimal distributional models. Self-supervised learning methods may risk over-smoothing in scarce data scenarios. To address these challenges, we propose a novel Dual-Stage Distribution and Slice Adaptation (D2SA) via MRI implicit neural representation (MR-INR) to improve MRI reconstruction performance and efficiency, which features two stages. In the first stage, an MR-INR branch performs patient-wise distribution adaptation by learning shared representations across slices and modelling patient-specific shifts with mean and variance adjustments. In the second stage, single-slice adaptation refines the output from frozen convolutional layers with a learnable anisotropic diffusion module, preventing over-smoothing and reducing computation. Experiments across five MRI distribution shifts demonstrate that our method can integrate well with various self-supervised learning (SSL) frameworks, improving performance and accelerating convergence under diverse conditions.", "arxiv_id": "2503.20815v2", "arxiv_authors": ["Lipei Zhang", "Rui Sun", "Zhongying Deng", "Yanqi Cheng", "Carola-Bibiane Sch\u00f6nlieb", "Angelica I Aviles-Rivero"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a179"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.476Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1040298, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e8"}, "filepath": "data/2502.12627v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991379305693788, "type": "Poster", "name": "DAMamba: Vision State Space Model with Dynamic Adaptive Scan", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118629", "abstract": "State space models (SSMs) have recently garnered significant attention in computer vision. However, due to the unique characteristics of image data, adapting SSMs from natural language processing to computer vision has not outperformed the state-of-the-art convolutional neural networks (CNNs) and Vision Transformers (ViTs). 
Existing vision SSMs primarily leverage manually designed scans to flatten image patches into sequences locally or globally. This approach disrupts the original semantic spatial adjacency of the image and lacks flexibility, making it difficult to capture complex image structures. To address this limitation, we propose Dynamic Adaptive Scan (DAS), a data-driven method that adaptively allocates scanning orders and regions. This enables more flexible modeling capabilities while maintaining linear computational complexity and global modeling capacity. Based on DAS, we further propose the vision backbone DAMamba, which significantly outperforms popular vision Mamba models in vision tasks such as image classification, object detection, instance segmentation, and semantic segmentation. Notably, it surpasses some of the latest state-of-the-art CNNs and ViTs.", "arxiv_id": "2502.12627v1", "arxiv_authors": ["Tanzhe Li", "Caoshuo Li", "Jiayi Lyu", "Hongjuan Pei", "Baochang Zhang", "Taisong Jin", "Rongrong Ji"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a17a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.477Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1508978, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5e9"}, "filepath": "data/2503.22154v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992955551912066, "type": "Poster", "name": "Dataset Distillation of 3D Point Clouds via Distribution Matching", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117394", "abstract": "Large-scale datasets are usually required to train deep neural networks, but it increases the computational complexity hindering the practical applications. Recently, dataset distillation for images and texts has been attracting a lot of attention, that reduces the original dataset to a synthetic dataset to alleviate the computational burden of training while preserving essential task-relevant information. However, the dataset distillation for 3D point clouds remains largely unexplored, as the point clouds exhibit fundamentally different characteristics from that of images, making the dataset distillation more challenging. In this paper, we propose a distribution matching-based distillation framework for 3D point clouds that jointly optimizes the geometric structures as well as the orientations of the synthetic 3D objects. To address the semantic misalignment caused by unordered indexing of points, we introduce a Semantically Aligned Distribution Matching loss computed on the sorted features in each channel. Moreover, to address the rotation variation, we jointly learn the optimal rotation angles while updating the synthetic dataset to better align with the original feature distribution. 
Extensive experiments on widely used benchmark datasets demonstrate that the proposed method consistently outperforms existing dataset distillation methods, achieving superior accuracy and strong cross-architecture generalization.", "arxiv_id": "2503.22154v2", "arxiv_authors": ["Jae-Young Yim", "Dongwook Kim", "Jae-Young Sim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a17b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.477Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1019055, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ea"}, "filepath": "data/2503.09321v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997860650595659, "type": "Poster", "name": "DAVE: Diagnostic benchmark for Audio Visual Evaluation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121842", "abstract": "Audio-visual understanding is a rapidly evolving field that seeks to integrate and interpret information from both auditory and visual modalities. Despite recent advances in multi-modal learning, existing benchmarks often suffer from strong visual bias -- when answers can be inferred from visual data alone -- and provide only aggregate scores that conflate multiple sources of error. This makes it difficult to determine whether models struggle with visual understanding, audio interpretation, or audio-visual alignment. In this work, we introduce DAVE (Diagnostic Audio Visual Evaluation), a novel benchmark dataset designed to systematically evaluate audio-visual models across controlled settings. DAVE alleviates existing limitations by (i) ensuring both modalities are necessary to answer correctly and (ii) decoupling evaluation into atomic subcategories. Our detailed analysis of state-of-the-art models reveals specific failure modes and provides targeted insights for improvement. By offering this standardized diagnostic framework, we aim to facilitate more robust development of audio-visual models. Dataset: https://huggingface.co/datasets/gorjanradevski/dave Code: https://github.com/gorjanradevski/dave", "arxiv_id": "2503.09321v1", "arxiv_authors": ["Gorjan Radevski", "Teodora Popordanoska", "Matthew B. Blaschko", "Tinne Tuytelaars"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a17c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.477Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1857280, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5eb"}, "filepath": "data/2506.02560v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996963098427895, "type": "Poster", "name": "DCI: Dual-Conditional Inversion for Boosting Diffusion-Based Image Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118739", "abstract": "Diffusion models have achieved remarkable success in image generation and editing tasks. Inversion within these models aims to recover the latent noise representation for a real or generated image, enabling reconstruction, editing, and other downstream tasks. 
However, to date, most inversion approaches suffer from an intrinsic trade-off between reconstruction accuracy and editing flexibility. This limitation arises from the difficulty of maintaining both semantic alignment and structural consistency during the inversion process. In this work, we introduce **Dual-Conditional Inversion (DCI)**, a novel framework that jointly conditions on the source prompt and reference image to guide the inversion process. Specifically, DCI formulates the inversion process as a dual-condition fixed-point optimization problem, minimizing both the latent noise gap and the reconstruction error under the joint guidance. This design anchors the inversion trajectory in both semantic and visual space, leading to more accurate and editable latent representations. Our novel setup brings new understanding to the inversion process. Extensive experiments demonstrate that DCI achieves state-of-the-art performance across multiple editing tasks, significantly improving both reconstruction quality and editing precision. Furthermore, we also demonstrate that our method achieves strong results in reconstruction tasks, implying a degree of robustness and generalizability approaching the ultimate goal of the inversion process.", "arxiv_id": "2506.02560v1", "arxiv_authors": ["Zixiang Li", "Haoyu Wang", "Wei Wang", "Chuangchuang Tan", "Yunchao Wei", "Yao Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a17d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.477Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1044468, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ec"}, "filepath": "data/2502.03810v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997358135514931, "type": "Poster", "name": "DeblurDiff: Real-Word Image Deblurring with Generative Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117332", "abstract": "Diffusion models have achieved significant progress in image generation and the pre-trained Stable Diffusion (SD) models are helpful for image deblurring by providing clear image priors. However, directly using a blurry image or a pre-deblurred one as a conditional control for SD will either hinder accurate structure extraction or make the results overly dependent on the deblurring network. In this work, we propose a Latent Kernel Prediction Network (LKPN) to achieve robust real-world image deblurring. Specifically, we co-train the LKPN in the latent space with conditional diffusion. The LKPN learns a spatially variant kernel to guide the restoration of sharp images in the latent space. By applying element-wise adaptive convolution (EAC), the learned kernel is utilized to adaptively process the blurry feature, effectively preserving the information of the blurry input. This process thereby more effectively guides the generative process of SD, enhancing both the deblurring efficacy and the quality of detail reconstruction. Moreover, the results at each diffusion step are utilized to iteratively estimate the kernels in LKPN to better restore the sharp latent by EAC in the subsequent step. This iterative refinement enhances the accuracy and robustness of the deblurring process. 
Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art image deblurring methods on both benchmark and real-world images.", "arxiv_id": "2502.03810v1", "arxiv_authors": ["Lingshun Kong", "Jiawei Zhang", "Dongqing Zou", "Jimmy Ren", "Xiaohe Wu", "Jiangxin Dong", "Jinshan Pan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a17e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.477Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1656014, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ed"}, "filepath": "data/2510.14427v1.png", "tags": [], "_media_type": "image", "_rand": 0.999994790548153, "type": "Poster", "name": "Deep Compositional Phase Diffusion for Long Motion Sequence Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116418", "abstract": "Recent research on motion generation has shown significant progress in generating semantically aligned motion with singular semantics. However, when employing these models to create composite sequences containing multiple semantically generated motion clips, they often struggle to preserve the continuity of motion dynamics at the transition boundaries between clips, resulting in awkward transitions and abrupt artifacts. To address these challenges, we present Compositional Phase Diffusion, which leverages the Semantic Phase Diffusion Module (SPDM) and Transitional Phase Diffusion Module (TPDM) to progressively incorporate semantic guidance and phase details from adjacent motion clips into the diffusion process. Specifically, SPDM and TPDM operate within the latent motion frequency domain established by the pre-trained Action-Centric Motion Phase Autoencoder (ACT-PAE). This allows them to learn semantically important and transition-aware phase information from variable-length motion clips during training. Experimental results demonstrate the competitive performance of our proposed framework in generating compositional motion sequences that align semantically with the input conditions, while preserving phase transitional continuity between preceding and succeeding motion clips. 
Additionally, motion inbetweening task is made possible by keeping the phase parameter of the input motion sequences fixed throughout the diffusion process, showcasing the potential for extending the proposed framework to accommodate various application scenarios.", "arxiv_id": "2510.14427v1", "arxiv_authors": ["Ho Yin Au", "Jie Chen", "Junkun Jiang", "Jingyu Xiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a17f"}, "_cls": "Classification", "tags": [], "label": "cs.MM"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2723275, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ee"}, "filepath": "data/2411.15779v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997261132568713, "type": "Poster", "name": "Deep Gaussian from Motion: Exploring 3D Geometric Foundation Models for Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115661", "abstract": "Neural radiance fields (NeRF) and 3D Gaussian Splatting (3DGS) are popular techniques to reconstruct and render photorealistic images. However, the prerequisite of running Structure-from-Motion (SfM) to get camera poses limits their completeness. Although previous methods can reconstruct a few unposed images, they are not applicable when images are unordered or densely captured. In this work, we propose a method to train 3DGS from unposed images. Our method leverages a pre-trained 3D geometric foundation model as the neural scene representation. Since the accuracy of the predicted pointmaps does not suffice for accurate image registration and high-fidelity image rendering, we propose to mitigate the issue by initializing and fine-tuning the pre-trained model from a seed image. The images are then progressively registered and added to the training buffer, which is used to train the model further. We also propose to refine the camera poses and pointmaps by minimizing a point-to-camera ray consistency loss across multiple views. When evaluated on diverse challenging datasets, our method outperforms state-of-the-art pose-free NeRF/3DGS methods in terms of both camera pose accuracy and novel view synthesis, and even renders higher fidelity images than 3DGS trained with COLMAP poses.", "arxiv_id": "2411.15779v1", "arxiv_authors": ["Yu Chen", "Rolandos Alexandros Potamias", "Evangelos Ververas", "Jifei Song", "Jiankang Deng", "Gim Hee Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a180"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 10138744, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ef"}, "filepath": "data/2505.15133v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996268763566803, "type": "Poster", "name": "DeepKD: A Deeply Decoupled and Denoised Knowledge Distillation Trainer", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115324", "abstract": "Recent advances in knowledge distillation have emphasized the importance of decoupling different knowledge components. 
While existing methods utilize momentum mechanisms to separate task-oriented and distillation gradients, they overlook the inherent conflict between target-class and non-target-class knowledge flows. Furthermore, low-confidence dark knowledge in non-target classes introduces noisy signals that hinder effective knowledge transfer. To address these limitations, we propose DeepKD, a novel training framework that integrates dual-level decoupling with adaptive denoising. First, through theoretical analysis of gradient signal-to-noise ratio (GSNR) characteristics in task-oriented and non-task-oriented knowledge distillation, we design independent momentum updaters for each component to prevent mutual interference. We observe that the optimal momentum coefficients for task-oriented gradient (TOG), target-class gradient (TCG), and non-target-class gradient (NCG) should be positively related to their GSNR. Second, we introduce a dynamic top-k mask (DTM) mechanism that gradually increases K from a small initial value to incorporate more non-target classes as training progresses, following curriculum learning principles. The DTM jointly filters low-confidence logits from both teacher and student models, effectively purifying dark knowledge during early training. Extensive experiments on CIFAR-100, ImageNet, and MS-COCO demonstrate DeepKD's effectiveness.", "arxiv_id": "2505.15133v1", "arxiv_authors": ["Haiduo Huang", "Jiangcheng Song", "Yadong Zhang", "Pengju Ren"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a181"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1088510, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f0"}, "filepath": "data/2509.23602v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991093979919808, "type": "Poster", "name": "Deep Taxonomic Networks for Unsupervised Hierarchical Prototype Discovery", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118417", "abstract": "Inspired by the human ability to learn and organize knowledge into hierarchical taxonomies with prototypes, this paper addresses key limitations in current deep hierarchical clustering methods. Existing methods often tie the structure to the number of classes and underutilize the rich prototype information available at intermediate hierarchical levels. We introduce deep taxonomic networks, a novel deep latent variable approach designed to bridge these gaps. Our method optimizes a large latent taxonomic hierarchy, specifically a complete binary tree structured mixture-of-Gaussian prior within a variational inference framework, to automatically discover taxonomic structures and associated prototype clusters directly from unlabeled data without assuming true label sizes. We analytically show that optimizing the ELBO of our method encourages the discovery of hierarchical relationships among prototypes. 
Empirically, our learned models demonstrate strong hierarchical clustering performance, outperforming baselines across diverse image classification datasets using our novel evaluation mechanism that leverages prototype clusters discovered at all hierarchical levels. Qualitative results further reveal that deep taxonomic networks discover rich and interpretable hierarchical taxonomies, capturing both coarse-grained semantic categories and fine-grained visual distinctions.", "arxiv_id": "2509.23602v2", "arxiv_authors": ["Zekun Wang", "Ethan Haarer", "Tianyi Zhu", "Zhiyi Dai", "Christopher J. MacLellan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a182"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1079777, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f1"}, "filepath": "data/2505.18079v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991267827329396, "type": "Poster", "name": "Deep Video Discovery: Agentic Search with Tool Use for Long-form Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116039", "abstract": "Long-form video understanding presents significant challenges due to extensive temporal-spatial complexity and the difficulty of question answering under such extended contexts. While Large Language Models (LLMs) have demonstrated considerable advancements in video analysis capabilities and long context handling, they continue to exhibit limitations when processing information-dense hour-long videos. To overcome such limitations, we propose the $\\textbf{D}eep \\ \\textbf{V}ideo \\ \\textbf{D}iscovery \\ (\\textbf{DVD})$ agent to leverage an $\\textit{agentic search}$ strategy over segmented video clips. Different from previous video agents manually designing a rigid workflow, our approach emphasizes the autonomous nature of agents. By providing a set of search-centric tools on multi-granular video database, our DVD agent leverages the advanced reasoning capability of LLM to plan on its current observation state, strategically selects tools, formulates appropriate parameters for actions, and iteratively refines its internal reasoning in light of the gathered information. We perform comprehensive evaluation on multiple long video understanding benchmarks that demonstrates the advantage of the entire system design. Our DVD agent achieves SOTA performance, significantly surpassing prior works by a large margin on the challenging LVBench dataset. Comprehensive ablation studies and in-depth tool analyses are also provided, yielding insights to further advance intelligent agents tailored for long-form video understanding tasks. 
The code will be released later.", "arxiv_id": "2505.18079v3", "arxiv_authors": ["Xiaoyi Zhang", "Zhaoyang Jia", "Zongyu Guo", "Jiahao Li", "Bin Li", "Houqiang Li", "Yan Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a183"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1259649, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f2"}, "filepath": "data/2506.07464v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991724329034001, "type": "Poster", "name": "DeepVideo-R1: Video Reinforcement Fine-Tuning via Difficulty-aware Regressive GRPO", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120134", "abstract": "Recent works have demonstrated the effectiveness of reinforcement learning (RL)-based post-training for enhancing the reasoning capabilities of large language models (LLMs). In particular, Group Relative Policy Optimization (GRPO) has shown impressive success using a PPO-style reinforcement algorithm with group-based normalized rewards. However, GRPO has been less explored in Video Large Language Models (VideoLLMs). In this paper, we explore GRPO and identify two problems that deteriorate the effective learning: (1) reliance on safeguards, and (2) vanishing advantage. To mitigate these challenges, we propose DeepVideo-R1, a video large language model trained with Reg-GRPO (Regressive GRPO) and difficulty-aware data augmentation. Reg-GRPO reformulates the GRPO loss function into a regression task that directly predicts the advantage in GRPO, eliminating the need for heuristic safeguards such as the clipping and min functions. It aligns VideoLLMs with advantages, providing effective guidance. The difficulty-aware data augmentation strategy augments input prompts/videos to generate samples at solvable difficulty levels, enabling diverse reward signals. Experimental results show that our approach significantly improves video reasoning performance across multiple benchmarks. Our codes are included in the supplement.", "arxiv_id": "2506.07464v2", "arxiv_authors": ["Jinyoung Park", "Jeehye Na", "Jinyoung Kim", "Hyunwoo J. Kim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a184"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1076120, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f3"}, "filepath": "data/2412.20392v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990442911291935, "type": "Poster", "name": "Defending Multimodal Backdoored Models by Repulsive Visual Prompt Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118614", "abstract": "Multimodal contrastive learning models (e.g., CLIP) can learn high-quality representations from large-scale image-text datasets, while they exhibit significant vulnerabilities to backdoor attacks, raising serious safety concerns. 
In this paper, we reveal that CLIP's vulnerabilities primarily stem from its tendency to encode features beyond in-dataset predictive patterns, compromising its visual feature resistivity to input perturbations. This makes its encoded features highly susceptible to being reshaped by backdoor triggers. To address this challenge, we propose Repulsive Visual Prompt Tuning (RVPT), a novel defense approach that employs deep visual prompt tuning with a specially designed feature-repelling loss. Specifically, RVPT adversarially repels the encoded features from deeper layers while optimizing the standard cross-entropy loss, ensuring that only predictive features in downstream tasks are encoded, thereby enhancing CLIP\u2019s visual feature resistivity against input perturbations and mitigating its susceptibility to backdoor attacks. Unlike existing multimodal backdoor defense methods that typically require the availability of poisoned data or involve fine-tuning the entire model, RVPT leverages few-shot downstream clean samples and only tunes a small number of parameters. Empirical results demonstrate that RVPT tunes only 0.27\\% of the parameters in CLIP, yet it significantly outperforms state-of-the-art defense methods, reducing the attack success rate from 89.70\\% to 2.76\\% against the most advanced multimodal attacks on ImageNet and effectively generalizes its defensive capabilities across multiple datasets. Our code is available on https://anonymous.4open.science/r/rvpt-anonymous.", "arxiv_id": "2412.20392v3", "arxiv_authors": ["Zhifang Zhang", "Shuo He", "Haobo Wang", "Bingquan Shen", "Lei Feng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a185"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1049077, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f4"}, "filepath": "data/2508.17054v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996902998140421, "type": "Poster", "name": "DeltaFlow: An Efficient Multi-frame Scene Flow Estimation Method", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117870", "abstract": "Previous dominant methods for scene flow estimation focus mainly on input from two consecutive frames, neglecting valuable information in the temporal domain. While recent trends shift towards multi-frame reasoning, they suffer from rapidly escalating computational costs as the number of frames grows. To leverage temporal information more efficiently, we propose DeltaFlow ($\\Delta$Flow), a lightweight 3D framework that captures motion cues via a $\\Delta$ scheme, extracting temporal features with minimal computational cost, regardless of the number of frames. Additionally, scene flow estimation faces challenges such as imbalanced object class distributions and motion inconsistency. To tackle these issues, we introduce a Category-Balanced Loss to enhance learning across underrepresented classes and an Instance Consistency Loss to enforce coherent object motion, improving flow accuracy. 
Extensive evaluations on the Argoverse 2 and Waymo datasets show that $\\Delta$Flow achieves state-of-the-art performance with up to 22\\% lower error and $2\\times$ faster inference compared to the next-best multi-frame supervised method, while also demonstrating a strong cross-domain generalization ability. The source code will be made publicly available upon acceptance.", "arxiv_id": "2508.17054v2", "arxiv_authors": ["Qingwen Zhang", "Xiaomeng Zhu", "Yushan Zhang", "Yixi Cai", "Olov Andersson", "Patric Jensfelt"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a186"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070505, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f5"}, "filepath": "data/2505.17017v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993530869540855, "type": "Poster", "name": "Delving into RL for Image Generation with CoT: A Study on DPO vs. GRPO", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115601", "abstract": "Recent advancements underscore the significant role of Reinforcement Learning (RL) in enhancing the Chain-of-Thought (CoT) reasoning capabilities of large language models (LLMs). Two prominent RL algorithms, Direct Preference Optimization (DPO) and Group Relative Policy Optimization (GRPO), are central to these developments, showcasing different pros and cons. Autoregressive image generation, also interpretable as a sequential CoT reasoning process, presents unique challenges distinct from LLM-based CoT reasoning. These encompass ensuring text-image consistency, improving image aesthetic quality, and designing sophisticated reward models, rather than relying on simpler rule-based rewards. While recent efforts have extended RL to this domain, these explorations typically lack an in-depth analysis of the domain-specific challenges and the characteristics of different RL strategies. To bridge this gap, we provide the first comprehensive investigation of the GRPO and DPO algorithms in autoregressive image generation, evaluating their ***in-domain*** performance and ***out-of-domain*** generalization, while scrutinizing the impact of ***different reward models*** on their respective capabilities. Our findings reveal that GRPO and DPO exhibit distinct advantages, and crucially, that reward models possessing stronger intrinsic generalization capabilities potentially enhance the generalization potential of the applied RL algorithms. Furthermore, we systematically explore ***three prevalent scaling strategies*** to enhance both their in-domain and out-of-domain proficiency, deriving unique insights into efficiently scaling performance for each paradigm. 
We hope our study paves a new path for inspiring future work on developing more effective RL algorithms to achieve robust CoT reasoning in the realm of autoregressive image generation.", "arxiv_id": "2505.17017v2", "arxiv_authors": ["Chengzhuo Tong", "Ziyu Guo", "Renrui Zhang", "Wenyu Shan", "Xinyu Wei", "Zhenghao Xing", "Hongsheng Li", "Pheng-Ann Heng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a187"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083931, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f6"}, "filepath": "data/2506.03517v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990155019115846, "type": "Poster", "name": "DenseDPO: Fine-Grained Temporal Preference Optimization for Video Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117435", "abstract": "Direct Preference Optimization (DPO) has recently been applied as a post\u2011training technique for text-to-video diffusion models. To obtain training data, annotators are asked to provide preferences between two videos generated from independent noise. However, this approach prohibits fine-grained comparisons, and we point out that it biases the annotators towards low-motion clips as they often contain fewer visual artifacts. In this work, we introduce DenseDPO, a method that addresses these shortcomings by making three contributions. First, we create each video pair for DPO by denoising corrupted copies of a ground truth video. This results in aligned pairs with similar motion structures while differing in local details, effectively neutralizing the motion bias. Second, we leverage the resulting temporal alignment to label preferences on short segments rather than entire clips, yielding a denser and more precise learning signal. With only one\u2011third of the labeled data, DenseDPO greatly improves motion generation over vanilla DPO, while matching it in text alignment, visual quality, and temporal consistency. Finally, we show that DenseDPO unlocks automatic preference annotation using off-the-shelf Vision Language Models (VLMs): GPT accurately predicts segment-level preferences similar to task-specifically fine-tuned video reward models, and DenseDPO trained on these labels achieves performance close to using human labels.", "arxiv_id": "2506.03517v2", "arxiv_authors": ["Ziyi Wu", "Anil Kag", "Ivan Skorokhodov", "Willi Menapace", "Ashkan Mirzaei", "Igor Gilitschenski", "Sergey Tulyakov", "Aliaksandr Siarohin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a188"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.478Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1132759, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f7"}, "filepath": "data/2510.21396v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992486975468611, "type": "Poster", "name": "Depth-Supervised Fusion Network for Seamless-Free Image Stitching", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115047", "abstract": "Image stitching synthesizes images captured 
from multiple perspectives into a single image with a broader field of view. The significant variations in object depth often lead to large parallax, resulting in ghosting and misalignment in the stitched results. To address this, we propose a depth-consistency-constrained seamless-free image stitching method. First, to tackle the multi-view alignment difficulties caused by parallax, a multi-stage mechanism combined with global depth regularization constraints is developed to enhance the alignment accuracy of the same apparent target across different depth ranges. Second, during the multi-view image fusion process, an optimal stitching seam is determined through graph-based low-cost computation, and a soft-seam region is diffused to precisely locate transition areas, thereby effectively mitigating alignment errors induced by parallax and achieving natural and seamless stitching results. Furthermore, considering the computational overhead in the shift regression process, a reparameterization strategy is incorporated to optimize the structural design, significantly improving algorithm efficiency while maintaining optimal performance. Extensive experiments demonstrate the superior performance of the proposed method against the existing methods.", "arxiv_id": "2510.21396v1", "arxiv_authors": ["Zhiying Jiang", "Ruhao Yan", "Zengxi Zhang", "Bowei Zhang", "Jinyuan Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a189"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1027037, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f8"}, "filepath": "data/2506.16690v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992822005077774, "type": "Poster", "name": "DepthVanish: Optimizing Adversarial Interval Structures for Stereo-Depth-Invisible Patches", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116943", "abstract": "Depth estimation is a critical task in autonomous driving and robotics, where inaccuracies (such as misidentifying nearby objects as distant) can lead to dangerous situations. Adversarial attacks against stereo depth estimation can help reveal vulnerabilities before deployment. Previous work has shown that repeating optimized textures within patches can effectively mislead stereo depth estimation in digital settings, i.e., when digitally inserted into images. However, our research reveals that these naively repeated texture structures perform poorly in physical world implementations, limiting their practical utility for testing depth estimation systems. In this work, for the first time, we discover that introducing regular intervals between repeated textures, creating a striped structure, significantly enhances physical-world effectiveness. Through extensive experimentation, we analyze how variations of this novel structure influence attack performance. Based on these insights, we develop a novel stereo depth attack that jointly optimizes both the striped structure and texture elements. We also discover that binary black and white textures demonstrate substantially higher effectiveness than colorful textures. 
Our generated adversarial patches can be inserted into any scene and successfully attack state-of-the-art stereo depth estimation methods and even commercial RGB-D cameras (Intel RealSense) in real-world conditions, demonstrating their practical relevance for security assessment of depth estimation systems.", "arxiv_id": "2506.16690v1", "arxiv_authors": ["Yun Xing", "Yue Cao", "Nhat Chung", "Jie Zhang", "Ivor Tsang", "Ming-Ming Cheng", "Yang Liu", "Lei Ma", "Qing Guo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a18a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1043161, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5f9"}, "filepath": "data/2504.15863v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998653838821362, "type": "Poster", "name": "DERD-Net: Learning Depth from Event-based Ray Densities", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120329", "abstract": "Event cameras offer a promising avenue for multi-view stereo depth estimation and Simultaneous Localization And Mapping (SLAM) due to their ability to detect blur-free 3D edges at high speed and over broad illumination conditions. However, traditional deep learning frameworks designed for conventional cameras struggle with the asynchronous, stream-like nature of event data, as their architectures are optimized for discrete, image-like inputs. We propose a scalable, flexible and adaptable framework for pixel-wise depth estimation with event cameras in both monocular and stereo setups. The 3D scene structure is encoded into disparity space images (DSIs), representing spatial densities of rays obtained by back-projecting events into space via known camera poses. Our neural network processes local subregions of the DSIs combining 3D convolutions and a recurrent structure to recognize valuable patterns for depth prediction. Local processing enables fast inference with full parallelization and ensures constant ultra-low model complexity and memory costs, regardless of camera resolution. Experiments on standard benchmarks (MVSEC and DSEC datasets) demonstrate unprecedented effectiveness: (i) using purely monocular data, our method achieves comparable results to existing stereo methods; (ii) when applied to stereo data, it strongly outperforms all state-of-the-art (SOTA) approaches, reducing the mean absolute error by at least 42\\%; (iii) our method also allows for increases in depth completeness by more than 3-fold while still yielding a reduction in median absolute error of at least 30\\%. 
Given its remarkable performance and effective processing of event-data, our framework holds strong potential to become a standard approach for using deep learning for event-based depth estimation and SLAM.", "arxiv_id": "2504.15863v2", "arxiv_authors": ["Diego Hitzges", "Suman Ghosh", "Guillermo Gallego"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a18b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1968474, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5fa"}, "filepath": "data/2506.06099v1.png", "tags": [], "_media_type": "image", "_rand": 0.999196146993096, "type": "Poster", "name": "DermaCon-IN: A Multiconcept-Annotated Dermatological Image Dataset of Indian Skin Disorders for Clinical AI Research", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121561", "abstract": "Artificial intelligence is poised to augment dermatological care by enabling scalable image-based diagnostics. Yet, the development of robust and equitable models remains hindered by datasets that fail to capture the clinical and demographic complexity of real-world practice. This complexity stems from region-specific disease distributions, wide variation in skin tones, and the underrepresentation of outpatient scenarios from non-Western populations. We introduce DermaCon-IN, a prospectively curated dermatology dataset comprising over 5,450 clinical images from approximately 3,000 patients across outpatient clinics in South India. Each image is annotated by board-certified dermatologists with over 240 distinct diagnoses, structured under a hierarchical, etiology-based taxonomy adapted from Rook\u2019s classification. The dataset captures a wide spectrum of dermatologic conditions and tonal variation commonly seen in Indian outpatient care. We benchmark a range of architectures\u2014including convolutional models (ResNet, DenseNet, EfficientNet), transformer-based models (ViT, MaxViT, Swin), and Concept Bottleneck Models to establish baseline performance and explore how anatomical and concept-level cues may be integrated. These results are intended to guide future efforts toward interpretable and clinically realistic models. 
DermaCon-IN provides a scalable and representative foundation for advancing dermatology AI in real-world settings.", "arxiv_id": "2506.06099v1", "arxiv_authors": ["Shanawaj S Madarkar", "Mahajabeen Madarkar", "Madhumitha V", "Teli Prakash", "Konda Reddy Mopuri", "Vinaykumar MV", "KVL Sathwika", "Adarsh Kasturi", "Gandla Dilip Raj", "PVN Supranitha", "Harsh Udai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a18c"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1007493, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5fb"}, "filepath": "data/2509.19230v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995229971218391, "type": "Poster", "name": "DevFD : Developmental Face Forgery Detection by Learning Shared and Orthogonal LoRA Subspaces", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117472", "abstract": "The rise of realistic digital face generation/manipulation poses significant social risks. The primary challenge lies in the rapid and diverse evolution of generation techniques, which often outstrip the detection capabilities of existing models. To defend against the ever-evolving new types of forgery, we need to enable our model to quickly adapt to new domains with limited computation and data, while avoiding forgetting previously learned forgery types. In this work, we posit that genuine facial samples are abundant and relatively stable in acquisition methods, while forgery faces continuously evolve with the iteration of manipulation techniques. Given the practical infeasibility of exhaustively collecting all forgery variants, we frame face forgery detection as a continual learning problem and allow the model to scale in complexity as new forgery types emerge. Specifically, we employ a Developmental Mixture of Experts (MoE) architecture, utilizing LoRA models as the individual experts, allocating the experts into two groups: a Real-LoRA to refine the real face knowledge modeled by the backbone and Fake-LoRAs to capture incremental fake face information from different types for each sub-task. To prevent catastrophic forgetting, we ensure that the learning direction of Fake-LoRAs is orthogonal to the established subspace. Moreover, we integrate orthogonal gradients into the orthogonal loss of Fake-LoRAs to alleviate the interference of gradients on previously learned tasks during the early training phase. 
Experimental results under both the dataset-incremental and manipulation-type-incremental protocols demonstrate the effectiveness of our method.", "arxiv_id": "2509.19230v2", "arxiv_authors": ["Tianshuo Zhang", "Li Gao", "Siran Peng", "Xiangyu Zhu", "Zhen Lei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a18d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1052571, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5fc"}, "filepath": "data/2509.23829v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996670391354856, "type": "Poster", "name": "DexFlyWheel: A Scalable and Self-improving Data Generation Framework for Dexterous Manipulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117303", "abstract": "Dexterous manipulation is critical to advancing robot capabilities in real-world applications, yet diverse and high-quality datasets remain scarce. Existing data collection methods either rely on human teleoperation or require significant human engineering, or are merely limited to grasping, restricting their scalability and generalization. In this paper, we introduce DexFlyWheel, a scalable data generation framework that employs a self-improving cycle to iteratively expand data diversity. Starting from an efficient seed-demonstration warmup, our framework expands data diversity via multiple iterations in the self-improving cycle. Each iteration follows a closed-loop pipeline that combines imitation learning, reinforcement learning, rollout trajectory collection, and data augmentation. At each iteration, we first use imitation learning to extract behavioral priors from demonstrations and employ reinforcement learning to enhance generalization. Based on our policy, we roll out trajectories in simulation and then augment these across different environments and object positions. As iterations progress, our framework generates more diverse data, including various objects, environments, and object positions. Experimental results show that policies trained on our dataset achieve an average success rate of 81.9\\% on the challenge test sets, with a real-world transfer success rate of 78.3\\% on dual-arm lift tasks. 
Videos can be found on our project website https://DexFlyWheel.github.io.", "arxiv_id": "2509.23829v1", "arxiv_authors": ["Kefei Zhu", "Fengshuo Bai", "YuanHao Xiang", "Yishuai Cai", "Xinglin Chen", "Ruochong Li", "Xingtao Wang", "Hao Dong", "Yaodong Yang", "Xiaopeng Fan", "Yuanpei Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a18e"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3581471, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5fd"}, "filepath": "data/2505.11032v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996351934363313, "type": "Poster", "name": "DexGarmentLab: Dexterous Garment Manipulation Environment with Generalizable Policy", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117347", "abstract": "Garment manipulation is a critical challenge due to the diversity in garment categories, geometries, and deformations. Despite this, humans can effortlessly handle garments, thanks to the dexterity of our hands. However, existing research in the field has struggled to replicate this level of dexterity, primarily hindered by the lack of realistic simulations of dexterous garment manipulation. Therefore, we propose DexGarmentLab, the first environment specifically designed for dexterous (especially bimanual) garment manipulation, which features large-scale high-quality 3D assets for 15 task scenarios, and refines simulation techniques tailored for garment modeling to reduce the sim-to-real gap. Previous data collection typically relies on teleoperation or training expert reinforcement learning (RL) policies, which are labor-intensive and inefficient. In this paper, we leverage garment structural correspondence to automatically generate a dataset with diverse trajectories using only a single expert demonstration, significantly reducing manual intervention. However, even extensive demonstrations cannot cover the infinite states of garments, which necessitates the exploration of new algorithms. To improve generalization across diverse garment shapes and deformations, we propose a Hierarchical gArment-manipuLation pOlicy (HALO). It first identifies transferable affordance points to accurately locate the manipulation area, then generates generalizable trajectories to complete the task. Through extensive experiments and detailed analysis of our method and baseline, we demonstrate that HALO consistently outperforms existing methods, successfully generalizing to previously unseen instances even with significant variations in shape and deformation where others fail. 
Our project page is available at: https://dexgarmentlab-review.github.io/.", "arxiv_id": "2505.11032v3", "arxiv_authors": ["Yuran Wang", "Ruihai Wu", "Yue Chen", "Jiarui Wang", "Jiaqi Liang", "Ziyu Zhu", "Haoran Geng", "Jitendra Malik", "Pieter Abbeel", "Hao Dong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a18f"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1124465, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5fe"}, "filepath": "data/2510.14741v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990143643495321, "type": "Poster", "name": "DEXTER: Diffusion-Guided EXplanations with TExtual Reasoning for Vision Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117167", "abstract": "Understanding and explaining the behavior of machine learning models is essential for building transparent and trustworthy AI systems. We introduce DEXTER, a data-free framework that combines diffusion models and large language models to generate global, textual explanations of visual classifiers. DEXTER operates by optimizing text prompts to synthesize class-conditional images that strongly activate a target classifier. These synthetic samples are then used to elicit detailed natural language reports that describe class-specific decision patterns and biases. Unlike prior work, DEXTER enables natural language reasoning about a classifier's decision process without access to training data or ground-truth labels. We demonstrate DEXTER's flexibility across three tasks\u2014activation maximization, slice discovery and debiasing, and bias explanation\u2014each illustrating its ability to uncover the internal mechanisms of visual classifiers. Quantitative and qualitative evaluations, including a user study, show that DEXTER produces accurate, interpretable outputs. Experiments on ImageNet, Waterbirds, CelebA, and FairFaces confirm that DEXTER outperforms existing approaches in global model explanation and class-level bias reporting.", "arxiv_id": "2510.14741v1", "arxiv_authors": ["Simone Carnemolla", "Matteo Pennisi", "Sarinda Samarasinghe", "Giovanni Bellitto", "Simone Palazzo", "Daniela Giordano", "Mubarak Shah", "Concetto Spampinato"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a190"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1032901, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a5ff"}, "filepath": "data/2506.09997v1.png", "tags": [], "_media_type": "image", "_rand": 0.999015891781508, "type": "Poster", "name": "DGS-LRM: Real-Time Deformable 3D Gaussian Reconstruction From Monocular Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117552", "abstract": "We introduce the Deformable Gaussian Splats Large Reconstruction Model (DGS-LRM), the first feed-forward method predicting deformable 3D Gaussian splats from a monocular posed video of any dynamic scene. 
Feed-forward scene reconstruction has gained significant attention for its ability to rapidly create digital replicas of real-world environments. However, most existing models are limited to static scenes and fail to reconstruct the motion of moving objects. Developing a feed-forward model for dynamic scene reconstruction poses significant challenges, including the scarcity of training data and the need for appropriate 3D representations and training paradigms. To address these challenges, we introduce several key technical contributions: an enhanced large-scale synthetic dataset with ground-truth multi-view videos and dense 3D scene flow supervision; a per-pixel deformable 3D Gaussian representation that is easy to learn, supports high-quality dynamic view synthesis, and enables long-range 3D tracking; and a large transformer network that achieves real-time, generalizable dynamic scene reconstruction. Extensive qualitative and quantitative experiments demonstrate that DGS-LRM achieves dynamic scene reconstruction quality comparable to optimization-based methods, while significantly outperforming the state-of-the-art predictive dynamic reconstruction method on real-world examples. Its predicted physically grounded 3D deformation is accurate and can be readily adapted for long-range 3D tracking tasks, achieving performance on par with state-of-the-art monocular video 3D tracking methods.", "arxiv_id": "2506.09997v1", "arxiv_authors": ["Chieh Hubert Lin", "Zhaoyang Lv", "Songyin Wu", "Zhen Xu", "Thu Nguyen-Phuoc", "Hung-Yu Tseng", "Julian Straub", "Numair Khan", "Lei Xiao", "Ming-Hsuan Yang", "Yuheng Ren", "Richard Newcombe", "Zhao Dong", "Zhengqin Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a191"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3620574, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a600"}, "filepath": "data/2504.21487v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998451342832496, "type": "Poster", "name": "DGSolver: Diffusion Generalist Solver with Universal Posterior Sampling for Image Restoration", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116716", "abstract": "Diffusion models have achieved remarkable progress in universal image restoration. However, existing methods perform naive inference in the reverse process, which leads to cumulative errors under limited sampling steps and large step intervals. Moreover, they struggle to balance the commonality of degradation representations with restoration quality, often depending on complex compensation mechanisms that enhance fidelity at the expense of efficiency. To address these challenges, we introduce DGSolver, a diffusion generalist solver with universal posterior sampling. We first derive the exact ordinary differential equations for generalist diffusion models to unify degradation representations and design tailored high-order solvers with a queue-based accelerated sampling strategy to improve both accuracy and efficiency. We then integrate universal posterior sampling to better approximate manifold-constrained gradients, yielding a more accurate noise estimation and correcting errors in inverse inference. 
Extensive experiments demonstrate that DGSolver outperforms state-of-the-art methods in restoration accuracy, stability, and scalability, both qualitatively and quantitatively.", "arxiv_id": "2504.21487v2", "arxiv_authors": ["Hebaixu Wang", "Jing Zhang", "Haonan Guo", "Di Wang", "Jiayi Ma", "Bo Du"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a192"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 970989, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a601"}, "filepath": "data/2502.17157v3.png", "tags": [], "_media_type": "image", "_rand": 0.9995960399937462, "type": "Poster", "name": "DICEPTION: A Generalist Diffusion Model for Visual Perceptual Tasks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116654", "abstract": "This paper's primary objective is to develop a robust generalist perception model capable of addressing multiple tasks under constraints of computational resources and limited training data. We leverage text-to-image diffusion models pre-trained on billions of images and successfully introduce our DICEPTION, a visual generalist model. Exhaustive evaluations demonstrate that DICEPTION effectively tackles diverse perception tasks, even achieving performance comparable to SOTA single-task specialist models. Specifically, we achieve results on par with SAM-vit-h using only 0.06% of their data (e.g., 600K vs.\\ 1B pixel-level annotated images). We designed comprehensive experiments on architectures and input paradigms, demonstrating that the key to successfully re-purposing a single diffusion model for multiple perception tasks lies in maximizing the preservation of the pre-trained model's prior knowledge. Consequently, DICEPTION can be trained with substantially lower computational costs than conventional models requiring training from scratch. Furthermore, adapting DICEPTION to novel tasks is highly efficient, necessitating fine-tuning on as few as 50 images and approximately 1% of its parameters. Finally, we demonstrate that a subtle application of classifier-free guidance can improve the model's performance on depth and normal estimation. We also show that pixel-aligned training, as is characteristic of perception tasks, significantly enhances the model's ability to preserve fine details. 
DICEPTION offers valuable insights and presents a promising direction for the development of advanced diffusion-based visual generalist models.", "arxiv_id": "2502.17157v3", "arxiv_authors": ["Canyu Zhao", "Yanlong Sun", "Mingyu Liu", "Huanyi Zheng", "Muzhi Zhu", "Zhiyue Zhao", "Hao Chen", "Tong He", "Chunhua Shen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a193"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1097511, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a602"}, "filepath": "data/2505.11196v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994728946997219, "type": "Poster", "name": "DiCo: Revitalizing ConvNets for Scalable and Efficient Diffusion Modeling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117743", "abstract": "Diffusion Transformer (DiT), a promising diffusion model for visual generation, demonstrates impressive performance but incurs significant computational overhead. Intriguingly, analysis of pre-trained DiT models reveals that global self-attention is often redundant, predominantly capturing local patterns\u2014highlighting the potential for more efficient alternatives. In this paper, we revisit convolution as an alternative building block for constructing efficient and expressive diffusion models. However, naively replacing self-attention with convolution typically results in degraded performance. Our investigations attribute this performance gap to the higher channel redundancy in ConvNets compared to Transformers. To resolve this, we introduce a compact channel attention mechanism that promotes the activation of more diverse channels, thereby enhancing feature diversity. This leads to Diffusion ConvNet (DiCo), a family of diffusion models built entirely from standard ConvNet modules, offering strong generative performance with significant efficiency gains. On class-conditional ImageNet benchmarks, DiCo outperforms previous diffusion models in both image quality and generation speed. Notably, DiCo-XL achieves an FID of **2.05** at 256$\\times$256 resolution and **2.53** at 512$\\times$512, with a **2.7$\\times$** and **3.1$\\times$** speedup over DiT-XL/2, respectively. 
Furthermore, our largest model, DiCo-H, scaled to 1B parameters, reaches an FID of **1.90** on ImageNet 256$\\times$256\u2014without any additional supervision during training.", "arxiv_id": "2505.11196v2", "arxiv_authors": ["Yuang Ai", "Qihang Fan", "Xuefeng Hu", "Zhenheng Yang", "Ran He", "Huaibo Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a194"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5923060, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a603"}, "filepath": "data/2509.18096v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996396188688513, "type": "Poster", "name": "Diff4Seg: Unveiling Open-Vocabulary Semantic Segmentation in Text-to-Image Diffusion Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119138", "abstract": "Text-to-image diffusion models excel at translating language prompts into photorealistic images by implicitly grounding textual concepts in their cross-modal attention mechanisms. Prior work has exploited these attention maps for downstream tasks such as editing, inpainting, and zero-shot open-vocabulary semantic segmentation (OVSS), but a detailed understanding of how these maps contribute to image generation remains limited. Recent architectural advances like Multi-Modal Diffusion Transformers (MM-DiTs) introduce joint self-attention over concatenated image and text tokens, enabling richer and more scalable cross-modal alignment. In this work, we systematically analyze the attention structures of MM-DiT, focusing on how specific heads and layers propagate semantic information and influence generation quality. By decomposing attention score distributions and attention norms across layers, we identify a subset of heads that consistently align text tokens with spatially coherent image regions, naturally yielding high-quality zero-shot segmentation masks. We then introduce a lightweight LoRA-based fine-tuning method to enhance the semantic grouping capabilities of these heads without degrading\u2014 and often improving\u2014image fidelity. 
Our findings demonstrate that semantic alignment is an emergent property of diffusion transformers and can be selectively amplified to improve both dense recognition and generative performance, paving the way toward unified models that bridge generation and perception.", "arxiv_id": "2509.18096v1", "arxiv_authors": ["Chaehyun Kim", "Heeseong Shin", "Eunbeen Hong", "Heeji Yoon", "Anurag Arnab", "Paul Hongsuck Seo", "Sunghwan Hong", "Seungryong Kim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a195"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.479Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2399406, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a604"}, "filepath": "data/2505.19516v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994129678969048, "type": "Poster", "name": "DiffE2E: Rethinking End-to-End Driving with a Hybrid Action Diffusion and Supervised Policy", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117605", "abstract": "End-to-end learning has emerged as a transformative paradigm in autonomous driving research. However, the inherently multimodal nature of driving behaviors and the generalization challenges in long-tail scenarios remain critical obstacles to robust deployment. We propose DiffE2E, a diffusion-based end-to-end autonomous driving framework. This framework first performs multi-scale alignment of multi-sensor perception features through a hierarchical bidirectional cross-attention mechanism. It then introduces a novel class of hybrid diffusion-supervision decoders based on the Transformer architecture, and adopts a collaborative training paradigm that seamlessly integrates the strengths of both diffusion and supervised policies. DiffE2E models structured latent spaces, where diffusion captures the distribution of future trajectories and supervision enhances controllability and robustness. A global condition integration module enables deep fusion of perception features with high-level targets, significantly improving the quality of trajectory generation. Subsequently, a cross-attention mechanism facilitates efficient interaction between integrated features and hybrid latent variables, promoting the joint optimization of diffusion and supervision objectives for structured output generation, ultimately leading to more robust control. Experiments demonstrate that DiffE2E achieves state-of-the-art performance in both CARLA closed-loop evaluations and NAVSIM benchmarks. 
The proposed integrated diffusion-supervision policy offers a generalizable paradigm for hybrid action representation, with strong potential for extension to broader domains including embodied intelligence.", "arxiv_id": "2505.19516v1", "arxiv_authors": ["Rui Zhao", "Yuze Fan", "Ziguo Chen", "Fei Gao", "Zhenhai Gao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a196"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.480Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1025618, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a605"}, "filepath": "data/2509.16767v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995676909485294, "type": "Poster", "name": "DiffEye: Diffusion-Based Continuous Eye-Tracking Data Generation Conditioned on Natural Images", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118243", "abstract": "To better understand how the complex human attention system operates on images, numerous models have been developed for scanpath and saliency prediction. These models are typically trained on compressed representations of raw eye-tracking data, referred to as scanpaths, while the rich information contained in the raw trajectories is often discarded. Moreover, most existing approaches fail to capture the variability observed among human subjects viewing the same image. They generally predict a single scanpath of fixed, pre-defined length, which conflicts with the inherent diversity and stochastic nature of real-world visual attention. To address these challenges, we propose DiffEye, a diffusion-based training framework designed to model continuous and diverse eye movement trajectories during free viewing of natural images. Our method builds on a diffusion model conditioned on visual stimuli and introduces a novel component, namely Corresponding Positional Embedding (CPE), which aligns spatial gaze information with the patch-based semantic features of the visual input. By leveraging raw eye-tracking trajectories rather than relying on scanpaths, DiffEye captures the inherent variability in human gaze behavior and generates high-quality, realistic eye movement patterns, despite being trained on a comparatively small dataset. The generated trajectories can also be converted into scanpaths and saliency maps, resulting in outputs that more accurately reflect the distribution of human visual attention. DiffEye is the first method to tackle this task on natural images using a diffusion model while fully leveraging the richness of raw eye-tracking data. Our extensive evaluation shows that DiffEye not only achieves state-of-the-art performance in scanpath generation but also enables, for the first time, the generation of continuous eye movement trajectories.", "arxiv_id": "2509.16767v2", "arxiv_authors": ["Ozgur Kara", "Harris Nisar", "James M. 
Rehg"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a197"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.480Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1112282, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a606"}, "filepath": "data/2505.17955v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998913872590843, "type": "Poster", "name": "Diffusion Classifiers Understand Compositionality, but Conditions Apply", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121551", "abstract": "Understanding visual scenes is fundamental to human intelligence. While discriminative models have significantly advanced computer vision, they often struggle with compositional understanding. In contrast, recent generative text-to-image diffusion models excel at synthesizing complex scenes, suggesting inherent compositional capabilities.Building on this, zero-shot diffusion classifiers have been proposed to repurpose diffusion models for discriminative tasks. While prior work offered promising results in discriminative compositional scenarios, these results remain preliminary due to a small number of benchmarks and a relatively shallow analysis of conditions under which the models succeed. To address this, we present a comprehensive study of the discriminative capabilities of diffusion classifiers on a wide range of compositional tasks. Specifically, our study covers three diffusion models (SD 1.5, 2.0, and, for the first time, 3-m) spanning 10 datasets and over 30 tasks. Further, we shed light on the role that target dataset domains play in respective performance; to isolate the domain effects, we introduce a new diagnostic benchmark Self-Bench comprised of images created by diffusion models themselves. Finally, we explore the importance of timestep weighting and uncover a relationship between domain gap and timestep sensitivity, particularly for SD3-m.To sum up, diffusion classifiers understand compositionality, but conditions apply!", "arxiv_id": "2505.17955v2", "arxiv_authors": ["Yujin Jeong", "Arnas Uselis", "Seong Joon Oh", "Anna Rohrbach"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a198"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.480Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1265592, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a607"}, "filepath": "data/2510.03608v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990779558422437, "type": "Poster", "name": "Diffusion-Classifier Synergy: Reward-Aligned Learning via Mutual Boosting Loop for FSCIL", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115482", "abstract": "Few-Shot Class-Incremental Learning (FSCIL) challenges models to sequentially learn new classes from minimal examples without forgetting prior knowledge, a task complicated by the stability-plasticity dilemma and data scarcity. Current FSCIL methods often struggle with generalization due to their reliance on limited datasets. 
While diffusion models offer a path for data augmentation, their direct application can lead to semantic misalignment or ineffective guidance. This paper introduces Diffusion-Classifier Synergy (DCS), a novel framework that establishes a mutual boosting loop between diffusion model and FSCIL classifier. DCS utilizes a reward-aligned learning strategy, where a dynamic, multi-faceted reward function derived from the classifier's state directs the diffusion model. This reward system operates at two levels: the feature level ensures semantic coherence and diversity using prototype-anchored maximum mean discrepancy and dimension-wise variance matching, while the logits level promotes exploratory image generation and enhances inter-class discriminability through confidence recalibration and cross-session confusion-aware mechanisms. This co-evolutionary process, where generated images refine the classifier and an improved classifier state yields better reward signals, demonstrably achieves state-of-the-art performance on FSCIL benchmarks, significantly enhancing both knowledge retention and new class learning.", "arxiv_id": "2510.03608v2", "arxiv_authors": ["Ruitao Wu", "Yifan Zhao", "Guangyao Chen", "Jia Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a199"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.480Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1106385, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a608"}, "filepath": "data/2510.22229v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995370909070442, "type": "Poster", "name": "Diffusion-Driven Two-Stage Active Learning for Low-Budget Semantic Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118018", "abstract": "Semantic segmentation demands dense pixel-level annotations, which can be prohibitively expensive -- especially under extremely constrained labeling budgets. In this paper, we address the problem of low-budget active learning for semantic segmentation by proposing a novel two-stage selection pipeline. Our approach leverages a pre-trained diffusion model to extract rich multi-scale features that capture both global structure and fine details. In the first stage, we perform a hierarchical, representation-based candidate selection by first choosing a small subset of representative pixels per image using MaxHerding, and then refining these into a diverse global pool. In the second stage, we compute an entropy\u2010augmented disagreement score (eDALD) over noisy multi\u2010scale diffusion features to capture both epistemic uncertainty and prediction confidence, selecting the most informative pixels for annotation. This decoupling of diversity and uncertainty lets us achieve high segmentation accuracy with only a tiny fraction of labeled pixels. Extensive experiments on four benchmarks (ADE-Bed, CamVid, Cityscapes, and Pascal-Context) demonstrate that our method significantly outperforms existing baselines under extreme pixel\u2010budget regimes.", "arxiv_id": "2510.22229v1", "arxiv_authors": ["Jeongin Kim", "Wonho Bae", "YouLee Han", "Giyeong Oh", "Youngjae Yu", "Danica J. 
Sutherland", "Junhyug Noh"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a19a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.480Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1038921, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a609"}, "filepath": "data/2502.01051v5.png", "tags": [], "_media_type": "image", "_rand": 0.9998700427778988, "type": "Poster", "name": "Diffusion Model as a Noise-Aware Latent Reward Model for Step-Level Preference Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117442", "abstract": "Preference optimization for diffusion models aims to align them with human preferences for images. Previous methods typically use Vision-Language Models (VLMs) as pixel-level reward models to approximate human preferences. However, when used for step-level preference optimization, these models face challenges in handling noisy images of different timesteps and require complex transformations into pixel space. In this work, we show that pre-trained diffusion models are naturally suited for step-level reward modeling in the noisy latent space, as they are explicitly designed to process latent images at various noise levels. Accordingly, we propose the **Latent Reward Model (LRM)**, which repurposes components of the diffusion model to predict preferences of latent images at arbitrary timesteps. Building on LRM, we introduce **Latent Preference Optimization (LPO)**, a step-level preference optimization method conducted directly in the noisy latent space. Experimental results indicate that LPO significantly improves the model's alignment with general, aesthetic, and text-image alignment preferences, while achieving a 2.5-28x training speedup over existing preference optimization methods.", "arxiv_id": "2502.01051v5", "arxiv_authors": ["Tao Zhang", "Cheng Da", "Kun Ding", "Huan Yang", "Kun Jin", "Yan Li", "Tingting Gao", "Di Zhang", "Shiming Xiang", "Chunhong Pan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a19b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.480Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1087307, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a60a"}, "filepath": "data/2505.23325v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997493296060084, "type": "Poster", "name": "Dimension-Reduction Attack! Video Generative Models are Experts on Controllable Image Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118583", "abstract": "Video generative models can be regarded as world simulators due to their ability to capture dynamic, continuous changes inherent in real-world environments. These models integrate high-dimensional information across visual, temporal, spatial, and causal dimensions, enabling predictions of subjects in various status. A natural and valuable research direction is to explore whether a fully trained video generative model in high-dimensional space can effectively support lower-dimensional tasks such as controllable image generation. 
In this work, we propose a paradigm for video-to-image knowledge compression and task adaptation, termed \\textit{Dimension-Reduction Attack} (\\texttt{DRA-Ctrl}), which utilizes the strengths of video models, including long-range context modeling and flattened full attention, to perform various generation tasks. Specifically, to address the challenging gap between continuous video frames and discrete image generation, we introduce a mixup-based transition strategy that ensures smooth adaptation. Moreover, we redesign the attention structure with a tailored masking mechanism to better align text prompts with image-level control. Experiments across diverse image generation tasks, such as subject-driven and spatially conditioned generation, show that repurposed video models outperform those trained directly on images. These results highlight the untapped potential of large-scale video generators for broader visual applications. \\texttt{DRA-Ctrl} provides new insights into reusing resource-intensive video models and lays the foundation for future unified generative models across visual modalities.", "arxiv_id": "2505.23325v1", "arxiv_authors": ["Hengyuan Cao", "Yutong Feng", "Biao Gong", "Yijing Tian", "Yunhong Lu", "Chuang Liu", "Bin Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a19c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3581484, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a60b"}, "filepath": "data/2412.11673v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991626257448962, "type": "Poster", "name": "DINO-Foresight: Looking into the Future with DINO", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116713", "abstract": "Predicting future dynamics is crucial for applications like autonomous driving and robotics, where understanding the environment is key. Existing pixel-level methods are computationally expensive and often focus on irrelevant details. To address these challenges, we introduce DINO-Foresight, a novel framework that operates in the semantic feature space of pretrained Vision Foundation Models (VFMs). Our approach trains a masked feature transformer in a self-supervised manner to predict the evolution of VFM features over time. By forecasting these features, we can apply off-the-shelf, task-specific heads for various scene understanding tasks. In this framework, VFM features are treated as a latent space, to which different heads attach to perform specific tasks for future-frame analysis. 
Extensive experiments show the strong performance, robustness, and scalability of our framework.", "arxiv_id": "2412.11673v1", "arxiv_authors": ["Efstathios Karypidis", "Ioannis Kakogeorgiou", "Spyros Gidaris", "Nikos Komodakis"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a19d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1599408, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a60c"}, "filepath": "data/2505.20460v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999457949611625, "type": "Poster", "name": "DIPO: Dual-State Images Controlled Articulated Object Generation Powered by Diverse Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118083", "abstract": "We present **DIPO**, a novel framework for the controllable generation of articulated 3D objects from a pair of images: one depicting the object in a resting state and the other in an articulated state. Compared to the single-image approach, our dual-image input imposes only a modest overhead for data collection, but at the same time provides important motion information, which is a reliable guide for predicting kinematic relationships between parts. Specifically, we propose a dual-image diffusion model that captures relationships between the image pair to generate part layouts and joint parameters. In addition, we introduce a Chain-of-Thought (CoT) based **graph reasoner** that explicitly infers part connectivity relationships. To further improve robustness and generalization on complex articulated objects, we develop a fully automated dataset expansion pipeline, named **LEGO-Art**, that enriches the diversity and complexity of the PartNet-Mobility dataset. 
We propose **PM-X**, a large-scale dataset of complex articulated 3D objects, accompanied by rendered images, URDF annotations, and textual descriptions. Extensive experiments demonstrate that DIPO significantly outperforms existing baselines in both the resting state and the articulated state, while the proposed PM-X dataset further enhances generalization to diverse and structurally complex articulated objects. Our code and dataset will be released to the community upon publication.", "arxiv_id": "2505.20460v2", "arxiv_authors": ["Ruiqi Wu", "Xinjie Wang", "Liu Liu", "Chunle Guo", "Jiaxiong Qiu", "Chongyi Li", "Lichao Huang", "Zhizhong Su", "Ming-Ming Cheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a19e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070620, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a60d"}, "filepath": "data/2505.17412v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991744694869286, "type": "Poster", "name": "Direct3D-S2: Gigascale 3D Generation Made Easy with Spatial Sparse Attention", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117349", "abstract": "Generating high-resolution 3D shapes using volumetric representations such as Signed Distance Functions (SDFs) presents substantial computational and memory challenges. We introduce Direct3D-S2, a scalable 3D generation framework based on sparse volumes that achieves superior output quality with dramatically reduced training costs. Our key innovation is the Spatial Sparse Attention (SSA) mechanism, which greatly enhances the efficiency of Diffusion Transformer (DiT) computations on sparse volumetric data. SSA allows the model to effectively process large token sets within sparse volumes, significantly reducing computational overhead and achieving a 3.9$\\times$ speedup in the forward pass and a 9.6$\\times$ speedup in the backward pass. Our framework also includes a variational autoencoder (VAE) that maintains a consistent sparse volumetric format across input, latent, and output stages. 
Compared to previous methods with heterogeneous representations in 3D VAE, this unified design significantly improves training efficiency and stability. Our model is trained on public datasets, and experiments demonstrate that Direct3D-S2 not only surpasses state-of-the-art methods in generation quality and efficiency, but also enables training at $1024^3$ resolution using only 8 GPUs\u2014a task typically requiring at least 32 GPUs for volumetric representations at $256^3$ resolution, thus making gigascale 3D generation both practical and accessible.", "arxiv_id": "2505.17412v2", "arxiv_authors": ["Shuang Wu", "Youtian Lin", "Feihu Zhang", "Yifei Zeng", "Yikang Yang", "Yajie Bao", "Jiachen Qian", "Siyu Zhu", "Xun Cao", "Philip Torr", "Yao Yao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a19f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 7644688, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a60e"}, "filepath": "data/2508.14264v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993781528241075, "type": "Poster", "name": "Directed-Tokens: A Robust Multi-Modality Alignment Approach to Large Language-Vision Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116581", "abstract": "Large multimodal models (LMMs) have achieved impressive performance due to their outstanding capability in various understanding tasks. However, these models still suffer from some fundamental limitations related to robustness and generalization due to the alignment and correlation between visual and textual features. In this paper, we introduce a simple but efficient learning mechanism for improving the robust alignment between visual and textual modalities by solving shuffling problems. In particular, the proposed approach can improve reasoning capability, visual understanding, and cross-modality alignment by introducing two new tasks: reconstructing the image order and the text order into the LMM's pre-training and fine-tuning phases. In addition, we propose a new directed-token approach to capture visual and textual knowledge, enabling the capability to reconstruct the correct order of visual inputs. Then, we introduce a new Image-to-Response Guided loss to further improve the visual understanding of the LMM in its responses. 
The proposed approach consistently achieves state-of-the-art (SoTA) performance compared with prior LMMs on academic task-oriented and instruction-following LMM benchmarks.", "arxiv_id": "2508.14264v1", "arxiv_authors": ["Thanh-Dat Truong", "Huu-Thien Tran", "Tran Thai Son", "Bhiksha Raj", "Khoa Luu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1089286, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a60f"}, "filepath": "data/2506.05341v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992977301863453, "type": "Poster", "name": "Direct Numerical Layout Generation for 3D Indoor Scene Synthesis via Spatial Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118104", "abstract": "Realistic 3D indoor scene synthesis is crucial for Embodied AI and digital content creation. However, achieving high fidelity, strong generalization and precise controllability remains challenging due to complex semantic and physical constraints. Existing methods follow two paradigms: (1) Training models on layout datasets to directly generate numerical 3D layouts, which often generalize poorly to unseen room types; (2) Using LLMs/VLMs to produce open-vocabulary intermediate representations (e.g., scene graphs) followed by constraint-based optimization, improving plausibility but sacrificing flexibility due to predefined rules. Both approaches struggle to adapt to fine-grained user requirements. We introduce DirectLayout, a framework that directly generates numerical 3D layouts from text descriptions, without relying on intermediate representations and constrained optimization. DirectLayout decomposes the generation into three stages: producing a Bird's-Eye View (BEV) layout, lifting it into 3D space, and refining object placements for plausibility. To enable explicit spatial reasoning and help the model grasp basic principles of object placement, we employ Chain-of-Thought (CoT) activation based on the 3D-Front dataset. Additionally, we design CoT-Grounded Generative Layout Reward to enhance generalization and spatial planning. During inference, DirectLayout addresses asset-layout mismatches via Iterative Asset-Layout Alignment through in-context learning. 
Extensive experiments demonstrate that DirectLayout achieves impressive semantic consistency, generalization and physical plausibility.", "arxiv_id": "2506.05341v2", "arxiv_authors": ["Xingjian Ran", "Yixuan Li", "Linning Xu", "Mulin Yu", "Bo Dai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1571130, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a610"}, "filepath": "data/2505.21089v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996575132871545, "type": "Poster", "name": "DisasterM3: A Remote Sensing Vision-Language Dataset for Disaster Damage Assessment and Response", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121427", "abstract": "Large vision-language models (VLMs) have made great achievements in Earth vision. However, complex disaster scenes with diverse disaster types, geographic regions, and satellite sensors have posed new challenges for VLM applications. To fill this gap, we curate a remote sensing vision-language dataset (DisasterM3) for global-scale disaster assessment and response. DisasterM3 includes 26,988 bi-temporal satellite images and 123k instruction pairs across 5 continents, with three characteristics: **1) Multi-hazard**: DisasterM3 involves 36 historical disaster events with significant impacts, which are categorized into 10 common natural and man-made disasters. **2)Multi-sensor**: Extreme weather during disasters often hinders optical sensor imaging, making it necessary to combine Synthetic Aperture Radar (SAR) imagery for post-disaster scenes. **3) Multi-task**: Based on real-world scenarios, DisasterM3 includes 9 disaster-related visual perception and reasoning tasks, harnessing the full potential of VLM's reasoning ability with progressing from disaster-bearing body recognition to structural damage assessment and object relational reasoning, culminating in the generation of long-form disaster reports. We extensively evaluated 14 generic and remote sensing VLMs on our benchmark, revealing that state-of-the-art models struggle with the disaster tasks, largely due to the lack of a disaster-specific corpus, cross-sensor gap, and damage object counting insensitivity. 
Focusing on these issues, we fine-tune four VLMs using our dataset and achieve stable improvements across all tasks, with robust cross-sensor and cross-disaster generalization capabilities.", "arxiv_id": "2505.21089v2", "arxiv_authors": ["Junjue Wang", "Weihao Xuan", "Heli Qi", "Zhihao Liu", "Kunyi Liu", "Yuhan Wu", "Hongruixuan Chen", "Jian Song", "Junshi Xia", "Zhuo Zheng", "Naoto Yokoya"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1134596, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a611"}, "filepath": "data/2510.22107v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999720455371596, "type": "Poster", "name": "Discovering Latent Graphs with GFlowNets for Diverse Conditional Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118199", "abstract": "Capturing diversity is crucial in conditional and prompt-based image generation, particularly when conditions contain uncertainty that can lead to multiple plausible outputs. To generate diverse images reflecting this diversity, traditional methods often modify random seeds, making it difficult to discern meaningful differences between samples, or diversify the input prompt, which is limited in verbally interpretable diversity. We propose \\modelnamenospace, a novel conditional image generation framework, applicable to any pretrained conditional generative model, that addresses inherent condition/prompt uncertainty and generates diverse plausible images. \\modelname is based on a simple yet effective idea: decomposing the input condition into diverse latent representations, each capturing an aspect of the uncertainty and generating a distinct image. First, we integrate a latent graph, parameterized by Generative Flow Networks (GFlowNets), into the prompt representation computation. Second, leveraging GFlowNets' advanced graph sampling capabilities to capture uncertainty and output diverse trajectories over the graph, we produce multiple trajectories that collectively represent the input condition, leading to diverse condition representations and corresponding output images. Evaluations on natural image and medical image datasets demonstrate \\modelnamenospace\u2019s improvement in both diversity and fidelity across image synthesis, image generation, and counterfactual generation tasks.", "arxiv_id": "2510.22107v1", "arxiv_authors": ["Bailey Trang", "Parham Saremi", "Alan Q. 
Wang", "Fangrui Huang", "Zahra TehraniNasab", "Amar Kumar", "Tal Arbel", "Li Fei-Fei", "Ehsan Adeli"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1032746, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a612"}, "filepath": "data/2505.01917v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996452290108889, "type": "Poster", "name": "Discrete Spatial Diffusion: Intensity-Preserving Diffusion Modeling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116326", "abstract": "Generative diffusion models have achieved remarkable success in producing high-quality images. However, because these models typically operate in continuous intensity spaces\u2014diffusing independently per pixel and color channel\u2014they are fundamentally ill-suited for applications where quantities such as particle counts or material units are inherently discrete and governed by strict conservation laws like mass preservation, which limits their applicability in scientific workflows. To address this limitation, we propose Discrete Spatial Diffusion (DSD), a framework based on a continuous-time, discrete-state jump stochastic process that operates directly in discrete spatial domains while strictly preserving mass in both forward and reverse diffusion processes. By using spatial diffusion to achieve mass preservation, we introduce stochasticity naturally through a discrete formulation. We demonstrate the expressive flexibility of DSD by performing image synthesis, class conditioning, and image inpainting across widely-used image benchmarks, with the ability to condition on image intensity. Additionally, we highlight its applicability to domain-specific scientific data for materials microstructure, bridging the gap between diffusion models and mass-conditioned scientific applications.", "arxiv_id": "2505.01917v2", "arxiv_authors": ["Javier E. Santos", "Agnese Marcato", "Roman Colman", "Nicholas Lubbers", "Yen Ting Lin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a4"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1000539, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a613"}, "filepath": "data/2504.13140v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997276872507461, "type": "Poster", "name": "Disentangled Concepts Speak Louder Than Words: Explainable Video Action Recognition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115949", "abstract": "Effective explanations of video action recognition models should disentangle how movements unfold over time from the surrounding spatial context. However, existing methods\u2014based on saliency\u2014produce entangled explanations, making it unclear whether predictions rely on motion or spatial context. Language-based approaches offer structure but often fail to explain motions due to their tacit nature\u2014intuitively understood but difficult to verbalize. 
To address these challenges, we propose Disentangled Action aNd Context concept-based Explainable (DANCE) video action recognition, a framework that predicts actions through disentangled concept types: motion dynamics, objects, and scenes. We define motion dynamics concepts as human pose sequences. We employ a large language model to automatically extract object and scene concepts. Built on an ante-hoc concept bottleneck design, DANCE enforces prediction through these concepts. Experiments on four datasets\u2014KTH, Penn Action, HAA500, and UCF101\u2014demonstrate that DANCE significantly improves explanation clarity with competitive performance. Through a user study, we validate the superior interpretability of DANCE. Experimental results also show that DANCE is beneficial for model debugging, editing, and failure analysis.", "arxiv_id": "2504.13140v1", "arxiv_authors": ["Jongseo Lee", "Wooil Lee", "Gyeong-Moon Park", "Seong Tae Kim", "Jinwoo Choi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1933525, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a614"}, "filepath": "data/2509.21989v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999172310074222, "type": "Poster", "name": "Disentangling Diffusion Features for Detecting Inconsistencies in Subject-Driven Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119961", "abstract": "We propose a novel approach for disentangling visual and semantic features from the backbones of pre-trained diffusion models, enabling the detection of visually inconsistent regions in subject-driven image generation. While diffusion model backbones are known to encode semantically rich features, they should also contain visual features that capture appearance to support their image synthesis capabilities. However, isolating these visual features is non-trivial due to the absence of datasets with annotated visual correspondences. To address this, we design an automated dataset generation pipeline that produces image pairs with annotated semantic and visual correspondences based on existing subject-driven datasets. Using this dataset, we propose an architecture to disentangle semantic and visual features in a contrastive manner. We further propose a metric that leverages the disentangled features to quantify and localize inconsistencies in subject-driven generation. Experiments show that our approach significantly outperforms global feature-based metrics such as CLIP and DINO, as well as Vision-Language Models, in capturing visual inconsistencies.
To the best of our knowledge, this is the first approach that enables both quantification and spatial localization of inconsistency in subject-driven image generation, offering a valuable tool for advancing the task.", "arxiv_id": "2509.21989v1", "arxiv_authors": ["Abdelrahman Eldesokey", "Aleksandar Cvejic", "Bernard Ghanem", "Peter Wonka"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3087314, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a615"}, "filepath": "data/2506.09024v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997851093491867, "type": "Poster", "name": "DIsoN: Decentralized Isolation Networks for Out-of-Distribution Detection in Medical Imaging", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115312", "abstract": "Safe deployment of machine learning (ML) models in safety-critical domains such as medical imaging requires detecting inputs with characteristics not seen during training, known as out-of-distribution (OOD) detection, to prevent unreliable predictions. Effective OOD detection after deployment could benefit from access to the training data, enabling direct comparison between test samples and the training data distribution to identify differences. State-of-the-art OOD detection methods, however, either discard the training data after deployment or assume that test samples and training data are centrally stored together, an assumption that rarely holds in real-world settings. This is because shipping the training data with the deployed model is usually impossible due to the size of training databases, as well as proprietary or privacy constraints. We introduce the Isolation Network, an OOD detection framework that quantifies the difficulty of separating a target test sample from the training data by solving a binary classification task. We then propose Decentralized Isolation Networks (DIsoN), which enables the comparison of training and test data when data-sharing is impossible, by exchanging only model parameters between the remote computational nodes of training and deployment. We further extend DIsoN with class-conditioning, comparing a target sample solely with training data of its predicted class. We evaluate DIsoN on four medical imaging datasets (dermatology, chest X-ray, breast ultrasound, histopathology) across 12 OOD detection tasks. DIsoN performs favorably against existing methods while respecting data-privacy. This decentralized OOD detection framework opens the way for a new type of service that ML developers could provide along with their models: providing remote, secure utilization of their training data for OOD detection services. Code will be made available at: ************", "arxiv_id": "2506.09024v1", "arxiv_authors": ["Felix Wagner", "Pramit Saha", "Harry Anthony", "J.
Alison Noble", "Konstantinos Kamnitsas"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.481Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1038373, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a616"}, "filepath": "data/2508.09423v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993824824746869, "type": "Poster", "name": "Distilling LLM Prior to Flow Model for Generalizable Agent\u2019s Imagination in Object Goal Navigation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117630", "abstract": "The Object Goal Navigation (ObjectNav) task challenges agents to locate a specified object in an unseen environment by imagining unobserved regions of the scene. Prior approaches rely on deterministic and discriminative models to complete semantic maps, overlooking the inherent uncertainty in indoor layouts and limiting their ability to generalize to unseen environments. In this work, we propose GOAL, a generative flow-based framework that models the semantic distribution of indoor environments by bridging observed regions with LLM-enriched full-scene semantic maps. During training, spatial priors inferred from large language models (LLMs) are encoded as two-dimensional Gaussian fields and injected into target maps, distilling rich contextual knowledge into the flow model and enabling more generalizable completions. Extensive experiments demonstrate that GOAL achieves state-of-the-art performance on MP3D and Gibson, and shows strong generalization in transfer settings to HM3D.", "arxiv_id": "2508.09423v2", "arxiv_authors": ["Badi Li", "Ren-jie Lu", "Yu Zhou", "Jingke Meng", "Wei-shi Zheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1064987, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a617"}, "filepath": "data/2505.12191v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990637794456612, "type": "Poster", "name": "Ditch the Denoiser: Emergence of Noise Robustness in Self-Supervised Learning from Data Curriculum", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118208", "abstract": "Self-Supervised Learning (SSL) has become a powerful solution to extract rich representations from unlabeled data. Yet, SSL research is mostly focused on clean, curated and high-quality datasets. As a result, applying SSL on noisy data remains a challenge, despite being crucial to applications such as astrophysics, medical imaging, geophysics or finance. In this work, we present a fully self-supervised framework that enables noise-robust representation learning without requiring a denoiser at inference or downstream fine-tuning. 
Our method first trains an SSL denoiser on noisy data, then uses it to construct a denoised-to-noisy data curriculum (i.e., training first on denoised, then noisy samples) for pretraining an SSL backbone (e.g., DINOv2), combined with a teacher-guided regularization that anchors noisy embeddings to their denoised counterparts. This process encourages the model to internalize noise robustness. Notably, the denoiser can be discarded after pretraining, simplifying deployment. On ImageNet-1k with ViT-B under extreme Gaussian noise ($\\sigma=255$, SNR = 0.72 dB), our method improves linear probing accuracy by 4.8\\% over DINOv2, demonstrating that denoiser-free robustness can emerge from noise-aware pretraining.", "arxiv_id": "2505.12191v1", "arxiv_authors": ["Wenquan Lu", "Jiaqi Zhang", "Hugues Van Assel", "Randall Balestriero"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1a9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1123159, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a618"}, "filepath": "data/2503.09271v4.png", "tags": [], "_media_type": "image", "_rand": 0.9996870735641386, "type": "Poster", "name": "DitHub: A Modular Framework for Incremental Open-Vocabulary Object Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116948", "abstract": "Open-Vocabulary object detectors can generalize to an unrestricted set of categories through simple textual prompting. However, adapting these models to rare classes or reinforcing their abilities on multiple specialized domains remains essential. While recent methods rely on monolithic adaptation strategies with a single set of weights, we embrace modular deep learning. We introduce DitHub, a framework designed to build and maintain a library of efficient adaptation modules. Inspired by Version Control Systems, DitHub manages expert modules as branches that can be fetched and merged as needed. This modular approach allows us to conduct an in-depth exploration of the compositional properties of adaptation modules, marking the first such study in Object Detection.
Our method achieves state-of-the-art performance on the ODinW-13 benchmark and ODinW-O, a newly introduced benchmark designed to assess class reappearance.", "arxiv_id": "2503.09271v4", "arxiv_authors": ["Chiara Cappellino", "Gianluca Mancusi", "Matteo Mosconi", "Angelo Porrello", "Simone Calderara", "Rita Cucchiara"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1aa"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 987438, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a619"}, "filepath": "data/2506.01430v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990814701796953, "type": "Poster", "name": "DNAEdit: Direct Noise Alignment for Text-Guided Rectified Flow Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118684", "abstract": "Leveraging the powerful generation capability of large-scale pretrained text-to-image models, training-free methods have demonstrated impressive image editing results. Conventional diffusion-based methods, as well as recent rectified flow (RF)-based methods, typically reverse synthesis trajectories by gradually adding noise to clean images, during which the noisy latent at the current timestep is used to approximate that at the next timesteps, introducing accumulated drift and degrading reconstruction accuracy. Considering the fact that in RF the noisy latent is estimated through direct interpolation between Gaussian noises and clean images at each timestep, we propose Direct Noise Alignment (DNA), which directly refines the desired Gaussian noise in the noise domain, significantly reducing the error accumulation in previous methods. Specifically, DNA estimates the velocity field of the interpolated noised latent at each timestep and adjusts the Gaussian noise by computing the difference between the predicted and expected velocity field. We validate the effectiveness of DNA and reveal its relationship with existing RF-based inversion methods. Additionally, we introduce a Mobile Velocity Guidance (MVG) to control the target prompt-guided generation process, balancing image background preservation and target object editability. DNA and MVG collectively constitute our proposed method, namely DNAEdit. Finally, we introduce DNA-Bench, a long-prompt benchmark, to evaluate the performance of advanced image editing models. Experimental results demonstrate that our DNAEdit achieves superior performance to state-of-the-art text-guided editing methods. 
Our code, model, and benchmark will be made publicly available.", "arxiv_id": "2506.01430v1", "arxiv_authors": ["Chenxi Xie", "Minghan Li", "Shuai Li", "Yuhui Wu", "Qiaosi Yi", "Lei Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ab"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1098565, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a61a"}, "filepath": "data/2506.12323v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998727986572664, "type": "Poster", "name": "Doctor Approved: Generating Medically Accurate Skin Disease Images through AI\u2013Expert Feedback", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118803", "abstract": "Paucity of medical data severely limits the generalizability of diagnostic ML models, as the full spectrum of disease variability can not be represented by a small clinical dataset. To address this, diffusion models (DMs) have been considered as a promising avenue for synthetic image generation and augmentation. However, they frequently produce _medically inaccurate_ images, deteriorating the model performance. Expert domain knowledge is critical for synthesizing images that correctly encode clinical information, especially when data is scarce and quality outweighs quantity. Existing approaches for incorporating human feedback, such as reinforcement learning (RL) and Direct Preference Optimization (DPO), rely on robust reward functions or demand labor-intensive expert evaluations. Recent progress in Multimodal Large Language Models (MLLMs) reveals their strong visual reasoning capabilities, making them adept candidates as evaluators. In this work, we propose a novel framework, coined MAGIC (**M**edically **A**ccurate **G**eneration of **I**mages through AI-Expert **C**ollaboration), that synthesizes clinically accurate skin disease images for data augmentation. Our method creatively translates expert-defined criteria into actionable feedback for image synthesis of DMs, significantly improving clinical accuracy while reducing the direct human workload. Experiments demonstrate that our method greatly improves the clinical quality of synthesized skin disease images, with outputs aligning with dermatologist assessments. 
Additionally, augmenting training data with these synthesized images improves diagnostic accuracy by +9.02% on a challenging 20-condition skin disease classification task, and by +13.89% in the few-shot setting.", "arxiv_id": "2506.12323v2", "arxiv_authors": ["Janet Wang", "Yunbei Zhang", "Zhengming Ding", "Jihun Hamm"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ac"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060276, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a61b"}, "filepath": "data/2510.02912v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992872222171794, "type": "Poster", "name": "Don't Just Chase \u201cHighlighted Tokens\u201d in MLLMs: Revisiting Visual Holistic Context Retention", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115059", "abstract": "Despite their powerful capabilities, multimodal large language models (MLLMs) suffer from considerable computational overhead due to their reliance on massive visual tokens. Recent studies have explored token pruning to alleviate this problem, which typically uses text-vision cross-attention or [CLS] attention to assess and discard redundant visual tokens. In this work, we identify a critical limitation of such attention-first pruning approaches, i.e., they tend to preserve semantically similar tokens, resulting in pronounced performance drops under high pruning rates. To this end, we propose HoloV, a simple yet effective, plug-and-play visual token pruning framework for efficient inference. Distinct from previous attention-first schemes, HoloV rethinks token retention from a holistic perspective. By adaptively distributing the pruning budget across different spatial crops, HoloV ensures that the retained tokens capture the global visual context rather than isolated salient features. This strategy minimizes representational collapse and maintains task-relevant information even under aggressive pruning. Experimental results demonstrate that our HoloV achieves superior performance across various tasks, MLLM architectures, and pruning ratios compared to SOTA methods. 
For instance, LLaVA1.5 equipped with HoloV preserves 95.8% of the original performance after pruning 88.9% of visual tokens, achieving superior efficiency-accuracy trade-offs.", "arxiv_id": "2510.02912v2", "arxiv_authors": ["Xin Zou", "Di Lu", "Yizhou Wang", "Yibo Yan", "Yuanhuiyi Lyu", "Xu Zheng", "Linfeng Zhang", "Xuming Hu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ad"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1046052, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a61c"}, "filepath": "data/2505.16239v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992820650987169, "type": "Poster", "name": "DOVE: Efficient One-Step Diffusion Model for Real-World Video Super-Resolution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119199", "abstract": "Diffusion models have demonstrated promising performance in real-world video super-resolution (VSR). However, the dozens of sampling steps they require, make inference extremely slow. Sampling acceleration techniques, particularly single-step, provide a potential solution. Nonetheless, achieving one step in VSR remains challenging, due to the high training overhead on video data and stringent fidelity demands. To tackle the above issues, we propose DOVE, an efficient one-step diffusion model for real-world VSR. DOVE is obtained by fine-tuning a pretrained video diffusion model (*i.e.*, CogVideoX). To effectively train DOVE, we introduce the latent\u2013pixel training strategy. The strategy employs a two-stage scheme to gradually adapt the model to the video super-resolution task. Meanwhile, we design a video processing pipeline to construct a high-quality dataset tailored for VSR, termed HQ-VSR. Fine-tuning on this dataset further enhances the restoration capability of DOVE. Extensive experiments show that DOVE exhibits comparable or superior performance to multi-step diffusion-based VSR methods. It also offers outstanding inference efficiency, achieving a speed\u2011up of at least 11$\\times$ over existing approaches. The code will be made publicly available.", "arxiv_id": "2505.16239v1", "arxiv_authors": ["Zheng Chen", "Zichen Zou", "Kewei Zhang", "Xiongfei Su", "Xin Yuan", "Yong Guo", "Yulun Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ae"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1188539, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a61d"}, "filepath": "data/2510.18851v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995160471050182, "type": "Poster", "name": "DP\u00b2O-SR: Direct Perceptual Preference Optimization for Real-World Image Super-Resolution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119975", "abstract": "Real-world image super-resolution (Real-ISR) benefits from pre-trained text-to-image diffusion models, which can generate rich details. 
However, these models exhibit inherent stochasticity, where different noise inputs yield results of varying perceptual quality. This stochasticity actually widens the perceptual quality *range* of outputs, which motivates us to develop **D**irect **P**erceptual **P**reference **O**ptimization for Real-I**SR** (**DP\u00b2O-SR**). We first define a Combined Perceptual Score (CPS) to quantify image perceptual quality preference, then employ Direct Preference Optimization (DPO) to fine-tune the pre-trained Real-ISR model towards generating outputs with higher CPS scores. While simply increasing the number of noise samples could improve DPO performance, this is computationally expensive. We therefore propose to extract more training signals from a limited group of candidate outputs. Instead of forming only one preference pair (best and worst samples) per input, we generate multiple pairs by selecting higher- and lower-scoring samples from the same group, significantly increasing the DPO training efficacy. In addition, considering the unequal importance of these pairs, we introduce Hierarchical Significance Weighting (HSW) to weight the DPO loss based on intra-pair significance (CPS difference within the pair) and inter-group learning potential (CPS variance of the source candidate group), prioritizing more informative pairs and maximizing DPO learning from available samples. Experiments show that **DP\u00b2O-SR** significantly enhances the perceptual quality over the diffusion-based Real-ISR models, validated by quantitative metrics and visualization comparison. The code, model, and data will be made publicly available.", "arxiv_id": "2510.18851v1", "arxiv_authors": ["Rongyuan Wu", "Lingchen Sun", "Zhengqiang Zhang", "Shihao Wang", "Tianhe Wu", "Qiaosi Yi", "Shuai Li", "Lei Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1af"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1088610, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a61e"}, "filepath": "data/2506.14549v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999710648288167, "type": "Poster", "name": "DreamLight: Towards Harmonious and Consistent Image Relighting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115189", "abstract": "We introduce a model named DreamLight for universal image relighting in this work, which can seamlessly composite subjects into a new background while maintaining aesthetic uniformity in terms of lighting and color tone. The background can be specified by natural images (image-based relighting) or generated from unlimited text prompts (text-based relighting). Existing studies primarily focus on image-based relighting, while with scant exploration into text-based scenarios. Some works employ intricate disentanglement pipeline designs relying on environment maps to provide relevant information, which grapples with the expensive data cost required for intrinsic decomposition and light source. Other methods take this task as an image translation problem and perform pixel-level transformation with autoencoder architecture. 
While these methods have achieved decent harmonization effects, they struggle to generate realistic and natural light interaction effects between the foreground and background. To alleviate these challenges, we reorganize the input data into a unified format and leverage the semantic prior provided by the pretrained diffusion model to facilitate the generation of natural results. Moreover, we propose a Position-Guided Light Adapter (PGLA) that condenses light information from different directions in the background into designed light query embeddings, and modulates the foreground with direction-biased masked attention. In addition, we present a post-processing module named Spectral Foreground Fixer (SFF) to adaptively reorganize different frequency components of subject and relighted background, which helps enhance the consistency of foreground appearance. Extensive comparisons and user study demonstrate that our DreamLight achieves remarkable relighting performance.", "arxiv_id": "2506.14549v1", "arxiv_authors": ["Yong Liu", "Wenpeng Xiao", "Qianqian Wang", "Junlin Chen", "Shiyin Wang", "Yitong Wang", "Xinglong Wu", "Yansong Tang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 989760, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a61f"}, "filepath": "data/2509.17940v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990353760320674, "type": "Poster", "name": "DriveDPO: Policy Learning via Safety DPO For End-to-End Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116932", "abstract": "End-to-end autonomous driving has substantially progressed by directly predicting future trajectories from raw perception inputs, which bypasses traditional modular pipelines. However, mainstream methods trained via imitation learning suffer from critical safety limitations, as they fail to distinguish between trajectories that appear human-like but are potentially unsafe. Some recent approaches attempt to address this by regressing multiple rule-driven scores but decoupling supervision from policy optimization, resulting in suboptimal performance. To tackle these challenges, we propose DriveDPO, a Safety Direct Preference Optimization Policy Learning framework. First, we distill a unified policy distribution from human imitation similarity and rule-based safety scores for direct policy optimization. Further, we introduce an iterative Direct Preference Optimization stage formulated as trajectory-level preference alignment. Extensive experiments on the NAVSIM benchmark demonstrate that DriveDPO achieves a new state-of-the-art PDMS of 90.0. 
Furthermore, qualitative results across diverse challenging scenarios highlight DriveDPO\u2019s ability to produce safer and more reliable driving behaviors.", "arxiv_id": "2509.17940v1", "arxiv_authors": ["Shuyao Shang", "Yuntao Chen", "Yuqi Wang", "Yingyan Li", "Zhaoxiang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b1"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1026586, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a620"}, "filepath": "data/2412.09043v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991692710780066, "type": "Poster", "name": "DrivingRecon: Large 4D Gaussian Reconstruction Model For Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118906", "abstract": "Large reconstruction models have made remarkable progress and can directly predict 3D or 4D representations for unseen scenes and objects. However, current work has not systematically explored the potential of large reconstruction models in the field of autonomous driving. To achieve this, we introduce the Large 4D Gaussian Reconstruction Model (DrivingRecon). With an elaborate and simple framework design, it not only ensures efficient and high-quality reconstruction, but also provides potential for downstream tasks. There are two core contributions: firstly, the Prune and Dilate Block (PD-Block) is proposed to prune redundant and overlapping Gaussian points and dilate Gaussian points for complex objects. Then, dynamic and static decoupling is tailored to better learn the temporally consistent geometry across time. Experimental results demonstrate that DrivingRecon significantly improves scene reconstruction quality compared to existing methods. Furthermore, we explore applications of DrivingRecon in model pre-training, vehicle type adaptation, and scene editing. Our code will be available.", "arxiv_id": "2412.09043v1", "arxiv_authors": ["Hao Lu", "Tianshuo Xu", "Wenzhao Zheng", "Yunpeng Zhang", "Wei Zhan", "Dalong Du", "Masayoshi Tomizuka", "Kurt Keutzer", "Yingcong Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2383905, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a621"}, "filepath": "data/2505.24173v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992979181699243, "type": "Poster", "name": "DrVD-Bench: Do Vision-Language Models Reason Like Human Doctors in Medical Image Diagnosis?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121523", "abstract": "Vision\u2013language models (VLMs) exhibit strong zero-shot generalization on natural images and show early promise in interpretable medical image analysis. However, existing benchmarks do not systematically evaluate whether these models truly reason like human clinicians or merely imitate superficial patterns. To address this gap, we propose DrVD-Bench, the first multimodal benchmark for clinical visual reasoning.
DrVD-Bench consists of three modules: Visual Evidence Comprehension, Reasoning Trajectory Assessment, and Report Generation Evaluation, comprising a total of 7,789 image\u2013question pairs. Our benchmark covers 20 task types, 17 diagnostic categories, and five imaging modalities\u2014CT, MRI, ultrasound, radiography, and pathology. DrVD-Bench is explicitly structured to reflect the clinical reasoning workflow from modality recognition to lesion identification and diagnosis. We benchmark 19 VLMs, including general-purpose and medical-specific, open-source and proprietary models, and observe that performance drops sharply as reasoning complexity increases. While some models begin to exhibit traces of human-like reasoning, they often still rely on shortcut correlations rather than grounded visual understanding. DrVD-Bench offers a rigorous and structured evaluation framework to guide the development of clinically trustworthy VLMs.", "arxiv_id": "2505.24173v1", "arxiv_authors": ["Tianhong Zhou", "Yin Xu", "Yingtao Zhu", "Chuxi Xiao", "Haiyang Bian", "Lei Wei", "Xuegong Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 941576, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a622"}, "filepath": "data/2505.14359v6.png", "tags": [], "_media_type": "image", "_rand": 0.999204330582553, "type": "Poster", "name": "Dual Data Alignment Makes AI-Generated Image Detector Easier Generalizable", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119323", "abstract": "The rapid increase in AI-generated images (AIGIs) underscores the urgent need for generalizable detection methods. Existing detectors, however, are often trained on biased datasets, leading to the possibility of overfitting on non-causal image attributes that are spuriously correlated with real/synthetic labels. While these biased features enhance performance on the training data, they result in substantial performance degradation when applied to unbiased datasets. One common solution is to perform dataset alignment through generative reconstruction, matching the semantic content between real and synthetic images. However, we revisit this approach and show that pixel-level alignment alone is insufficient \u2014 the reconstructed images still suffer from frequency-level misalignment, which can perpetuate spurious correlations. To illustrate, we observe that reconstruction models tend to restore the high-frequency details lost in real images (possibly due to JPEG compression), inadvertently creating a frequency-level misalignment, where synthetic images appear to have richer high-frequency content than real ones. This misalignment leads to models associating high-frequency features with synthetic labels, further reinforcing biased cues. To resolve this, we propose Dual Data Alignment (DDA), which aligns both the pixel and frequency domains. Moreover, we introduce two new test sets: DDA-COCO, containing DDA-aligned synthetic images for testing detector performance on the most aligned dataset, and EvalGEN, featuring the latest generative models for assessing detectors under new generative architectures such as visual auto-regressive generators. 
Finally, our extensive evaluations demonstrate that a detector trained exclusively on DDA-aligned MSCOCO improves across 8 diverse benchmarks by a non-trivial margin, with a +7.2\\% gain on in-the-wild benchmarks, highlighting the improved generalizability of unbiased detectors.", "arxiv_id": "2505.14359v6", "arxiv_authors": ["Ruoxin Chen", "Junwei Xi", "Zhiyuan Yan", "Ke-Yue Zhang", "Shuang Wu", "Jingyi Xie", "Xu Chen", "Lei Xu", "Isabel Guan", "Taiping Yao", "Shouhong Ding"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.482Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1063467, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a623"}, "filepath": "data/2502.02096v3.png", "tags": [], "_media_type": "image", "_rand": 0.99925216821152, "type": "Poster", "name": "Dual-Flow: Transferable Multi-Target, Instance-Agnostic Attacks via $\\textit{In-the-wild}$ Cascading Flow Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115031", "abstract": "Adversarial attacks are widely used to evaluate model robustness, and in black-box scenarios, the transferability of these attacks becomes crucial. Existing generator-based attacks have excellent generalization and transferability due to their instance-agnostic nature. However, when training generators for multi-target tasks, the success rate of transfer attacks is relatively low due to the limitations of the model's capacity. To address these challenges, we propose a novel Dual-Flow framework for multi-target instance-agnostic adversarial attacks, utilizing Cascading Distribution Shift Training to develop an adversarial velocity function. Extensive experiments demonstrate that Dual-Flow significantly improves transferability over previous multi-target generative attacks. For example, it increases the success rate from Inception-v3 to ResNet-152 by 34.58%. Furthermore, our attack method shows substantially stronger robustness against defense mechanisms, such as adversarially trained models.", "arxiv_id": "2502.02096v3", "arxiv_authors": ["Yixiao Chen", "Shikun Sun", "Jianshu Li", "Ruoyu Li", "Zhe Li", "Junliang Xing"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1045832, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a624"}, "filepath": "data/2509.21992v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998782493522679, "type": "Poster", "name": "DualFocus: Depth from Focus with Spatio-Focal Dual Variational Constraints", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118286", "abstract": "Depth-from-Focus (DFF) enables precise depth estimation by analyzing focus cues across a stack of images captured at varying focal lengths. While recent learning-based approaches have advanced this field, they often struggle in complex scenes with fine textures or abrupt depth changes, where focus cues may become ambiguous or misleading.
We present DualFocus, a novel DFF framework that leverages the focal stack\u2019s unique gradient patterns induced by focus variation, jointly modeling focus changes over spatial and focal dimensions. Our approach introduces a variational formulation with dual constraints tailored to DFF: spatial constraints exploit gradient pattern changes across focus levels to distinguish true depth edges from texture artifacts, while focal constraints enforce unimodal, monotonic focus probabilities aligned with physical focus behavior. These inductive biases improve robustness and accuracy in challenging regions. Comprehensive experiments on four public datasets demonstrate that DualFocus consistently outperforms state-of-the-art methods in both depth accuracy and perceptual quality. Code and pretrained models will be made publicly available.", "arxiv_id": "2509.21992v1", "arxiv_authors": ["Sungmin Woo", "Sangyoun Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1042561, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a625"}, "filepath": "data/2506.15649v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990213537707955, "type": "Poster", "name": "Dual-Stage Value-Guided Inference with Margin-Based Reward Adjustment for Fast and Faithful VLM Captioning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118599", "abstract": "Despite significant advances in inference-time search for vision\u2013language models (VLMs), existing approaches remain both computationally expensive and prone to unpenalized, low-confidence generations which often lead to persistent hallucinations. We introduce Value-guided Inference with Margin-based Reward (ViMaR), a two-stage inference framework that improves both efficiency and output fidelity by combining a temporal-difference value model with a margin-aware reward adjustment. In the first stage, we perform a single pass to identify the highest-value caption among diverse candidates. In the second stage, we selectively refine only those segments that were overlooked or exhibit weak visual grounding, thereby eliminating frequently rewarded evaluations. A calibrated margin-based penalty discourages low-confidence continuations while preserving descriptive richness. Extensive experiments across multiple VLM architectures demonstrate that ViMaR generates captions that are significantly more reliable, factually accurate, detailed, and explanatory, while achieving over 4$\\times$ speedup compared to existing value-guided methods. Specifically, we show that ViMaR trained solely on LLaVA Mistral-7B, generalizes effectively to guide decoding in a stronger unseen model. To further validate this, we adapt the ViMaR to steer generation in LLaVA-OneVision-Qwen2-7B, leading to consistent improvements in caption quality and demonstrating robust cross-model guidance. This cross-model generalization highlights ViMaR's flexibility and modularity, positioning it as a scalable and transferable inference-time decoding strategy. 
Furthermore, when ViMaR-generated captions are used for self-training, the underlying models achieve substantial gains across a broad suite of visual comprehension benchmarks, underscoring the potential of fast, accurate, and self-improving VLM pipelines.", "arxiv_id": "2506.15649v1", "arxiv_authors": ["Ankan Deria", "Adinath Madhavrao Dukre", "Feilong Tang", "Sara Atito", "Sudipta Roy", "Muhammad Awais", "Muhammad Haris Khan", "Imran Razzak"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1045520, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a626"}, "filepath": "data/2504.17040v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992096905856002, "type": "Poster", "name": "DyMU: Dynamic Merging and Virtual Unmerging for Efficient Variable-Length VLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115195", "abstract": "We present DyMU, an efficient, training-free framework that dynamically reduces the computational burden of vision-language models (VLMs) while maintaining high task performance. Our approach comprises two key components. First, Dynamic Token Merging (DToMe) reduces the number of visual token embeddings by merging similar tokens based on image complexity, addressing the inherent inefficiency of fixed-length outputs in vision transformers. Second, Virtual Token Unmerging (VTU) simulates the expected token sequence for large language models (LLMs) by efficiently reconstructing the attention dynamics of a full sequence, thus preserving the downstream performance without additional fine-tuning. Unlike previous approaches, our method dynamically determines token length based on the *image content*\u2014not just resolution\u2014and operates completely training-free, making it readily applicable to most state-of-the-art VLM architectures. Extensive experiments on image and video understanding tasks, demonstrate that DyMU can reduce the average visual token count by 32%-85% while achieving comparable performance to full-length models, across diverse VLM architectures. 
Furthermore, qualitative analyses show that the adaptive token reduction from DToMe aligns well with human perception and enables users to better control computational costs through flexible integration with additional vision tools and models.", "arxiv_id": "2504.17040v2", "arxiv_authors": ["Zhenhailong Wang", "Senthil Purushwalkam", "Caiming Xiong", "Silvio Savarese", "Heng Ji", "Ran Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2052826, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a627"}, "filepath": "data/2506.13922v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997963417691539, "type": "Poster", "name": "DynaGuide: Steering Diffusion Polices with Active Dynamic Guidance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117515", "abstract": "Deploying large, complex policies in the real world requires the ability to steer them to fit the needs of a situation. Most common steering approaches, like goal-conditioning, require training the robot policy with a distribution of test-time objectives in mind. To overcome this limitation, we present DynaGuide, a steering method for diffusion policies using guidance from an external dynamics model during the diffusion denoising process. DynaGuide separates the dynamics model from the base policy, which gives it multiple advantages, including the ability to steer towards multiple objectives, enhance underrepresented base policy behaviors, and maintain robustness on low-quality objectives. The separate guidance signal also allows DynaGuide to work with off-the-shelf pretrained diffusion policies. We demonstrate the performance and features of DynaGuide against other steering approaches in a series of simulated and real experiments, showing an average steering success of 70% on a set of articulated CALVIN tasks and outperforming goal-conditioning by 5.4x when steered with low-quality objectives. We also successfully steer an off-the-shelf real robot policy to express preference for particular objects and even create novel behavior.", "arxiv_id": "2506.13922v1", "arxiv_authors": ["Maximilian Du", "Shuran Song"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1b9"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2204610, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a628"}, "filepath": "data/2505.11383v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997905422919445, "type": "Poster", "name": "Dynam3D: Dynamic Layered 3D Tokens Empower VLM for Vision-and-Language Navigation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115715", "abstract": "Vision-and-Language Navigation (VLN) is a core task where embodied agents leverage their spatial mobility to navigate in 3D environments toward designated destinations based on natural language instructions. 
Recently, video-language large models (Video-VLMs) with strong generalization capabilities and rich commonsense knowledge have shown remarkable performance when applied to VLN tasks. However, these models still encounter the following challenges when applied to real-world 3D navigation: 1) Insufficient understanding of 3D geometry and spatial semantics; 2) Limited capacity for large-scale exploration and long-term environmental memory; 3) Poor adaptability to dynamic and changing environments. To address these limitations, we propose Dynam3D, a dynamic layered 3D representation model that leverages language-aligned, generalizable, and hierarchical 3D representations as visual input to train a 3D-VLM for navigation action prediction. Given posed RGB-D images, our Dynam3D projects 2D CLIP features into 3D space and constructs multi-level 3D patch-instance-zone representations for 3D geometric and semantic understanding with a dynamic and layer-wise update strategy. Our Dynam3D is capable of online encoding and localization of 3D instances, and dynamically updates them in changing environments to provide large-scale exploration and long-term memory capabilities for navigation. By leveraging large-scale 3D-language pretraining and task-specific adaptation, our Dynam3D sets new state-of-the-art performance on VLN benchmarks including R2R-CE, REVERIE-CE and NavRAG-CE under monocular settings. Furthermore, experiments on pre-exploration, lifelong memory, and a real-world robot validate its effectiveness in practical deployment.", "arxiv_id": "2505.11383v1", "arxiv_authors": ["Zihan Wang", "Seungjun Lee", "Gim Hee Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ba"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1047346, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a629"}, "filepath": "data/2510.10691v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998977576482123, "type": "Poster", "name": "Dynamic Gaussian Splatting from Defocused and Motion-blurred Monocular Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118115", "abstract": "This paper presents a unified framework that allows high-quality dynamic Gaussian Splatting from both defocused and motion-blurred monocular videos. Due to the significant difference between the formation processes of defocus blur and motion blur, existing methods are tailored for either one of them, lacking the ability to simultaneously deal with both of them. Although the two can be jointly modeled as blur-kernel-based convolution, the inherent difficulty in estimating accurate blur kernels greatly limits the progress in this direction. In this work, we go a step further in this direction. Particularly, we propose to estimate reliable per-pixel blur kernels using a blur prediction network that exploits blur-related scene and camera information and is subject to a blur-aware sparsity constraint. Besides, we introduce a dynamic Gaussian densification strategy to mitigate the lack of Gaussians for incomplete regions, and boost the performance of novel view synthesis by incorporating unseen view information to constrain scene optimization. 
Extensive experiments show that our method outperforms state-of-the-art methods in generating photorealistic novel views from defocused and motion-blurred monocular videos. Our code and trained model will be made publicly available.", "arxiv_id": "2510.10691v1", "arxiv_authors": ["Xuankai Zhang", "Junjin Xiao", "Qing Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1bb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1889255, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a62a"}, "filepath": "data/2510.21351v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999541025720782, "type": "Poster", "name": "Dynamic Semantic-Aware Correlation Modeling for UAV Tracking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119675", "abstract": "UAV tracking can be widely applied in scenarios such as disaster rescue, environmental monitoring, and logistics transportation. However, existing UAV tracking methods predominantly emphasize speed and lack exploration of semantic awareness, which hinders the search region from extracting accurate localization information from the template. This limitation results in suboptimal performance under typical UAV tracking challenges such as camera motion, fast motion, and low resolution. To address this issue, we propose a dynamic semantic-aware correlation modeling tracking framework. The core of our framework is a Dynamic Semantic Relevance Generator, which, in combination with the correlation map from the Transformer, explores semantic relevance. The approach enhances the search region's ability to extract important information from the template, improving accuracy and robustness under the aforementioned challenges. Additionally, to enhance the tracking speed, we design a pruning method for the proposed framework. Therefore, we present multiple model variants that achieve trade-offs between speed and accuracy, enabling flexible deployment according to the available computational resources. Experimental results validate the effectiveness of our method, achieving competitive performance on multiple UAV tracking datasets.", "arxiv_id": "2510.21351v1", "arxiv_authors": ["Xinyu Zhou", "Tongxin Pan", "Lingyi Hong", "Pinxue Guo", "Haijing Guo", "Zhaoyu Chen", "Kaixun Jiang", "Wenqiang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1bc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1037745, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a62b"}, "filepath": "data/2506.08004v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994796544055313, "type": "Poster", "name": "Dynamic View Synthesis as an Inverse Problem", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119387", "abstract": "In this work, we address dynamic view synthesis from monocular videos as an inverse problem in a training-free setting. 
By redesigning the noise initialization phase of a pre-trained video diffusion model, we enable high-fidelity dynamic view synthesis without any weight updates or auxiliary modules. We begin by identifying a fundamental obstacle to deterministic inversion arising from zero-terminal signal-to-noise ratio (SNR) schedules and resolve it by introducing a novel noise representation, termed K-order Recursive Noise Representation. We derive a closed form expression for this representation, enabling precise and efficient alignment between the VAE-encoded and the DDIM inverted latents. To synthesize newly visible regions resulting from camera motion, we introduce Stochastic Latent Modulation, which performs visibility aware sampling over the latent space to complete occluded regions. Comprehensive experiments demonstrate that dynamic view synthesis can be effectively performed through structured latent manipulation in the noise initialization phase.", "arxiv_id": "2506.08004v1", "arxiv_authors": ["Hidir Yesiltepe", "Pinar Yanardag"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1bd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3682814, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a62c"}, "filepath": "data/2505.21076v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992499305177593, "type": "Poster", "name": "DynamicVL: Benchmarking Multimodal Large Language Models for Dynamic City Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121371", "abstract": "Multimodal large language models have demonstrated remarkable capabilities in visual understanding, but their application to long-term Earth observation analysis remains limited, primarily focusing on single-temporal or bi-temporal imagery. To address this gap, we introduce **DVL-Suite**, a comprehensive framework for analyzing long-term urban dynamics through remote sensing imagery. Our suite comprises 15,063 high-resolution (1.0m) multi-temporal images spanning 42 megacities in the U.S. from 2005 to 2023, organized into two components: **DVL-Bench** and **DVL-Instruct**. The DVL-Bench includes seven urban understanding tasks, from fundamental change detection (*pixel-level*) to quantitative analyses (*regional-level*) and comprehensive urban narratives (*scene-level*), capturing diverse urban dynamics including expansion/transformation patterns, disaster assessment, and environmental challenges. We evaluate 17 state-of-the-art multimodal large language models and reveal their limitations in long-term temporal understanding and quantitative analysis. These challenges motivate the creation of **DVL-Instruct**, a specialized instruction-tuning dataset designed to enhance models' capabilities in multi-temporal Earth observation. 
Building upon this dataset, we develop **DVLChat**, a baseline model capable of both image-level question-answering and pixel-level segmentation, facilitating a comprehensive understanding of city dynamics through language interactions.", "arxiv_id": "2505.21076v2", "arxiv_authors": ["Weihao Xuan", "Junjue Wang", "Heli Qi", "Zihang Chen", "Zhuo Zheng", "Yanfei Zhong", "Junshi Xia", "Naoto Yokoya"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1be"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1102888, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a62d"}, "filepath": "data/2509.21930v1.png", "tags": [], "_media_type": "image", "_rand": 0.999716869020203, "type": "Poster", "name": "DynaNav: Dynamic Feature and Layer Selection for Efficient Visual Navigation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119244", "abstract": "Visual navigation is essential for robotics and embodied AI. However, existing foundation models, particularly those with transformer decoders, suffer from high computational overhead and lack interpretability, limiting their deployment on edge devices. To address this, we propose DynaNav, a Dynamic Visual Navigation framework that adapts feature and layer selection based on scene complexity. It employs a trainable hard feature selector for sparse operations, enhancing efficiency and interpretability. Additionally, we integrate feature selection into an early-exit mechanism, with Bayesian Optimization determining optimal exit thresholds to reduce computational cost. Extensive experiments in real-world-based datasets and simulated environments demonstrate the effectiveness of DynaNav. Compared to ViNT, DynaNav achieves a $2.6\\times$ reduction in FLOPs, 42.3% lower inference time, and 32.8% lower memory usage while improving navigation performance across four public datasets.", "arxiv_id": "2509.21930v1", "arxiv_authors": ["Jiahui Wang", "Changhao Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1bf"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1031775, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a62e"}, "filepath": "data/2509.23931v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992767203733094, "type": "Poster", "name": "Each Complexity Deserves a Pruning Policy", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117190", "abstract": "The established redundancy in visual tokens within large vision\u2013language models (LVLMs) allows for pruning to effectively reduce their substantial computational demands. Empirical evidence from previous works indicates that visual tokens in later decoder stages receive less attention than shallow layers. 
Accordingly, previous methods typically employ heuristic, layer-specific pruning strategies where, although the number of tokens removed may differ across decoder layers, the overall pruning schedule is fixed and applied uniformly to all input samples and tasks, failing to align token elimination with the model\u2019s holistic reasoning trajectory. Cognitive science indicates that human visual processing often begins with broad exploration to accumulate evidence before narrowing focus as the target becomes distinct. Our experiments reveal an analogous pattern in LVLMs. This observation strongly suggests that neither a fixed pruning schedule nor a heuristic layer-wise strategy can optimally accommodate the diverse complexities inherent in different inputs. To overcome this limitation, we introduce Complexity-Adaptive Pruning (AdaPrune), a training-free, plug-and-play framework that tailors pruning policies to varying sample and task complexities. Specifically, AdaPrune quantifies the mutual information between visual and textual tokens, and then projects this signal onto a budget-constrained logistic retention curve. Each such logistic curve, defined by its unique shape, is shown to effectively correspond with the specific complexity of different tasks, and can easily guarantee adherence to pre-defined computational constraints. We evaluate AdaPrune not only on standard vision-language tasks but also on Vision-Language-Action (VLA) models for autonomous driving. Notably, when applied to LLaVA-1.5-7B, our method prunes 89\\% of visual tokens and reduces inference FLOPs by 76.8\\%, while still retaining 96.7\\% of the original accuracy averaged over all tasks. This corresponds to a 9.1\\% improvement over the recent work PDrop (CVPR'2025), demonstrating its effectiveness.", "arxiv_id": "2509.23931v2", "arxiv_authors": ["Hanshi Wang", "Yuhao Xu", "Zekun Xu", "Jin Gao", "Yufan Liu", "Weiming Hu", "Ke Wang", "Zhipeng Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1144934, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a62f"}, "filepath": "data/2504.15271v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997012512585461, "type": "Poster", "name": "Eagle 2.5: Boosting Long-Context Post-Training for Frontier Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117551", "abstract": "We introduce Eagle2.5, a frontier vision-language model (VLM) for long-context multimodal learning. Our work addresses the challenges in long video comprehension and high-resolution image understanding, introducing a generalist framework for both tasks. The proposed training framework incorporates Automatic Degrade Sampling and Image Area Preservation, two techniques that preserve contextual integrity and visual details. The framework also includes numerous efficiency optimizations in the pipeline for long-context data training. Finally, we propose Eagle-Video-110K, a novel dataset that integrates both story-level and clip-level annotations, facilitating long-video understanding. 
Eagle2.5 demonstrates substantial improvements on long-context multimodal benchmarks, providing a robust solution to the limitations of existing VLMs. Notably, our best model, Eagle2.5-8B, achieves 72.4\\% on Video-MME with 512 input frames, matching the results of top-tier commercial models such as GPT-4o and large-scale open-source models like Qwen2.5-VL-72B and InternVL2.5-78B.", "arxiv_id": "2504.15271v1", "arxiv_authors": ["Guo Chen", "Zhiqi Li", "Shihao Wang", "Jindong Jiang", "Yicheng Liu", "Lidong Lu", "De-An Huang", "Wonmin Byeon", "Matthieu Le", "Tuomas Rintamaki", "Tyler Poon", "Max Ehrlich", "Tuomas Rintamaki", "Tyler Poon", "Tong Lu", "Limin Wang", "Bryan Catanzaro", "Jan Kautz", "Andrew Tao", "Zhiding Yu", "Guilin Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.483Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1280055, "mime_type": "image/png", "width": 4134, "height": 5847, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a630"}, "filepath": "data/2506.15838v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995043955533469, "type": "Poster", "name": "EchoShot: Multi-Shot Portrait Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119343", "abstract": "Video diffusion models substantially boost the productivity of artistic workflows with high-quality portrait video generative capacity. However, prevailing pipelines are primarily constrained to single-shot creation, while real-world applications call for multiple shots with identity consistency and flexible content controllability. In this work, we propose EchoShot, a native and scalable multi-shot framework for portrait customization built upon a foundation video diffusion model. To start with, we propose shot-aware position embedding mechanisms within the video diffusion transformer architecture to model inter-shot variations and establish intricate correspondence between multi-shot visual content and their textual descriptions. This simple yet effective design enables direct training on multi-shot video data without introducing additional computational overhead. To facilitate model training in the multi-shot scenario, we construct PortraitGala, a large-scale and high-fidelity human-centric video dataset featuring cross-shot identity consistency and fine-grained captions such as facial attributes, outfits, and dynamic motions. To further enhance applicability, we extend EchoShot to perform reference image-based personalized multi-shot generation and long video synthesis with infinite shot counts. Extensive evaluations demonstrate that EchoShot achieves superior identity consistency as well as attribute-level controllability in multi-shot portrait video generation. Notably, the proposed framework demonstrates potential as a foundational paradigm for general multi-shot video modeling. 
All the models and the dataset will be made open-source upon acceptance.", "arxiv_id": "2506.15838v1", "arxiv_authors": ["Jiahao Wang", "Hualian Sheng", "Sijia Cai", "Weizhan Zhang", "Caixia Yan", "Yachuang Feng", "Bing Deng", "Jieping Ye"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1010955, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a631"}, "filepath": "data/2510.20217v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996519949938913, "type": "Poster", "name": "EditInfinity: Image Editing with Binary-Quantized Generative Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115392", "abstract": "Adapting pretrained diffusion-based generative models for text-driven image editing with negligible tuning overhead has demonstrated remarkable potential. A classical adaptation paradigm, as followed by these methods, first infers the generative trajectory inversely for a given source image by image inversion, then performs image editing along the inferred trajectory guided by the target text prompts. However, the performance of image editing is heavily limited by the approximation errors introduced during image inversion by diffusion models, which arise from the absence of exact supervision in the intermediate generative steps. To circumvent this issue, we investigate the parameter-efficient adaptation of VQ-based generative models for image editing, and leverage their inherent characteristic that the exact intermediate quantized representations of a source image are attainable, enabling more effective supervision for precise image inversion. Specifically, we propose \\emph{EditInfinity}, which adapts \\emph{Infinity}, a binary-quantized generative model, for image editing. We propose an efficient yet effective image inversion mechanism that integrates text prompting rectification and image style preservation, enabling precise image reconstruction. Furthermore, we devise a holistic smoothing strategy which allows our \\emph{EditInfinity} to perform image editing with high fidelity to source images and precise semantic alignment to the text prompts. Extensive experiments on the PIE-Bench benchmark across \"add\", \"change\", and \"remove\" editing operations, demonstrate the superior performance of our model compared to state-of-the-art diffusion-based baselines. 
Code will be released.", "arxiv_id": "2510.20217v2", "arxiv_authors": ["Jiahuan Wang", "Yuxin Chen", "Jun Yu", "Guangming Lu", "Wenjie Pei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4162306, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a632"}, "filepath": "data/2410.15392v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993492465711531, "type": "Poster", "name": "EF-3DGS: Event-Aided Free-Trajectory 3D Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115668", "abstract": "Scene reconstruction from casually captured videos has wide real-world applications. Despite recent progress, existing methods relying on traditional cameras tend to fail in high-speed scenarios due to insufficient observations and inaccurate pose estimation. Event cameras, inspired by biological vision, record pixel-wise intensity changes asynchronously with high temporal resolution and low latency, providing valuable scene and motion information in blind inter-frame intervals. In this paper, we introduce the event cameras to aid scene construction from a casually captured video for the first time, and propose Event-Aided Free-Trajectory 3DGS, called EF-3DGS, which seamlessly integrates the advantages of event cameras into 3DGS through three key components. First, we leverage the Event Generation Model (EGM) to fuse events and frames, enabling continuous supervision between discrete frames. Second, we extract motion information through Contrast Maximization (CMax) of warped events, which calibrates camera poses and provides gradient-domain constraints for 3DGS. Third, to address the absence of color information in events, we combine photometric bundle adjustment (PBA) with a Fixed-GS training strategy that separates structure and color optimization, effectively ensuring color consistency across different views. We evaluate our method on the public Tanks and Temples benchmark and a newly collected real-world dataset, RealEv-DAVIS. Our method achieves up to 3dB higher PSNR and 40% lower Absolute Trajectory Error (ATE) compared to state-of-the-art methods under challenging high-speed scenarios.", "arxiv_id": "2410.15392v3", "arxiv_authors": ["Bohao Liao", "Wei Zhai", "Zengyu Wan", "Zhixin Cheng", "Wenfei Yang", "Tianzhu Zhang", "Yang Cao", "Zheng-Jun Zha"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4966558, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a633"}, "filepath": "data/2509.17786v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999918199662211, "type": "Poster", "name": "Efficient Low-Rank Model Merging in Core Space", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115191", "abstract": "In this paper we address the challenges associated with merging low-rank adaptations of large neural networks. 
With the rise of parameter-efficient adaptation techniques, such as Low-Rank Adaptation (LoRA), model fine-tuning has become more accessible. While fine-tuning models with LoRA is highly efficient, existing merging methods often sacrifice this efficiency by merging fully-sized weight matrices. We propose the Core Space merging framework, which enables the merging of LoRA-adapted models within a common alignment basis to preserve the efficiency of low-rank adaptation and improve performance. We further provide a formal proof that projection into Core Space ensures no loss of information and provide a complexity analysis showing the efficiency gains. Extensive empirical results demonstrate that Core Space significantly improves existing merging techniques and achieves state-of-the-art results on both vision and language tasks while utilizing a fraction of the computational resources.", "arxiv_id": "2509.17786v3", "arxiv_authors": ["Aniello Panariello", "Daniel Marczak", "Simone Magistri", "Angelo Porrello", "Bart\u0142omiej Twardowski", "Andrew D. Bagdanov", "Simone Calderara", "Joost van de Weijer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1062230, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a634"}, "filepath": "data/2510.20673v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992725099752071, "type": "Poster", "name": "Efficient Multi-bit Quantization Network Training via Weight Bias Correction and Bit-wise Coreset Sampling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117708", "abstract": "Multi-bit quantization networks enable flexible deployment of deep neural networks by supporting multiple precision levels within a single model. However, existing approaches suffer from significant training overhead as full-dataset updates are repeated for each supported bit-width, resulting in a cost that scales linearly with the number of precisions. Additionally, extra fine-tuning stages are often required to support additional or intermediate precision options, further compounding the overall training burden. To address this issue, we propose two techniques that greatly reduce the training overhead without compromising model utility: (i) Weight bias correction enables shared batch normalization and eliminates the need for fine-tuning by neutralizing quantization-induced bias across bit-widths and aligning activation distributions; and (ii) Bit-wise coreset sampling strategy allows each child model to train on a compact, informative subset selected via gradient-based importance scores by exploiting the implicit knowledge transfer phenomenon. 
Experiments on CIFAR-10/100, TinyImageNet, and ImageNet-1K with both ResNet and ViT architectures demonstrate that our method achieves competitive or superior accuracy while reducing training time by up to 7.88\u00d7.", "arxiv_id": "2510.20673v1", "arxiv_authors": ["Jinhee Kim", "Jae Jun An", "Kang Eun Jeon", "Jong Hwan Ko"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1043360, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a635"}, "filepath": "data/2509.15472v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994392617051195, "type": "Poster", "name": "Efficient Multimodal Dataset Distillation via Generative Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119089", "abstract": "Dataset distillation aims to synthesize a small dataset from a large dataset, enabling the model trained on it to perform well on the original dataset. With the blooming of large language models and multimodal large language models, the importance of multimodal datasets, particularly image-text datasets, has grown significantly. However, existing multimodal dataset distillation methods are constrained by the Matching Training Trajectories algorithm, which significantly increases the computing resource requirement and takes days to complete the distillation. In this work, we introduce EDGE, a generative distillation method for efficient multimodal dataset distillation. Specifically, we identify two key challenges of distilling multimodal datasets with generative models: 1) The lack of correlation between generated images and captions. 2) The lack of diversity among generated samples. To address the aforementioned issues, we propose a novel generative model training workflow with a bi-directional contrastive loss and a diversity loss. Furthermore, we propose a caption synthesis strategy to further improve text-to-image retrieval performance by introducing more text information. Our method is evaluated on Flickr30K, COCO, and CC3M datasets, demonstrating superior performance and efficiency compared to existing approaches. Notably, our method achieves results 18$\\times$ faster than the state-of-the-art method.", "arxiv_id": "2509.15472v2", "arxiv_authors": ["Zhenghao Zhao", "Haoxuan Wang", "Junyi Wu", "Yuzhang Shang", "Gaowen Liu", "Yan Yan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1073602, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a636"}, "filepath": "data/2510.00515v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995571572839572, "type": "Poster", "name": "Efficient Multi-modal Large Language Models via Progressive Consistency Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116731", "abstract": "Visual tokens consume substantial computational resources in multi-modal large models (MLLMs), significantly compromising their efficiency. 
Recent works have attempted to improve efficiency by compressing visual tokens during training, either through modifications to model components or by introducing additional parameters. However, they often overlook the increased learning difficulty caused by such compression, as the model\u2019s parameter space struggles to quickly adapt to the substantial perturbations in the feature space induced by token compression. In this work, we propose to develop Efficient MLLMs via Progressive Consistency Distillation (EPIC), a progressive learning framework. Specifically, by decomposing the feature space perturbations introduced by token compression along the token-wise and layer-wise dimensions, we introduce token consistency distillation and layer consistency distillation, respectively, aiming to reduce the training difficulty by leveraging guidance from a teacher model and following a progressive learning trajectory. Extensive experiments demonstrate the superior effectiveness, robustness, and generalization capabilities of our proposed framework.", "arxiv_id": "2510.00515v1", "arxiv_authors": ["Zichen Wen", "Shaobo Wang", "Yufa Zhou", "Junyuan Zhang", "Qintong Zhang", "Yifeng Gao", "Zhaorun Chen", "Bin Wang", "Weijia Li", "Conghui He", "Linfeng Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1059208, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a637"}, "filepath": "data/2510.18546v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991978488089788, "type": "Poster", "name": "EfficientNav: Towards On-Device Object-Goal Navigation with Navigation Map Caching and Retrieval", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115877", "abstract": "Object-goal navigation (ObjNav) tasks an agent with navigating to the location of a specific object in an unseen environment. Embodied agents equipped with large language models (LLMs) and online constructed navigation maps can perform ObjNav in a zero-shot manner. However, existing agents heavily rely on giant LLMs on the cloud, e.g., GPT-4, while directly switching to small LLMs, e.g., LLaMA3.2-11b, suffers from significant success rate drops due to limited model capacity for understanding complex navigation maps, which prevents deploying ObjNav on local devices. At the same time, the long prompt introduced by the navigation map description causes high planning latency on local devices. In this paper, we propose EfficientNav to enable on-device efficient LLM-based zero-shot ObjNav. To help the smaller LLMs better understand the environment, we propose semantics-aware memory retrieval to prune redundant information in navigation maps. To reduce planning latency, we propose discrete memory caching and attention-based memory clustering to efficiently save and re-use the KV cache. Extensive experimental results demonstrate that EfficientNav achieves an 11.1\\% improvement in success rate on the HM3D benchmark over GPT-4-based baselines, and demonstrates 6.7$\\times$ real-time latency reduction and 4.7$\\times$ end-to-end latency reduction over the GPT-4 planner. 
Our code is available on Anonymous Github.", "arxiv_id": "2510.18546v1", "arxiv_authors": ["Zebin Yang", "Sunjian Zheng", "Tong Xie", "Tianshi Xu", "Bo Yu", "Fan Wang", "Jie Tang", "Shaoshan Liu", "Meng Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1c9"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1026989, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a638"}, "filepath": "data/2506.09980v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998074686869771, "type": "Poster", "name": "Efficient Part-level 3D Object Generation via Dual Volume Packing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115853", "abstract": "Recent progress in 3D object generation has greatly improved both the quality and efficiency. However, most existing methods generate a single mesh with all parts fused together, which limits the ability to edit or manipulate individual parts. A key challenge is that different objects may have a varying number of parts. To address this, we propose a new end-to-end framework for part-level 3D object generation. Given a single input image, our method generates high-quality 3D objects with an arbitrary number of complete and semantically meaningful parts. We introduce a dual volume packing strategy that organizes all parts into two complementary volumes, allowing for the creation of complete and interleaved parts that assemble into the final object. Experiments show that our model achieves better quality, diversity, and generalization than previous image-based part-level generation methods.", "arxiv_id": "2506.09980v1", "arxiv_authors": ["Jiaxiang Tang", "Ruijie Lu", "Zhaoshuo Li", "Zekun Hao", "Xuan Li", "Fangyin Wei", "Shuran Song", "Gang Zeng", "Ming-Yu Liu", "Tsung-Yi Lin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ca"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1050988, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a639"}, "filepath": "data/2505.24407v4.png", "tags": [], "_media_type": "image", "_rand": 0.9995850054089173, "type": "Poster", "name": "Efficient RAW Image Deblurring with Adaptive Frequency Modulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116988", "abstract": "Image deblurring plays a crucial role in enhancing visual clarity across various applications. Although most deep learning approaches primarily focus on sRGB images, which inherently lose critical information during the image signal processing pipeline, RAW images, being unprocessed and linear, possess superior restoration potential but remain underexplored. Deblurring RAW images presents unique challenges, particularly in handling frequency-dependent blur while maintaining computational efficiency. To address these issues, we propose Frequency Enhanced Network (FrENet), a framework specifically designed for RAW-to-RAW deblurring that operates directly in the frequency domain. 
We introduce a novel Adaptive Frequency Positional Modulation module, which dynamically adjusts frequency components according to their spectral positions, thereby enabling precise control over the deblurring process. Additionally, frequency domain skip connections are adopted to further preserve high-frequency details. Experimental results demonstrate that FrENet surpasses state-of-the-art deblurring methods in RAW image deblurring, achieving significantly better restoration quality while maintaining high efficiency in terms of reduced MACs. Furthermore, FrENet's adaptability enables it to be extended to sRGB images, where it delivers comparable or superior performance compared to methods specifically designed for sRGB data. The source code will be publicly available.", "arxiv_id": "2505.24407v4", "arxiv_authors": ["Wenlong Jiao", "Binglong Li", "Wei Shang", "Ping Wang", "Dongwei Ren"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1cb"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1080830, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a63a"}, "filepath": "data/2509.16549v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994341014409484, "type": "Poster", "name": "Efficient Rectified Flow for Image Fusion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117923", "abstract": "Image fusion is a fundamental and important task in computer vision, aiming to combine complementary information from different modalities to fuse images. In recent years, diffusion models have made significant developments in the field of image fusion. However, diffusion models often require complex computations and redundant inference time, which reduces the applicability of these methods. To address this issue, we propose RFfusion, an efficient one-step diffusion model for image fusion based on Rectified Flow. We incorporate Rectified Flow into the image fusion task to straighten the sampling path in the diffusion model, achieving one-step sampling without the need for additional training, while still maintaining high-quality fusion results. Furthermore, we propose a task-specific variational autoencoder (VAE) architecture tailored for image fusion, where the fusion operation is embedded within the latent space to further reduce computational complexity. To address the inherent discrepancy between conventional reconstruction-oriented VAE objectives and the requirements of image fusion, we introduce a two-stage training strategy. This approach facilitates the effective learning and integration of complementary information from multi-modal source images, thereby enabling the model to retain fine-grained structural details while significantly enhancing inference efficiency. 
Extensive experiments demonstrate that our method outperforms other state-of-the-art methods in terms of both inference speed and fusion quality.", "arxiv_id": "2509.16549v2", "arxiv_authors": ["Zirui Wang", "Jiayi Zhang", "Tianwei Guan", "Yuhan Zhou", "Xingyuan Li", "Minjing Dong", "Jinyuan Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1cc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1037927, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a63b"}, "filepath": "data/2506.10100v1.png", "tags": [], "_media_type": "image", "_rand": 0.99955681112277, "type": "Poster", "name": "EfficientVLA: Training-Free Acceleration and Compression for Vision-Language-Action Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117949", "abstract": "Vision-Language-Action (VLA) models, particularly diffusion-based architectures, demonstrate transformative potential for embodied intelligence but are severely hampered by high computational and memory demands stemming from extensive inherent and inference-time redundancies. While existing acceleration efforts often target isolated inefficiencies, such piecemeal solutions typically fail to holistically address the varied computational and memory bottlenecks across the entire VLA pipeline, thereby limiting practical deployability. We introduce VLA-Pruner, a structured and training-free inference acceleration framework that systematically eliminates these barriers by cohesively exploiting multifaceted redundancies. VLA-Pruner synergistically integrates three targeted strategies: (1) pruning of functionally inconsequential layers from the language module, guided by an analysis of inter-layer redundancies; (2) optimizing the visual processing pathway through a task-aware strategy that selects a compact, diverse set of visual tokens, balancing task-criticality with informational coverage; and (3) alleviating temporal computational redundancy within the iterative diffusion-based action head by strategically caching and reusing key intermediate features.We apply our method to a standard VLA model CogACT, yielding a $1.93\\times$ inference speedup and reduces FLOPs to $28.9\\%$, with only a $0.6\\%$ success rate drop in the SIMPLER benchmark. 
The code will be open-sourced and is available in the supplementary materials.", "arxiv_id": "2506.10100v1", "arxiv_authors": ["Yantai Yang", "Yuhao Wang", "Zichen Wen", "Luo Zhongwei", "Chang Zou", "Zhipeng Zhang", "Chuan Wen", "Linfeng Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1cd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1078313, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a63c"}, "filepath": "data/2503.08221v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992026446112007, "type": "Poster", "name": "EgoBlind: Towards Egocentric Visual Assistance for the Blind", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121565", "abstract": "We present EgoBlind, the first egocentric VideoQA dataset collected from blind individuals to evaluate the assistive capabilities of contemporary multimodal large language models (MLLMs). EgoBlind comprises 1,392 videos that record the daily lives of real blind users from a first-person perspective. It also features 5,311 questions directly posed or generated and verified by blind individuals to reflect their needs for visual assistance under various scenarios. We provide each question with an average of 3 reference answers to alleviate subjective evaluation. Using EgoBlind, we comprehensively evaluate 16 advanced MLLMs and find that all models struggle, with the best performers achieving accuracy near 60\\%, far behind human performance of 87.4\\%. To guide future advancements, we identify and summarize major limitations of existing MLLMs in egocentric visual assistance for the blind and explore heuristic solutions for improvement. With these efforts, we hope EgoBlind can serve as a valuable foundation for developing more effective AI assistants to enhance the independence of the blind individuals' lives.", "arxiv_id": "2503.08221v3", "arxiv_authors": ["Junbin Xiao", "Nanxin Huang", "Hao Qiu", "Zhulin Tao", "Xun Yang", "Richang Hong", "Meng Wang", "Angela Yao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ce"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.484Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1068359, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a63d"}, "filepath": "data/2509.19626v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993987089074592, "type": "Poster", "name": "EgoBridge: Domain Adaptation for Generalizable Imitation from Egocentric Human Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119049", "abstract": "Egocentric human experience data presents a vast resource for scaling up end-to-end imitation learning for robotic manipulation. However, significant domain gaps in visual appearance, sensor modalities, and kinematics between human and robot impede knowledge transfer. This paper presents EgoBridge, a unified co-training framework that explicitly aligns the policy latent spaces between human and robot data using domain adaptation. 
Through an Optimal Transport (OT)-based measure of discrepancy on the joint policy latent features and actions, we learn observation representations that not only align between the human and robot domains but also preserve the action-relevant information critical for policy learning. EgoBridge achieves a significant absolute policy success rate improvement of 44% over human-augmented cross-embodiment baselines in three real-world single-arm and bimanual manipulation tasks. EgoBridge also generalizes to new objects, scenes, and tasks seen only in human data, where baselines fail entirely. Videos and additional information can be found at https://ego-bridge.github.io/", "arxiv_id": "2509.19626v1", "arxiv_authors": ["Ryan Punamiya", "Dhruv Patel", "Patcharapong Aphiwetsa", "Pranav Kuppili", "Lawrence Y. Zhu", "Simar Kareer", "Judy Hoffman", "Danfei Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1cf"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2653741, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a63e"}, "filepath": "data/2503.15470v1.png", "tags": [], "_media_type": "image", "_rand": 0.999325900116862, "type": "Poster", "name": "EgoDTM: Towards 3D-Aware Egocentric Video-Language Pretraining", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115822", "abstract": "Egocentric video-language pretraining has significantly advanced video representation learning. Humans perceive and interact with a fully 3D world, developing spatial awareness that extends beyond text-based understanding. However, most previous works learn from 1D text or 2D visual cues, such as bounding boxes, which inherently lack 3D understanding. To bridge this gap, we introduce EgoDTM, an Egocentric Depth- and Text-aware \\textbf{M}odel, jointly trained through large-scale 3D-aware video pretraining and video-text contrastive learning. EgoDTM incorporates a lightweight 3D-aware decoder to efficiently learn 3D-awareness from pseudo depth maps generated by depth estimation models. To further facilitate 3D-aware video pretraining, we enrich the original brief captions with hand-object visual cues by organically combining several foundation models. Extensive experiments demonstrate EgoDTM's superior performance across diverse downstream tasks, highlighting its strong 3D-aware visual understanding. 
Code: \\url{https://anonymous.4open.science/r/EgoDTM}.", "arxiv_id": "2503.15470v1", "arxiv_authors": ["Boshen Xu", "Yuting Mei", "Xinbi Liu", "Sipeng Zheng", "Qin Jin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1878799, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a63f"}, "filepath": "data/2510.22129v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997408417522806, "type": "Poster", "name": "egoEMOTION: Egocentric Vision and Physiological Signals for Emotion and Personality Recognition in Real-world Tasks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121710", "abstract": "Understanding affect is central to anticipating human behavior, yet current egocentric vision benchmarks largely ignore the person\u2019s emotional states that shape their decisions and actions. Existing tasks in egocentric perception focus on physical activities, hand-object interactions, and attention modeling\u2014assuming neutral affect and uniform personality. This limits the ability of vision systems to capture key internal drivers of behavior. In this paper, we present egoEMOTION, the first dataset that couples egocentric visual and physiological signals with dense self-reports of emotion and personality across controlled and real-world scenarios. Our dataset includes over 50 hours of recordings from 43 participants, captured using Meta\u2019s Project Aria glasses. Each session provides synchronized eye-tracking video, head-mounted photoplethysmography, inertial motion data, and physiological baselines for reference. Participants completed emotion-elicitation tasks and naturalistic activities while self-reporting their affective state using the Circumplex Model and Mikels\u2019 Wheel as well as their personality via the Big Five model. We define three benchmark tasks: (1) continuous affect classification (valence, arousal, dominance); (2) discrete emotion classification; and (3) trait-level personality inference. We show that a classical learning-based method, as a simple baseline in real-world affect prediction, produces better estimates from signals captured on egocentric vision systems than processing physiological signals. 
Our dataset establishes emotion and personality as core dimensions in egocentric perception and opens new directions in affect-driven modeling of behavior, intent, and interaction.", "arxiv_id": "2510.22129v1", "arxiv_authors": ["Matthias Jammot", "Bj\u00f6ern Braun", "Paul Streli", "Rafael Wampfler", "Christian Holz"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1053650, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a640"}, "filepath": "data/2507.18342v1.png", "tags": [], "_media_type": "image", "_rand": 0.999475284941352, "type": "Poster", "name": "EgoExoBench: A Benchmark for First- and Third-person View Video Understanding in MLLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121628", "abstract": "Transferring and integrating knowledge across first-person (egocentric) and third-person (exocentric) viewpoints is intrinsic to human intelligence, enabling humans to learn from others and convey insights from their own experiences. Despite rapid progress in multimodal large language models (MLLMs), their ability to perform such cross-view reasoning remains unexplored. To address this, we introduce EgoExoBench, the first benchmark for egocentric exocentric video understanding and reasoning. Built from publicly available datasets, EgoExoBench comprises over 7300 question\u2013answer pairs spanning eleven sub-tasks organized into three core challenges: semantic alignment, viewpoint association, and temporal reasoning. We evaluate 13 state-of-the-art MLLMs and find that while these models excel on single-view tasks, they struggle to align semantics across perspectives, accurately associate views, and infer temporal dynamics in the ego-exo context. We hope EgoExoBench can serve as a valuable resource for research on embodied agents and intelligent assistants seeking human-like cross-view intelligence.", "arxiv_id": "2507.18342v1", "arxiv_authors": ["Yuping He", "Yifei Huang", "Guo Chen", "Baoqi Pei", "Jilan Xu", "Tong Lu", "Jiangmiao Pang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1079727, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a641"}, "filepath": "data/2505.24287v1.png", "tags": [], "_media_type": "image", "_rand": 0.999444944538486, "type": "Poster", "name": "EgoExOR: An Ego-Exo-Centric Operating Room Dataset for Surgical Activity Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121697", "abstract": "Operating rooms (ORs) demand precise coordination among surgeons, nurses, and equipment in a fast-paced, occlusion-heavy environment, necessitating advanced perception models to enhance safety and efficiency. Existing datasets either provide partial egocentric views or sparse exocentric multi-view context, but do not explore the comprehensive combination of both. 
We introduce EgoExOR, the first OR dataset and accompanying benchmark to fuse first-person and third-person perspectives. Spanning 94 minutes (84,553 frames at 15 FPS) of two emulated spine procedures, Ultrasound-Guided Needle Insertion and Minimally Invasive Spine Surgery, EgoExOR integrates egocentric data (RGB, gaze, hand tracking, audio) from wearable glasses, exocentric RGB and depth from RGB-D cameras, and ultrasound imagery. Its detailed scene graph annotations, covering 36 entities and 22 relations (568,235 triplets), enable robust modeling of clinical interactions, supporting tasks like action recognition and human-centric perception. We evaluate the surgical scene graph generation performance of two adapted state-of-the-art models and offer a new baseline that explicitly leverages EgoExOR\u2019s multimodal and multi-perspective signals. This new dataset and benchmark set a new foundation for OR perception, offering a rich, multimodal resource for next-generation clinical perception. Our code and data are available at https://github.com/ardamamur/EgoExOR.", "arxiv_id": "2505.24287v1", "arxiv_authors": ["Ege \u00d6zsoy", "Arda Mamur", "Felix Tristram", "Chantal Pellegrini", "Magdalena Wysocki", "Benjamin Busam", "Nassir Navab"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 891135, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a642"}, "filepath": "data/2510.23569v1.png", "tags": [], "_media_type": "image", "_rand": 0.999965220515509, "type": "Poster", "name": "EgoThinker: Unveiling Egocentric Reasoning with Spatio-Temporal CoT", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119530", "abstract": "Egocentric video reasoning centers on an unobservable agent behind the camera who dynamically shapes the environment, requiring inference of hidden intentions and recognition of fine-grained interactions. This core challenge limits current multimodal large language models (MLLMs), which excel at visible event reasoning but lack embodied, first-person understanding. To bridge this gap, we introduce EgoThinker, a novel framework that endows MLLMs with robust egocentric reasoning capabilities through spatio-temporal chain-of-thought supervision and a two-stage learning curriculum. First, we introduce EgoRe-5M, a large-scale egocentric QA dataset constructed from 13M diverse egocentric video clips. This dataset features multi-minute segments annotated with detailed CoT rationales and dense hand\u2013object grounding. Second, we employ SFT on EgoRe-5M to instill reasoning skills, followed by reinforcement fine-tuning (RFT) to further enhance spatio-temporal localization. 
Experimental results show that EgoThinker outperforms existing methods across multiple egocentric benchmarks, while achieving substantial improvements in fine-grained spatio-temporal localization tasks.", "arxiv_id": "2510.23569v1", "arxiv_authors": ["Baoqi Pei", "Yifei Huang", "Jilan Xu", "Yuping He", "Guo Chen", "Fei Wu", "Yu Qiao", "Jiangmiao Pang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1025867, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a643"}, "filepath": "data/2411.08380v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994950692180423, "type": "Poster", "name": "EgoVid-5M: A Large-Scale Video-Action Dataset for Egocentric Videos Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121716", "abstract": "Video generation has emerged as a promising tool for world simulation, leveraging visual data to replicate real-world environments. Within this context, egocentric video generation, which centers on the human perspective, holds significant potential for enhancing applications in virtual reality, augmented reality, and gaming. However, the generation of egocentric videos presents substantial challenges due to the dynamic nature of first-person viewpoints, the intricate diversity of actions, and the complex variety of scenes encountered. Existing datasets are inadequate for addressing these challenges effectively. To bridge this gap, we present EgoVid-5M, the first high-quality dataset specifically curated for egocentric video generation. EgoVid-5M encompasses over 5 million egocentric video clips and is enriched with detailed action annotations, including fine-grained kinematic control and high-level textual descriptions. To ensure the integrity and usability of the dataset, we implement a sophisticated data cleansing pipeline designed to maintain frame consistency, action coherence, and motion smoothness under egocentric conditions. Furthermore, we introduce EgoDreamer, which is capable of generating egocentric videos driven simultaneously by action descriptions and kinematic control signals. 
The EgoVid-5M dataset, associated action annotations, and all data cleansing metadata will be released for the advancement of research in egocentric video generation.", "arxiv_id": "2411.08380v1", "arxiv_authors": ["Xiaofeng Wang", "Kang Zhao", "Feng Liu", "Jiayu Wang", "Guosheng Zhao", "Xiaoyi Bao", "Zheng Zhu", "Yingya Zhang", "Xingang Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6920047, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a644"}, "filepath": "data/2510.17700v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992639743996644, "type": "Poster", "name": "Elastic ViTs from Pretrained Models without Retraining", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118288", "abstract": "Vision foundation models achieve remarkable performance but are only available in a limited set of pre-determined sizes, forcing sub-optimal deployment choices under real-world constraints. We introduce a novel post-pretraining structured pruning method that enables elastic inference across a continuum of compute budgets. Our approach combines gradient information with cross-network structure correlations, efficiently approximated through an evolutionary algorithm, does not require labeled data, generalizes to models without a classification head, and is retraining free. Experiments on DINO and AugReg models demonstrate superior performance over state of the art methods across various sparsities, requiring less than five minutes on a A100 GPU to generate elastic models that can be adjusted to any computational budget. Our key contributions include an efficient pruning strategy for pretrained Vision Transformers, a novel evolutionary approximation of Hessian off-diagonal structures, and a self-supervised importance scoring mechanism that maintains strong performance without requiring retraining nor labels.", "arxiv_id": "2510.17700v1", "arxiv_authors": ["Walter Simoncini", "Michael Dorkenwald", "Tijmen Blankevoort", "Cees G. M. Snoek", "Yuki M. Asano"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1024172, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a645"}, "filepath": "data/2412.09585v3.png", "tags": [], "_media_type": "image", "_rand": 0.999026874474103, "type": "Poster", "name": "Elevating Visual Perception in Multimodal LLMs with Visual Embedding Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119448", "abstract": "In recent times, the standard practice for developing MLLMs is to feed features from vision encoder(s) into the LLM and train with natural language supervision. This approach often causes models to lean towards language comprehension and undermine the rich visual perception signals present in the data, which are critical for tasks involving spatial reasoning in the domain of embodied AI and robotics. 
Is it possible to optimize both at the same time? In this work, we propose VisPer-LM, the first approach that infuses visual perception knowledge from expert vision encoders into the LLM's (of an MLLM) hidden representations. We start by investigating MLLMs trained solely with natural language supervision and identify a positive correlation between the quality of visual representations within these models and their downstream performance. Given this insight, we formulate the objective during the pretraining stage in MLLMs as a coupled optimization of predictive visual embedding and next (text) token prediction. Moreover, through extensive probing, we observe improved visual representation quality due to embedding optimization, underscoring the effectiveness of our probing setup. We demonstrate that our VisPer-LM outperforms the single and multi-encoder baselines, proving our approach's superiority over explicitly feeding the corresponding features to the LLM. In particular, VisPer-LM boosts performance by an average margin of up to 2.5% on various benchmarks, with a notable improvement of 8.7% on the Depth task in CV-Bench.", "arxiv_id": "2412.09585v3", "arxiv_authors": ["Jitesh Jain", "Zhengyuan Yang", "Humphrey Shi", "Jianfeng Gao", "Jianwei Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1445504, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a646"}, "filepath": "data/2503.08367v1.png", "tags": [], "_media_type": "image", "_rand": 0.999932265637219, "type": "Poster", "name": "Embodied Crowd Counting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118347", "abstract": "Occlusion is one of the fundamental challenges in crowd counting. In the community, various data-driven approaches have been developed to address this issue, yet their effectiveness is limited. This is mainly because most existing crowd counting datasets on which the methods are trained are based on passive cameras, restricting their ability to fully sense the environment. Recently, embodied navigation methods have shown significant potential in precise object detection in interactive scenes. These methods incorporate active camera settings, holding promise in addressing the fundamental issues in crowd counting. However, most existing methods are designed for indoor navigation, showing unknown performance in analyzing complex object distribution in large-scale scenes, such as crowds. Besides, most existing embodied navigation datasets are indoor scenes with limited scale and object quantity, preventing them from being introduced into dense crowd analysis. Based on this, a novel task, Embodied Crowd Counting (ECC), is proposed to count the number of persons in a large-scale scene actively. We then build up an interactive simulator, the Embodied Crowd Counting Dataset (ECCD), which enables large-scale scenes and large object quantities. A prior probability distribution approximating a realistic crowd distribution is introduced to generate crowds. Then, a zero-shot navigation method (ZECC) is proposed as a baseline. 
This method contains an MLLM-driven coarse-to-fine navigation mechanism, enabling active Z-axis exploration, and a normal-line-based crowd distribution analysis method for fine counting. Experimental results show that the proposed method achieves the best trade-off between counting accuracy and navigation cost.", "arxiv_id": "2503.08367v1", "arxiv_authors": ["Runling Long", "Yunlong Wang", "Jia Wan", "Xiang Deng", "Xinting Zhu", "Weili Guan", "Antoni B. Chan", "Liqiang Nie"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3732795, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a647"}, "filepath": "data/2506.17220v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997868219754678, "type": "Poster", "name": "Emergent Temporal Correspondences from Video Diffusion Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117953", "abstract": "Recent advancements in video diffusion models based on Diffusion Transformers (DiTs) have achieved remarkable success in generating temporally coherent videos. Yet, a fundamental question persists: how do these models internally establish and represent temporal correspondences across frames? We introduce DiffTrack, the first quantitative analysis framework designed to answer this question. DiffTrack constructs a dataset of prompt-generated video with pseudo ground-truth tracking annotations and proposes novel evaluation metrics to systematically analyze how each component within the full 3D attention mechanism of DiTs (e.g., representations, layers, and timesteps) contributes to establishing temporal correspondences. Our analysis reveals that query-key similarities in specific (but not all) layers play a critical role in temporal matching, and that this matching becomes increasingly prominent throughout denoising. We demonstrate practical applications of DiffTrack in zero-shot point tracking, where it achieves state-of-the-art performance compared to existing vision foundation and self-supervised video models. Further, we extend our findings to motion-enhanced video generation with a novel guidance method that improves temporal consistency of generated videos without additional training. 
We believe our work offers crucial insights into the inner workings of video DiTs and establishes a foundation for further research and applications leveraging their temporal understanding.", "arxiv_id": "2506.17220v2", "arxiv_authors": ["Jisu Nam", "Soowon Son", "Dahyun Chung", "Jiyoung Kim", "Siyoon Jin", "Junhwa Hur", "Seungryong Kim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1d9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3761961, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a648"}, "filepath": "data/2510.12753v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997029049466373, "type": "Poster", "name": "E-MoFlow: Learning Egomotion and Optical Flow from Event Data via Implicit Regularization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120349", "abstract": "The estimation of optical flow and 6-DoF ego-motion\u2014two fundamental tasks in 3-D vision\u2014has typically been addressed independently. For neuromorphic vision (e.g., event cameras), however, the lack of robust data association makes solving the two problems separately an ill-posed challenge, especially in the absence of supervision via ground truth. Existing works mitigate this ill-posedness by either enforcing the smoothness of the flow field via an explicit variational regularizer or leveraging explicit structure-and-motion priors in the parametrization to improve event alignment. The former notably introduces bias in results and computational overhead, while the latter\u2014which parametrizes the optical flow in terms of the scene depth and the camera motion\u2014often converges to suboptimal local minima. To address these issues, we propose an unsupervised pipeline that jointly optimizes egomotion and flow via implicit spatial-temporal and geometric regularization. First, by modeling the camera's egomotion as a continuous spline and optical flow as an implicit neural representation, our method inherently embeds spatial-temporal coherence through inductive biases. Second, we incorporate structure-and-motion priors through differential geometric constraints, bypassing explicit depth estimation while maintaining rigorous geometric consistency. As a result, our framework (called \\textbf{E-MoFlow}) unifies egomotion and optical flow estimation via implicit regularization under a fully unsupervised paradigm. Experiments demonstrate its versatility to general 6-DoF motion scenarios, achieving state-of-the-art performance among unsupervised methods and competitive even with supervised approaches. 
Code will be released upon acceptance.", "arxiv_id": "2510.12753v2", "arxiv_authors": ["Wenpu Li", "Bangyan Liao", "Yi Zhou", "Qi Xu", "Pian Wan", "Peidong Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1da"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1094919, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a649"}, "filepath": "data/2505.20033v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993112026204483, "type": "Poster", "name": "EmoNet-Face: An Expert-Annotated Benchmark for Synthetic Emotion Recognition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121788", "abstract": "Effective human-AI interaction relies on AI's ability to accurately perceive and interpret human emotions. Current benchmarks for vision and vision-language models are severely limited, offering a narrow emotional spectrum that overlooks nuanced states (e.g., bitterness, intoxication) and fails to distinguish subtle differences between related feelings (e.g., shame vs. embarrassment). Existing datasets also often use uncontrolled imagery with occluded faces and lack demographic diversity, risking significant bias. To address these critical gaps, we introduce EmoNet Face, a comprehensive benchmark suite. EmoNet Face features: (1) A novel 40-category emotion taxonomy, meticulously derived from foundational research to capture finer details of human emotional experiences. (2) Three large-scale, AI-generated datasets (EmoNet HQ, Binary, and Big) with explicit, full-face expressions and controlled demographic balance across ethnicity, age, and gender. (3) Rigorous, multi-expert annotations for training and high-fidelity evaluation. (4) We build Empathic Insight Face, a model achieving human-expert-level performance on our benchmark. The publicly released EmoNet Face suite\u2014taxonomy, datasets, and model\u2014provides a robust foundation for developing and evaluating AI systems with a deeper understanding of human emotions.", "arxiv_id": "2505.20033v2", "arxiv_authors": ["Christoph Schuhmann", "Robert Kaczmarczyk", "Gollam Rabby", "Felix Friedrich", "Maurice Kraus", "Krishna Kalyan", "Kourosh Nadi", "Huu Nguyen", "Kristian Kersting", "S\u00f6ren Auer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1db"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.485Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 914101, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a64a"}, "filepath": "data/2510.20244v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999753220663068, "type": "Poster", "name": "Empower Words: DualGround for Structured Phrase and Sentence-Level Temporal Grounding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115987", "abstract": "Video Temporal Grounding (VTG) aims to localize temporal segments in long, untrimmed videos that align with a given natural language query. This task typically comprises two subtasks: \\textit{Moment Retrieval (MR)} and \\textit{Highlight Detection (HD)}. 
While recent advances have been driven by powerful pretrained vision-language models such as CLIP and InternVideo2, existing approaches commonly treat all text tokens uniformly during cross-modal attention, disregarding their distinct semantic roles. To validate the limitations of this approach, we conduct controlled experiments demonstrating that VTG models overly rely on [EOS]-driven global semantics while failing to effectively utilize word-level signals, which limits their ability to achieve fine-grained temporal alignment. Motivated by this limitation, we propose DualGround, a dual-branch architecture that explicitly separates global and local semantics by routing the [EOS] token through a sentence-level path and clustering word tokens into phrase-level units for localized grounding. Our method introduces (1) token-role-aware cross-modal interaction strategies that align video features with sentence-level and phrase-level semantics in a structurally disentangled manner, and (2) a joint modeling framework that not only improves global sentence-level alignment but also enhances fine-grained temporal grounding by leveraging structured phrase-aware context. This design allows the model to capture both coarse and localized semantics, enabling more expressive and context-aware video grounding. DualGround achieves state-of-the-art performance on both Moment Retrieval and Highlight Detection tasks across QVHighlights and Charades-STA benchmarks, demonstrating the effectiveness of disentangled semantic modeling in video-language alignment.", "arxiv_id": "2510.20244v1", "arxiv_authors": ["Minseok Kang", "Minhyeok Lee", "Minjung Kim", "Donghyeong Kim", "Sangyoun Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1dc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 992649, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a64b"}, "filepath": "data/2504.20690v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991784984593548, "type": "Poster", "name": "Enabling Instructional Image Editing with In-Context Generation in Large Scale Diffusion Transformer", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119860", "abstract": "Instruction-based image editing enables precise modifications via natural language prompts, but existing methods face a precision-efficiency tradeoff: fine-tuning demands massive datasets (>10M) and computational resources, while training-free approaches suffer from weak instruction comprehension. We address this by proposing \\textbf{ICEdit}, which leverages the inherent comprehension and generation abilities of large-scale Diffusion Transformers (DiTs) through three key innovations: (1) An in-context editing paradigm without architectural modifications; (2) Minimal parameter-efficient fine-tuning for quality improvement; (3) Early Filter Inference-Time Scaling, which uses VLMs to select high-quality noise samples for efficiency. Experiments show that ICEdit achieves state-of-the-art editing performance with only 0.1\\% of the training data and 1\\% trainable parameters compared to previous methods. 
Our approach establishes a new paradigm for balancing precision and efficiency in instructional image editing.", "arxiv_id": "2504.20690v3", "arxiv_authors": ["Zechuan Zhang", "Ji Xie", "Yu Lu", "Zongxin Yang", "Yi Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1dd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6319227, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a64c"}, "filepath": "data/2505.23601v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999339268692246, "type": "Poster", "name": "EndoBench: A Comprehensive Evaluation of Multi-Modal Large Language Models for Endoscopy Analysis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121546", "abstract": "Endoscopic procedures are essential for diagnosing and treating internal diseases, and multi-modal large language models (MLLMs) are increasingly applied to assist in endoscopy analysis. However, current benchmarks are limited, as they typically cover specific endoscopic scenarios and a small set of clinical tasks, failing to capture the real-world diversity of endoscopic scenarios and the full range of skills needed in clinical workflows. To address these issues, we introduce EndoBench, the first comprehensive benchmark specifically designed to assess MLLMs across the full spectrum of endoscopic practice with multi-dimensional capacities. EndoBench encompasses 4 distinct endoscopic scenarios, 12 specialized clinical tasks with 12 secondary subtasks, and 5 levels of visual prompting granularities, resulting in 6,832 rigorously validated VQA pairs from 21 diverse datasets. Our multi-dimensional evaluation framework mirrors the clinical workflow\u2014spanning anatomical recognition, lesion analysis, spatial localization, and surgical operations\u2014to holistically gauge the perceptual and diagnostic abilities of MLLMs in realistic scenarios. We benchmark 23 state-of-the-art models, including general-purpose, medical-specialized, and proprietary MLLMs, and establish human clinician performance as a reference standard. Our extensive experiments reveal: (1) proprietary MLLMs outperform open-source and medical-specialized models overall, but still trail human experts; (2) medical-domain supervised fine-tuning substantially boosts task-specific accuracy; and (3) model performance remains sensitive to prompt format and clinical task complexity. EndoBench establishes a new standard for evaluating and advancing MLLMs in endoscopy, highlighting both progress and persistent gaps between current models and expert clinical reasoning. 
We publicly release our benchmark and code.", "arxiv_id": "2505.23601v2", "arxiv_authors": ["Shengyuan Liu", "Boyun Zheng", "Wenting Chen", "Zhihao Peng", "Zhenfei Yin", "Jing Shao", "Jiancong Hu", "Yixuan Yuan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1de"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1068954, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a64d"}, "filepath": "data/2505.10562v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992467808142791, "type": "Poster", "name": "End-to-End Vision Tokenizer Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116026", "abstract": "Existing vision tokenization isolates the optimization of vision tokenizers from downstream training, implicitly assuming the visual tokens can generalize well across various tasks, e.g., image generation and visual question answering. The vision tokenizer optimized for low-level reconstruction is agnostic to downstream tasks requiring varied representations and semantics. This decoupled paradigm introduces a critical misalignment: The loss of the vision tokenization can be the representation bottleneck for target tasks. For example, errors in tokenizing text in a given image lead to poor results when recognizing or generating them. To address this, we propose ETT, an end-to-end vision tokenizer tuning approach that enables joint optimization between vision tokenization and target autoregressive tasks. Unlike prior autoregressive models that use only discrete indices from a frozen vision tokenizer, ETT leverages the visual embeddings of the tokenizer codebook, and optimizes the vision tokenizers end-to-end with both reconstruction and caption objectives. ETT can be seamlessly integrated into existing training pipelines with minimal architecture modifications. Our ETT is simple to implement and integrate, without the need to adjust the original codebooks or architectures of the employed large language models. Extensive experiments demonstrate that our proposed end-to-end vision tokenizer tuning unlocks significant performance gains, i.e., 2-6% for multimodal understanding and visual generation tasks compared to frozen tokenizer baselines, while preserving the original reconstruction capability. 
We hope this very simple and strong method can empower multimodal foundation models besides image generation and understanding.", "arxiv_id": "2505.10562v1", "arxiv_authors": ["Wenxuan Wang", "Fan Zhang", "Yufeng Cui", "Haiwen Diao", "Zhuoyan Luo", "Huchuan Lu", "Jing Liu", "Xinlong Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1df"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1010152, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a64e"}, "filepath": "data/2501.01895v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994733507555938, "type": "Poster", "name": "EnerVerse: Envisioning Embodied Future Space for Robotics Manipulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116531", "abstract": "We introduce EnerVerse, a generative robotics foundation model that constructs and interprets embodied spaces. EnerVerse employs a chunk-wise autoregressive video diffusion framework to predict future embodied spaces from instructions, enhanced by a sparse context memory for long-term reasoning. To model the 3D robotics world, we adopt a multi-view video representation, providing rich perspectives to address challenges like motion ambiguity and 3D grounding. Additionally, EnerVerse-D, a data engine pipeline combining generative modeling with 4D Gaussian Splatting, forms a self-reinforcing data loop to reduce the sim-to-real gap. Leveraging these innovations, EnerVerse translates 4D world representations into physical actions via a policy head (EnerVerse-A), achieving state-of-the-art performance in both simulation and real-world tasks.", "arxiv_id": "2501.01895v2", "arxiv_authors": ["Siyuan Huang", "Liliang Chen", "Pengfei Zhou", "Shengcong Chen", "Zhengkai Jiang", "Yue Hu", "Yue Liao", "Peng Gao", "Hongsheng Li", "Maoqing Yao", "Guanghui Ren"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e0"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1141209, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a64f"}, "filepath": "data/2510.06254v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990147084420467, "type": "Poster", "name": "Enhanced Self-Distillation Framework for Efficient Spiking Neural Network Training", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116982", "abstract": "Spiking Neural Networks (SNNs) exhibit exceptional energy efficiency on neuromorphic hardware due to their sparse activation patterns. However, conventional training methods based on surrogate gradients and Backpropagation Through Time (BPTT) not only lag behind Artificial Neural Networks (ANNs) in performance, but also incur significant computational and memory overheads that grow linearly with the temporal dimension. To enable high-performance SNN training under limited computational resources, we propose an enhanced self-distillation framework, jointly optimized with rate-based backpropagation. 
Specifically, the firing rates of intermediate SNN layers are projected onto lightweight ANN branches, and high-quality knowledge generated by the model itself is used to optimize substructures through the ANN pathways. Unlike traditional self-distillation paradigms, we observe that low-quality self-generated knowledge may hinder convergence. To address this, we decouple the teacher signal into reliable and unreliable components, ensuring that only reliable knowledge is used to guide the optimization of the model. Extensive experiments on CIFAR-10, CIFAR-100, CIFAR10-DVS, and ImageNet demonstrate that our method reduces training complexity while achieving high-performance SNN training. For instance, on CIFAR-100, it reduces memory consumption by 75.80% and training time by 23.30% compared to BPTT, while improving accuracy by 1.85%. Notably, the proposed self-distillation framework also shows strong adaptability when applied to ANNs.", "arxiv_id": "2510.06254v1", "arxiv_authors": ["Xiaochen Zhao", "Chengting Yu", "Kairong Yu", "Lei Liu", "Aili Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1120874, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a650"}, "filepath": "data/2504.06264v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995471660512207, "type": "Poster", "name": "Enhancing 3D Reconstruction for Dynamic Scenes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116068", "abstract": "In this work, we address the task of 3D reconstruction in dynamic scenes, where object motions frequently degrade the quality of previous 3D pointmap regression methods, such as DUSt3R, that are originally designed for static 3D scene reconstruction. Although these methods provide an elegant and powerful solution in static settings, they struggle in the presence of dynamic motions that disrupt alignment based solely on camera poses. To overcome this, we propose D$^2$USt3R that directly regresses Static-Dynamic Aligned Pointmaps (SDAP) that simultaneously capture both static and dynamic 3D scene geometry. By explicitly incorporating both spatial and temporal aspects, our approach successfully encapsulates 3D dense correspondence to the proposed pointmaps, enhancing downstream tasks. 
Extensive experimental evaluations demonstrate that our proposed approach consistently achieves superior 3D reconstruction performance across various datasets featuring complex motions.", "arxiv_id": "2504.06264v1", "arxiv_authors": ["Jisang Han", "Honggyu An", "Jaewoo Jung", "Takuya Narihira", "Junyoung Seo", "Kazumi Fukuda", "Chaehyun Kim", "Sunghwan Hong", "Yuki Mitsufuji", "Seungryong Kim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 7495782, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a651"}, "filepath": "data/2510.16540v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996867278046118, "type": "Poster", "name": "Enhancing Compositional Reasoning in CLIP via Reconstruction and Alignment of Text Descriptions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119758", "abstract": "Despite recent advances, vision-language models trained with standard contrastive objectives still struggle with compositional reasoning -- the ability to understand structured relationships between visual and linguistic elements. This shortcoming is largely due to the tendency of the text encoder to focus on individual words rather than their relations, a limitation reinforced by contrastive training that primarily aligns words with visual objects. In this paper, we introduce REconstruction and Alignment of text Descriptions (READ), a fine-tuning method designed to enhance compositional reasoning by adding two auxiliary objectives to the contrastive learning: (1) a token-level reconstruction objective, where a frozen pre-trained decoder reconstructs paraphrased captions based on the embedding of the original caption; and (2) a sentence-level alignment objective, which explicitly aligns paraphrased sentences in the embedding space. We show that READ-CLIP, a model derived by applying the READ method to the pre-trained CLIP model, achieves the state-of-the-art performance across five major compositional reasoning benchmarks, outperforming the strongest conventional fine-tuning baseline by up to 4.1\\%. Furthermore, applying READ to existing CLIP variants (including NegCLIP and FSC-CLIP) also improves performance on these benchmarks. Quantitative and qualitative analyses reveal that our proposed objectives -- reconstruction and alignment -- offer complementary benefits: the former encourages the encoder to capture relationships between words within a caption, while the latter ensures consistent representations for paraphrases expressed with different wording.", "arxiv_id": "2510.16540v1", "arxiv_authors": ["Jihoon Kwon", "Kyle Min", "Jy-yong Sohn"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1179083, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a652"}, "filepath": "data/2506.01511v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995281424298407, "type": "Poster", "name": 
"Enhancing Diffusion-based Unrestricted Adversarial Attacks via Adversary Preferences Alignment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118301", "abstract": "Preference alignment in diffusion models has primarily focused on benign human preferences (e.g., aesthetic). In this paper, we propose a novel perspective: framing unrestricted adversarial example generation as a problem of aligning with adversary preferences. Unlike benign alignment, adversarial alignment involves two inherently conflicting preferences: visual consistency and attack effectiveness, which often lead to unstable optimization and reward hacking (e.g., reducing visual quality to improve attack success). To address this, we propose APA (Adversary Preferences Alignment), a two-stage framework that decouples conflicting preferences and optimizes each with differentiable rewards. In the first stage, APA fine-tunes LoRA to improve visual consistency using rule-based similarity reward. In the second stage, APA updates either the image latent or prompt embedding based on feedback from a substitute classifier, guided by trajectory-level and step-wise rewards. To enhance black-box transferability, we further incorporate a diffusion augmentation strategy. Experiments demonstrate that APA achieves significantly better attack transferability while maintaining high visual consistency, inspiring further research to approach adversarial attacks from an alignment perspective.", "arxiv_id": "2506.01511v1", "arxiv_authors": ["Kaixun Jiang", "Zhaoyu Chen", "Haijing Guo", "Jinglun Li", "Jiyuan Fu", "Pinxue Guo", "Hao Tang", "Bo Li", "Wenqiang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2001001, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a653"}, "filepath": "data/2510.09343v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991113808182874, "type": "Poster", "name": "Enhancing Infrared Vision: Progressive Prompt Fusion Network and Benchmark", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115172", "abstract": "We engage in the relatively underexplored task named thermal infrared image enhancement. Existing infrared image enhancement methods primarily focus on tackling individual degradations, such as noise, contrast, and blurring, making it difficult to handle coupled degradations. Meanwhile, all-in-one enhancement methods, commonly applied to RGB sensors, often demonstrate limited effectiveness due to the significant differences in imaging models. In sight of this, we first revisit the imaging mechanism and introduce a Recurrent Prompt Fusion Network (RPFN). Specifically, the RPFN initially establishes prompt pairs based on the thermal imaging process. 
For each type of degradation, we fuse the corresponding prompt pairs to modulate the model's features, providing adaptive guidance that enables the model to better address specific degradations under single or multiple conditions. In addition, a selective recurrent training mechanism is introduced to gradually refine the model's handling of composite cases to align the enhancement process, which not only allows the model to remove camera noise and retain key structural details, but also enhances the overall contrast of the thermal image. Furthermore, we introduce the most comprehensive high-quality infrared benchmark covering a wide range of scenarios. Extensive experiments substantiate that our approach not only delivers promising visual results under specific degradation but also significantly improves performance on complex degradation scenes, achieving a notable 8.76% improvement.", "arxiv_id": "2510.09343v1", "arxiv_authors": ["Jinyuan Liu", "Zihang Chen", "Zhu Liu", "Zhiying Jiang", "Long Ma", "Xin Fan", "Risheng Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1633036, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a654"}, "filepath": "data/2510.21609v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996978582184545, "type": "Poster", "name": "Enhancing Tactile-based Reinforcement Learning for Robotic Control", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117811", "abstract": "Effectively combining tactile sensing and reinforcement learning (RL) creates powerful new pathways for sophisticated robot manipulation. However, tactile information is not always fully exploited by neural network-based approaches in deep RL due to its unique characteristics (e.g. sparsity). Departing from conventional reliance on idealised state representations, we present a new approach to strengthen the performance of sensory-driven agents for complex manipulation tasks. We provide a novel application and analysis of tailored reconstruction and multi-step dynamics objectives that help the agent more effectively leverage its tactile observations, and propose training these objectives on a separated auxiliary memory. We find that dynamics-based objectives unlock higher-performing agents that are able to predict future contacts with high precision. 
Experimental results show the efficacy of our approach through a simulated robotic agent on three complex control tasks with touch and proprioception alone.", "arxiv_id": "2510.21609v1", "arxiv_authors": ["Elle Miller", "Trevor McInroe", "David Abel", "Oisin Mac Aodha", "Sethu Vijayakumar"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e6"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061900, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a655"}, "filepath": "data/2505.19261v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993117573395319, "type": "Poster", "name": "Enhancing Text-to-Image Diffusion Transformer via Split-Text Conditioning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120085", "abstract": "Current text-to-image diffusion generation typically employs complete-text conditioning. Due to the intricate syntax, diffusion transformers (DiTs) inherently suffer from a comprehension defect of complete-text captions. One-fly complete-text input either overlooks critical semantic details or causes semantic confusion by simultaneously modeling diverse semantic primitive types. To mitigate this defect of DiTs, we propose a novel split-text conditioning framework named DiT-ST. This framework converts a complete-text caption into a split-text caption, a collection of simplified sentences, to explicitly express various semantic primitives and their interconnections. The split-text caption is then injected into different denoising stages of DiT-ST in a hierarchical and incremental manner. Specifically, DiT-ST leverages Large Language Models to parse captions, extracting diverse primitives and hierarchically sorting out and constructing these primitives into a split-text input. Moreover, we partition the diffusion denoising process according to its differential sensitivities to diverse semantic primitive types and determine the appropriate timesteps to incrementally inject tokens of diverse semantic primitive types into input tokens via cross-attention. In this way, DiT-ST enhances the representation learning of specific semantic primitive types across different stages. Extensive experiments validate the effectiveness of our proposed DiT-ST in mitigating the complete-text comprehension defect. 
Dataset and code are available.", "arxiv_id": "2505.19261v1", "arxiv_authors": ["Yu Zhang", "Jialei Zhou", "Xinchen Li", "Qi Zhang", "Zhongwei Wan", "Tianyu Wang", "Duoqian Miao", "Changwei Wang", "Longbing Cao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4458379, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a656"}, "filepath": "data/2412.06474v1.png", "tags": [], "_media_type": "image", "_rand": 0.999304063812433, "type": "Poster", "name": "Enhancing Vision-Language Model Reliability with Uncertainty-Guided Dropout Decoding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118572", "abstract": "Large vision-language models (LVLMs) excel at multimodal tasks but are prone to misinterpreting visual inputs, often resulting in hallucinations and unreliable outputs. We present Dropout Decoding, a novel inference-time approach that quantifies the uncertainty of visual tokens and selectively masks uncertain tokens to improve decoding. Our method measures the uncertainty of each visual token by projecting it onto the text space and decomposing it into aleatoric and epistemic components. Specifically, we focus on epistemic uncertainty, which captures perception-related errors more effectively. Inspired by dropout regularization, we introduce uncertainty-guided token dropout, which applies the dropout principle to input visual tokens instead of model parameters, and during inference rather than training. By aggregating predictions from an ensemble of masked decoding contexts, we can robustly mitigate errors arising from visual token misinterpretations. Evaluations on benchmarks including CHAIR, THRONE, and MMBench demonstrate that Dropout Decoding significantly reduces object hallucinations (OH) and enhances both reliability and quality of LVLM outputs across diverse visual contexts.", "arxiv_id": "2412.06474v1", "arxiv_authors": ["Yixiong Fang", "Ziran Yang", "Zhaorun Chen", "Zhuokai Zhao", "Jiawei Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.486Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1619919, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a657"}, "filepath": "data/2510.07823v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994485650325838, "type": "Poster", "name": "Enhancing Visual Prompting through Expanded Transformation Space and Overfitting Mitigation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116362", "abstract": "Visual prompting (VP) has emerged as a promising parameter-efficient fine-tuning approach for adapting pre-trained vision models to downstream tasks without modifying model parameters. Despite offering advantages like negligible computational overhead and compatibility with black-box models, conventional VP methods typically achieve lower accuracy than other adaptation approaches. 
Our analysis reveals two critical limitations: the restricted expressivity of simple additive transformation and a tendency toward overfitting when the parameter count increases. To address these challenges, we propose ACAVP (Affine, Color, and Additive Visual Prompting), which enhances VP's expressive power by introducing complementary transformation operations: affine transformation for creating task-specific prompt regions while preserving original image information, and color transformation for emphasizing task-relevant visual features. Additionally, we identify that overfitting is a critical issue in VP training and introduce TrivialAugment as an effective data augmentation, which not only benefits our approach but also significantly improves existing VP methods, with performance gains of up to 12 percentage points on certain datasets. This demonstrates that appropriate data augmentation is universally beneficial for VP training. Extensive experiments across twelve diverse image classification datasets with two different model architectures demonstrate that ACAVP achieves state-of-the-art accuracy among VP methods, surpasses linear probing in average accuracy, and exhibits superior robustness to distribution shifts, all while maintaining minimal computational overhead during inference.", "arxiv_id": "2510.07823v1", "arxiv_authors": ["Shohei Enomoto"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1e9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1019757, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a658"}, "filepath": "data/2509.26096v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994937136289817, "type": "Poster", "name": "Entropy-aware Variance Optimization for Diffusion Inference", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115792", "abstract": "Diffusion models (DMs) excel in image generation, but suffer from slow inference and training-inference discrepancies. Although gradient-based solvers like DPM-Solver accelerate sampling inference of DMs, they lack theoretical foundations in information transmission efficiency. This paper introduces an information-theoretic perspective on the diffusion inference processes, revealing that successful denoising fundamentally reduces conditional entropy during reverse transitions. This principle leads to two key insights: (1) data prediction parameterization outperforms its noise counterpart in entropy reduction, and (2) conditional variance optimization that minimizes transition and reconstruction errors through conditional entropy reduction. Building on these insights, we propose entropy-aware variance optimization for diffusion inference, called *EvoDiff*, that systematically reduces uncertainty during denoising by optimizing conditional entropy reduction. Extensive experiments on DMs validate our insights and demonstrate that the proposed method consistently outperforms state-of-the-art gradient-based solvers. 
For example, compared to the baseline method, EvoDiff reduces reconstruction error by up to 45.5\\% (FID from 5.10 to 2.78) at 10 function evaluations (NFE) on CIFAR-10, cuts function evaluation cost by 25\\% (from 20 to 15 NFE) for high-quality samples on ImageNet-256, and improves text-to-image generation while reducing artifacts.", "arxiv_id": "2509.26096v2", "arxiv_authors": ["Shigui Li", "Wei Chen", "Delu Zeng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ea"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1098936, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a659"}, "filepath": "data/2504.13987v1.png", "tags": [], "_media_type": "image", "_rand": 0.999565376761209, "type": "Poster", "name": "Entropy Rectifying Guidance for Diffusion and Flow Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118003", "abstract": "Guidance techniques are commonly used in diffusion and flow models to improve image quality and input consistency for conditional generative tasks such as class-conditional and text-to-image generation. In particular, classifier-free guidance (CFG) is the most widely adopted guidance technique. It results, however, in trade-offs across quality, diversity and consistency: improving some at the expense of others. While recent work has shown that it is possible to disentangle these factors to some extent, such methods come with an overhead of requiring an additional (weaker) model, or require more forward passes per sampling step. In this paper, we propose Entropy Rectifying Guidance (ERG), a simple and effective guidance method based on inference-time changes in the attention mechanism of state-of-the-art diffusion transformer architectures, which allows for simultaneous improvements over image quality, diversity and prompt consistency. ERG is more general than CFG and similar guidance techniques, as it extends to unconditional sampling. 
We show that ERG results in significant improvements in various generation tasks such as text-to-image, class-conditional and unconditional image generation. We also show that ERG can be seamlessly combined with other recent guidance methods such as CADS and APG, further improving generations.", "arxiv_id": "2504.13987v1", "arxiv_authors": ["Tariq Berrada Ifriqi", "Adriana Romero-Soriano", "Michal Drozdzal", "Jakob Verbeek", "Karteek Alahari"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1eb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1406771, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a65a"}, "filepath": "data/2504.02826v4.png", "tags": [], "_media_type": "image", "_rand": 0.9995854511191368, "type": "Poster", "name": "Envisioning Beyond the Pixels: Benchmarking Reasoning-Informed Visual Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121633", "abstract": "Large Multi-modality Models (LMMs) have made significant progress in visual understanding and generation, but they still face challenges in General Visual Editing, particularly in following complex instructions, preserving appearance consistency, and supporting flexible input formats. To study this gap, we introduce \\textbf{RISEBench}, the first benchmark for evaluating **R**easoning-**I**nformed vi**S**ual **E**diting (**RISE**). RISEBench focuses on four key reasoning categories: *Temporal*, *Causal*, *Spatial*, and *Logical Reasoning*. We curate high-quality test cases for each category and propose a robust evaluation framework that assesses *Instruction Reasoning*, *Appearance Consistency*, and *Visual Plausibility* with both human judges and the LMM-as-a-judge approach. We conducted experiments evaluating eight prominent visual editing models, comprising both open-source and proprietary models. The evaluation results demonstrate that current models face significant challenges in reasoning-based editing tasks. Even the most powerful model evaluated, GPT-4o-Image, achieves an accuracy of merely 28.8\\%.
RISEBench effectively highlights the limitations of contemporary editing models, provides valuable insights, and indicates potential future directions for the field of reasoning-aware visual editing.", "arxiv_id": "2504.02826v4", "arxiv_authors": ["Xiangyu Zhao", "Peiyuan Zhang", "Kexian Tang", "Xiaorong Zhu", "Hao Li", "Wenhao Chai", "Zicheng Zhang", "Renqiu Xia", "Guangtao Zhai", "Junchi Yan", "Hua Yang", "Xue Yang", "Haodong Duan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ec"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3464042, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a65b"}, "filepath": "data/2506.05287v1.png", "tags": [], "_media_type": "image", "_rand": 0.999543644301539, "type": "Poster", "name": "EOC-Bench: Can MLLMs Identify, Recall, and Forecast Objects in an Egocentric World?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121637", "abstract": "The emergence of multimodal large language models (MLLMs) has driven breakthroughs in egocentric vision applications. These applications necessitate persistent, context-aware understanding of objects, as users interact with tools in dynamic and cluttered environments. However, existing embodied benchmarks primarily focus on static scene exploration, emphasizing objects' appearance and spatial attributes while neglecting the assessment of dynamic changes arising from users' interactions, as well as the capabilities in object-level spatiotemporal reasoning required for real-world interactions. To address this gap, we introduce EOC-Bench, an innovative benchmark designed to systematically evaluate object-centric embodied cognition in dynamic egocentric scenarios. Specifically, EOC-Bench features 3,277 meticulously annotated QA pairs categorized into three temporal categories: Past, Present, and Future, covering 11 fine-grained evaluation dimensions and 3 visual object referencing types. To ensure thorough assessment, we develop a mixed-format human-in-the-loop annotation framework. Based on EOC-Bench, we conduct comprehensive evaluations of various proprietary, open-source, and object-level MLLMs.
EOC-Bench serves as a crucial tool for advancing the embodied object cognitive capabilities of MLLMs, establishing a robust foundation for developing reliable core models for embodied systems.All data and evaluation codes will be made publicly available.", "arxiv_id": "2506.05287v1", "arxiv_authors": ["Yuqian Yuan", "Ronghao Dang", "Long Li", "Wentong Li", "Dian Jiao", "Xin Li", "Deli Zhao", "Fan Wang", "Wenqiao Zhang", "Jun Xiao", "Yueting Zhuang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ed"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061095, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a65c"}, "filepath": "data/2510.15963v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996562048857398, "type": "Poster", "name": "ESCA: Contextualizing Embodied Agents via Scene-Graph Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117064", "abstract": "Multi-modal large language models (MLLMs) are making rapid progress toward general-purpose embodied agents. However, current training pipelines primarily rely on high-level vision-sound-text pairs and lack fine-grained, structured alignment between pixel-level visual content and textual semantics. To overcome this challenge, we propose ESCA, a new framework for contextualizing embodied agents through structured spatial-temporal understanding. At its core is SGClip, a novel CLIP-based, open-domain, and promptable model for generating scene graphs. SGClip is trained on 87K+ open-domain videos via a neurosymbolic learning pipeline, which harnesses model-driven self-supervision from video-caption pairs and structured reasoning, thereby eliminating the need for human-labeled scene graph annotations. We demonstrate that SGClip supports both prompt-based inference and task-specific fine-tuning, excelling in scene graph generation and action localization benchmarks. ESCA with SGClip consistently improves both open-source and commercial MLLMs, achieving state-of-the-art performance across two embodied environments. 
Notably, it significantly reduces agent perception errors and enables open-source models to surpass proprietary baselines.", "arxiv_id": "2510.15963v1", "arxiv_authors": ["Jiani Huang", "Amish Sethi", "Matthew Kuo", "Mayank Keoliya", "Neelay Velingker", "JungHo Jung", "Ser-Nam Lim", "Ziyang Li", "Mayur Naik"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ee"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 953528, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a65d"}, "filepath": "data/2506.18322v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997272500503889, "type": "Poster", "name": "Escaping the SpuriVerse: Can Large Vision-Language Models Generalize Beyond Seen Spurious Correlations?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121529", "abstract": "Finetuning can cause spurious correlations to arise between non-essential features and the target labels, but benchmarks to study these effects involve contrived settings and narrow tasks. In contrast, we consider spurious correlations in multi-modal Large Vision Language Models (LVLMs) pretrained on extensive and diverse datasets without explicit task supervision. We develop a benchmark by sourcing GPT-4o errors on real-world visual-question-answering (VQA) benchmarks, then curating a subset through LVLM-human annotation and synthetic counterfactual evaluation to identify errors caused by spurious correlations. This process yields SpuriVerse, a novel benchmark comprised of 124 distinct types of spurious correlations extracted from real-world datasets, each containing 1 realistic and 10 synthetic VQA samples for a total of 1364 multiple choice questions. We evaluate 15 open and closed-source LVLMs on SpuriVerse, finding that even state-of-the-art closed-source models struggle significantly, achieving at best only 37.1\\% accuracy. Fine-tuning on synthetic examples that emphasize the spurious correlation improves performance to 78.40\\%, suggesting that training on diverse spurious patterns generalizes to unseen situations: models appear to learn to avoid \"shortcuts\" and attend to the overall image context.", "arxiv_id": "2506.18322v1", "arxiv_authors": ["Yiwei Yang", "Chung Peng Lee", "Shangbin Feng", "Dora Zhao", "Bingbing Wen", "Anthony Z. Liu", "Yulia Tsvetkov", "Bill Howe"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ef"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1024456, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a65e"}, "filepath": "data/2507.00981v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992379323809812, "type": "Poster", "name": "Evaluating Depth Estimation Robustness with Procedural Scene Perturbations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117944", "abstract": "Recent years have witnessed substantial progress on monocular depth estimation, particularly as measured by the success of large models on standard benchmarks. 
However, performance on standard benchmarks does not offer a complete assessment, because most evaluate accuracy but not robustness. In this work, we perform a thorough evaluation of the robustness of state-of-the-art monocular depth models. We use procedural generation to create 3D scenes which test robustness to various controlled perturbations, including object, camera, material and lighting changes. Our analysis yields interesting findings on what perturbations are challenging for state-of-the-art depth models, which we hope will inform further research.", "arxiv_id": "2507.00981v2", "arxiv_authors": ["Jack Nugent", "Siyang Wu", "Zeyu Ma", "Beining Han", "Meenal Parakh", "Abhishek Joshi", "Lingjie Mei", "Alexander Raistrick", "Xinyuan Li", "Jia Deng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1016896, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a65f"}, "filepath": "data/2505.13279v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991323744881278, "type": "Poster", "name": "Event-Driven Dynamic Scene Depth Completion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117094", "abstract": "Depth completion in dynamic scenes poses significant challenges due to rapid ego-motion and object motion, which can severely degrade the quality of input modalities such as RGB images and LiDAR measurements. Conventional RGB-D sensors often struggle to align precisely and capture reliable depth under such conditions. In contrast, event cameras, with their high temporal resolution and sensitivity to motion at the pixel level, provide complementary cues that are particularly beneficial in dynamic environments. To this end, we propose EventDC, the first event-driven depth completion framework. It consists of two key components: Event-Modulated Alignment (EMA) and Local Depth Filtering (LDF). Both modules adaptively learn the two fundamental components of convolution operations: offsets and weights conditioned on motion-sensitive event streams. In the encoder, EMA leverages events to modulate the sampling positions of RGB-D features to achieve pixel redistribution for improved alignment and fusion. In the decoder, LDF refines depth estimations around moving objects by learning motion-aware masks from events. Additionally, EventDC incorporates two loss terms to further benefit global alignment and enhance local depth recovery. Moreover, we establish the first benchmark for event-based depth completion, comprising one real-world and two synthetic datasets, to facilitate future research. Extensive experiments on this benchmark demonstrate the superiority of EventDC. 
Our code and dataset will be released on paper acceptance.", "arxiv_id": "2505.13279v2", "arxiv_authors": ["Zhiqiang Yan", "Jianhao Jiao", "Zhengxue Wang", "Gim Hee Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1089540, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a660"}, "filepath": "data/2506.06277v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997907098878127, "type": "Poster", "name": "ExAct: A Video-Language Benchmark for Expert Action Analysis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121524", "abstract": "We present ExAct, a new video-language benchmark for expert-level understanding of skilled physical human activities. Our new benchmark contains 3,521 expert-curated video question-answer pairs spanning 11 physical activities in 6 domains: Sports, Bike Repair, Cooking, Health, Music, and Dance. ExAct requires the correct answer to be selected from five carefully designed candidate options, thus necessitating a nuanced, fine-grained, expert-level understanding of physical human skills. Evaluating the recent state-of-the-art VLMs on ExAct reveals a substantial performance gap relative to human expert performance. Specifically, the best-performing GPT-4o model achieves only 44.70% accuracy, well below the 82.02% attained by trained human specialists/experts. We believe that our ExAct will be beneficial for developing and evaluating VLMs capable of precise understanding of human skills in various physical and procedural domains. We will release the dataset and evaluation code.", "arxiv_id": "2506.06277v1", "arxiv_authors": ["Han Yi", "Yulu Pan", "Feihong He", "Xinyu Liu", "Benjamin Zhang", "Oluwatumininu Oguntola", "Gedas Bertasius"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2311669, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a661"}, "filepath": "data/2508.05430v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992925812544962, "type": "Poster", "name": "Explaining Similarity in Vision-Language Encoders with Weighted Banzhaf Interactions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116010", "abstract": "Language-image pre-training (LIP) enables the development of vision-language models capable of zero-shot classification, localization, multimodal retrieval, and semantic understanding. Various explanation methods have been proposed to visualize the importance of input image-text pairs on the model's similarity outputs. However, popular saliency maps are limited by capturing only first-order attributions, overlooking the complex cross-modal interactions intrinsic to such encoders. We introduce faithful interaction explanations of LIP models (FIxLIP) as a unified approach to decomposing the similarity in vision-language encoders. 
FIxLIP is rooted in game theory, where we analyze how using the weighted Banzhaf interaction index offers greater flexibility and improves computational efficiency over the Shapley interaction quantification framework. From a practical perspective, we propose how to naturally extend explanation evaluation metrics, like the pointing game and area between the insertion/deletion curves, to second-order interaction explanations. Experiments on MS COCO and ImageNet-1k benchmarks validate that second-order methods like FIxLIP outperform first-order attribution methods. Beyond delivering high-quality explanations, we demonstrate the utility of FIxLIP in comparing different models like CLIP vs. SigLIP-2 and ViT-B/32 vs. ViT-L/16.", "arxiv_id": "2508.05430v1", "arxiv_authors": ["Hubert Baniecki", "Maximilian Muschalik", "Fabian Fumagalli", "Barbara Hammer", "Eyke H\u00fcllermeier", "Przemyslaw Biecek"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1081748, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a662"}, "filepath": "data/2409.16838v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996313962145107, "type": "Poster", "name": "Explicitly Modeling Subcortical Vision with a Neuro-Inspired Front-End Improves CNN Robustness", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117438", "abstract": "Convolutional neural networks (CNNs) trained on object recognition achieve high task performance but continue to exhibit vulnerability under a range of visual perturbations and out-of-domain images, when compared with biological vision. Prior work has demonstrated that coupling a standard CNN with a front-end block (VOneBlock) that mimics the primate primary visual cortex (V1) can improve overall model robustness. Expanding on this, we introduce Early Vision Networks (EVNets), a new class of hybrid CNNs that combine the VOneBlock with a novel SubcorticalBlock, whose architecture draws from computational models in neuroscience and is parameterized to maximize alignment with subcortical responses reported across multiple experimental studies. Without being optimized to do so, the assembly of the SubcorticalBlock with the VOneBlock improved V1 alignment across most standard V1 benchmarks, and better modeled extra-classical receptive field phenomena. In addition, EVNets exhibit stronger emergent shape bias and overperform the base CNN architecture by 8.5\\% on an aggregate benchmark of robustness evaluations, including adversarial perturbations, common corruptions, and domain shifts. Finally, we show that EVNets can be further improved when paired with a state-of-the-art data augmentation technique, surpassing the performance of the isolated data augmentation approach by 7.3\\% on our robustness benchmark. This result reveals complementary benefits between changes in architecture to better mimic biology and training-based machine learning approaches.", "arxiv_id": "2409.16838v2", "arxiv_authors": ["Lucas Piper", "Arlindo L. 
Oliveira", "Tiago Marques"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1062081, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a663"}, "filepath": "data/2510.11268v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996570065551901, "type": "Poster", "name": "Exploring and Leveraging Class Vectors for Classifier Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116452", "abstract": "Image classifiers play a critical role in detecting diseases in medical imaging and identifying anomalies in manufacturing processes. However, their predefined behaviors after extensive training make post hoc model editing difficult, especially when it comes to forgetting specific classes or adapting to distribution shifts. Existing classifier editing methods either focus narrowly on correcting errors or incur extensive retraining costs, creating a bottleneck for flexible editing. Moreover, such editing has seen limited investigation in image classification. To overcome these challenges, we introduce class vectors, which capture class-specific representation adjustments during fine-tuning. Whereas task vectors encode task-level changes in weight space, class vectors disentangle each class\u2019s adaptation in the latent space. We show that class vectors capture each class\u2019s semantic shift and that classifier editing can be achieved either by steering latent features along these vectors or by mapping them into weight space to update the decision boundaries. We also demonstrate that the inherent linearity and orthogonality of class vectors support efficient, flexible, and high-level concept editing via simple class arithmetic. Finally, we validate their utility in applications such as unlearning, environmental adaptation, adversarial defense, and adversarial trigger optimization.", "arxiv_id": "2510.11268v2", "arxiv_authors": ["Jaeik Kim", "Jaeyoung Do"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.487Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083243, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a664"}, "filepath": "data/2510.17299v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994132634554368, "type": "Poster", "name": "Exploring Performance Degradation in Dense Tasks for Self-supervised Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115902", "abstract": "In this work, we observe a counterintuitive phenomenon in self-supervised learning (SSL): longer training may impair the performance of dense downstream tasks (e.g., semantic segmentation). We refer to this phenomenon as Self-supervised Dense Degradation (SDD) and demonstrate its consistent presence across ten state-of-the-art SSL methods with various losses, architectures, and datasets. When the model performs suboptimally on dense tasks at the end of training, measuring the performance during training becomes essential. 
However, evaluating dense performance effectively without annotations remains an open challenge.To tackle this issue, we introduce a Dense representation Quality Estimator (DQE), composed of a class-relevance measure and an effective dimensionality measure. The proposed DQE is both theoretically grounded and empirically validated to be closely correlated with the downstream performance. Based on this metric, we introduce a straightforward yet effective model selection strategy and a DQE-based regularization method. Experiments on ten SSL methods across four benchmarks confirm that model selection improves mIoU by $4.0\\\\%$ on average with negligible computational cost. Additionally, DQE regularization consistently mitigates the effects of dense degradation. Code is provided in the supplementary material.", "arxiv_id": "2510.17299v1", "arxiv_authors": ["Siran Dai", "Qianqian Xu", "Peisong Wen", "Yang Liu", "Qingming Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1041892, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a665"}, "filepath": "data/2505.15660v3.png", "tags": [], "_media_type": "image", "_rand": 0.999525542589482, "type": "Poster", "name": "Exploring the Limits of Vision-Language-Action Manipulations in Cross-task Generalization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116674", "abstract": "The generalization capabilities of vision-language-action (VLA) models to unseen tasks are crucial to achieving general-purpose robotic manipulation in open-world settings.However, the cross-task generalization capabilities of existing VLA models remain significantly underexplored.To address this gap, we introduce **AGNOSTOS**, a novel simulation benchmark designed to rigorously evaluate cross-task zero-shot generalization in manipulation. AGNOSTOS comprises 23 unseen manipulation tasks for test\u2014distinct from common training task distributions\u2014and incorporates two levels of generalization difficulty to assess robustness. Our systematic evaluation reveals that current VLA models, despite being trained on diverse datasets, struggle to generalize effectively to these unseen tasks. To overcome this limitation, we propose **Cross-Task In-Context Manipulation (X-ICM)**, a method that conditions large language models (LLMs) on in-context demonstrations from seen tasks to predict action sequences for unseen tasks.Additionally, we introduce a **dynamics-guided sample selection** strategy that identifies relevant demonstrations by capturing cross-task dynamics. 
On AGNOSTOS, X-ICM significantly improves cross-task zero-shot generalization performance over leading VLAs, achieving improvements of 6.0\\% over $\\pi_0$ and 7.9\\% over VoxPoser.We believe AGNOSTOS and X-ICM will serve as valuable tools for advancing general-purpose robotic manipulation.", "arxiv_id": "2505.15660v3", "arxiv_authors": ["Jiaming Zhou", "Ke Ye", "Jiayi Liu", "Teli Ma", "Zifan Wang", "Ronghe Qiu", "Kun-Yu Lin", "Zhilin Zhao", "Junwei Liang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f7"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1347030, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a666"}, "filepath": "data/2505.16985v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996583535970431, "type": "Poster", "name": "Extremely Simple Multimodal Outlier Synthesis for Out-of-Distribution Detection and Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116022", "abstract": "Out-of-distribution (OOD) detection and segmentation are crucial for deploying machine learning models in safety-critical applications such as autonomous driving and robot-assisted surgery. While prior research has primarily focused on unimodal image data, real-world applications are inherently multimodal, requiring the integration of multiple modalities for improved OOD detection. A key challenge is the lack of supervision signals from unknown data, leading to overconfident predictions on OOD samples. To address this challenge, we propose Feature Mixing, an extremely simple and fast method for synthesizing multimodal outliers with theoretical support, which can be further optimized to help the model better distinguish between in-distribution (ID) and OOD data. Feature Mixing is modality-agnostic and applicable to various modality combinations. Additionally, we introduce CARLA-OOD, a new multimodal dataset for OOD segmentation, featuring synthetic OOD objects across diverse scenes and weather conditions. Extensive experiments on SemanticKITTI, nuScenes, CARLA-OOD datasets, and the MultiOOD benchmark demonstrate that Feature Mixing achieves state-of-the-art performance with a $10 \\times$ to $370 \\times$ speedup. 
Our source code and dataset will be publicly available.", "arxiv_id": "2505.16985v1", "arxiv_authors": ["Moru Liu", "Hao Dong", "Jessica Kelly", "Olga Fink", "Mario Trapp"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1081475, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a667"}, "filepath": "data/2510.14560v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995984713445857, "type": "Poster", "name": "Eyes Wide Open: Ego Proactive Video-LLM for Streaming Video", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120140", "abstract": "Envision an AI capable of functioning in human-like settings, moving beyond mere observation to actively understand, anticipate, and proactively respond to unfolding events. Towards this vision, we focus on the innovative task where, given ego-streaming video input, an assistant proactively answers diverse, evolving questions at the opportune moment, while maintaining synchronized perception and reasoning. This task embodies three key properties: (1) Proactive Coherence, (2) Just-in-Time Responsiveness, and (3) Synchronized Efficiency.To evaluate and address these properties, we first introduce ESTP-Bench (Ego Streaming Proactive Benchmark) alongside the ESTP-F1 metric\u2014a novel framework designed for their rigorous assessment. Secondly, we propose a comprehensive technical pipeline to enable models to tackle this challenging task. This pipeline comprises: (1) a data engine, (2) a multi-stage training strategy, and (3) a proactive dynamic compression technique. Our proposed model effectively addresses these critical properties while achieving state-of-the-art (SOTA) performance on the standard COIN benchmark.", "arxiv_id": "2510.14560v1", "arxiv_authors": ["Yulin Zhang", "Cheng Shi", "Yang Wang", "Sibei Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1f9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1098489, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a668"}, "filepath": "data/2510.11675v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999754405647294, "type": "Poster", "name": "FACE: Faithful Automatic Concept Extraction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115653", "abstract": "Interpreting deep neural networks through concept-based explanations offers a bridge between low-level features and high-level human-understandable semantics. However, existing automatic concept discovery methods often fail to align these extracted concepts with the model\u2019s true decision-making process, thereby compromising explanation faithfulness. In this work, we propose FACE (Faithful Automatic Concept Extraction), a novel framework that combines Non-negative Matrix Factorization (NMF) with a Kullback-Leibler (KL) divergence regularization term to ensure alignment between the model\u2019s original and concept-based predictions. 
Unlike prior methods that operate solely on encoder activations, FACE incorporates classifier supervision during concept learning, enforcing predictive consistency and enabling faithful explanations. We provide theoretical guarantees showing that minimizing the KL divergence bounds the deviation in predictive distributions, thereby promoting faithful local linearity in the learned concept space. Systematic evaluations on ImageNet, COCO, and CelebA datasets demonstrate that FACE outperforms existing methods across faithfulness and sparsity metrics.", "arxiv_id": "2510.11675v1", "arxiv_authors": ["Dipkamal Bhusal", "Michael Clifford", "Sara Rampazzi", "Nidhi Rastogi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1fa"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1000904, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a669"}, "filepath": "data/2501.01243v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996398946659568, "type": "Poster", "name": "Face-Human-Bench: A Comprehensive Benchmark of Face and Human Understanding for Multi-modal Assistants", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121722", "abstract": "Faces and humans are crucial elements in social interaction and are widely included in everyday photos and videos. Therefore, a deep understanding of faces and humans will enable multi-modal assistants to achieve improved response quality and broadened application scope. Currently, the multi-modal assistant community lacks a comprehensive and scientific evaluation of face and human understanding abilities. In this paper, we first propose a hierarchical ability taxonomy that includes three levels of abilities. Then, based on this taxonomy, we collect images and annotations from publicly available datasets in the face and human community and build a semi-automatic data pipeline to produce problems for the new benchmark. Finally, the obtained Face-Human-Bench includes a development set and a test set, each with 1800 problems, supporting both English and Chinese. We conduct evaluations over 25 mainstream multi-modal large language models (MLLMs) with our Face-Human-Bench, focusing on the correlation between abilities, the impact of the relative position of targets on performance, and the impact of Chain of Thought (CoT) prompting on performance. 
We also explore which abilities of MLLMs need to be supplemented by specialist models.", "arxiv_id": "2501.01243v3", "arxiv_authors": ["Lixiong Qin", "Shilong Ou", "Miaoxuan Zhang", "Jiangning Wei", "Yuhang Zhang", "Xiaoshuai Song", "Yuchen Liu", "Mei Wang", "Weiran Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1fb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 997785, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a66a"}, "filepath": "data/2510.10292v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994385127134415, "type": "Poster", "name": "FactoredScenes: Real-World Scene Generation via Library Learning of Layout and Pose Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119086", "abstract": "Real-world scenes, such as those in ScanNet, are difficult to capture, with highly limited data available. Generating realistic scenes with varied object poses remains an open and challenging task. In this work, we propose FactoredScenes, a framework that synthesizes realistic 3D scenes, by leveraging the underlying structure of rooms, while learning the variation of object poses from lived-in scenes. We propose a factored room representation that decomposes scenes into hierarchically organized concepts of programs and object poses. To encode structure, FactoredScenes learns a library of functions capturing reusable layout patterns from which scenes are drawn, then uses large language models to generate high-level programs, regularized by the learned library. To represent scene variations, FactoredScenes learns a program-conditioned model to hierarchically predict object poses, and retrieves and places 3D objects in a scene. We show that FactoredScenes generates realistic, real-world rooms that are difficult to distinguish from real ScanNet scenes.", "arxiv_id": "2510.10292v1", "arxiv_authors": ["Joy Hsu", "Emily Jin", "Jiajun Wu", "Niloy J. Mitra"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1fc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1675198, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a66b"}, "filepath": "data/2505.16836v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994254565389397, "type": "Poster", "name": "Fact-R1: Towards Explainable Video Misinformation Detection with Deep Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119113", "abstract": "The rapid spread of multimodal misinformation on social media has raised growing concerns, while research on video misinformation detection remains limited due to the lack of large-scale, diverse datasets. Existing methods often overfit to rigid templates and lack deep reasoning over deceptive content. To address these challenges, we introduce FakeVV, a large-scale benchmark comprising over 100,000 video-text pairs with fine-grained, interpretable annotations. 
In addition, we further propose Fact-R1, a novel framework that integrates deep reasoning with collaborative rule-based reinforcement learning. Fact-R1 is trained through a three-stage process: (1) misinformation long-Chain-of-Thought (CoT) instruction tuning, (2) preference alignment via Direct Preference Optimization (DPO), and (3) Group Relative Policy Optimization (GRPO) using a novel verifiable reward function. This enables Fact-R1 to exhibit emergent reasoning behaviors comparable to those observed in advanced text-based reinforcement learning systems, but in the more complex multimodal misinformation setting. Our work establishes a new paradigm for misinformation detection, bridging large-scale video understanding, reasoning-guided alignment, and interpretable verification.", "arxiv_id": "2505.16836v3", "arxiv_authors": ["Fanrui Zhang", "Dian Li", "Qiang Zhang", "Jun Chen", "Gang Liu", "Junxiong Lin", "Jiahong Yan", "Jiawei Liu", "Zheng-Jun Zha"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1fd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1067453, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a66c"}, "filepath": "data/2506.24125v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994061635895977, "type": "Poster", "name": "FADRM: Fast and Accurate Data Residual Matching for Dataset Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117777", "abstract": "Residual connection has been extensively studied and widely applied at the model architecture level. However, its potential in the more challenging data-centric approaches remains unexplored. In this work, we introduce the concept of ***Data Residual Matching*** for the first time, leveraging data-level skip connections to facilitate data generation and mitigate data information vanishing. This approach maintains a balance between newly acquired knowledge through pixel space optimization and existing core local information identification within raw data modalities, specifically for the dataset distillation task. Furthermore, by incorporating optimization-level refinements, our method significantly improves computational efficiency, achieving superior performance while reducing training time and peak GPU memory usage by 50\\%. Consequently, the proposed method **F**ast and **A**ccurate **D**ata **R**esidual **M**atching for Dataset Distillation (**FADRM**) establishes a new state-of-the-art, demonstrating substantial improvements over existing methods across multiple dataset benchmarks in both efficiency and effectiveness. 
For instance, with ResNet-18 as the student model and a 0.8\\% compression ratio on ImageNet-1K, the method achieves 47.7\\% test accuracy in single-model dataset distillation and 50.0\\% in multi-model dataset distillation, surpassing RDED by +5.7\\% and outperforming state-of-the-art multi-model approaches, EDC and CV-DD, by +1.4\\% and +4.0\\%.", "arxiv_id": "2506.24125v1", "arxiv_authors": ["Jiacheng Cui", "Xinyue Bi", "Yaxin Luo", "Xiaohan Zhao", "Jiacheng Liu", "Zhiqiang Shen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1fe"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1267144, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a66d"}, "filepath": "data/2510.09459v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990015755037915, "type": "Poster", "name": "Failure Prediction at Runtime without Failure Data for Generative Robot Policies", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119369", "abstract": "Recent advances in imitation learning (IL) with generative models, such as diffusion and flow matching, have significantly improved the capabilities of robots to perform complex, long-horizon tasks. However, distribution shifts from unseen environments or compounding action errors can still cause unpredictable and unsafe behavior, leading to task failure. Therefore, being able to predict failures of generative IL policies as early as possible during runtime is crucial for deploying robots in human-centered and safety-critical environments. The two main existing approaches for recognizing failures both have major drawbacks. Methods relying solely on out-of-distribution (OOD) detection are prone to raising false alarms as policies might generalize. Conversely, external monitoring of the robot\u2019s interactions with its environment provides no foresight about future behavior, meaning that failures can only be detected late or in retrospect. To close the existing gap, we propose FIPER, a framework for Failure Prediction at Runtime for generative IL policies without relying on failure examples. FIPER identifies two key indicators of policy failures: 1) consecutive OOD observations and 2) persistently high uncertainty (entropy) in generated actions. We calibrate both observation- and action-based failure prediction scores on a few successful rollouts and use conformal prediction to provide statistical performance guarantees. We evaluate our framework in five simulation and real-world environments where various types of failures can occur. Our results show that FIPER better distinguishes true failures from OOD situations and predicts failures earlier and more accurately than existing methods. We thus consider FIPER an important step towards more interpretable and safer generative robot policies. Our code and data are available at this link.", "arxiv_id": "2510.09459v2", "arxiv_authors": ["Ralf R\u00f6mer", "Adrian Kobras", "Luca Worbis", "Angela P. 
Schoellig"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a1ff"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1021706, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a66e"}, "filepath": "data/2411.19623v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990160548922536, "type": "Poster", "name": "FairDD: Fair Dataset Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118865", "abstract": "Condensing large datasets into smaller synthetic counterparts has demonstrated its promise for image classification. However, previous research has overlooked a crucial concern in image recognition: ensuring that models trained on condensed datasets are unbiased towards protected attributes (PA), such as gender and race. Our investigation reveals that dataset distillation (DD) fails to alleviate the unfairness towards minority groups within original datasets. Moreover, this bias typically worsens in the condensed datasets due to their smaller size. To bridge the research gap, we propose a novel fair dataset distillation (FDD) framework, namely FairDD, which can be seamlessly applied to diverse matching-based DD approaches, requiring no modifications to their original architectures. The key innovation of FairDD lies in synchronously matching synthetic datasets to PA-wise groups of original datasets, rather than indiscriminate alignment to the whole distributions in vanilla DDs, dominated by majority groups. This synchronized matching allows synthetic datasets to avoid collapsing into majority groups and bootstrap their balanced generation to all PA groups. Consequently, FairDD could effectively regularize vanilla DDs to favor biased generation toward minority groups while maintaining the accuracy of target attributes. Theoretical analyses and extensive experimental evaluations demonstrate that FairDD significantly improves fairness compared to vanilla DD methods, with a promising trade-off between fairness and accuracy. Its consistent superiority across diverse DDs, spanning Distribution and Gradient Matching, establishes it as a versatile FDD approach.", "arxiv_id": "2411.19623v2", "arxiv_authors": ["Qihang Zhou", "Shenhao Fang", "Shibo He", "Wenchao Meng", "Jiming Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a200"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083893, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a66f"}, "filepath": "data/2410.18804v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990103089289765, "type": "Poster", "name": "Fast constrained sampling in pre-trained diffusion models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120010", "abstract": "Large denoising diffusion models, such as Stable Diffusion, have been trained on billions of image-caption pairs to perform text-conditioned image generation. 
As a byproduct of this training, these models have acquired general knowledge about image statistics, which can be useful for other inference tasks. However, when confronted with sampling an image under new constraints, e.g. generating the missing parts of an image, using large pre-trained text-to-image diffusion models is inefficient and often unreliable. Previous approaches either utilized backpropagation through the denoiser network, making them significantly slower and more memory-demanding than simple text-to-image generation, or only enforced the constraint locally, failing to capture critical long-range correlations in the sampled image. In this work, we propose an algorithm that enables fast, high-quality generation under arbitrary constraints. We show that in denoising diffusion models, we can employ an approximation to Newton's optimization method that allows us to speed up inference and avoid the expensive backpropagation operations. Our approach produces results that rival or surpass the state-of-the-art training-free inference methods while requiring a fraction of the time. We demonstrate the effectiveness of our algorithm under both linear (inpainting, super-resolution) and non-linear (style-guided generation) constraints. An implementation is provided in the supplementary code.", "arxiv_id": "2410.18804v3", "arxiv_authors": ["Alexandros Graikos", "Nebojsa Jojic", "Dimitris Samaras"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a201"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.488Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 959264, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a670"}, "filepath": "data/2406.09408v3.png", "tags": [], "_media_type": "image", "_rand": 0.9998614947237373, "type": "Poster", "name": "Fast Data Attribution for Text-to-Image Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120006", "abstract": "Data attribution for text-to-image models aims to identify the training images that most significantly influenced a generated output. Existing attribution methods involve considerable computational resources for each query, making them impractical for real-world applications. We propose a novel approach for scalable and efficient data attribution. Our key idea is to distill a slow, unlearning-based attribution method to a feature embedding space for efficient retrieval of highly influential training images. During deployment, combined with efficient indexing and search methods, our method successfully finds highly influential images without running expensive attribution algorithms. We show extensive results on both medium-scale models trained on MSCOCO and large-scale Stable Diffusion models trained on LAION, demonstrating that our method can achieve better or competitive performance in a few seconds, faster than existing methods by 2,500x - 400,000x. Our work represents a meaningful step towards the large-scale application of data attribution methods on real-world models such as Stable Diffusion. Our code, models, and datasets will be made publicly available.", "arxiv_id": "2406.09408v3", "arxiv_authors": ["Sheng-Yu Wang", "Aaron Hertzmann", "Alexei A. 
Efros", "Jun-Yan Zhu", "Richard Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a202"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083092, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a671"}, "filepath": "data/2507.03779v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991404184387019, "type": "Poster", "name": "FastDINOv2: Frequency Based Curriculum Learning Improves Robustness and Training Speed", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115334", "abstract": "Large-scale vision foundation models such as DINOv2 boast impressive performances by leveraging massive architectures and training datasets. The expense of large-scale pre-training puts such research out of reach for many, hence limiting scientific advancements. We thus propose a novel pretraining strategy for DINOv2 that simultaneously accelerates convergence\u2013and strengthens robustness to common corruptions as a by-product. Our approach involves a frequency filtering curriculum\u2013low-frequency being seen first\u2013and the Gaussian noise patching augmentation. Applied to a ViT-B/16 backbone trained on ImageNet-1K, while pre-training time is reduced by 1.6\u00d7\u2013from 16.64 to 10.32 l40s days\u2013and FLOPs by 2.25\u00d7, our method still achieves matching robustness in corruption benchmarks (ImageNet-C) and maintains competitive linear probing performance compared with the DINOv2 baseline. This dual benefit of efficiency and robustness makes large-scale self-supervised foundation modeling more attainable, while opening the door to novel exploration around data curriculum and augmentation as a means to improve self-supervised learning models robustness.", "arxiv_id": "2507.03779v1", "arxiv_authors": ["Jiaqi Zhang", "Juntuo Wang", "Zhixin Sun", "John Zou", "Randall Balestriero"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a203"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1293070, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a672"}, "filepath": "data/2505.13389v4.png", "tags": [], "_media_type": "image", "_rand": 0.9998980102555646, "type": "Poster", "name": "Faster Video Diffusion with Trainable Sparse Attention", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117649", "abstract": "Scaling video diffusion transformers (DiTs) is limited by their quadratic 3D attention, even though most of the attention mass concentrates on a small subset of positions. We turn this observation into ViSA, a trainable, hardware-efficient sparse attention that replaces full attention at \\emph{both} training and inference. In ViSA, a lightweight coarse stage pools tokens into tiles and identifies high-weight \\emph{critical tokens}; a fine stage computes token-level attention only inside those tiles, both subject to block computing layout to ensure hard efficiency. 
This leads to a single differentiable kernel that trains end-to-end, requires no post-hoc profiling, and sustains 85\\% of FlashAttention3 MFU. We perform a large sweep of ablation studies and scaling-law experiments by pretraining DiTs from 60M to 1.4B parameters. ViSA reaches a Pareto point that cuts training FLOPS by 2.53$\\times$ with no drop in diffusion loss. Retrofitting the open-source Wan-2.1 model speeds up attention time by 6$\\times$ and lowers end-to-end generation time from 31s to 18s with comparable quality.", "arxiv_id": "2505.13389v4", "arxiv_authors": ["Peiyuan Zhang", "Yongqi Chen", "Haofeng Huang", "Will Lin", "Zhengzhong Liu", "Ion Stoica", "Eric Xing", "Hao Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a204"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1095947, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a673"}, "filepath": "data/2509.20295v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993831422031784, "type": "Poster", "name": "FAST: Foreground\u2011aware Diffusion with Accelerated Sampling Trajectory for Segmentation\u2011oriented Anomaly Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115836", "abstract": "Industrial anomaly segmentation relies heavily on pixel-level annotations, yet real-world anomalies are often scarce, diverse, and costly to label. Segmentation-oriented industrial anomaly synthesis (SIAS) has emerged as a promising alternative; however, existing methods struggle to balance sampling efficiency and generation quality. Moreover, most approaches treat all spatial regions uniformly, overlooking the distinct statistical differences between anomaly and background areas. This uniform treatment hinders the synthesis of controllable, structure-specific anomalies tailored for segmentation tasks. In this paper, we propose FAST, a foreground-aware diffusion framework featuring two novel modules: the Anomaly-Informed Accelerated Sampling (AIAS) and the Foreground-Aware Reconstruction Module (FARM). AIAS is a training-free sampling algorithm specifically designed for segmentation-oriented industrial anomaly synthesis, which accelerates the reverse process through coarse-to-fine aggregation and enables the synthesis of state-of-the-art segmentation-oriented anomalies in as few as 10 steps. Meanwhile, FARM adaptively adjusts the anomaly-aware noise within the masked foreground regions at each sampling step, preserving localized anomaly signals throughout the denoising trajectory. Extensive experiments on multiple industrial benchmarks demonstrate that FAST consistently outperforms existing anomaly synthesis methods in downstream segmentation tasks. 
We release the code in: https://anonymous.4open.science/r/NeurIPS-938.", "arxiv_id": "2509.20295v2", "arxiv_authors": ["Xichen Xu", "Yanshu Wang", "Jinbao Wang", "Xiaoning Lei", "Guoyang Xie", "Guannan Jiang", "Zhichao Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a205"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1175050, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a674"}, "filepath": "data/2506.01953v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991518570814184, "type": "Poster", "name": "Fast-in-Slow: A Dual-System Foundation Model Unifying Fast Manipulation within Slow Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119931", "abstract": "Generalized policy and execution efficiency constitute the two critical challenges in robotic manipulation. While recent foundation policies benefit from the common-sense reasoning capabilities of internet-scale pretrained vision-language models (VLMs), they often suffer from low execution frequency. To mitigate this dilemma, dual-system approaches, inspired by Kahneman\u2019s theory, have been proposed to leverage a VLM-based System 2 model handling high-level reasoning and a separate System 1 action model ensuring real-time control. However, existing designs maintain both systems as separate models, limiting System 1 from fully leveraging the rich pretrained knowledge from the VLM-based System 2. In this work, we propose Fast-in-Slow (FiS), a unified dual-system vision-language-action (VLA) model that embeds the System 1 execution module within the VLM-based System 2 by partially sharing parameters. This innovative paradigm not only enables high-frequency execution in System 1, but also facilitates coordination between the reasoning and execution components within a single foundation model of System 2. Given their fundamentally distinct roles within FiS-VLA, we design the two systems to incorporate heterogeneous modality inputs alongside asynchronous operating frequencies, enabling both fast and precise manipulation. To enable coordination between the two systems, a dual-aware co-training strategy is proposed that equips System 1 with action generation capabilities while preserving System 2\u2019s contextual reasoning representation. 
For evaluation, FiS-VLA outperforms previous state-of-the-art methods by 8% in simulation and 11% in real-world tasks in terms of average success rate, while achieving a 21.9 Hz control frequency without action chunking mechanism.", "arxiv_id": "2506.01953v1", "arxiv_authors": ["Hao Chen", "Jiaming Liu", "Chenyang Gu", "Zhuoyang Liu", "Renrui Zhang", "Xiaoqi Li", "Xiao He", "Yandong Guo", "Chi-Wing Fu", "Shanghang Zhang", "Pheng-Ann Heng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a206"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1063126, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a675"}, "filepath": "data/2510.22842v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997354869888037, "type": "Poster", "name": "FastJAM: a Fast Joint Alignment Model for Images", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118662", "abstract": "Joint Alignment (JA) of images aims to align a collection of images into a unified coordinate frame, such that semantically similar features appear at corresponding spatial locations. Most existing approaches often require extensive training times, large-capacity models,and extensive hyperparameter tuning. We introduce FastJAM, a rapid, graph-based method that drastically reduces the computational complexity of joint alignment tasks. FastJAM leverages pairwise matches computed by off-the-shelf image matchers to construct a graph representing intra- and inter-image keypoint relations. A graph neural network propagates and aggregates these correspondences, efficiently predicting per-image homography parameters via image-level pooling. Utilizing an inverse-compositional warping strategy, FastJAM performs image JA quickly and effectively. Experimental results on several benchmarks demonstrate that FastJAM achieves results better than existing modern JA methods in terms of alignment quality, while reducing computation time from hours or minutes to mere seconds. Our code will be made public upon acceptance.", "arxiv_id": "2510.22842v1", "arxiv_authors": ["Omri Hirsch", "Ron Shapira Weber", "Shira Ifergane", "Oren Freifeld"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a207"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1031672, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a676"}, "filepath": "data/2411.13022v3.png", "tags": [], "_media_type": "image", "_rand": 0.9998951327726611, "type": "Poster", "name": "Fast MRI for All: Bridging Equity Gaps via Training without Raw Data Access", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115476", "abstract": "Physics-driven deep learning (PD-DL) approaches have become popular for improved reconstruction of fast magnetic resonance imaging (MRI) scans. Though PD-DL offers higher acceleration rates than existing clinical fast MRI techniques, their use has been limited outside specialized MRI centers. 
A key challenge is generalization to underrepresented pathologies or populations, noted in multiple studies, with fine-tuning on target populations suggested for improvement. However, current approaches for PD-DL training require access to raw k-space measurements, which is typically only available at specialized MRI centers that have research agreements for such data access. This is especially an issue for rural and underserved areas, where commercial MRI scanners only provide access to a final reconstructed image. To tackle these challenges, we propose Compressibility-inspired Unsupervised Learning via Parallel Imaging Fidelity (CUPID) for high-quality PD-DL training using only routine clinical reconstructed images exported from an MRI scanner. CUPID evaluates output quality with a compressibility-based approach while ensuring that the output stays consistent with the clinical parallel imaging reconstruction through well-designed perturbations. Our results show CUPID achieves similar quality to established PD-DL training that requires k-space data while outperforming compressed sensing (CS) and diffusion-based generative methods. We further demonstrate its effectiveness in a zero-shot training setup for retrospectively and prospectively sub-sampled acquisitions, attesting to its minimal training burden. As an approach that radically deviates from existing strategies, CUPID presents an opportunity to provide equitable access to fast MRI for underserved populations in an attempt to reduce the inequalities associated with this expensive imaging modality.", "arxiv_id": "2411.13022v3", "arxiv_authors": ["Ya\u015far Utku Al\u00e7alar", "Merve G\u00fclle", "Mehmet Ak\u00e7akaya"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a208"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070646, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a677"}, "filepath": "data/2503.11187v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992373029662444, "type": "Poster", "name": "FastVID: Dynamic Density Pruning for Fast Video Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120084", "abstract": "Video Large Language Models have demonstrated strong video understanding capabilities, yet their practical deployment is hindered by substantial inference costs caused by redundant video tokens. Existing pruning techniques fail to fully exploit the spatiotemporal redundancy inherent in video data. To bridge this gap, we perform a systematic analysis of video redundancy from two perspectives: temporal context and visual context. Leveraging these insights, we propose Dynamic Density Pruning for Fast Video LLMs termed FastVID.Specifically, FastVID dynamically partitions videos into temporally ordered segments to preserve temporal structure and applies a density-based token pruning strategy to maintain essential visual information.Our method significantly reduces computational overhead while maintaining temporal and visual integrity. 
Extensive evaluations show that FastVID achieves state-of-the-art performance across various short- and long-video benchmarks on leading Video LLMs, including LLaVA-OneVision and LLaVA-Video.Notably, on LLaVA-OneVision-7B, FastVID effectively prunes $\\textbf{90.3}$\\% of video tokens, reduces FLOPs to $\\textbf{8.3}$\\%, and accelerates the prefilling stage by $\\textbf{7.1}\\times$, while maintaining $\\textbf{98.0}$\\% of the original accuracy. Our code will be publicly released.", "arxiv_id": "2503.11187v2", "arxiv_authors": ["Leqi Shen", "Guoqiang Gong", "Tao He", "Yifeng Zhang", "Pengzhang Liu", "Sicheng Zhao", "Guiguang Ding"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a209"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1114335, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a678"}, "filepath": "data/2503.14935v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999881351427541, "type": "Poster", "name": "FAVOR-Bench: A Comprehensive Benchmark for Fine-Grained Video Motion Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121776", "abstract": "Multimodal Large Language Models (MLLMs) have shown impressive video content understanding capabilities but struggle with fine-grained motion comprehension. To comprehensively assess the motion understanding ability of existing MLLMs, we introduce FAVOR-Bench, which comprises 1,776 videos from both ego-centric and third-person perspectives and enables assessment through both close-ended and open-ended tasks. For close-ended evaluation, we carefully design 8,184 multiple-choice question-answer pairs spanning six distinct sub-tasks. For open-ended evaluation, we employ the GPT-assisted evaluation and develop a novel cost-efficient LLM-free assessment method, where the latter can enhance benchmarking interpretability and accessibility. Comprehensive experiments with21 state-of-the-art MLLMs reveal significant limitations in their ability to comprehend and describe detailed temporal dynamics in video motions. To alleviate this limitation, we further build FAVOR-Train, a dataset of 17,152 videos with fine-grained motion annotations. Finetuning Qwen2.5-VL on FAVOR-Train yields consistent improvements on motion-related tasks across TVBench, MotionBenchand our FAVOR-Bench. 
Our assessment results demonstrate that the proposed FAVOR-Bench and FAVOR-Train provide valuable tools for the community to develop more powerful video understanding models.", "arxiv_id": "2503.14935v1", "arxiv_authors": ["Chongjun Tu", "Lin Zhang", "Pengtao Chen", "Peng Ye", "Xianfang Zeng", "Wei Cheng", "Gang Yu", "Tao Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a20a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2530691, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a679"}, "filepath": "data/2506.06085v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993918266811961, "type": "Poster", "name": "Feedback Guidance of Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119581", "abstract": "While Classifier-Free Guidance (CFG) has become standard for improving sample fidelity in conditional diffusion models, it can harm diversity and induce memorization by applying constant guidance regardless of whether a particular sample needs correction. We propose **F**eed**B**ack **G**uidance (FBG), which uses a state-dependent coefficient to self-regulate guidance amounts based on need. Our approach is derived from first principles by assuming the learned conditional distribution is linearly corrupted by the unconditional distribution, contrasting with CFG's implicit multiplicative assumption. Our scheme relies on feedback of its own predictions about the conditional signal informativeness to adapt guidance dynamically during inference, challenging the view of guidance as a fixed hyperparameter. The approach is benchmarked on ImageNet512x512, where it significantly outperforms Classifier-Free Guidance and is competitive to Limited Interval Guidance (LIG) while benefitting from a strong mathematical framework. On Text-To-Image generation, we demonstrate that, as anticipated, our approach automatically applies higher guidance scales for complex prompts than for simpler ones and that it can be easily combined with existing guidance schemes such as CFG or LIG.", "arxiv_id": "2506.06085v2", "arxiv_authors": ["Felix Koulischer", "Florian Handke", "Johannes Deleu", "Thomas Demeester", "Luca Ambrogioni"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a20b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1065056, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a67a"}, "filepath": "data/2412.03526v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994859689464187, "type": "Poster", "name": "Feed-Forward Bullet-Time Reconstruction of Dynamic Scenes from Monocular Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116056", "abstract": "Recent advancements in static feed-forward scene reconstruction have demonstrated significant progress in high-quality novel view synthesis. However, these models often struggle with generalizability across diverse environments and fail to effectively handle dynamic content. 
We present BTimer (short for Bullet Timer), the first motion-aware feed-forward model for real-time reconstruction and novel view synthesis of dynamic scenes. Our approach reconstructs the full scene in a 3D Gaussian Splatting representation at a given target (\u2018bullet\u2019) timestamp by aggregating information from all the context frames. Such a formulation allows BTimer to gain scalability and generalization by leveraging both static and dynamic scene datasets. Given a casual monocular dynamic video, BTimer reconstructs a bullet-time scene within 150ms while reaching state-of-the-art performance on both static and dynamic scene datasets, even compared with optimization-based approaches.", "arxiv_id": "2412.03526v3", "arxiv_authors": ["Hanxue Liang", "Jiawei Ren", "Ashkan Mirzaei", "Antonio Torralba", "Ziwei Liu", "Igor Gilitschenski", "Sanja Fidler", "Cengiz Oztireli", "Huan Ling", "Zan Gojcic", "Jiahui Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a20c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1124849, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a67b"}, "filepath": "data/2509.20890v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994830465969705, "type": "Poster", "name": "FerretNet: Efficient Synthetic Image Detection via Local Pixel Dependencies", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118720", "abstract": "The increasing realism of synthetic images generated by advanced models such as VAEs, GANs, and LDMs poses significant challenges for synthetic image detection.To address this issue, we explore two artifact types introduced during the generation process: (1) latent distribution deviations and (2) decoding-induced smoothing effects, which manifest as inconsistencies in local textures, edges, and color transitions.Leveraging local pixel dependencies (LPD) properties rooted in Markov Random Fields, we reconstruct synthetic images using neighboring pixel information to expose disruptions in texture continuity and edge coherence.Building upon LPD, we propose FerretNet, a lightweight neural network with only 1.1M parameters that delivers efficient and robust synthetic image detection.Extensive experiments demonstrate that FerretNet\u2014trained exclusively on the 4-class ProGAN dataset\u2014achieves an average accuracy of 97.1% on an open-world benchmark comprising over 20 generative models, surpassing state-of-the-art methods by 10.6%. 
All code and datasets will be publicly released.", "arxiv_id": "2509.20890v2", "arxiv_authors": ["Shuqiao Liang", "Jian Liu", "Renzhang Chen", "Quanlong Guan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a20d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1088384, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a67c"}, "filepath": "data/2505.17982v4.png", "tags": [], "_media_type": "image", "_rand": 0.9990554260772845, "type": "Poster", "name": "Few-Shot Learning from Gigapixel Images via Hierarchical Vision-Language Alignment and Modeling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117369", "abstract": "Vision-language models (VLMs) have recently been integrated into multiple instance learning (MIL) frameworks to address the challenge of few-shot, weakly supervised classification of whole slide images (WSIs). A key trend involves leveraging multi-scale information to better represent hierarchical tissue structures. However, existing methods often face two key limitations: (1) insufficient modeling of interactions within the same modalities across scales (e.g., 5x and 20x) and (2) inadequate alignment between visual and textual modalities on the same scale. To address these gaps, we propose HiVE-MIL, a hierarchical vision-language framework that constructs a unified graph consisting of (1) parent\u2013child links between coarse (5x) and fine (20x) visual/textual nodes to capture hierarchical relationships, and (2) heterogeneous intra-scale edges linking visual and textual nodes on the same scale. To further enhance semantic consistency, HiVE-MIL incorporates a two-stage, text-guided dynamic filtering mechanism that removes weakly correlated patch\u2013text pairs, and introduces a hierarchical contrastive loss to align textual semantics across scales. Extensive experiments on TCGA breast, lung, and kidney cancer datasets demonstrate that HiVE-MIL consistently outperforms both traditional MIL and recent VLM-based MIL approaches, achieving gains of up to 4.1% in macro F1 under 16-shot settings. Our results demonstrate the value of jointly modeling hierarchical structure and multimodal alignment for efficient and scalable learning from limited pathology data. The code is available at https://anonymous.4open.science/r/HiVE-MIL", "arxiv_id": "2505.17982v4", "arxiv_authors": ["Bryan Wong", "Jong Woo Kim", "Huazhu Fu", "Mun Yong Yi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a20e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.489Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1072046, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a67d"}, "filepath": "data/2505.19154v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991437064426872, "type": "Poster", "name": "FHGS: Feature-Homogenized Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119393", "abstract": "Scene understanding based on 3D Gaussian Splatting (3DGS) has recently achieved notable advances. 
Although 3DGS related methods have efficient rendering capability, they fail to address the inherent contradiction between the anisotropic color representation of GS primitives and the isotropic requirements of semantic features, leading to insufficient cross-view feature consistency.To overcome the limitation, this work proposes FHGS (Feature-Homogenized Gaussian Splatting), a novel 3D feature fusion framework inspired by physical models, which can achieve high-precision mapping of arbitrary 2D features from pre-trained models to 3D scenes while preserving the real-time rendering efficiency of 3DGS.Specifically, our FHGS introduces the following innovations: Firstly, a universal feature fusion architecture is proposed, enabling robust embedding of large-scale pre-trained models' semantic features (e.g., SAM, CLIP) into sparse 3D structures.Secondly, a non-differentiable feature fusion mechanism is introduced, which enables semantic features to exhibit viewpoint independent isotropic distribution. This fundamentally balances the anisotropic rendering of gaussian primitives and the isotropic expression of features; Thirdly, a dual-driven optimization strategy inspired by electric potential fields is proposed, which combines external supervision from semantic feature fields with internal primitive clustering guidance. This mechanism enables synergistic optimization of global semantic alignment and local structural consistency.Extensive comparison experiments with other state-of-the-art methods on benchmark datasets demonstrate that our FHGS exhibits superior reconstruction performance in feature fusion, noise suppression, geometric precision. This work establishes a novel Gaussian Splatting (GS) data structure, offering practical advancements for real-time semantic mapping, 3D stylization, and interactive tasks in unmanned systems.", "arxiv_id": "2505.19154v1", "arxiv_authors": ["Q. G. Duan", "Benyun Zhao", "Mingqiao Han Yijun Huang", "Ben M. Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a20f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1066384, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a67e"}, "filepath": "data/2503.08805v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998148799916001, "type": "Poster", "name": "Filter Like You Test: Data-Driven Data Filtering for CLIP Pretraining", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116273", "abstract": "We introduce Filter Like You Test (FLYT), an algorithm for curating large-scale vision-language datasets that *learns* the usefulness of each data point as a pretraining example. FLYT trains a scoring model that learns to weigh each example's features using gradient signals from downstream tasks training sets. Based on FLYT, we implement Mixing-FLYT (M-FLYT), which takes the per-example scores generated by different scoring methods as features, and learns to unify them into a single score. FLYT naturally produces a distribution over the training examples, which we leverage through Soft Cap Sampling (SCS), a strategy for obtaining a filtered pretraining dataset from per-example probabilities that samples examples while preventing over-representation through a repetition penalty. 
Using these methods, we achieve 40.1\\% ImageNet zero-shot accuracy on the DataComp medium scale filtering benchmark, a 2\\% absolute accuracy increase over all previous results and a 5.5\\% increase over results that---like us---use only public resources. Our approach also yields 37.7\\% on the average of 38 DataComp evaluation tasks, outperforming previous public-resource approaches by 0.4\\%.", "arxiv_id": "2503.08805v2", "arxiv_authors": ["Mikey Shechter", "Yair Carmon"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a210"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1079033, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a67f"}, "filepath": "data/2503.07038v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998275560031773, "type": "Poster", "name": "Find your Needle: Small Object Image Retrieval via Multi-Object Attention Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116342", "abstract": "We address the challenge of Small Object Image Retrieval (SoIR), where the goal is to retrieve images containing a specific small object, in a cluttered scene. The key challenge in this setting is constructing a single image descriptor, for scalable and efficient search, that effectively represents all objects in the image. In this paper, we first analyze the limitations of existing methods on this challenging task and then introduce new benchmarks to support SoIR evaluation. Next, we introduce Multi-object Attention Optimization (MaO), a novel retrieval framework which incorporates a dedicated multi-object pre-training phase. This is followed by a refinement process that leverages attention-based feature extraction with object masks, integrating them into a single unified image descriptor. Our MaO approach significantly outperforms existing retrieval methods and strong baselines, achieving notable improvements in both zero-shot and lightweight multi-object fine-tuning. We hope this work will lay the groundwork and inspire further research to enhance retrieval performance for this highly practical task.", "arxiv_id": "2503.07038v2", "arxiv_authors": ["Michael Green", "Matan Levy", "Issar Tzachor", "Dvir Samuel", "Nir Darshan", "Rami Ben-Ari"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a211"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1506004, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a680"}, "filepath": "data/2506.21656v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998092078374703, "type": "Poster", "name": "Fine-Grained Preference Optimization Improves Spatial Reasoning in VLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118573", "abstract": "Current Vision-Language Models (VLMs) struggle with fine-grained spatial reasoning, particularly when multi-step logic and precise spatial alignment are required. In this work, we introduce SpatialReasoner, a novel VLM designed to address these limitations. 
First, we propose Multi-LLM Guided Monte Carlo Tree Search (M3CTS) and Fine-Grained Spatial Rewards methods to construct a high-quality dataset. Second, we use fine-grained Direct Preference Optimization (fDPO) to train our model. fDPO introduces segment-specific preference granularity for descriptive grounding and logical reasoning, achieving an average improvement of 4.1% over standard DPO across spatial quality tasks, and a 9.0% boost in spatial quantity tasks. To address the scarcity of multi-step spatial reasoning data, M3CTS enables collaborative exploration of diverse reasoning paths, significantly enriching spatial comprehension and logical coherence. Empirical evaluations demonstrate that SpatialReasoner sets a new state-of-the-art on SpatialRGPT-Bench, outperforming the strongest baseline by 9.8% in average accuracy, while maintaining competitive performance on general vision-language tasks.", "arxiv_id": "2506.21656v2", "arxiv_authors": ["Yifan Shen", "Yuanzhe Liu", "Jingyuan Zhu", "Xu Cao", "Xiaofeng Zhang", "Yixiao He", "Wenming Ye", "James Matthew Rehg", "Ismini Lourentzou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a212"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3646900, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a681"}, "filepath": "data/2510.21311v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996768382177382, "type": "Poster", "name": "FineRS: Fine-grained Reasoning and Segmentation of Small Objects with Reinforcement Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115465", "abstract": "Multi-modal Large Language Models (MLLMs) have shown remarkable capabilities across a wide range of vision-language tasks. However, due to the restricted input resolutions, MLLMs face significant challenges in precisely understanding and localizing visual details in high-resolution images---particularly when dealing with extra-small objects embedded in cluttered contexts. To address this issue, we propose FineRS, a two-stage MLLM-based reinforcement learning framework for jointly reasoning and segmenting extremely small objects within high-resolution scenes. FineRS adopts a coarse-to-fine pipeline comprising Global Semantic Exploration (GSE) and Localized Perceptual Refinement (LPR). Specifically, GSE performs instruction-guided reasoning to generate a textural response and a coarse target region, while LPR refines this region to produce an accurate bounding box and segmentation mask. To couple the two stages, we introduce a locate-informed retrospective reward, where LPR's outputs are used to optimize GSE for more robust coarse region exploration. Additionally, we present FineRS-4k, a new dataset for evaluating MLLMs on attribute-level reasoning and pixel-level segmentation on subtle, small-scale targets in complex high-resolution scenes. 
Experimental results on FineRS-4k and public datasets demonstrate that our method consistently outperforms state-of-the-art MLLM-based approaches on both instruction-guided segmentation and visual reasoning tasks.", "arxiv_id": "2510.21311v1", "arxiv_authors": ["Lu Zhang", "Jiazuo Yu", "Haomiao Xiong", "Ping Hu", "Yunzhi Zhuge", "Huchuan Lu", "You He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a213"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1059406, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a682"}, "filepath": "data/2506.02167v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992412457871778, "type": "Poster", "name": "Fire360: A Benchmark for Robust Perception and Episodic Memory in Degraded 360\u00b0 Firefighting Video", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121673", "abstract": "Modern AI systems struggle most in environments where reliability is critical\u2014scenes with smoke, poor visibility, and structural deformation. Each year, tens of thousands of firefighters are injured on duty, often due to breakdowns in situational perception. We introduce Fire360, a benchmark for evaluating perception and reasoning in safety-critical firefighting scenarios. The dataset includes 228 360\u00b0 videos from professional training sessions under diverse conditions (e.g., low light, thermal distortion), annotated with action segments, object locations, and degradation metadata. Fire360 supports five tasks: Visual Question Answering, Temporal Action Captioning, Object Localization, Safety-Critical Reasoning, and Transformed Object Retrieval (TOR). TOR tests whether models can match pristine exemplars to fire-damaged counterparts in unpaired scenes, evaluating transformation-invariant recognition. While human experts achieve 83.5% on TOR, models like GPT-4o lag significantly, exposing failures in reasoning under degradation. By releasing Fire360 and its evaluation suite, we aim to advance models that not only see, but also remember, reason, and act under uncertainty.", "arxiv_id": "2506.02167v1", "arxiv_authors": ["Aditi Tiwari", "Farzaneh Masoud", "Dac Trong Nguyen", "Jill Kraft", "Heng Ji", "Klara Nahrstedt"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a214"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1115469, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a683"}, "filepath": "data/2510.15736v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999837073508212, "type": "Poster", "name": "Fix False Transparency by Noise Guided Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115515", "abstract": "3D Gaussian Splatting (3DGS) has demonstrated impressive capabilities in 3D reconstruction. However, its \u03b1-blending can induce \u2019false transparency\u2019 artifacts, particularly where low point cloud density in sparse or low-texture regions causes foreground objects to appear improperly transparent. 
This issue stems from an ill-posed optimization. During training, background Gaussians blend with foreground ones, making them difficult to differentiate using only photometric loss, which leads to the observed transparency in these regions. This view-inconsistency issue is hard to detect in static renderings during training and validation, but becomes evident in object-centric reconstruction during interactive rotation. Although other causes of view-inconsistency (e.g., popping artifacts) have been explored recently, false transparency has not been explicitly identified. This paper proposes a novel explanation to the problem and a solution to remedy it by injecting opaque noise Gaussians in the object volume during training. Our strategy, Noise Guided Splatting ( NGS), encourages surface Gaussians to adopt higher opacity while minimally modifying the existing splatting process. To quantitatively evaluate the false transparency in static renderings, we propose a transmittance-based metric to characterize the extent of the false transparency problem. We also introduce a customized high-quality object-centric scan dataset exhibiting prominent transparency issues and supplement popular existing datasets (e.g., DTU) with new complementary infill noise specifically designed to evaluate false transparency handling in 3D reconstruction methods. Across various datasets, NGS substantially reduces surface transmittance while maintaining performance on standard rendering metrics (e.g., PSNR), demonstrating its effectiveness", "arxiv_id": "2510.15736v1", "arxiv_authors": ["Aly El Hakie", "Yiren Lu", "Yu Yin", "Michael Jenkins", "Yehe Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a215"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1040021, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a684"}, "filepath": "data/2510.09995v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996618472162029, "type": "Poster", "name": "FlareX: A Physics-Informed Dataset for Lens Flare Removal via 2D Synthesis and 3D Rendering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121599", "abstract": "Lens flare occurs when shooting towards strong light sources, significantly degrading the visual quality of images. Due to the difficulty in capturing flare-corrupted and flare-free image pairs in the real world, existing datasets are typically synthesized in 2D by overlaying artificial flare templates onto background images. However, the lack of flare diversity in templates and the neglect of physical principles in the synthesis process hinder models trained on these datasets from generalizing well to real-world scenarios. To address these challenges, we propose a new physics-informed method for flare data generation, which consists of three stages: parameterized template creation, the laws of illumination-aware 2D synthesis, and physical engine-based 3D rendering, which finally gives us a mixed flare dataset that incorporates both 2D and 3D perspectives, namely FlareX. This dataset offers 9,500 2D templates derived from 95 flare patterns and 3,000 flare image pairs rendered from 60 3D scenes. 
Furthermore, we design a masking approach to obtain real-world flare-free images from their corrupted counterparts to measure the performance of the model on real-world images. Extensive experiments demonstrate the effectiveness of our method and dataset. The code, dataset, and a 1-minute video demo are available in the supplementary materials.", "arxiv_id": "2510.09995v1", "arxiv_authors": ["Lishen Qu", "Zhihao Liu", "Jinshan Pan", "Shihao Zhou", "Jinglei Shi", "Duosheng Chen", "Jufeng Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a216"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070551, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a685"}, "filepath": "data/2510.11190v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995599179881515, "type": "Poster", "name": "FlexAC: Towards Flexible Control of Associative Reasoning in Multimodal Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118016", "abstract": "Multimodal large language models (MLLMs) face an inherent trade-off between faithfulness and creativity, as different tasks require varying degrees of associative reasoning. However, existing methods lack the flexibility to modulate this reasoning strength, limiting MLLMs' adaptability across factual and creative scenarios. To bridge this gap, we propose equipping MLLMs with mechanisms that enable flexible control over associative reasoning. We begin by investigating the internal mechanisms underlying associative behavior in MLLMs and find that: (1) middle layers play a pivotal role in shaping model\u2019s associative tendencies, (2) modifying representations in these layers effectively regulates associative reasoning strength, and (3) hallucinations can be exploited to derive steering vectors that guide this modulation. Building on these findings, we introduce Flexible Association Control (FlexAC), a lightweight and training-free framework for modulating associative behavior in MLLMs. FlexAC first induces hallucination-guided intermediate representations to encode associative directions. Then, it selects high-association instances to construct effective associative steering vectors, whose strengths are adaptively calibrated to balance creative guidance with output stability. Finally, recognizing the multi-dimensional nature of associative reasoning, FlexAC incorporates task-specific associative vectors derived from a forward pass on a few target-domain samples, enabling models to follow diverse associative directions and better adapt to creative tasks. 
Notably, our method achieves up to a 5.8\u00d7 improvement in creativity on Creation-MMBench and a 29\\% reduction in hallucination rate on CHAIR, surpassing existing baselines and demonstrating its effectiveness in enabling flexible control over associative reasoning in MLLMs.", "arxiv_id": "2510.11190v2", "arxiv_authors": ["Shengming Yuan", "Xinyu Lyu", "Shuailong Wang", "Beitao Chen", "Jingkuan Song", "Lianli Gao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a217"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1035118, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a686"}, "filepath": "data/2412.06708v2.png", "tags": [], "_media_type": "image", "_rand": 0.999741370363225, "type": "Poster", "name": "FlexEvent: Towards Flexible Event-Frame Object Detection at Varying Operational Frequencies", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116987", "abstract": "Event cameras offer unparalleled advantages for real-time perception in dynamic environments, thanks to the microsecond-level temporal resolution and asynchronous operation. Existing event detectors, however, are limited by fixed-frequency paradigms and fail to fully exploit the high-temporal resolution and adaptability of event data. To address these limitations, we propose FlexEvent, a novel framework that enables detection at varying frequencies. Our approach consists of two key components: FlexFuse, an adaptive event-frame fusion module that integrates high-frequency event data with rich semantic information from RGB frames, and FlexTune, a frequency-adaptive fine-tuning mechanism that generates frequency-adjusted labels to enhance model generalization across varying operational frequencies. This combination allows our method to detect objects with high accuracy in both fast-moving and static scenarios, while adapting to dynamic environments. Extensive experiments on large-scale event camera datasets demonstrate that our approach surpasses state-of-the-art methods, achieving significant improvements in both standard and high-frequency settings. Notably, our method maintains robust performance when scaling from 20 Hz to 90 Hz and delivers accurate detection up to 180 Hz, proving its effectiveness in extreme conditions. Our framework sets a new benchmark for event-based object detection and paves the way for more adaptable, real-time vision systems. 
Code will be publicly available.", "arxiv_id": "2412.06708v2", "arxiv_authors": ["Dongyue Lu", "Lingdong Kong", "Gim Hee Lee", "Camille Simon Chane", "Wei Tsang Ooi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a218"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3011677, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a687"}, "filepath": "data/2506.00993v1.png", "tags": [], "_media_type": "image", "_rand": 0.999615581781643, "type": "Poster", "name": "FlexSelect: Flexible Token Selection for Efficient Long Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120343", "abstract": "Long-form video understanding poses a significant challenge for video large language models (VideoLLMs) due to prohibitively high computational and memory demands. In this paper, We propose $\\textbf{FlexSelect}$, a flexible and efficient token selection strategy for processing long videos.FlexSelect identifies and retains the most semantically relevant content by leveraging cross-modal attention patterns from a reference transformer layer.It comprises two key components: (1) $\\textbf{a training-free token ranking pipeline}$ that leverages faithful cross-modal attention weights to estimate each video token\u2019s importance, and (2) $\\textbf{a rank-supervised lightweight selector}$ that is trained to replicate these rankings and filter redundant tokens.This generic approach can be seamlessly integrated into various VideoLLM architectures, such as LLaVA-Video, InternVL and Qwen-VL, serving as a plug-and-play module to extend their temporal context length. Empirically, FlexSelect delivers strong gains across multiple long-video benchmarks \u2013 including VideoMME, MLVU, LongVB, and LVBench. Morever, it achieves significant speed-ups ($\\textit{e.g.,}$ up to 9 $\\times$ on a LLaVA-Video-7B model), highlighting FlexSelect\u2019s promise for efficient long-form video understanding. Project page: https://flexselect.github.io", "arxiv_id": "2506.00993v1", "arxiv_authors": ["Yunzhu Zhang", "Yu Lu", "Tianyi Wang", "Fengyun Rao", "Yi Yang", "Linchao Zhu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a219"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1002939, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a688"}, "filepath": "data/2502.20313v1.png", "tags": [], "_media_type": "image", "_rand": 0.999360585262065, "type": "Poster", "name": "FlexVAR: Flexible Visual Autoregressive Modeling without Residual Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120190", "abstract": "This work challenges the residual prediction paradigm in visual autoregressive modeling and presents FlexVAR, a new Flexible Visual AutoRegressive image generation paradigm. FlexVAR facilitates autoregressive learning with ground-truth prediction, enabling each step to independently produce plausible images. 
This simple, intuitive approach swiftly learns visual distributions and makes the generation process more flexible and adaptable. Trained solely on low-resolution images (< 256px), FlexVAR can: (1) Generate images of various resolutions and aspect ratios, even exceeding the resolution of the training images. (2) Support various image-to-image tasks, including image refinement, in/out-painting, and image expansion. (3) Adapt to various autoregressive steps, allowing for faster inference with fewer steps or enhancing image quality with more steps. Our 1.0B model outperforms its VAR counterpart on the ImageNet 256 \u00d7 256 benchmark. Moreover, when zero-shot transfer the image generation process with 13 steps, the performance further improves to 2.08 FID, outperforming state-of-the-art autoregressive models AiM/VAR by 0.25/0.28 FID and popular diffusion models LDM/DiT by 1.52/0.19 FID, respectively. When transferring our 1.0B model to the ImageNet 512 \u00d7 512 benchmark in a zero-shot manner, FlexVAR achieves competitive results compared to the VAR 2.3B model, which is a fully supervised model trained at 512 \u00d7 512 resolution.", "arxiv_id": "2502.20313v1", "arxiv_authors": ["Siyu Jiao", "Gengwei Zhang", "Yinlong Qian", "Jiancheng Huang", "Yao Zhao", "Humphrey Shi", "Lin Ma", "Yunchao Wei", "Zequn Jie"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a21a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6040213, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a689"}, "filepath": "data/2503.13265v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996838805609716, "type": "Poster", "name": "FlexWorld: Progressively Expanding 3D Scenes for Flexible-View Exploration", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117746", "abstract": "Generating flexible-view 3D scenes, including 360\u00b0 rotation and zooming, from single images is challenging due to a lack of 3D data. To this end, we introduce FlexWorld, a novel framework that progressively constructs a persistent 3D Gaussian splatting representation by synthesizing and integrating new 3D content. To handle novel view synthesis under large camera variations, we leverage an advanced pre-trained video model fine-tuned on accurate depth-estimated training pairs. By combining geometry-aware scene integration and optimization, FlexWorld refines the scene representation, producing visually consistent 3D scenes with flexible viewpoints. Extensive experiments demonstrate the effectiveness of FlexWorld in generating high-quality novel view videos and flexible-view 3D scenes from single images, achieving superior visual quality under multiple popular metrics and datasets compared to existing state-of-the-art methods. Additionally, FlexWorld supports extrapolating from existing 3D scenes, further extending its applicability. 
Qualitatively, we highlight that FlexWorld can generate high-fidelity scenes that enable 360\u00b0 rotations and zooming exploration.", "arxiv_id": "2503.13265v2", "arxiv_authors": ["Luxi Chen", "Zihan Zhou", "Min Zhao", "Yikai Wang", "Ge Zhang", "Wenhao Huang", "Hao Sun", "Ji-Rong Wen", "Chongxuan Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a21b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.490Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1164118, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a68a"}, "filepath": "data/2505.19536v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995935984295707, "type": "Poster", "name": "FlowCut: Rethinking Redundancy via Information Flow for Efficient Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118495", "abstract": "Large vision-language models (LVLMs) excel at multimodal understanding but suffer from high computational costs due to redundant vision tokens. Existing pruning methods typically rely on single-layer attention scores to rank and prune redundant visual tokens to solve this inefficiency. However, as the interaction between tokens and layers is complicated, this raises a basic question: Is such a simple single-layer criterion sufficient to identify redundancy? To answer this question, we rethink the emergence of redundant visual tokens from a fundamental perspective: information flow, which models the interaction between tokens and layers by capturing how information moves between tokens across layers. We find (1) the CLS token acts as an information relay, which can simplify the complicated flow analysis; (2) the redundancy emerges progressively and dynamically via layer-wise attention concentration; and (3) relying solely on attention scores from single layers can lead to contradictory redundancy identification. Based on this, we propose FlowCut, an information-flow-aware pruning framework, mitigating the insufficiency of the current criterion for identifying redundant tokens and better aligning with the model's inherent behaviors. Extensive experiments show FlowCut achieves superior results, outperforming SoTA by 1.6% on LLaVA-1.5-7B with 88.9% token reduction, and by 4.3% on LLaVA-NeXT-7B with 94.4% reduction, delivering 3.2$\\times$ speed-up in the prefilling stage. 
Our codes will be released.", "arxiv_id": "2505.19536v2", "arxiv_authors": ["Jintao Tong", "Wenwei Jin", "Pengda Qin", "Anqi Li", "Yixiong Zou", "Yuhong Li", "Yuhua Li", "Ruixuan Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a21c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1125117, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a68b"}, "filepath": "data/2505.05470v5.png", "tags": [], "_media_type": "image", "_rand": 0.9996394802512973, "type": "Poster", "name": "Flow-GRPO: Training Flow Matching Models via Online RL", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116065", "abstract": "We propose Flow-GRPO, the first method to integrate online reinforcement learning (RL) into flow matching models. Our approach uses two key strategies: (1) an ODE-to-SDE conversion that transforms a deterministic Ordinary Differential Equation (ODE) into an equivalent Stochastic Differential Equation (SDE) that matches the original model's marginal distribution at all timesteps, enabling statistical sampling for RL exploration; and (2) a Denoising Reduction strategy that reduces training denoising steps while retaining the original number of inference steps, significantly improving sampling efficiency without sacrificing performance. Empirically, Flow-GRPO is effective across multiple text-to-image tasks. For compositional generation, RL-tuned SD3.5-M generates nearly perfect object counts, spatial relations, and fine-grained attributes, increasing GenEval accuracy from $63$\\% to $95$\\%. In visual text rendering, accuracy improves from $59$\\% to $92$\\%, greatly enhancing text generation. Flow-GRPO also achieves substantial gains in human preference alignment. Notably, very little reward hacking occurred, meaning rewards did not increase at the cost of appreciable image quality or diversity degradation.", "arxiv_id": "2505.05470v5", "arxiv_authors": ["Jie Liu", "Gongye Liu", "Jiajun Liang", "Yangguang Li", "Jiaheng Liu", "Xintao Wang", "Pengfei Wan", "Di Zhang", "Wanli Ouyang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a21d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1120573, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a68c"}, "filepath": "data/2510.09537v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993276177942567, "type": "Poster", "name": "FLOWING: Implicit Neural Flows for Structure-Preserving Morphing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116569", "abstract": "Morphing is a long-standing problem in vision and computer graphics, requiring a time-dependent warping for feature alignment and a blending for smooth interpolation. 
Recently, multilayer perceptrons (MLPs) have been explored as implicit neural representations (INRs) for modeling such deformations, due to their meshlessness and differentiability; however, extracting coherent and accurate morphings from standard MLPs typically relies on costly regularizations, often leading to unstable training and impeding the effective alignment and interpolation between features. To overcome these limitations, we propose FLOWING (FLOW morphING), a framework that reframes warping as the construction of a differential vector flow, naturally ensuring continuity, invertibility, and temporal coherence.By design, FLOWING encodes structural flow propertiesdirectly into the network architectures, avoiding costly regularizations. This flow-centric approach yields principled and stable transformations that are smooth, reversible, and temporally coherent by construction, enabling accurate, structure-preserving morphing of both 2D images and 3D shapes.Extensive experiments across a range of applications\u2014including face and image morphing, as well as Gaussian Splatting morphing\u2014show that FLOWING achieves state-of-the-art morphing quality with substantially faster convergence. Code and pretrained models will be released.", "arxiv_id": "2510.09537v1", "arxiv_authors": ["Arthur Bizzi", "Matias Grynberg", "Vitor Matias", "Daniel Perazzo", "Jo\u00e3o Paulo Lima", "Luiz Velho", "Nuno Gon\u00e7alves", "Jo\u00e3o Pereira", "Guilherme Schardong", "Tiago Novello"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a21e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1117544, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a68d"}, "filepath": "data/2510.11083v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990643391059131, "type": "Poster", "name": "Flow Matching-Based Autonomous Driving Planning with Advanced Interactive Behavior Modeling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118469", "abstract": "Modeling interactive driving behaviors in complex scenarios remains a fundamental challenge for autonomous driving planning. Learning-based approaches attempt to address this challenge with advanced generative models, removing the dependency on over-engineered architectures for representation fusion. However, brute-force implementation by simply stacking transformer blocks lacks a dedicated mechanism for modeling interactive behaviors that is common in real driving scenarios. The scarcity of interactive driving data further exacerbates this problem, leaving conventional imitation learning methods ill-equipped to capture high-value interactive behaviors. We propose Flow Planner, which tackles these problems through coordinated innovations in data modeling, model architecture, and learning scheme. Specifically, we first introduce fine-grained trajectory tokenization, which decomposes the trajectory into overlapping segments to decrease the complexity of whole trajectory modeling. With a sophisticatedly designed architecture, we achieve efficient temporal and spatial fusion of planning and scene information, to better capture interactive behaviors. 
In addition, the framework incorporates flow matching with classifier-free guidance for multi-modal behavior generation, which dynamically reweights agent interactions during inference to maintain coherent response strategies, providing a critical boost for interactive scenario understanding. Experimental results on the large-scale nuPlan dataset demonstrate that Flow Planner achieves state-of-the-art performance among learning-based approaches while effectively modeling interactive behaviors in complex driving scenarios.", "arxiv_id": "2510.11083v1", "arxiv_authors": ["Tianyi Tan", "Yinan Zheng", "Ruiming Liang", "Zexu Wang", "Kexin Zheng", "Jinliang Zheng", "Jianxiong Li", "Xianyuan Zhan", "Jingjing Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a21f"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 991944, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a68e"}, "filepath": "data/2506.01144v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996679605719432, "type": "Poster", "name": "FlowMo: Variance-Based Flow Guidance for Coherent Motion in Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118232", "abstract": "Text-to-video diffusion models are notoriously limited in their ability to model temporal aspects such as motion, physics, and dynamic interactions. Existing approaches address this limitation by retraining the model or introducing external conditioning signals to enforce temporal consistency. In this work, we explore whether a meaningful temporal representation can be extracted directly from the predictions of a pre-trained model without any additional training or auxiliary inputs. We introduce __FlowMo__, a novel training-free guidance method that enhances motion coherence using only the model's own predictions in each diffusion step. FlowMo first derives an appearance-debiased temporal representation by measuring the distance between latents corresponding to consecutive frames. This highlights the implicit temporal structure predicted by the model. It then estimates motion coherence by measuring the patch-wise variance across the temporal dimension, and guides the model to reduce this variance dynamically during sampling. 
Extensive experiments across multiple text-to-video models demonstrate that FlowMo significantly improves motion coherence without sacrificing visual quality or prompt alignment, offering an effective plug-and-play solution for enhancing the temporal fidelity of pre-trained video diffusion models.", "arxiv_id": "2506.01144v2", "arxiv_authors": ["Ariel Shaulov", "Itay Hazan", "Lior Wolf", "Hila Chefer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a220"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4238582, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a68f"}, "filepath": "data/2506.02896v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992733606826203, "type": "Poster", "name": "FlySearch: Exploring how vision-language models explore", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121733", "abstract": "The real world is messy and unstructured. Uncovering critical information often requires active, goal-driven exploration. It remains to be seen whether Vision-Language Models (VLMs), which recently emerged as a popular zero-shot tool in many difficult tasks, can operate effectively in such conditions. In this paper, we answer this question by introducing FlySearch, a 3D, outdoor, photorealistic environment for searching and navigating to objects in complex scenes. We define three sets of scenarios with varying difficulty and observe that state-of-the-art VLMs cannot reliably solve even the simplest exploration tasks, with the gap to human performance increasing as the tasks get harder. We identify a set of central causes, ranging from vision hallucination, through context misunderstanding, to task planning failures, and we show that some of them can be addressed by finetuning. We publicly release the benchmark, scenarios, and the underlying codebase.", "arxiv_id": "2506.02896v3", "arxiv_authors": ["Adam Pardyl", "Dominik Matuszek", "Mateusz Przebieracz", "Marek Cygan", "Bartosz Zieli\u0144ski", "Maciej Wo\u0142czyk"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a221"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1050936, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a690"}, "filepath": "data/2506.16806v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994419721659136, "type": "Poster", "name": "FOCUS: Unified Vision-Language Modeling for Interactive Editing Driven by Referential Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119062", "abstract": "Recent Large Vision Language Models (LVLMs) demonstrate promising capabilities in unifying visual understanding and generative modeling, enabling both accurate content understanding and flexible editing. 
However, current approaches treat \textbf{\textit{"what to see"}} and \textbf{\textit{"how to edit"}} separately: they either perform isolated object segmentation or utilize segmentation masks merely as conditional prompts for local edit generation tasks, often relying on multiple disjointed models. To bridge these gaps, we introduce FOCUS, a unified LVLM that integrates segmentation-aware perception and controllable object-centric generation within an end-to-end framework. FOCUS employs a dual-branch visual encoder to simultaneously capture global semantic context and fine-grained spatial details. In addition, we leverage a MoVQGAN-based visual tokenizer to produce discrete visual tokens that enhance generation quality. To enable accurate and controllable image editing, we propose a progressive multi-stage training pipeline, where segmentation masks are jointly optimized and used as spatial condition prompts to guide the diffusion decoder. This strategy aligns visual encoding, segmentation, and generation modules, effectively bridging segmentation-aware perception with fine-grained visual synthesis. Extensive experiments across three core tasks, including multimodal understanding, referring segmentation accuracy, and controllable image generation, demonstrate that FOCUS achieves strong performance by jointly optimizing visual perception and generative capabilities.", "arxiv_id": "2506.16806v2", "arxiv_authors": ["Fan Yang", "Yousong Zhu", "Xin Li", "Yufei Zhan", "Hongyin Zhao", "Shurong Zheng", "Yaowei Wang", "Ming Tang", "Jinqiao Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a222"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1017368, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a691"}, "filepath": "data/2505.19386v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993260948954155, "type": "Poster", "name": "Force Prompting: Video Generation Models Can Learn And Generalize Physics-based Control Signals", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116909", "abstract": "Recent advances in video generation models have sparked interest in world models capable of simulating realistic environments. While navigation has been well-explored, physically meaningful interactions that mimic real-world forces remain largely understudied. In this work, we investigate using physical forces as a control signal for video generation and propose force prompts which enable users to interact with images through both localized point forces, such as poking a plant, and global wind force fields, such as wind blowing on fabric. We demonstrate that these force prompts can enable videos to respond realistically to physical control signals by leveraging the physical prior in the original pretrained model, without using any 3D asset or physics simulator at inference. The primary challenge of force prompting is the difficulty in obtaining high quality paired force-video training data, both in the real world due to the difficulty of obtaining force signals, and in synthetic data due to limitations in the visual quality and domain diversity of physics simulators. 
Our key finding is that video generation models can *generalize* remarkably well when adapted to follow physical force conditioning from videos synthesized by Blender, even with limited demonstrations of few objects (e.g., flying flags, rolling balls, etc.). Our method can generate videos which simulate forces across diverse geometries, settings, and materials. We also try to understand the source of this generalization and perform ablations on the training data that reveal two key elements: visual diversity and the use of specific text keywords during training. Our approach is trained on only around 15k training examples for a single day on four A100 GPUs, and outperforms existing methods on force adherence and physics realism, bringing world models closer to real-world physics interactions. All datasets, code, and model weights will be open-sourced. Video examples can be found at https://sites.google.com/view/force-prompting-neurips2025", "arxiv_id": "2505.19386v1", "arxiv_authors": ["Nate Gillman", "Charles Herrmann", "Michael Freeman", "Daksh Aggarwal", "Evan Luo", "Deqing Sun", "Chen Sun"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a223"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1037816, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a692"}, "filepath": "data/2505.22159v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990148495700971, "type": "Poster", "name": "ForceVLA: Enhancing VLA Models with a Force-aware MoE for Contact-rich Manipulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120169", "abstract": "Vision-Language-Action (VLA) models have advanced general-purpose robotic manipulation by leveraging pretrained visual and linguistic representations. However, they struggle with contact-rich tasks that require fine-grained control involving force, especially under visual occlusion or dynamic uncertainty. To address these limitations, we propose \\textbf{ForceVLA}, a novel end-to-end manipulation framework that treats external force sensing as a first-class modality within VLA systems. ForceVLA introduces \\textbf{FVLMoE}, a force-aware Mixture-of-Experts fusion module that dynamically integrates pretrained visual-language embeddings with real-time 6-axis force feedback during action decoding. This enables context-aware routing across modality-specific experts, enhancing the robot's ability to adapt to subtle contact dynamics. We also introduce \\textbf{ForceVLA-Data}, a new dataset comprising synchronized vision, proprioception, and force-torque signals across five contact-rich manipulation tasks. ForceVLA improves average task success by 23.2\\% over strong $\\pi_0$-based baselines, achieving up to 80\\% success in tasks such as plug insertion. Our approach highlights the importance of multimodal integration for dexterous manipulation and sets a new benchmark for physically intelligent robotic control. 
Code and data will be released at https://sites.google.com/view/forcevla2025/.", "arxiv_id": "2505.22159v3", "arxiv_authors": ["Jiawen Yu", "Hairuo Liu", "Qiaojun Yu", "Jieji Ren", "Ce Hao", "Haitong Ding", "Guangyu Huang", "Guofan Huang", "Yan Song", "Panpan Cai", "Cewu Lu", "Wenqiang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a224"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1131687, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a693"}, "filepath": "data/2505.11003v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998840995625657, "type": "Poster", "name": "ForensicHub: A Unified Benchmark & Codebase for All-Domain Fake Image Detection and Localization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121732", "abstract": "The field of Fake Image Detection and Localization (FIDL) is highly fragmented, encompassing four domains: deepfake detection (Deepfake), image manipulation detection and localization (IMDL), artificial intelligence-generated image detection (AIGC), and document image manipulation localization (Doc). Although individual benchmarks exist in some domains, a unified benchmark for all domains in FIDL is still missing. The absence of a unified benchmark results in significant domain silos, where each domain independently constructs its datasets, models, and evaluation protocols without interoperability, preventing cross-domain comparisons and hindering the development of the entire FIDL field. To break down these domain silos, we propose ForensicHub, the first unified benchmark \& codebase for all-domain fake image detection and localization. Considering the drastic variations in dataset, model, and evaluation configurations across all domains, as well as the scarcity of open-sourced baseline models and the lack of individual benchmarks in some domains, ForensicHub: i) proposes a modular and configuration-driven architecture that decomposes forensic pipelines into interchangeable components across datasets, transforms, models, and evaluators, allowing flexible composition across all domains; ii) fully implements 10 baseline models (3 of which are reproduced from scratch), 6 backbones, 2 new benchmarks for AIGC and Doc, and integrates 2 existing benchmarks of DeepfakeBench and IMDLBenCo through an adapter-based design; iii) establishes an image forensic fusion protocol evaluation mechanism that supports unified training and testing of diverse forensic models across tasks; iv) conducts in-depth analysis based on ForensicHub, offering 8 key actionable insights into FIDL model architecture, dataset characteristics, and evaluation standards. Specifically, ForensicHub includes 4 forensic tasks, 23 datasets, 42 baseline models, 6 backbones, 11 GPU-accelerated pixel- and image-level evaluation metrics, and realizes 16 kinds of cross-domain evaluations. ForensicHub represents a significant leap forward in breaking the domain silos in the FIDL field and inspiring future breakthroughs. 
Code is available at: https://github.com/scu-zjz/ForensicHub.", "arxiv_id": "2505.11003v2", "arxiv_authors": ["Bo Du", "Xuekang Zhu", "Xiaochen Ma", "Chenfan Qu", "Kaiwen Feng", "Zhe Yang", "Chi-Man Pun", "Jian Liu", "Jizhe Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a225"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061233, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a694"}, "filepath": "data/2411.19466v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998913873342277, "type": "Poster", "name": "ForgerySleuth: Empowering Multimodal Large Language Models for Image Manipulation Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120196", "abstract": "Multimodal large language models have unlocked new possibilities for various multimodal tasks. However, their potential in image manipulation detection remains unexplored. When directly applied to the IMD task, M-LLMs often produce reasoning texts that suffer from hallucinations and overthinking. To address this, in this work, we propose ForgerySleuth, which leverages M-LLMs to perform comprehensive clue fusion and generate segmentation outputs indicating specific regions that are tampered with. Moreover, we construct the ForgeryAnalysis dataset through a chain-of-clues process, which includes analysis and reasoning text to upgrade the image manipulation detection task. A data engine is also introduced to build a larger-scale dataset for the pre-training phase. Our extensive experiments demonstrate the effectiveness of ForgeryAnalysis and show that ForgerySleuth significantly outperforms existing methods in generalization, robustness, and explainability.", "arxiv_id": "2411.19466v1", "arxiv_authors": ["Zhihao Sun", "Haoran Jiang", "Haoran Chen", "Yixin Cao", "Xipeng Qiu", "Zuxuan Wu", "Yu-Gang Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a226"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2597842, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a695"}, "filepath": "data/2506.02964v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999648309115451, "type": "Poster", "name": "FORLA:Federated Object-centric Representation Learning with Slot Attention", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117568", "abstract": "Learning efficient visual representations across heterogeneous unlabeled datasets remains a central challenge in federated learning. Effective federated representations require features that are jointly informative across clients while disentangling domain-specific factors without supervision. We introduce FORLA, a novel framework for federated object-centric representation learning and feature adaptation across clients using unsupervised slot attention. 
At the core of our method is a shared feature adapter, trained collaboratively across clients to adapt features from foundation models, and a shared slot attention module that learns to reconstruct the adapted features. To optimize this adapter, we design a two-branch student\u2013teacher architecture. In each client, a student decoder learns to reconstruct full features from foundation models, while a teacher decoder reconstructs their adapted, low-dimensional counterpart. The shared slot attention module bridges cross-domain learning by aligning object-level representations across clients. Experiments on multiple real-world datasets show that our framework not only outperforms centralized baselines on object discovery but also learns a compact, universal representation that generalizes well across domains. This work highlights federated slot attention as an effective tool for scalable, unsupervised visual representation learning from cross-domain data with distributed concepts.", "arxiv_id": "2506.02964v1", "arxiv_authors": ["Guiqiu Liao", "Matjaz Jogan", "Eric Eaton", "Daniel A. Hashimoto"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a227"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 872169, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a696"}, "filepath": "data/2411.15277v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990025362294837, "type": "Poster", "name": "Foundation Cures Personalization: Improving Personalized Models\u2019 Prompt Consistency via Hidden Foundation Knowledge", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118648", "abstract": "Facial personalization faces the challenge of maintaining identity fidelity without disrupting the foundation model's prompt consistency. Mainstream personalization models employ identity embedding to integrate identity information within the attention mechanisms. However, our preliminary findings reveal that identity embeddings compromise the effectiveness of other tokens in the prompt, thereby limiting high prompt consistency and attribute-level controllability. Moreover, by deactivating identity embedding, personalization models still demonstrate the underlying foundation models' ability to control facial attributes precisely. This suggests that such foundation models' knowledge can be leveraged to cure the ill-aligned prompt consistency of personalization models. Building upon these insights, we propose FreeCure, a framework that improves the prompt consistency of personalization models with their latent foundation models' knowledge. First, by setting a dual inference paradigm with/without identity embedding, we identify attributes (e.g., hair, accessories, etc.) for enhancements. Second, we introduce a novel foundation-aware self-attention module, coupled with an inversion-based process to bring well-aligned attribute information to the personalization process. Our approach is training-free, can effectively enhance a wide array of facial attributes, and can be seamlessly integrated into existing popular personalization models based on both Stable Diffusion and FLUX. 
FreeCure has consistently demonstrated significant improvements in prompt consistency across these facial personalization models while maintaining the integrity of their original identity fidelity. Project page: https://freecure.github.io/.", "arxiv_id": "2411.15277v3", "arxiv_authors": ["Yiyang Cai", "Zhengkai Jiang", "Yulong Liu", "Chunyang Jiang", "Wei Xue", "Yike Guo", "Wenhan Luo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a228"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.491Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 972327, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a697"}, "filepath": "data/2506.04648v2.png", "tags": [], "_media_type": "image", "_rand": 0.999541294193281, "type": "Poster", "name": "FPSAttention: Training-Aware FP8 and Sparsity Co-Design for Fast Video Diffusion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117878", "abstract": "Diffusion generative models have become the standard for producing high-quality, coherent video content, yet their slow inference speeds and high computational demands hinder practical deployment. Although both quantization and sparsity can independently accelerate inference while maintaining generation quality, naively combining these techniques in existing training-free approaches leads to significant performance degradation, as they fail to achieve proper joint optimization. We introduce FPSAttention, a novel training-aware co-design of FP8 quantization and Sparsity for video generation, with a focus on the 3D bi-directional attention mechanism. Our approach features three key innovations: 1) A unified 3D tile-wise granularity that simultaneously supports both quantization and sparsity. 2) A denoising step-aware strategy that adapts to the noise schedule, addressing the strong correlation between quantization/sparsity errors and denoising steps. 
3) A native, hardware-friendly kernel that leverages FlashAttention and is implemented with optimized Hopper architecture features, enabling highly efficient execution. Trained on Wan2.1's 1.3B and 14B models and evaluated on the vBench benchmark, FPSAttention achieves a 7.09$\times$ kernel speedup for attention operations and a 4.96$\times$ end-to-end speedup for video generation compared to the BF16 baseline at 720p resolution\u2014without sacrificing generation quality.", "arxiv_id": "2506.04648v2", "arxiv_authors": ["Akide Liu", "Zeyu Zhang", "Zhexin Li", "Xuehai Bai", "Yizeng Han", "Jiasheng Tang", "Yuanjie Xing", "Jichao Wu", "Mingyang Yang", "Weihua Chen", "Jiahao He", "Yuanyu He", "Fan Wang", "Gholamreza Haffari", "Bohan Zhuang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a229"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1469783, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a698"}, "filepath": "data/2504.12626v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999235986550821, "type": "Poster", "name": "Frame Context Packing and Drift Prevention in Next-Frame-Prediction Video Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118746", "abstract": "We present a neural network structure, FramePack, to train next-frame (or next-frame-section) prediction models for video generation. FramePack compresses input frame contexts with frame-wise importance so that more frames can be encoded within a fixed context length, with more important frames having longer contexts. The frame importance can be measured using time proximity, feature similarity, or hybrid metrics. The packing method allows for inference with thousands of frames and training with relatively large batch sizes. We also present drift prevention methods to address observation bias (error accumulation), including early-established endpoints, adjusted sampling orders, and discrete history representation. Ablation studies validate the effectiveness of the anti-drifting methods in both single-directional video streaming and bi-directional video generation. Finally, we show that existing video diffusion models can be finetuned with FramePack, and analyze the differences between different packing schedules.", "arxiv_id": "2504.12626v3", "arxiv_authors": ["Lvmin Zhang", "Shengqu Cai", "Muyang Li", "Gordon Wetzstein", "Maneesh Agrawala"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a22a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1011194, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a699"}, "filepath": "data/2505.21491v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995391425852572, "type": "Poster", "name": "Frame In-N-Out: Unbounded Controllable Image-to-Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116106", "abstract": "Controllability, temporal coherence, and detail synthesis remain the most critical challenges in video generation. 
In this paper, we focus on a commonly used yet underexplored cinematic technique known as Frame In and Frame Out. Specifically, starting from image-to-video generation, users can control the objects in the image to naturally leave the scene or introduce new identity references that enter the scene, guided by a user-specified motion trajectory. To support this task, we introduce a new dataset curated semi-automatically, a comprehensive evaluation protocol targeting this setting, and an efficient identity-preserving motion-controllable video Diffusion Transformer architecture. Our evaluation shows that our proposed approach significantly outperforms existing baselines.", "arxiv_id": "2505.21491v1", "arxiv_authors": ["Boyang Wang", "Xuweiyi Chen", "Matheus Gadelha", "Zezhou Cheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a22b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4127358, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a69a"}, "filepath": "data/2510.23444v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995571884449417, "type": "Poster", "name": "FRBNet: Revisiting Low-Light Vision through Frequency-Domain Radial Basis Network", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119034", "abstract": "Low-light vision remains a fundamental challenge in computer vision due to severe illumination degradation, which significantly affects the performance of downstream tasks such as detection and segmentation. While recent state-of-the-art methods have improved performance through invariant feature learning modules, they still fall short due to incomplete modeling of low-light conditions. Therefore, we revisit low-light image formation and extend the classical Lambertian model to better characterize low-light conditions. By shifting our analysis to the frequency domain, we theoretically prove that the frequency-domain channel ratio can be leveraged to extract illumination-invariant features via a structured filtering process. We then propose a novel and end-to-end trainable module named \textbf{F}requency-domain \textbf{R}adial \textbf{B}asis \textbf{Net}work (\textbf{FRBNet}), which integrates the frequency-domain channel ratio operation with a learnable frequency domain filter for the overall illumination-invariant feature enhancement. As a plug-and-play module, FRBNet can be integrated into existing networks for low-light downstream tasks without modifying loss functions. Extensive experiments across various downstream tasks demonstrate that FRBNet achieves superior performance, including +2.2 mAP for dark object detection and +2.9 mIoU for nighttime segmentation. 
Code is available at \\href{https://anonymous.4open.science/r/FRBNet_Anony}{FRBNet\\_anony}.", "arxiv_id": "2510.23444v1", "arxiv_authors": ["Fangtong Sun", "Congyu Li", "Ke Yang", "Yuchen Pan", "Hanwen Yu", "Xichuan Zhang", "Yiying Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a22c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070397, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a69b"}, "filepath": "data/2503.23035v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996618228506916, "type": "Poster", "name": "FreeInv: Free Lunch for Improving DDIM Inversion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118393", "abstract": "Naive DDIM inversion process usually suffers from a trajectory deviation issue, i.e., the latent trajectory during reconstruction deviates from the one during inversion. To alleviate this issue, previous methods either learn to mitigate the deviation or design cumbersome compensation strategy to reduce the mismatch error, exhibiting substantial time and computation cost. In this work, we present a nearly free-lunch method (named FreeInv) to address the issue more effectively and efficiently. In FreeInv, we randomly transform the latent representation and keep the transformation the same between the corresponding inversion and reconstruction time-step. It is motivated from a statistical perspective that an ensemble of DDIM inversion processes for multiple trajectories yields a smaller trajectory mismatch error on expectation.Moreover, through theoretical analysis and empirical study, we show that FreeInv performs an efficient ensemble of multiple trajectories. FreeInv can be freely integrated into existing inversion-based image and video editing techniques. Especially for inverting video sequences, it brings more significant fidelity and efficiency improvements. Comprehensive quantitative and qualitative evaluation on PIE benchmark and DAVIS dataset shows that FreeInv remarkably outperforms conventional DDIM inversion, and is competitive among previous state-of-the-art inversion methods, with superior computation efficiency.", "arxiv_id": "2503.23035v1", "arxiv_authors": ["Yuxiang Bao", "Huijie Liu", "Xun Gao", "Huan Fu", "Guoliang Kang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a22d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1595631, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a69c"}, "filepath": "data/2503.14275v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992592146975727, "type": "Poster", "name": "Free-Lunch Color-Texture Disentanglement for Stylized Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116962", "abstract": "Recent advances in Text-to-Image (T2I) diffusion models have transformed image generation, enabling significant progress in stylized generation using only a few style reference images. 
However, current diffusion-based methods struggle with \\textit{fine-grained} style customization due to challenges in controlling multiple style attributes, such as color and texture. This paper introduces the first tuning-free approach to achieve free-lunch color-texture disentanglement in stylized T2I generation, addressing the need for independently controlled style elements for the Disentangled Stylized Image Generation (DisIG) problem. Our approach leverages the \\textit{Image-Prompt Additivity} property in the CLIP image embedding space to develop techniques for separating and extracting Color-Texture Embeddings (CTE) from individual color and texture reference images. To ensure that the color palette of the generated image aligns closely with the color reference, we apply a whitening and coloring transformation to enhance color consistency. Additionally, to prevent texture loss due to the signal-leak bias inherent in diffusion training, we introduce a noise term that preserves textural fidelity during the Regularized Whitening and Coloring Transformation (RegWCT). Through these methods, our Style Attributes Disentanglement approach (SADis) delivers a more precise and customizable solution for stylized image generation. Experiments on images from the WikiArt and StyleDrop datasets demonstrate that, both qualitatively and quantitatively, SADis surpasses state-of-the-art stylization methods in the DisIG task.", "arxiv_id": "2503.14275v3", "arxiv_authors": ["Jiang Qin", "Senmao Li", "Alexandra Gomez-Villa", "Shiqi Yang", "Yaxing Wang", "Kai Wang", "Joost van de Weijer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a22e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4733746, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a69d"}, "filepath": "data/2506.08822v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999592186486485, "type": "Poster", "name": "FreqPolicy: Efficient Flow-based Visuomotor Policy via Frequency Consistency", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118381", "abstract": "Generative modeling-based visuomotor policies have been widely adopted in robotic manipulation attributed to their ability to model multimodal action distributions. However, the high inference cost of multi-step sampling limits their applicability in real-time robotic systems. To address this issue, existing approaches accelerate the sampling process in generative modeling-based visuomotor policies by adapting acceleration techniques originally developed for image generation, such as Consistency Models and Consistency-FM. Despite this progress, a major distinction remains: image generation typically involves producing independent samples without temporal dependencies, whereas robotic manipulation involves generating time-series action trajectories that require continuity and temporal coherence. To effectively exploit temporal information in robotic manipulation, we propose FreqPolicy, a novel approach that first imposes frequency consistency constraints on flow-based visuomotor policies. Our work enables the action model to capture temporal structure effectively while supporting efficient, high-quality one-step action generation. 
Inspired by advances in time-series forecasting and speech processing, we introduce a frequency consistency constraint objective that enforces alignment of frequency-domain action features across different timesteps along the flow, thereby promoting convergence of one-step action generation toward the target distribution. In addition, we design an adaptive consistency loss to capture structural temporal variations inherent in robotic manipulation tasks. We assess FreqPolicy on $53$ tasks across $3$ simulation benchmarks, demonstrating its superiority over existing one-step action generators. We further integrate FreqPolicy into the vision-language-action (VLA) model and achieve acceleration without performance degradation on the $40$ tasks of Libero. We also demonstrate efficiency and effectiveness in real-world robotic scenarios at an inference frequency of $93.5~\mathrm{Hz}$. The code will be publicly available.", "arxiv_id": "2506.08822v1", "arxiv_authors": ["Yifei Su", "Ning Liu", "Dong Chen", "Zhen Zhao", "Kun Wu", "Meng Li", "Zhiyuan Xu", "Zhengping Che", "Jian Tang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a22f"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1010129, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a69e"}, "filepath": "data/2506.01583v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994059781750012, "type": "Poster", "name": "FreqPolicy: Frequency Autoregressive Visuomotor Policy with Continuous Tokens", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118407", "abstract": "Learning effective visuomotor policies for robotic manipulation is challenging, as it requires generating precise actions while maintaining computational efficiency. Existing methods remain unsatisfactory due to inherent limitations in their action representations and basic network architectures. We observe that representing actions in the frequency domain captures the structured nature of motion more effectively: low-frequency components reflect global movement patterns, while high-frequency components encode fine local details. Additionally, robotic manipulation tasks of varying complexity demand different levels of modeling precision across these frequency bands. Motivated by this, we propose a novel paradigm for visuomotor policy learning that progressively models hierarchical frequency components. To further enhance precision, we introduce continuous latent representations that maintain smoothness and continuity in the action space. 
Extensive experiments across diverse 2D and 3D robotic manipulation benchmarks demonstrate that our approach outperforms existing methods in both accuracy and efficiency, showcasing the potential of a frequency-domain autoregressive framework with continuous tokens for generalized robotic manipulation.", "arxiv_id": "2506.01583v2", "arxiv_authors": ["Yiming Zhong", "Yumeng Liu", "Chuyang Xiao", "Zemin Yang", "Youzhuo Wang", "Yufei Zhu", "Ye Shi", "Yujing Sun", "Xinge Zhu", "Yuexin Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a230"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1126785, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a69f"}, "filepath": "data/2505.15439v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990300027761699, "type": "Poster", "name": "FRN: Fractal-Based Recursive Spectral Reconstruction Network", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119659", "abstract": "Generating hyperspectral images (HSIs) from RGB images through spectral reconstruction can significantly reduce the cost of HSI acquisition. In this paper, we propose a Fractal-Based Recursive Spectral Reconstruction Network (FRN), which differs from existing paradigms that attempt to directly integrate the full-spectrum information from the R, G, and B channels in a one-shot manner. Instead, it treats spectral reconstruction as a progressive process, predicting from broad to narrow bands or employing a coarse-to-fine approach for predicting the next wavelength. Inspired by fractals in mathematics, FRN establishes a novel spectral reconstruction paradigm by recursively invoking an atomic reconstruction module. In each invocation, only the spectral information from neighboring bands is used to provide clues for the generation of the image at the next wavelength, which follows the low-rank property of spectral data. Moreover, we design a band-aware state space model that employs a pixel-differentiated scanning strategy at different stages of the generation process, further suppressing interference from low-correlation regions caused by reflectance differences. 
Through extensive experimentation across different datasets, FRN achieves superior reconstruction performance compared to state-of-the-art methods in both quantitative and qualitative evaluations.", "arxiv_id": "2505.15439v1", "arxiv_authors": ["Ge Meng", "Zhongnan Cai", "Ruizhe Chen", "Jingyan Tu", "Yingying Wang", "Yue Huang", "Xinghao Ding"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a231"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1003218, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a0"}, "filepath": "data/2506.20977v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997514252094489, "type": "Poster", "name": "From Cradle to Cane: A Two-Pass Framework for High-Fidelity Lifespan Face Aging", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119168", "abstract": "Face aging has become a crucial task in computer vision, with applications ranging from entertainment to healthcare. However, existing methods struggle with achieving a realistic and seamless transformation across the entire lifespan, especially when handling large age gaps or extreme head poses. The core challenge lies in balancing $age\ accuracy$ and $identity\ preservation$\u2014what we refer to as the $Age\text{-}ID\ trade\text{-}off$. Most prior methods either prioritize age transformation at the expense of identity consistency or vice versa. In this work, we address this issue by proposing a $two\text{-}pass$ face aging framework, named $Cradle2Cane$, based on few-step text-to-image (T2I) diffusion models. The first pass focuses on solving $age\ accuracy$ by introducing an adaptive noise injection ($AdaNI$) mechanism. This mechanism is guided by including prompt descriptions of age and gender for the given person as the textual condition. Also, by adjusting the noise level, we can control the strength of aging while allowing more flexibility in transforming the face. However, identity preservation is weakly ensured here to facilitate stronger age transformations. In the second pass, we enhance $identity\ preservation$ while maintaining age-specific features by conditioning the model on two identity-aware embeddings ($IDEmb$): $SVR\text{-}ArcFace$ and $Rotate\text{-}CLIP$. This pass allows for denoising the transformed image from the first pass, ensuring stronger identity preservation without compromising the aging accuracy. Both passes are $jointly\ trained\ in\ an\ end\text{-}to\text{-}end\ way$. Extensive experiments on the CelebA-HQ test dataset, evaluated through Face++ and Qwen-VL protocols, show that our $Cradle2Cane$ outperforms existing face aging methods in age accuracy and identity consistency. Additionally, $Cradle2Cane$ demonstrates superior robustness when applied to in-the-wild human face images, where prior methods often fail. 
This significantly broadens its applicability to more diverse and unconstrained real-world scenarios.", "arxiv_id": "2506.20977v2", "arxiv_authors": ["Tao Liu", "Dafeng Zhang", "Gengchen Li", "Shizhuo Liu", "Yongqi Song", "Senmao Li", "Shiqi Yang", "Boqian Li", "Kai Wang", "Yaxing Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a232"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1144490, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a1"}, "filepath": "data/2506.12779v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991285310314709, "type": "Poster", "name": "From Experts to a Generalist: Toward General Whole-Body Control for Humanoid Robots", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117371", "abstract": "Achieving general agile whole-body control on humanoid robots remains a major challenge due to diverse motion demands and data conflicts. While existing frameworks excel in training single motion-specific policies, they struggle to generalize across highly varied behaviors due to conflicting control requirements and mismatched data distributions. In this work, we propose BUMBLEBEE (BB), an expert-generalist learning framework that combines motion clustering and sim-to-real adaptation to overcome these challenges. BB first leverages an autoencoder-based clustering method to group behaviorally similar motions using motion features and motion descriptions. Expert policies are then trained within each cluster and refined with real-world data through iterative delta action modeling to bridge the sim-to-real gap. Finally, these experts are distilled into a unified generalist controller that preserves agility and robustness across all motion types. Experiments on two simulations and a real humanoid robot demonstrate that BB achieves state-of-the-art general whole-body control, setting a new benchmark for agile, robust, and generalizable humanoid performance in the real world.", "arxiv_id": "2506.12779v3", "arxiv_authors": ["Yuxuan Wang", "Ming Yang", "Ziluo Ding", "Yu Zhang", "Weishuai Zeng", "Xinrun Xu", "Haobin Jiang", "Zongqing Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a233"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5958390, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a2"}, "filepath": "data/2503.22976v5.png", "tags": [], "_media_type": "image", "_rand": 0.9995059390905459, "type": "Poster", "name": "From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121742", "abstract": "Recent advances in LVLMs have improved vision-language understanding, but they still struggle with spatial perception, limiting their ability to reason about complex 3D scenes. Unlike previous approaches that incorporate 3D representations into models to improve spatial understanding, we aim to unlock the potential of VLMs by leveraging spatially relevant image data. 
To this end, we introduce a novel 2D spatial data generation and annotation pipeline built upon scene data with 3D ground-truth. This pipeline enables the creation of a diverse set of spatial tasks, ranging from basic perception tasks to more complex reasoning tasks. Leveraging this pipeline, we construct SPAR-7M, a large-scale dataset generated from thousands of scenes across multiple public datasets. In addition, we introduce SPAR-Bench, a benchmark designed to offer a more comprehensive evaluation of spatial capabilities compared to existing spatial benchmarks, supporting both single-view and multi-view inputs. Training on both SPAR-7M and large-scale 2D datasets enables our models to achieve state-of-the-art performance on 2D spatial benchmarks. Further fine-tuning on 3D task-specific datasets yields competitive results, underscoring the effectiveness of our dataset in enhancing spatial reasoning.", "arxiv_id": "2503.22976v5", "arxiv_authors": ["Jiahui Zhang", "Yurui Chen", "Yanpeng Zhou", "Yueming Xu", "Ze Huang", "Jilin Mei", "Junhui Chen", "Yu-Jie Yuan", "Xinyue Cai", "Guowei Huang", "Xingyue Quan", "Hang Xu", "Li Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a234"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.492Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2243059, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a3"}, "filepath": "data/2510.19654v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992585099303948, "type": "Poster", "name": "From Forecasting to Planning: Policy World Model for Collaborative State-Action Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115790", "abstract": "Despite remarkable progress in driving world models, their potential for autonomous systems remains largely untapped: the world models are mostly learned for world simulation and decoupled from trajectory planning. While recent efforts aim to unify world modeling and planning in a single framework, the synergistic facilitation mechanism of world modeling for planning still requires further exploration. In this work, we introduce a new driving paradigm named Policy World Model (PWM), which not only integrates world modeling and trajectory planning within a unified architecture, but is also able to benefit planning using the learned world knowledge through the proposed action-free future state forecasting scheme. Through collaborative state-action prediction, PWM can mimic the human-like anticipatory perception, yielding more reliable planning performance. To facilitate the efficiency of video forecasting, we further introduce a new image tokenizer with context-guided compression and decoding alongside a dynamic focal loss. Despite utilizing only front camera input, our method matches or exceeds state-of-the-art approaches that rely on multi-view and multi-modal inputs. 
Code and model weights will be released.", "arxiv_id": "2510.19654v1", "arxiv_authors": ["Zhida Zhao", "Talas Fu", "Yifan Wang", "Lijun Wang", "Huchuan Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a235"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1043434, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a4"}, "filepath": "data/2506.04897v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992963938265849, "type": "Poster", "name": "From Objects to Anywhere: A Holistic Benchmark for Multi-level Visual Grounding in 3D Scenes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121709", "abstract": "3D visual grounding has made notable progress in localizing objects within complex 3D scenes. However, grounding referring expressions beyond objects in 3D scenes remains unexplored. In this paper, we introduce Anywhere3D-Bench, a holistic 3D visual grounding benchmark consisting of 2,632 referring expression-3D bounding box pairs spanning four different grounding levels: human-activity areas, unoccupied space beyond objects, individual objects in the scene, and fine-grained object parts. We assess a range of state-of-the-art 3D visual grounding methods alongside large language models (LLMs) and multimodal LLMs (MLLMs) on Anywhere3D-Bench. Experimental results reveal that space-level and part-level visual grounding pose the greatest challenges: space-level tasks require a more comprehensive spatial reasoning ability, for example, modeling distances and spatial relations within 3D space, while part-level tasks demand fine-grained perception of object composition. Even the best performance model, OpenAI o4-mini, achieves only 22.94% accuracy on space-level tasks and 33.68% on part-level tasks, significantly lower than its performance on area-level and object-level tasks. These findings underscore a critical gap in current models\u2019 capacity to understand and reason about 3D scene beyond object-level semantics.", "arxiv_id": "2506.04897v2", "arxiv_authors": ["Tianxu Wang", "Zhuofan Zhang", "Ziyu Zhu", "Yue Fan", "Jing Xiong", "Pengxiang Li", "Xiaojian Ma", "Qing Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a236"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2813611, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a5"}, "filepath": "data/2510.22577v1.png", "tags": [], "_media_type": "image", "_rand": 0.999848281131581, "type": "Poster", "name": "From Pixels to Views: Learning Angular-Aware and Physics-Consistent Representations for Light Field Microscopy", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117859", "abstract": "Light field microscopy (LFM) has become an emerging tool in neuroscience for large-scale neural imaging in vivo, with XLFM (eXtended Light Field Microscopy) notable for its single-exposure volumetric imaging, broad field of view, and high temporal resolution. 
However, learning-based 3D reconstruction in XLFM remains underdeveloped due to two core challenges: the absence of standardized datasets and the lack of methods that can efficiently model its angular\u2013spatial structure while remaining physically grounded. We address these challenges by introducing three key contributions. First, we construct the XLFM-Zebrafish benchmark, a large-scale dataset and evaluation suite for XLFM reconstruction. Second, we propose Masked View Modeling for Light Fields (MVM-LF), a self-supervised task that learns angular priors by predicting occluded views, improving data efficiency. Third, we formulate the Optical Rendering Consistency Loss (ORC Loss), a differentiable rendering constraint that enforces alignment between predicted volumes and their PSF-based forward projections. On the XLFM-Zebrafish benchmark, our method improves PSNR by 7.7\% over state-of-the-art baselines. Code, dataset, and evaluation protocol are publicly available at: xxx.", "arxiv_id": "2510.22577v1", "arxiv_authors": ["Feng He", "Guodong Tan", "Qiankun Li", "Jun Yu", "Quan Wen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a237"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1116405, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a6"}, "filepath": "data/2506.05274v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992657815295451, "type": "Poster", "name": "From Play to Replay: Composed Video Retrieval for Sports Highlights", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121717", "abstract": "Composed Video Retrieval (CoVR) retrieves a target video given a query video and a modification text describing the intended change. Existing CoVR benchmarks emphasize appearance shifts or coarse event changes and therefore do not test the ability to capture subtle, fast-paced temporal differences. We introduce TF-CoVR, the first large-scale benchmark dedicated to temporally fine-grained CoVR. TF-CoVR focuses on gymnastics and diving and provides 1.8M triplets drawn from FineGym and FineDiving. Previous CoVR benchmarks that focus on the temporal aspect link each query to a single target segment taken from the same video, limiting their practical usefulness. In TF-CoVR, we instead construct each pair by prompting an LLM with the label differences between clips drawn from different videos; every pair is thus associated with multiple valid target videos (3.9 on average), reflecting real-world tasks such as sports-highlight generation. To model these temporal dynamics, we propose TF-CoVR-Base, a concise two-stage training framework: (i) pre-train a video encoder on fine-grained action classification to obtain temporally discriminative embeddings; (ii) align the composed query with candidate videos using contrastive learning. We conduct the first comprehensive study of image, video, and general multimodal embedding (GME) models on temporally fine-grained composed retrieval in both zero-shot and fine-tuning regimes. 
On TF-CoVR, TF-CoVR-Base improves zero-shot mAP@50 from 5.92 (LanguageBind) to 7.51, and after fine-tuning raises the state of the art from 19.83 to 25.82.", "arxiv_id": "2506.05274v1", "arxiv_authors": ["Animesh Gupta", "Jay Parmar", "Ishan Rajendrakumar Dave", "Mubarak Shah"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a238"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1808058, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a7"}, "filepath": "data/2505.19306v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995877708343673, "type": "Poster", "name": "From Single Images to Motion Policies via Video-Generation Environment Representations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118141", "abstract": "Autonomous robots typically need to construct representations of their surroundings and adapt their motions to the geometry of their environment. Here, we tackle the problem of constructing a policy model for collision-free motion generation, consistent with the environment, from a single input RGB image. Extracting 3D structures from a single image often involves monocular depth estimation. Developments in depth estimation have given rise to large pre-trained models such as \\emph{DepthAnything}. However, using outputs of these models for downstream motion generation is challenging due to frustum-shaped errors that arise. Instead, we propose a framework known as Video-Generation Environment Representation (VGER), which leverages the advances of large-scale video generation models to generate a moving camera video conditioned on the input image. Frames of this video, which form a multiview dataset, are then input into a pre-trained 3D foundation model to produce a dense point cloud. We then introduce a multi-scale noise approach to train an implicit representation of the environment structure and build a motion generation model that complies with the geometry of the representation. We extensively evaluate VGER over a diverse set of indoor and outdoor environments. 
We demonstrate its ability to produce smooth motions that account for the captured geometry of a scene, all from a single RGB input image.", "arxiv_id": "2505.19306v1", "arxiv_authors": ["Weiming Zhi", "Ziyong Ma", "Tianyi Zhang", "Matthew Johnson-Roberson"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a239"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1022854, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a8"}, "filepath": "data/2504.04827v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998167908654403, "type": "Poster", "name": "From Specificity to Generality: Revisiting Generalizable Artifacts in Detecting Face Deepfakes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118908", "abstract": "Detecting deepfakes has been an increasingly important topic, especially given the rapid development of AI generation techniques.In this paper, we ask: How can we build a universal detection framework that is effective for most facial deepfakes?One significant challenge is the wide variety of deepfake generators available, resulting in varying forgery artifacts (e.g., lighting inconsistency, color mismatch, etc).But should we ``teach\" the detector to learn all these artifacts separately? It is impossible and impractical to elaborate on them all.So the core idea is to pinpoint the more common and general artifacts across different deepfakes.Accordingly, we categorize deepfake artifacts into two distinct yet complementary types: Face Inconsistency Artifacts (FIA) and Up-Sampling Artifacts (USA). 
FIA arise from the challenge of generating all intricate details, inevitably causing inconsistencies between the complex facial features and relatively uniform surrounding areas. USA, on the other hand, are the inevitable traces left by the generator's decoder during the up-sampling process. This categorization stems from the observation that all existing deepfakes typically exhibit one or both of these artifacts. To achieve this, we propose a new data-level pseudo-fake creation framework that constructs fake samples with only the FIA and USA, without introducing extra less-general artifacts. Specifically, we employ super-resolution to simulate the USA, while utilising image-level self-blending on diverse facial regions to create the FIA. We surprisingly found that, with this intuitive design, a standard image classifier trained only with our pseudo-fake data can non-trivially generalize well to previously unseen deepfakes.", "arxiv_id": "2504.04827v2", "arxiv_authors": ["Long Ma", "Zhiyuan Yan", "Jin Xu", "Yize Chen", "Qinglang Guo", "Zhen Bi", "Yong Liao", "Hui Lin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a23a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1009743, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6a9"}, "filepath": "data/2505.20147v3.png", "tags": [], "_media_type": "image", "_rand": 0.999340080353893, "type": "Poster", "name": "FUDOKI: Discrete Flow-based Unified Understanding and Generation via Kinetic-Optimal Velocities", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118024", "abstract": "The rapid progress of large language models (LLMs) has catalyzed the emergence of multimodal large language models (MLLMs) that unify visual understanding and image generation within a single framework. However, most existing MLLMs rely on autoregressive (AR) architectures, which impose inherent limitations on future development, such as the raster-scan order in image generation and restricted reasoning abilities in causal context modeling. In this work, we challenge the dominance of AR-based approaches by introducing FUDOKI, a unified multimodal model purely based on discrete flow matching, as an alternative to conventional AR paradigms. By leveraging metric-induced probability paths with kinetic optimal velocities, our framework goes beyond the previous masking-based corruption process, enabling iterative refinement with self-correction capability and richer bidirectional context integration during generation. To mitigate the high cost of training from scratch, we initialize FUDOKI from pre-trained AR-based MLLMs and adaptively transition to the discrete flow matching paradigm. Experimental results show that FUDOKI achieves performance comparable to state-of-the-art AR-based MLLMs across both visual understanding and image generation tasks, highlighting its potential as a foundation for next-generation unified multimodal models. 
Furthermore, we show that applying test-time scaling techniques to FUDOKI yields significant performance gains, further underscoring its promise for future enhancement through reinforcement learning.", "arxiv_id": "2505.20147v3", "arxiv_authors": ["Jin Wang", "Yao Lai", "Aoxue Li", "Shifeng Zhang", "Jiacheng Sun", "Ning Kang", "Chengyue Wu", "Zhenguo Li", "Ping Luo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a23b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1044557, "mime_type": "image/png", "width": 4134, "height": 5847, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6aa"}, "filepath": "data/2505.20834v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992906369696941, "type": "Poster", "name": "Fully Spiking Neural Networks for Unified Frame-Event Object Tracking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119010", "abstract": "The integration of image and event streams offers a promising approach for achieving robust visual object tracking in complex environments. However, current fusion methods achieve high performance at the cost of significant computational overhead and struggle to efficiently extract the sparse, asynchronous information from event streams, failing to leverage the energy-efficient advantages of event-driven spiking paradigms. To address this challenge, we propose the first fully Spiking Frame-Event Tracking framework called SpikeFET. This network achieves synergistic integration of convolutional local feature extraction and Transformer-based global modeling within the spiking paradigm, effectively fusing frame and event data. To overcome the degradation of translation invariance caused by convolutional padding, we introduce a Random Patchwork Module (RPM) that eliminates positional bias through randomized spatial reorganization and learnable type encoding while preserving residual structures. Furthermore, we propose a Spatial-Temporal Regularization (STR) strategy that overcomes similarity metric degradation from asymmetric features by enforcing spatio-temporal consistency among temporal template features in latent space. Extensive experiments across multiple benchmarks demonstrate that the proposed framework achieves superior tracking accuracy over existing methods while significantly reducing power consumption, attaining an optimal balance between performance and efficiency. 
The code will be released.", "arxiv_id": "2505.20834v2", "arxiv_authors": ["Jingjun Yang", "Liangwei Fan", "Jinpu Zhang", "Xiangkai Lian", "Hui Shen", "Dewen Hu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a23c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1056798, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ab"}, "filepath": "data/2506.18839v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995089343649841, "type": "Poster", "name": "Fused View-Time Attention and Feedforward Reconstruction for 4D Scene Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119396", "abstract": "We propose the first framework capable of computing a 4D spatio-temporal grid of video frames and 3D Gaussian particles for each time step using a feed-forward architecture. Our architecture has two main components, a 4D video model and a 4D reconstruction model. In the first part, we analyze current 4D video diffusion architectures that perform spatial and temporal attention either sequentially or in parallel within a two-stream design. We highlight the limitations of existing approaches and introduce a novel fused architecture that performs spatial and temporal attention within a single layer. The key to our method is a sparse attention pattern, where tokens attend to others in the same frame, at the same timestamp, or from the same viewpoint.In the second part, we extend existing 3D reconstruction algorithms by introducing a Gaussian head, a camera token replacement algorithm, and additional dynamic layers and training. Overall, we establish a new state of the art for 4D generation, improving both visual quality and reconstruction capability.", "arxiv_id": "2506.18839v1", "arxiv_authors": ["Chaoyang Wang", "Ashkan Mirzaei", "Vidit Goel", "Willi Menapace", "Aliaksandr Siarohin", "Avalon Vinella", "Michael Vasilkovsky", "Ivan Skorokhodov", "Vladislav Shakhrai", "Sergey Korolev", "Sergey Tulyakov", "Peter Wonka"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a23d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3681612, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ac"}, "filepath": "data/2510.11092v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992618791300344, "type": "Poster", "name": "Future-Aware End-to-End Driving: Bidirectional Modeling of Trajectory Planning and Scene Evolution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119039", "abstract": "End-to-end autonomous driving methods aim to directly map raw sensor inputs to future driving actions such as planned trajectories, bypassing traditional modular pipelines. While these approaches have shown promise, they often operate under a one-shot paradigm that relies heavily on the current scene context, potentially underestimating the importance of scene dynamics and their temporal evolution. 
This limitation restricts the model\u2019s ability to make informed and adaptive decisions in complex driving scenarios. We propose a new perspective: the future trajectory of an autonomous vehicle is closely intertwined with the evolving dynamics of its environment, and conversely, the vehicle\u2019s own future states can influence how the surrounding scene unfolds. Motivated by this bidirectional relationship, we introduce **SeerDrive**, a novel end-to-end framework that jointly models future scene evolution and trajectory planning in a closed-loop manner. Our method first predicts future bird\u2019s-eye view (BEV) representations to anticipate the dynamics of the surrounding scene, then leverages this foresight to generate future-context-aware trajectories. Two key components enable this: (1) future-aware planning, which injects predicted BEV features into the trajectory planner, and (2) iterative scene modeling and vehicle planning, which refines both future scene prediction and trajectory generation through collaborative optimization. Extensive experiments on the NAVSIM and nuScenes benchmarks show that SeerDrive significantly outperforms existing state-of-the-art methods. Our code will be released.", "arxiv_id": "2510.11092v1", "arxiv_authors": ["Bozhou Zhang", "Nan Song", "Jingyu Li", "Xiatian Zhu", "Jiankang Deng", "Li Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a23e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1025434, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ad"}, "filepath": "data/2505.17685v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999041762446989, "type": "Poster", "name": "FutureSightDrive: Thinking Visually with Spatio-Temporal CoT for Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116861", "abstract": "Visual language models (VLMs) have attracted increasing interest in autonomous driving due to their powerful reasoning capabilities. However, existing VLMs typically utilize discrete text Chain-of-Thought (CoT) tailored to the current scenario, which essentially represents highly abstract and symbolic compression of visual information, potentially leading to spatio-temporal relationship ambiguity and fine-grained information loss. Is autonomous driving better modeled on real-world simulation and imagination than on pure symbolic logic? In this paper, we propose a spatio-temporal CoT reasoning method that enables models to think visually. First, VLM serves as a world model to generate unified image frame for predicting future world states: where perception results (e.g., lane divider and 3D detection) represent the future spatial relationships, and ordinary future frame represent the temporal evolution relationships.This spatio-temporal CoT then serves as intermediate reasoning steps, enabling the VLM to function as an inverse dynamics model for trajectory planning based on current observations and future predictions. To implement visual generation in VLMs, we propose a unified pretraining paradigm integrating visual generation and understanding, along with a progressive visual CoT enhancing autoregressive image generation. 
Extensive experimental results demonstrate the effectiveness of the proposed method, advancing autonomous driving towards visual reasoning.", "arxiv_id": "2505.17685v1", "arxiv_authors": ["Shuang Zeng", "Xinyuan Chang", "Mengwei Xie", "Xinran Liu", "Yifan Bai", "Zheng Pan", "Mu Xu", "Xing Wei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a23f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1020735, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ae"}, "filepath": "data/2506.02882v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998368371015384, "type": "Poster", "name": "GaRA-SAM: Robustifying Segment Anything Model with Gated-Rank Adaptation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117681", "abstract": "Improving robustness of the Segment Anything Model (SAM) to input degradations is critical for its deployment in high-stakes applications such as autonomous driving and robotics. Our approach to this challenge prioritizes three key aspects: first, parameter efficiency to maintain the inherent generalization capability of SAM; second, fine-grained and input-aware robustification to precisely address the input corruption; and third, adherence to standard training protocols for ease of training. To this end, we propose gated-rank adaptation (GaRA). GaRA introduces lightweight adapters into intermediate layers of the frozen SAM, where each adapter dynamically adjusts the effective rank of its weight matrix based on the input by selectively activating (rank-1) components of the matrix using a learned gating module. This adjustment enables fine-grained and input-aware robustification without compromising the generalization capability of SAM. Our model, GaRA-SAM, significantly outperforms prior work on all robust segmentation benchmarks. In particular, it surpasses the previous best IoU score by up to 21.3\\%p on ACDC, a challenging real corrupted image dataset.", "arxiv_id": "2506.02882v2", "arxiv_authors": ["Sohyun Lee", "Yeho Gwon", "Lukas Hoyer", "Suha Kwak"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a240"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.493Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1055853, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6af"}, "filepath": "data/2506.00034v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996153535475665, "type": "Poster", "name": "GaussianFusion: Gaussian-Based Multi-Sensor Fusion for End-to-End Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118571", "abstract": "Multi-sensor fusion is crucial for improving the performance and robustness of end-to-end autonomous driving systems. Existing methods predominantly adopt either attention-based flatten fusion or bird\u2019s eye view fusion through geometric transformations. However, these approaches often suffer from limited interpretability or dense computational overhead. 
In this paper, we introduce GaussianFusion, a Gaussian-based multi-sensor fusion framework for end-to-end autonomous driving. Our method employs intuitive and compact Gaussian representations as intermediate carriers to aggregate information from diverse sensors. Specifically, we initialize a set of 2D Gaussians uniformly across the driving scene, where each Gaussian is parameterized by physical attributes and equipped with explicit and implicit features. These Gaussians are progressively refined by integrating multi-modal features. The explicit features capture rich semantic and spatial information about the traffic scene, while the implicit features provide complementary cues beneficial for trajectory planning. To fully exploit rich spatial and semantic information in Gaussians, we design a cascade planning head that iteratively refines trajectory predictions through interactions with Gaussians. Extensive experiments on the NAVSIM and Bench2Drive benchmarks demonstrate the effectiveness and robustness of the proposed GaussianFusion framework. The source code is included in the supplementary material and will be released publicly.", "arxiv_id": "2506.00034v1", "arxiv_authors": ["Shuai Liu", "Quanmin Liang", "Zefeng Li", "Boyang Li", "Kai Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a241"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1080146, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b0"}, "filepath": "data/2506.09534v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994319414344446, "type": "Poster", "name": "Gaussian Herding across Pens: An Optimal Transport Perspective on Global Gaussian Reduction for 3DGS", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116503", "abstract": "3D Gaussian Splatting (3DGS) has emerged as a powerful technique for radiance field rendering, but it typically requires millions of redundant Gaussian primitives, overwhelming memory and rendering budgets. Existing compaction approaches address this by pruning Gaussians based on heuristic importance scores, without global fidelity guarantee. To bridge this gap, we propose a novel optimal transport perspective that casts 3DGS compaction as global Gaussian mixture reduction. Specifically, we first minimize the composite transport divergence over a KD-tree partition to produce a compact geometric representation, and then decouple appearance from geometry by fine-tuning color and opacity attributes with far fewer Gaussian primitives. Experiments on benchmark datasets show that our method (i) yields negligible loss in rendering quality (PSNR, SSIM, LPIPS) compared to vanilla 3DGS with only 10\\% Gaussians; and (ii) consistently outperforms state-of-the-art 3DGS compaction techniques. 
Notably, our method is applicable to any stage of vanilla or accelerated 3DGS pipelines, providing an efficient and agnostic pathway to lightweight neural rendering.", "arxiv_id": "2506.09534v2", "arxiv_authors": ["Tao Wang", "Mengyu Li", "Geduo Zeng", "Cheng Meng", "Qiong Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a242"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2597259, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b1"}, "filepath": "data/2506.08710v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997862920646445, "type": "Poster", "name": "GaussianWorld: A Large Dataset and Comprehensive Benchmark for Language Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121873", "abstract": "3D Gaussian Splatting (3DGS) serves as a highly performant and efficient encoding of scene geometry, appearance, and semantics. Moreover, grounding language in 3D scenes has proven to be an effective strategy for 3D scene understanding. Current Language Gaussian Splatting line of work fall into three main groups: (i) per-scene optimization-based, (ii) per-scene optimization-free, and (iii) generalizable approach. However, most of them are evaluated only on rendered 2D views of a handful of scenes and viewpoints close to the training views, limiting ability and insight into holistic 3D understanding. To address this gap, we propose the first large-scale benchmark that systematically assesses these three groups of methods directly in 3D space, evaluating on 1060 scenes across three indoor datasets and one outdoor dataset. Benchmark results demonstrate a clear advantage of the generalizable paradigm, particularly in relaxing the scene-specific limitation, enabling fast feed-forward inference on novel scenes, and achieving superior segmentation performance. We further introduce GaussianWorld-49K -- a carefully curated 3DGS dataset comprising of around 49K diverse indoor and outdoor scenes trained from multiple sources, with which we demonstrate generalizable approach could harness strong data priors. Our code, benchmark and datasets will be made public to accelerate research in generalizable 3DGS scene understanding.", "arxiv_id": "2506.08710v1", "arxiv_authors": ["Mengjiao Ma", "Qi Ma", "Yue Li", "Jiahuan Cheng", "Runyi Yang", "Bin Ren", "Nikola Popovic", "Mingqiang Wei", "Nicu Sebe", "Luc Van Gool", "Theo Gevers", "Martin R. 
Oswald", "Danda Pani Paudel"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a243"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3596539, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b2"}, "filepath": "data/2411.18624v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997493878188093, "type": "Poster", "name": "GeneMAN: Generalizable Single-Image 3D Human Reconstruction from Multi-Source Human Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118369", "abstract": "Given a single in-the-wild human photo, it remains a challenging task to reconstruct a high-fidelity 3D human model. Existing methods face difficulties including a) the varying body proportions captured by in-the-wild human images; b) diverse personal belongings within the shot; and c) ambiguities in human postures and inconsistency in human textures. In addition, the scarcity of high-quality human data intensifies the challenge. To address these problems, we propose a Generalizable image-to-3D huMAN reconstruction framework, dubbed GeneMAN, building upon a comprehensive multi-source collection of high-quality human data, including 3D scans, multi-view videos, single photos, and our generated synthetic human data. GeneMAN encompasses three key modules. 1) Without relying on parametric human models (e.g., SMPL), GeneMAN first trains a human-specific text-to-image diffusion model and a view-conditioned diffusion model, serving as GeneMAN 2D human prior and 3D human prior for reconstruction, respectively. 2) With the help of the pretrained human prior models, the Geometry Initialization-&-Sculpting pipeline is leveraged to recover high-quality 3D human geometry given a single image. 3) To achieve high-fidelity 3D human textures, GeneMAN employs the Multi-Space Texture Refinement pipeline, consecutively refining textures in the latent and the pixel spaces. Extensive experimental results demonstrate that GeneMAN could generate high-quality 3D human models from a single image input, outperforming prior state-of-the-art methods. 
Notably, GeneMAN could reveal much better generalizability in dealing with in-the-wild images, often yielding high-quality 3D human models in natural poses with common items, regardless of the body proportions in the input images.", "arxiv_id": "2411.18624v2", "arxiv_authors": ["Wentao Wang", "Hang Ye", "Fangzhou Hong", "Xue Yang", "Jianfu Zhang", "Yizhou Wang", "Ziwei Liu", "Liang Pan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a244"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3582911, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b3"}, "filepath": "data/2509.25638v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990836519021459, "type": "Poster", "name": "Generalized Contrastive Learning for Universal Multimodal Retrieval", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117866", "abstract": "Despite their consistent performance improvements, cross-modal retrieval models (e.g., CLIP) show degraded performances with retrieving keys composed of fused image-text modality (e.g., Wikipedia pages with both images and text). To address this critical challenge, multimodal retrieval has been recently explored to develop a unified single retrieval model capable of retrieving keys across diverse modality combinations. A common approach involves constructing new composed sets of image-text triplets (e.g., retrieving a pair of image and text given a query image). However, such an approach requires careful curation to ensure the dataset quality and fails to generalize to unseen modality combinations. To overcome these limitations, this paper proposes Generalized Contrastive Learning (GCL), a novel loss formulation that improves multimodal retrieval performance without the burdensome need for new dataset curation. Specifically, GCL operates by enforcing contrastive learning across all modalities within a mini-batch, utilizing existing image-caption paired datasets to learn a unified representation space. 
We demonstrate the effectiveness of GCL by showing consistent performance improvements on off-the-shelf multimodal retrieval models (e.g., VISTA, CLIP, and TinyCLIP) using the M-BEIR, MMEB, and CoVR benchmarks.", "arxiv_id": "2509.25638v1", "arxiv_authors": ["Jungsoo Lee", "Janghoon Cho", "Hyojin Park", "Munawar Hayat", "Kyuwoong Hwang", "Fatih Porikli", "Sungha Choi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a245"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1122393, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b4"}, "filepath": "data/2504.13169v3.png", "tags": [], "_media_type": "image", "_rand": 0.999948915485264, "type": "Poster", "name": "Generate, but Verify: Reducing Hallucination in Vision-Language Models with Retrospective Resampling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120029", "abstract": "Vision-Language Models (VLMs) excel at visual understanding but often suffer from visual hallucinations, where they generate descriptions of nonexistent objects, actions, or concepts, posing significant risks in safety-critical applications. Existing hallucination mitigation methods typically follow one of two paradigms: generation adjustment, which modifies decoding behavior to align text with visual inputs, and post-hoc verification, where external models assess and correct outputs. While effective, generation adjustment methods often rely on heuristics and lack correction mechanisms, while post-hoc verification is complicated, typically requiring multiple models and tending to reject outputs rather than refine them. In this work, we introduce REVERSE, a unified framework that integrates hallucination-aware training with on-the-fly self-verification. By leveraging a new hallucination-verification dataset containing over 1.3M semi-synthetic samples, along with a novel inference-time retrospective resampling technique, our approach enables VLMs to both detect hallucinations during generation and dynamically revise those hallucinations. Our evaluations show that REVERSE achieves state-of-the-art hallucination reduction, outperforming the best existing methods by up to 12% on CHAIR-MSCOCO and 34% on HaloQuest.", "arxiv_id": "2504.13169v3", "arxiv_authors": ["Tsung-Han Wu", "Heekyung Lee", "Jiaxin Ge", "Joseph E. Gonzalez", "Trevor Darrell", "David M. Chan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a246"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1169456, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b5"}, "filepath": "data/2506.02473v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995120722919082, "type": "Poster", "name": "Generative Perception of Shape and Material from Differential Motion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118348", "abstract": "Perceiving the shape and material of an object from a single image is inherently ambiguous, especially when lighting is unknown and unconstrained. 
Despite this, humans can often disentangle shape and material, and when they are uncertain, they often move their head slightly or rotate the object to help resolve the ambiguities. Inspired by this behavior, we introduce a novel conditional denoising-diffusion model that generates samples of shape-and-material maps from a given short input video of an object undergoing differential motions. Our parameter-efficient architecture allows training directly in pixel-space and joint, generative disentanglement of multiple object attributes simultaneously. Trained on a modest number of synthetic object-motion videos with supervision on shape and material, the model exhibits compelling emergent properties: for static observations, it produces diverse, multimodal predictions of plausible shape-and-material maps that capture the inherent ambiguities; and when objects move, the distributions quickly converge to more accurate explanations. Meanwhile, it produces high-quality shape-and-material estimates on less ambiguous, real-world objects. By moving beyond single view to continuous observations, our work suggests an avenue of generative perception for improving visual reasoning for physically-embodied systems.", "arxiv_id": "2506.02473v1", "arxiv_authors": ["Xinran Nicole Han", "Ko Nishino", "Todd Zickler"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a247"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1007126, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b6"}, "filepath": "data/2505.07344v5.png", "tags": [], "_media_type": "image", "_rand": 0.9992164536822282, "type": "Poster", "name": "Generative Pre-trained Autoregressive Diffusion Transformer", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116094", "abstract": "In this work, we present GPDiT, a Generative Pre-trained Autoregressive Diffusion Transformer that unifies the strengths of diffusion and autoregressive modeling for long-range video synthesis, within a continuous latent space. Instead of predicting discrete tokens, GPDiT autoregressively predicts future latent frames using a diffusion loss, enabling natural modeling of motion dynamics and semantic consistency across frames. This continuous autoregressive framework not only enhances generation quality but also endows the model with representation capabilities. Additionally, we introduce a lightweight causal attention variant and a parameter-free rotation-based time-conditioning mechanism, improving both the training and inference efficiency. 
Extensive experiments demonstrate that GPDiT achieves strong performance in video generation quality, video representation ability, and few-shot learning tasks, highlighting its potential as an effective framework for video modeling in continuous space.", "arxiv_id": "2505.07344v5", "arxiv_authors": ["Yuan Zhang", "Jiacheng Jiang", "Guoqing Ma", "Zhiying Lu", "Haoyang Huang", "Jianlong Yuan", "Nan Duan", "Daxin Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a248"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1949882, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b7"}, "filepath": "data/2503.05153v2.png", "tags": [], "_media_type": "image", "_rand": 0.999623761274975, "type": "Poster", "name": "Generative Trajectory Stitching through Diffusion Composition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117705", "abstract": "Effective trajectory stitching for long-horizon planning is a significant challenge in robotic decision-making. While diffusion models have shown promise in planning, they are limited to solving tasks similar to those seen in their training data. We propose CompDiffuser, a novel generative approach that can solve new tasks by learning to compositionally stitch together shorter trajectory chunks from previously seen tasks. Our key insight is modeling the trajectory distribution by subdividing it into overlapping chunks and learning their conditional relationships through a single bidirectional diffusion model. This allows information to propagate between segments during generation, ensuring physically consistent connections. We conduct experiments on benchmark tasks of various difficulties, covering different environment sizes, agent state dimension, trajectory types, training data quality, and show that CompDiffuser significantly outperforms existing methods.", "arxiv_id": "2503.05153v2", "arxiv_authors": ["Yunhao Luo", "Utkarsh A. Mishra", "Yilun Du", "Danfei Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a249"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 990917, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b8"}, "filepath": "data/2506.07497v4.png", "tags": [], "_media_type": "image", "_rand": 0.9992092465243118, "type": "Poster", "name": "Genesis: Multimodal Driving Scene Generation with Spatio-Temporal and Cross-Modal Consistency", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118165", "abstract": "We present Genesis, a unified framework for joint generation of multi-view driving videos and LiDAR sequences with spatio-temporal and cross-modal consistency. Genesis employs a two-stage architecture that integrates a DiT-based video diffusion model with 3D-VAE encoding, and a BEV-aware LiDAR generator with NeRF-based rendering and adaptive sampling. Both modalities are directly coupled through a shared latent space, enabling coherent evolution across visual and geometric domains. 
To guide the generation with structured semantics, we introduce DataCrafter, a captioning module built on vision-language models that provides scene-level and instance-level supervision. Extensive experiments on the nuScenes benchmark demonstrate that Genesis achieves state-of-the-art performance across video and LiDAR metrics (FVD 16.95, FID 4.24, Chamfer 0.611), and benefits downstream tasks including segmentation and 3D detection, validating the semantic fidelity and practical utility of the generated data.", "arxiv_id": "2506.07497v4", "arxiv_authors": ["Xiangyu Guo", "Zhanqian Wu", "Kaixin Xiong", "Ziyang Xu", "Lijun Zhou", "Gangwei Xu", "Shaoqing Xu", "Haiyang Sun", "Bing Wang", "Guang Chen", "Hangjun Ye", "Wenyu Liu", "Xinggang Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a24a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1243989, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6b9"}, "filepath": "data/2506.06220v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999609804285394, "type": "Poster", "name": "GenIR: Generative Visual Feedback for Mental Image Retrieval", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119413", "abstract": "Vision-language models (VLMs) have shown strong performance on text-to-image retrieval benchmarks. However, bridging this success to real-world applications remains a challenge. In practice, human search behavior is rarely a one-shot action. Instead, it is often a multi-round process guided by clues in mind. That is, a mental image ranging from vague recollections to vivid mental representations of the target image. Motivated by this gap, we study the task of Mental Image Retrieval (MIR), which targets the realistic yet underexplored setting where users refine their search for a mentally envisioned image through multi-round interactions with an image search engine. Central to successful interactive retrieval is the capability of machines to provide users with clear, actionable feedback; however, existing methods rely on indirect or abstract verbal feedback, which can be ambiguous, misleading, or ineffective for users to refine the query. To overcome this, we propose GenIR, a generative multi-round retrieval paradigm leveraging diffusion-based image generation to explicitly reify the AI system\u2019s understanding at each round. These synthetic visual representations provide clear, interpretable feedback, enabling users to refine their queries intuitively and effectively. We further introduce a fully automated pipeline to generate a high-quality multi-round MIR dataset. Experimental results demonstrate that GenIR significantly outperforms existing interactive methods in the MIR scenario. 
This work establishes a new task with a dataset and an effective generative retrieval method, providing a foundation for future research in this direction.", "arxiv_id": "2506.06220v1", "arxiv_authors": ["Diji Yang", "Minghao Liu", "Chung-Hsiang Lo", "Yi Zhang", "James Davis"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a24b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1029167, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ba"}, "filepath": "data/2505.24870v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994552624212056, "type": "Poster", "name": "GenSpace: Benchmarking Spatially-Aware Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121370", "abstract": "Humans can intuitively compose and arrange scenes in the 3D space for photography. However, can advanced AI image generators plan scenes with similar 3D spatial awareness when creating images from text or image prompts? We present GenSpace, a novel benchmark and evaluation pipeline to comprehensively assess the spatial awareness of current image generation models. Furthermore, standard evaluations using general Vision-Language Models (VLMs) frequently fail to capture the detailed spatial errors. To handle this challenge, we propose a specialized evaluation pipeline and metric, which reconstructs 3D scene geometry using multiple visual foundation models and provides a more accurate and human-aligned metric of spatial faithfulness. Our findings show that while AI models create visually appealing images and can follow general instructions, they struggle with specific 3D details like object placement, relationships, and measurements. We summarize three core limitations in the spatial perception of current state-of-the-art image generation models: 1) Object Perspective Understanding, 2) Egocentric-Allocentric Transformation, and 3) Metric Measurement Adherence, highlighting possible directions for improving spatial intelligence in image generation.", "arxiv_id": "2505.24870v2", "arxiv_authors": ["Zehan Wang", "Jiayang Xu", "Ziang Zhang", "Tianyu Pang", "Chao Du", "Hengshuang Zhao", "Zhou Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a24c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.494Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3013037, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6bb"}, "filepath": "data/2506.10337v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992885337577507, "type": "Poster", "name": "GeoCAD: Local Geometry-Controllable CAD Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120273", "abstract": "Local geometry-controllable computer-aided design (CAD) generation aims to modify local parts of CAD models automatically, enhancing design efficiency. 
It also ensures that the shapes of newly generated local parts follow user-specific geometric instructions (e.g., an isosceles right triangle or a rectangle with one corner cut off). However, existing methods encounter challenges in achieving this goal. Specifically, they either lack the ability to follow textual instructions or are unable to focus on the local parts. To address this limitation, we introduce GeoCAD, a user-friendly and local geometry-controllable CAD generation method. Specifically, we first propose a complementary captioning strategy to generate geometric instructions for local parts. This strategy involves vertex-based and VLLM-based captioning for systematically annotating simple and complex parts, respectively. In this way, we caption $\\sim$221k different local parts in total. In the training stage, given a CAD model, we randomly mask a local part. Then, using its geometric instruction and the remaining parts as input, we prompt large language models (LLMs) to predict the masked part. During inference, users can specify any local part for modification while adhering to a variety of predefined geometric instructions. Extensive experiments demonstrate the effectiveness of GeoCAD in generation quality, validity and text-to-CAD consistency.", "arxiv_id": "2506.10337v2", "arxiv_authors": ["Zhanwei Zhang", "Kaiyuan Liu", "Junjie Liu", "Wenxiao Wang", "Binbin Lin", "Liang Xie", "Chen Shen", "Deng Cai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a24d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1180009, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6bc"}, "filepath": "data/2510.03110v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994647024660537, "type": "Poster", "name": "GeoDiff: Geometry-Aware Diffusion for Reference-Driven Image Completion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120246", "abstract": "Reference-driven image completion, which restores missing regions in a target view using additional images, is particularly challenging when the target view differs significantly from the references. Existing generative methods rely solely on diffusion priors and, without geometric cues such as camera pose or depth, often produce misaligned or implausible content. We propose \\textbf{GeoDiff}, a novel framework that incorporates explicit 3D structural guidance to enforce geometric consistency in the completed regions, setting it apart from prior image-only approaches. GeoDiff introduces two key ideas: conditioning the diffusion process on projected point clouds to infuse geometric information, and applying target-aware masking to guide the model toward relevant reference cues. The framework features a dual-branch diffusion architecture. One branch synthesizes the missing regions from the masked target, while the other extracts geometric features from the projected point cloud. Joint self-attention across branches ensures coherent and accurate completion. To address regions visible in references but absent in the target, we project the target view into each reference to detect occluded areas, which are then masked during training. This target-aware masking directs the model to focus on useful cues, enhancing performance in difficult scenarios. 
To our knowledge, GeoDiff is the first to tightly couple explicit 3D geometry with diffusion-based image completion in a unified framework. Experiments show that GeoDiff achieves a 17.1% PSNR improvement over state-of-the-art methods, significantly boosting geometric accuracy while maintaining high visual quality.", "arxiv_id": "2510.03110v1", "arxiv_authors": ["Beibei Lin", "Tingting Chen", "Robby T. Tan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a24e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1028593, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6bd"}, "filepath": "data/2509.26016v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992372291322807, "type": "Poster", "name": "GeoLink: Empowering Remote Sensing Foundation Model with OpenStreetMap Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116001", "abstract": "Integrating ground-level geospatial data with rich geographic context--such as OpenStreetMap (OSM)--into remote sensing (RS) foundation models (FMs) is essential for advancing geospatial intelligence and supporting a broad spectrum of tasks. However, modality gap between RS and OSM data--differences in data structure, content, and spatial granularity--makes effective synergy highly challenging, and most existing RS FMs focus on imagery alone. To this end, this study presents GeoLink, a multimodal framework that leverages OSM data to enhance RS FM during both the pretraining and downstream task stages. Specifically, GeoLink enhances RS self-supervised pretraining using multi-granularity learning signals derived from OSM data, guided by cross-modal spatial correlations for information interaction and collaboration. It also introduces image mask-reconstruction to enable sparse input for efficient pretraining. For downstream tasks, GeoLink generates both unimodal and multimodal fine-grained encodings to support a wide range of applications, from common RS interpretation tasks like land cover classification to more comprehensive geographic tasks like urban function zone mapping. Extensive experiments show that incorporating OSM data during pretraining enhances the performance of the RS image encoder, while fusing RS and OSM data in downstream tasks improves the FM\u2019s adaptability to complex geographic scenarios. These results underscore the potential of multimodal synergy in advancing high-level geospatial artificial intelligence. 
Moreover, we find that spatial correlation plays a crucial role in enabling effective multimodal geospatial data integration.", "arxiv_id": "2509.26016v1", "arxiv_authors": ["Lubian Bai", "Xiuyuan Zhang", "Siqi Zhang", "Zepeng Zhang", "Haoyu Wang", "Wei Qin", "Shihong Du"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a24f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1106182, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6be"}, "filepath": "data/2505.21375v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993303783151329, "type": "Poster", "name": "GeoLLaVA-8K: Scaling Remote-Sensing Multimodal Large Language Models to 8K Resolution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118553", "abstract": "Ultra-high-resolution (UHR) remote sensing (RS) imagery offers valuable data for Earth observation but pose challenges for existing multimodal foundation models due to two key bottlenecks: (1) limited availability of UHR training data, and (2) token explosion caused by the large image size. To address data scarcity, we introduce **SuperRS-VQA** (avg. 8,376$\\times$8,376) and **HighRS-VQA** (avg. 2,000$\\times$1,912), the highest-resolution vision-language datasets in RS to date, covering 22 real-world dialogue tasks. To mitigate token explosion, our pilot studies reveal significant redundancy in RS images: crucial information is concentrated in a small subset of object-centric tokens, while pruning background tokens (e.g., ocean or forest) can even improve performance.Motivated by these findings, we propose two strategies: *Background Token Pruning* and *Anchored Token Selection*, to reduce the memory footprint while preserving key semantics.Integrating these techniques, we introduce **GeoLLaVA-8K**, the first RS-focused multimodal large language model capable of handling inputs up to 8K$\\times$8K resolution, built on the LLaVA framework. Trained on SuperRS-VQA and HighRS-VQA, GeoLLaVA-8K sets a new state-of-the-art on the XLRS-Bench. Datasets and code will be released.", "arxiv_id": "2505.21375v1", "arxiv_authors": ["Fengxiang Wang", "Mingshuo Chen", "Yueying Li", "Di Wang", "Haotian Wang", "Zonghao Guo", "Zefan Wang", "Boqi Shan", "Long Lan", "Yulin Wang", "Hongzhen Wang", "Wenjing Yang", "Bo Du", "Jing Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a250"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1112310, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6bf"}, "filepath": "data/2505.13731v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990866491454022, "type": "Poster", "name": "GeoRanker: Distance-Aware Ranking for Worldwide Image Geolocalization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117327", "abstract": "Worldwide image geolocalization\u2014the task of predicting GPS coordinates from images taken anywhere on Earth\u2014poses a fundamental challenge due to the vast diversity in visual content across regions. 
While recent approaches adopt a two-stage pipeline of retrieving candidates and selecting the best match, they typically rely on simplistic similarity heuristics and point-wise supervision, failing to model spatial relationships among candidates. In this paper, we propose **GeoRanker**, a distance-aware ranking framework that leverages large vision-language models to jointly encode query\u2013candidate interactions and predict geographic proximity. In addition, we introduce a *multi-order distance loss* that ranks both absolute and relative distances, enabling the model to reason over structured spatial relationships. To support this, we curate GeoRanking, the first dataset explicitly designed for geographic ranking tasks with multimodal candidate information. GeoRanker achieves state-of-the-art results on two well-established benchmarks (IM2GPS3K and YFCC4K), significantly outperforming current best methods. We also release our code, checkpoint, and dataset online for ease of reproduction.", "arxiv_id": "2505.13731v3", "arxiv_authors": ["Pengyue Jia", "Seongheon Park", "Song Gao", "Xiangyu Zhao", "Sharon Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a251"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1166888, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c0"}, "filepath": "data/2509.18538v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998659172776165, "type": "Poster", "name": "GeoRemover: Removing Objects and Their Causal Visual Artifacts", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117991", "abstract": "Towards intelligent image editing, object removal should eliminate both the target object and its causal visual artifacts, such as shadows and reflections. However, existing image appearance-based methods either follow strictly mask-aligned training and fail to remove these casual effects which are not explicitly masked, or adopt loosely mask-aligned strategies that lack controllability and may unintentionally over-erase other objects. We identify that these limitations stem from ignoring the causal relationship between an object\u2019s geometry presence and its visual effects. To address this limitation, we propose a geometry-aware two-stage framework that decouples object removal into (1) geometry removal and (2) appearance rendering. In the first stage, we remove the object directly from the geometry (e.g., depth) using strictly mask-aligned supervision, enabling structure-aware editing with strong geometric constraints. In the second stage, we render a photorealistic RGB image conditioned on the updated geometry, where causal visual effects are considered implicitly as a result of the modified 3D geometry. To guide learning in the geometry removal stage, we introduce a preference-driven objective based on positive and negative sample pairs, encouraging the model to remove objects as well as their causal visual artifacts while avoiding new structural insertions. 
Extensive experiments demonstrate that our method achieves state-of-the-art performance in removing both objects and their associated artifacts on two popular benchmarks.", "arxiv_id": "2509.18538v2", "arxiv_authors": ["Zixin Zhu", "Haoxiang Li", "Xuelu Feng", "He Wu", "Chunming Qiao", "Junsong Yuan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a252"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1072107, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c1"}, "filepath": "data/2506.00129v1.png", "tags": [], "_media_type": "image", "_rand": 0.999927981458077, "type": "Poster", "name": "Geo-Sign: Hyperbolic Contrastive Regularisation for Geometrically Aware Sign Language Translation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117570", "abstract": "Recent progress in Sign Language Translation has focussed primarily on improving the representational capacity of large language models to incorporate sign-language features. This work explores an alternative direction: enhancing the geometric properties of skeletal representations themselves. We propose Geo-Sign, a method that leverages the properties of hyperbolic geometry to model the hierarchical structure inherent in sign language kinematics. By projecting skeletal features derived from Spatio-Temporal Graph Convolutional Networks (ST-GCNs) into the Poincar\u00e9 ball model, we aim to create more discriminative embeddings, particularly for fine-grained motions like finger articulations. We introduce a hyperbolic projection layer, a weighted Fr\u00e9chet mean aggregation scheme, and a geometric contrastive loss operating directly in hyperbolic space. These components are integrated into an end-to-end translation framework as a regularisation function, to enhance the representations within the language model. This work demonstrates the potential of hyperbolic geometry to improve skeletal representations for Sign Language Translation, improving on SOTA RGB methods while preserving privacy and improving computational efficiency.", "arxiv_id": "2506.00129v1", "arxiv_authors": ["Edward Fish", "Richard Bowden"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a253"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1073900, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c2"}, "filepath": "data/2509.18090v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990635841837245, "type": "Poster", "name": "GeoSVR: Taming Sparse Voxels for Geometrically Accurate Surface Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119090", "abstract": "Reconstructing accurate surfaces with radiance fields has achieved remarkable progress in recent years. However, prevailing approaches, primarily based on Gaussian Splatting, are increasingly constrained by representational bottlenecks. 
In this paper, we introduce GeoSVR, an explicit voxel-based framework that explores and extends the under-investigated potential of sparse voxels for achieving accurate, detailed, and complete surface reconstruction. As strengths, sparse voxels support preserving the coverage completeness and geometric clarity, while corresponding challenges also arise from absent scene constraints and locality in surface refinement. To ensure correct scene convergence, we first propose a Voxel-Uncertainty Depth Constraint that maximizes the effect of monocular depth cues while presenting a voxel-oriented uncertainty to avoid quality degradation, enabling effective and robust scene constraints yet preserving highly accurate geometries. Subsequently, Sparse Voxel Surface Regularization is designed to enhance geometric consistency for tiny voxels and facilitate the voxel-based formation of sharp and accurate surfaces. Extensive experiments demonstrate our superior performance compared to existing methods across diverse challenging scenarios, excelling in geometric accuracy, detail preservation, and reconstruction completeness while maintaining high efficiency. Our code will be made open-source upon acceptance.", "arxiv_id": "2509.18090v1", "arxiv_authors": ["Jiahe Li", "Jiawei Zhang", "Youmin Zhang", "Xiao Bai", "Jin Zheng", "Xiaohan Yu", "Lin Gu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a254"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3851909, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c3"}, "filepath": "data/2509.22700v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993824093735528, "type": "Poster", "name": "Global Prompt Refinement with Non-Interfering Attention Masking for One-Shot Federated Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116940", "abstract": "Federated Prompt Learning (FPL) enables communication-efficient adaptation by tuning lightweight prompts on top of frozen pre-trained models. Existing FPL methods typically rely on global information, which is only available after the second training round, to facilitate collaboration among client models. Therefore, they are inherently dependent on multi-round communication to fully exhibit their strengths. Moreover, existing one-shot federated learning methods typically focus on fitting seen tasks, but lack cross-task generalization. To bridge this gap, we propose the global prompt refinement with non-interfering attention masking (GPR-NIAM) method for one-shot FPL. The core idea is to design a masking mechanism that restricts excessive interaction between the original text embeddings and the learnable prompt embeddings. GPR-NIAM achieves this through the collaboration of two key modules. Firstly, the attention isolation module suppresses attention from the learnable prompt tokens to the original text tokens, and reweights the reverse attention which preserves generalization across tasks. Secondly, the cross-silo collaborative refinement module integrates decentralized visual knowledge into a unified base and calibrates the global prompt through multi-source cross-modal knowledge alignment, further mitigating the inconsistency caused by data heterogeneity. 
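To make the attention-isolation idea described above for GPR-NIAM concrete, here is a minimal, hypothetical PyTorch sketch: an additive mask blocks prompt-token queries from attending to the original text tokens, and the reverse text-to-prompt attention is down-weighted rather than blocked. The specific re-weighting factor, the renormalization step, and the single-head formulation are assumptions, not the authors' implementation.

```python
import torch

def isolation_attention(q, k, v, n_text, reverse_weight=0.5):
    """q, k, v: (B, T, D); the first n_text tokens are original text embeddings,
    the remaining T - n_text tokens are learnable prompt embeddings."""
    B, T, D = q.shape
    logits = (q @ k.transpose(-2, -1)) / D ** 0.5          # (B, T, T)

    # Suppress attention from prompt-token queries to original text-token keys.
    mask = torch.zeros(T, T, device=q.device)
    mask[n_text:, :n_text] = float("-inf")
    attn = (logits + mask).softmax(dim=-1)

    # Re-weight (rather than block) the reverse text -> prompt attention,
    # then renormalize each row so the weights still sum to one.
    scale = torch.ones(T, T, device=q.device)
    scale[:n_text, n_text:] = reverse_weight
    attn = attn * scale
    attn = attn / attn.sum(dim=-1, keepdim=True)

    return attn @ v

if __name__ == "__main__":
    x = torch.randn(2, 10, 16)                             # 7 text + 3 prompt tokens
    print(isolation_attention(x, x, x, n_text=7).shape)    # torch.Size([2, 10, 16])
```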
Extensive experiments conducted on ten benchmark datasets under two tasks show that GPR-NIAM outperforms eight state-of-the-art methods in both class-level and domain-level generalization.", "arxiv_id": "2509.22700v2", "arxiv_authors": ["Zhuang Qi", "Pan Yu", "Lei Meng", "Sijin Zhou", "Han Yu", "Xiaoxiao Li", "Xiangxu Meng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a255"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1065777, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c4"}, "filepath": "data/2508.19972v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991408226486662, "type": "Poster", "name": "GLSim: Detecting Object Hallucinations in LVLMs via Global-Local Similarity", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117360", "abstract": "Object hallucination in large vision-language models presents a significant challenge to their safe deployment in real-world applications. Recent works have proposed object-level hallucination scores to estimate the likelihood of object hallucination; however, these methods typically adopt either a global or local perspective in isolation, which may limit detection reliability. In this paper, we introduce GLSim, a novel training-free object hallucination detection framework that leverages complementary global and local embedding similarity signals between image and text modalities, enabling more accurate and reliable hallucination detection in diverse scenarios. We comprehensively benchmark existing object hallucination detection methods and demonstrate that GLSim achieves superior detection performance, outperforming competitive baselines by a significant margin.", "arxiv_id": "2508.19972v3", "arxiv_authors": ["Seongheon Park", "Sharon Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a256"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061977, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c5"}, "filepath": "data/2510.06046v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997329492892727, "type": "Poster", "name": "GLVD: Guided Learned Vertex Descent", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117874", "abstract": "Existing 3D face modeling methods usually depend on 3D Morphable Models, which inherently constrain the representation capacity to fixed shape priors. Optimization-based approaches offer high-quality reconstructions but tend to be computationally expensive. In this work, we introduce GLVD, a hybrid method for 3D face reconstruction from few-shot images that extends Learned Vertex Descent (LVD) by integrating per-vertex neural field optimization with global structural guidance from dynamically predicted 3D keypoints. By incorporating relative spatial encoding, GLVD iteratively refines mesh vertices without requiring dense 3D supervision. This enables expressive and adaptable geometry reconstruction while maintaining computational efficiency. 
GLVD achieves state-of-the-art performance in single-view settings and remains highly competitive in multi-view scenarios, all while substantially reducing inference time.", "arxiv_id": "2510.06046v1", "arxiv_authors": ["Pol Caselles Rico", "Francesc Moreno Noguer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a257"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3419630, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c6"}, "filepath": "data/2510.17131v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995814579705057, "type": "Poster", "name": "GOOD: Training-Free Guided Diffusion Sampling for Out-of-Distribution Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116282", "abstract": "Recent advancements have explored text-to-image diffusion models for synthesizing out-of-distribution (OOD) samples, substantially enhancing the performance of OOD detection. However, existing approaches typically rely on perturbing text-conditioned embeddings, resulting in semantic instability and insufficient shift diversity, which limit generalization to realistic OOD. To address these challenges, we propose GOOD, a novel and flexible framework that directly guides diffusion sampling trajectories towards OOD regions using off-the-shelf in-distribution (ID) classifiers. GOOD incorporates dual-level guidance: (1) Image-level guidance based on the gradient of log partition to reduce input likelihood, drives samples toward low-density regions in pixel space. (2) Feature-level guidance, derived from k-NN distance in the classifier\u2019s latent space, promotes sampling in feature-sparse regions. Hence, this dual-guidance design enables more controllable and diverse OOD sample generation. Additionally, we introduce a unified OOD score that adaptively combines image and feature discrepancies, enhancing detection robustness. We perform thorough quantitative and qualitative analyses to evaluate the effectiveness of GOOD, demonstrating that training with samples generated by GOOD can notably enhance OOD detection performance.", "arxiv_id": "2510.17131v1", "arxiv_authors": ["Xin Gao", "Jiyao Liu", "Guanghao Li", "Yueming Lyu", "Jianxiong Gao", "Weichen Yu", "Ningsheng Xu", "Liang Wang", "Caifeng Shan", "Ziwei Liu", "Chenyang Si"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a258"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1104976, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c7"}, "filepath": "data/2503.10639v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996349658919333, "type": "Poster", "name": "GoT: Unleashing Reasoning Capability of MLLM for Visual Generation and Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116945", "abstract": "Current image generation and editing methods primarily process textual prompts as direct inputs without explicit reasoning about visual composition or operational steps. 
We present Generation Chain-of-Thought (GoT), a novel paradigm that empowers a Multimodal Large Language Model (MLLM) to first generate an explicit, structured reasoning chain in natural language\u2014detailing semantic relationships, object attributes, and, crucially, precise spatial coordinates\u2014before any image synthesis occurs. This intermediate reasoning output directly guides the subsequent visual generation or editing process. This approach transforms conventional text-to-image generation and editing into a reasoning-guided framework that analyzes semantic relationships and spatial arrangements. We define the formulation of GoT and construct large-scale GoT datasets containing over \\textbf{9M} samples with detailed reasoning chains capturing semantic-spatial relationships. To leverage the advantages of GoT, we implement a unified framework that integrates Qwen2.5-VL for reasoning chain generation with an end-to-end diffusion model enhanced by our novel Semantic-Spatial Guidance Module. Experiments show our GoT framework achieves excellent performance on both generation and editing tasks, with significant improvements over baselines. Additionally, our approach enables interactive visual generation, allowing users to explicitly modify reasoning steps for precise image adjustments. GoT pioneers a new direction for reasoning-driven visual generation and editing, producing images that better align with human intent. We will release our datasets and models to facilitate future research.", "arxiv_id": "2503.10639v1", "arxiv_authors": ["Rongyao Fang", "Chengqi Duan", "Kun Wang", "Linjiang Huang", "Hao Li", "Shilin Yan", "Hao Tian", "Xingyu Zeng", "Rui Zhao", "Jifeng Dai", "Xihui Liu", "Hongsheng Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a259"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3202854, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c8"}, "filepath": "data/2506.11784v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994270932179621, "type": "Poster", "name": "GPLQ: A General, Practical, and Lightning QAT Method for Vision Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119892", "abstract": "Vision Transformers (ViTs) are essential in computer vision but computationally intensive. Model quantization, particularly to low bit-widths like 4-bit, aims to alleviate this difficulty, yet existing Post-Training Quantization (PTQ) and Quantization-Aware Training (QAT) methods exhibit significant limitations. PTQ often incurs a substantial accuracy drop, while QAT achieves high accuracy but suffers from prohibitive computational costs, limited generalization to downstream tasks, training instability, and the lack of an open-source codebase. To address these challenges, this paper introduces General, Practical, and Lightning Quantization (GPLQ), a novel framework designed for efficient and effective ViT quantization. GPLQ is founded on two key empirical insights: the paramount importance of activation quantization and the necessity of preserving the model's original optimization basin to maintain generalization. Consequently, GPLQ employs a sequential activation-first, weights-later strategy. 
Stage 1 keeps weights in FP32 while quantizing activations with a feature mimicking loss in only 1 epoch so that the model stays in the same basin, thereby preserving generalization. Stage 2 quantizes weights using a PTQ method. As a result, GPLQ is 100x faster than existing QAT methods, lowers memory footprint to levels even below FP32 training, and achieves 4-bit model performance that is highly competitive with FP32 models in terms of both accuracy on ImageNet and generalization to diverse downstream tasks, including fine-grained visual classification and object detection. We will release an easy-to-use open-source toolkit supporting multiple vision tasks.", "arxiv_id": "2506.11784v1", "arxiv_authors": ["Guang Liang", "Xinyao Liu", "Jianxin Wu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a25a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1058933, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6c9"}, "filepath": "data/2509.01109v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993594338017951, "type": "Poster", "name": "GPSToken: Gaussian Parameterized Spatially-adaptive Tokenization for Image Representation and Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119331", "abstract": "Effective and efficient tokenization plays an important role in image representation and generation. Conventional methods, constrained by uniform 2D/1D grid tokenization, are inflexible to represent regions with varying shapes and textures and at different locations, limiting their efficacy of feature representation. In this work, we propose **GPSToken**, a novel **G**aussian **P**arameterized **S**patially-adaptive **Token**ization framework, to achieve non-uniform image tokenization by leveraging parametric 2D Gaussians to dynamically model the shape, position, and textures of different image regions. We first employ an entropy-driven algorithm to partition the image into texture-homogeneous regions of variable sizes. Then, we parameterize each region as a 2D Gaussian (mean for position, covariance for shape) coupled with texture features. A specialized transformer is trained to optimize the Gaussian parameters, enabling continuous adaptation of position/shape and content-aware feature extraction. During decoding, Gaussian parameterized tokens are reconstructed into 2D feature maps through a differentiable splatting-based renderer, bridging our adaptive tokenization with standard decoders for end-to-end training. GPSToken disentangles spatial layout (Gaussian parameters) from texture features to enable efficient two-stage generation: structural layout synthesis using lightweight networks, followed by structure-conditioned texture generation. Experiments demonstrate the state-of-the-art performance of GPSToken, which achieves rFID and FID scores of 0.65 and 1.64 on image reconstruction and generation tasks using 128 tokens, respectively. 
Codes and models will be released.", "arxiv_id": "2509.01109v2", "arxiv_authors": ["Zhengqiang Zhang", "Rongyuan Wu", "Lingchen Sun", "Lei Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a25b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1091861, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ca"}, "filepath": "data/2506.02489v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990474769889782, "type": "Poster", "name": "Grasp2Grasp: Vision-Based Dexterous Grasp Translation via Schr\u00f6dinger Bridges", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116528", "abstract": "We propose a new approach to vision-based dexterous grasp translation, which aims to transfer grasp intent across robotic hands with differing morphologies. Given a visual observation of a source hand grasping an object, our goal is to synthesize a functionally equivalent grasp for a target hand without requiring paired demonstrations or hand-specific simulations. We frame this problem as a stochastic transport between grasp distributions using the Schr\u00f6dinger Bridge formalism. Our method learns to map between source and target latent grasp spaces via score and flow matching, conditioned on visual observations. To guide this translation, we introduce physics-informed cost functions that encode alignment in base pose, contact maps, wrench space, and manipulability. Experiments across diverse hand-object pairs demonstrate our approach generates stable, physically grounded grasps with strong generalization. This work enables semantic grasp transfer for heterogeneous manipulators and bridges vision-based grasping with probabilistic generative modeling.", "arxiv_id": "2506.02489v2", "arxiv_authors": ["Tao Zhong", "Jonah Buchanan", "Christine Allen-Blanchette"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a25c"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070309, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6cb"}, "filepath": "data/2507.06806v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996537746017236, "type": "Poster", "name": "GreenHyperSpectra: A multi-source hyperspectral dataset for global vegetation trait prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121449", "abstract": "Plant traits such as leaf carbon content and leaf mass are essential variables in the study of biodiversity and climate change. However, conventional field sampling cannot feasibly cover trait variation at ecologically meaningful spatial scales. Machine learning represents a transformative solution for plant trait prediction across ecosystems, leveraging hyperspectral data from remote sensing. Nevertheless, trait prediction from hyperspectral data is challenged by label scarcity and substantial domain shifts (e.g. 
across sensors, ecological distributions), requiring robust cross-domain methods. Here, we present GreenHyperSpectra, a pretraining dataset encompassing real-world cross-sensor and cross-ecosystem samples designed to benchmark trait prediction with semi- and self-supervised methods. We adopt an evaluation framework encompassing in-distribution and out-of-distribution scenarios. We successfully leverage GreenHyperSpectra to pretrain label-efficient multi-output regression models that outperform the state-of-the-art supervised baseline. Our empirical analyses demonstrate substantial improvements in learning spectral representations for trait prediction, establishing a comprehensive methodological framework to catalyze research at the intersection of representation learning and plant functional traits assessment.", "arxiv_id": "2507.06806v2", "arxiv_authors": ["Eya Cherif", "Arthur Ouaknine", "Luke A. Brown", "Phuong D. Dao", "Kyle R. Kovach", "Bing Lu", "Daniel Mederer", "Hannes Feilhauer", "Teja Kattenborn", "David Rolnick"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a25d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1001349, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6cc"}, "filepath": "data/2505.18700v4.png", "tags": [], "_media_type": "image", "_rand": 0.9995279185378827, "type": "Poster", "name": "GRE Suite: Geo-localization Inference via Fine-Tuned Vision-Language Models and Enhanced Reasoning Chains", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119959", "abstract": "Recent advances in Visual Language Models (VLMs) have demonstrated exceptional performance in visual reasoning tasks. However, geo-localization presents unique challenges, requiring the extraction of multigranular visual cues from images and their integration with external world knowledge for systematic reasoning. Current approaches to geo-localization tasks often lack robust reasoning mechanisms and explainability, limiting their effectiveness. To address these limitations, we propose the Geo Reason Enhancement (GRE) Suite, a novel framework that augments VLMs with structured reasoning chains for accurate and interpretable location inference. The GRE Suite is systematically developed across three key dimensions: dataset, model, and benchmark. First, we introduce GRE30K, a high-quality geo-localization reasoning dataset designed to facilitate fine-grained visual and contextual analysis. Next, we present the GRE model, which employs a multi-stage reasoning strategy to progressively infer scene attributes, local details, and semantic features, thereby narrowing down potential geographic regions with enhanced precision. Finally, we construct the Geo Reason Evaluation Benchmark (GREval-Bench), a comprehensive evaluation framework that assesses VLMs across diverse urban, natural, and landmark scenes to measure both coarse-grained (e.g., country, continent) and fine-grained (e.g., city, street) localization performance. Experimental results demonstrate that GRE significantly outperforms existing methods across all granularities of geo-localization tasks, underscoring the efficacy of reasoning-augmented VLMs in complex geographic inference. 
Code and data will be released at https://anonymous.4open.science/r/GRE-74C0.", "arxiv_id": "2505.18700v4", "arxiv_authors": ["Chun Wang", "Xiaojun Ye", "Xiaoran Pan", "Zihao Pan", "Haofan Wang", "Yiren Song"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a25e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1072226, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6cd"}, "filepath": "data/2505.15879v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999376838158145, "type": "Poster", "name": "GRIT: Teaching MLLMs to Think with Images", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118020", "abstract": "Recent studies have demonstrated the efficacy of using Reinforcement Learning (RL) in building reasoning models that articulate chains of thoughts prior to producing final answers. However, despite ongoing advances that aim at enabling reasoning for vision-language tasks, existing open-source visual reasoning models typically generate reasoning content with pure natural language, lacking explicit integration of visual information. This limits their ability to produce clearly articulated and visually grounded reasoning chains. To this end, we propose Grounded Reasoning with Images and Texts (GRIT), a novel method for training MLLMs to think with images. GRIT introduces a grounded reasoning paradigm, in which models generate reasoning chains that interleave natural language and explicit bounding box coordinates. These coordinates point to regions of the input image that the model consults during its reasoning process. Additionally, GRIT is equipped with a reinforcement learning approach, GRPO-GR, built upon the GRPO algorithm. GRPO-GR employs robust rewards focused on the final answer accuracy and format of the grounded reasoning output, which eliminates the need for data with reasoning chain annotations or explicit bounding box labels. As a result, GRIT achieves exceptional data efficiency, requiring as few as 20 image-question-answer triplets from existing datasets. Comprehensive evaluations demonstrate that GRIT effectively trains MLLMs to produce coherent and visually grounded reasoning chains, showing a successful unification of reasoning and grounding abilities. 
All code, data, and checkpoints will be released.", "arxiv_id": "2505.15879v1", "arxiv_authors": ["Yue Fan", "Xuehai He", "Diji Yang", "Kaizhi Zheng", "Ching-Chen Kuo", "Yuting Zheng", "Sravana Jyothi Narayanaraju", "Xinze Guan", "Xin Eric Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a25f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1530378, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ce"}, "filepath": "data/2505.23678v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990557754872721, "type": "Poster", "name": "Grounded Reinforcement Learning for Visual Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120218", "abstract": "While reinforcement learning (RL) over chains of thought has significantly advanced language models in tasks such as mathematics and coding, visual reasoning introduces added complexity by requiring models to direct visual attention, interpret perceptual inputs, and ground abstract reasoning in spatial evidence. We introduce ViGoRL (Visually Grounded Reinforcement Learning), a vision-language model trained with RL to explicitly anchor each reasoning step to specific visual coordinates. Inspired by human visual decision-making, ViGoRL learns to produce spatially grounded reasoning traces, guiding visual attention to task-relevant regions at each step. Across a diverse set of visual reasoning benchmarks\u2014including SAT-2 and BLINK for spatial reasoning, and ScreenSpot and VisualWebArena for web-based grounding\u2014ViGoRL consistently outperforms both supervised fine-tuning and conventional RL baselines that lack explicit grounding mechanisms. Incorporating multi-turn RL with visual feedback further improves ViGoRL\u2019s performance on localizing small elements. Additionally, we find that grounding amplifies other visual behaviors such as region exploration, visual subgoal setting, and verification. Finally, human evaluations show that the model\u2019s visual references are not only spatially accurate but also helpful for understanding model reasoning steps. Our results show that visually grounded RL is a strong paradigm for imbuing models with general-purpose visual reasoning.", "arxiv_id": "2505.23678v2", "arxiv_authors": ["Gabriel Sarch", "Snigdha Saha", "Naitik Khandelwal", "Ayush Jain", "Michael J. 
Tarr", "Aviral Kumar", "Katerina Fragkiadaki"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a260"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3530578, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6cf"}, "filepath": "data/2505.15287v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998774011655147, "type": "Poster", "name": "GS2E: Gaussian Splatting is an Effective Data Generator for Event Stream Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121591", "abstract": "We introduce GS2E (Gaussian Splatting to Event Generation), a large-scale synthetic event dataset designed for high-fidelity event vision tasks, captured from real-world sparse multi-view RGB images. Existing event datasets are often synthesized from dense RGB videos, which typically suffer from limited viewpoint diversity and geometric inconsistency, or rely on expensive, hard-to-scale hardware setups. GS2E addresses these limitations by first reconstructing photorealistic static scenes using 3D Gaussian Splatting, followed by a novel, physically-informed event simulation pipeline. This pipeline integrates adaptive trajectory interpolation with physically-consistent event contrast threshold modeling. As a result, it generates temporally dense and geometrically consistent event streams under diverse motion and lighting conditions, while maintaining strong alignment with the underlying scene structure. Experimental results on event-based 3D reconstruction highlight GS2E\u2019s superior generalization capabilities and its practical value as a benchmark for advancing event vision research.", "arxiv_id": "2505.15287v1", "arxiv_authors": ["Yuchen Li", "Chaoran Feng", "Zhenyu Tang", "Kaiyuan Deng", "Wangbo Yu", "Yonghong Tian", "Li Yuan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a261"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 7745842, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d0"}, "filepath": "data/2510.22268v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994547459480765, "type": "Poster", "name": "GSAlign: Geometric and Semantic Alignment Network for Aerial-Ground Person Re-Identification", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117136", "abstract": "Aerial-Ground person re-identification (AG-ReID) is an emerging yet challenging task that aims to match pedestrian images captured from drastically different viewpoints, typically from unmanned aerial vehicles (UAVs) and ground-based surveillance cameras. The task poses significant challenges due to extreme viewpoint discrepancies, occlusions, and domain gaps between aerial and ground imagery. While prior works have made progress by learning cross-view representations, they remain limited in handling severe pose variations and spatial misalignment. To address these issues, we propose a Geometric and Semantic Alignment Network (GSAlign) tailored for AG-ReID. 
GSAlign introduces two key components to jointly tackle geometric distortion and semantic misalignment in aerial-ground matching: a Learnable Thin Plate Spline (LTPS) Transformation Module and a Dynamic Alignment Module (DAM). The LTPS module adaptively warps pedestrian features based on a set of learned keypoints, effectively compensating for geometric variations caused by extreme viewpoint changes. In parallel, the DAM estimates visibility-aware representation masks that highlight visible body regions at the semantic level, thereby alleviating the negative impact of occlusions and partial observations in cross-view correspondence. Extensive experiments on the challenging CARGO benchmark demonstrate the effectiveness of GSAlign, achieving significant improvements of +18.8\\% in mAP and +16.8\\% in Rank-1 accuracy over previous state-of-the-art methods.", "arxiv_id": "2510.22268v1", "arxiv_authors": ["Qiao Li", "Jie Li", "Yukang Zhang", "Lei Tan", "Jing Chen", "Jiayi Ji"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a262"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.495Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1054663, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d1"}, "filepath": "data/2507.14697v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991508028829611, "type": "Poster", "name": "GTPBD: A Fine-Grained Global Terraced Parcel and Boundary Dataset", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121803", "abstract": "Agricultural parcels serve as basic units for conducting agricultural practices and applications, which is vital for land ownership registration, food security assessment, soil erosion monitoring, etc. However, existing agriculture parcel extraction studies only focus on mid-resolution mapping or regular plain farmlands while lacking representation of complex terraced terrains due to the demands of precision agriculture. In this paper, we introduce a more fine-grained terraced parcel dataset named GTPBD (Global Terraced Parcel and Boundary Dataset), which is the first fine-grained dataset covering major worldwide terraced regions with more than 200,000 manually annotated complex terraced parcels. GTPBD comprises 24,238 high-resolution images with three-level labels, including pixel-level boundary labels, mask labels, and parcel labels. It covers seven major geographic zones in China and transcontinental climatic regions around the world. Compared to the existing datasets, the GTPBD dataset brings considerable challenges due to: (1) terrain diversity; (2) complex and irregular parcel objects; and (3) multiple domain styles. Our proposed GTPBD dataset is suitable for four different tasks, including semantic segmentation, edge detection, terraced parcel extraction and unsupervised domain adaptation (UDA) tasks. Accordingly, we benchmark the GTPBD dataset on eight semantic segmentation methods, four edge extraction methods, three parcel extraction methods and five UDA methods, along with a multi-dimensional evaluation framework integrating pixel-level and object-level metrics. GTPBD fills a critical gap in terraced remote sensing research, providing a basic infrastructure for fine-grained agricultural terrain analysis and cross-scenario knowledge transfer. 
The code and data are available at https://github.com/Z-ZW-WXQ/GTPBG/.", "arxiv_id": "2507.14697v2", "arxiv_authors": ["Zhiwei Zhang", "Zi Ye", "Yibin Wen", "Shuai Yuan", "Haohuan Fu", "Jianxi Huang", "Juepeng Zheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a263"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1047322, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d2"}, "filepath": "data/2505.19582v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994127845748111, "type": "Poster", "name": "Guard Me If You Know Me: Protecting Specific Face-Identity from Deepfakes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119672", "abstract": "Securing personal identity against deepfake attacks is increasingly critical in the digital age, especially for celebrities and political figures whose faces are easily accessible and frequently targeted. Most existing deepfake detection methods focus on general-purpose scenarios and often ignore the valuable prior knowledge of known facial identities, e.g., \"VIP individuals\" whose authentic facial data are already available. In this paper, we propose **VIPGuard**, a unified multimodal framework designed to capture fine-grained and comprehensive facial representations of a given identity, compare them against potentially fake or similar-looking faces, and reason over these comparisons to make accurate and explainable predictions. Specifically, our framework consists of three main stages. First, we fine-tune a multimodal large language model (MLLM) to learn detailed and structural facial attributes. Second, we perform identity-level discriminative learning to enable the model to distinguish subtle differences between highly similar faces, including real and fake variations. Finally, we introduce user-specific customization, where we model the unique characteristics of the target face identity and perform semantic reasoning via MLLM to enable personalized and explainable deepfake detection. Our framework shows clear advantages over previous detection works, where traditional detectors mainly rely on low-level visual cues and provide no human-understandable explanations, while other MLLM-based models often lack a detailed understanding of specific face identities. To facilitate the evaluation of our method, we built a comprehensive identity-aware benchmark called **VIPBench** for personalized deepfake detection, involving the latest 7 face-swapping and 7 entire face synthesis techniques for generation. 
Extensive experiments show that our model outperforms existing methods in both detection and explanation.", "arxiv_id": "2505.19582v1", "arxiv_authors": ["Kaiqing Lin", "Zhiyuan Yan", "Ke-Yue Zhang", "Li Hao", "Yue Zhou", "Yuzhen Lin", "Weixiang Li", "Taiping Yao", "Shouhong Ding", "Bin Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a264"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 988629, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d3"}, "filepath": "data/2509.18631v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999890855601683, "type": "Poster", "name": "Guided Optimal Transport for Sim-and-Real Policy Co-Training", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115477", "abstract": "Behavior cloning has shown promise for robot manipulation by mimicking human demonstrations, but achieving robust, generalizable performance in the real world often requires costly and labor-intensive data collection to obtain these demonstrations. Recent advances in simulation and automated motion synthesis offer scalable alternatives for generating training data. However, transferring policies from simulation to the real world remains challenging due to simulation modeling inaccuracies. In this work, we propose a framework for learning generalizable manipulation policies that primarily leverages simulation and only requires a few real-world demonstrations. Central to our approach is learning a shared feature space that preserves task-relevant structure across simulation and the real world. Specifically, we augment traditional imitation learning objective functions with a new loss inspired by optimal transport that encourages domain-invariant feature learning. We pair this with a motion generator that automatically synthesizes diverse simulated trajectories from a few manual demonstrations. We validate our method on challenging manipulation tasks in both simulation, where we investigate sim-to-sim transfer, and the real world, demonstrating effective and data-efficient policy transfer.", "arxiv_id": "2509.18631v2", "arxiv_authors": ["Shuo Cheng", "Liqian Ma", "Zhenyang Chen", "Ajay Mandlekar", "Caelan Garrett", "Danfei Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a265"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070415, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d4"}, "filepath": "data/2510.16136v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990831273741568, "type": "Poster", "name": "GuideFlow3D: Optimization-Guided Flow For Appearance Transfer", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118755", "abstract": "Transferring appearance to 3D assets using different representations of the appearance object--such as images or text--has garnered interest due to its wide range of applications in industries like gaming, augmented reality, and digital content creation. 
However, state-of-the-art methods still fail when the geometry between the input and appearance objects is significantly different. A straightforward approach is to directly apply a 3D generative model, but we show that this ultimately fails to produce appealing results. Instead, we propose a principled approach inspired by universal guidance. Given a pretrained rectified flow model conditioned on image or text, our training-free method interacts with the sampling process by periodically adding guidance. This guidance can be modeled as a differentiable loss function, and we experiment with two different types of guidance including part-aware losses for appearance and self-similarity. Our experiments show that our approach successfully transfers texture and geometric details to the input 3D asset. We outperform baselines both qualitatively and quantitatively. Traditional metrics are not suitable for evaluating the task due to their inability to focus on local details and compare dissimilar inputs in the absence of ground-truth data. We thus evaluate appearance transfer quality with a GPT-based system objectively ranking outputs, ensuring robust and human-like assessment. Beyond showcased scenarios, our method is general and could be extended to different types of diffusion models and guidance functions.", "arxiv_id": "2510.16136v1", "arxiv_authors": ["Sayan Deb Sarkar", "Sinisa Stekovic", "Vincent Lepetit", "Iro Armeni"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a266"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1065217, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d5"}, "filepath": "data/2506.06970v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998681675037753, "type": "Poster", "name": "Guiding Cross-Modal Representations with MLLM Priors via Preference Alignment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116450", "abstract": "Despite Contrastive Language\u2013Image Pre-training (CLIP)'s remarkable capability to retrieve content across modalities, a substantial modality gap persists in its feature space. Intriguingly, we discover that off-the-shelf MLLMs (Multimodal Large Language Models) demonstrate powerful inherent modality alignment properties. While recent MLLM-based retrievers with unified architectures partially mitigate this gap, their reliance on coarse modality alignment mechanisms fundamentally limits their potential. In this work, we introduce MAPLE (Modality-Aligned Preference Learning for Embeddings), a novel framework that leverages the fine-grained alignment priors inherent in MLLM to guide cross-modal representation learning. MAPLE formulates the learning process as reinforcement learning with two key components: (1) Automatic preference data construction using off-the-shelf MLLM, and (2) a new Relative Preference Alignment (RPA) loss, which adapts Direct Preference Optimization (DPO) to the embedding learning setting. 
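To make the idea of adapting DPO to embedding learning more concrete, here is a minimal, hypothetical sketch: a logistic, DPO-style loss on the gap between query-to-preferred and query-to-dispreferred cosine similarities, measured relative to a frozen reference encoder. The reference encoder, the temperature `beta`, the cosine scoring, and the function name are all assumptions, not the paper's RPA definition.

```python
import torch
import torch.nn.functional as F

def rpa_style_loss(q, pos, neg, q_ref, pos_ref, neg_ref, beta=0.1):
    """q, pos, neg: (B, D) query / preferred / dispreferred embeddings from the
    trainable encoder; *_ref: the same items from a frozen reference encoder."""
    sim = lambda a, b: F.cosine_similarity(a, b, dim=-1)
    margin = sim(q, pos) - sim(q, neg)                       # trainable preference gap
    margin_ref = sim(q_ref, pos_ref) - sim(q_ref, neg_ref)   # reference preference gap
    # DPO-style logistic loss on how much the trainable gap exceeds the reference gap.
    return -F.logsigmoid(beta * (margin - margin_ref)).mean()

if __name__ == "__main__":
    B, D = 4, 32
    embs = [torch.randn(B, D) for _ in range(6)]
    embs[0].requires_grad_(True)
    print(float(rpa_style_loss(*embs)))
```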
Experimental results show that our preference-guided alignment achieves substantial gains in fine-grained cross-modal retrieval, underscoring its effectiveness in handling nuanced semantic distinctions.", "arxiv_id": "2506.06970v2", "arxiv_authors": ["Pengfei Zhao", "Rongbo Luan", "Wei Zhang", "Peng Wu", "Sifeng He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a267"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.496Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1062096, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d6"}, "filepath": "data/2408.13036v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992856006328955, "type": "Poster", "name": "H3D-DGS: Exploring Heterogeneous 3D Motion Representation for Deformable 3D Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119783", "abstract": "Dynamic scene reconstruction poses a persistent challenge in 3D vision. Deformable 3D Gaussian Splatting has emerged as an effective method for this task, offering real-time rendering and high visual fidelity. This approach decomposes a dynamic scene into a static representation in a canonical space and time-varying scene motion. Scene motion is defined as the collective movement of all Gaussian points, and for compactness, existing approaches commonly adopt implicit neural fields or sparse control points. However, these methods predominantly rely on gradient-based optimization for all motion information. Due to the high degree of freedom, they struggle to converge on real-world datasets exhibiting complex motion. To preserve the compactness of motion representation and address convergence challenges, this paper proposes heterogeneous 3D control points, termed \textbf{H3D control points}, whose attributes are obtained using a hybrid strategy combining optical flow back-projection and gradient-based methods. This design decouples directly observable motion components from those that are geometrically occluded. Specifically, components of 3D motion that project onto the image plane are directly acquired via optical flow back projection, while unobservable portions are refined through gradient-based optimization. Experiments on the Neu3DV and CMU-Panoptic datasets demonstrate that our method achieves superior performance over state-of-the-art 4D Gaussian splatting techniques. 
Remarkably, our method converges within just 100 iterations and achieves a per-frame processing speed of 2 seconds on a single NVIDIA RTX 4070 GPU.", "arxiv_id": "2408.13036v3", "arxiv_authors": ["Bing He", "Yunuo Chen", "Guo Lu", "Qi Wang", "Qunshan Gu", "Rong Xie", "Li Song", "Wenjun Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a268"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1600193, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d7"}, "filepath": "data/2506.09518v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993742032608532, "type": "Poster", "name": "HAIF-GS: Hierarchical and Induced Flow-Guided Gaussian Splatting for Dynamic Scene", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115007", "abstract": "Reconstructing dynamic 3D scenes from monocular videos remains a fundamental challenge in 3D vision. While 3D Gaussian Splatting (3DGS) achieves real-time rendering in static settings, extending it to dynamic scenes is challenging due to the difficulty of learning structured and temporally consistent motion representations. This challenge often manifests as three limitations in existing methods: redundant Gaussian updates, insufficient motion supervision, and weak modeling of complex non-rigid deformations. These issues collectively hinder coherent and efficient dynamic reconstruction. To address these limitations, we propose HAIF-GS, a unified framework that enables structured and consistent dynamic modeling through sparse anchor-driven deformation. It first identifies motion-relevant regions via an Anchor Filter to suppress redundant updates in static areas. A self-supervised Induced Flow-Guided Deformation module induces anchor motion using multi-frame feature aggregation, eliminating the need for explicit flow labels. To further handle fine-grained deformations, a Hierarchical Anchor Propagation mechanism increases anchor resolution based on motion complexity and propagates multi-level transformations. 
Extensive experiments on synthetic and real-world benchmarks validate that HAIF-GS significantly outperforms prior dynamic 3DGS methods in rendering quality, temporal coherence, and reconstruction efficiency.", "arxiv_id": "2506.09518v1", "arxiv_authors": ["Jianing Chen", "Zehao Li", "Yujun Cai", "Hao Jiang", "Chengxuan Qian", "Juyuan Kang", "Shuqin Gao", "Honglong Zhao", "Tianlu Mao", "Yucheng Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a269"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1009044, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d8"}, "filepath": "data/2506.07227v1.png", "tags": [], "_media_type": "image", "_rand": 0.999055748083778, "type": "Poster", "name": "Hallucination at a Glance: Controlled Visual Edits and Fine-Grained Multimodal Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115500", "abstract": "Multimodal large language models (MLLMs) have achieved strong performance on vision-language tasks but still struggle with fine-grained visual differences, leading to hallucinations or missed semantic shifts. We attribute this to limitations in both training data and learning objectives. To address these issues, we propose a controlled data generation pipeline that produces minimally edited image pairs with semantically aligned captions. Using this pipeline, we construct the Micro Edit Dataset (MED), containing over 50K image-text pairs spanning 11 fine-grained edit categories, including attribute, count, position, and object presence changes. Building on MED Dataset, we introduce a supervised fine-tuning (SFT) framework with a feature-level consistency loss that promotes stable visual embeddings under small edits. We evaluate our approach on the Micro Edit Detection benchmark, which includes carefully balanced evaluation pairs designed to test sensitivity to subtle visual variations across the same edit categories. Our method improves difference detection accuracy and reduces hallucinations compared to strong baselines, including GPT-4o. Moreover, it yields consistent gains on standard vision-language tasks such as image captioning and visual question answering. 
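As a minimal sketch of what a feature-level consistency term of the kind described above could look like (the pooling choice, cosine distance, weighting `lam`, and function names are my assumptions, not the paper's implementation), the snippet below keeps the pooled visual embeddings of an image and its minimally edited counterpart close and adds this term to the usual SFT loss.

```python
import torch
import torch.nn.functional as F

def feature_consistency_loss(feat_orig, feat_edit):
    """feat_orig, feat_edit: (B, N, D) visual token features of an image and of
    its minimally edited counterpart."""
    # Mean-pool tokens, then penalize the cosine distance between the two views.
    z1 = F.normalize(feat_orig.mean(dim=1), dim=-1)
    z2 = F.normalize(feat_edit.mean(dim=1), dim=-1)
    return (1.0 - (z1 * z2).sum(dim=-1)).mean()

def sft_with_consistency(sft_loss, feat_orig, feat_edit, lam=0.1):
    # lam is an assumed weight between the SFT term and the consistency term.
    return sft_loss + lam * feature_consistency_loss(feat_orig, feat_edit)

if __name__ == "__main__":
    f1, f2 = torch.randn(2, 8, 16), torch.randn(2, 8, 16)
    print(float(sft_with_consistency(torch.tensor(1.5), f1, f2)))
```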
These results demonstrate the effectiveness of combining targeted data and alignment objectives for enhancing fine-grained visual reasoning in MLLMs.", "arxiv_id": "2506.07227v1", "arxiv_authors": ["Tianyi Bai", "Yuxuan Fan", "Jiantao Qiu", "Fupeng Sun", "Jiayi Song", "Junlin Han", "Zichen Liu", "Conghui He", "Wentao Zhang", "Binhang Yuan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a26a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 975703, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6d9"}, "filepath": "data/2505.19742v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992701858349334, "type": "Poster", "name": "HAODiff: Human-Aware One-Step Diffusion via Dual-Prompt Guidance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118246", "abstract": "Human-centered images often suffer from severe generic degradation during transmission and are prone to human motion blur (HMB), making restoration challenging. Existing research lacks sufficient focus on these issues, as both problems often coexist in practice. To address this, we design a degradation pipeline that simulates the coexistence of HMB and generic noise, generating synthetic degraded data to train our proposed HAODiff, a human-aware one-step diffusion. Specifically, we propose a triple-branch dual-prompt guidance (DPG), which leverages high-quality images, residual noise (LQ minus HQ), and HMB segmentation masks as training targets. It produces a positive\u2013negative prompt pair for classifier\u2011free guidance (CFG) in a single diffusion step. The resulting adaptive dual prompts let HAODiff exploit CFG more effectively, boosting robustness against diverse degradations. For fair evaluation, we introduce MPII\u2011Test, a benchmark rich in combined noise and HMB cases. Extensive experiments show that our HAODiff surpasses existing state-of-the-art (SOTA) methods in terms of both quantitative metrics and visual quality on synthetic and real-world datasets, including our introduced MPII-Test. The code and model will be released soon.", "arxiv_id": "2505.19742v1", "arxiv_authors": ["Jue Gong", "Tingyu Yang", "Jingkai Wang", "Zheng Chen", "Xing Liu", "Hong Gu", "Yulun Zhang", "Xiaokang Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a26b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1151028, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6da"}, "filepath": "data/2504.10804v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992613934262458, "type": "Poster", "name": "Harnessing the Computation Redundancy in ViTs to Boost Adversarial Transferability", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117352", "abstract": "Vision Transformers (ViTs) have demonstrated impressive performance across a range of applications, including many safety-critical tasks. 
Many previous studies have observed that adversarial examples crafted on ViTs exhibit higher transferability than those crafted on CNNs, indicating that ViTs contain structural characteristics favorable for transferable attacks. In this work, we take a further step to deeply investigate the role of the computational redundancy introduced by ViTs' unique characteristics and its impact on adversarial transferability. Specifically, we identify two forms of redundancy, at the data level and the model level, that can be harnessed to amplify attack effectiveness. Building on this insight, we design a suite of techniques, including attention sparsity manipulation, attention head permutation, clean token regularization, ghost MoE diversification, and learning to robustify before the attack. A dynamic online learning strategy is also proposed to fully leverage these operations to enhance adversarial transferability. Extensive experiments on the ImageNet-1k dataset validate the effectiveness of our approach, showing that our methods significantly outperform existing baselines in both transferability and generality across diverse model architectures, including different variants of ViTs and mainstream Vision Large Language Models (VLLMs).", "arxiv_id": "2504.10804v2", "arxiv_authors": ["Jiani Liu", "Zhiyuan Wang", "Zeliang Zhang", "Chao Huang", "Susan Liang", "Yunlong Tang", "Chenliang Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a26c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1110559, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6db"}, "filepath": "data/2506.19072v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996537746017236, "type": "Poster", "name": "Hawaii: Hierarchical Visual Knowledge Transfer for Efficient Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118565", "abstract": "Improving the visual understanding ability of vision-language models (VLMs) is crucial for enhancing their performance across various tasks. While using multiple pretrained visual experts has shown great promise, it often incurs significant computational costs during training and inference. To address this challenge, we propose HAWAII, a novel framework that distills knowledge from multiple visual experts into a single vision encoder, enabling it to inherit the complementary strengths of several experts with minimal computational overhead. To mitigate conflicts among different teachers and to switch between different teacher-specific knowledge, instead of using a fixed set of adapters for multiple teachers, we propose to use teacher-specific Low-Rank Adaptation (LoRA) adapters with a corresponding router. Each adapter is aligned with a specific teacher, avoiding noisy guidance during distillation. To enable efficient knowledge distillation, we propose fine-grained and coarse-grained distillation. At the fine-grained level, token importance scores are employed to emphasize the most informative tokens from each teacher adaptively. At the coarse-grained level, we summarize the knowledge from multiple teachers and transfer it to the student using a set of general-knowledge LoRA adapters with a router. 
Extensive experiments on various vision-language tasks demonstrate the superiority of HAWAII compared to popular open-source VLMs.", "arxiv_id": "2506.19072v1", "arxiv_authors": ["Yimu Wang", "Mozhgan Nasr Azadani", "Sean Sedwards", "Krzysztof Czarnecki"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a26d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1095403, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6dc"}, "filepath": "data/2505.15793v2.png", "tags": [], "_media_type": "image", "_rand": 0.999724203241769, "type": "Poster", "name": "HCRMP: An LLM-Hinted Contextual Reinforcement Learning Framework for Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120251", "abstract": "Integrating the understanding and reasoning capabilities of Large Language Models (LLM) with the self-learning capabilities of Reinforcement Learning (RL) enables more reliable driving performance under complex driving conditions. There has been a lot of work exploring LLM-Dominated RL methods in the field of autonomous driving motion planning. These methods, which utilize the LLM to directly generate policies or provide decisive instructions during policy learning of the RL agent, are centrally characterized by an over-reliance on LLM outputs. However, LLM outputs are susceptible to hallucinations. Evaluations show that a state-of-the-art LLM exhibits a non-hallucination rate of only approximately 57.95\\% when assessed on essential driving-related tasks. Thus, in these methods, hallucinations from the LLM can directly jeopardize the performance of driving policies. This paper argues that maintaining relative independence between the LLM and the RL is vital for solving the hallucination problem. Consequently, this paper is devoted to proposing a novel LLM-Hinted RL paradigm. The LLM is used to generate semantic hints for state augmentation and policy optimization to assist the RL agent in motion planning, while the RL agent counteracts potential erroneous semantic indications through policy learning to achieve excellent driving performance. Based on this paradigm, we propose the HCRMP (LLM-Hinted Contextual Reinforcement Learning Motion Planner) architecture, which includes: \u2460 an Augmented Semantic Representation Module to extend the state space; \u2461 a Contextual Stability Anchor Module to enhance the reliability of multi-critic weight hints by utilizing information from the knowledge base; and \u2462 a Semantic Cache Module to seamlessly integrate LLM low-frequency guidance with RL high-frequency control. Extensive experiments in CARLA validate HCRMP's strong overall driving performance. HCRMP achieves a task success rate of up to 80.3\\% under diverse driving conditions with different traffic densities. 
Under safety-critical driving conditions, HCRMP significantly reduces the collision rate by 11.4\\%, which effectively improves the driving performance in complex scenarios.", "arxiv_id": "2505.15793v2", "arxiv_authors": ["Zhiwen Chen", "Bo Leng", "Zhuoren Li", "Hanming Deng", "Guizhe Jin", "Ran Yu", "Huanxi Wen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a26e"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1154602, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6dd"}, "filepath": "data/2510.21518v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998474013673003, "type": "Poster", "name": "Head Pursuit: Probing Attention Specialization in Multimodal Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117597", "abstract": "Language and vision-language models have shown impressive performance across a wide range of tasks, but their internal mechanisms remain only partly understood. In this work, we study how individual attention heads in text-generative models specialize in specific semantic or visual attributes. Building on an established interpretability method, we reinterpret the practice of probing intermediate activations with the final decoding layer through the lens of signal processing. This lets us analyze multiple samples in a principled way and rank attention heads based on their relevance to target concepts. Our results show consistent patterns of specialization at the head level across both unimodal and multimodal transformers. Remarkably, we find that editing as few as 1% of the heads, selected using our method, can reliably suppress or enhance targeted concepts in the model output. We validate our approach on language tasks such as question answering and toxicity mitigation, as well as vision-language tasks including image classification and captioning. Our findings highlight an interpretable and controllable structure within attention layers, offering simple tools for understanding and editing large-scale generative models.", "arxiv_id": "2510.21518v1", "arxiv_authors": ["Lorenzo Basile", "Valentino Maiorca", "Diego Doimo", "Francesco Locatello", "Alberto Cazzaniga"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a26f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 962181, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6de"}, "filepath": "data/2502.12148v2.png", "tags": [], "_media_type": "image", "_rand": 0.999365743447348, "type": "Poster", "name": "HermesFlow: Seamlessly Closing the Gap in Multimodal Understanding and Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118982", "abstract": "The remarkable success of the autoregressive paradigm has made significant advancement in Multimodal Large Language Models (MLLMs), with powerful models like Show-o, Transfusion and Emu3 made notable strides in unified image understanding and generation. 
For the first time, we uncover a common phenomenon: the understanding capability of MLLMs is usually stronger than their generative capability, with a significant gap between them. Building on this insight, we propose HermesFlow, a simple and general framework designed to seamlessly bridge the gap between understanding and generation in MLLMs. Specifically, we take the homologous data as input to curate homologous preference data of both understanding and generation. Through Pair-DPO and self-play iterative optimization, HermesFlow effectively aligns multimodal understanding and generation using homologous preference data. Extensive experiments demonstrate the significant superiority of our approach over prior methods, particularly in narrowing the gap between multimodal understanding and generation. These findings highlight the potential of HermesFlow as a general alignment framework for next-generation multimodal foundation models.", "arxiv_id": "2502.12148v2", "arxiv_authors": ["Ling Yang", "Xinchen Zhang", "Ye Tian", "Chenming Shang", "Minghao Xu", "Wentao Zhang", "Bin Cui"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a270"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1459218, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6df"}, "filepath": "data/2508.05609v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990790281485752, "type": "Poster", "name": "Hi3DEval: Advancing 3D Generation Evaluation with Hierarchical Validity", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121787", "abstract": "Despite rapid advances in 3D content generation, quality assessment for the generated 3D assets remains challenging. Existing methods mainly rely on image-based metrics and operate solely at the object level, limiting their ability to capture spatial coherence, material authenticity, and high-fidelity local details. 1) To address these challenges, we introduce Hi3DEval, a hierarchical evaluation framework tailored for 3D generative content. It combines both object-level and part-level evaluation, enabling holistic assessments across multiple dimensions as well as fine-grained quality analysis. Additionally, we extend texture evaluation beyond aesthetic appearance by explicitly assessing material realism, focusing on attributes such as albedo, saturation, and metallicness. 2) To support this framework, we construct Hi3DBench, a large-scale dataset comprising diverse 3D assets and high-quality annotations, accompanied by a reliable multi-agent annotation pipeline. We further propose a 3D-aware automated scoring system based on hybrid 3D representations. 
Specifically, we leverage video-based representations for object-level and material-subject evaluations to enhance modeling of spatio-temporal consistency and employ pretrained 3D features for part-level perception. Extensive experiments demonstrate that our approach outperforms existing image-based metrics in modeling 3D characteristics and achieves superior alignment with human preference, providing a scalable alternative to manual evaluations.", "arxiv_id": "2508.05609v1", "arxiv_authors": ["Yuhan Zhang", "Long Zhuo", "Ziyang Chu", "Tong Wu", "Zhibing Li", "Liang Pan", "Dahua Lin", "Ziwei Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a271"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1701647, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e0"}, "filepath": "data/2510.17188v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990400029890056, "type": "Poster", "name": "HIDISC: A Hyperbolic Framework for Domain Generalization with Generalized Category Discovery", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119696", "abstract": "Generalized Category Discovery (GCD) aims to classify test-time samples into either seen categories\u2014available during training\u2014or novel ones, without relying on label supervision. Most existing GCD methods assume simultaneous access to labeled and unlabeled data during training, both arising from the same domain, limiting applicability in open-world scenarios involving distribution shifts. Domain Generalization with GCD (DG-GCD) lifts this constraint by requiring models to generalize to unseen domains containing novel categories, without accessing target-domain data during training. The only prior DG-GCD method, DG$^2$CD-Net~\\cite{dg2net}, relies on episodic training with multiple synthetic domains and task vector aggregation, incurring high computational cost and error accumulation. We propose \\textsc{HiDISC}, a hyperbolic representation learning framework that achieves domain and category-level generalization without episodic simulation. To expose the model to minimal but diverse domain variations, we augment the source domain using GPT-guided diffusion, avoiding overfitting while maintaining efficiency. To structure the representation space, we introduce \\emph{Tangent CutMix}, a curvature-aware interpolation that synthesizes pseudo-novel samples in tangent space, preserving manifold consistency. A unified loss\u2014combining penalized Busemann alignment, hybrid hyperbolic contrastive regularization, and adaptive outlier repulsion\u2014facilitates compact, semantically structured embeddings. 
A learnable curvature parameter further adapts the geometry to dataset complexity. \\textsc{HiDISC} achieves state-of-the-art results on PACS~\\cite{pacs}, Office-Home~\\cite{officehome}, and DomainNet~\\cite{domainnet}, consistently outperforming the existing Euclidean and hyperbolic (DG)-GCD baselines.", "arxiv_id": "2510.17188v1", "arxiv_authors": ["Vaibhav Rathore", "Divyam Gupta", "Biplab Banerjee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a272"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1098296, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e1"}, "filepath": "data/2508.10858v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998917106751296, "type": "Poster", "name": "Hierarchical Fine-grained Preference Optimization for Physically Plausible Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115193", "abstract": "Recent advancements in video generation have enabled the creation of high-quality, visually compelling videos. However, generating videos that adhere to the laws of physics remains a critical challenge for applications requiring realism and accuracy. In this work, we propose **PhysHPO**, a novel framework for Hierarchical Cross-Modal Direct Preference Optimization, to tackle this challenge by enabling fine-grained preference alignment for physically plausible video generation. PhysHPO optimizes video alignment across four hierarchical granularities: a) ***Instance Level***, aligning the overall video content with the input prompt; b) ***State Level***, ensuring temporal consistency using boundary frames as anchors; c) ***Motion Level***, modeling motion trajectories for realistic dynamics; and d) ***Semantic Level***, maintaining logical consistency between narrative and visuals. Recognizing that real-world videos are the best reflections of physical phenomena, we further introduce an automated data selection pipeline to efficiently identify and utilize *"good data"* from existing large-scale text-video datasets, thereby eliminating the need for costly and time-intensive dataset construction. Extensive experiments on both physics-focused and general capability benchmarks demonstrate that PhysHPO significantly improves physical plausibility and overall video generation quality of advanced models. 
To the best of our knowledge, this is the first work to explore fine-grained preference alignment and data selection for video generation, paving the way for more realistic and human-preferred video generation paradigms.", "arxiv_id": "2508.10858v1", "arxiv_authors": ["Harold Haodong Chen", "Haojian Huang", "Qifeng Chen", "Harry Yang", "Ser-Nam Lim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a273"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1097383, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e2"}, "filepath": "data/2504.06232v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996009112058531, "type": "Poster", "name": "HiFlow: Training-free High-Resolution Image Generation with Flow-Aligned Guidance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116343", "abstract": "Text-to-image (T2I) diffusion/flow models have drawn considerable attention recently due to their remarkable ability to deliver flexible visual creations. Still, high-resolution image synthesis presents formidable challenges due to the scarcity and complexity of high-resolution content. Recent approaches have investigated training-free strategies to enable high-resolution image synthesis with pre-trained models. However, these techniques often struggle with generating high-quality visuals and tend to exhibit artifacts or low-fidelity details, as they typically rely solely on the endpoint of the low-resolution sampling trajectory while neglecting intermediate states that are critical for preserving structure and synthesizing finer detail. To this end, we present HiFlow, a training-free and model-agnostic framework to unlock the resolution potential of pre-trained flow models. Specifically, HiFlow establishes a virtual reference flow within the high-resolution space that effectively captures the characteristics of low-resolution flow information, offering guidance for high-resolution generation through three key aspects: initialization alignment for low-frequency consistency, direction alignment for structure preservation, and acceleration alignment for detail fidelity. By leveraging such flow-aligned guidance, HiFlow substantially elevates the quality of high-resolution image synthesis of T2I models and demonstrates versatility across their personalized variants. 
Extensive experiments validate HiFlow's capability in achieving superior high-resolution image quality over state-of-the-art methods.", "arxiv_id": "2504.06232v2", "arxiv_authors": ["Jiazi Bu", "Pengyang Ling", "Yujie Zhou", "Pan Zhang", "Tong Wu", "Xiaoyi Dong", "Yuhang Zang", "Yuhang Cao", "Dahua Lin", "Jiaqi Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a274"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.497Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 11039707, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e3"}, "filepath": "data/2507.07136v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999550721607501, "type": "Poster", "name": "High-dimensional 3D Language Gaussian Splatting with 450+ FPS", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117503", "abstract": "In this paper, we introduce LangSplatV2, which achieves high-dimensional feature splatting at 476.2 FPS and 3D open-vocabulary text querying at 384.6 FPS for high-resolution images, providing a 42 \u00d7 speedup and a 47 \u00d7 boost over LangSplat respectively, along with improved query accuracy. LangSplat employs Gaussian Splatting to embed 2D CLIP language features into 3D, significantly enhancing speed and learning a precise 3D language field with SAM semantics. Such advancements in 3D language fields are crucial for applications that require language interaction within complex scenes. However, LangSplat does not yet achieve real-time performance (8.2 FPS), even with advanced A100 GPUs, severely limiting its broader application. In this paper, we first conduct a detailed time analysis of LangSplat, identifying the heavyweight decoder as the primary speed bottleneck. Our solution, LangSplatV2, assumes that each Gaussian acts as a sparse code within a global dictionary, leading to the learning of a 3D sparse coefficient field that entirely eliminates the need for a heavyweight decoder. By leveraging this sparsity, we further propose an efficient sparse coefficient splatting method with CUDA optimization, rendering high-dimensional feature maps at high quality while incurring only the time cost of splatting an ultra-low-dimensional feature. 
Our experimental results demonstrate that LangSplatV2 not only achieves better or competitive query accuracy but is also significantly faster.", "arxiv_id": "2507.07136v2", "arxiv_authors": ["Wanhua Li", "Yujie Zhao", "Minghan Qin", "Yang Liu", "Yuanhao Cai", "Chuang Gan", "Hanspeter Pfister"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a275"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1102187, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e4"}, "filepath": "data/2505.15877v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996835360411124, "type": "Poster", "name": "Highlighting What Matters: Promptable Embeddings for Attribute-Focused Image Retrieval", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116759", "abstract": "While an image is worth more than a thousand words, only a few provide crucial information for a given task and thus should be focused on. In light of this, ideal text-to-image (T2I) retrievers should prioritize specific visual attributes relevant to queries. To evaluate current retrievers on handling attribute-focused queries, we build COCO-Facet, a COCO-based benchmark with 9,112 queries about diverse attributes of interest. We find that CLIP-like retrievers, which are widely adopted due to their efficiency and zero-shot ability, have poor and imbalanced performance, possibly because their image embeddings focus on global semantics and subjects while leaving out other details. Notably, we reveal that even recent Multimodal Large Language Model (MLLM)-based, stronger retrievers with a larger output dimension struggle with this limitation. Hence, we hypothesize that retrieving with *general* image embeddings is suboptimal for performing such queries. As a solution, we propose to use *promptable* image embeddings enabled by these multimodal retrievers, which boost performance by highlighting required attributes. Our pipeline for deriving such embeddings generalizes across query types, image pools, and base retriever architectures. To enhance real-world applicability, we offer two acceleration strategies: Pre-processing promptable embeddings and using linear approximations. 
We show that the former yields a 15% improvement in Recall@5 when prompts are predefined, while the latter achieves an 8% improvement when prompts are only available during inference.", "arxiv_id": "2505.15877v2", "arxiv_authors": ["Siting Li", "Xiang Gao", "Simon Shaolei Du"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a276"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1026760, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e5"}, "filepath": "data/2509.17212v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994441394612915, "type": "Poster", "name": "High Resolution UDF Meshing via Iterative Networks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116591", "abstract": "Unsigned Distance Fields (UDFs) are a natural implicit representation for open surfaces but, unlike Signed Distance Fields (SDFs), are challenging to triangulate into explicit meshes. This is especially true at high resolutions where neural UDFs exhibit higher noise levels, which makes it hard to capture fine details. Most current techniques perform within single voxels without reference to their neighborhood, resulting in missing surfaces and holes where the UDF is ambiguous or noisy. We show that this can be remedied by performing several passes and by reasoning on previously extracted surface elements to incorporate neighborhood information. Our key contribution is an iterative neural network that does this and progressively improves surface recovery within each voxel by spatially propagating information from increasingly distant neighbors. Unlike single-pass methods, our approach integrates newly detected surfaces, distance values, and gradients across multiple iterations, effectively correcting errors and stabilizing extraction in challenging regions. Experiments on diverse 3D models demonstrate that our method produces significantly more accurate and complete meshes than existing approaches, particularly for complex geometries, enabling UDF surface extraction at higher resolutions where traditional methods fail.", "arxiv_id": "2509.17212v1", "arxiv_authors": ["Federico Stella", "Nicolas Talabot", "Hieu Le", "Pascal Fua"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a277"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1019698, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e6"}, "filepath": "data/2406.06843v4.png", "tags": [], "_media_type": "image", "_rand": 0.9995965326552786, "type": "Poster", "name": "HO-Cap: A Capture System and Dataset for 3D Reconstruction and Pose Tracking of Hand-Object Interaction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121512", "abstract": "We introduce a data capture system and a new dataset, HO-Cap, for 3D reconstruction and pose tracking of hands and objects in videos. The system leverages multiple RGB-D cameras and a HoloLens headset for data collection, avoiding the use of expensive 3D scanners or motion capture systems. 
We propose a semi-automatic method for annotating the shape and pose of hands and objects in the collected videos, significantly reducing the annotation time and cost compared to manual labeling. With this system, we captured a video dataset of humans performing various single- and dual-hand manipulation tasks, including simple pick-and-place actions, handovers between hands, and using objects according to their affordance. This dataset can serve as human demonstrations for research in embodied AI and robot manipulation. Our capture setup and annotation framework will be made available to the community for reconstructing 3D shapes of objects and human hands, as well as tracking their poses in videos.", "arxiv_id": "2406.06843v4", "arxiv_authors": ["Jikai Wang", "Qifan Zhang", "Yu-Wei Chao", "Bowen Wen", "Xiaohu Guo", "Yu Xiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a278"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4416884, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e7"}, "filepath": "data/2507.16813v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992464113438081, "type": "Poster", "name": "HOComp: Interaction-Aware Human-Object Composition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115373", "abstract": "While existing image\u2011guided composition methods may help insert a foreground object onto a user-specified region of a background image, achieving natural blending inside the region with the rest of the image unchanged, we observe that these existing methods often struggle in synthesizing seamless interaction-aware compositions when the task involves human-object interactions. In this paper, we first propose HOComp, a novel approach for compositing a foreground object onto a human-centric background image, while ensuring harmonious interactions between the foreground object and the background person and their consistent appearances. Our approach includes two key designs: (1) MLLMs-driven Region-based Pose Guidance (MRPG), which utilizes MLLMs to identify the interaction region as well as the interaction type (e.g., holding and lifting) to provide coarse-to-fine constraints to the generated pose for the interaction while incorporating human pose landmarks to track action variations and enforcing fine-grained pose constraints; and (2) Detail-Consistent Appearance Preservation (DCAP), which unifies a shape-aware attention modulation mechanism, a multi-view appearance loss, and a background consistency loss to ensure consistent shapes/textures of the foreground and faithful reproduction of the background human. We then propose the first dataset, named Interaction-aware Human-Object Composition (IHOC), for the task. Experimental results on our dataset show that HOComp effectively generates harmonious human-object interactions with consistent appearances, and outperforms relevant methods qualitatively and quantitatively.", "arxiv_id": "2507.16813v1", "arxiv_authors": ["Dong Liang", "Jinyuan Jia", "Yuhao Liu", "Rynson W. H. 
Lau"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a279"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 981950, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e8"}, "filepath": "data/2507.01737v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996916555319866, "type": "Poster", "name": "HOI-Dyn: Learning Interaction Dynamics for Human-Object Motion Diffusion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117147", "abstract": "Generating realistic 3D human-object interactions (HOIs) remains a challenging task due to the difficulty of modeling detailed interaction dynamics. Existing methods treat human and object motions independently, resulting in physically implausible and causally inconsistent behaviors. In this work, we present HOI-Dyn, a novel framework that formulates HOI generation as a driver-responder system, where human actions drive object responses. At the core of our method is a lightweight transformer-based interaction dynamics model that explicitly predicts how objects should react to human motion. To further enforce consistency, we introduce a residual-based dynamics loss that mitigates the impact of dynamics prediction errors and prevents misleading optimization signals. The dynamics model is used only during training, preserving inference efficiency. Through extensive qualitative and quantitative experiments, we demonstrate that our approach not only enhances the quality of HOI generation but also establishes a feasible metric for evaluating the quality of generated interactions.", "arxiv_id": "2507.01737v3", "arxiv_authors": ["Lin Wu", "Zhixiang Chen", "Jianglin Lan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a27a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 960805, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6e9"}, "filepath": "data/2506.19291v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999481347765918, "type": "Poster", "name": "HoliGS: Holistic Gaussian Splatting for Embodied View Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117722", "abstract": "We propose HoliGS, a novel deformable Gaussian splatting framework that addresses embodied view synthesis from long monocular RGB videos. Unlike prior 4D Gaussian splatting and dynamic NeRF pipelines, which struggle with training overhead in minute-long captures, our method leverages invertible Gaussian Splatting deformation networks to reconstruct large-scale, dynamic environments accurately. Specifically, we decompose each scene into a static background plus time-varying objects, each represented by learned Gaussian primitives undergoing global rigid transformations, skeleton-driven articulation, and subtle non-rigid deformations via an invertible neural flow. 
This hierarchical warping strategy enables robust free-viewpoint novel-view rendering from various embodied camera trajectories (e.g., egocentric or third-person follow), which may involve substantial viewpoint changes and interactions between multiple actors, by attaching Gaussians to a complete canonical foreground shape. Our experiments demonstrate that HoliGS achieves superior reconstruction quality on challenging datasets while significantly reducing both training and rendering time compared to state-of-the-art monocular deformable NeRFs. These results highlight a practical and scalable solution for EVS in real-world scenarios. The source code will be released.", "arxiv_id": "2506.19291v1", "arxiv_authors": ["Xiaoyuan Wang", "Yizhou Zhao", "Botao Ye", "Xiaojun Shan", "Weijie Lyu", "Lu Qi", "Kelvin C. K. Chan", "Yinxiao Li", "Ming-Hsuan Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a27b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1016601, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ea"}, "filepath": "data/2505.23280v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998050989754883, "type": "Poster", "name": "Holistic Large-Scale Scene Reconstruction via Mixed Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115137", "abstract": "Recent advances in 3D Gaussian Splatting have shown remarkable potential for novel view synthesis. However, most existing large-scale scene reconstruction methods rely on the divide-and-conquer paradigm, which often leads to the loss of global scene information and requires complex parameter tuning due to scene partitioning and local optimization. To address these limitations, we propose MixGS, a novel holistic optimization framework for large-scale 3D scene reconstruction. MixGS models the entire scene holistically by integrating camera pose and Gaussian attributes into a view-aware representation, which is decoded into fine-detailed Gaussians. Furthermore, a novel mixing operation combines decoded and original Gaussians to jointly preserve global coherence and local fidelity. Extensive experiments on large-scale scenes demonstrate that MixGS achieves state-of-the-art rendering quality and competitive speed, while significantly reducing computational requirements, enabling large-scale scene reconstruction training on a single 24GB VRAM GPU. 
The code will be released publicly.", "arxiv_id": "2505.23280v1", "arxiv_authors": ["Chuandong Liu", "Huijiao Wang", "Lei Yu", "Gui-Song Xia"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a27c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1107938, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6eb"}, "filepath": "data/2510.01704v1.png", "tags": [], "_media_type": "image", "_rand": 0.999056729749097, "type": "Poster", "name": "Holistic Order Prediction in Natural Scenes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118021", "abstract": "Even in controlled settings, understanding instance-wise geometries is a challenging task for a wide range of visual models. Although expert systems exist, modern approaches still rely on expensive input formats (category labels, binary segmentation masks) and inference costs (a quadratic number of forward passes). We mitigate these limitations by proposing InstaFormer, a network capable of holistic order prediction. That is, solely given an input RGB image, InstaFormer returns the full segmentation masks along with occlusion and depth orderings for all the instances in the scene in a single forward pass. At its core, InstaFormer relies on interactions between object queries and latent mask descriptors that semantically represent the same objects while carrying complementary information. We comprehensively benchmark and ablate our approach to highlight its effectiveness. The link to our repository will be shared upon publication.", "arxiv_id": "2510.01704v1", "arxiv_authors": ["Pierre Musacchio", "Hyunmin Lee", "Jaesik Park"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a27d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2733546, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ec"}, "filepath": "data/2505.21334v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994926176362574, "type": "Poster", "name": "HoliTom: Holistic Token Merging for Fast Video Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119772", "abstract": "Video large language models (video LLMs) excel at video comprehension but face significant computational inefficiency due to redundant video tokens. Existing token pruning methods offer solutions. However, approaches operating within the LLM (inner-LLM pruning), such as FastV, incur intrinsic computational overhead in shallow layers. In contrast, methods performing token pruning before the LLM (outer-LLM pruning) primarily address spatial redundancy within individual frames or limited temporal windows, neglecting the crucial global temporal dynamics and correlations across longer video sequences. This leads to sub-optimal spatio-temporal reduction and does not leverage video compressibility fully. Crucially, the synergistic potential and mutual influence of combining these strategies remain unexplored. 
To further reduce redundancy, we introduce HoliTom, a novel training-free holistic token merging framework. HoliTom employs outer-LLM pruning through global redundancy-aware temporal segmentation, followed by spatial-temporal merging to reduce visual tokens by over 90%, significantly alleviating the LLM's computational burden. Complementing this, we introduce a robust inner-LLM token similarity-based merging approach, designed for superior performance and compatibility with outer-LLM pruning. Evaluations demonstrate our method's promising efficiency-performance trade-off on LLaVA-OneVision-7B, reducing computational costs to 6.9% of FLOPs while maintaining 99.1% of the original performance. Furthermore, we achieve a 2.28\u00d7 reduction in Time-To-First-Token (TTFT) and a 1.32\u00d7 acceleration in decoding throughput, highlighting the practical benefits of our integrated pruning approach for efficient video LLMs inference.", "arxiv_id": "2505.21334v3", "arxiv_authors": ["Kele Shao", "Keda Tao", "Can Qin", "Haoxuan You", "Yang Sui", "Huan Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a27e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1374265, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ed"}, "filepath": "data/2505.17645v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999907932614822, "type": "Poster", "name": "HoloLLM: Multisensory Foundation Model for Language-Grounded Human Sensing and Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117109", "abstract": "Embodied agents operating in smart homes must understand human behavior through diverse sensory inputs and communicate via natural language. While Vision-Language Models (VLMs) have enabled impressive language-grounded perception, their reliance on visual data limits robustness in real-world scenarios with occlusions, poor lighting, or privacy constraints. In this paper, we introduce HoloLLM, a Multimodal Large Language Model (MLLM) that integrates uncommon but powerful sensing modalities, such as LiDAR, infrared, mmWave radar, and WiFi, to enable seamless human perception and reasoning across heterogeneous environments. We address two key challenges: (1) the scarcity of aligned modality-text data for rare sensors, and (2) the heterogeneity of their physical signal representations. To overcome these, we design a Universal Modality-Injection Projector (UMIP) that enhances pre-aligned modality embeddings with fine-grained, text-aligned features from tailored encoders via coarse-to-fine cross-attention without introducing significant alignment overhead. We further introduce a human-VLM collaborative data curation pipeline to generate paired textual annotations for sensing datasets. Extensive experiments on two newly constructed benchmarks show that HoloLLM significantly outperforms existing MLLMs, improving language-grounded human sensing accuracy by up to 30%. 
This work establishes a new foundation for real-world, language-informed multisensory embodied intelligence.", "arxiv_id": "2505.17645v1", "arxiv_authors": ["Chuhao Zhou", "Jianfei Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a27f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1048050, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ee"}, "filepath": "data/2510.05560v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998443501834432, "type": "Poster", "name": "HoloScene: Simulation\u2011Ready Interactive 3D Worlds from a Single Video", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119376", "abstract": "Digitizing the physical world into accurate simulation\u2011ready virtual environments offers significant opportunities in a variety of fields such as augmented and virtual reality, gaming, and robotics. However, current 3D reconstruction and scene-understanding methods commonly fall short in one or more critical aspects, such as geometry completeness, object interactivity, physical plausibility, photorealistic rendering, or realistic physical properties for reliable dynamic simulation. To address these limitations, we introduce HoloScene, a novel interactive 3D reconstruction framework that simultaneously achieves these requirements. HoloScene leverages a comprehensive interactive scene-graph representation, encoding object geometry, appearance, and physical properties alongside hierarchical and inter-object relationships. Reconstruction is formulated as an energy-based optimization problem, integrating observational data, physical constraints, and generative priors into a unified, coherent objective. Optimization is efficiently performed via a hybrid approach combining sampling-based exploration with gradient-based refinement. The resulting digital twins exhibit complete and precise geometry, physical stability, and realistic rendering from novel viewpoints. 
Evaluations conducted on multiple benchmark datasets demonstrate superior performance, while practical use-cases in interactive gaming and real-time digital-twin manipulation illustrate HoloScene's broad applicability and effectiveness.", "arxiv_id": "2510.05560v1", "arxiv_authors": ["Hongchi Xia", "Chih-Hao Lin", "Hao-Yu Hsu", "Quentin Leboutet", "Katelyn Gao", "Michael Paulitsch", "Benjamin Ummenhofer", "Shenlong Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a280"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.498Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3209180, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ef"}, "filepath": "data/2506.09650v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993996611784866, "type": "Poster", "name": "HopaDIFF: Holistic-Partial Aware Fourier Conditioned Diffusion for Referring Human Action Segmentation in Multi-Person Scenarios", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115245", "abstract": "Action segmentation is a core challenge in high-level video understanding, aiming to partition untrimmed videos into segments and assign each a label from a predefined action set. Existing methods primarily address single-person activities with fixed action sequences, overlooking multi-person scenarios. In this work, we pioneer textual reference-guided human action segmentation in multi-person settings, where a textual description specifies the target person for segmentation. We introduce the first dataset for Referring Human Action Segmentation, i.e., RHAS133, built from 133 movies and annotated with 137 fine-grained actions with 33h video data, together with textual descriptions for this new task. Benchmarking existing action recognition methods on RHAS133 using VLM-based feature extractors reveals limited performance and poor aggregation of visual cues for the target person. To address this, we propose aholistic-partial aware Fourier-conditioned diffusion framework, i.e., HopaDIFF, leveraging a novel cross-input gate attentional xLSTM to enhance holistic-partial long-range reasoning and a novel Fourier condition to introduce more fine-grained control to improve the action segmentation generation. HopaDIFF achieves state-of-the-art results on RHAS133 in diverse evaluation settings. 
The dataset and code are available in the supplementary.", "arxiv_id": "2506.09650v2", "arxiv_authors": ["Kunyu Peng", "Junchao Huang", "Xiangsheng Huang", "Di Wen", "Junwei Zheng", "Yufan Chen", "Kailun Yang", "Jiamin Wu", "Chongqing Hao", "Rainer Stiefelhagen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a281"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1120019, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f0"}, "filepath": "data/2507.00833v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990783842678759, "type": "Poster", "name": "HumanoidGen: Data Generation for Bimanual Dexterous Manipulation via LLM Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118438", "abstract": "For robotic manipulation, existing robotics datasets and simulation benchmarks predominantly cater to robot-arm platforms. However, for humanoid robots equipped with dual arms and dexterous hands, simulation tasks and high-quality demonstrations are notably lacking. Bimanual dexterous manipulation is inherently more complex, as it requires coordinated arm movements and hand operations, making autonomous data collection challenging. This paper presents HumanoidGen, an automated task creation and demonstration collection framework that leverages atomic dexterous operations and LLM reasoning to generate relational constraints. Specifically, we provide spatial annotations for both assets and dexterous hands based on the atomic operations, and use an LLM planner to generate a chain of actionable spatial constraints for arm movements based on object affordances and scenes. To further improve planning ability, we employ a variant of Monte Carlo tree search to enhance LLM reasoning for long-horizon tasks and insufficient annotation. In experiments, we create a novel benchmark with augmented scenarios to evaluate the quality of the collected data. The results show that the performance of the 2D and 3D diffusion policies can scale with the generated dataset.", "arxiv_id": "2507.00833v1", "arxiv_authors": ["Zhi Jing", "Siyuan Yang", "Jicong Ao", "Ting Xiao", "Yugang Jiang", "Chenjia Bai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a282"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1007987, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f1"}, "filepath": "data/2510.20322v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999473943045976, "type": "Poster", "name": "HyperET: Efficient Training in Hyperbolic Space for Multi-modal Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118372", "abstract": "Multi-modal large language models (MLLMs) have emerged as a transformative approach for aligning visual and textual understanding. They typically require extremely high computational resources (e.g., thousands of GPUs) for training to achieve cross-modal alignment at multi-granularity levels. 
We argue that a key source of this inefficiency lies in the vision encoders they are widely equipped with, e.g., CLIP and SAM, which lack alignment with language at multi-granularity levels. To address this issue, in this paper, we leverage hyperbolic space, which inherently models hierarchical levels and thus provides a principled framework for bridging the granularity gap between visual and textual modalities at an arbitrary granularity level. Concretely, we propose an efficient training paradigm for MLLMs, dubbed HyperET, which can optimize visual representations to align with their textual counterparts at an arbitrary granularity level through dynamic hyperbolic radius adjustment in hyperbolic space. HyperET employs learnable matrices with Möbius multiplication operations, implemented via three effective configurations: diagonal scaling matrices, block-diagonal matrices, and banded matrices, providing a flexible yet efficient parametrization strategy. Comprehensive experiments across multiple MLLM benchmarks demonstrate that HyperET consistently improves both existing pre-training and fine-tuning MLLMs by large margins with less than 1\\% additional parameters.", "arxiv_id": "2510.20322v1", "arxiv_authors": ["Zelin Peng", "Zhengqin Xu", "Qingyang Liu", "Xiaokang Yang", "Wei Shen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a283"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1053412, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f2"}, "filepath": "data/2507.11932v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997204910084708, "type": "Poster", "name": "Hyperphantasia: A Benchmark for Evaluating the Mental Visualization Capabilities of Multimodal LLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121686", "abstract": "Mental visualization, the ability to construct and manipulate visual representations internally, is a core component of human cognition and plays a vital role in tasks involving reasoning, prediction, and abstraction. Despite the rapid progress of Multimodal Large Language Models (MLLMs), current benchmarks primarily assess passive visual perception, offering limited insight into the more active capability of internally constructing visual patterns to support problem solving. Yet mental visualization is a critical cognitive skill in humans, supporting abilities such as spatial navigation, predicting physical trajectories, and solving complex visual problems through imaginative simulation. To bridge this gap, we introduce Hyperphantasia, a synthetic benchmark designed to evaluate the mental visualization abilities of MLLMs through four carefully constructed puzzles. Each task is procedurally generated and presented at three difficulty levels, enabling controlled analysis of model performance across increasing complexity. Our comprehensive evaluation of state-of-the-art models reveals a substantial gap between the performance of humans and MLLMs. Additionally, we explore the potential of reinforcement learning to improve visual simulation capabilities. 
Our findings suggest that while some models exhibit partial competence in recognizing visual patterns, robust mental visualization remains an open challenge for current MLLMs.", "arxiv_id": "2507.11932v1", "arxiv_authors": ["Mohammad Shahab Sepehri", "Berk Tinaz", "Zalan Fabian", "Mahdi Soltanolkotabi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a284"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1057065, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f3"}, "filepath": "data/2509.16748v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994023151002563, "type": "Poster", "name": "HyPlaneHead: Rethinking Tri-plane-like Representations in Full-Head Image Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119389", "abstract": "Tri-plane-like representations have been widely adopted in 3D-aware GANs for head image synthesis and other 3D object/scene modeling tasks due to their efficiency. However, querying features via Cartesian coordinate projection often leads to feature entanglement, which results in mirroring artifacts. A recent work, SphereHead, attempted to address this issue by introducing spherical tri-planes based on a spherical coordinate system. While it successfully mitigates feature entanglement, SphereHead suffers from uneven mapping between the square feature maps and the spherical planes, leading to inefficient feature map utilization during rendering and difficulties in generating fine image details.Moreover, both tri-plane and spherical tri-plane representations share a subtle yet persistent issue: feature penetration across convolutional channels can cause interference between planes, particularly when one plane dominates the others (see Fig. 1). These challenges collectively prevent tri-plane-based methods from reaching their full potential. In this paper, we systematically analyze these problems for the first time and propose innovative solutions to address them. Specifically, we introduce a novel hybrid-plane (hy-plane for short) representation that combines the strengths of both planar and spherical planes while avoiding their respective drawbacks. We further enhance the spherical plane by replacing the conventional theta-phi warping with a novel near-equal-area warping strategy, which maximizes the effective utilization of the square feature map. In addition, our generator synthesizes a single-channel unified feature map instead of multiple feature maps in separate channels, thereby effectively eliminating feature penetration. 
With a series of technical improvements, our hy-plane representation enables our method, HyPlaneHead, to achieve state-of-the-art performance in full-head image synthesis.", "arxiv_id": "2509.16748v1", "arxiv_authors": ["Heyuan Li", "Kenkun Liu", "Lingteng Qiu", "Qi Zuo", "Keru Zheng", "Zilong Dong", "Xiaoguang Han"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a285"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 919950, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f4"}, "filepath": "data/2509.17083v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990253493805723, "type": "Poster", "name": "HyRF: Hybrid Radiance Fields for Memory-efficient and High-quality Novel View Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120004", "abstract": "Recently, 3D Gaussian Splatting (3DGS) has emerged as a powerful alternative to NeRF-based approaches, enabling real-time, high-quality novel view synthesis through explicit, optimizable 3D Gaussians. However, 3DGS suffers from significant memory overhead due to its reliance on per-Gaussian parameters to model view-dependent effects and anisotropic shapes. While recent works propose compressing 3DGS with neural fields, these methods struggle to capture high-frequency spatial variations in Gaussian properties, leading to degraded reconstruction of fine details. We present Hybrid Radiance Fields (HyRF), a novel scene representation that combines the strengths of explicit Gaussians and neural fields. HyRF decomposes the scene into (1) a compact set of explicit Gaussians storing only critical high-frequency parameters and (2) grid-based neural fields that predict remaining properties. To enhance representational capacity, we introduce a decoupled neural field architecture, separately modeling geometry (scale, opacity, rotation) and view-dependent color. 
Additionally, we propose a hybrid rendering scheme that composites Gaussian splatting with a neural field-predicted background, addressing limitations in distant scene representation.Experiments demonstrate that HyRF achieves state-of-the-art rendering quality while reducing model size by over 20\u00d7 compared to 3DGS and maintaining real-time performance.", "arxiv_id": "2509.17083v2", "arxiv_authors": ["Zipeng Wang", "Dan Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a286"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1110066, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f5"}, "filepath": "data/2510.22161v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994326415885395, "type": "Poster", "name": "I2-NeRF: Learning Neural Radiance Fields Under Physically-Grounded Media Interactions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118758", "abstract": "Participating in efforts to endow generative AI with the 3D physical world perception, we propose I2-NeRF, a novel neural radiance field framework that enhances isometric and isotropic metric perception under media degradation. While existing NeRF models predominantly rely on object-centric sampling, I2-NeRF introduces a reverse-stratified upsampling strategy to achieve near-uniform sampling across 3D space, thereby preserving isometry. We further present a general radiative formulation for media degradation that unifies emission, absorption, and scattering into a particle model governed by the Beer\u2013Lambert attenuation law. By matting direct and media-induced in-scatter radiance, this formulation extends naturally to complex media environments such as underwater, haze, and even low-light scenes. By treating light propagation uniformly in both vertical and horizontal directions, I2-NeRF enables isotropic metric perception and can even estimate medium properties such as water depth. Experiments on real-world datasets demonstrate that our method significantly improves both reconstruction fidelity and physical plausibility compared to existing approaches. The source code will be released.", "arxiv_id": "2510.22161v1", "arxiv_authors": ["Shuhong Liu", "Lin Gu", "Ziteng Cui", "Xuangeng Chu", "Tatsuya Harada"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a287"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1089346, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f6"}, "filepath": "data/2509.19552v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993975977479945, "type": "Poster", "name": "iFinder: Structured Zero-Shot Vision-Based LLM Grounding for Dash-Cam Video Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116305", "abstract": "Grounding large language models (LLMs) in domain-specific tasks like post-hoc dash-cam driving video analysis is challenging due to their general-purpose training and lack of structured inductive biases. 
As vision is often the sole modality available for such analysis (i.e., no LiDAR, GPS, etc.), existing video-based vision-language models (V-VLMs) struggle with spatial reasoning, causal inference, and explainability of events in the input video. To this end, we introduce iFinder, a structured semantic grounding framework that decouples perception from reasoning by translating dash-cam videos into a hierarchical, interpretable data structure for LLMs. iFinder operates as a modular, training-free pipeline that employs pretrained vision models to extract critical cues\u2014object pose, lane positions, and object trajectories\u2014which are hierarchically organized into frame- and video-level structures. Combined with a three-block prompting strategy, it enables step-wise, grounded reasoning for the LLM to refine a peer V-VLM's outputs and provide accurate reasoning.Evaluations on four public dash-cam video benchmarks show that iFinder's proposed grounding with domain-specific cues\u2014especially object orientation and global context\u2014significantly outperforms end-to-end V-VLMs on four zero-shot driving benchmarks, with up to 39% gains in accident reasoning accuracy. By grounding LLMs with driving domain-specific representations, iFinder offers a zero-shot, interpretable, and reliable alternative to end-to-end V-VLMs for post-hoc driving video understanding.", "arxiv_id": "2509.19552v2", "arxiv_authors": ["Manyi Yao", "Bingbing Zhuang", "Sparsh Garg", "Amit Roy-Chowdhury", "Christian Shelton", "Manmohan Chandraker", "Abhishek Aich"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a288"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1113614, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f7"}, "filepath": "data/2506.03150v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996739528965977, "type": "Poster", "name": "IllumiCraft: Unified Geometry and Illumination Diffusion for Controllable Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115390", "abstract": "Although diffusion-based models can generate high-quality and high-resolution video sequences from textual or image inputs, they lack explicit integration of geometric cues when controlling scene lighting and visual appearance across frames. To address this limitation, we propose IllumiCraft, an end-to-end diffusion framework accepting three complementary inputs: (1) high-dynamic-range (HDR) video maps for detailed lighting control; (2) synthetically relit frames with randomized illumination changes (optionally paired with a static background reference image) to provide appearance cues; and (3) 3D point tracks that capture precise 3D geometry information. By integrating the lighting, appearance, and geometry cues within a unified diffusion architecture, IllumiCraft generates temporally coherent videos aligned with user-defined prompts. 
It supports the background-conditioned and text-conditioned video relighting and provides better fidelity than existing controllable video generation methods.", "arxiv_id": "2506.03150v1", "arxiv_authors": ["Yuanze Lin", "Yi-Wen Chen", "Yi-Hsuan Tsai", "Ronald Clark", "Ming-Hsuan Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a289"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4253720, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f8"}, "filepath": "data/2506.04158v1.png", "tags": [], "_media_type": "image", "_rand": 0.999327991563095, "type": "Poster", "name": "Image Editing As Programs with Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118240", "abstract": "While diffusion models have achieved remarkable success in text-to-image generation, they encounter significant challenges with instruction-driven image editing. Our research highlights a key challenge: these models particularly struggle with structurally-inconsistent edits that involve substantial layout changes. To address this gap, we introduce Image Editing As Programs (IEAP), a unified image editing framework built upon the Diffusion Transformer (DiT) architecture. Specifically, IEAP deals with complex instructions by decomposing them into a sequence of programmable atomic operations. Each atomic operation manages a specific type of structurally consistent edit; when sequentially combined, IEAP enables the execution of arbitrary, structurally-inconsistent transformations. This reductionist approach enables IEAP to robustly handle a wide spectrum of edits, encompassing both structurally-consistent and -inconsistent changes. Extensive experiments demonstrate that IEAP significantly outperforms state-of-the-art methods on standard benchmarks across various editing scenarios. In these evaluations, our framework delivers superior accuracy and semantic fidelity, particularly for complex, multi-step instructions.", "arxiv_id": "2506.04158v1", "arxiv_authors": ["Yujia Hu", "Songhua Liu", "Zhenxiong Tan", "Xingyi Yang", "Xinchao Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a28a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5897715, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6f9"}, "filepath": "data/2509.20234v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999468790372114, "type": "Poster", "name": "ImageNet-trained CNNs are not biased towards texture: Revisiting feature reliance through controlled suppression", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116594", "abstract": "The hypothesis that Convolutional Neural Networks (CNNs) are inherently texture-biased has shaped much of the discourse on feature use in deep learning. We revisit this hypothesis by examining limitations in the cue-conflict experiment by Geirhos et al. 
To address these limitations, we propose a domain-agnostic framework that quantifies feature reliance through systematic suppression of shape, texture, and color cues, avoiding the confounds of forced-choice conflicts. By evaluating humans and neural networks under controlled suppression conditions, we find that CNNs are not inherently texture-biased but predominantly rely on local shape features. Nonetheless, this reliance can be substantially mitigated through modern training strategies or architectures (ConvNeXt, ViTs). We further extend the analysis across computer vision, medical imaging, and remote sensing, revealing that reliance patterns differ systematically: computer vision models prioritize shape, medical imaging models emphasize color, and remote sensing models exhibit a stronger reliance towards texture.", "arxiv_id": "2509.20234v2", "arxiv_authors": ["Tom Burgert", "Oliver Stoll", "Paolo Rota", "Beg\u00fcm Demir"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a28b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.499Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1035733, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6fa"}, "filepath": "data/2510.12119v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997283209432786, "type": "Poster", "name": "ImageSentinel: Protecting Visual Datasets from Unauthorized Retrieval-Augmented Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118186", "abstract": "The widespread adoption of Retrieval-Augmented Image Generation (RAIG) has raised significant concerns about the unauthorized use of private image datasets. While these systems have shown remarkable capabilities in enhancing generation quality through reference images, protecting visual datasets from unauthorized use in such systems remains a challenging problem. Traditional digital watermarking approaches face limitations in RAIG systems, as the complex feature extraction and recombination processes fail to preserve watermark signals during generation. To address these challenges, we propose ImageSentinel, a novel framework for protecting visual datasets in RAIG. Our framework synthesizes sentinel images that maintain visual consistency with the original dataset. These sentinels enable protection verification through randomly generated character sequences that serve as retrieval keys. To ensure seamless integration, we leverage vision-language models to generate the sentinel images. 
Experimental results demonstrate that ImageSentinel effectively detects unauthorized dataset usage while preserving generation quality for authorized applications.", "arxiv_id": "2510.12119v1", "arxiv_authors": ["Ziyuan Luo", "Yangyi Zhao", "Ka Chun Cheung", "Simon See", "Renjie Wan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a28c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1066469, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6fb"}, "filepath": "data/2502.09664v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990482511968144, "type": "Poster", "name": "Image Super-Resolution with Guarantees via Conformalized Generative Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117061", "abstract": "The increasing use of generative ML foundation models for image restoration tasks such as super-resolution calls for robust and interpretable uncertainty quantification methods. We address this need by presenting a novel approach based on conformal prediction techniques to create a `confidence mask' capable of reliably and intuitively communicating where the generated image can be trusted. Our method is adaptable to any black-box generative model, including those locked behind an opaque API, requires only easily attainable data for calibration, and is highly customizable via the choice of a local image similarity metric. We prove strong theoretical guarantees for our method that span fidelity error control (according to our local image similarity metric), reconstruction quality, and robustness in the face of data leakage. Finally, we empirically evaluate these results and establish our method's solid performance.", "arxiv_id": "2502.09664v2", "arxiv_authors": ["Eduardo Adame", "Daniel Csillag", "Guilherme Tegoni Goedert"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a28d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 921156, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6fc"}, "filepath": "data/2505.21547v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994179493350528, "type": "Poster", "name": "Image Token Matters: Mitigating Hallucination in Discrete Tokenizer-based Large Vision-Language Models via Latent Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117602", "abstract": "Large Vision-Language Models (LVLMs) with discrete image tokenizers unify multimodal representations by encoding visual inputs into a finite set of tokens. Despite their effectiveness, we find that these models still hallucinate non-existent objects. We hypothesize that one reason is due to visual priors induced during training: when certain image tokens frequently co-occur in the same spatial regions and represent shared objects, they become strongly associated with the verbalizations of those objects. As a result, the model may hallucinate by evoking visually absent tokens that often co-occur with present ones. 
To test this assumption, we construct a co-occurrence graph of image tokens using a segmentation dataset and employ a Graph Neural Network (GNN) with contrastive learning followed by a clustering method to group tokens that frequently co-occur in similar visual contexts. We find that hallucinations predominantly correspond to clusters whose tokens dominate the input, and more specifically, that the visually absent tokens in those clusters show much higher correlation with hallucinated objects compared to tokens present in the image. Based on this observation, we propose a hallucination mitigation method that suppresses the influence of visually absent tokens by modifying latent image embeddings during generation. Experiments show our method reduces hallucinations while preserving expressivity.", "arxiv_id": "2505.21547v1", "arxiv_authors": ["Weixing Wang", "Zifeng Ding", "Jindong Gu", "Rui Cao", "Christoph Meinel", "Gerard de Melo", "Haojin Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a28e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1022822, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6fd"}, "filepath": "data/2412.03552v1.png", "tags": [], "_media_type": "image", "_rand": 0.999269959299707, "type": "Poster", "name": "Imagine360: Immersive 360 Video Generation from Perspective Anchor", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119364", "abstract": "$360^\\circ$ videos offer a hyper-immersive experience that allows the viewers to explore a dynamic scene from full 360 degrees. To achieve more accessible and personalized content creation in $360^\\circ$ video format, we seek to lift standard perspective videos into $360^\\circ$ equirectangular videos. To this end, we introduce **Imagine360**, the first perspective-to-$360^\\circ$ video generation framework that creates high-quality $360^\\circ$ videos with rich and diverse motion patterns from video anchors.Imagine360 learns fine-grained spherical visual and motion patterns from limited $360^\\circ$ video data with several key designs. **1)** Firstly we adopt the dual-branch design, including a perspective and a panorama video denoising branch to provide local and global constraints for $360^\\circ$ video generation, with motion module and spatial LoRA layers fine-tuned on $360^\\circ$ videos.**2)** Additionally, an antipodal mask is devised to capture long-range motion dependencies, enhancing the reversed camera motion between antipodal pixels across hemispheres.**3)** To handle diverse perspective video inputs, we propose rotation-aware designs that adapt to varying video masking due to changing camera poses across frames.**4)** Lastly, we introduce a new 360 video dataset featuring 10K high-quality, trimmed 360 video clips with structured motion to facilitate training.Extensive experiments show Imagine360 achieves superior graphics quality and motion coherence with our curated dataset among state-of-the-art $360^\\circ$ video generation methods. 
We believe Imagine360 holds promise for advancing personalized, immersive $360^\\circ$ video creation.", "arxiv_id": "2412.03552v1", "arxiv_authors": ["Jing Tan", "Shuai Yang", "Tong Wu", "Jingwen He", "Yuwei Guo", "Ziwei Liu", "Dahua Lin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a28f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4727572, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6fe"}, "filepath": "data/2505.20275v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998556640438048, "type": "Poster", "name": "ImgEdit: A Unified Image Editing Dataset and Benchmark", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121410", "abstract": "Recent advancements in generative models have enabled high-fidelity text-to-image generation. However, open-source image-editing models still lag behind their proprietary counterparts, primarily due to limited high-quality data and insufficient benchmarks.To overcome these limitations, we introduce **ImgEdit**, a large-scale, high-quality image-editing dataset comprising one million carefully curated edit pairs, which contain both novel and complex single-turn edits, as well as challenging multi-turn tasks.To ensure the data quality, we employ a multi-stage pipeline that integrates a cutting-edge vision-language model, a detection model, a segmentation model, alongside task-specific in-painting procedures and strict post-processing. ImgEdit surpasses existing datasets in both task novelty and data quality.Using ImgEdit, we train **ImgEdit-E1**, an editing model using Vision Language Model to process the reference image and editing prompt, which outperforms existing open-source models on multiple tasks, highlighting the value of ImgEdit and model design.For comprehensive evaluation, we introduce **ImgEdit-Bench**, a benchmark designed to evaluate image editing performance in terms of instruction adherence, editing quality, and detail preservation.It includes a basic testsuite, a challenging single-turn suite, and a dedicated multi-turn suite. 
We evaluate both open-source and proprietary models, as well as ImgEdit-E1, providing deep analysis and actionable insights into the current behavior of image-editing models.", "arxiv_id": "2505.20275v1", "arxiv_authors": ["Yang Ye", "Xianyi He", "Zongjian Li", "Bin Lin", "Shenghai Yuan", "Zhiyuan Yan", "Bohan Hou", "Li Yuan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a290"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1096516, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a6ff"}, "filepath": "data/2405.12895v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994799596179741, "type": "Poster", "name": "Implicit-ARAP: Efficient Handle-Guided Neural Field Deformation via Local Patch Meshing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115014", "abstract": "Neural fields have emerged as a powerful representation for 3D geometry, enabling compact and continuous modeling of complex shapes. Despite their expressive power, manipulating neural fields in a controlled and accurate manner -- particularly under spatial constraints -- remains an open challenge, as existing approaches struggle to balance surface quality, robustness, and efficiency. We address this by introducing a novel method for handle-guided neural field deformation, which leverages discrete local surface representations to optimize the As-Rigid-As-Possible deformation energy. To this end, we propose the local patch mesh representation, which discretizes level sets of a neural signed distance field by projecting and deforming flat mesh patches guided solely by the SDF and its gradient. We conduct a comprehensive evaluation showing that our method consistently outperforms baselines in deformation quality, robustness, and computational efficiency. We also present experiments that motivate our choice of discretization over marching cubes. By bridging classical geometry processing and neural representations through local patch meshing, our work enables scalable, high-quality deformation of neural fields and paves the way for extending other geometric tasks to neural domains.", "arxiv_id": "2405.12895v3", "arxiv_authors": ["Daniele Baieri", "Filippo Maggioli", "Emanuele Rodol\u00e0", "Simone Melzi", "Zorah L\u00e4hner"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a291"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 938850, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a700"}, "filepath": "data/2510.23145v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993281917377763, "type": "Poster", "name": "Implicit Modeling for Transferability Estimation of Vision Foundation Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118099", "abstract": "Transferability estimation identifies the best pre-trained models for downstream tasks without incurring the high computational cost of full fine-tuning. This capability facilitates deployment and advances the pre-training and fine-tuning paradigm. 
However, existing methods often struggle to accurately assess transferability for emerging pre-trained models with diverse architectures, training strategies, and task alignments. In this work, we propose Implicit Transferability Modeling (ITM), a novel framework that implicitly models each model\u2019s intrinsic transferability, coupled with a Divide-and-Conquer Variational Approximation (DVA) strategy to efficiently approximate embedding space evolution. This design enables generalization across a broader range of models and downstream tasks. Extensive experiments on a comprehensive benchmark\u2014spanning fuller training regimes and a wider variety of model types\u2014demonstrate that ITM consistently outperforms existing methods in terms of stability, effectiveness, and efficiency.", "arxiv_id": "2510.23145v1", "arxiv_authors": ["Yaoyan Zheng", "Huiqun Wang", "Nan Zhou", "Di Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a292"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1022412, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a701"}, "filepath": "data/2505.23757v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993977551891868, "type": "Poster", "name": "Impromptu VLA: Open Weights and Open Data for Driving Vision-Language-Action Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121495", "abstract": "Vision-Language-Action (VLA) models for autonomous driving show promise but falter in unstructured corner case scenarios, largely due to a scarcity of targeted benchmarks. To address this, we introduce Impromptu VLA. Our core contribution is the Impromptu VLA Dataset: over 80,000 meticulously curated video clips, distilled from over 2M source clips sourced from 8 open-source large-scale datasets. This dataset is built upon our novel taxonomy of four challenging unstructured categories and features rich, planning-oriented question-answering annotations and action trajectories. Crucially, experiments demonstrate that VLAs trained with our dataset achieve substantial performance gains on established benchmarks\u2014improving closed-loop NeuroNCAP scores and collision rates, and reaching near state-of-the-art L2 accuracy in open-loop nuScenes trajectory prediction. Furthermore, our Q&A suite serves as an effective diagnostic, revealing clear VLM improvements in perception, prediction, and planning. 
Our code, data and models are available at https://anonymous.4open.science/r/Impromptu-VLA-54ED/", "arxiv_id": "2505.23757v1", "arxiv_authors": ["Haohan Chi", "Huan-ang Gao", "Ziming Liu", "Jianing Liu", "Chenyu Liu", "Jinwei Li", "Kaisen Yang", "Yangcheng Yu", "Zeda Wang", "Wenyi Li", "Leichen Wang", "Xingtao Hu", "Hao Sun", "Hang Zhao", "Hao Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a293"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1635709, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a702"}, "filepath": "data/2506.15564v3.png", "tags": [], "_media_type": "image", "_rand": 0.9995893450114853, "type": "Poster", "name": "Improved Native Unified Multimodal Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119700", "abstract": "This paper presents improved native unified multimodal models that leverage autoregressive modeling and flow matching. Built upon a 3D causal variational autoencoder space, unified visual representations are constructed through a dual-path of spatial (-temporal) fusion, enabling scalability across image and video modalities while ensuring effective multimodal understanding and generation. Based on a language model, autoregressive modeling and flow matching are natively applied to the language head and flow head, respectively, to facilitate text token prediction and image/video generation. A two-stage training recipe is designed to effectively learn and scale to larger models. The resulting model demonstrates versatility in handling a wide range of multimodal understanding and generation tasks across diverse modalities, including text, images, and videos. The training code and pre-trained models will be fully open-sourced.", "arxiv_id": "2506.15564v3", "arxiv_authors": ["Jinheng Xie", "Zhenheng Yang", "Mike Zheng Shou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a294"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1073346, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a703"}, "filepath": "data/2510.21250v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993602996674206, "type": "Poster", "name": "Improved Training Technique for Shortcut Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115998", "abstract": "Shortcut models represent a promising pathway for generative modeling, supporting one-step, few-step, and many-step sampling without adversarial training. Yet their practical adoption has been hindered by several key issues: the need to fix its classifier-free guidance scale at training time, which limits flexibility at inference; high variance from joint training on random noise\u2013data pairs, which decelerates and destabilizes convergence; and reliance on low-level distance on direct domain that bias reconstructions toward low frequencies and degrade sample quality. 
Moreover, we uncover a previously overlooked problem of accumulation scale in classifier-free guidance and a subtle conflict between EMA updates and the self-consistency objective. To address these challenges, we introduce a unified training framework for shortcut models that (1) parameterizes guidance scales to support dynamic guidance sampling at inference, (2) mitigates frequency bias with a Multi-level Wavelet Loss, (3) incorporates interval guidance directly into the loss, (4) reduces training variance via Scaling Optimal Transport Matching, and (5) preserves self-consistency alongside training stability through a Twin EMA strategy. Extensive experiments on ImageNet $256\\times256$ demonstrate that our approach yields substantial FID improvements over baseline shortcut models in one-step, few-step, and multi-step generation.", "arxiv_id": "2510.21250v1", "arxiv_authors": ["Anh Nguyen", "Viet Nguyen", "Duc Vu", "Trung Dao", "Chi Tran", "Toan Tran", "Anh Tran"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a295"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083692, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a704"}, "filepath": "data/2508.13822v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996051884474754, "type": "Poster", "name": "Improving Deep Learning for Accelerated MRI With Data Filtering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121672", "abstract": "Deep neural networks achieve state-of-the-art results for accelerated MRI reconstruction. Most research on deep learning based imaging focuses on improving neural network architectures trained and evaluated on fixed and homogeneous training and evaluation data. In this work, we investigate data curation strategies for improving MRI reconstruction. We assemble a large dataset of raw k-space data from 18 public sources and construct a diverse evaluation set comprising 48 test sets, capturing variations in anatomy, contrast, number of coils, and other key factors. We propose and study different data filtering strategies to enhance performance of current state-of-the-art neural networks for accelerated MRI reconstruction. Our experiments show that filtering the training data leads to consistent, albeit modest, performance gains. 
These performance gains are robust across different training set sizes and accelerations, and we find that filtering is particularly beneficial when the proportion of in-distribution data in the unfiltered training set is low.", "arxiv_id": "2508.13822v1", "arxiv_authors": ["Kang Lin", "Anselm Krainovic", "Kun Wang", "Reinhard Heckel"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a296"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1069230, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a705"}, "filepath": "data/2503.10103v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993246940651832, "type": "Poster", "name": "Improving Diffusion-based Inverse Algorithms under Few-Step Constraint via Linear Extrapolation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119148", "abstract": "Diffusion-based inverse algorithms have shown remarkable performance across various inverse problems, yet their reliance on numerous denoising steps incurs high computational costs. While recent developments of fast diffusion ODE solvers offer effective acceleration for diffusion sampling without observations, their application in inverse problems remains limited due to the heterogeneous formulations of inverse algorithms and their prevalent use of approximations and heuristics, which often introduce significant errors that undermine the reliability of analytical solvers. In this work, we begin with an analysis of ODE solvers for inverse problems that reveals a linear combination structure of approximations for the inverse trajectory. Building on this insight, we propose a canonical form that unifies a broad class of diffusion-based inverse algorithms and facilitates the design of more generalizable solvers. Inspired by the linear subspace search strategy, we propose Learnable Linear Extrapolation (LLE), a lightweight approach that universally enhances the performance of any diffusion-based inverse algorithm conforming to our canonical form. LLE optimizes the combination coefficients to refine current predictions using previous estimates, alleviating the sensitivity of analytical solvers for inverse algorithms. 
Extensive experiments demonstrate consistent improvements of the proposed LLE method across multiple algorithms and tasks, indicating its potential for more efficient solutions and boosted performance of diffusion-based inverse algorithms with limited steps.", "arxiv_id": "2503.10103v3", "arxiv_authors": ["Jiawei Zhang", "Ziyuan Liu", "Leon Yan", "Gen Li", "Yuantao Gu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a297"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 950804, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a706"}, "filepath": "data/2510.12392v1.png", "tags": [], "_media_type": "image", "_rand": 0.999465552305151, "type": "Poster", "name": "Improving Generative Behavior Cloning via Self-Guidance and Adaptive Chunking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118955", "abstract": "Generative Behavior Cloning (GBC) is a simple yet effective framework for robot learning, particularly in multi-task settings. Recent GBC methods often employ diffusion policies with open-loop (OL) control, where actions are generated via a diffusion process and executed in multi-step chunks without replanning. While this approach has demonstrated strong success rates and generalization, its inherent stochasticity can result in erroneous action sampling, occasionally leading to unexpected task failures. Moreover, OL control suffers from delayed responses, which can degrade performance in noisy or dynamic environments. To address these limitations, we propose two novel techniques to enhance the consistency and reactivity of diffusion policies: (1) self-guidance, which improves action fidelity by leveraging past observations and implicitly promoting future-aware behavior; and (2) adaptive chunking, which selectively updates action sequences when the benefits of reactivity outweigh the need for temporal consistency. Extensive experiments show that our approach substantially improves GBC performance across a wide range of simulated and real-world robotic manipulation tasks.", "arxiv_id": "2510.12392v1", "arxiv_authors": ["Junhyuk So", "Chiwoong Lee", "Shinyoung Lee", "Jungseul Ok", "Eunhyeok Park"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a298"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.500Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1073280, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a707"}, "filepath": "data/2506.19839v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992824974257923, "type": "Poster", "name": "Improving Progressive Generation with Decomposable Flow Matching", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120013", "abstract": "Generating high-dimensional visual modalities is a computationally intensive task. A common solution is progressive generation, where the outputs are synthesized in a coarse-to-fine spectral autoregressive manner. While diffusion models benefit from the coarse-to-fine nature of denoising, explicit multi-stage architectures are rarely adopted. 
These architectures have increased the complexity of the overall approach, introducing the need for a custom diffusion formulation, decomposition-dependent stage transitions, ad-hoc samplers, or a model cascade. Our contribution, Decomposable Flow Matching (DFM), is a simple and effective framework for the progressive generation of visual media. DFM applies Flow Matching independently at each level of a user-defined multi-scale representation (such as a Laplacian pyramid). As shown by our experiments, our approach improves visual quality for both images and videos, featuring superior results compared to prior multistage frameworks. On ImageNet-1k 512px, DFM achieves 35.2% improvements in FDD scores over the base architecture and 26.4% over the best-performing baseline, under the same training compute. When applied to finetuning of large models, such as FLUX, DFM shows faster convergence speed to the training distribution. Crucially, all these advantages are achieved with a single model, architectural simplicity, and minimal modifications to existing training pipelines.", "arxiv_id": "2506.19839v1", "arxiv_authors": ["Moayed Haji-Ali", "Willi Menapace", "Ivan Skorokhodov", "Arpit Sahni", "Sergey Tulyakov", "Vicente Ordonez", "Aliaksandr Siarohin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a299"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 981616, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a708"}, "filepath": "data/2501.13918v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990642314359237, "type": "Poster", "name": "Improving Video Generation with Human Feedback", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116150", "abstract": "Video generation has achieved significant advances through rectified flow techniques, but issues like unsmooth motion and misalignment between videos and prompts persist. In this work, we develop a systematic pipeline that harnesses human feedback to mitigate these problems and refine the video generation model. Specifically, we begin by constructing a large-scale human preference dataset focused on modern video generation models, incorporating pairwise annotations across multiple dimensions. We then introduce VideoReward, a multi-dimensional video reward model, and examine how annotations and various design choices impact its rewarding efficacy. From a unified reinforcement learning perspective aimed at maximizing reward with KL regularization, we introduce three alignment algorithms for flow-based models. These include two training-time strategies: direct preference optimization for flow (Flow-DPO) and reward weighted regression for flow (Flow-RWR), and an inference-time technique, Flow-NRG, which applies reward guidance directly to noisy videos. Experimental results indicate that VideoReward significantly outperforms existing reward models, and Flow-DPO demonstrates superior performance compared to both Flow-RWR and supervised fine-tuning methods. 
Additionally, Flow-NRG lets users assign custom weights to multiple objectives during inference, meeting personalized video quality needs.", "arxiv_id": "2501.13918v2", "arxiv_authors": ["Jie Liu", "Gongye Liu", "Jiajun Liang", "Ziyang Yuan", "Xiaokun Liu", "Mingwu Zheng", "Xiele Wu", "Qiulin Wang", "Menghan Xia", "Xintao Wang", "Xiaohong Liu", "Fei Yang", "Pengfei Wan", "Di Zhang", "Kun Gai", "Yujiu Yang", "Wanli Ouyang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a29a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1117872, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a709"}, "filepath": "data/2506.01413v8.png", "tags": [], "_media_type": "image", "_rand": 0.9991820678418881, "type": "Poster", "name": "Incentivizing Reasoning for Advanced Instruction-Following of Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115410", "abstract": "Existing large language models (LLMs) face challenges of following complex instructions, especially when multiple constraints are present and organized in paralleling, chaining, and branching structures. One intuitive solution, namely chain-of-thought (CoT), is expected to universally improve capabilities of LLMs. However, we find that the vanilla CoT exerts a negative impact on performance due to its superficial reasoning pattern of simply paraphrasing the instructions. It fails to peel back the compositions of constraints for identifying their relationship across hierarchies of types and dimensions. To this end, we propose a systematic method to boost LLMs in dealing with complex instructions via incentivizing reasoning for test-time compute scaling. First, we stem from the decomposition of complex instructions under existing taxonomies and propose a reproducible data acquisition method. Second, we exploit reinforcement learning (RL) with verifiable rule-centric reward signals to cultivate reasoning specifically for instruction following. We address the shallow, non-essential nature of reasoning under complex instructions via sample-wise contrast for superior CoT enforcement. We also exploit behavior cloning of experts to facilitate steady distribution shift from fast-thinking LLMs to skillful reasoners. Extensive evaluations on seven comprehensive benchmarks confirm the validity of the proposed method, where a 1.5B LLM achieves 11.74% gains with performance comparable to a 8B LLM. 
Codes and data are available at https://anonymous.4open.science/r/IRAIF-B3A0/README.md", "arxiv_id": "2506.01413v8", "arxiv_authors": ["Yulei Qin", "Gang Li", "Zongyi Li", "Zihan Xu", "Yuchen Shi", "Zhekai Lin", "Xiao Cui", "Ke Li", "Xing Sun"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a29b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1260374, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a70a"}, "filepath": "data/2510.13887v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990218363852292, "type": "Poster", "name": "Incomplete Multi-view Clustering via Hierarchical Semantic Alignment and Cooperative Completion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118940", "abstract": "To address the challenges of insufficient semantic alignment across heterogeneous views, rigid fusion strategies, and the decoupling of missing view imputation from representation learning in incomplete multi-view clustering, this paper proposes HSACC, a novel framework based on Hierarchical Semantic Alignment and Cooperative Completion.HSACC introduces a dual-level semantic space design. In the low-level semantic space, consistency alignment is achieved by maximizing mutual information across views. In the high-level semantic space, adaptive view weights are dynamically assigned based on the distribution affinity between individual views and an initial fused representation. These weights are then used to perform weighted fusion, generating a unified global representation.Additionally, HSACC implicitly recovers missing views by projecting aligned latent representations into high-dimensional semantic spaces and jointly optimizes reconstruction and clustering to enable cooperative learning.Experimental results demonstrate that HSACC significantly outperforms state-of-the-art methods on five benchmark datasets. Ablation studies validate the effectiveness of the hierarchical alignment and dynamic weighting mechanisms, while parameter analysis confirms the model's robustness to hyperparameter variations.", "arxiv_id": "2510.13887v2", "arxiv_authors": ["Xiaojian Ding", "Lin Zhao", "Xian Li", "Xiaoying Zhu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a29c"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1056782, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a70b"}, "filepath": "data/2503.22983v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991856093281943, "type": "Poster", "name": "indiSplit: Bringing Severity Cognizance to Image Decomposition in Fluorescence Microscopy", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115037", "abstract": "Fluorescence microscopy, while being a key driver for progress in the life sciences, is also subject to technical limitations. To overcome them, computational multiplexing techniques have recently been proposed, which allow multiple cellular structures to be captured in a single image and later be unmixed. 
Existing image decomposition methods are trained on a set of superimposed input images and the respective unmixed target images. It is critical to note that the relative strength (mixing ratio) of the superimposed images for a given input is a priori unknown. However, existing methods are trained on a fixed intensity ratio of superimposed inputs, making them not cognizant to the range of relative intensities that can occur in fluorescence microscopy. In this work, we propose a novel method called indiSplit that is cognizant of the severity of the above-mentioned mixing ratio. Our idea is based on InDI, a popular iterative method for image restoration, and an ideal starting point to embrace the unknown mixing ratio in any given input. We introduce $(i)$ a suitably trained regressor network that predicts the degradation level (mixing asymmetry) of a given input image and $(ii)$ a degradation-specific normalization module, enabling degradation-aware inference across all mixing ratios. We show that this method solves two relevant tasks in fluorescence microscopy, namely image splitting and bleedthrough removal and empirically demonstrate the applicability of indiSplit on $5$ public datasets.", "arxiv_id": "2503.22983v3", "arxiv_authors": ["Ashesh Ashesh", "Florian Jug"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a29d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1019542, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a70c"}, "filepath": "data/2505.20640v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992297971409292, "type": "Poster", "name": "IndustryEQA: Pushing the Frontiers of Embodied Question Answering in Industrial Scenarios", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121538", "abstract": "Existing Embodied Question Answering (EQA) benchmarks primarily focus on household environments, often overlooking safety-critical aspects and reasoning processes pertinent to industrial settings. This drawback limits the evaluation of agent readiness for real-world industrial applications. To bridge this, we introduce IndustryEQA, the first benchmark dedicated to evaluating embodied agent capabilities within safety-critical industrial warehouse scenarios. Built upon the NVIDIA Isaac Sim platform, IndustryEQA provides high-fidelity episodic memory videos featuring diverse industrial assets, dynamic human agents, and carefully designed hazardous situations inspired by real-world safety guidelines. The benchmark includes rich annotations covering six categories: equipment safety, human safety, object recognition, attribute recognition, temporal understanding, and spatial understanding. Besides, it also provides extra reasoning evaluation based on these categories. Specifically, it comprises 971 question-answer pairs generated from small warehouse scenarios and 373 pairs from large ones, incorporating scenarios with and without human. We further propose a comprehensive evaluation framework, including various baseline models, to assess their general perception and reasoning abilities in industrial environments. 
IndustryEQA aims to steer EQA research towards developing more robust, safety-aware, and practically applicable embodied agents for complex industrial environments.", "arxiv_id": "2505.20640v1", "arxiv_authors": ["Yifan Li", "Yuhang Chen", "Anh Dao", "Lichi Li", "Zhongyi Cai", "Zhen Tan", "Tianlong Chen", "Yu Kong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a29e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1051011, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a70d"}, "filepath": "data/2503.19385v5.png", "tags": [], "_media_type": "image", "_rand": 0.9992446920371028, "type": "Poster", "name": "Inference-Time Scaling for Flow Models via Stochastic Generation and Rollover Budget Forcing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115827", "abstract": "We propose an inference-time scaling approach for pretrained flow models. Recently, inference-time scaling has gained significant attention in LLMs and diffusion models, improving sample quality or better aligning outputs with user preferences by leveraging additional computation. For diffusion models, particle sampling has allowed more efficient scaling due to the stochasticity at intermediate denoising steps. On the contrary, while flow models have gained popularity as an alternative to diffusion models--offering faster generation and high-quality outputs--efficient inference-time scaling methods used for diffusion models cannot be directly applied due to their deterministic generative process. To enable efficient inference-time scaling for flow models, we propose three key ideas: 1) SDE-based generation, enabling particle sampling in flow models, 2) Interpolant conversion, broadening the search space and enhancing sample diversity, and 3) Rollover Budget Forcing (RBF), an adaptive allocation of computational resources across timesteps to maximize budget utilization. Our experiments show that SDE-based generation and variance-preserving (VP) interpolant-based generation, improves the performance of particle sampling methods for inference-time scaling in flow models. 
Additionally, we demonstrate that RBF with VP-SDE achieves the best performance, outperforming all previous inference-time scaling approaches.", "arxiv_id": "2503.19385v5", "arxiv_authors": ["Jaihoon Kim", "Taehoon Yoon", "Jisung Hwang", "Minhyuk Sung"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a29f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3605212, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a70e"}, "filepath": "data/2501.19252v3.png", "tags": [], "_media_type": "image", "_rand": 0.999210929974176, "type": "Poster", "name": "Inference-Time Text-to-Video Alignment with Diffusion Latent Beam Search", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117121", "abstract": "The remarkable progress in text-to-video diffusion models enables photorealistic generations, although the contents of the generated video often include unnatural movement or deformation, reverse playback, and motionless scenes.Recently, an alignment problem has attracted huge attention, where we steer the output of diffusion models based on some quantity on the goodness of the content.Because there is a large room for improvement of perceptual quality along the frame direction, we should address which metrics we should optimize and how we can optimize them in the video generation.In this paper, we propose diffusion latent beam search with lookahead estimator, which can select a better diffusion latent to maximize a given alignment reward, at inference time.We then point out that the improvement of perceptual video quality considering the alignment to prompts requires reward calibration by weighting existing metrics.This is because when humans or vision language models evaluate outputs, many previous metrics to quantify the naturalness of video do not always correlate with evaluation.We demonstrate that our method improves the perceptual quality evaluated on the calibrated reward, VLMs, and human assessment, without model parameter update, and outputs the best generation compared to greedy search and best-of-N sampling under much more efficient computational cost.The experiments highlight that our method is beneficial to many capable generative models, and provide a practical guideline on that we should prioritize the inference-time compute allocation into lookahead steps for reward estimate more than search budget or denoising steps.", "arxiv_id": "2501.19252v3", "arxiv_authors": ["Yuta Oshima", "Masahiro Suzuki", "Yutaka Matsuo", "Hiroki Furuta"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1000643, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a70f"}, "filepath": "data/2506.15745v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996250006620632, "type": "Poster", "name": "InfiniPot-V: Memory-Constrained KV Cache Compression for Streaming Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116667", 
"abstract": "Modern multimodal large language models (MLLMs) can reason over hour-long video, yet their key\u2013value (KV) cache grows linearly with time\u2014quickly exceeding the fixed memory of phones, AR glasses, and edge robots. Prior compression schemes either assume the whole video and user query are available offline or must first build the full cache, so memory still scales with stream length. InfiniPot-V is the first training-free, query-agnostic framework that enforces a hard, length-independent memory cap for streaming video understanding. During video encoding it monitors the cache and, once a user-set threshold is reached, runs a lightweight compression pass that (i) removes temporally redundant tokens via Temporal-axis Redundancy (TaR) metric and (ii) keeps semantically significant tokens via Value-Norm (VaN) ranking. Across four open-source MLLMs and four long-video and two streaming-video benchmarks, InfiniPot-V cuts peak GPU memory by up to 94\\%, sustains real-time generation, and matches or surpasses full-cache accuracy\u2014even in multi-turn dialogues. By dissolving the KV-cache bottleneck without retraining or query knowledge, InfiniPot-V closes the gap for on-device streaming video assistants.", "arxiv_id": "2506.15745v2", "arxiv_authors": ["Minsoo Kim", "Kyuhong Shim", "Jungwook Choi", "Simyung Chang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a1"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1086719, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a710"}, "filepath": "data/2505.19028v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999419306972112, "type": "Poster", "name": "InfoChartQA: A Benchmark for Multimodal Question Answering on Infographic Charts", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121377", "abstract": "Understanding infographic charts with design-driven visual elements (e.g., pictograms, icons) requires both visual recognition and reasoning, posing challenges for multimodal large language models (MLLMs). However, existing visual question answering benchmarks fall short in evaluating these capabilities of MLLMs due to the lack of paired plain charts and visual-element-based questions. To bridge this gap, we introduce InfoChartQA, a benchmark for evaluating MLLMs on infographic chart understanding. It includes 5,642 pairs of infographic and plain charts, each sharing the same underlying data but differing in visual presentations. We further design visual-element-based questions to capture their unique visual designs and communicative intent. Evaluation of 20 MLLMs reveals a substantial performance decline on infographic charts, particularly for visual-element-based questions related to metaphors. The paired infographic and plain charts enable fine-grained error analysis and ablation studies, which highlight new opportunities for advancing MLLMs in infographic chart understanding. 
We release InfoChartQA at https://github.com/CoolDawnAnt/InfoChartQA.", "arxiv_id": "2505.19028v3", "arxiv_authors": ["Minzhi Lin", "Tianchi Xie", "Mengchen Liu", "Yilin Ye", "Changjian Chen", "Shixia Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 956150, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a711"}, "filepath": "data/2510.10577v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996188201953499, "type": "Poster", "name": "Injecting Frame-Event Complementary Fusion into Diffusion for Optical Flow in Challenging Scenes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116025", "abstract": "Optical flow estimation has achieved promising results in conventional scenes but faces challenges in high-speed and low-light scenes, which suffer from motion blur and insufficient illumination. These conditions lead to weakened texture and amplified noise and deteriorate the appearance saturation and boundary completeness of frame cameras, which are necessary for motion feature matching. In degraded scenes, the frame camera provides dense appearance saturation but sparse boundary completeness due to its long imaging time and low dynamic range. In contrast, the event camera offers sparse appearance saturation, while its short imaging time and high dynamic range gives rise to dense boundary completeness. Traditionally, existing methods utilize feature fusion or domain adaptation to introduce event to improve boundary completeness. However, the appearance features are still deteriorated, which severely affects the mostly adopted discriminative models that learn the mapping from visual features to motion fields and generative models that generate motion fields based on given visual features. So we introduce diffusion models that learn the mapping from noising flow to clear flow, which is not affected by the deteriorated visual features. Therefore, we propose a novel optical flow estimation framework Diff-ABFlow based on diffusion models with frame-event appearance-boundary fusion. Inspired by the appearance-boundary complementarity of frame and event, we propose an Attention-Guided Appearance-Boundary Fusion module to fuse frame and event. Based on diffusion models, we propose a Multi-Condition Iterative Denoising Decoder. Our proposed method can effectively utilize the respective advantages of frame and event, and shows great robustness to degraded input. In addition, we propose a dual-modal optical flow dataset for generalization experiments. Extensive experiments have verified the superiority of our proposed method. 
We will provide the code once the paper is accepted.", "arxiv_id": "2510.10577v1", "arxiv_authors": ["Haonan Wang", "Hanyu Zhou", "Haoyue Liu", "Luxin Yan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 982149, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a712"}, "filepath": "data/2506.10980v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994597157057654, "type": "Poster", "name": "InstaInpaint: Instant 3D-Scene Inpainting with Masked Large Reconstruction Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119642", "abstract": "Recent advances in 3D scene reconstruction enable real-time viewing in virtual and augmented reality. To support interactive operations for better immersiveness, such as moving or editing objects, 3D scene inpainting methods are proposed to repair or complete the altered geometry. However, current approaches rely on lengthy and computationally intensive optimization, making them impractical for real-time or online applications. We propose InstaInpaint, a reference-based feed-forward framework that produces 3D-scene inpainting from a 2D inpainting proposal within 0.4 seconds. We develop a self-supervised masked-finetuning strategy to enable training of our custom large reconstruction model (LRM) on the large-scale dataset. Through extensive experiments, we analyze and identify several key designs that improve generalization, textural consistency, and geometric correctness. InstaInpaint achieves a 1000$\\times$ speed-up from prior methods while maintaining state-of-the-art performance across two standard benchmarks. Moreover, we show that InstaInpaint generalizes well to flexible downstream applications such as object insertion and multi-region inpainting.", "arxiv_id": "2506.10980v1", "arxiv_authors": ["Junqi You", "Chieh Hubert Lin", "Weijie Lyu", "Zhengbo Zhang", "Ming-Hsuan Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.501Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4100177, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a713"}, "filepath": "data/2509.16691v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998653940337554, "type": "Poster", "name": "InstanceAssemble: Layout-Aware Image Generation via Instance Assembling Attention", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115776", "abstract": "Diffusion models have demonstrated remarkable capabilities in generating high-quality images. Recent advancements in Layout-to-Image (L2I) generation have leveraged positional conditions and textual descriptions to facilitate precise and controllable image synthesis. 
Despite overall progress, current L2I methods still exhibit suboptimal performance. Therefore, we propose InstanceAssemble, a novel architecture that incorporates layout conditions via instance-assembling attention, enabling position control with bounding boxes (bbox) and multimodal content control including texts and additional visual content. Our method achieves flexible adaptation to existing DiT-based T2I models through lightweight LoRA modules. Additionally, we propose Denselayout, a comprehensive Layout-to-Image benchmark containing 5k images with 90k instances in total. We further introduce Layout Grounding Score (LGS), an interpretable evaluation metric to more precisely assess the accuracy of L2I generation. Experiments demonstrate that our InstanceAssemble method achieves state-of-the-art performance under complex layout conditions, while exhibiting strong compatibility with diverse style LoRA modules.", "arxiv_id": "2509.16691v1", "arxiv_authors": ["Qiang Xiang", "Shuang Sun", "Binglei Li", "Dejia Song", "Huaxia Li", "Nemo Chen", "Xu Tang", "Yao Hu", "Junping Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5932274, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a714"}, "filepath": "data/2510.01119v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991891805370178, "type": "Poster", "name": "Instant4D: 4D Gaussian Splatting in Minutes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118893", "abstract": "Dynamic view synthesis has seen significant advances, yet reconstructing scenes from uncalibrated, casual video remains challenging due to slow optimization and complex parameter estimation. In this work, we present **Instant4D**, a monocular reconstruction system that leverages native 4D representation to efficiently process casual video sequences within minutes, without calibrated cameras or depth sensors. Our method begins with geometric recovery through deep visual SLAM, followed by grid pruning to optimize scene representation. Our design significantly reduces redundancy while maintaining geometric integrity, cutting model size to under **10%** of its original footprint. To handle temporal dynamics efficiently, we introduce a streamlined 4D Gaussian representation, achieving a **30\u00d7** speed-up and reducing training time to within two minutes, while maintaining competitive performance across several benchmarks. We further apply our model to in-the-wild videos, showcasing its generalizability. 
Our project website will be published at https://instant4d.github.io/Instant4D/", "arxiv_id": "2510.01119v1", "arxiv_authors": ["Zhanpeng Luo", "Haoxi Ran", "Li Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2655463, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a715"}, "filepath": "data/2503.24357v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993950859579058, "type": "Poster", "name": "InstructRestore: Region-Customized Image Restoration with Human Instructions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115458", "abstract": "Despite the significant progress in diffusion prior-based image restoration for real-world scenarios, most existing methods apply uniform processing to the entire image, lacking the capability to perform region-customized image restoration according to user preferences. In this work, we propose a new framework, namely InstructRestore, to perform region-adjustable image restoration following human instructions. To achieve this, we first develop a data generation engine to produce training triplets, each consisting of a high-quality image, the target region description, and the corresponding region mask. With this engine and careful data screening, we construct a comprehensive dataset comprising 536,945 triplets to support the training and evaluation of this task. We then examine how to integrate the low-quality image features under the ControlNet architecture to adjust the degree of image details enhancement. Consequently, we develop a ControlNet-like model to identify the target region and allocate different integration scales to the target and surrounding regions, enabling region-customized image restoration that aligns with user instructions. Experimental results demonstrate that our proposed InstructRestore approach enables effective human-instructed image restoration, such as images with bokeh effects and user-instructed local enhancement. Our work advances the investigation of interactive image restoration and enhancement techniques. Data, code, and models will be made publicly available.", "arxiv_id": "2503.24357v1", "arxiv_authors": ["Shuaizheng Liu", "Jianqi Ma", "Lingchen Sun", "Xiangtao Kong", "Lei Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1639605, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a716"}, "filepath": "data/2505.15818v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992957020664656, "type": "Poster", "name": "InstructSAM: A Training-free Framework for Instruction-Oriented Remote Sensing Object Recognition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119661", "abstract": "Language-Guided object recognition in remote sensing imagery is crucial for large-scale mapping and automated data annotation. 
However, existing open-vocabulary and visual grounding methods rely on explicit category cues, limiting their ability to handle complex or implicit queries that require advanced reasoning.To address this issue, we introduce a new suite of tasks, including Instruction-Oriented Object Counting, Detection, and Segmentation (InstructCDS), covering open-vocabulary, open-ended, and open-subclass scenarios. We further present EarthInstruct, the first InstructCDS benchmark for earth observation. It is constructed from two diverse remote sensing datasets with varying spatial resolutions and annotation rules across 20 categories, necessitating models to interpret dataset-specific instructions.Given the scarcity of semantically rich labeled data in remote sensing, we propose InstructSAM, a training-free framework for instruction-driven object recognition. InstructSAM leverages large vision-language models to interpret user instructions and estimate object counts, employs SAM2 for mask proposal, and formulates mask-label assignment as a binary integer programming problem. By integrating semantic similarity with counting constraints, InstructSAM efficiently assigns categories to predicted masks without relying on confidence thresholds. Experiments demonstrate that InstructSAM matches or surpasses specialized baselines across multiple tasks while maintaining near-constant inference time regardless of object count, reducing output tokens by 89\\% and overall runtime by over 32\\% compared to direct generation approaches. We believe the contributions of the proposed tasks, benchmark, and effective approach will advance future research in developing versatile object recognition systems.", "arxiv_id": "2505.15818v2", "arxiv_authors": ["Yijie Zheng", "Weijie Wu", "Qingyun Li", "Xuehui Wang", "Xu Zhou", "Aiai Ren", "Jun Shen", "Long Zhao", "Guoqing Li", "Xue Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1001760, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a717"}, "filepath": "data/2509.17401v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992428557662896, "type": "Poster", "name": "Interpreting vision transformers via residual replacement model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118006", "abstract": "How do vision transformers (ViTs) represent and process the world? This paper addresses this long-standing question through the first systematic analysis of 6.6K features across all layers, extracted via sparse autoencoders, and by introducing the residual replacement model, which replaces ViT computations with interpretable features in the residual stream. Our analysis reveals not only a feature evolution from low-level patterns to high-level semantics, but also how ViTs encode curves and spatial positions through specialized feature types. The residual replacement model scalably produces a faithful yet parsimonious circuit for human-scale interpretability by significantly simplifying the original computations. As a result, this framework enables intuitive understanding of ViT mechanisms. 
Finally, we demonstrate the utility of our framework in debiasing spurious correlations.", "arxiv_id": "2509.17401v1", "arxiv_authors": ["Jinyeong Kim", "Junhyeok Kim", "Yumin Shim", "Joohyeok Kim", "Sunyoung Jung", "Seong Jae Hwang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2a9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1031523, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a718"}, "filepath": "data/2509.07447v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990154019169379, "type": "Poster", "name": "In the Eye of MLLM: Benchmarking Egocentric Video Intent Understanding with Gaze-Guided Prompting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121584", "abstract": "The emergence of advanced multimodal large language models (MLLMs) has significantly enhanced AI assistants' ability to process complex information across modalities. Recently, egocentric videos, by directly capturing user focus, actions, and context in an unified coordinate, offer an exciting opportunity to enable proactive and personalized AI user experiences with MLLMs. However, existing benchmarks overlook the crucial role of gaze as an indicator of user intent. To address this gap, we introduce EgoGazeVQA, an egocentric gaze-guided video question answering benchmark that leverages gaze information to improve the understanding of longer daily-life videos. EgoGazeVQA consists of gaze-based QA pairs generated by MLLMs and refined by human annotators. Our experiments reveal that existing MLLMs struggle to accurately interpret user intentions using only global visual tokens. In contrast, our gaze-guided intent prompting methods significantly enhance performance by integrating spatial, temporal, and intent-related cues. We further conduct experiments on gaze-related fine-tuning and analyze how gaze estimation accuracy impacts prompting effectiveness. 
These results underscore the value of gaze for more personalized and effective AI assistants in egocentric settings.", "arxiv_id": "2509.07447v2", "arxiv_authors": ["Taiying Peng", "Jiacheng Hua", "Miao Liu", "Feng Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2aa"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1034341, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a719"}, "filepath": "data/2504.01008v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997296209488585, "type": "Poster", "name": "IntrinsiX: High-Quality PBR Generation using Image Priors", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117098", "abstract": "We introduce IntrinsiX, a novel method that generates high-quality intrinsic images from text description.In contrast to existing text-to-image models whose outputs contain baked-in scene lighting, our approach predicts physically-based rendering (PBR) maps.This enables the generated outputs to be used for content creation scenarios in core graphics applications that facilitate re-lighting, editing, and texture generation tasks. In order to train our generator, we exploit strong image priors, and pre-train separate models for each PBR material component (albedo, roughness, metallic, normals).We then align these models with a new cross-intrinsic attention formulation that concatenates key and value features in a consistent fashion. This allows us to exchange information between each output modality and to obtain semantically coherent PBR predictions.To ground each intrinsic component, we propose a rendering loss which provides image-space signals to constrain the model, thus facilitating sharp details also in the output BRDF properties. Our results demonstrate detailed intrinsic generation with strong generalization capabilities that outperforms existing intrinsic image decomposition methods used with generated images by a significant margin.Finally, we show a series of applications, including re-lighting, editing, and for the first time text-conditioned room-scale PBR texture generation.We will release the code and the pre-trained model weights.", "arxiv_id": "2504.01008v1", "arxiv_authors": ["Peter Kocsis", "Lukas H\u00f6llein", "Matthias Nie\u00dfner"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ab"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2811796, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a71a"}, "filepath": "data/2504.01689v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996536770872958, "type": "Poster", "name": "InvFussion: Bridging Supervised and Zero-shot Diffusion for Inverse Problems", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119745", "abstract": "Diffusion Models have demonstrated remarkable capabilities in handling inverse problems, offering high-quality posterior-sampling-based solutions. 
Despite significant advances, a fundamental trade-off persists regarding the way the conditioned synthesis is employed: Zero-shot approaches can accommodate any linear degradation but rely on approximations that reduce accuracy. In contrast, training-based methods model the posterior correctly, but cannot adapt to the degradation at test-time. Here we introduce InvFussion, the first training-based degradation-aware posterior sampler. InvFussion combines the best of both worlds - the strong performance of supervised approaches and the flexibility of zero-shot methods. This is achieved through a novel architectural design that seamlessly integrates the degradation operator directly into the diffusion denoiser. We compare InvFussion against existing general-purpose posterior samplers, both degradation-aware zero-shot techniques and blind training-based methods. Experiments on the FFHQ and ImageNet datasets demonstrate state-of-the-art performance. Beyond posterior sampling, we further demonstrate the applicability of our architecture, operating as a general Minimum Mean Square Error predictor, and as a Neural Posterior Principal Component estimator.", "arxiv_id": "2504.01689v1", "arxiv_authors": ["Noam Elata", "Hyungjin Chung", "Jong Chul Ye", "Tomer Michaeli", "Michael Elad"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ac"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1608091, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a71b"}, "filepath": "data/2506.20671v2.png", "tags": [], "_media_type": "image", "_rand": 0.999216147555818, "type": "Poster", "name": "IPFormer: Visual 3D Panoptic Scene Completion with Context-Adaptive Instance Proposals", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117455", "abstract": "Semantic Scene Completion (SSC) has emerged as a pivotal approach for jointly learning scene geometry and semantics, enabling downstream applications such as navigation in mobile robotics. The recent generalization to Panoptic Scene Completion (PSC) advances the SSC domain by integrating instance-level information, thereby enhancing object-level sensitivity in scene understanding. While PSC was introduced using LiDAR modality, methods based on camera images remain largely unexplored. Moreover, recent Transformer-based SSC approaches utilize a fixed set of learned queries to reconstruct objects within the scene volume. Although these queries are typically updated with image context during training, they remain static at test time, limiting their ability to dynamically adapt specifically to the observed scene. To address these limitations, we propose IPFormer, the first approach that leverages context-adaptive instance proposals at train and test time to solve vision-based 3D Panoptic Scene Completion. Specifically, IPFormer dynamically initializes these queries as panoptic instance proposals derived from image context and further refines them through attention-based encoding and decoding to reason about semantic instance-voxel relationships. 
Experimental results show that our approach surpasses state-of-the-art methods in overall panoptic metrics PQ$^\\dagger$ and PQ-All, matches performance in individual metrics, and achieves a runtime reduction exceeding 14$\\times$. Furthermore, our ablation studies reveal that dynamically deriving instance proposals from image context, as opposed to random initialization, leads to a 3.62% increase in PQ-All and a remarkable average improvement of 18.65% in combined Thing-metrics. These results underscore the effectiveness of IPFormer and highlight its introduction of context-adaptive instance proposals as a pioneering effort in addressing vision-based 3D Panoptic Scene Completion.", "arxiv_id": "2506.20671v2", "arxiv_authors": ["Markus Gross", "Aya Fahmy", "Danit Niwattananan", "Dominik Muhle", "Rui Song", "Daniel Cremers", "Henri Mee\u00df"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ad"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 992879, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a71c"}, "filepath": "data/2506.23329v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992039051364835, "type": "Poster", "name": "IR3D-Bench: Evaluating Vision-Language Model Scene Understanding as Agentic Inverse Rendering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121555", "abstract": "Vision-language models (VLMs) excel at descriptive tasks, but whether they truly understand scenes from visual observations remains uncertain. We introduce IR3D-Bench, a benchmark challenging VLMs to demonstrate understanding through active creation rather than passive recognition. Grounded in the analysis-by-synthesis paradigm, IR3D-Bench tasks Vision-Language Agents (VLAs) with actively using programming and rendering tools to recreate the underlying 3D structure of an input image, achieving agentic inverse rendering through tool use. This ''understanding-by-creating'' approach probes the tool-using generative capacity of VLAs, moving beyond the descriptive or conversational capacity measured by traditional scene understanding benchmarks. We provide a comprehensive suite of metrics to evaluate geometric accuracy, spatial relations, appearance attributes, and overall plausibility. Initial experiments on agentic inverse rendering powered by various state-of-the-art VLMs highlight current limitations, particularly in visual precision rather than basic tool usage. IR3D-Bench, including data and evaluation protocols, is released to facilitate systematic study and development of tool-using VLAs towards genuine scene understanding by creating.", "arxiv_id": "2506.23329v1", "arxiv_authors": ["Parker Liu", "Chenxin Li", "Zhengxin Li", "Yipeng Wu", "Wuyang Li", "Zhiqin Yang", "Zhenyuan Zhang", "Yunlong Lin", "Sirui Han", "Brandon Y. 
Feng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ae"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1021967, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a71d"}, "filepath": "data/2505.12335v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997830205521893, "type": "Poster", "name": "Is Artificial Intelligence Generated Image Detection a Solved Problem?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121688", "abstract": "The rapid advancement of generative models, such as GANs and Diffusion models, has enabled the creation of highly realistic synthetic images, raising serious concerns about misinformation, deepfakes, and copyright infringement. Although numerous Artificial Intelligence Generated Image (AIGI) detectors have been proposed, often reporting high accuracy, their effectiveness in real-world scenarios remains questionable. To bridge this gap, we introduce AIGIBench, a comprehensive benchmark designed to rigorously evaluate the robustness and generalization capabilities of state-of-the-art AIGI detectors. AIGIBench simulates real-world challenges through four core tasks: multi-source generalization, robustness to image degradation, sensitivity to data augmentation, and impact of test-time pre-processing. It includes 23 diverse fake image subsets that span both advanced and widely adopted image generation techniques, along with real-world samples collected from social media and AI art platforms. Extensive experiments on 11 advanced detectors demonstrate that, despite their high reported accuracy in controlled settings, these detectors suffer significant performance drops on real-world data, limited benefits from common augmentations, and nuanced effects of pre-processing, highlighting the need for more robust detection strategies. By providing a unified and realistic evaluation framework, AIGIBench offers valuable insights to guide future research toward dependable and generalizable AIGI detection.", "arxiv_id": "2505.12335v2", "arxiv_authors": ["Ziqiang Li", "Jiazhen Yan", "Ziwen He", "Kai Zeng", "Weiwei Jiang", "Lizhi Xiong", "Zhangjie Fu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2af"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1044302, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a71e"}, "filepath": "data/2510.19819v1.png", "tags": [], "_media_type": "image", "_rand": 0.999791614794238, "type": "Poster", "name": "Is This Tracker On? A Benchmark Protocol for Dynamic Tracking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121575", "abstract": "We introduce ITTO, a challenging new benchmark suite for evaluating and diagnosing the capabilities and limitations of point tracking methods. Our videos are sourced from existing datasets and egocentric real-world recordings, with high-quality human annotations collected through a multi-stage pipeline. 
ITTO captures the motion complexity, occlusion patterns, and object diversity characteristic of real-world scenes -- factors that are largely absent in current benchmarks. We conduct a rigorous analysis of state-of-the-art tracking methods on ITTO, breaking down performance along key axes of motion complexity. Our findings reveal that existing trackers struggle with these challenges, particularly in re-identifying points after occlusion, highlighting critical failure modes. These results point to the need for new modeling approaches tailored to real-world dynamics. We envision ITTO as a foundation testbed for advancing point tracking and guiding the development of more robust tracking algorithms.", "arxiv_id": "2510.19819v1", "arxiv_authors": ["Ilona Demler", "Saumya Chauhan", "Georgia Gkioxari"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1074370, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a71f"}, "filepath": "data/2504.21561v4.png", "tags": [], "_media_type": "image", "_rand": 0.9996269806608583, "type": "Poster", "name": "Iterative Tool Usage Exploration for Multimodal Agents via Step-wise Preference Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115154", "abstract": "Multimodal agents, which integrate a controller (e.g., a vision language model) with external tools, have demonstrated remarkable capabilities in tackling complex multimodal tasks.Existing approaches for training these agents, both supervised fine-tuning and reinforcement learning, depend on extensive human-annotated task-answer pairs and tool trajectories.However, for complex multimodal tasks, such annotations are prohibitively expensive or impractical to obtain.In this paper, we propose an iterative tool usage exploration method for multimodal agents without any pre-collected data, namely SPORT, via step-wise preference optimization to refine the trajectories of tool usage. Our method enables multimodal agents to autonomously discover effective tool usage strategies through self-exploration and optimization, eliminating the bottleneck of human annotation.SPORT has four iterative components: task synthesis, step sampling, step verification, and preference tuning.We first synthesize multimodal tasks using language models. Then, we introduce a novel trajectory exploration scheme, where step sampling and step verification are executed alternately to solve synthesized tasks.In step sampling, the agent tries different tools and obtains corresponding results. In step verification, we employ a verifier to provide AI feedback to construct step-wise preference data. 
The data is subsequently used to update the controller for tool usage through preference tuning, producing a SPORT agent.By interacting with real environments, the SPORT agent gradually evolves into a more refined and capable system.Evaluation in the GTA and GAIA benchmarks shows that the SPORT agent achieves 6.41% and 3.64% improvements, underscoring the generalization and effectiveness introduced by our method.", "arxiv_id": "2504.21561v4", "arxiv_authors": ["Pengxiang Li", "Zhi Gao", "Bofei Zhang", "Yapeng Mi", "Xiaojian Ma", "Chenrui Shi", "Tao Yuan", "Yuwei Wu", "Yunde Jia", "Song-Chun Zhu", "Qing Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.502Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1135173, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a720"}, "filepath": "data/2506.11136v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992830395017851, "type": "Poster", "name": "JAFAR: Jack up Any Feature at Any Resolution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118867", "abstract": "Foundation Vision Encoders have become indispensable across a wide range of dense vision tasks. However, their operation at low spatial feature resolutions necessitates subsequent feature decompression to enable full-resolution processing. To address this limitation, we introduce JAFAR, a lightweight and flexible feature upsampler designed to enhance the spatial resolution of visual features from any Foundation Vision Encoder to any target resolution. JAFAR features an attention-based upsampling module that aligns the spatial representations of high-resolution queries with semantically enriched low-resolution keys via Spatial Feature Transform modulation. Despite the absence of high-resolution feature ground truth; we find that learning at low upsampling ratios and resolutions generalizes surprisingly well to much higher scales. Extensive experiments demonstrate that JAFAR recovers intricate pixel-level details and consistently outperforms existing feature upsampling techniques across a diverse set of dense downstream applications.", "arxiv_id": "2506.11136v1", "arxiv_authors": ["Paul Couairon", "Loick Chambon", "Louis Serrano", "Jean-Emmanuel Haugeard", "Matthieu Cord", "Nicolas Thome"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3465441, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a721"}, "filepath": "data/2505.19610v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993925265555901, "type": "Poster", "name": "JailBound: Jailbreaking Internal Safety Boundaries of Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115121", "abstract": "Vision-Language Models (VLMs) exhibit impressive performance, yet the integration of powerful vision encoders has significantly broadened their attack surface, rendering them increasingly susceptible to jailbreak attacks. 
However, lacking well-defined attack objectives, existing jailbreak methods often rely on gradient-based strategies that are prone to local optima and lack precise directional guidance, and they typically decouple visual and textual modalities, thereby limiting their effectiveness by neglecting crucial cross-modal interactions. Inspired by the Eliciting Latent Knowledge (ELK) framework, we posit that VLMs encode safety-relevant information within their internal fusion-layer representations, revealing an implicit safety decision boundary in the latent space. This motivates exploiting this boundary to steer model behavior. Accordingly, we propose \\textbf{JailBound}, a novel latent space jailbreak framework comprising two stages: (1) \\textbf{Safety Boundary Probing}, which addresses the guidance issue by approximating the decision boundary within the fusion layer's latent space, thereby identifying optimal perturbation directions towards the target region; and (2) \\textbf{Safety Boundary Crossing}, which overcomes the limitations of decoupled approaches by jointly optimizing adversarial perturbations across both image and text inputs. This latter stage employs an innovative mechanism to steer the model's internal state towards policy-violating outputs while maintaining cross-modal semantic consistency. Extensive experiments on six diverse VLMs demonstrate JailBound's efficacy, achieving 94.32\\% white-box and 67.28\\% black-box attack success rates on average, which are 6.17\\% and 21.13\\% higher than SOTA methods, respectively. Our findings expose an overlooked safety risk in VLMs and highlight the urgent need for more robust defenses. \\textcolor{red}{Warning: This paper contains potentially sensitive, harmful and offensive content.}", "arxiv_id": "2505.19610v2", "arxiv_authors": ["Jiaxin Song", "Yixu Wang", "Jie Li", "Rui Yu", "Yan Teng", "Xingjun Ma", "Yingchun Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1096223, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a722"}, "filepath": "data/2506.08220v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997338491230195, "type": "Poster", "name": "Jamais Vu: Exposing the Generalization Gap in Supervised Semantic Correspondence", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115832", "abstract": "Semantic correspondence (SC) aims to establish semantically meaningful matches across different instances of an object category. We illustrate how recent supervised SC methods remain limited in their ability to generalize beyond sparsely annotated training keypoints, effectively acting as keypoint detectors. To address this, we propose a novel approach for learning dense correspondences by lifting 2D keypoints into a canonical 3D space using monocular depth estimation. Our method constructs a continuous canonical manifold that captures object geometry without requiring explicit 3D supervision or camera annotations. Additionally, we introduce SPair-U, an extension of SPair-71k with novel keypoint annotations, to better assess generalization. 
Experiments not only demonstrate that our model significantly outperforms supervised baselines on unseen keypoints, highlighting its effectiveness in learning robust correspondences, but that unsupervised baselines outperform supervised counterparts when generalized across different datasets.", "arxiv_id": "2506.08220v1", "arxiv_authors": ["Octave Mariotti", "Zhipeng Du", "Yash Bhalgat", "Oisin Mac Aodha", "Hakan Bilen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1069600, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a723"}, "filepath": "data/2506.17612v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992357522760573, "type": "Poster", "name": "JarvisArt: Liberating Human Artistic Creativity via an Intelligent Photo Retouching Agent", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117513", "abstract": "Photo retouching has become integral to contemporary visual storytelling, enabling users to capture aesthetics and express creativity. While professional tools such as Adobe Lightroom offer powerful capabilities, they demand substantial expertise and manual effort. In contrast, existing AI-based solutions provide automation but often suffer from limited adjustability and poor generalization, failing to meet diverse and personalized editing needs. To bridge this gap, we introduce JarvisArt, a multi-modal large language model (MLLM)-driven agent that understands user intent, mimics the reasoning process of professional artists, and intelligently coordinates over 200 retouching tools within Lightroom. JarvisArt undergoes a two-stage training process: an initial Chain-of-Thought supervised fine-tuning to establish basic reasoning and tool-use skills, followed by Group Relative Policy Optimization for Retouching (GRPO-R) to further enhance its decision-making and tool proficiency. We also propose the Agent-to-Lightroom Protocol to facilitate seamless integration with Lightroom. To evaluate performance, we develop MMArt-Bench, a novel benchmark constructed from real-world user edits. JarvisArt demonstrates user-friendly interaction, superior generalization, and fine-grained control over both global and local adjustments, paving a new avenue for intelligent photo retouching. 
Notably, it outperforms GPT-4o with a 60\\% improvement in average pixel-level metrics on MMArt-Bench for content fidelity, while maintaining comparable instruction-following capabilities.", "arxiv_id": "2506.17612v1", "arxiv_authors": ["Yunlong Lin", "Zixu Lin", "Kunjie Lin", "Jinbin Bai", "Panwang Pan", "Chenxin Li", "Haoyu Chen", "Zhongdao Wang", "Xinghao Ding", "Wenbo Li", "Shuicheng Yan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061734, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a724"}, "filepath": "data/2503.15905v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993865109274974, "type": "Poster", "name": "Jasmine: Harnessing Diffusion Prior for Self-supervised Depth Estimation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118500", "abstract": "In this paper, we propose \\textbf{Jasmine}, the first Stable Diffusion (SD)-based self-supervised framework for monocular depth estimation, which effectively harnesses SD\u2019s visual priors to enhance the sharpness and generalization of unsupervised prediction. Previous SD-based methods are all supervised since adapting diffusion models for dense prediction requires high-precision supervision. In contrast, self-supervised reprojection suffers from inherent challenges (\\textit{e.g.}, occlusions, texture-less regions, illumination variance), and the predictions exhibit blurs and artifacts that severely compromise SD's latent priors. To resolve this, we construct a novel surrogate task of mix-batch image reconstruction. Without any additional supervision, it preserves the detail priors of SD models by reconstructing the images themselves while preventing depth estimation from degradation. Furthermore, to address the inherent misalignment between SD's scale and shift invariant estimation and self-supervised scale-invariant depth estimation, we build the Scale-Shift GRU. It not only bridges this distribution gap but also isolates the fine-grained texture of SD output against the interference of reprojection loss. 
Extensive experiments demonstrate that Jasmine achieves SoTA performance on the KITTI benchmark and exhibits superior zero-shot generalization across multiple datasets.", "arxiv_id": "2503.15905v3", "arxiv_authors": ["Jiyuan Wang", "Chunyu Lin", "Cheng Guan", "Lang Nie", "Jing He", "Haodong Li", "Kang Liao", "Yao Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2849503, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a725"}, "filepath": "data/2510.03993v4.png", "tags": [], "_media_type": "image", "_rand": 0.999011207642579, "type": "Poster", "name": "Keep It on a Leash: Controllable Pseudo-label Generation Towards Realistic Long-Tailed Semi-Supervised Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116160", "abstract": "Current long-tailed semi-supervised learning methods assume that labeled data exhibit a long-tailed distribution, and unlabeled data adhere to a typical predefined distribution (i.e., long-tailed, uniform, or inverse long-tailed). However, the distribution of the unlabeled data is generally unknown and may follow an arbitrary distribution. To tackle this challenge, we propose a Controllable Pseudo-label Generation (CPG) framework, expanding the labeled dataset with the progressively identified reliable pseudo-labels from the unlabeled dataset and training the model on the updated labeled dataset with a known distribution, making it unaffected by the unlabeled data distribution. Specifically, CPG operates through a controllable self-reinforcing optimization cycle: (i) At each training step, our dynamic controllable filtering mechanism selectively incorporates reliable pseudo-labels from the unlabeled dataset into the labeled dataset, ensuring that the updated labeled dataset follows a known distribution; (ii) We then construct a Bayes-optimal classifier using logit adjustment based on the updated labeled data distribution; (iii) This improved classifier subsequently helps identify more reliable pseudo-labels in the next training step. We further theoretically prove that this optimization cycle can significantly reduce the generalization error under some conditions. Additionally, we propose a class-aware adaptive augmentation module to further improve the representation of minority classes, and an auxiliary branch to maximize data utilization by leveraging all labeled and unlabeled samples. Comprehensive evaluations on various commonly used benchmark datasets show that CPG achieves consistent improvements, surpassing state-of-the-art methods by up to **16.29\\%** in accuracy. 
**Code is available in the supplementary material.**", "arxiv_id": "2510.03993v4", "arxiv_authors": ["Yaxin Hou", "Bo Han", "Yuheng Jia", "Hui Liu", "Junhui Hou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1077495, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a726"}, "filepath": "data/2507.05604v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994539839252745, "type": "Poster", "name": "Kernel Density Steering: Inference-Time Scaling via Mode Seeking for Image Restoration", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116670", "abstract": "Diffusion models show promise for image restoration, but existing methods often struggle with inconsistent fidelity and undesirable artifacts. To address this, we introduce Kernel Density Steering (KDS), a novel inference-time framework promoting robust, high-fidelity outputs through explicit local mode-seeking. KDS employs an $N$-particle ensemble of diffusion samples, computing patch-wise kernel density estimation gradients from their collective outputs. These gradients steer patches in each particle towards shared, higher-density regions identified within the ensemble. This collective local mode-seeking mechanism, acting as \"collective wisdom\", steers samples away from spurious modes prone to artifacts, arising from independent sampling or model imperfections, and towards more robust, high-fidelity structures. This allows us to obtain better quality samples at the expense of higher compute by simultaneously sampling multiple particles. As a plug-and-play framework, KDS requires no retraining or external verifiers, seamlessly integrating with various diffusion samplers. Extensive numerical validations demonstrate that KDS substantially improves both quantitative and qualitative performance on challenging real-world super-resolution and image inpainting tasks.", "arxiv_id": "2507.05604v2", "arxiv_authors": ["Yuyang Hu", "Kangfu Mei", "Mojtaba Sahraee-Ardakan", "Ulugbek S. Kamilov", "Peyman Milanfar", "Mauricio Delbracio"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1081348, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a727"}, "filepath": "data/2510.20261v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994311628041781, "type": "Poster", "name": "Kinaema: a recurrent sequence model for memory and pose in motion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115962", "abstract": "One key aspect of spatially aware robots is the ability to \"find their bearings\", i.e., to correctly situate themselves or previously seen spaces. In this work, we focus on this particular scenario of continuous robotics operations, where information observed before an actual episode start is exploited to optimize efficiency. 
We introduce a new model, \"Kinaema\", and an agent capable of integrating a stream of visual observations while moving in a potentially large scene, and upon request, processing a query image and predicting the relative position of the shown space with respect to its current position. Our model does not explicitly store an observation history, therefore does not have hard constraints on context length. It maintains an implicit latent memory, which is updated by a transformer in a recurrent way, compressing the history of sensor readings into a compact representation. We evaluate the impact of this model in a new downstream task we call \"Mem-Nav\", targeting continuous robotics operations. We show that our large-capacity recurrent model maintains a useful representation of the scene, navigates to goals observed before the actual episode start, and is computationally efficient, in particular compared to classical transformers with attention over an observation history.", "arxiv_id": "2510.20261v1", "arxiv_authors": ["Mert Bulent Sariyildiz", "Philippe Weinzaepfel", "Guillaume Bono", "Gianluca Monaci", "Christian Wolf"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2b9"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1592509, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a728"}, "filepath": "data/2510.14605v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992736702258753, "type": "Poster", "name": "Knowledge-based Visual Question Answer with Multimodal Processing, Retrieval and Filtering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116686", "abstract": "The task of Knowledge-Based Visual Question Answering (KB-VQA) requires the model to understand visual features and retrieve external knowledge. Retrieval-Augmented Generation (RAG) has been employed to address this problem through knowledge base querying. However, existing work demonstrates two limitations: insufficient interactivity during knowledge retrieval and ineffective organization of retrieved information for the Visual-Language Model (VLM). To address these challenges, we propose a three-stage visual language model framework with Process, Retrieve and Filter stages (VLM-PRF). For interactive retrieval, VLM-PRF uses reinforcement learning (RL) to guide the model to strategically process information via tool-driven operations. For knowledge filtering, our method trains the VLM to transform the raw retrieved information into task-specific knowledge. With a dual reward as supervisory signals, VLM-PRF successfully enables the model to optimize retrieval strategies and answer generation capabilities simultaneously. 
Experiments on two datasets demonstrate the effectiveness of our framework.", "arxiv_id": "2510.14605v2", "arxiv_authors": ["Yuyang Hong", "Jiaqi Gu", "Qi Yang", "Lubin Fan", "Yue Wu", "Ying Wang", "Kun Ding", "Shiming Xiang", "Jieping Ye"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ba"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1618082, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a729"}, "filepath": "data/2503.18403v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993294559199741, "type": "Poster", "name": "Knowledge Graph Enhanced Generative Multi-modal Models for Class-Incremental Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117916", "abstract": "Continual learning in computer vision faces the critical challenge of catastrophic forgetting, where models struggle to retain prior knowledge while adapting to new tasks. Although recent studies have attempted to leverage the generalization capabilities of pre-trained models to mitigate overfitting on current tasks, models still tend to forget details of previously learned categories as tasks progress, leading to misclassification. To address these limitations, we introduce a novel Knowledge Graph Enhanced Generative Multi-modal model (KG-GMM) that builds an evolving knowledge graph throughout the learning process. Our approach utilizes relationships within the knowledge graph to augment the class labels and assigns different relations to similar categories to enhance model differentiation. During testing, we propose a Knowledge Graph Augmented Inference method that locates specific categories by analyzing relationships within the generated text, thereby reducing the loss of detailed information about old classes when learning new knowledge and alleviating forgetting. Experiments demonstrate that our method effectively leverages relational information to help the model correct mispredictions, achieving state-of-the-art results in both conventional CIL and few-shot CIL settings, confirming the efficacy of knowledge graphs at preserving knowledge in continual learning scenarios.", "arxiv_id": "2503.18403v1", "arxiv_authors": ["Xusheng Cao", "Haori Lu", "Linlan Huang", "Fei Yang", "Xialei Liu", "Ming-Ming Cheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2bb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1579233, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a72a"}, "filepath": "data/2505.16707v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995130058759768, "type": "Poster", "name": "KRIS-Bench: Benchmarking Next-Level Intelligent Image Editing Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121566", "abstract": "Recent advances in multi-modal generative models have enabled significant progress in instruction-based image editing. 
However, while these models produce visually plausible outputs, their capacity for knowledge-based reasoning editing tasks remains under-explored. In this paper, we introduce KRIS-Bench (Knowledge-based Reasoning in Image-editing Systems Benchmark), a diagnostic benchmark designed to assess models through a cognitively informed lens. Drawing from educational theory, KRIS-Bench categorizes editing tasks across three foundational knowledge types: Factual, Conceptual, and Procedural. Based on this taxonomy, we design 22 representative tasks spanning 7 reasoning dimensions and release 1,267 high-quality annotated editing instances. To support fine-grained evaluation, we propose a comprehensive protocol that incorporates a novel Knowledge Plausibility metric, enhanced by knowledge hints and calibrated through human studies. Empirical results on nine state-of-the-art models reveal significant gaps in reasoning performance, highlighting the need for knowledge-centric benchmarks to advance the development of intelligent image editing systems.", "arxiv_id": "2505.16707v1", "arxiv_authors": ["Yongliang Wu", "Zonghui Li", "Xinting Hu", "Xinyu Ye", "Xianfang Zeng", "Gang Yu", "Wenbo Zhu", "Bernt Schiele", "Ming-Hsuan Yang", "Xu Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2bc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.503Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083539, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a72b"}, "filepath": "data/2506.12851v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992794526720211, "type": "Poster", "name": "KungfuBot: Physics-Based Humanoid Whole-Body Control for Learning Highly-Dynamic Skills", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118569", "abstract": "Humanoid robots show promise in acquiring various skills by imitating human behaviors. However, existing algorithms are only capable of tracking smooth, low-speed human motions, even with delicate reward and curriculum design. This paper presents a physics-based humanoid control framework, aiming to master highly-dynamic human behaviors such as Kungfu and dancing through multi-step motion processing and adaptive motion tracking. For motion processing, we design a pipeline to extract, filter out, correct, and retarget motions, while ensuring compliance with physical constraints to the maximum extent. For motion imitation, we formulate a bi-level optimization problem to dynamically adjust the tracking accuracy tolerance based on the current tracking error, creating an adaptive curriculum mechanism. We further construct an asymmetric actor-critic framework for policy training. In experiments, we train whole-body control policies to imitate a set of highly dynamic motions. Our method achieves significantly lower tracking errors than existing approaches and is successfully deployed on the Unitree G1 robot, demonstrating stable and expressive behaviors. 
The project page is https://kungfubot.github.io.", "arxiv_id": "2506.12851v1", "arxiv_authors": ["Weiji Xie", "Jinrui Han", "Jiakun Zheng", "Huanyu Li", "Xinzhe Liu", "Jiyuan Shi", "Weinan Zhang", "Chenjia Bai", "Xuelong Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2bd"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1012673, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a72c"}, "filepath": "data/2503.11245v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996252400654956, "type": "Poster", "name": "L2RSI: Cross-view LiDAR-based Place Recognition for Large-scale Urban Scenes via Remote Sensing Imagery", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119978", "abstract": "We tackle the challenge of LiDAR-based place recognition, which traditionally depends on costly and time-consuming prior 3D maps. To overcome this, we first construct the XA-L\\&RSI dataset, which encompasses approximately $110,000$ remote sensing submaps and $13,000$ LiDAR point cloud submaps captured in urban scenes, and propose a novel method, L2RSI, for cross-view LiDAR place recognition using high-resolution Remote Sensing Imagery. This approach enables large-scale localization capabilities at a reduced cost by leveraging readily available overhead images as map proxies. L2RSI addresses the dual challenges of cross-view and cross-modal place recognition by learning feature alignment between point cloud submaps and remote sensing submaps in the semantic domain. Additionally, we introduce a novel probability propagation method based on particle estimation to refine position predictions, effectively leveraging temporal and spatial information. This approach enables large-scale retrieval and cross-scene generalization without fine-tuning. Extensive experiments on XA-L\\&RSI demonstrate that, within a $100km^2$ retrieval range, L2RSI accurately localizes $83.27\\%$ of point cloud submaps within a $30m$ radius for the top-$1$ retrieved location.", "arxiv_id": "2503.11245v3", "arxiv_authors": ["Ziwei Shi", "Xiaoran Zhang", "Wenjing Xu", "Yan Xia", "Yu Zang", "Siqi Shen", "Cheng Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2be"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2330517, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a72d"}, "filepath": "data/2505.22634v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994116183505385, "type": "Poster", "name": "LabUtopia: High-Fidelity Simulation and Hierarchical Benchmark for Scientific Embodied Agents", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121800", "abstract": "Scientific embodied agents play a crucial role in modern laboratories by automating complex experimental workflows. Compared to typical household environments, laboratory settings impose significantly higher demands on the perception of physical-chemical transformations and long-horizon planning, making them an ideal testbed for advancing embodied intelligence. 
However, its development has long been hampered by the lack of suitable simulators and benchmarks. In this paper, we address this gap by introducing LabUtopia, a comprehensive simulation and benchmarking suite designed to facilitate the development of generalizable, reasoning-capable embodied agents in laboratory settings. Specifically, it integrates i) LabSim, a high-fidelity simulator supporting multi-physics and chemically meaningful interactions; ii) LabScene, a scalable procedural generator for diverse scientific scenes; and iii) LabBench, a hierarchical benchmark spanning five levels of complexity from atomic actions to long-horizon mobile manipulation. LabUtopia supports 30 distinct tasks and includes more than 200 scene and instrument assets, enabling large-scale training and principled evaluation in high-complexity environments. We demonstrate that LabUtopia offers a powerful platform for advancing the integration of perception, planning, and control in scientific-purpose agents and provides a rigorous testbed for exploring the practical capabilities and generalization limits of embodied intelligence in future research. The benchmark and code are available at https://sites.google.com/view/labutopia/ .", "arxiv_id": "2505.22634v1", "arxiv_authors": ["Rui Li", "Zixuan Hu", "Wenxi Qu", "Jinouwen Zhang", "Zhenfei Yin", "Sha Zhang", "Xuantuo Huang", "Hanqing Wang", "Tai Wang", "Jiangmiao Pang", "Wanli Ouyang", "Lei Bai", "Wangmeng Zuo", "Ling-Yu Duan", "Dongzhan Zhou", "Shixiang Tang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2bf"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1048381, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a72e"}, "filepath": "data/2411.12448v2.png", "tags": [], "_media_type": "image", "_rand": 0.999326596381635, "type": "Poster", "name": "Large Language Models for Lossless Image Compression: Next-Pixel Prediction in Language Space is All You Need", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119033", "abstract": "We have recently witnessed that \"Intelligence\" and \"Compression\" are two sides of the same coin, where the large language model (LLM) with unprecedented intelligence is a general-purpose lossless compressor for various data modalities. This attribute is particularly appealing to the lossless image compression community, given the increasing need to compress high-resolution images in the current streaming media era. Consequently, a question naturally emerges: Can the compression performance of the LLM elevate lossless image compression to new heights? However, our findings indicate that the naive application of LLM-based lossless image compressors suffers from a considerable performance gap compared with existing state-of-the-art (SOTA) codecs on common benchmark datasets. In light of this, we are dedicated to fulfilling the unprecedented intelligence (compression) capacity of the LLM for lossless image compression tasks, thereby bridging the gap between theoretical and practical compression performance. 
Specifically, we propose P-LLM, a next-pixel prediction-based LLM, which integrates various elaborated insights and methodologies, \\textit{e.g.,} pixel-level priors, the in-context ability of the LLM, and a pixel-level semantic preservation strategy, to enhance the understanding capacity of pixel sequences for better next-pixel predictions. Extensive experiments on benchmark datasets demonstrate that P-LLM can beat SOTA classical and learned codecs.", "arxiv_id": "2411.12448v2", "arxiv_authors": ["Kecheng Chen", "Pingping Zhang", "Hui Liu", "Jie Liu", "Yibing Liu", "Jiaxin Huang", "Shiqi Wang", "Hong Yan", "Haoliang Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1449966, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a72f"}, "filepath": "data/2510.07961v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999926156314676, "type": "Poster", "name": "Latent Harmony: Synergistic Unified UHD Image Restoration via Latent Space Regularization and Controllable Refinement", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115355", "abstract": "Ultra-High Definition (UHD) image restoration struggles to balance computational efficiency and detail retention. While Variational Autoencoders (VAEs) offer improved efficiency by operating in the latent space, with the Gaussian variational constraint, this compression preserves semantics but sacrifices critical high-frequency attributes specific to degradation and thus compromises reconstruction fidelity. Consequently, a VAE redesign is imperative to foster a robust semantic representation conducive to generalization and perceptual quality, while simultaneously enabling effective high-frequency information processing crucial for reconstruction fidelity. To address this, we propose \\textit{Latent Harmony}, a two-stage framework that reinvigorates VAEs for UHD restoration by concurrently regularizing the latent space and enforcing high-frequency-aware reconstruction constraints. Specifically, Stage One introduces the LH-VAE, which fortifies its latent representation through visual semantic constraints and progressive degradation perturbation for enhanced semantic robustness; meanwhile, it incorporates latent equivariance to bolster its high-frequency reconstruction capabilities. Then, Stage Two facilitates joint training of this refined VAE with a dedicated restoration model. This stage integrates High-Frequency Low-Rank Adaptation (HF-LoRA), featuring two distinct modules: an encoder LoRA, guided by a fidelity-oriented high-frequency alignment loss, tailored for the precise extraction of authentic details from degradation-sensitive high-frequency components; and a decoder LoRA, driven by a perception-oriented loss, designed to synthesize perceptually superior textures. These LoRA modules are meticulously trained via alternating optimization with selective gradient propagation to preserve the integrity of the pre-trained latent structure. 
This methodology culminates in a flexible fidelity-perception trade-off at inference, managed by an adjustable parameter $\\alpha$. Extensive experiments demonstrate that \\textit{Latent Harmony} effectively balances perceptual and reconstructive objectives with efficiency, achieving superior restoration performance across diverse UHD and standard-resolution scenarios.", "arxiv_id": "2510.07961v3", "arxiv_authors": ["Yidi Liu", "Xueyang Fu", "Jie Huang", "Jie Xiao", "Dong Li", "Wenlong Zhang", "Lei Bai", "Zheng-Jun Zha"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1037014, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a730"}, "filepath": "data/2508.05941v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996458628198565, "type": "Poster", "name": "Latent Policy Barrier: Learning Robust Visuomotor Policies by Staying In-Distribution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119037", "abstract": "Visuomotor policies trained via behavior cloning are vulnerable to covariate shift, where small deviations from expert trajectories can compound into failure. Common strategies to mitigate this issue involve expanding the training distribution through human-in-the-loop corrections or synthetic data augmentation. However, these approaches are often labor-intensive, rely on strong task assumptions, or compromise the quality of imitation. We introduce Latent Policy Barrier, a framework for robust visuomotor policy learning. Inspired by Control Barrier Functions, LPB treats the latent embeddings of expert demonstrations as an implicit barrier separating safe, in-distribution states from unsafe, out-of-distribution (OOD) ones. Our approach decouples the roles of precise expert imitation and OOD recovery into two separate modules: a base diffusion policy trained solely on expert data, and a dynamics model trained on both expert and suboptimal policy rollout data. At inference time, the dynamics model predicts future latent states and optimizes them to stay within the expert distribution. Both simulated and real-world experiments show that LPB improves both policy robustness and data efficiency, enabling reliable manipulation from limited expert data and without additional human correction or annotation. 
More details are on our anonymous project website https://latentpolicybarrier.github.io.", "arxiv_id": "2508.05941v1", "arxiv_authors": ["Zhanyi Sun", "Shuran Song"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c2"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1307071, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a731"}, "filepath": "data/2509.16527v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999858848559825, "type": "Poster", "name": "Lattice Boltzmann Model for Learning Real-World Pixel Dynamicity", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119646", "abstract": "This work proposes the Lattice Boltzmann Model (LBM) to learn real-world pixel dynamicity for visual tracking. LBM decomposes visual representations into dynamic pixel lattices and solves pixel motion states through collision-streaming processes. Specifically, the high-dimensional distribution of the target pixels is acquired through a multilayer predict-update network to estimate the pixel positions and visibility. The predict stage formulates lattice collisions among the spatial neighborhood of target pixels and develops lattice streaming within the temporal visual context. The update stage rectifies the pixel distributions with online visual representations. Compared with existing methods, LBM demonstrates practical applicability in an online and real-time manner, which can efficiently adapt to real-world visual tracking tasks. Comprehensive evaluations of real-world point tracking benchmarks such as TAP-Vid and RoboTAP validate LBM's efficiency. A general evaluation of large-scale open-world object tracking benchmarks such as TAO, BFT, and OVT-B further demonstrates LBM's real-world practicality.", "arxiv_id": "2509.16527v1", "arxiv_authors": ["Guangze Zheng", "Shijie Lin", "Haobo Zuo", "Si Si", "Ming-Shan Wang", "Changhong Fu", "Jia Pan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2178011, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a732"}, "filepath": "data/2510.15304v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998917142359376, "type": "Poster", "name": "Layer as Puzzle Pieces: Compressing Large Language Models through Layer Concatenation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116886", "abstract": "Large Language Models (LLMs) excel at natural language processing tasks, but their massive size leads to high computational and storage demands. Recent works have sought to reduce their model size through layer-wise structured pruning. However, they tend to ignore retaining the capabilities in the pruned part. 
In this work, we re-examine structured pruning paradigms and uncover several key limitations: 1) notable performance degradation due to direct layer removal, 2) ineffective linearly weighted layer aggregation, and 3) the lack of effective post-training recovery mechanisms. To address these limitations, we propose CoMe, which comprises a progressive layer pruning framework with a Concatenation-based Merging technology and a hierarchical distillation post-training process. Specifically, we introduce a channel sensitivity metric that utilizes activation intensity and weight norms for fine-grained channel selection. Subsequently, we employ a concatenation-based layer merging method to fuse the most critical channels in the adjacent layers, enabling a progressive model size reduction. Finally, we propose a hierarchical distillation protocol, which leverages the correspondences between the original and pruned model layers established during pruning, enabling efficient knowledge transfer. Experiments on seven benchmarks show that CoMe achieves state-of-the-art performance; when pruning 30% of LLaMA-2-7b's parameters, the pruned model retains 83% of its original average accuracy.", "arxiv_id": "2510.15304v1", "arxiv_authors": ["Fei Wang", "Li Shen", "Liang Ding", "Chao Xue", "Ye Liu", "Changxing Ding"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070697, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a733"}, "filepath": "data/2506.14271v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996597702144162, "type": "Poster", "name": "Leader360V: A Large-scale, Real-world 360 Video Dataset for Multi-task Learning in Diverse Environment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121487", "abstract": "360 video captures the complete surrounding scenes with the ultra-large field of view of 360x180. This makes 360 scene understanding tasks, *e.g.*, segmentation and tracking, crucial for applications such as autonomous driving and robotics. With the recent emergence of foundation models, the community is, however, impeded by the lack of large-scale, labelled real-world datasets. This is caused by the inherent spherical properties, *e.g.*, severe distortion in polar regions, and content discontinuities, rendering the annotation costly yet complex. This paper introduces **Leader360V**, the **first** large-scale (10K+), labeled real-world 360 video dataset for instance segmentation and tracking. Our dataset enjoys high scene diversity, ranging from indoor and urban settings to natural and dynamic outdoor scenes. To automate annotation, we design an automatic labeling pipeline, which subtly coordinates pre-trained 2D segmentors and large language models (LLMs) to facilitate the labeling. The pipeline operates in three novel stages. Specifically, in the **Initial Annotation Phase**, we introduce a Semantic- and Distortion-aware Refinement (**SDR**) module, which combines object mask proposals from multiple 2D segmentors with LLM-verified semantic labels. These are then converted into mask prompts to guide SAM2 in generating distortion-aware masks for subsequent frames. 
In the **Auto-Refine Annotation Phase**, missing or incomplete regions are corrected either by applying the SDR again or resolving the discontinuities near the horizontal borders. The **Manual Revision Phase** finally incorporates LLMs and human annotators to further refine and validate the annotations. Extensive user studies and evaluations demonstrate the effectiveness of our labeling pipeline. Meanwhile, experiments confirm that Leader360V significantly enhances model performance for 360 video segmentation and tracking, paving the way for more scalable 360 scene understanding. We release our dataset and code at {https://leader360v.github.io/Leader360V\\_HomePage/} for anonymous review.", "arxiv_id": "2506.14271v1", "arxiv_authors": ["Weiming Zhang", "Dingwen Xiao", "Aobotao Dai", "Yexin Liu", "Tianbo Pan", "Shiqi Wen", "Lei Chen", "Lin Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2578408, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a734"}, "filepath": "data/2505.22025v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990527245267551, "type": "Poster", "name": "Learnable Burst-Encodable Time-of-Flight Imaging for High-Fidelity Long-Distance Depth Sensing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115057", "abstract": "Long-distance depth imaging holds great promise for applications such as autonomous driving and robotics. Direct time-of-flight (dToF) imaging offers high-precision, long-distance depth sensing, yet demands ultra-short pulse light sources and high-resolution time-to-digital converters. In contrast, indirect time-of-flight (iToF) imaging often suffers from phase wrapping and low signal-to-noise ratio (SNR) as the sensing distance increases. In this paper, we introduce a novel ToF imaging paradigm, termed Burst-Encodable Time-of-Flight (BE-ToF), which facilitates high-fidelity, long-distance depth imaging. Specifically, the BE-ToF system emits light pulses in burst mode and estimates the phase delay of the reflected signal over the entire burst period, thereby effectively avoiding the phase wrapping inherent to conventional iToF systems. Moreover, to address the low SNR caused by light attenuation over increasing distances, we propose an end-to-end learnable framework that jointly optimizes the coding functions and the depth reconstruction network. A specialized double well function and first-order difference term are incorporated into the framework to ensure the hardware implementability of the coding functions. 
The proposed approach is rigorously validated through comprehensive simulations and real-world prototype experiments, demonstrating its effectiveness and practical applicability.", "arxiv_id": "2505.22025v1", "arxiv_authors": ["Manchao Bao", "Shengjiang Fang", "Tao Yue", "Xuemei Hu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 996792, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a735"}, "filepath": "data/2505.05495v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991580210021659, "type": "Poster", "name": "Learning 3D Persistent Embodied World Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117501", "abstract": "The ability to simulate the effects of future actions on the world is a crucial ability of intelligent embodied agents, enabling agents to anticipate the effects of their actions and make plans accordingly. While a large body of existing work has explored how to construct such world models using video models, they are often myopic in nature, without any memory of a scene not captured by currently observed images, preventing agents from making consistent long-horizon plans in complex environments where many parts of the scene are partially observed. We introduce a new persistent embodied world model with an explicit memory of previously generated content, enabling much more consistent long-horizon simulation. During generation time, our video diffusion model predicts RGB-D video of the future observations of the agent. This generation is then aggregated into a persistent 3D map of the environment. By conditioning the video model on this 3D spatial map, we illustrate how this enables video world models to faithfully simulate both seen and unseen parts of the world. Finally, we illustrate the efficacy of such a world model in downstream embodied applications, enabling effective planning and policy learning.", "arxiv_id": "2505.05495v1", "arxiv_authors": ["Siyuan Zhou", "Yilun Du", "Yuncong Yang", "Lei Han", "Peihao Chen", "Dit-Yan Yeung", "Chuang Gan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6144409, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a736"}, "filepath": "data/2505.08909v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992895902652092, "type": "Poster", "name": "Learning Cocoercive Conservative Denoisers via Helmholtz Decomposition for Poisson Imaging Inverse Problems", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115021", "abstract": "Plug-and-play (PnP) methods with deep denoisers have shown impressive results in imaging problems. They typically require strong convexity or smoothness of the fidelity term and a (residual) non-expansive denoiser for convergence. These assumptions, however, are violated in Poisson inverse problems, and non-expansiveness can hinder denoising performance. 
To address these challenges, we propose a cocoercive conservative (CoCo) denoiser, which may be (residual) expansive, leading to improved denoising performance. By leveraging the generalized Helmholtz decomposition, we introduce a novel training strategy that combines Hamiltonian regularization to promote conservativeness and spectral regularization to ensure cocoerciveness. We prove that the CoCo denoiser is a proximal operator of a weakly convex function, enabling a restoration model with an implicit weakly convex prior. The global convergence of PnP methods to a stationary point of this restoration model is established. Extensive experimental results demonstrate that our approach outperforms closely related methods in both visual quality and quantitative metrics.", "arxiv_id": "2505.08909v2", "arxiv_authors": ["Deliang Wei", "Peng Chen", "Haobo Xu", "Jiale Yao", "Fang Li", "Tieyong Zeng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 874468, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a737"}, "filepath": "data/2505.11152v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990273530073472, "type": "Poster", "name": "Learning Dense Hand Contact Estimation from Imbalanced Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117957", "abstract": "Hands are essential to human interaction, and understanding contact between hands and the world can promote comprehensive understanding of their function. Recently, there has been a growing number of hand interaction datasets that cover interaction with objects, the other hand, the scene, and the body. Despite the significance of the task and increasing high-quality data, how to effectively learn dense hand contact estimation remains largely underexplored. There are two major challenges for learning dense hand contact estimation. First, there exists a class imbalance issue in hand contact datasets, where the majority of samples are not in contact. Second, hand contact datasets contain a spatial imbalance issue, with most hand contact exhibited at the fingertips, resulting in challenges for generalization towards contacts in other hand regions. To tackle these issues, we present a framework that learns dense HAnd COntact estimation (HACO) from imbalanced data. To resolve the class imbalance issue, we introduce balanced contact sampling, which builds and samples from multiple sampling groups that fairly represent diverse contact statistics for both contact and non-contact samples. Moreover, to address the spatial imbalance issue, we propose a vertex-level class-balanced (VCB) loss, which incorporates the spatially varying contact distribution by separately reweighting the loss contribution of each vertex based on its contact frequency across the dataset. As a result, we effectively learn to predict dense hand contact estimation with large-scale hand contact data without suffering from class and spatial imbalance issues. 
The code will be released.", "arxiv_id": "2505.11152v2", "arxiv_authors": ["Daniel Sungho Jung", "Kyoung Mu Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2c9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061865, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a738"}, "filepath": "data/2412.01463v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999299141329856, "type": "Poster", "name": "Learning Differential Pyramid Representation for Tone Mapping", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117432", "abstract": "Existing tone mapping methods operate on downsampled inputs and rely on handcrafted pyramids to recover high-frequency details. These designs typically fail to preserve fine textures and structural fidelity in complex HDR scenes. Furthermore, most methods lack an effective mechanism to jointly model global tone consistency and local contrast enhancement, leading to globally flat or locally inconsistent outputs such as halo artifacts. We present the Differential Pyramid Representation Network (DPRNet), an end-to-end framework for high-fidelity tone mapping. At its core is a learnable differential pyramid that generalizes traditional Laplacian and Difference-of-Gaussian pyramids through content-aware differencing operations across scales. This allows DPRNet to adaptively capture high-frequency variations under diverse luminance and contrast conditions. To enforce perceptual consistency, DPRNet incorporates global tone perception and local tone tuning modules operating on downsampled inputs, enabling efficient yet expressive tone adaptation. Finally, an iterative detail enhancement module progressively restores the full-resolution output in a coarse-to-fine manner, reinforcing structure and sharpness. To support training and benchmarking, we introduce a new tone mapping dataset with diverse real-world scenes and lighting conditions. Experiments show that DPRNet achieves state-of-the-art results, improving PSNR by **2.39 dB** on the 4K HDR+ dataset and **3.01 dB** on the 4K HDRI Haven dataset, while producing perceptually coherent, detail-preserving results. 
Demo available at [DPRNet](https://xxxxxxdprnet.github.io/DPRNet/).", "arxiv_id": "2412.01463v2", "arxiv_authors": ["Qirui Yang", "Yinbo Li", "Yihao Liu", "Peng-Tao Jiang", "Fangpu Zhang", "Qihua Cheng", "Huanjing Yue", "Jingyu Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ca"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.504Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2241400, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a739"}, "filepath": "data/2503.14698v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996755299510559, "type": "Poster", "name": "Learning Efficient Fuse-and-Refine for Feed-Forward 3D Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118601", "abstract": "Recent advances in feed-forward 3D Gaussian Splatting have led to rapid improvements in efficient scene reconstruction from sparse views. However, most existing approaches construct Gaussian primitives directly aligned with the pixels in one or more of the input images. This leads to redundancies in the representation when input views overlap and constrains the position of the primitives to lie along the input rays without full flexibility in 3D space. Moreover, these pixel-aligned approaches do not naturally generalize to dynamic scenes, where effectively leveraging temporal information requires resolving both redundant and newly appearing content across frames. To address these limitations, we introduce a novel Fuse-and-Refine module that enhances existing feed-forward models by merging and refining the primitives in a canonical 3D space. At the core of our method is an efficient hybrid Splat-Voxel representation \u2013 from an initial set of pixel-aligned Gaussian primitives, we aggregate local features into a coarse-to-fine voxel hierarchy, and then use a sparse voxel transformer to process these voxel features and generate refined Gaussian primitives. By fusing and refining an arbitrary number of inputs into a consistent set of primitives, our representation effectively reduces redundancy and naturally adapts to temporal frames, enabling history-aware online reconstruction of dynamic scenes. Trained on large-scale static scene datasets, our model learns an effective global strategy to process around 20k primitives within 15ms and significantly enhances reconstruction quality compared to pixel-aligned reconstruction approaches. Without additional training, our model generalizes to video by fusing primitives across time, yielding a more temporally coherent result compared to baseline methods with graceful handling of occluded content. 
Our approach achieves state-of-the-art performance in both static and streaming scene reconstructions while running at interactive rates (15 fps with 350ms delay) on a single H100 GPU.", "arxiv_id": "2503.14698v1", "arxiv_authors": ["Yiming Wang", "Lucy Chai", "Xuan Luo", "Michael Niemeyer", "Manuel Lagunas", "Stephen Lombardi", "Siyu Tang", "Tiancheng Sun"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2cb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2281984, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a73a"}, "filepath": "data/2505.24625v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994038294648063, "type": "Poster", "name": "Learning from Videos for 3D World: Enhancing MLLMs with 3D Vision Geometry Priors", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116035", "abstract": "Previous research has explored the use of Multimodal Large Language Models (MLLMs) for 3D scene understanding by treating scenes as videos. These methods typically rely on explicit dense 3D inputs, such as point clouds or reconstructed Bird's-Eye View (BEV) maps. In this work, we take an innovative step forward by questioning whether we can improve MLLMs' 3D spatial understanding and reasoning capability by directly learning from videos with 3D vision geometry priors. We introduce a simple yet effective approach: the Video-3D Geometry LLM (VG LLM). Our method utilizes a 3D visual geometry encoder to extract 3D geometry prior information from the input video sequences. This information is integrated with visual tokens and fed into the MLLM. Our approach significantly boosts the 3D perception and reasoning capabilities of MLLMs, demonstrating notable improvements in various 3D scene understanding tasks and spatial reasoning benchmarks, all learned directly from videos. Remarkably, without relying on any explicit 3D data inputs, our 4B model achieves competitive performance with prior state-of-the-art methods, even outperforming the Gemini-1.5-Pro on the VSI-Bench.", "arxiv_id": "2505.24625v3", "arxiv_authors": ["Duo Zheng", "Shijia Huang", "Yanyang Li", "Liwei Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2cc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1114632, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a73b"}, "filepath": "data/2509.26631v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995878540193925, "type": "Poster", "name": "Learning Generalizable Shape Completion with $\\mathrm{SIM}(3)$ Equivariance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116763", "abstract": "3D shape completion methods typically assume scans are pre-aligned to a canonical frame. This leaks pose and scale cues that networks may exploit to memorize absolute positions rather than inferring intrinsic geometry. When such alignment is absent in real data, performance collapses. 
We argue that robust generalization demands architectural equivariance to the similarity group, $\\mathrm{SIM}(3)$, so the model remains agnostic to pose and scale. Following this principle, we introduce the first $\\mathrm{SIM}(3)$-equivariant shape completion network, whose modular layers successively canonicalize features, reason over similarity-invariant geometry, and restore the original frame. Under a de-biased evaluation protocol that removes the hidden cues, our model outperforms both equivariant and augmentation baselines on the PCN benchmark. It also sets new zero-shot records on real driving and indoor scans, lowering minimal matching distance on KITTI by 17\\% and Chamfer distance $\\ell1$ on OmniObject3D by 14\\%. Perhaps surprisingly, ours under the stricter protocol still outperforms competitors under their biased settings. These results establish full $\\mathrm{SIM}(3)$ equivariance as an effective route to truly generalizable shape completion.", "arxiv_id": "2509.26631v2", "arxiv_authors": ["Yuqing Wang", "Zhaiyu Chen", "Xiao Xiang Zhu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2cd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1096503, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a73c"}, "filepath": "data/2510.18357v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991801605324419, "type": "Poster", "name": "Learning Human-Object Interaction as Groups", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115110", "abstract": "Human-Object Interaction (HOI) detection aims to localize human-object pairs and identify interactive relationships between them. To aggregate the contextual cues of the scene, current HOI methods either propagate information across all the detected entities via self-attention mechanisms, or establish message passing between humans and objects using a bipartite graph architecture. However, they all neglect inherent social attributes within human-centric scenarios, which are collective and beyond pairwise. In light of this, we revisit the relation modeling in HOI from a group view, and propose GroupHOI, a novel framework that propagates contextual information in terms of geometric proximity and semantic similarity. To exploit geometric proximity, humans and objects are grouped into distinct clusters using a learnable proximity estimator based on spatial features derived from bounding boxes. In each group, a soft correspondence is computed via self-attention to aggregate and dispatch contextual cues. To incorporate semantic similarity, we enhance the vanilla transformer-based interaction decoder with semantic-aware local cues derived from HO-pair features. Extensive experiments on HICO-DET and V-COCO benchmarks demonstrate the superiority of GroupHOI over the state-of-the-art methods. 
The source code will be released.", "arxiv_id": "2510.18357v1", "arxiv_authors": ["Jiajun Hong", "Jianan Wei", "Wenguan Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ce"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1112102, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a73d"}, "filepath": "data/2510.08279v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997506488178305, "type": "Poster", "name": "Learning Neural Exposure Fields for View Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118401", "abstract": "Recent advances in neural scene representations have led to unprecedented quality in 3D reconstruction and view synthesis. Despite achieving high-quality results for common benchmarks with curated data, outputs often degrade for data that contain per-image variations such as strong exposure changes, present, e.g., in most scenes with indoor and outdoor areas or rooms with windows. In this paper, we introduce Neural Exposure Fields (NExF), a novel technique for robustly reconstructing 3D scenes with high quality and 3D-consistent appearance from challenging real-world captures. At its core, we propose to learn a neural field predicting an optimal exposure value per 3D point, enabling us to optimize exposure along with the neural scene representation. While capture devices such as cameras select optimal exposure per image/pixel, we generalize this concept and perform optimization in 3D instead. This enables accurate view synthesis in high dynamic range scenarios, bypassing the need for post-processing steps or multi-exposure captures. Our contributions include a novel neural representation for exposure prediction, a system for joint optimization of the scene representation and the exposure field via a novel neural conditioning mechanism, and demonstrated superior performance on challenging real-world data. We find that our approach trains faster than prior works and produces state-of-the-art results on several benchmarks, improving by over 55% over the best-performing baselines.", "arxiv_id": "2510.08279v2", "arxiv_authors": ["Michael Niemeyer", "Fabian Manhardt", "Marie-Julie Rakotosaona", "Michael Oechsle", "Christina Tsalicoglou", "Keisuke Tateno", "Jonathan T. Barron", "Federico Tombari"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2cf"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1056715, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a73e"}, "filepath": "data/2505.21524v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994614967185296, "type": "Poster", "name": "Learning Shared Representations from Unpaired Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116540", "abstract": "Learning shared representations is a primary area of multimodal representation learning. 
The current approaches to achieve a shared embedding space rely heavily on paired samples from each modality, which are significantly harder to obtain than unpaired ones. In this work, we demonstrate that shared representations can be learned almost exclusively from unpaired data. Our arguments are grounded in the spectral embeddings of the random walk matrices constructed independently from each unimodal representation. Empirical results in computer vision and natural language processing domains support its potential, revealing the effectiveness of unpaired data in capturing meaningful cross-modal relations, demonstrating high capabilities in retrieval tasks, generation, arithmetics, zero-shot, and cross-domain classification. This work, to the best of our knowledge, is the first to demonstrate these capabilities almost exclusively from unpaired samples, giving rise to a cross-modal embedding that could be viewed as universal, i.e., independent of the specific modalities of the data.", "arxiv_id": "2505.21524v2", "arxiv_authors": ["Amitai Yacobi", "Nir Ben-Ari", "Ronen Talmon", "Uri Shaham"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 923566, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a73f"}, "filepath": "data/2502.14520v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990976763934198, "type": "Poster", "name": "Learning Temporal 3D Semantic Scene Completion via Optical Flow Guidance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116145", "abstract": "3D Semantic Scene Completion (SSC) provides comprehensive scene geometry and semantics for autonomous driving perception, which is crucial for enabling accurate and reliable decision-making. However, existing SSC methods are limited to capturing sparse information from the current frame or naively stacking multi-frame temporal features, thereby failing to acquire effective scene context. These approaches ignore critical motion dynamics and struggle to achieve temporal consistency. To address the above challenges, we propose a novel temporal SSC method FlowScene: Learning Temporal 3D Semantic Scene Completion via Optical Flow Guidance. By leveraging optical flow, FlowScene can integrate motion, different viewpoints, occlusions, and other contextual cues, thereby significantly improving the accuracy of 3D scene completion. Specifically, our framework introduces two key components: (1) a Flow-Guided Temporal Aggregation module that aligns and aggregates temporal features using optical flow, capturing motion-aware context and deformable structures; and (2) an Occlusion-Guided Voxel Refinement module that injects occlusion masks and temporally aggregated features into 3D voxel space, adaptively refining voxel representations for explicit geometric modeling.Experimental results demonstrate that FlowScene achieves state-of-the-art performance, with mIoU of 17.70 and 20.81 on the SemanticKITTI and SSCBench-KITTI-360 benchmarks. 
The source code will be released upon acceptance.", "arxiv_id": "2502.14520v1", "arxiv_authors": ["Meng Wang", "Fan Wu", "Ruihui Li", "Yunchuan Qin", "Zhuo Tang", "Kenli Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3104456, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a740"}, "filepath": "data/2503.22215v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999156672309796, "type": "Poster", "name": "Learning to Instruct for Visual Instruction Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118364", "abstract": "We propose LIT, an advancement of visual instruction tuning (VIT). While VIT equips Multimodal LLMs (MLLMs) with promising multimodal capabilities, the current design choices for VIT often result in overfitting and shortcut learning, potentially degrading performance. This gap arises from an overemphasis on instruction-following abilities, while neglecting the proactive understanding of visual information. Inspired by this, LIT adopts a simple yet effective approach by incorporating the loss function into both the instruction and response sequences. It seamlessly expands the training data, and regularizes the MLLMs from overly relying on language priors. Based on this merit, LIT achieves a significant relative improvement of up to 9% on comprehensive multimodal benchmarks, requiring no additional training data and incurring negligible computational overhead. Surprisingly, LIT attains exceptional fundamental visual capabilities, yielding up to an 18% improvement in captioning performance, while simultaneously alleviating hallucination in MLLMs. The model weights and source code will be publicly available.", "arxiv_id": "2503.22215v2", "arxiv_authors": ["Zhihan Zhou", "Feng Hong", "Jiaan Luo", "Jiangchao Yao", "Dongsheng Li", "Bo Han", "Ya Zhang", "Yanfeng Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1436372, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a741"}, "filepath": "data/2510.07741v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993062823252915, "type": "Poster", "name": "Learning to See Everything in Ultra-High Dynamic Range Scenes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115040", "abstract": "Ultra-high dynamic range (UHDR) scenes exhibit pronounced exposure disparities between bright and dark regions. Such conditions are common in nighttime scenes with light sources. Even standard exposure settings often result in a bimodal intensity distribution with boundary peaks, making it challenging to simultaneously preserve both highlight and shadow details. RGB-based bracketing methods can capture details at both ends using short-long exposure pairs, but are susceptible to misalignment and ghosting artifacts. A short-exposure image, however, already retains sufficient highlight detail. 
The main challenge lies in denoising and recovering information in dark regions. RAW images, thanks to their higher bit depth and more predictable noise characteristics, offer greater potential for addressing this challenge. This raises a key question: can we learn to see everything in UHDR scenes using only a single short-exposure RAW image? Our method, relying solely on one short-exposure frame, inherently avoids ghosting and motion blur, making it particularly robust in dynamic scenes. To achieve that, we introduce a two-stage framework: exposure correction via a ratio map to balance dynamic range, followed by brightness-aware noise modeling to enhance detail recovery in dark regions. To support this, we design a 9-stop bracketing pipeline to synthesize realistic UHDR images, and construct a dataset accordingly on static scenes, using only the shortest exposure as input for reconstruction. Experiments show that our method significantly outperforms existing single-frame approaches. Code will be released publicly.", "arxiv_id": "2510.07741v1", "arxiv_authors": ["Yuang Meng", "Xin Jin", "Lina Lei", "Chun-Le Guo", "Chongyi Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1033729, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a742"}, "filepath": "data/2505.21996v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998456981187016, "type": "Poster", "name": "Learning World Models for Interactive Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118999", "abstract": "Foundational world models must be both interactive and preserve spatialtemporal coherence to enable effective future planning with different action choices. However, present models for long video generation have limited inherent world modeling capabilities due to two main challenges: compounding errors and insufficient memory mechanisms. We enhance image-to-video models with interactive capabilities through additional action conditioning and autoregressive framework, and reveal that compounding error is inherently irreducible in autoregressive video generation, while insufficient memory mechanism leads to incoherence of world models. We propose video retrieval augmented generation (VRAG) with explicit global state conditioning, which significantly reduces long-term compounding errors and increases spatialtemporal consistency of video world models. In contrast, naive autoregressive generation with extended context windows and retrieval-augmented generation prove less effective for video generation, primarily due to the limited in-context learning capabilities of current video models. 
Our work illuminates the fundamental challenges in video world models and establishes a comprehensive benchmark for improving video generation models with internal world modeling capabilities.", "arxiv_id": "2505.21996v1", "arxiv_authors": ["Taiye Chen", "Xun Hu", "Zihan Ding", "Chi Jin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2617658, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a743"}, "filepath": "data/2503.04344v3.png", "tags": [], "_media_type": "image", "_rand": 0.9995729449346876, "type": "Poster", "name": "LEDiT: Your Length-Extrapolatable Diffusion Transformer without Positional Encoding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119583", "abstract": "Diffusion transformers (DiTs) struggle to generate images at resolutions higher than their training resolutions. The primary obstacle is that the explicit positional encodings (PE), such as RoPE, need extrapolating to unseen positions which degrades performance when the inference resolution differs from training. In this paper, We propose a Length-Extrapolatable Diffusion Transformer (LEDiT) to overcome this limitation. LEDiT needs no explicit PEs, thereby avoiding PE extrapolation. The key innovation of LEDiT lies in the use of causal attention. We demonstrate that causal attention can implicitly encode global positional information and show that such information facilitates extrapolation. We further introduce a locality enhancement module, which captures fine-grained local information to complement the global coarse-grained position information encoded by causal attention. Experimental results on both conditional and text-to-image generation tasks demonstrate that LEDiT supports up to 4\u00d7 resolution scaling (e.g., from 256$\\times$256 to 512$\\times$512), achieving better image quality compared to the state-of-the-art length extrapolation methods. We believe that LEDiT marks a departure from the standard RoPE-based methods and offers a promising insight into length extrapolation.", "arxiv_id": "2503.04344v3", "arxiv_authors": ["Shen Zhang", "Siyuan Liang", "Yaning Tan", "Zhaowei Chen", "Linze Li", "Ge Wu", "Yuhao Chen", "Shuheng Li", "Zhenyu Zhao", "Caihua Chen", "Jiajun Liang", "Yao Tang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5311112, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a744"}, "filepath": "data/2505.22647v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993320242167231, "type": "Poster", "name": "Let Them Talk: Audio-Driven Multi-Person Conversational Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120220", "abstract": "Audio-driven human animation methods, such as talking head and talking body generation, have made remarkable progress in generating synchronized facial movements and appealing visual quality videos. 
However, existing methods primarily focus on single human animation and struggle with multi-stream audio inputs, facing incorrect binding problems between audio and persons. Additionally, they exhibit limitations in instruction-following capabilities. To solve this problem, in this paper, we propose a novel task: Multi-Person Conversational Video Generation, and introduce a new framework, MultiTalk, to address the challenges during multi-person generation. Specifically, for audio injection, we investigate several schemes and propose the Label Rotary Position Embedding (L-RoPE) method to resolve the audio and person binding problem. Furthermore, during training, we observe that partial parameter training and multi-task training are crucial for preserving the instruction-following ability of the base model. MultiTalk achieves superior performance compared to other methods on several datasets, including talking head, talking body, and multi-person datasets, demonstrating the powerful generation capabilities of our approach.", "arxiv_id": "2505.22647v1", "arxiv_authors": ["Zhe Kong", "Feng Gao", "Yong Zhang", "Zhuoliang Kang", "Xiaoming Wei", "Xunliang Cai", "Guanying Chen", "Wenhan Luo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.505Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3817558, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a745"}, "filepath": "data/2506.09881v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994606778807273, "type": "Poster", "name": "Leveraging Depth and Language for Open-Vocabulary Domain-Generalized Semantic Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118636", "abstract": "Open-Vocabulary semantic segmentation (OVSS) and domain generalization in semantic segmentation (DGSS) highlight a subtle complementarity that motivates Open-Vocabulary Domain-Generalized Semantic Segmentation (OV-DGSS). OV-DGSS aims to generate pixel-level masks for unseen categories while maintaining robustness across unseen domains, a critical capability for real-world scenarios such as autonomous driving in adverse conditions. We introduce Vireo, a novel single-stage framework for OV-DGSS that unifies the strengths of OVSS and DGSS for the first time. Vireo builds upon frozen Visual Foundation Models (VFMs) and incorporates scene geometry via Depth VFMs to extract domain-invariant structural features. To bridge the gap between visual and textual modalities under domain shift, we propose three key components: (1) GeoText Prompts, which align geometric features with language cues and progressively refine encoder representations; (2) Coarse Mask Prior Embedding (CMPE) for enhancing gradient flow for faster convergence and stronger textual influence; and (3) the Domain-Open-Vocabulary Vector Embedding Head (DOV-VEH), which fuses refined structural and semantic features for robust prediction. Comprehensive evaluation on these components demonstrates the effectiveness of our designs. 
Our proposed Vireo achieves the state-of-the-art performance and surpasses existing methods by a large margin in both domain generalization and open-vocabulary recognition, offering a unified and scalable solution for robust visual understanding in diverse and dynamic environments. Code is available at https://github.com/anonymouse-9c53tp182bvz/Vireo.", "arxiv_id": "2506.09881v2", "arxiv_authors": ["Siyu Chen", "Ting Han", "Chengzheng Fu", "Changshe Zhang", "Chaolei Wang", "Jinhe Su", "Guorong Cai", "Meiliu Wu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1105016, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a746"}, "filepath": "data/2509.23639v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997951397789899, "type": "Poster", "name": "LightFair: Towards an Efficient Alternative for Fair T2I Diffusion via Debiasing Pre-trained Text Encoders", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115286", "abstract": "This paper explores a novel lightweight approach LightFair to achieve fair text-to-image diffusion models (T2I DMs) by addressing the adverse effects of the text encoder. Most existing methods either couple different parts of the diffusion model for full-parameter training or rely on auxiliary networks for correction. They incur heavy training or sampling burden and unsatisfactory performance. Since T2I DMs consist of multiple components, with the text encoder being the most fine-tunable and front-end module, this paper focuses on mitigating bias by fine-tuning text embeddings. To validate feasibility, we observe that the text encoder\u2019s neutral embedding output shows substantial skewness across image embeddings of various attributes in the CLIP space. More importantly, the noise prediction network further amplifies this imbalance. To finetune the text embedding, we propose a collaborative distance-constrained debiasing strategy that balances embedding distances to improve fairness without auxiliary references. However, mitigating bias can compromise the original generation quality. To address this, we introduce a two-stage text-guided sampling strategy to limit when the debiased text encoder intervenes. Extensive experiments demonstrate that LightFair is effective and efficient. 
Notably, on Stable Diffusion v1.5, our method achieves SOTA debiasing at just $1/4$ of the training burden, with virtually no increase in sampling burden.", "arxiv_id": "2509.23639v1", "arxiv_authors": ["Boyu Han", "Qianqian Xu", "Shilong Bao", "Zhiyong Yang", "Kangli Zi", "Qingming Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1170468, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a747"}, "filepath": "data/2501.16312v4.png", "tags": [], "_media_type": "image", "_rand": 0.9994836815863378, "type": "Poster", "name": "LinPrim: Linear Primitives for Differentiable Volumetric Rendering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117250", "abstract": "Volumetric rendering has become central to modern novel view synthesis methods, which use differentiable rendering to optimize 3D scene representations directly from observed views. While many recent works build on NeRF or 3D Gaussians, we explore an alternative volumetric scene representation. More specifically, we introduce two new scene representations based on linear primitives - octahedra and tetrahedra - both of which define homogeneous volumes bounded by triangular faces. To optimize these primitives, we present a differentiable rasterizer that runs efficiently on GPUs, allowing end-to-end gradient-based optimization while maintaining real-time rendering capabilities. Through experiments on real-world datasets, we demonstrate comparable performance to state-of-the-art volumetric methods while requiring fewer primitives to achieve similar reconstruction fidelity. Our findings deepen the understanding of 3D representations by providing insights into the fidelity and performance characteristics of transparent polyhedra and suggest that adopting novel primitives can expand the available design space.", "arxiv_id": "2501.16312v4", "arxiv_authors": ["Nicolas von L\u00fctzow", "Matthias Nie\u00dfner"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2d9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3620710, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a748"}, "filepath": "data/2507.02861v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991975851449182, "type": "Poster", "name": "LiteReality: Graphic-Ready 3D Scene Reconstruction from RGB-D Scans", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119955", "abstract": "We propose LiteReality, a novel pipeline that converts RGB-D scans of indoor environments into compact, realistic, and interactive 3D virtual replicas. LiteReality not only reconstructs scenes that visually resemble reality but also supports key features essential for graphics pipelines\u2014such as object individuality, articulation, high-quality physically based rendering materials, and physically based interaction. 
At its core, LiteReality first performs scene understanding and parses the results into a coherent 3D layout and objects, with the help of a structured scene graph. It then reconstructs the scene by retrieving the most visually similar 3D artist-crafted models from a curated asset database. Later, the Material Painting module enhances the realism of retrieved objects by recovering high-quality, spatially varying materials. Finally, the reconstructed scene is integrated into a simulation engine with basic physical properties applied to enable interactive behavior. The resulting scenes are compact, editable, and fully compatible with standard graphics pipelines, making them suitable for applications in AR/VR, gaming, robotics, and digital twins. In addition, LiteReality introduces a training-free object retrieval module that achieves state-of-the-art similarity performance, as benchmarked on the Scan2CAD dataset, along with a robust Material Painting module capable of transferring appearances from images of any style to 3D assets\u2014even in the presence of severe misalignment, occlusion, and poor lighting. We demonstrate the effectiveness of LiteReality on both real-life scans and public datasets.", "arxiv_id": "2507.02861v1", "arxiv_authors": ["Zhening Huang", "Xiaoyang Wu", "Fangcheng Zhong", "Hengshuang Zhao", "Matthias Nie\u00dfner", "Joan Lasenby"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2da"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4610879, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a749"}, "filepath": "data/2506.07570v2.png", "tags": [], "_media_type": "image", "_rand": 0.999243351094052, "type": "Poster", "name": "LLM-driven Indoor Scene Layout Generation via Scaled Human-aligned Data Synthesis and Multi-Stage Preference Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117323", "abstract": "Automatic indoor layout generation has attracted increasing attention due to its potential in interior design, virtual environment construction, and embodied AI. Existing methods fall into two categories: prompt-driven approaches that leverage proprietary LLM services (e.g., GPT APIs), and learning-based methods trained on layout data upon diffusion-based models. Prompt-driven methods often suffer from spatial inconsistency and high computational costs, while learning-based methods are typically constrained by coarse relational graphs and limited datasets, restricting their generalization to diverse room categories. In this paper, we revisit LLM-based indoor layout generation and present 3D-SynthPlace, a large-scale dataset that combines synthetic layouts generated via a `GPT synthesize, Human inspect' pipeline, upgraded from the 3D-Front dataset. 3D-SynthPlace contains nearly 17,000 scenes, covering four common room types\u2014bedroom, living room, kitchen, and bathroom\u2014enriched with diverse objects and high-level spatial annotations. We further introduce OptiScene, a strong open-source LLM optimized for indoor layout generation, fine-tuned based on our 3D-SynthPlace dataset through our two-stage training. 
For the warm-up stage I, we adopt supervised fine-tuning (SFT), in which the model is taught to first generate high-level spatial descriptions and then conditionally predict concrete object placements. For the reinforcing stage II, to better align the generated layouts with human design preferences, we apply multi-turn direct preference optimization (DPO), which significantly improves layout quality and generation success rates. Extensive experiments demonstrate that OptiScene outperforms traditional prompt-driven and learning-based baselines. Moreover, OptiScene shows promising potential in interactive tasks such as scene editing and robot navigation, highlighting its applicability beyond static layout generation.", "arxiv_id": "2506.07570v2", "arxiv_authors": ["Yixuan Yang", "Zhen Luo", "Tongsheng Ding", "Junru Lu", "Mingqi Gao", "Jinyu Yang", "Victor Sanchez", "Feng Zheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2db"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2172029, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a74a"}, "filepath": "data/2509.09672v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993414666394509, "type": "Poster", "name": "Locality in Image Diffusion Models Emerges from Data Statistics", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115663", "abstract": "Among generative models, diffusion models are uniquely intriguing due to the existence of a closed-form Bayes-optimal solution to their training objective, often referred to as the optimal denoiser. However, this optimal denoiser merely reproduces images in the training set, and hence fails to serve as a complete theory for the behavior of deep diffusion models. Recent work has attempted to characterize this gap between the optimal denoiser and deep diffusion models and has produced analytical, training-free models that can generate images that resemble those generated by a trained UNet. The best-performing method suggests that shift equivariance and locality inductive biases of convolutional neural networks are the cause of this gap, and hence, incorporates these assumptions into its analytical model. In this work, we present evidence that the locality in deep diffusion models emerges as a statistical property of the image dataset, \\emph{not} due to the inductive bias of convolutional neural networks. 
Specifically, we demonstrate that an optimal parametric linear denoiser exhibits similar locality properties and provide a theoretical analysis that grounds this in a principal component analysis of the training set. We then show that an analytical denoiser based on these statistics better matches scores predicted by a deep diffusion model than the prior, expert-crafted alternative.", "arxiv_id": "2509.09672v1", "arxiv_authors": ["Artem Lukoianov", "Chenyang Yuan", "Justin Solomon", "Vincent Sitzmann"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2dc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 951319, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a74b"}, "filepath": "data/2505.18832v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993613031290598, "type": "Poster", "name": "Localizing Knowledge in Diffusion Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117911", "abstract": "Understanding how knowledge is distributed across the layers of generative models is crucial for improving interpretability, controllability, and adaptation. While prior work has explored knowledge localization in UNet-based architectures, Diffusion Transformer (DiT)-based models remain underexplored in this context. In this paper, we propose a model- and knowledge-agnostic method to localize where specific types of knowledge are encoded within the DiT blocks. We evaluate our method on state-of-the-art DiT-based models, including PixArt-$\\alpha$, FLUX, and SANA, across six diverse knowledge categories. We show that the identified blocks are both interpretable and causally linked to the expression of knowledge in generated outputs. Building on these insights, we apply our localization framework to two key applications: *model personalization* and *knowledge unlearning*. 
In both settings, our localized fine-tuning approach enables efficient and targeted updates, reducing computational cost, improving task-specific performance, and better preserving general model behavior with minimal interference to unrelated or surrounding content. Overall, our findings offer new insights into the internal structure of DiTs and introduce a practical pathway for more interpretable, efficient, and controllable model editing.", "arxiv_id": "2505.18832v1", "arxiv_authors": ["Arman Zarei", "Samyadeep Basu", "Keivan Rezaei", "Zihao Lin", "Sayan Nag", "Soheil Feizi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2dd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1074717, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a74c"}, "filepath": "data/2503.18142v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997669616062689, "type": "Poster", "name": "LocDiff: Identifying Locations on Earth by Diffusing in the Hilbert Space", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116714", "abstract": "Image geolocalization is a fundamental yet challenging task, aiming at inferring the geolocation on Earth where an image is taken. State-of-the-art methods employ either grid-based classification or gallery-based image-location retrieval, whose spatial generalizability significantly suffers if the spatial distribution of test images does not align with the choices of grids and galleries. Recently emerging generative approaches, while getting rid of grids and galleries, use raw geographical coordinates and suffer quality losses due to their lack of multi-scale information. To address these limitations, we propose a multi-scale latent diffusion model called LocDiff for image geolocalization. We developed a novel positional encoding-decoding framework called Spherical Harmonics Dirac Delta (SHDD) Representations, which encodes points on a spherical surface (e.g., geolocations on Earth) into a Hilbert space of Spherical Harmonics coefficients and decodes points (geolocations) by mode-seeking on spherical probability distributions. We also propose a novel SirenNet-based architecture (CS-UNet) to learn an image-based conditional backward process in the latent SHDD space by minimizing a latent KL-divergence loss. To the best of our knowledge, LocDiff is the first image geolocalization model that performs latent diffusion in a multi-scale location encoding space and generates geolocations under the guidance of images. 
Experimental results show that LocDiff can outperform all state-of-the-art grid-based, retrieval-based, and diffusion-based baselines across 5 challenging global-scale image geolocalization datasets, and demonstrates significantly stronger generalizability to unseen geolocations.", "arxiv_id": "2503.18142v1", "arxiv_authors": ["Zhangyu Wang", "Jielu Zhang", "Zhongliang Zhou", "Qian Cao", "Nemin Wu", "Zeping Liu", "Lan Mu", "Yang Song", "Yiqun Xie", "Ni Lao", "Gengchen Mai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2de"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1629355, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a74d"}, "filepath": "data/2505.23158v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997503847671674, "type": "Poster", "name": "LODGE: Level-of-Detail Large-Scale Gaussian Splatting with Efficient Rendering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118763", "abstract": "In this work, we present a novel level-of-detail (LOD) method for 3D Gaussian Splatting that enables real-time rendering of large-scale scenes on memory-constrained devices. Our approach introduces a hierarchical LOD representation that iteratively selects optimal subsets of Gaussians based on camera distance, thus largely reducing both rendering time and GPU memory usage. We construct each LOD level by applying a depth-aware 3D smoothing filter, followed by importance-based pruning and fine-tuning to maintain visual fidelity. To further reduce memory overhead, we partition the scene into spatial chunks and dynamically load only relevant Gaussians during rendering, employing an opacity-blending mechanism to avoid visual artifacts at chunk boundaries. Our method achieves state-of-the-art performance on both outdoor (Hierarchical 3DGS) and indoor (Zip-NeRF) datasets, delivering high-quality renderings with reduced latency and memory requirements.", "arxiv_id": "2505.23158v1", "arxiv_authors": ["Jonas Kulhanek", "Marie-Julie Rakotosaona", "Fabian Manhardt", "Christina Tsalicoglou", "Michael Niemeyer", "Torsten Sattler", "Songyou Peng", "Federico Tombari"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2df"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2312023, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a74e"}, "filepath": "data/2503.13139v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999718841404384, "type": "Poster", "name": "Logic-in-Frames: Dynamic Keyframe Search via Visual Semantic-Logical Verification for Long Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115148", "abstract": "Understanding long video content is a complex endeavor that often relies on densely sampled frame captions or end-to-end feature selectors, yet these techniques commonly overlook the logical relationships between textual queries and visual elements. 
In practice, computational constraints necessitate coarse frame subsampling, a challenge analogous to \u201cfinding a needle in a haystack.\u201d To address this issue, we introduce a semantics-driven search framework that reformulates keyframe selection under the paradigm of Visual Semantic-Logical Search (VSLS). Specifically, we systematically define four fundamental logical dependencies: 1) spatial co-occurrence, 2) temporal proximity, 3) attribute dependency, and 4) causal order. These relations dynamically update frame sampling distributions through an iterative refinement process, enabling context-aware identification of semantically critical frames tailored to specific query requirements. Our method establishes new state-of-the-art performance on the manually annotated benchmark in keyframe selection metrics. Furthermore, when applied to downstream video question-answering tasks, the proposed approach demonstrates the best performance gains over existing methods on LongVideoBench and Video-MME, validating its effectiveness in bridging the logical gap between textual queries and visual-temporal reasoning. The code will be publicly available.", "arxiv_id": "2503.13139v2", "arxiv_authors": ["Weiyu Guo", "Ziyang Chen", "Shaoguang Wang", "Jianxiang He", "Yijie Xu", "Jinhui Ye", "Ying Sun", "Hui Xiong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1135048, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a74f"}, "filepath": "data/2510.00303v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991111121189379, "type": "Poster", "name": "Looking Beyond the Known: Towards a Data Discovery Guided Open-World Object Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120090", "abstract": "Open-World Object Detection (OWOD) enriches traditional object detectors by enabling continual discovery and integration of unknown objects via human guidance. However, existing OWOD approaches frequently suffer from semantic confusion between known and unknown classes, alongside catastrophic forgetting, leading to diminished unknown recall and degraded known-class accuracy. To overcome these challenges, we propose **C**ombinato**r**ial **O**pen-**W**orld **D**etection (**CROWD**), a unified framework reformulating unknown object discovery and adaptation as an interwoven combinatorial (set-based) data-discovery (CROWD-Discover) and representation learning (CROWD-Learn) task. CROWD-Discover strategically mines unknown instances by maximizing Submodular Conditional Gain (SCG) functions, selecting representative examples distinctly dissimilar from known objects. Subsequently, CROWD-Learn employs novel combinatorial objectives that jointly disentangle known and unknown representations while maintaining discriminative coherence among known classes, thus mitigating confusion and forgetting. 
Extensive evaluations on OWOD benchmarks illustrate that CROWD achieves improvements of 2.83% and 2.05% in known-class accuracy on M-OWODB and S-OWODB, respectively, and nearly 2.4$\\times$ unknown recall compared to leading baselines.", "arxiv_id": "2510.00303v1", "arxiv_authors": ["Anay Majee", "Amitesh Gangrade", "Rishabh Iyer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1865609, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a750"}, "filepath": "data/2505.18051v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997589585250193, "type": "Poster", "name": "LookWhere? Efficient Visual Recognition by Learning Where to Look and What to See from Self-Supervision", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117306", "abstract": "Vision transformers are ever larger, more accurate, and more expensive to compute.At high resolution, the expense is even more extreme as the number of tokens grows quadratically in the image size. We turn to adaptive computation to cope with this cost by learning to predict where to compute.Our LookWhere method divides the computation between a low-resolution selector and a high-resolution extractor without ever processing the full high-resolution input.We jointly pretrain the selector and extractor without task supervision by distillation from a self-supervised teacher, in effect learning where and what to compute at the same time.Unlike prior token reduction methods, which pay to save by pruning already-computed tokens, and prior token selection methods, which require complex and expensive per-task optimization, LookWhere economically and accurately selects and extracts transferrable representations of images.We show that LookWhere excels at sparse recognition on high-resolution inputs (Traffic Signs), maintaining accuracy while reducing FLOPs by 17x and time by 4x, and standard recognition tasks that are global (ImageNet classification) and local (ADE20K segmentation), improving accuracy while reducing time by 1.36x.", "arxiv_id": "2505.18051v1", "arxiv_authors": ["Anthony Fuller", "Yousef Yassin", "Junfeng Wen", "Daniel G. Kyrollos", "Tarek Ibrahim", "James R. Green", "Evan Shelhamer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1162867, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a751"}, "filepath": "data/2505.23758v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990153545390473, "type": "Poster", "name": "LoRAShop: Training-Free Multi-Concept Image Generation and Editing with Rectified Flow Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117656", "abstract": "We introduce LoRAShop, the first framework for multi-concept image generation and editing with LoRA models. 
LoRAShop builds on a key observation about the feature interaction patterns inside Flux-style diffusion transformers: concept-specific transformer features activate spatially coherent regions early in the denoising process. We harness this observation to derive a disentangled latent mask for each concept in a prior forward pass and blend the corresponding LoRA weights only within regions bounding the concepts to be personalized. The resulting edits seamlessly integrate multiple subjects or styles into the original scene while preserving global context, lighting, and fine details. Our experiments demonstrate that LoRAShop delivers better identity preservation compared to baselines. By eliminating retraining and external constraints, LoRAShop turns personalized diffusion models into a practical `photoshop-with-LoRAs' tool and opens new avenues for compositional visual storytelling and rapid creative iteration.", "arxiv_id": "2505.23758v1", "arxiv_authors": ["Yusuf Dalva", "Hidir Yesiltepe", "Pinar Yanardag"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.506Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 7860503, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a752"}, "filepath": "data/2506.01935v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997087232288526, "type": "Poster", "name": "Low-Rank Head Avatar Personalization with Registers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116196", "abstract": "We introduce a novel method for low-rank personalization of a generic model for head avatar generation. Prior work proposes generic models that achieve high-quality face animation by leveraging large-scale datasets of multiple identities. However, such generic models usually fail to synthesize unique identity-specific details, since they learn a general domain prior. To adapt to specific subjects, we find that it is still challenging to capture high-frequency facial details via popular solutions like low-rank adaptation (LoRA). This motivates us to propose a specific architecture, a Register Module, that enhances the performance of LoRA, while requiring only a small number of parameters to adapt to an unseen identity. Our module is applied to intermediate features of a pre-trained model, storing and re-purposing information in a learnable 3D feature space. To demonstrate the efficacy of our personalization method, we collect a dataset of talking videos of individuals with distinctive facial details, such as wrinkles and tattoos. Our approach faithfully captures unseen faces, outperforming existing methods quantitatively and qualitatively. 
We will release the code, models, and dataset to the public.", "arxiv_id": "2506.01935v1", "arxiv_authors": ["Sai Tanmay Reddy Chakkera", "Aggelina Chatziagapi", "Md Moniruzzaman", "Chen-Ping Yu", "Yi-Hsuan Tsai", "Dimitris Samaras"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 954828, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a753"}, "filepath": "data/2509.03680v1.png", "tags": [], "_media_type": "image", "_rand": 0.999650502426606, "type": "Poster", "name": "LuxDiT: Lighting Estimation with Video Diffusion Transformer", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116097", "abstract": "Estimating scene lighting from a single image or video remains a longstanding challenge in computer vision and graphics. Learning-based approaches are constrained by the scarcity of ground-truth HDR environment maps, which are expensive to capture and limited in diversity. While recent generative models offer strong priors for image synthesis, lighting estimation remains difficult due to its reliance on indirect visual cues, the need to infer global (non-local) context, and the recovery of high-dynamic-range outputs. We propose LuxDiT, a novel data-driven approach that fine-tunes a video diffusion transformer to generate HDR environment maps conditioned on visual input. Trained on a large synthetic dataset with diverse lighting conditions, our model learns to infer illumination from indirect visual cues and generalizes effectively to real-world scenes. To improve semantic alignment between the input and the predicted environment map, we introduce a low-rank adaptation finetuning strategy using a collected dataset of HDR panoramas. Our method produces accurate lighting predictions with realistic angular high-frequency details, outperforming existing state-of-the-art techniques in both quantitative and qualitative evaluations.", "arxiv_id": "2509.03680v1", "arxiv_authors": ["Ruofan Liang", "Kai He", "Zan Gojcic", "Igor Gilitschenski", "Sanja Fidler", "Nandita Vijaykumar", "Zian Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e5"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1009483, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a754"}, "filepath": "data/2506.09045v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991792039802821, "type": "Poster", "name": "MagCache: Fast Video Generation with Magnitude-Aware Cache", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118625", "abstract": "Existing acceleration techniques for video diffusion models often rely on uniform heuristics or time-embedding variants to skip timesteps and reuse cached features. These approaches typically require extensive calibration with curated prompts and risk inconsistent outputs due to prompt-specific overfitting. 
In this paper, we introduce a novel and robust discovery: a unified magnitude law observed across different models and prompts. Specifically, the magnitude ratio of successive residual outputs decreases monotonically: steadily over most timesteps and rapidly in the last several steps. Leveraging this insight, we introduce a Magnitude-aware Cache (MagCache) that adaptively skips unimportant timesteps using an error modeling mechanism and adaptive caching strategy. Unlike existing methods requiring dozens of curated samples for calibration, MagCache only requires a single sample for calibration. Experimental results show that MagCache achieves 2.1\u00d7 and 2.68\u00d7 speedups on Open-Sora and Wan 2.1, respectively, while preserving superior visual fidelity. It significantly outperforms existing methods in LPIPS, SSIM, and PSNR, under comparable computational budgets.", "arxiv_id": "2506.09045v1", "arxiv_authors": ["Zehong Ma", "Longhui Wei", "Feng Wang", "Shiliang Zhang", "Qi Tian"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1047306, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a755"}, "filepath": "data/2506.07016v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998200577863336, "type": "Poster", "name": "MAGNET: A Multi-agent Framework for Finding Audio-Visual Needles by Reasoning over Multi-Video Haystacks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119257", "abstract": "Large multimodal models (LMMs) have shown remarkable progress in audio-visual understanding, yet they struggle with real-world scenarios that require complex reasoning across extensive video collections. Existing benchmarks for video question answering remain limited in scope, typically involving one clip per query, which falls short of representing the challenges of large-scale, audio-visual retrieval and reasoning encountered in practical applications. To bridge this gap, we introduce a novel task named AV-HaystacksQA, where the goal is to identify salient segments across different videos in response to a query and link them together to generate the most informative answer. To this end, we present AVHaystacks, an audio-visual benchmark comprising 3100 annotated QA pairs designed to assess the capabilities of LMMs in multi-video retrieval and temporal grounding task. Additionally, we propose a model-agnostic, multi-agent framework MAGNET to address this challenge, achieving up to 89% and 65% relative improvements over baseline methods on BLEU@4 and GPT evaluation scores in QA task on our proposed AVHaystacks. To enable robust evaluation of multi-video retrieval and temporal grounding for optimal response generation, we introduce two new metrics, STEM, which captures alignment errors between a ground truth and a predicted step sequence, and MTGS, to facilitate balanced and interpretable evaluation of segment-level grounding performance. 
Our code and dataset will be released publicly.", "arxiv_id": "2506.07016v2", "arxiv_authors": ["Sanjoy Chowdhury", "Mohamed Elmoghany", "Yohan Abeysinghe", "Junjie Fei", "Sayan Nag", "Salman Khan", "Mohamed Elhoseiny", "Dinesh Manocha"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1113926, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a756"}, "filepath": "data/2507.06363v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996196667814953, "type": "Poster", "name": "Mamba Goes HoME: Hierarchical Soft Mixture-of-Experts for 3D Medical Image Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115006", "abstract": "In recent years, artificial intelligence has significantly advanced medical image segmentation. However, challenges remain, including efficient 3D medical image processing across diverse modalities and handling data variability. In this work, we introduce Hierarchical Soft Mixture-of-Experts (HoME), a two-level token-routing layer for efficient long-context modeling, specifically designed for 3D medical image segmentation. Built on the Mamba state-space model (SSM) backbone, HoME enhances sequential modeling through sparse, adaptive expert routing. The first stage employs a Soft Mixture-of-Experts (SMoE) layer to partition input sequences into local groups, routing tokens to specialized per-group experts for localized feature extraction. The second stage aggregates these outputs via a global SMoE layer, enabling cross-group information fusion and global context refinement. This hierarchical design, combining local expert routing with global expert refinement, improves generalizability and segmentation performance, surpassing state-of-the-art results across datasets from the three most commonly used 3D medical imaging modalities and data quality.", "arxiv_id": "2507.06363v2", "arxiv_authors": ["Szymon P\u0142otka", "Gizem Mert", "Maciej Chrabaszcz", "Ewa Szczurek", "Arkadiusz Sitek"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e8"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1250560, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a757"}, "filepath": "data/2508.10133v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990091613822375, "type": "Poster", "name": "MANGO: Multimodal Attention-based Normalizing Flow Approach to Fusion Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117075", "abstract": "Multimodal learning has gained much success in recent years. However, current multimodal fusion methods adopt the attention mechanism of Transformers to implicitly learn the underlying correlation of multimodal features. As a result, the multimodal model cannot capture the essential features of each modality, making it difficult to comprehend complex structures and correlations of multimodal inputs. 
This paper introduces a novel Multimodal Attention-based Normalizing Flow (MANGO) approach\\footnote{The source code of this work will be publicly available.} to developing explicit, interpretable, and tractable multimodal fusion learning. In particular, we propose a new Invertible Cross-Attention (ICA) layer to develop the Normalizing Flow-based Model for multimodal data. To efficiently capture the complex, underlying correlations in multimodal data in our proposed invertible cross-attention layer, we propose three new cross-attention mechanisms: Modality-to-Modality Cross-Attention (MMCA), Inter-Modality Cross-Attention (IMCA), and Learnable Inter-Modality Cross-Attention (LICA). Finally, we introduce a new Multimodal Attention-based Normalizing Flow to enable the scalability of our proposed method to high-dimensional multimodal data. Our experimental results on three different multimodal learning tasks, i.e., semantic segmentation, image-to-image translation, and movie genre classification, have illustrated the state-of-the-art (SoTA) performance of the proposed approach.", "arxiv_id": "2508.10133v1", "arxiv_authors": ["Thanh-Dat Truong", "Christophe Bobda", "Nitin Agarwal", "Khoa Luu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2e9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1508353, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a758"}, "filepath": "data/2509.25863v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999107531757826, "type": "Poster", "name": "MAPLE: Multi-scale Attribute-enhanced Prompt Learning for Few-shot Whole Slide Image Classification", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115160", "abstract": "Prompt learning has emerged as a promising paradigm for adapting pre-trained vision-language models (VLMs) to few-shot whole slide image (WSI) classification by aligning visual features with textual representations, thereby reducing annotation cost and enhancing model generalization. Nevertheless, existing methods typically rely on slide-level prompts and fail to capture the subtype-specific phenotypic variations of histological entities (e.g., nuclei, glands) that are critical for cancer diagnosis. To address this gap, we propose Multi-scale Attribute-enhanced Prompt Learning (MAPLE), a hierarchical framework for few-shot WSI classification that jointly integrates multi-scale visual semantics and performs prediction at both the entity and slide levels. Specifically, we first leverage large language models (LLMs) to generate entity-level prompts that can help identify multi-scale histological entities and their phenotypic attributes, as well as slide-level prompts to capture global visual descriptions. Then, an entity-guided cross-attention module is proposed to generate entity-level features, followed by aligning with their corresponding subtype-specific attributes for fine-grained entity-level prediction. To enrich entity representations, we further develop a cross-scale entity graph learning module that can update these representations by capturing their semantic correlations within and across scales. 
The refined representations are then aggregated into a slide-level representation and aligned with the corresponding prompts for slide-level prediction. Finally, we combine both entity-level and slide-level outputs to produce the final prediction results. Results on three cancer cohorts confirm the effectiveness of our approach in addressing few-shot pathology diagnosis tasks.", "arxiv_id": "2509.25863v1", "arxiv_authors": ["Junjie Zhou", "Wei Shao", "Yagao Yue", "Wei Mu", "Peng Wan", "Qi Zhu", "Daoqiang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ea"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1063598, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a759"}, "filepath": "data/2507.07978v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996857466214585, "type": "Poster", "name": "Martian World Models: Controllable Video Synthesis with Physically Accurate 3D Reconstructions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121457", "abstract": "The synthesis of realistic Martian landscape videos, essential for mission rehearsal and robotic simulation, presents unique challenges. These primarily stem from the scarcity of high-quality Martian data and the significant domain gap relative to terrestrial imagery.To address these challenges, we introduce a holistic solution comprising two main components: 1) a data curation framework, Multimodal Mars Synthesis (M3arsSynth), which processes stereo navigation images to render high-fidelity 3D video sequences. 2) a video-based Martian terrain generator (MarsGen), that utilizes multimodal conditioning data to accurately synthesize novel, 3D-consistent frames. Our data are sourced from NASA\u2019s Planetary Data System (PDS), covering diverse Martian terrains and dates, enabling the production of physics-accurate 3D surface models at metric-scale resolution. During inference, MarsGen is conditioned on an initial image frame and can be guided by specified camera trajectories or textual prompts to generate new environments.Experimental results demonstrate that our solution surpasses video synthesis approaches trained on terrestrial data, achieving superior visual quality and 3D structural consistency.", "arxiv_id": "2507.07978v1", "arxiv_authors": ["Longfei Li", "Zhiwen Fan", "Wenyan Cong", "Xinhang Liu", "Yuyang Yin", "Matt Foutter", "Panwang Pan", "Chenyu You", "Yue Wang", "Zhangyang Wang", "Yao Zhao", "Marco Pavone", "Yunchao Wei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2eb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3623795, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a75a"}, "filepath": "data/2504.12739v3.png", "tags": [], "_media_type": "image", "_rand": 0.9998029294902977, "type": "Poster", "name": "Mask Image Watermarking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117000", "abstract": "We present MaskMark, a simple, efficient, and flexible framework for image watermarking. 
MaskMark has two variants: (1) MaskMark-D, which supports global watermark embedding, watermark localization, and local watermark extraction for applications such as tamper detection; (2) MaskMark-ED, which focuses on local watermark embedding and extraction, offering enhanced robustness in small regions to support fine-grained image protection. MaskMark-D builds on the classical encoder-distortion layer-decoder training paradigm. In MaskMark-D, we introduce a simple masking mechanism during the decoding stage that enables both global and local watermark extraction. During training, the decoder is guided by various types of masks applied to watermarked images before extraction, helping it learn to localize watermarks and extract them from the corresponding local areas. MaskMark-ED extends this design by incorporating the mask into the encoding stage as well, guiding the encoder to embed the watermark in designated local regions, which improves robustness under regional attacks. Extensive experiments show that MaskMark achieves state-of-the-art performance in global and local watermark extraction, watermark localization, and multi-watermark embedding. It outperforms all existing baselines, including the recent leading model WAM for local watermarking, while preserving high visual quality of the watermarked images. In addition, MaskMark is highly efficient and adaptable. It requires only 20 hours of training on a single A6000 GPU, achieving 15\u00d7 computational efficiency compared to WAM. By simply adjusting the distortion layer, MaskMark can be quickly fine-tuned to meet varying robustness requirements.", "arxiv_id": "2504.12739v3", "arxiv_authors": ["Runyi Hu", "Jie Zhang", "Shiqian Zhao", "Nils Lukas", "Jiwei Li", "Qing Guo", "Han Qiu", "Tianwei Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ec"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1082295, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a75b"}, "filepath": "data/2510.17845v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992316663139578, "type": "Poster", "name": "MAT-Agent: Learning to Dynamically Optimize Multi-Label Image Classification Training via Multi-Agent Collaboration", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117440", "abstract": "We propose a novel collaborative multi-agent optimization framework for adaptive training in multi-label image classification, fundamentally advancing beyond static decision rules and isolated automation. Our method deploys a set of distributed, task-specific agents, each responsible for dynamically orchestrating critical training components\u2014including data augmentation, optimization methods, learning rate schedules, and loss functions\u2014according to evolving visual-semantic relationships and training states. Each agent employs an advanced non-stationary multi-armed bandit algorithm, integrating both $\\epsilon$-greedy and upper confidence bound strategies, to judiciously balance exploration with exploitation throughout the training lifecycle. 
A hierarchical composite reward mechanism synergizes overall classification accuracy, rare class recognition, and training stability, fostering both independent optimization and implicit collaborative behavior among agents. The framework further leverages refined techniques such as dual-rate exponential moving average smoothing and structured mixed-precision training to enhance robustness and computational efficiency. Extensive experiments across benchmarks including Pascal VOC, COCO, Yeast, and Mediamill demonstrate that our approach achieves superior mean average precision and rare-class F1 scores compared to state-of-the-art methods, while also exhibiting rapid convergence and remarkable cross-domain generalization. Our results indicate that collaborative multi-agent adaptive optimization offers a scalable and principled solution for self-optimizing deep learning in complex multi-label scenarios.", "arxiv_id": "2510.17845v1", "arxiv_authors": ["Jusheng Zhang", "Kaitong Cai", "Yijia Fan", "Ningyuan Liu", "Keze Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ed"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1068365, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a75c"}, "filepath": "data/2510.01532v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992319426730676, "type": "Poster", "name": "MATCH: Multi-faceted Adaptive Topo-Consistency for Semi-Supervised Histopathology Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117177", "abstract": "In semi-supervised segmentation, capturing meaningful semantic structures from unlabeled data is essential. This is particularly challenging in histopathology image analysis where objects are densely distributed. To address this issue, we propose a semi-supervised segmentation framework designed to robustly identify and preserve relevant topological features. Our method leverages multiple perturbed predictions obtained through stochastic dropouts and temporal training snapshots, enforcing topological consistency across these varied outputs. This consistency mechanism helps distinguish biologically meaningful structures from transient and noisy artifacts. A key challenge in this process is to accurately match the corresponding topological features across the predictions in the absence of ground truth. To overcome this, we introduce a novel matching strategy that integrates spatial overlap with global structural alignment, minimizing discrepancies among predictions. 
Extensive experiments demonstrate that our approach effectively reduces topological errors, resulting in more robust and accurate segmentations essential for reliable downstream analysis.", "arxiv_id": "2510.01532v1", "arxiv_authors": ["Meilong Xu", "Xiaoling Hu", "Shahira Abousamra", "Chen Li", "Chao Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ee"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1031604, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a75d"}, "filepath": "data/2510.11387v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996570722669782, "type": "Poster", "name": "MaterialRefGS: Reflective Gaussian Splatting with Multi-view Consistent Material Inference", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118850", "abstract": "Modeling reflections from 2D images is essential for photorealistic rendering and novel view synthesis. Recent approaches enhance Gaussian primitives with reflection-related material attributes to enable physically based rendering (PBR) with 3D Gaussian Splatting (3DGS). However, the material inference often lacks sufficient constraints, especially under limited environment modeling, resulting in illumination aliasing and reduced generalization. In this work, we revisit the problem from a multi-view perspective and show that multi-view consistent material inference with more physically-based environment modeling is key to learning accurate reflections with 3DGS. To this end, we enforce 3D Gaussians to produce multi-view consistent material maps during deferred shading. We also track photometric variations across views to identify highly reflective regions, which serve as strong priors for reflection strength terms. To handle indirect illumination caused by inter-object occlusions, we further introduce an environment modeling strategy through ray tracing with 3DGS, enabling photorealistic rendering of indirect radiance. Experiments on widely used benchmarks show that our method faithfully recovers both illumination and geometry, achieving state-of-the-art rendering quality in novel views synthesis.", "arxiv_id": "2510.11387v2", "arxiv_authors": ["Wenyuan Zhang", "Jimin Tang", "Weiqi Zhang", "Yi Fang", "Yu-Shen Liu", "Zhizhong Han"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ef"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.507Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4572703, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a75e"}, "filepath": "data/2506.00838v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993286674977087, "type": "Poster", "name": "Max Entropy Moment Kalman Filter for Polynomial Systems with Arbitrary Noise", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118033", "abstract": "Designing optimal Bayes filters for nonlinear non-Gaussian systems is a challenging task. The main difficulties are: 1) representing complex beliefs, 2) handling non-Gaussian noise, and 3) marginalizing past states. 
To address these challenges, we focus on polynomial systems and propose the Max Entropy Moment Kalman Filter (MEM-KF). To address 1), we represent arbitrary beliefs by a Moment-Constrained Max-Entropy Distribution (MED). The MED can asymptotically approximate almost any distribution given an increasing number of moment constraints. To address 2), we model the noise in the process and observation model as MED. To address 3), we propagate the moments through the process model and recover the distribution as MED, thus avoiding symbolic integration, which is generally intractable. All the steps in MEM-KF, including the extraction of a point estimate, can be solved via convex optimization. We showcase the MEM-KF in challenging robotics tasks, such as localization with unknown data association.", "arxiv_id": "2506.00838v1", "arxiv_authors": ["Sangli Teng", "Harry Zhang", "David Jin", "Ashkan Jasour", "Ram Vasudevan", "Maani Ghaffari", "Luca Carlone"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f0"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 993502, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a75f"}, "filepath": "data/2510.23301v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997395534425227, "type": "Poster", "name": "MDReID: Modality-Decoupled Learning for Any-to-Any Multi-Modal Object Re-Identification", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119678", "abstract": "The challenge of inconsistent modalities in real-world applications presents significant obstacles to effective object re-identification (ReID). However, most existing approaches assume modality-matched conditions, significantly limiting their effectiveness in modality-mismatched scenarios. To overcome this limitation and achieve a more flexible ReID, we introduce MDReID to allow any-to-any image-level ReID systems. MDReID is inspired by the widely recognized perspective that modality information comprises both modality-shared features, predictable across modalities, and unpredictable modality-specific features, which are inherently modality-dependent and consist of two key components: the Modality Decoupling Module (MDM) and Modality-aware Metric Learning (MML). Specifically, MDM explicitly decomposes modality features into modality-shared and modality-specific representations, enabling effective retrieval in both modality-aligned and mismatched scenarios. MML, a tailored metric learning strategy, further enhances feature discrimination and decoupling by exploiting distributional relationships between shared and specific modality features. Extensive experiments conducted on three challenging multi-modality ReID benchmarks (RGBNT201, RGBNT100, MSVR310) consistently demonstrate the superiority of MDL. 
MDReID achieves significant mAP improvements of 9.8\\%, 3.0\\%, and 11.5\\% in modality-matched scenarios, and average gains of 3.4\\%, 11.8\\%, and 10.9\\% in modality-mismatched scenarios, respectively.", "arxiv_id": "2510.23301v1", "arxiv_authors": ["Yingying Feng", "Jie Li", "Jie Hu", "Yukang Zhang", "Lei Tan", "Jiayi Ji"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1085744, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a760"}, "filepath": "data/2501.04184v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999736667564931, "type": "Poster", "name": "MedicalNarratives: Connecting Medical Vision and Language with Localized Narratives", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121849", "abstract": "Multi-modal models are data hungry. While datasets with natural images are abundant, medical image datasets can not afford the same luxury. To enable representation learning for medical images at scale, we turn to YouTube, a platform with a large reservoir of open-source medical pedagogical videos. We curate MedicalNarratives, a dataset 4.7M medical image-text pairs, with 1M samples containing dense annotations in the form of traces and bounding boxes. Similar to think-aloud studies where instructors speak while hovering their mouse cursor movements over relevant image regions, 1M images in MedicalNarratives contains localized mouse traces in image pixels, creating a spatial association between the text and pixels. To evaluate the utility of MedicalNarratives, we train GenMedClip with a CLIP-like objective using our dataset spanning 12 medical domains. GenMedClip outperforms previous state-of-the-art models on all 12 domains on a newly constructed medical imaging benchmark. Data, demo, code, and models will be made available.", "arxiv_id": "2501.04184v2", "arxiv_authors": ["Wisdom O. Ikezogwo", "Kevin Zhang", "Mehmet Saygin Seyfioglu", "Fatemeh Ghezloo", "Linda Shapiro", "Ranjay Krishna"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1564324, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a761"}, "filepath": "data/2505.11852v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995114459931069, "type": "Poster", "name": "MedSG-Bench: A Benchmark for Medical Image Sequences Grounding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121815", "abstract": "Visual grounding is essential for precise perception and reasoning in multimodal large language models (MLLMs), especially in medical imaging domains. While existing medical visual grounding benchmarks primarily focus on single-image scenarios, real-world clinical applications often involve sequential images, where accurate lesion localization across different modalities and temporal tracking of disease progression (e.g., pre- vs. 
post-treatment comparison) require fine-grained cross-image semantic alignment and context-aware reasoning. To remedy the underrepresentation of image sequences in existing medical visual grounding benchmarks, we propose MedSG-Bench, the first benchmark tailored for Medical Image Sequences Grounding. It comprises eight VQA-style tasks, formulated into two paradigms of the grounding tasks, including 1) Image Difference Grounding, which focuses on detecting change regions across images, and 2) Image Consistency Grounding, which emphasizes detection of consistent or shared semantics across sequential images. MedSG-Bench covers 76 public datasets, 10 medical imaging modalities, and a wide spectrum of anatomical structures and diseases, totaling 9,630 question\u2013answer pairs. We benchmark both general-purpose MLLMs (e.g., Qwen2.5-VL) and medical-domain specialized MLLMs (e.g., HuatuoGPT-vision), observing that even the advanced models exhibit substantial limitations in medical sequential grounding tasks. To advance this field, we construct MedSG-188K, a large-scale instruction-tuning dataset tailored for sequential visual grounding, and further develop MedSeq-Grounder, an MLLM designed to facilitate future research on fine-grained understanding across medical sequential images. We release all resources on https://anonymous.4open.science/r/test-ABC123", "arxiv_id": "2505.11852v1", "arxiv_authors": ["Jingkun Yue", "Siqi Zhang", "Zinan Jia", "Huihuan Xu", "Zongbo Han", "Xiaohong Liu", "Guangyu Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1024650, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a762"}, "filepath": "data/2505.16602v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996253049668646, "type": "Poster", "name": "MEgoHand: Multi-Modal Egocentric Hand-Object Interaction Motion Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118979", "abstract": "Egocentric hand-object motion generation is crucial for immersive AR/VR and robotic imitation but remains challenging due to unstable viewpoints, self-occlusions, perspective distortion, and noisy ego-motion. Existing methods rely on predefined 3D object priors, limiting generalization to novel objects, which restricts their generalizability to novel objects. Meanwhile, recent multimodal approaches suffer from ambiguous generation from abstract textual cues, intricate pipelines for modeling 3D hand-object correlation, and compounding errors in open-loop prediction. We propose **MEgoHand**, a multimodal framework that synthesizes physically plausible hand-object interactions from egocentric RGB, text, and initial hand pose. MEgoHand introduces a bi-level architecture: a high-level \u201ccerebrum\u201d leverages a vision language model (VLM) to infer motion priors from visual-textual context and a monocular depth estimator for object-agnostic spatial reasoning, while a low-level DiT-based flow-matching policy generates fine-grained trajectories with temporal orthogonal filtering to enhance stability. 
To address dataset inconsistency, we design a dataset curation paradigm with an Inverse MANO Retargeting Network and Virtual RGB-D Renderer, curating a unified dataset of **3.35M** RGB-D frames, **24K** interactions, and **1.2K** objects. Extensive experiments across **five** in-domain and **two** cross-domain datasets demonstrate the effectiveness of MEgoHand, achieving substantial reductions in wrist translation error (**86.9%**) and joint rotation error (**34.1%**), highlighting its capacity to accurately model fine-grained hand joint structures and generalize robustly across diverse scenarios.", "arxiv_id": "2505.16602v1", "arxiv_authors": ["Bohan Zhou", "Yi Zhan", "Zhongbin Zhang", "Zongqing Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1038938, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a763"}, "filepath": "data/2509.19672v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991151923912936, "type": "Poster", "name": "Memory-Augmented Potential Field Theory: A Framework for Adaptive Control in Non-Convex Domains", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120263", "abstract": "Stochastic optimal control methods often struggle in complex non-convex landscapes, frequently becoming trapped in local optima due to their inability to learn from historical trajectory data. This paper introduces Memory-Augmented Potential Field Theory, a unified mathematical framework that integrates historical experience into stochastic optimal control. Our approach dynamically constructs memory-based potential fields that identify and encode key topological features of the state space, enabling controllers to automatically learn from past experiences and adapt their optimization strategy. We provide a theoretical analysis showing that memory-augmented potential fields possess non-convex escape properties, asymptotic convergence characteristics, and computational efficiency. We implement this theoretical framework in a Memory-Augmented Model Predictive Path Integral (MPPI) controller that demonstrates significantly improved performance in challenging non-convex environments. 
The framework represents a generalizable approach to experience-based learning within control systems (especially robotic dynamics), enhancing their ability to navigate complex state spaces without requiring specialized domain knowledge or extensive offline training.", "arxiv_id": "2509.19672v1", "arxiv_authors": ["Dongzhe Zheng", "Wenjie Mei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f5"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1035200, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a764"}, "filepath": "data/2506.03144v2.png", "tags": [], "_media_type": "image", "_rand": 0.999963312460813, "type": "Poster", "name": "MERIT: Multilingual Semantic Retrieval with Interleaved Multi-Condition Query", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117050", "abstract": "Semantic retrieval is crucial for modern applications yet remains underexplored in current research. Existing datasets are limited to single languages, single images, or singular retrieval conditions, often failing to fully exploit the expressive capacity of visual information as evidenced by maintained performance when images are replaced with captions. However, practical retrieval scenarios frequently involve interleaved multi-condition queries with multiple images.Hence, this paper introduces MERIT, the first multilingual dataset for interleaved multi-condition semantic retrieval, comprising 320,000 queries with 135,000 products in 5 languages, covering 7 distinct product categories.Extensive experiments on MERIT identify existing models's critical limitation: focusing solely on global semantic information while neglecting specific conditional elements in queries.Consequently, we propose Coral, a novel fine-tuning framework that adapts pre-trained MLLMs by integrating embedding reconstruction to preserve fine-grained conditional elements and contrastive learning to extract comprehensive global semantics.Experiments demonstrate that Coral achieves a 45.9% performance improvement over conventional approaches on MERIT, with strong generalization capabilities validated across 8 established retrieval benchmarks. Collectively, our contributions\u2014a novel dataset, identification of critical limitations in existing approaches, and an innovative fine-tuning framework\u2014establish a foundation for future research in interleaved multi-condition semantic retrieval. 
Anonymous Project Page:[https://anoy1314.github.io](https://anoy1314.github.io).", "arxiv_id": "2506.03144v2", "arxiv_authors": ["Wei Chow", "Yuan Gao", "Linfeng Li", "Xian Wang", "Qi Xu", "Hang Song", "Lingdong Kong", "Ran Zhou", "Yi Zeng", "Yidong Cai", "Botian Jiang", "Shilin Xu", "Jiajun Zhang", "Minghui Qiu", "Xiangtai Li", "Tianshu Yang", "Siliang Tang", "Juncheng Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2315751, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a765"}, "filepath": "data/2509.22281v1.png", "tags": [], "_media_type": "image", "_rand": 0.999286484264895, "type": "Poster", "name": "MesaTask: Towards Task-Driven Tabletop Scene Generation via 3D Spatial Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117791", "abstract": "The ability of robots to interpret human instructions and execute manipulation tasks necessitates the availability of task-relevant tabletop scenes for training. However, traditional methods for creating these scenes rely on time-consuming manual layout design or purely randomized layouts, which are limited in terms of plausibility or alignment with the tasks. In this paper, we formulate a novel task, namely task-oriented tabletop scene generation, which poses significant challenges due to the substantial gap between high-level task instructions and the tabletop scenes. To support research on such a challenging task, we introduce \\textbf{MesaTask-10K}, a large-scale dataset comprising approximately 10,700 synthetic tabletop scenes with \\emph{manually crafted layouts} that ensure realistic layouts and intricate inter-object relations. To bridge the gap between tasks and scenes, we propose a \\textbf{Spatial Reasoning Chain} that decomposes the generation process into object inference, spatial interrelation reasoning, and scene graph construction for the final 3D layout. We present \\textbf{MesaTask}, an LLM-based framework that utilizes this reasoning chain and is further enhanced with DPO algorithms to generate physically plausible tabletop scenes that align well with given task descriptions. 
Exhaustive experiments demonstrate the superior performance of MesaTask compared to baselines in generating task-conforming tabletop scenes with realistic layouts.", "arxiv_id": "2509.22281v1", "arxiv_authors": ["Jinkun Hao", "Naifu Liang", "Zhen Luo", "Xudong Xu", "Weipeng Zhong", "Ran Yi", "Yichen Jin", "Zhaoyang Lyu", "Feng Zheng", "Lizhuang Ma", "Jiangmiao Pang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1323900, "mime_type": "image/png", "width": 4134, "height": 5847, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a766"}, "filepath": "data/2508.14879v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999112332950663, "type": "Poster", "name": "MeshLLM: LLM-Powered Structured Mesh Code Generation from Point Clouds", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117054", "abstract": "Reconstructing 3D objects into editable programs is pivotal for applications like reverse engineering and shape editing. However, existing methods often rely on limited domain-specific languages (DSLs) and small-scale datasets, restricting their ability to model complex geometries and structures. To address these challenges, we introduce MeshLLM, a novel framework that reconstructs complex 3D objects from point clouds into editable Blender Python scripts. We develop a comprehensive set of expressive Blender Python APIs capable of synthesizing intricate geometries. Leveraging these APIs, we construct a large-scale paired object-code dataset, where the code for each object is decomposed into distinct semantic parts. Subsequently, we train a multimodal large language model (LLM) that translates 3D point cloud into executable Blender Python scripts. Our approach not only achieves superior performance in shape-to-code reconstruction tasks but also facilitates intuitive geometric and topological editing through convenient code modifications. Furthermore, our code-based representation enhances the reasoning capabilities of LLMs in 3D shape understanding tasks. 
Together, these contributions establish MeshLLM as a powerful and flexible solution for programmatic 3D shape reconstruction and understanding.", "arxiv_id": "2508.14879v2", "arxiv_authors": ["Bingquan Dai", "Li Ray Luo", "Qihong Tang", "Jie Wang", "Xinyu Lian", "Hao Xu", "Minghan Qin", "Xudong Xu", "Bo Dai", "Haoqian Wang", "Zhaoyang Lyu", "Jiangmiao Pang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f8"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3109730, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a767"}, "filepath": "data/2505.16761v2.png", "tags": [], "_media_type": "image", "_rand": 0.999959952405454, "type": "Poster", "name": "Mesh-RFT: Enhancing Mesh Generation via Fine-grained Reinforcement Fine-Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115575", "abstract": "Existing pretrained models for 3D mesh generation often suffer from data biases and produce low-quality results, while global reinforcement learning (RL) methods rely on object-level rewards that struggle to capture local structure details. To address these challenges, we present $\\textbf{Mesh-RFT}$, a novel fine-grained reinforcement fine-tuning framework that employs Masked Direct Preference Optimization (M-DPO) to enable localized refinement via quality-aware face masking. To facilitate efficient quality evaluation, we introduce an objective topology-aware scoring system to evaluate geometric integrity and topological regularity at both object and face levels through two metrics: Boundary Edge Ratio (BER) and Topology Score (TS). By integrating these metrics into a fine-grained RL strategy, Mesh-RFT becomes the first method to optimize mesh quality at the granularity of individual faces, resolving localized errors while preserving global coherence. Experiment results show that our M-DPO approach reduces Hausdorff Distance (HD) by 24.6\\% and improves Topology Score (TS) by 3.8\\% over pre-trained models, while outperforming global DPO methods with a 17.4\\% HD reduction and 4.9\\% TS gain. 
These results demonstrate Mesh-RFT\u2019s ability to improve geometric integrity and topological regularity, achieving new state-of-the-art performance in production-ready mesh generation.", "arxiv_id": "2505.16761v2", "arxiv_authors": ["Jian Liu", "Jing Xu", "Song Guo", "Jing Li", "Jingfeng Guo", "Jiaao Yu", "Haohan Weng", "Biwen Lei", "Xianghui Yang", "Zhuo Chen", "Fangqi Zhu", "Tao Han", "Chunchao Guo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2f9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4914985, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a768"}, "filepath": "data/2507.22062v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991721718165141, "type": "Poster", "name": "MetaCLIP 2: A Worldwide Scaling Recipe", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117255", "abstract": "Contrastive Language\u2013Image Pretraining (CLIP) is a popular foundation model, supporting from zero-shot classification, retrieval to encoders for multimodal large language models (MLLMs). Although CLIP is successfully trained on billion-scale image-text pairs from the English world, scaling CLIP's training further to learning from the worldwide web data is still challenging: (1) no curation method is available to handle data points from non-English world; (2) the English performance from existing multilingual CLIP is worse than its English-only counterpart, i.e., ``curse of multilinguality'' that is common in LLMs. Here, we present MetaCLIP 2, the first recipe training CLIP from scratch on worldwide web-scale image-text pairs. 
To generalize our findings, we conduct rigorous ablations with minimal changes that are necessary to address the above challenges and present a recipe enabling mutual benefits from English and non-English world data. In zero-shot ImageNet classification, MetaCLIP 2 ViT-H/14 surpasses its English-only counterpart by 0.8% and mSigLIP by 0.7%, and surprisingly sets new state-of-the-art without system-level confounding factors (e.g., translation, bespoke architecture changes) on multilingual benchmarks, such as CVQA with 57.4%, Babel-ImageNet with 50.2% and XM3600 with 64.3% on image-to-text retrieval. Code is in supplementary material and models will be made publicly available.", "arxiv_id": "2507.22062v3", "arxiv_authors": ["Yung-Sung Chuang", "Yang Li", "Dong Wang", "Ching-Feng Yeh", "Kehan Lyu", "Ramya Raghavendra", "James Glass", "Lifei Huang", "Jason Weston", "Luke Zettlemoyer", "Xinlei Chen", "Zhuang Liu", "Saining Xie", "Wen-tau Yih", "Shang-Wen Li", "Hu Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2fa"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1367075, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a769"}, "filepath": "data/2510.04057v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996816685543704, "type": "Poster", "name": "MetaFind: Scene-Aware 3D Asset Retrieval for Coherent Metaverse Scene Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115513", "abstract": "We present MetaFind, a scene-aware multi-modal retrieval framework designed to enhance scene generation in the metaverse by retrieving 3D assets from large-scale repositories. MetaFind addresses two core challenges: (i) inconsistent asset retrieval that overlooks spatial, semantic, and stylistic constraints, and (ii) the absence of a standardized retrieval paradigm specifically tailored for 3D asset retrieval, as existing approaches predominantly rely on general-purpose 3D shape representation models. Our key innovation is a retrieval mechanism that enhances both spatial reasoning and style consistency by jointly modeling object-level features (including appearance) and scene-level layout structures. Methodologically, MetaFind introduces a plug-and-play layout encoder that captures both spatial relationships and object appearance features, ensuring retrieved 3D assets are contextually and stylistically coherent with the existing scene. The framework supports iterative scene construction by continuously adapting retrieval results to current scene updates. 
Empirical evaluations demonstrate the improved spatial and stylistic consistency of MetaFind in various retrieval tasks compared to baseline methods.", "arxiv_id": "2510.04057v1", "arxiv_authors": ["Zhenyu Pan", "Yucheng Lu", "Han Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2fb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.508Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1097954, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a76a"}, "filepath": "data/2405.20791v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992555198733565, "type": "Poster", "name": "MetaGS: A Meta-Learned Gaussian-Phong Model for Out-of-Distribution 3D Scene Relighting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117757", "abstract": "Out-of-distribution (OOD) 3D relighting requires novel view synthesis under unseen lighting conditions that differ significantly from the observed images. Existing relighting methods, which assume consistent light source distributions between training and testing, often degrade in OOD scenarios. We introduce **MetaGS** to tackle this challenge from two perspectives. First, we propose a meta-learning approach to train 3D Gaussian splatting, which explicitly promotes learning generalizable Gaussian geometries and appearance attributes across diverse lighting conditions, even with biased training data. Second, we embed fundamental physical priors from the *Blinn-Phong* reflection model into Gaussian splatting, which enhances the decoupling of shading components and leads to more accurate 3D scene reconstruction. Results on both synthetic and real-world datasets demonstrate the effectiveness of MetaGS in challenging OOD relighting tasks, supporting efficient point-light relighting and generalizing well to unseen environment lighting maps.", "arxiv_id": "2405.20791v2", "arxiv_authors": ["Yumeng He", "Yunbo Wang", "Xiaokang Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2fc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1093608, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a76b"}, "filepath": "data/2505.20772v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998432270977098, "type": "Poster", "name": "MetaSlot: Break Through the Fixed Number of Slots in Object-Centric Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119292", "abstract": "Learning object-level, structured representations is widely regarded as a key to better generalization in vision and underpins the design of next-generation Pre-trained Vision Models (PVMs). Mainstream Object-Centric Learning (OCL) methods adopt Slot Attention or its variants to iteratively aggregate objects' super-pixels into a fixed set of query feature vectors, termed slots. However, their reliance on a static slot count leads to an object being represented as multiple parts when the number of objects varies. We introduce MetaSlot, a plug-and-play Slot Attention variant that adapts to variable object counts. 
MetaSlot (i) maintains a codebook that holds prototypes of objects in a dataset by vector-quantizing the resulting slot representations; (ii) removes duplicate slots from the traditionally aggregated slots by quantizing them with the codebook; and (iii) injects progressively weaker noise into the Slot Attention iterations to accelerate and stabilize the aggregation. MetaSlot is a general Slot Attention variant that can be seamlessly integrated into existing OCL architectures. Across multiple public datasets and tasks--including object discovery and recognition--models equipped with MetaSlot achieve significant performance gains and markedly interpretable slot representations, compared with existing Slot Attention variants. The code is available at https://anonymous.4open.science/r/MetaSlot.", "arxiv_id": "2505.20772v2", "arxiv_authors": ["Hongjia Liu", "Rongzhen Zhao", "Haohan Chen", "Joni Pajarinen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2fd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1040757, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a76c"}, "filepath": "data/2506.12945v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995711994904044, "type": "Poster", "name": "Metropolis-Hastings Sampling for 3D Gaussian Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119401", "abstract": "We propose an adaptive sampling framework for 3D Gaussian Splatting (3DGS) that leverages comprehensive multi-view photometric error signals within a unified Metropolis-Hastings approach. Traditional 3DGS methods heavily rely on heuristic-based density-control mechanisms (e.g., cloning, splitting, and pruning), which can lead to redundant computations or the premature removal of beneficial Gaussians. Our framework overcomes these limitations by reformulating densification and pruning as a probabilistic sampling process, dynamically inserting and relocating Gaussians based on aggregated multi-view errors and opacity scores. Guided by Bayesian acceptance tests derived from these error-based importance scores, our method substantially reduces reliance on heuristics, offers greater flexibility, and adaptively infers Gaussian distributions without requiring predefined scene complexity. 
Experiments on benchmark datasets, including Mip-NeRF360, Tanks and Temples, and Deep Blending, show that our approach reduces the number of Gaussians needed, enhancing computational efficiency while matching or modestly surpassing the view-synthesis quality of state-of-the-art models.", "arxiv_id": "2506.12945v2", "arxiv_authors": ["Hyunjin Kim", "Haebeom Jung", "Jaesik Park"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2fe"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083171, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a76d"}, "filepath": "data/2510.23429v1.png", "tags": [], "_media_type": "image", "_rand": 0.999990387916774, "type": "Poster", "name": "MiCADangelo: Fine-Grained Reconstruction of Constrained CAD Models from 3D Scans", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118942", "abstract": "Computer-Aided Design (CAD) plays a foundational role in modern manufacturing and product development, often requiring designers to modify or build upon existing models. Converting 3D scans into parametric CAD representations\u2014a process known as CAD reverse engineering\u2014remains a significant challenge due to the high precision and structural complexity of CAD models. Existing deep learning-based approaches typically fall into two categories: bottom-up, geometry-driven methods, which often fail to produce fully parametric outputs, and top-down strategies, which tend to overlook fine-grained geometric details. Moreover, current methods neglect an essential aspect of CAD modeling: sketch-level constraints. In this work, we introduce a novel approach to CAD reverse engineering inspired by how human designers manually perform the task. Our method leverages multi-plane cross-sections to extract 2D patterns and capture fine parametric details more effectively. Our method enables the reconstruction of detailed and editable CAD models, outperforming state-of-the-art methods and, for the first time, incorporating sketch constraints directly into the reconstruction process.", "arxiv_id": "2510.23429v1", "arxiv_authors": ["Ahmet Serdar Karadeniz", "Dimitrios Mallis", "Danila Rukhovich", "Kseniya Cherenkova", "Anis Kacem", "Djamila Aouada"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a2ff"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1033881, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a76e"}, "filepath": "data/2506.22434v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992376734338825, "type": "Poster", "name": "MiCo: Multi-image Contrast for Reinforcement Visual Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117848", "abstract": "This work explores enabling Chain-of-Thought (CoT) reasoning to link visual cues across multiple images. A straightforward solution is to adapt rule-based reinforcement learning for Vision-Language Models (VLMs). 
However, such methods typically rely on manually curated question-answer pairs, which can be particularly challenging when dealing with fine-grained visual details and complex logic across images. Inspired by self-supervised visual representation learning, we observe that images contain inherent constraints that can serve as supervision. Based on this insight, we construct image triplets comprising two augmented views of the same image and a third, similar but distinct image. During training, the model is prompted to generate a reasoning process to compare these images (i.e., determine same or different). Then we optimize the model with rule-based reinforcement learning. Due to the high visual similarity and the presence of augmentations, the model must attend to subtle visual cues and perform logical reasoning to succeed. Experimental results demonstrate that, although trained solely on visual comparison tasks, the learned reasoning ability generalizes effectively to a wide range of questions. Without relying on any human-annotated question-answer pairs, our method achieves significant improvements on multi-image reasoning benchmarks and shows strong performance on general vision tasks.", "arxiv_id": "2506.22434v1", "arxiv_authors": ["Xi Chen", "Mingkang Zhu", "Shaoteng Liu", "Xiaoyang Wu", "Xiaogang Xu", "Yu Liu", "Xiang Bai", "Hengshuang Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a300"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1048110, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a76f"}, "filepath": "data/2503.09499v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992539339761781, "type": "Poster", "name": "MindGYM: What Matters in Question Synthesis for Thinking-Centric Fine-Tuning?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121725", "abstract": "Large foundation models face challenges in acquiring transferable, structured thinking abilities, especially when supervised with rigid templates or crowd-annotated instruction datasets. Unlike prior approaches, we focus on a thinking-centric data synthesis paradigm that enables models to evolve through self-generated, cognitively guided data. We propose MindGYM, a structured and scalable framework for question synthesis, composed of: (1) Cognitive Thinking Process Injection, which infuses high-level reasoning objectives to shape the model\u2019s synthesis behavior; (2) Seed Single-Hop Question Synthesis, generating atomic questions from diverse semantic types to encourage broader thinking; and (3) Challenging Multi-Hop QA Synthesis, composing more complex multi-hop questions based on QA seeds for deeper reasoning. Detailed analysis shows that synthetic data generated by our method achieves 16.7% higher average quality and 67.91% lower quality variance compared to baseline sources, highlighting that both high-quality and self-contained data are essential for effective, thinking-oriented fine-tuning. MindGYM improves performance on six reasoning benchmarks, achieving gains of up to 16% on MathVision using only 400 data samples, and generalizable improvements across different model sizes and architectures. 
MindGYM underscores the viability of self-challenging mechanisms in refining large model capabilities while minimizing human intervention and resource demands. Code and data are released to promote data-centric research into self-evolving foundation models driven by their internal reasoning capabilities.", "arxiv_id": "2503.09499v2", "arxiv_authors": ["Zhe Xu", "Daoyuan Chen", "Zhenqing Ling", "Yaliang Li", "Ying Shen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a301"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1037518, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a770"}, "filepath": "data/2506.02938v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998172113549154, "type": "Poster", "name": "MIND: Material Interface Generation from UDFs for Non-Manifold Surface Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119921", "abstract": "Unsigned distance fields (UDFs) are widely used in 3D deep learning due to their ability to represent shapes with arbitrary topology. While prior work has largely focused on learning UDFs from point clouds or multi-view images, extracting meshes from UDFs remains challenging, as the learned fields rarely attain exact zero distances. A common workaround is to reconstruct signed distance fields (SDFs) locally from UDFs to enable surface extraction via Marching Cubes. However, this often introduces topological artifacts such as holes or spurious components. Moreover, local SDFs are inherently incapable of representing non-manifold geometry, leading to complete failure in such cases. To address this gap, we propose MIND ($\\underline{M}aterial$ $\\underline{I}nterface$ $from$ $\\underline{N}on$-$manifold$ $\\underline{D}istance$ $fields$), a novel algorithm for generating material interfaces directly from UDFs, enabling non-manifold mesh extraction from a global perspective. The core of our method lies in deriving a meaningful spatial partitioning from the UDF, where the target surface emerges as the interface between distinct regions. We begin by computing a two-signed local field to distinguish the two sides of manifold patches, and then extend this to a multi-labeled global field capable of separating all sides of a non-manifold structure. By combining this multi-labeled field with the input UDF, we construct material interfaces that support non-manifold mesh extraction via a multi-labeled Marching Cubes algorithm. 
Extensive experiments on UDFs generated from diverse data sources, including point cloud reconstruction, multi-view reconstruction, and medial axis transforms, demonstrate that our approach robustly handles complex non-manifold surfaces and significantly outperforms existing methods.", "arxiv_id": "2506.02938v1", "arxiv_authors": ["Xuhui Chen", "Fei Hou", "Wencheng Wang", "Hong Qin", "Ying He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a302"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1029524, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a771"}, "filepath": "data/2509.15791v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994271208632781, "type": "Poster", "name": "Minimal Semantic Sufficiency Meets Unsupervised Domain Generalization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115318", "abstract": "The generalization ability of deep learning has been extensively studied in supervised settings, yet it remains less explored in unsupervised scenarios. Recently, the Unsupervised Domain Generalization (UDG) task has been proposed to enhance the generalization of models trained with prevalent unsupervised learning techniques, such as Self-Supervised Learning (SSL). UDG confronts the challenge of distinguishing semantics from variations without category labels. Although some recent methods have employed domain labels to tackle this issue, such domain labels are often unavailable in real-world contexts. In this paper, we address these limitations by formalizing UDG as the task of learning a Minimal Sufficient Semantic Representation: a representation that (i) preserves all semantic information shared across augmented views (sufficiency), and (ii) maximally removes information irrelevant to semantics (minimality). We theoretically ground these objectives from the perspective of information theory, demonstrating that optimizing representations to achieve sufficiency and minimality directly reduces out-of-distribution risk. Practically, we implement this optimization through Minimal-Sufficient UDG (MS-UDG), a learnable model by integrating (a) an InfoNCE-based objective to achieve sufficiency; (b) two complementary components to promote minimality: a novel semantic-variation disentanglement loss and a reconstruction-based mechanism for capturing adequate variation. Empirically, MS-UDG sets a new state-of-the-art on popular unsupervised domain-generalization benchmarks, consistently outperforming existing SSL and UDG methods, without category or domain labels during representation learning.", "arxiv_id": "2509.15791v2", "arxiv_authors": ["Tan Pan", "Kaiyu Guo", "Dongli Xu", "Zhaorui Tan", "Chen Jiang", "Deshu Chen", "Xin Guo", "Brian C. 
Lovell", "Limei Han", "Yuan Cheng", "Mahsa Baktashmotlagh"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a303"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1052060, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a772"}, "filepath": "data/2505.24873v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999001601432191, "type": "Poster", "name": "MiniMax-Remover: Taming Bad Noise Helps Video Object Removal", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119273", "abstract": "Recent advances in video diffusion models have driven rapid progress in video editing techniques. However, video object removal, a critical subtask of video editing, remains challenging due to issues such as hallucinated objects and visual artifacts. Furthermore, existing methods often rely on computationally expensive sampling procedures and classifier-free guidance (CFG), resulting in slow inference. To address these limitations, we propose ***MiniMax-Remover***, a novel two-stage video object removal approach. Motivated by the observation that text condition is not best suited for this task, we simplify the pretrained video generation model by removing textual input and cross-attention layers, resulting in a more lightweight and efficient model architecture in the first stage. In the second stage, we distilled our remover on successful videos produced by the stage-1 model and curated by human annotators, using a minimax optimization strategy to further improve editing quality and inference speed. Specifically, the inner maximization identifies adversarial input noise (\"bad noise\") that makes failure removals, while the outer minimization step trains the model to generate high-quality removal results even under such challenging conditions. As a result, our method achieves a state-of-the-art video object removal results with as few as 6 sampling steps and doesn't rely on CFG, significantly improving inference efficiency. Extensive experiments demonstrate the effectiveness and superiority of MiniMax-Remover compared to existing methods.", "arxiv_id": "2505.24873v1", "arxiv_authors": ["Bojia Zi", "Weixuan Peng", "Xianbiao Qi", "Jianan Wang", "Shihao Zhao", "Rong Xiao", "Kam-Fai Wong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a304"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4154409, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a773"}, "filepath": "data/2510.22127v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997097118460236, "type": "Poster", "name": "Mint: A Simple Test-Time Adaptation of Vision-Language Models against Common Corruptions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115156", "abstract": "Pretrained vision-language models such as CLIP achieve strong zero-shot generalization but remain vulnerable to distribution shifts caused by input corruptions. 
In this work, we investigate how corruptions affect CLIP\u2019s image embeddings and uncover a consistent phenomenon we term embedding variance collapse, where both intra-class and inter-class variances shrink as corruption severity increases. We find that this collapse is closely tied to performance degradation, with inter-class variance strongly correlated with classification accuracy. To explain this phenomenon, we analyze how corruptions alter the structure of the embedding space. Our theoretical results suggest that the visual encoder tends to encode corruption-related signals, which dilute class-discriminative features and compress the representation geometry. We further show that maximizing inter-class variance, even when estimated from pseudo-labels, can provably enhance embedding quality. Based on this insight, we propose Mint, a simple test-time adaptation method that maximizes pseudo-label-based inter-class variance on the fly using cumulative prototypes and gradient estimates. Mint operates effectively with small batch sizes and consistently improves performance across multiple corruption benchmarks and CLIP architectures.", "arxiv_id": "2510.22127v1", "arxiv_authors": ["Wenxuan Bao", "Ruxi Deng", "Jingrui He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a305"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1040505, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a774"}, "filepath": "data/2506.05331v1.png", "tags": [], "_media_type": "image", "_rand": 0.999673367661865, "type": "Poster", "name": "MINT-CoT: Enabling Interleaved Visual Tokens in Mathematical Chain-of-Thought Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115416", "abstract": "Chain-of-Thought (CoT) has widely enhanced mathematical reasoning in Large Language Models (LLMs), but extending it to multimodal domains remains challenging. Existing works either adopt similar textual reasoning for image inputs, or seek to interleave visual signals into mathematical CoT. However, they face three key limitations for math problem-solving: *reliance on coarse-grained box-shaped image regions, limited perception of vision encoders on math content, and dependence on external capabilities for visual modification*. In this paper, we propose **MINT-CoT**, introducing **M**athematical **IN**terleaved **T**okens for **C**hain-**o**f-**T**hought visual reasoning. MINT-CoT adaptively interleaves relevant visual tokens into textual reasoning steps via an Interleave Token, which dynamically selects visual regions of any shape within math figures. To empower this capability, we construct the MINT-CoT dataset, containing 54K mathematical problems aligning each reasoning step with visual regions at the token level, accompanied by a rigorous data generation pipeline. We further present a three-stage MINT-CoT training strategy, progressively combining text-only CoT SFT, interleaved CoT SFT, and interleaved CoT RL, which yields our MINT-CoT-7B model.
Extensive experiments demonstrate the effectiveness of our method for visual interleaved reasoning in mathematical domains, where MINT-CoT-7B outperforms the baseline model by +34.08% on MathVista and +28.78% on GeoQA, respectively.", "arxiv_id": "2506.05331v1", "arxiv_authors": ["Xinyan Chen", "Renrui Zhang", "Dongzhi Jiang", "Aojun Zhou", "Shilin Yan", "Weifeng Lin", "Hongsheng Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a306"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1043487, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a775"}, "filepath": "data/2505.24238v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990145433711047, "type": "Poster", "name": "MIRAGE: Assessing Hallucination in Multimodal Reasoning Chains of MLLM", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117293", "abstract": "Multimodal hallucination in multimodal large language models (MLLMs) restricts the correctness of MLLMs. However, multimodal hallucinations are multi-sourced and arise from diverse causes. Existing benchmarks fail to adequately distinguish between perception-induced hallucinations and reasoning-induced hallucinations. This failure constitutes a significant issue and hinders the diagnosis of multimodal reasoning failures within MLLMs. To address this, we propose the MIRAGE benchmark, which isolates reasoning hallucinations by constructing questions where input images are correctly perceived by MLLMs yet reasoning errors persist. MIRAGE introduces multi-granular evaluation metrics: accuracy, factuality, and LLMs hallucination score for hallucination quantification. Our analysis reveals strong correlations between question types and specific hallucination patterns, particularly systematic failures of MLLMs in spatial reasoning involving complex relationships (\\emph{e.g.}, complex geometric patterns across images). This highlights a critical limitation in the reasoning capabilities of current MLLMs and provides targeted insights for hallucination mitigation on specific types. To address these challenges, we propose Logos, a method that combines curriculum reinforcement fine-tuning to encourage models to generate logic-consistent reasoning chains by stepwise reducing learning difficulty, and collaborative hint inference to reduce reasoning complexity. Logos establishes a baseline on MIRAGE and reduces the logical hallucinations in original base models.
MIRAGE will be publicly available.", "arxiv_id": "2505.24238v2", "arxiv_authors": ["Bowen Dong", "Minheng Ni", "Zitong Huang", "Guanglei Yang", "Wangmeng Zuo", "Lei Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a307"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.509Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1063272, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a776"}, "filepath": "data/2505.12826v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991458467557904, "type": "Poster", "name": "Mitigating Hallucination in VideoLLMs via Temporal-Aware Activation Engineering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119674", "abstract": "Multimodal large language models (MLLMs) have achieved remarkable progress in video understanding. However, hallucination, where the model generates plausible yet incorrect outputs, persists as a significant and under-addressed challenge in the video domain. Among existing solutions, activation engineering has proven successful in mitigating hallucinations in LLMs and ImageLLMs, yet its applicability to VideoLLMs remains largely unexplored. In this work, we are the first to systematically investigate the effectiveness and underlying mechanisms of activation engineering for mitigating hallucinations in VideoLLMs. We initially conduct an investigation of the key factors affecting the performance of activation engineering and find that a model\u2019s sensitivity to hallucination depends on $\\textbf{temporal variation}$ rather than task type. Moreover, selecting appropriate internal modules and dataset for activation engineering is critical for reducing hallucination. Guided by these findings, we propose a temporal-aware activation engineering framework for VideoLLMs, which adaptively identifies and manipulates hallucination-sensitive modules based on the temporal variation characteristic, substantially mitigating hallucinations without additional LLM fine-tuning. Experiments across multiple models and benchmarks demonstrate that our method markedly reduces hallucination in VideoLLMs, thereby validating the robustness of our findings.", "arxiv_id": "2505.12826v1", "arxiv_authors": ["Jianfeng Cai", "Wengang Zhou", "Zongmeng Zhang", "Jiale Hong", "Nianji Zhan", "Houqiang Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a308"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.568Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 991336, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a777"}, "filepath": "data/2509.16738v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996725567773492, "type": "Poster", "name": "Mixture of Noise for Pre-Trained Model-Based Class-Incremental Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115337", "abstract": "Class Incremental Learning (CIL) aims to continuously learn new categories while retaining the knowledge of old ones. Pre-trained models (PTMs) show promising capabilities in CIL. 
However, existing approaches that apply lightweight fine-tuning to backbones still induce parameter drift, thereby compromising the generalization capability of pre-trained models. Parameter drift can be conceptualized as a form of noise that obscures critical patterns learned for previous tasks. However, recent research has shown that noise is not always harmful. For example, the large number of visual patterns learned from pre-training can be easily abused by a single task, and introducing appropriate noise can suppress some low-correlation features, thus leaving a margin for future tasks. To this end, we propose learning beneficial noise for CIL guided by information theory and introduce Mixture of Noise (MiN), aiming to mitigate the degradation of backbone generalization from adapting to new tasks. Specifically, task-specific noise is learned from high-dimensional features of new tasks. Then, a set of weights is adjusted dynamically for an optimal mixture of different task noises. Finally, MiN embeds the beneficial noise into the intermediate features to mask the response of inefficient patterns. Extensive experiments on six benchmark datasets demonstrate that MiN achieves state-of-the-art performance in most incremental settings, with particularly outstanding results in 50-step incremental settings. This shows the significant potential for beneficial noise in continual learning. Anonymous code link is available at https://anonymous.4open.science/r/MiN-16097.", "arxiv_id": "2509.16738v3", "arxiv_authors": ["Kai Jiang", "Zhengyan Shi", "Dell Zhang", "Hongyuan Zhang", "Xuelong Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a309"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.568Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060470, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a778"}, "filepath": "data/2407.04842v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995570889934444, "type": "Poster", "name": "MJ-Bench: Is Your Multimodal Reward Model Really a Good Judge for Text-to-Image Generation?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121393", "abstract": "While text-to-image models like GPT-4o-Image and FLUX are rapidly proliferating, they often encounter challenges such as hallucination, bias, and the production of unsafe, low-quality output. To effectively address these issues, it is crucial to align these models with desired behaviors based on feedback from a multimodal judge. Despite their significance, current multimodal judges frequently undergo inadequate evaluation of their capabilities and limitations, potentially leading to misalignment and unsafe fine-tuning outcomes. To address this issue, we introduce MJ-Bench, a novel benchmark which incorporates a comprehensive preference dataset to evaluate multimodal judges in providing feedback for image generation models across six key perspectives: alignment, safety, image quality, bias, composition, and visualization. Specifically, we evaluate a large variety of multimodal judges including smaller-sized CLIP-based scoring models, open-source VLMs, and closed-source VLMs on each decomposed subcategory of our preference dataset. Experiments reveal that closed-source VLMs generally provide better feedback, with GPT-4o outperforming other judges on average.
Compared with open-source VLMs, smaller-sized scoring models can provide better feedback regarding text-image alignment and image quality, while VLMs provide more accurate feedback regarding safety and generation bias due to their stronger reasoning capabilities. Further studies on feedback scale reveal that VLM judges can generally provide more accurate and stable feedback in natural language than on numerical scales. Notably, human evaluations on end-to-end and fine-tuned models using separate feedback from these multimodal judges provide similar conclusions, further confirming the effectiveness of MJ-Bench.", "arxiv_id": "2407.04842v1", "arxiv_authors": ["Zhaorun Chen", "Yichao Du", "Zichen Wen", "Yiyang Zhou", "Chenhang Cui", "Zhenzhen Weng", "Haoqin Tu", "Chaoqi Wang", "Zhengwei Tong", "Qinglan Huang", "Canyu Chen", "Qinghao Ye", "Zhihong Zhu", "Yuqing Zhang", "Jiawei Zhou", "Zhuokai Zhao", "Rafael Rafailov", "Chelsea Finn", "Huaxiu Yao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a30a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.568Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1078503, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a779"}, "filepath": "data/2504.13726v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997902303735541, "type": "Poster", "name": "MLEP: Multi-granularity Local Entropy Patterns for Generalized AI-generated Image Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119336", "abstract": "Advancements in image generation technologies have raised significant concerns about their potential misuse, such as producing misinformation and deepfakes. Therefore, there is an urgent need for effective methods to detect AI-generated images (AIGI). Despite progress in AIGI detection, achieving reliable performance across diverse generation models and scenes remains challenging due to the lack of source-invariant features and limited generalization capabilities in existing methods. In this work, we explore the potential of using image entropy as a cue for AIGI detection and propose Multi-granularity Local Entropy Patterns (MLEP), a set of entropy feature maps computed across shuffled small patches over multiple image scales. MLEP comprehensively captures pixel relationships across dimensions and scales while significantly disrupting image semantics, reducing potential content bias. Leveraging MLEP, a robust CNN-based classifier for AIGI detection can be trained.
Extensive experiments conducted in an open-world scenario, evaluating images synthesized by 32 distinct generative models, demonstrate significant improvements over state-of-the-art methods in both accuracy and generalization.", "arxiv_id": "2504.13726v2", "arxiv_authors": ["Lin Yuan", "Xiaowan Li", "Yan Zhang", "Jiawei Zhang", "Hongbo Li", "Xinbo Gao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a30b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.568Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1046999, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a77a"}, "filepath": "data/2503.18135v1.png", "tags": [], "_media_type": "image", "_rand": 0.999286852132904, "type": "Poster", "name": "MLLM-For3D: Adapting Multimodal Large Language Model for 3D Reasoning Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118776", "abstract": "Reasoning segmentation aims to segment target objects in complex scenes based on human intent and spatial reasoning. While recent multimodal large language models (MLLMs) have demonstrated impressive 2D image reasoning segmentation, adapting these capabilities to 3D scenes remains underexplored. In this paper, we introduce MLLM-For3D, a simple yet effective framework that transfers knowledge from 2D MLLMs to 3D scene understanding. Specifically, we utilize MLLMs to generate multi-view pseudo-segmentation masks and corresponding text embeddings, then unproject 2D masks into 3D space and align them with the text embeddings. The primary challenge lies in the absence of 3D context and spatial consistency across multiple views, causing the model to hallucinate objects that do not exist and fail to target objects consistently. Training the 3D model with such irrelevant objects leads to performance degradation. To address this, we first filter irrelevant views using token attention. With these reliable pseudo-labels, we develop a token-for-Query approach for multimodal semantic alignment, enabling consistent identification of the same object across different views. Moreover, we introduce a spatial consistency strategy to enforce that segmentation masks remain coherent in the 3D space, effectively capturing the geometry of the scene. 
Extensive evaluations of various challenging indoor scene benchmarks demonstrate that, even without labeled 3D training data, MLLM-For3D outperforms existing 3D reasoning segmentation methods, effectively interpreting user intent, understanding 3D scenes, and reasoning about spatial relationships.", "arxiv_id": "2503.18135v1", "arxiv_authors": ["Jiaxin Huang", "Runnan Chen", "Ziwen Li", "Zhengqing Gao", "Xiao He", "Yandong Guo", "Mingming Gong", "Tongliang Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a30c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.568Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2511233, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a77b"}, "filepath": "data/2506.01946v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998811454750816, "type": "Poster", "name": "MLLMs Need 3D-Aware Representation Supervision for Scene Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118644", "abstract": "Recent advances in scene understanding have leveraged multimodal large language models (MLLMs) for 3D reasoning by capitalizing on their strong 2D pretraining. However, the lack of explicit 3D data during MLLM pretraining limits 3D representation capability. In this paper, we investigate the 3D-awareness of MLLMs by evaluating multi-view correspondence and reveal a strong positive correlation between the quality of 3D-aware representation and downstream task performance. Motivated by this, we propose 3DRS, a framework that enhances MLLM 3D representation learning by introducing supervision from pretrained 3D foundation models. Our approach aligns MLLM visual features with rich 3D knowledge distilled from 3D models, effectively improving scene understanding. Extensive experiments across multiple benchmarks and MLLMs\u2014including visual grounding, captioning, and question answering\u2014demonstrate consistent performance gains. Code will be released to facilitate future research.", "arxiv_id": "2506.01946v1", "arxiv_authors": ["Xiaohu Huang", "Jingjing Wu", "Qunyi Xie", "Kai Han"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a30d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.568Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1334434, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a77c"}, "filepath": "data/2306.13394v5.png", "tags": [], "_media_type": "image", "_rand": 0.9997941156559839, "type": "Poster", "name": "MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121773", "abstract": "Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. 
It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization. Our benchmark has made substantial contributions to the development of MLLMs: (1) It has been applied for use by 300 different institutions, such as OpenAI, Google, Meta, CMU, MIT, and Stanford, and has become the standard test set for many MLLMs, such as LLaVA, Qwen-VL, and Intern-VL series; (2) This paper has been cited over a thousand times.", "arxiv_id": "2306.13394v5", "arxiv_authors": ["Chaoyou Fu", "Peixian Chen", "Yunhang Shen", "Yulei Qin", "Mengdan Zhang", "Xu Lin", "Jinrui Yang", "Xiawu Zheng", "Ke Li", "Xing Sun", "Yunsheng Wu", "Rongrong Ji", "Caifeng Shan", "Ran He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a30e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1087561, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a77d"}, "filepath": "data/2505.21333v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990246895271189, "type": "Poster", "name": "MME-VideoOCR: Evaluating OCR-Based Capabilities of Multimodal LLMs in Video Scenarios", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121517", "abstract": "Multimodal Large Language Models (MLLMs) have achieved considerable accuracy in Optical Character Recognition (OCR) from static images. However, their efficacy in video OCR is significantly diminished due to factors such as motion blur, temporal variations, and visual effects inherent in video content. To provide clearer guidance for training practical MLLMs, we introduce MME-VideoOCR, which encompasses a comprehensive range of video OCR application scenarios. MME-VideoOCR features 10 task categories comprising 25 individual tasks and spans 44 diverse scenarios, including not only text recognition tasks but also those requiring deeper comprehension and reasoning regarding the textual content within videos. The benchmark consists of 1,464 videos with varying resolutions, aspect ratios, and durations, along with 2,000 meticulously curated, manually annotated question-answer pairs. We evaluate 15 state-of-the-art MLLMs on MME-VideoOCR, revealing that even the best-performing model (Gemini-2.5 Pro) scores below 1,500 out of 2,000. 
Fine-grained analysis indicates that while existing MLLMs demonstrate strong performance on tasks where relevant texts are contained within a single frame or a few frames, they exhibit limited capability in generating effective responses for tasks demanding holistic video comprehension.", "arxiv_id": "2505.21333v2", "arxiv_authors": ["Yang Shi", "Huanqian Wang", "Wulin Xie", "Huanyao Zhang", "Lijie Zhao", "Yi-Fan Zhang", "Xinfeng Li", "Chaoyou Fu", "Zhuoer Wen", "Wenting Liu", "Zhuoran Zhang", "Xinlong Chen", "Bohan Zeng", "Sihan Yang", "Yushuo Guan", "Zhang Zhang", "Liang Wang", "Haoxuan Li", "Zhouchen Lin", "Yuanxing Zhang", "Pengfei Wan", "Haotian Wang", "Wenjing Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a30f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1043340, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a77e"}, "filepath": "data/2505.19415v2.png", "tags": [], "_media_type": "image", "_rand": 0.999040808130673, "type": "Poster", "name": "MMGen-Bench: Towards Comprehensive and Explainable Evaluation of Multi-Modal Image Generation Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121667", "abstract": "Recent multimodal image generators such as GPT-4o, Gemini 2.0 Flash, and Gemini 2.5 Pro excel at following complex instructions, editing images and maintaining concept consistency. However, they are still evaluated by *disjoint* toolkits: text-to-image (T2I) benchmarks that lack multi-modal conditioning, and customized image generation benchmarks that overlook compositional semantics and common knowledge. We propose **MMGen-Bench**, a *comprehensive* **M**ulti-**M**odal image **Gen**eration **Bench**mark that unifies these tasks by pairing 4,850 richly annotated text prompts with 1,750 multi-view reference images across 380 subjects, spanning humans, animals, objects, and artistic styles. MMGen-Bench is equipped with a three-level evaluation framework: (1) low-level metrics for visual artifacts and identity preservation of objects; (2) a novel Aspect Matching Score (AMS): a VQA-based mid-level metric that delivers fine-grained prompt-image alignment and shows strong correlation with human judgments; and (3) high-level metrics for aesthetics and human preference. Using MMGen-Bench, we benchmark 17 state-of-the-art models, including Gemini 2.5 Pro, FLUX, DreamBooth, and IP-Adapter, and validate our metrics with 32k human ratings, yielding in-depth insights into architecture and data design.
We will release the dataset and evaluation code to foster rigorous, unified evaluation and accelerate future innovations in multi-modal image generation.", "arxiv_id": "2505.19415v2", "arxiv_authors": ["Hang Hua", "Ziyun Zeng", "Yizhi Song", "Yunlong Tang", "Liu He", "Daniel Aliaga", "Wei Xiong", "Jiebo Luo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a310"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3928465, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a77f"}, "filepath": "data/2505.10610v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996714559275462, "type": "Poster", "name": "MMLongBench: Benchmarking Long-Context Vision-Language Models Effectively and Thoroughly", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121768", "abstract": "The rapid extension of context windows in large vision-language models has given rise to long-context vision-language models (LCVLMs), which are capable of handling hundreds of images with interleaved text tokens in a single forward pass. In this work, we introduce MMLongBench, the first benchmark covering a diverse set of long-context vision-language tasks, to evaluate LCVLMs effectively and thoroughly. MMLongBench is composed of 13,331 examples spanning five different categories of downstream tasks, such as Visual RAG and Many-Shot ICL. It also provides broad coverage of image types, including various natural and synthetic images. To assess the robustness of the models to different input lengths, all examples are delivered at five standardized input lengths (8K-128K tokens) via a cross-modal tokenization scheme that combines vision patches and text tokens. Through a thorough benchmarking of 46 closed-source and open-source LCVLMs, we provide a comprehensive analysis of the current models' vision-language long-context ability. Our results show that: i) performance on a single task is a weak proxy for overall long-context capability; ii) both closed-source and open-source models face challenges in long-context vision-language tasks, indicating substantial room for future improvement; iii) models with stronger reasoning ability tend to exhibit better long-context performance. 
By offering wide task coverage, various image types, and rigorous length control, MMLongBench provides the missing foundation for diagnosing and advancing the next generation of LCVLMs.", "arxiv_id": "2505.10610v3", "arxiv_authors": ["Zhaowei Wang", "Wenhao Yu", "Xiyu Ren", "Jipeng Zhang", "Yu Zhao", "Rohit Saxena", "Liang Cheng", "Ginny Wong", "Simon See", "Pasquale Minervini", "Yangqiu Song", "Mark Steedman"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a311"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1165260, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a780"}, "filepath": "data/2506.10963v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995015870854866, "type": "Poster", "name": "MMMG: A Massive, Multidisciplinary, Multi-Tier Generation Benchmark for Text-to-Image Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121491", "abstract": "In this paper, we introduce knowledge image generation as a new task, alongside the Massive Multi-Discipline Multi-Tier Knowledge-Image Generation Benchmark (MMMG) to probe the reasoning capability of image generation models.Knowledge images have been central to human civilization and to the mechanisms of human learning\u2014a fact underscored by dual-coding theory and the picture-superiority effect.Generating such images is challenging, demanding multimodal reasoning that fuses world knowledge with pixel-level grounding into clear explanatory visuals.To enable comprehensive evaluation, MMMG offers $4,456$ expert-validated (knowledge) image-prompt pairs spanning $10$ disciplines, $6$ educational levels, and diverse knowledge formats such as charts, diagrams, and mind maps. To eliminate confounding complexity during evaluation, we adopt a unified Knowledge Graph (KG) representation. Each KG explicitly delineates a target image\u2019s core entities and their dependencies.We further introduce MMMG-Score to evaluate generated knowledge images. 
This metric combines factual fidelity, measured by graph-edit distance between KGs, with visual clarity assessment. Comprehensive evaluations of $18$ state-of-the-art text-to-image generation models expose serious reasoning deficits\u2014low entity fidelity, weak relations, and clutter\u2014with GPT-4o achieving an MMMG-Score of only $46.66$, underscoring the benchmark\u2019s difficulty. To spur further progress, we release FLUX-Reason (MMMG-Score of $30.52$), an effective and open baseline that combines a reasoning LLM with diffusion models and is trained on $16,000$ curated knowledge image\u2013prompt pairs.", "arxiv_id": "2506.10963v2", "arxiv_authors": ["Yuxuan Luo", "Yuhui Yuan", "Junwen Chen", "Haonan Cai", "Ziyi Yue", "Yuwei Yang", "Fatima Zohra Daha", "Ji Li", "Zhouhui Lian"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a312"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2134861, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a781"}, "filepath": "data/2510.12565v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997422853335384, "type": "Poster", "name": "MMOT: The First Challenging Benchmark for Drone-based Multispectral Multi-Object Tracking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121413", "abstract": "Drone-based multi-object tracking is essential yet highly challenging due to small targets, severe occlusions, and cluttered backgrounds. Existing RGB-based multi-object tracking algorithms heavily depend on spatial appearance cues such as color and texture, which often degrade in aerial views, compromising tracking reliability. Multispectral imagery, capturing pixel-level spectral reflectance, provides crucial spectral cues that significantly enhance object discriminability under degraded spatial conditions. However, the lack of dedicated multispectral UAV datasets has hindered progress in this domain. To bridge this gap, we introduce \\textbf{MMOT}, the first challenging benchmark dataset for drone-based multispectral multi-object tracking. It features three key characteristics: (i) \\textbf{Large Scale} \u2014 125 video sequences with over 488.8K annotations across eight object categories; (ii) \\textbf{Comprehensive Challenges} \u2014 covering diverse real-world challenges such as extremely small targets, high-density scenarios, severe occlusions and complex platform motion; and (iii) \\textbf{Precise Oriented Annotations} \u2014 enabling accurate localization and reduced object ambiguity under aerial perspectives. To better extract spectral features and leverage oriented annotations, we further present a multispectral and orientation-aware MOT scheme adapting existing MOT methods, featuring: (i) a lightweight Spectral 3D-Stem integrating spectral features while preserving compatibility with RGB pretraining; (ii) an orientation-aware Kalman filter for precise state estimation; and (iii) an end-to-end orientation-adaptive transformer architecture. Extensive experiments across representative trackers consistently show that multispectral input markedly improves tracking performance over RGB baselines, particularly for small and densely packed objects.
We believe our work will benefit the community for advancing drone-based multispectral multi-object tracking research.", "arxiv_id": "2510.12565v1", "arxiv_authors": ["Tianhao Li", "Tingfa Xu", "Ying Wang", "Haolin Qin", "Xu Lin", "Jianan Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a313"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1073525, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a782"}, "filepath": "data/2509.22820v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998492489311421, "type": "Poster", "name": "MMPB: It\u2019s Time for Multi-Modal Personalization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121766", "abstract": "Visual personalization is essential in user-facing AI systems such as smart homes and healthcare, where aligning model behavior with user-centric concepts is critical. However, recent large Vision-Language Models (VLMs), despite their broad applicability, remain underexplored in their ability to adapt to individual users. In this paper, we introduce MMPB, the first extensive benchmark for evaluating VLMs on personalization. MMPB comprises 10k image-query pairs and includes 111 personalizable concepts across four categories: humans, animals, objects, and characters, with the human category enriched with preference-grounded queries. We structure personalization into three main task types, each highlighting a different key property of VLMs. Using 23 widely used VLMs including both open- and closed-source models, we evaluate personalization performance via a three-stage protocol: concept injection, multi-turn dialogue, and personalized querying. Our findings indicate that most VLMs (including some closed-source models) struggle with personalization, particularly in maintaining consistency over dialogue, handling user preferences, and adapting to visual cues. Our analysis reveals that the challenges in VLM personalization (such as refusal behaviors and long-context forgetting) highlight substantial room for improvement. By identifying these limitations and offering a scalable benchmark, MMPB offers valuable insights and a solid foundation for future research toward truly personalized multi-modal AI.", "arxiv_id": "2509.22820v2", "arxiv_authors": ["Jaeik Kim", "Woojin Kim", "Woohyeon Park", "Jaeyoung Do"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a314"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1189033, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a783"}, "filepath": "data/2505.20426v2.png", "tags": [], "_media_type": "image", "_rand": 0.999555388305383, "type": "Poster", "name": "MMPerspective: Do MLLMs Understand Perspective? 
A Comprehensive Benchmark for Perspective Perception, Reasoning, and Robustness", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121606", "abstract": "Understanding perspective is fundamental to human visual perception, yet the extent to which multimodal large language models (MLLMs) internalize perspective geometry remains unclear. We introduce MMPerspective, the first benchmark specifically designed to systematically evaluate MLLMs' understanding of perspective through 10 carefully crafted tasks across three complementary dimensions: Perspective Perception, Reasoning, and Robustness. Our benchmark comprises 2,711 real-world and synthetic image instances with 5,083 question-answer pairs that probe key capabilities, such as vanishing point perception and counting, perspective type reasoning, line relationship understanding in 3D space, invariance to perspective-preserving transformations, etc. Through a comprehensive evaluation of 43 state-of-the-art MLLMs, we uncover significant limitations: while models demonstrate competence on surface-level perceptual tasks, they struggle with compositional reasoning and maintaining spatial consistency under perturbations. Our analysis further reveals intriguing patterns between model architecture, scale, and perspective capabilities, highlighting both robustness bottlenecks and the benefits of chain-of-thought prompting. MMPerspective establishes a valuable testbed for diagnosing and advancing spatial understanding in vision-language systems.", "arxiv_id": "2505.20426v2", "arxiv_authors": ["Yolo Yunlong Tang", "Pinxin Liu", "Mingqian Feng", "Zhangyun Tan", "Rui Mao", "Chao Huang", "Jing Bi", "Yunzhong Xiao", "Susan Liang", "Hang Hua", "Ali Vosoughi", "Luchuan Song", "Zeliang Zhang", "Chenliang Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a315"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2838707, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a784"}, "filepath": "data/2510.11520v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996045263179981, "type": "Poster", "name": "mmWalk: Towards Multi-modal Multi-view Walking Assistance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121819", "abstract": "Walking assistance in extreme or complex environments remains a significant challenge for people with blindness or low vision (BLV), largely due to the lack of a holistic scene understanding. Motivated by the real-world needs of the BLV community, we build mmWalk, a simulated multi-modal dataset that integrates multi-view sensor and accessibility-oriented features for outdoor safe navigation. Our dataset comprises $120$ manually controlled, scenario-categorized walking trajectories with $62k$ synchronized frames. It contains over $559k$ panoramic images across RGB, depth, and semantic modalities. Furthermore, to emphasize real-world relevance, each trajectory involves outdoor corner cases and accessibility-specific landmarks for BLV users. Additionally, we generate mmWalkVQA, a VQA benchmark with over $69k$ visual question-answer triplets across $9$ categories tailored for safe and informed walking assistance. 
We evaluate state-of-the-art Vision-Language Models (VLMs) in zero- and few-shot settings and find that they struggle with our risk assessment and navigational tasks. We validate our mmWalk-finetuned model on real-world datasets and show the effectiveness of our dataset for advancing multi-modal walking assistance.", "arxiv_id": "2510.11520v2", "arxiv_authors": ["Kedi Ying", "Ruiping Liu", "Chongyan Chen", "Mingzhe Tao", "Hao Shi", "Kailun Yang", "Jiaming Zhang", "Rainer Stiefelhagen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a316"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2777439, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a785"}, "filepath": "data/2507.16853v1.png", "tags": [], "_media_type": "image", "_rand": 0.999378800005688, "type": "Poster", "name": "MobileUse: A Hierarchical Reflection-Driven GUI Agent for Autonomous Mobile Operation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118642", "abstract": "Recent advances in Multimodal Large Language Models (MLLMs) have enabled the development of mobile agents that can understand visual inputs and follow user instructions, unlocking new possibilities for automating complex tasks on mobile devices. However, applying these models to real-world mobile scenarios remains a significant challenge due to long-horizon task execution, difficulty in error recovery, and the cold-start problem in unfamiliar environments. To address these challenges, we propose MobileUse, a GUI agent designed for robust and adaptive mobile task execution. To improve resilience in long-horizon tasks and dynamic environments, we introduce a hierarchical reflection architecture that enables the agent to self-monitor, detect, and recover from errors across multiple temporal scales\u2014ranging from individual actions to overall task completion\u2014while maintaining efficiency through a reflection-on-demand strategy. To tackle cold-start issues, we further introduce a proactive exploration module, which enriches the agent\u2019s understanding of the environment through self-planned exploration. Evaluations on the AndroidWorld and AndroidLab benchmarks demonstrate that MobileUse establishes new state-of-the-art performance, achieving success rates of 62.9% and 44.2%, respectively.
To facilitate real-world applications, we release an out-of-the-box toolkit for automated task execution on physical mobile devices.", "arxiv_id": "2507.16853v1", "arxiv_authors": ["Ning Li", "Xiangmou Qu", "Jiamu Zhou", "Jun Wang", "Muning Wen", "Kounianhua Du", "Xingyu Lou", "Qiuying Peng", "Jun Wang", "Weinan Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a317"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1028486, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a786"}, "filepath": "data/2503.23307v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992264065896991, "type": "Poster", "name": "MoCha: Towards Movie-Grade Talking Character Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117961", "abstract": "Recent advancements in video generation have achieved impressive motion realism, yet they often overlook character-driven storytelling, a crucial task for automated film and animation generation. We introduce \\textbf{Talking Characters}, a more realistic task to generate talking character animations directly from speech and text. Unlike talking head generation, Talking Characters aims at generating the full portrait of one or more characters beyond the facial region. In this paper, we propose MoCha, the first of its kind to generate talking characters. To ensure precise synchronization between video and speech, we propose a \\textbf{localized audio attention} mechanism that effectively aligns speech and video tokens. To address the scarcity of large-scale speech-labelled video datasets, we introduce a joint training strategy that leverages both speech-labelled and text-labelled video data, significantly improving generalization across diverse character actions.
We also design structured prompt templates with character tags, enabling, for the first time, \\textbf{multi-character conversation with turn-based dialogue}\u2014allowing AI-generated characters to engage in context-aware conversations with cinematic coherence.Extensive qualitative and quantitative evaluations, including human evaluation studies and benchmark comparisons, demonstrate that MoCha sets a new standard for AI-generated cinematic storytelling, achieving superior realism, controllability and generalization.", "arxiv_id": "2503.23307v1", "arxiv_authors": ["Cong Wei", "Bo Sun", "Haoyu Ma", "Ji Hou", "Felix Juefei-Xu", "Zecheng He", "Xiaoliang Dai", "Luxin Zhang", "Kunpeng Li", "Tingbo Hou", "Animesh Sinha", "Peter Vajda", "Wenhu Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a318"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 7298072, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a787"}, "filepath": "data/2505.17581v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990965476105593, "type": "Poster", "name": "MODEM: A Morton-Order Degradation Estimation Mechanism for Adverse Weather Image Recovery", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115258", "abstract": "Restoring images degraded by adverse weather remains a significant challenge due to the highly non-uniform and spatially heterogeneous nature of weather-induced artifacts, \\emph{e.g.}, fine-grained rain streaks versus widespread haze. Accurately estimating the underlying degradation can intuitively provide restoration models with more targeted and effective guidance, enabling adaptive processing strategies. To this end, we propose a Morton-Order Degradation Estimation Mechanism (MODEM) for adverse weather image restoration. Central to MODEM is the Morton-Order 2D-Selective-Scan Module (MOS2D), which integrates Morton-coded spatial ordering with selective state-space models to capture long-range dependencies while preserving local structural coherence. Complementing MOS2D, we introduce a Dual Degradation Estimation Module (DDEM) that disentangles and estimates both global and local degradation priors. These priors dynamically condition the MOS2D modules, facilitating adaptive and context-aware restoration. Extensive experiments and ablation studies demonstrate that MODEM achieves state-of-the-art results across multiple benchmarks and weather types, highlighting its effectiveness in modeling complex degradation dynamics. 
Our code will be released soon.", "arxiv_id": "2505.17581v2", "arxiv_authors": ["Hainuo Wang", "Qiming Hu", "Xiaojie Guo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a319"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.569Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1073018, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a788"}, "filepath": "data/2507.02546v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990527229850379, "type": "Poster", "name": "MoGe-2: Accurate Monocular Geometry with Metric Scale and Sharp Details", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120258", "abstract": "We propose MoGe-2, an advanced open-domain geometry estimation model that recovers a metric-scale 3D point map of a scene from a single image. Our method builds upon the recent monocular geometry estimation approach, MoGe, which predicts affine-invariant point maps with unknown scales. We explore effective strategies to extend MoGe for metric geometry prediction without compromising the relative geometry accuracy provided by the affine-invariant point representation. Additionally, we discover that noise and errors in real data diminish fine-grained detail in the predicted geometry. We address this by developing a data refinement approach that filters and completes real data using sharp synthetic labels, significantly enhancing the granularity of the reconstructed geometry while maintaining the overall accuracy. We train our model on a large corpus of mixed datasets and conducted comprehensive evaluations, demonstrating its superior performance in achieving accurate relative geometry, precise metric scale, and fine-grained detail recovery -- capabilities that no previous methods have simultaneously achieved.", "arxiv_id": "2507.02546v1", "arxiv_authors": ["Ruicheng Wang", "Sicheng Xu", "Yue Dong", "Yu Deng", "Jianfeng Xiang", "Zelong Lv", "Guangzhong Sun", "Xin Tong", "Jiaolong Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a31a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1080792, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a789"}, "filepath": "data/2506.05191v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992221113470066, "type": "Poster", "name": "MokA: Multimodal Low-Rank Adaptation for MLLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116047", "abstract": "In this paper, we reveal that most current efficient multimodal fine-tuning methods are hindered by a key limitation: they are directly borrowed from LLMs, often neglecting the intrinsic differences of multimodal scenarios and even affecting the full utilization of all modalities. Inspired by our empirical observation, we argue that unimodal adaptation and cross-modal adaptation are two essential parts for the effective fine-tuning of MLLMs. From this perspective, we propose Multimodal Low-rank Adaptation (MokA), a multimodal-aware efficient fine-tuning strategy that takes multimodal characteristics into consideration. 
It compresses unimodal information by modality-specific parameters while explicitly enhancing cross-modal interaction, ensuring both unimodal and cross-modal adaptation. Extensive experiments cover three representative multimodal scenarios (audio-visual-text, visual-text, and speech-text), and multiple LLM backbones (LLaMA2, Qwen2, Qwen2.5-VL, etc.). Consistent improvements indicate the efficacy and versatility of the proposed method. Ablation studies and efficiency evaluation are also conducted to fully assess our method. Overall, we think MokA provides a more targeted solution for efficient adaptation of MLLMs, paving the way for further exploration.", "arxiv_id": "2506.05191v1", "arxiv_authors": ["Yake Wei", "Yu Miao", "Dongzhan Zhou", "Di Hu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a31b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1104048, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a78a"}, "filepath": "data/2507.03283v1.png", "tags": [], "_media_type": "image", "_rand": 0.999617081894774, "type": "Poster", "name": "MolVision: Molecular Property Prediction with Vision Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121822", "abstract": "Molecular property prediction is a fundamental task in computational chemistry with critical applications in drug discovery and materials science. While recent works have explored Large Language Models (LLMs) for this task, they primarily rely on textual molecular representations such as SMILES/SELFIES, which can be ambiguous and structurally uninformative. In this work, we introduce MolVision, a novel approach that leverages Vision-Language Models (VLMs) by integrating both molecular structure images and textual descriptions to enhance property prediction. We construct a benchmark spanning nine diverse datasets, covering both classification and regression tasks. Evaluating nine different VLMs in zero-shot, few-shot, and fine-tuned settings, we find that visual information improves prediction performance, particularly when combined with efficient fine-tuning strategies such as LoRA. Our results reveal that while visual information alone is insufficient, multimodal fusion significantly enhances generalization across molecular properties. Adaptation of the vision encoder for molecular images in conjunction with LoRA further improves the performance.
The code and data are available at: https://chemvision.github.io/chemvision/.", "arxiv_id": "2507.03283v1", "arxiv_authors": ["Deepan Adak", "Yogesh Singh Rawat", "Shruti Vyas"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a31c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1267490, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a78b"}, "filepath": "data/2509.07027v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993179042582476, "type": "Poster", "name": "Moment- and Power-Spectrum-Based Gaussianity Regularization for Text-to-Image Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115328", "abstract": "We propose a novel regularization loss that enforces standard Gaussianity, encouraging samples to align with a standard Gaussian distribution. This facilitates a range of downstream tasks involving optimization in the latent space of text-to-image models. We treat elements of a high-dimensional sample as one-dimensional standard Gaussian variables and define a composite loss that combines moment-based regularization in the spatial domain with power spectrum-based regularization in the spectral domain. Since the expected values of moments and power spectrum distributions are analytically known, the loss promotes conformity to these properties. To ensure permutation invariance, the losses are applied to randomly permuted inputs. Notably, existing Gaussianity-based regularizations fall within our unified framework: some correspond to moment losses of specific orders, while the previous covariance-matching loss is equivalent to our spectral loss but incurs higher time complexity due to its spatial-domain computation. We showcase the application of our regularization in generative modeling for test-time reward alignment with a text-to-image model, specifically to enhance aesthetics and text alignment. Our regularization outperforms previous Gaussianity regularization, effectively prevents reward hacking and accelerates convergence.", "arxiv_id": "2509.07027v3", "arxiv_authors": ["Jisung Hwang", "Jaihoon Kim", "Minhyuk Sung"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a31d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1087984, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a78c"}, "filepath": "data/2502.12558v4.png", "tags": [], "_media_type": "image", "_rand": 0.999651616058167, "type": "Poster", "name": "MomentSeeker: A Task-Oriented Benchmark For Long-Video Moment Retrieval", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121519", "abstract": "Accurately locating key moments within long videos is crucial for solving long video understanding (LVU) tasks. However, existing benchmarks are either severely limited in terms of video length and task diversity, or they focus solely on the end-to-end LVU performance, making them inappropriate for evaluating whether key moments can be accurately accessed.
To address this challenge, we propose \\textbf{MomentSeeker}, a novel benchmark for long-video moment retrieval, distinguished by the following features. First, it is created based on long and diverse videos, averaging over 1200 seconds in duration and collected from various domains, e.g., movie, anomaly, egocentric, sports, etc. Second, it covers a variety of real-world tasks, such as action recognition, object localization, and causal reasoning. Third, it incorporates rich forms of queries, including text-only queries, image-conditioned queries, and video-conditioned queries. On top of MomentSeeker, we conduct comprehensive experiments for both generation-based approaches (directly using MLLMs) and retrieval-based approaches (leveraging video retrievers). Our results reveal the significant challenges in long-video moment retrieval in terms of accuracy and efficiency, despite improvements from the latest long-video MLLMs and task-specific fine-tuning. We have publicly released MomentSeeker\\footnote{https://yhy-2000.github.io/MomentSeeker/} to facilitate future research in this area.", "arxiv_id": "2502.12558v4", "arxiv_authors": ["Huaying Yuan", "Jian Ni", "Zheng Liu", "Yueze Wang", "Junjie Zhou", "Zhengyang Liang", "Bo Zhao", "Zhao Cao", "Zhicheng Dou", "Ji-Rong Wen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a31e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1020489, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a78d"}, "filepath": "data/2510.21449v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990565507198097, "type": "Poster", "name": "MoniTor: Exploiting Large Language Models with Instruction for Online Video Anomaly Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119803", "abstract": "Video Anomaly Detection (VAD) aims to locate unusual activities or behaviors within videos. Recently, offline VAD has garnered substantial research attention, which has been invigorated by the progress in large language models (LLMs) and vision-language models (VLMs), offering the potential for a more nuanced understanding of anomalies. However, online VAD has seldomly received attention due to real-time constraints and computational intensity. In this paper, we introduce a novel \\textbf{M}emory-based online scoring queue scheme for \\textbf{T}raining-free VAD (MoniTor), to address the inherent complexities in online VAD. Specifically, MoniTor applies a streaming input to VLMs, leveraging the capabilities of pre-trained large-scale models. To capture temporal dependencies more effectively, we incorporate a novel prediction mechanism inspired by Long Short-Term Memory (LSTM) networks to ensure that the model can effectively model past states and leverage previous predictions to identify anomalous behaviors, thereby better understanding the current frame. Moreover, we design a scoring queue and an anomaly prior to dynamically store recent scores and cover all anomalies in the monitoring scenario, providing guidance for LLMs to distinguish between normal and abnormal behaviors over time.We evaluate MoniTor on two large datasets (i.e., UCF-Crime and XD-Violence) containing various surveillance and real-world scenarios. 
The results demonstrate that MoniTor outperforms state-of-the-art methods and is competitive with weakly supervised methods without training. Code will be available.", "arxiv_id": "2510.21449v1", "arxiv_authors": ["Shengtian Yang", "Yue Feng", "Yingshi Liu", "Jingrou Zhang", "Jie Qin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a31f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1033834, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a78e"}, "filepath": "data/2507.16228v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990469737564769, "type": "Poster", "name": "MONITRS: Multimodal Observations of Natural Incidents Through Remote Sensing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121588", "abstract": "Natural disasters cause devastating damage to communities and infrastructure every year. Effective disaster response is hampered by the difficulty of accessing affected areas during and after events. Remote sensing has allowed us to monitor natural disasters in a remote way. More recently, there have been advances in computer vision and deep learning that help automate satellite imagery analysis. However, they remain limited by their narrow focus on specific disaster types, reliance on manual expert interpretation, and lack of datasets with sufficient temporal granularity or natural language annotations for tracking disaster progression. We present MONITRS, a novel multimodal dataset of $\\sim$10,000 FEMA disaster events with temporal satellite imagery with natural language annotations from news articles, accompanied by geotagged locations, and question-answer pairs. We demonstrate that fine-tuning existing MLLMs on our dataset yields significant performance improvements for disaster monitoring tasks, establishing a new benchmark for machine learning-assisted disaster response systems.", "arxiv_id": "2507.16228v1", "arxiv_authors": ["Shreelekha Revankar", "Utkarsh Mall", "Cheng Perng Phoo", "Kavita Bala", "Bharath Hariharan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a320"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2302408, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a78f"}, "filepath": "data/2505.20744v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998107313902495, "type": "Poster", "name": "MoPFormer: Motion-Primitive Transformer for Wearable-Sensor Activity Recognition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117802", "abstract": "Human Activity Recognition (HAR) with wearable sensors is challenged by limited interpretability, which significantly impacts cross-dataset generalization. To address this challenge, we propose Motion-Primitive Transformer (MoPFormer), a novel self-supervised framework that enhances interpretability by tokenizing inertial measurement unit signals into semantically meaningful motion primitives and leverages a Transformer architecture to learn rich temporal representations.
MoPFormer comprises two stages: the first stage partitions multi-channel sensor streams into short segments and quantizes them into discrete \"motion primitive\" codewords, while the second stage enriches those tokenized sequences through a context-aware embedding module and then processes them with a Transformer encoder. The proposed MoPFormer can be pre-trained using a masked motion-modeling objective that reconstructs missing primitives, enabling it to develop robust representations across diverse sensor configurations. Experiments on six HAR benchmarks demonstrate that MoPFormer not only outperforms state-of-the-art methods but also successfully generalizes across multiple datasets. Most importantly, the learned motion primitives significantly enhance both interpretability and cross-dataset performance by capturing fundamental movement patterns that remain consistent across similar activities regardless of dataset origin.", "arxiv_id": "2505.20744v1", "arxiv_authors": ["Hao Zhang", "Zhan Zhuang", "Xuehao Wang", "Xiaodong Yang", "Yu Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a321"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1032865, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a790"}, "filepath": "data/2510.23574v1.png", "tags": [], "_media_type": "image", "_rand": 0.999885139755188, "type": "Poster", "name": "More Than Generation: Unifying Generation and Depth Estimation via Text-to-Image Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118836", "abstract": "Generative depth estimation methods leverage the rich visual priors stored in pretrained text-to-image diffusion models, demonstrating astonishing zero-shot capability. However, parameter updates during training lead to catastrophic degradation in the image generation capability of the pretrained model. We introduce MERGE, a unified model for image generation and depth estimation, starting from a fixed-parameter pretrained text-to-image model. MERGE demonstrates that the pretrained text-to-image model can not only generate images but also expand to depth estimation effortlessly. Specifically, MERGE introduces a plug-and-play framework that enables seamless switching between image generation and depth estimation modes through simple and pluggable converters. Meanwhile, we propose a Group Reuse Mechanism to encourage parameter reuse and improve the utilization of the additional learnable parameter. MERGE unleashes the powerful depth estimation capability of the pretrained text-to-image model while preserving its original image generation ability. Compared to other unified models for image generation and depth estimation, MERGE achieves state-of-the-art performance across multiple depth estimation benchmarks.
The code and model will be made available.", "arxiv_id": "2510.23574v1", "arxiv_authors": ["Hongkai Lin", "Dingkang Liang", "Mingyang Du", "Xin Zhou", "Xiang Bai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a322"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4689469, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a791"}, "filepath": "data/2505.16533v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992310644300002, "type": "Poster", "name": "Motion Matters: Compact Gaussian Streaming for Free-Viewpoint Video Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115614", "abstract": "3D Gaussian Splatting (3DGS) has emerged as a high-fidelity and efficient paradigm for online free-viewpoint video (FVV) reconstruction, offering viewers rapid responsiveness and immersive experiences. However, existing online methods face challenge in prohibitive storage requirements primarily due to point-wise modeling that fails to exploit the motion properties. To address this limitation, we propose a novel Compact Gaussian Streaming (ComGS) framework, leveraging the locality and consistency of motion in dynamic scene, that models object-consistent Gaussian point motion through keypoint-driven motion representation. By transmitting only the keypoint attributes, this framework provides a more storage-efficient solution. Specifically, we first identify a sparse set of motion-sensitive keypoints localized within motion regions using a viewspace gradient difference strategy. Equipped with these keypoints, we propose an adaptive motion-driven mechanism that predicts a spatial influence field for propagating keypoint motion to neighboring Gaussian points with similar motion. Moreover, ComGS adopts an error-aware correction strategy for key frame reconstruction that selectively refines erroneous regions and mitigates error accumulation without unnecessary overhead. Overall, ComGS achieves a remarkable storage reduction of over 159 \u00d7 compared to 3DGStream and 14 \u00d7 compared to the SOTA method QUEEN, while maintaining competitive visual fidelity and rendering speed. Our code will be released.", "arxiv_id": "2505.16533v1", "arxiv_authors": ["Jiacong Chen", "Qingyu Mao", "Youneng Bao", "Xiandong Meng", "Fanyang Meng", "Ronggang Wang", "Yongsheng Liang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a323"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1779741, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a792"}, "filepath": "data/2509.26391v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996028536715024, "type": "Poster", "name": "MotionRAG: Motion Retrieval-Augmented Image-to-Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115107", "abstract": "Image-to-video generation has made remarkable progress with the advancements in diffusion models, yet generating videos with realistic motion remains highly challenging. 
This difficulty arises from the complexity of accurately modeling motion, which involves capturing physical constraints, object interactions, and domain-specific dynamics that are not easily generalized across diverse scenarios. To address this, we propose MotionRAG, a retrieval-augmented framework that enhances motion realism by adapting motion priors from relevant reference videos through Context-Aware Motion Adaptation (CAMA). The key technical innovations include: (i) a retrieval-based pipeline extracting high-level motion features using video encoder and specialized resamplers to distill semantic motion representations; (ii) an in-context learning approach for motion adaptation implemented through a causal transformer architecture; (iii) an attention-based motion injection adapter that seamlessly integrates transferred motion features into pretrained video diffusion models. Extensive experiments demonstrate that our method achieves significant improvements across multiple domains and various models. Furthermore, our modular design enables zero-shot generalization to new domains by simply updating the retrieval database without retraining any components. This research enhances the core capability of video generation systems by enabling the effective retrieval and transfer of motion priors, facilitating the synthesis of realistic motion dynamics.", "arxiv_id": "2509.26391v1", "arxiv_authors": ["Chenhui Zhu", "Yilu Wu", "Shuai Wang", "Gangshan Wu", "Limin Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a324"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1029339, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a793"}, "filepath": "data/2510.01619v1.png", "tags": [], "_media_type": "image", "_rand": 0.999353440954511, "type": "Poster", "name": "MPMAvatar: Learning 3D Gaussian Avatars with Accurate and Robust Physics-Based Dynamics", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118094", "abstract": "While there has been significant progress in the field of 3D avatar creation from visual observations, modeling physically plausible dynamics of humans with loose garments remains a challenging problem. Although a few existing works address this problem by leveraging physical simulation, they suffer from limited accuracy or robustness to novel animation inputs. In this work, we present MPMAvatar, a framework for creating 3D human avatars from multi-view videos that supports highly realistic, robust animation, as well as photorealistic rendering from free viewpoints. For accurate and robust dynamics modeling, our key idea is to use a Material Point Method-based simulator, which we carefully tailor to model garments with complex deformations and contact with the underlying body by incorporating an anisotropic constitutive model and a novel collision handling algorithm. We combine this dynamics modeling scheme with our canonical avatar that can be rendered using 3D Gaussian Splatting with quasi-shadowing, enabling high-fidelity rendering for physically realistic animations. 
In our experiments, we demonstrate that MPMAvatar significantly outperforms the existing state-of-the-art physics-based avatar in terms of (1) dynamics modeling accuracy, (2) rendering accuracy, and (3) robustness and efficiency. Additionally, we present a novel application in which our avatar generalizes to unseen interactions in a zero-shot manner\u2014which was not achievable with previous learning-based methods due to their limited simulation generalizability. Our code will be publicly available.", "arxiv_id": "2510.01619v1", "arxiv_authors": ["Changmin Lee", "Jihyun Lee", "Tae-Kyun Kim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a325"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1713151, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a794"}, "filepath": "data/2509.15548v4.png", "tags": [], "_media_type": "image", "_rand": 0.9991381862268262, "type": "Poster", "name": "MS-GS: Multi-Appearance Sparse-View 3D Gaussian Splatting in the Wild", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116898", "abstract": "In-the-wild photo collections often contain limited volumes of imagery and exhibit multiple appearances, e.g., taken at different times of day or seasons, posing significant challenges to scene reconstruction and novel view synthesis. Although recent adaptations of Neural Radiance Field (NeRF) and 3D Gaussian Splatting (3DGS) have improved in these areas, they tend to oversmooth and are prone to overfitting. In this paper, we present MS-GS, a novel framework designed with \\textbf{M}ulti-appearance capabilities in \\textbf{S}parse-view scenarios using 3D\\textbf{GS}. To address the lack of support due to sparse initializations, our approach is built on the geometric priors elicited from monocular depth estimations. The key lies in extracting and utilizing local semantic regions with a Structure-from-Motion (SfM) points anchored algorithm for reliable alignment and geometry cues. Then, to introduce multi-view constraints, we propose a series of geometry-guided supervision at virtual views in a fine-grained and coarse scheme to encourage 3D consistency and reduce overfitting. We also introduce a dataset and an in-the-wild experiment setting to set up more realistic benchmarks. 
We demonstrate that MS-GS achieves photorealistic renderings under various challenging sparse-view and multi-appearance conditions and outperforms existing approaches significantly across different datasets.", "arxiv_id": "2509.15548v4", "arxiv_authors": ["Deming Li", "Kaiwen Jiang", "Yutao Tang", "Ravi Ramamoorthi", "Rama Chellappa", "Cheng Peng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a326"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 971061, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a795"}, "filepath": "data/2506.10609v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991851247209658, "type": "Poster", "name": "MSTAR: Box-free Multi-query Scene Text Retrieval with Attention Recycling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118782", "abstract": "Scene text retrieval has made significant progress with the assistance of accurate text localization. However, existing approaches typically require costly bounding box annotations for training. Besides, they mostly adopt a customized retrieval strategy but struggle to unify various types of queries to meet diverse retrieval needs. To address these issues, we introduce Multi-query Scene Text Retrieval with Attention Recycling (MSTAR), a box-free approach for scene text retrieval. It incorporates progressive vision embedding to dynamically capture the multi-grained representation of texts and harmonizes free-style text queries with style-aware instructions. Additionally, a multi-instance matching module is integrated to enhance vision-language alignment. Furthermore, we build the Multi-Query Text Retrieval (MQTR) dataset, the first benchmark designed to evaluate the multi-query scene text retrieval capability of models, comprising four query types and $16k$ images. Extensive experiments demonstrate the superiority of our method across seven public datasets and the MQTR dataset. Notably, MSTAR marginally surpasses the previous state-of-the-art model by 6.4\\% in MAP on Total-Text while eliminating box annotation costs. Moreover, on the MQTR benchmark, MSTAR significantly outperforms the previous models by an average of 8.5\\%.
The code and data will be public.", "arxiv_id": "2506.10609v1", "arxiv_authors": ["Liang Yin", "Xudong Xie", "Zhang Li", "Xiang Bai", "Yuliang Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a327"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.570Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1950250, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a796"}, "filepath": "data/2412.18319v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998269257039113, "type": "Poster", "name": "Mulberry: Empowering MLLM with o1-like Reasoning and Reflection via Collective Monte Carlo Tree Search", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116245", "abstract": "In this work, we aim to develop an MLLM that understands and solves questions by learning to create each intermediate step of the reasoning involved until the final answer. To this end, we propose Collective Monte Carlo Tree Search (CoMCTS), a new learning-to-reason method for MLLMs, which introduces the concept of collective learning into ``tree search'' for effective and efficient reasoning-path searching and learning. The core idea of CoMCTS is to leverage collective knowledge from multiple models to collaboratively conjecture, search and identify effective reasoning paths toward correct answers via four iterative operations including Expansion, Simulation and Error Positioning, Backpropagation, and Selection. Using CoMCTS, we construct Mulberry-260k, a multimodal dataset with a tree of rich, explicit and well-defined reasoning nodes for each question. With Mulberry-260k, we perform collective SFT to train our model, Mulberry, a series of MLLMs with o1-like step-by-step Reasoning and Reflection capabilities. Extensive experiments demonstrate the superiority of our proposed methods on various benchmarks. Anonymous code is available at https://anonymous.4open.science/r/Mulberry-NIPS25.", "arxiv_id": "2412.18319v2", "arxiv_authors": ["Huanjin Yao", "Jiaxing Huang", "Wenhao Wu", "Jingyi Zhang", "Yibo Wang", "Shunyu Liu", "Yingjie Wang", "Yuxin Song", "Haocheng Feng", "Li Shen", "Dacheng Tao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a328"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1473638, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a797"}, "filepath": "data/2506.20879v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996708107104275, "type": "Poster", "name": "MultiHuman-Testbench: Benchmarking Image Generation for Multiple Humans", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121853", "abstract": "Generation of images containing multiple humans, performing complex actions, while preserving their facial identities, is a significant challenge. A major factor contributing to this is the lack of a dedicated benchmark. To address this, we introduce MultiHuman-Testbench, a novel benchmark for rigorously evaluating generative models for multi-human generation.
The benchmark comprises 1800 samples, including carefully curated text prompts, describing a range of simple to complex human actions. These prompts are matched with a total of 5,550 unique human face images, sampled uniformly to ensure diversity across age, ethnic background, and gender. Alongside captions, we provide human-selected pose conditioning images which accurately match the prompt. We propose a multi-faceted evaluation suite employing four key metrics to quantify face count, ID similarity, prompt alignment, and action detection. We conduct a thorough evaluation of a diverse set of models, including zero-shot approaches and training-based methods, with and without regional priors. We also propose novel techniques to incorporate image and region isolation using human segmentation and Hungarian matching, significantly improving ID similarity. Our proposed benchmark and key findings provide valuable insights and a standardized tool for advancing research in multi-human image generation.", "arxiv_id": "2506.20879v3", "arxiv_authors": ["Shubhankar Borse", "Seokeon Choi", "Sunghyun Park", "Jeongho Kim", "Shreya Kadambi", "Risheek Garrepalli", "Sungrack Yun", "Munawar Hayat", "Fatih Porikli"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a329"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2317033, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a798"}, "filepath": "data/2510.11112v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990925357663849, "type": "Poster", "name": "Multimodal Disease Progression Modeling via Spatiotemporal Disentanglement and Multiscale Alignment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120126", "abstract": "Longitudinal multimodal data, including electronic health records (EHR) and sequential chest X-rays (CXRs), is critical for modeling disease progression, yet remains underutilized due to two key challenges: (1) redundancy in consecutive CXR sequences, where static anatomical regions dominate over clinically-meaningful dynamics, and (2) temporal misalignment between sparse, irregular imaging and continuous EHR data. We introduce $\\texttt{DiPro}$, a novel framework that addresses these challenges through region-aware disentanglement and multi-timescale alignment. First, we disentangle static (anatomy) and dynamic (pathology progression) features in sequential CXRs, prioritizing disease-relevant changes. Second, we hierarchically align these static and dynamic CXR features with asynchronous EHR data via local (pairwise interval-level) and global (full-sequence) synchronization to model coherent progression pathways. Extensive experiments on the MIMIC dataset demonstrate that $\\texttt{DiPro}$ could effectively extract temporal clinical dynamics and achieve state-of-the-art performance on both disease progression identification and general ICU prediction tasks.", "arxiv_id": "2510.11112v1", "arxiv_authors": ["Chen Liu", "Wenfang Yao", "Kejing Yin", "William K. 
Cheung", "Jing Qin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a32a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1049031, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a799"}, "filepath": "data/2509.25851v1.png", "tags": [], "_media_type": "image", "_rand": 0.999680361323508, "type": "Poster", "name": "Multimodal Symbolic Logical Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115490", "abstract": "Multimodal symbolic logical reasoning, which aims to deduce new facts from multimodal input via formal logic, is critical in high-stakes applications such as autonomous driving and medical diagnosis, as its rigorous, deterministic reasoning helps prevent serious consequences. To evaluate such capabilities of current state-of-the-art vision language models (VLMs), we introduce the first benchmark MuSLR for multimodal symbolic logical reasoning grounded in formal logical rules. MuSLR comprises 1,093 instances across 7 domains, including 35 atomic symbolic logic and 976 logical combinations, with reasoning depths ranging from 2 to 9. We evaluate 7 state-of-the-art VLMs on MuSLR and find that they all struggle with multimodal symbolic reasoning, with the best model, GPT-4.1, achieving only 46.8%.Thus, we propose LogiCAM, a modular framework that applies formal logical rules to multimodal inputs, boosting GPT-4.1\u2019s Chain-of-Thought performance by 14.13%, and delivering even larger gains on complex logics such as first-order logic. We also conduct a comprehensive error analysis, showing that around 70% of failures stem from logical misalignment between modalities, offering key insights to guide future improvements.", "arxiv_id": "2509.25851v1", "arxiv_authors": ["Jundong Xu", "Hao Fei", "Yuhui Zhang", "Liangming Pan", "Qijun Huang", "Qian Liu", "Preslav Nakov", "Min-Yen Kan", "William Yang Wang", "Mong-Li Lee", "Wynne Hsu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a32b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1252270, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a79a"}, "filepath": "data/2509.17429v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991018175231099, "type": "Poster", "name": "Multi-scale Temporal Prediction via Incremental Generation and Multi-agent Collaboration", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115325", "abstract": "Accurate temporal prediction is the bridge between comprehensive scene understanding and embodied artificial intelligence. However, predicting multiple fine-grained states of scene at multiple temporal scales is difficult for vision-language models.We formalize the Multi\u2010Scale Temporal Prediction (MSTP) task in general and surgical scene by decomposing multi\u2010scale into two orthogonal dimensions: the temporal scale, forecasting states of human and surgery at varying look\u2010ahead intervals, and the state scale, modeling a hierarchy of states in general and surgical scene. 
For instance, in a general scene, states of contacting relationship are finer-grained than states of spatial relationship. Similarly, in a surgical scene, medium\u2010level steps are finer\u2010grained than high\u2010level phases yet remain constrained by their encompassing phase. To support this unified task, we introduce the first MSTP Benchmark, featuring synchronized annotations across multiple state scales and temporal scales. We further propose a novel method, Incremental Generation and Multi\u2010agent Collaboration (IG-MC), which integrates two key innovations. Firstly, we propose a plug-and-play incremental generation scheme to keep temporal prediction quality high, which continuously synthesizes up-to-date visual previews at expanding temporal scales to inform multiple decision-making agents, ensuring that decision content and generated visuals remain synchronized and preventing performance degradation as look\u2010ahead intervals lengthen. Secondly, we propose a decision\u2010driven multi\u2010agent collaboration framework for multi-state prediction, comprising generation, initiation, and multi\u2010state assessment agents that dynamically trigger and evaluate prediction cycles to balance global coherence and local fidelity. Extensive experiments on the MSTP Benchmark in general and surgical scenes show that IG\u2010MC is a generalizable plug-and-play method for MSTP, demonstrating the effectiveness of incremental generation and the stability of decision\u2010driven multi\u2010agent collaboration.", "arxiv_id": "2509.17429v2", "arxiv_authors": ["Zhitao Zeng", "Guojian Yuan", "Junyuan Mao", "Yuxuan Wang", "Xiaoshuang Jia", "Yueming Jin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a32c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 998057, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a79b"}, "filepath": "data/2506.07235v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992703094128448, "type": "Poster", "name": "Multi-step Visual Reasoning with Visual Tokens Scaling and Verification", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115184", "abstract": "Multi-modal large language models (MLLMs) have achieved remarkable capabilities by integrating visual perception with language understanding, enabling applications such as image-grounded dialogue, visual question answering, and scientific analysis. However, most MLLMs adopt a static inference paradigm, encoding the entire image into fixed visual tokens upfront, which limits their ability to iteratively refine understanding or adapt to context during inference. This contrasts sharply with human perception, which is dynamic, selective, and feedback-driven. In this work, we introduce a novel framework for inference-time visual token scaling that enables MLLMs to perform iterative, verifier-guided reasoning over visual content. We formulate the problem as a Markov Decision Process, involving a reasoner that proposes visual actions and a verifier\u2014trained via multi-step Direct Preference Optimization (DPO)\u2014that evaluates these actions and determines when reasoning should terminate.
To support this, we present a new dataset, VTS, comprising supervised reasoning trajectories (VTS-SFT) and preference-labeled reasoning comparisons (VTS-DPO).Our method significantly outperforms existing approaches across diverse visual reasoning benchmarks, offering not only improved accuracy but also more interpretable and grounded reasoning processes. These results demonstrate the promise of dynamic inference mechanisms for enabling fine-grained, context-aware visual reasoning in next-generation MLLMs.", "arxiv_id": "2506.07235v1", "arxiv_authors": ["Tianyi Bai", "Zengjie Hu", "Fupeng Sun", "Jiantao Qiu", "Yizhen Jiang", "Guangxin He", "Bohan Zeng", "Conghui He", "Binhang Yuan", "Wentao Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a32d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 998814, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a79c"}, "filepath": "data/2510.21406v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990545164024013, "type": "Poster", "name": "MUVR: A Multi-Modal Untrimmed Video Retrieval Benchmark with Multi-Level Visual Correspondence", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121621", "abstract": "We propose the Multi-modal Untrimmed Video Retrieval task, along with a new benchmark (MUVR) to advance video retrieval for long-video platforms. MUVR aims to retrieve untrimmed videos containing relevant segments using multi-modal queries. It has the following features: **1) Practical retrieval paradigm:** MUVR supports video-centric multi-modal queries, expressing fine-grained retrieval needs through long text descriptions, video tag prompts, and mask prompts. It adopts a one-to-many retrieval paradigm and focuses on untrimmed videos, tailored for long-video platform applications. **2) Multi-level visual correspondence:** To cover common video categories (e.g., news, travel, dance) and precisely define retrieval matching criteria, we construct multi-level visual correspondence based on core video content (e.g., news events, travel locations, dance moves) which users are interested in and want to retrieve. It covers six levels: copy, event, scene, instance, action, and others. **3) Comprehensive evaluation criteria:** We develop 3 versions of MUVR (i.e., Base, Filter, QA). MUVR-Base/Filter evaluates retrieval models, while MUVR-QA assesses MLLMs in a question-answering format. We also propose a Reranking Score to evaluate the reranking ability of MLLMs. MUVR consists of 53K untrimmed videos from the video platform Bilibili, with 1,050 multi-modal queries and 84K matches. Extensive evaluations of 3 state-of-the-art video retrieval models, 6 image-based VLMs, and 10 MLLMs are conducted. MUVR reveals the limitations of retrieval methods in processing untrimmed videos and multi-modal queries, as well as MLLMs in multi-video understanding and reranking. Our code and benchmark will be open-sourced soon.", "arxiv_id": "2510.21406v1", "arxiv_authors": ["Yue Feng", "Jinwei Hu", "Qijia Lu", "Jiawei Niu", "Li Tan", "Shuo Yuan", "Ziyi Yan", "Yizhen Jia", "Qingzhi He", "Shiping Ge", "Ethan Q. 
Chen", "Wentong Li", "Limin Wang", "Jie Qin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a32e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1103119, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a79d"}, "filepath": "data/2505.21483v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991790747080709, "type": "Poster", "name": "MV-CoLight: Efficient Object Compositing with Consistent Lighting and Shadow Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118325", "abstract": "Object compositing offers significant promise for augmented reality (AR) and embodied intelligence applications. Existing approaches predominantly focus on single-image scenarios or intrinsic decomposition techniques, facing challenges with multi-view consistency, complex scenes, and diverse lighting conditions. Recent inverse rendering advancements, such as 3D Gaussian and diffusion-based methods, have enhanced consistency but are limited by scalability, heavy data requirements, or prolonged reconstruction time per scene. To broaden its applicability, we introduce MV-CoLight, a two-stage framework for illumination-consistent object compositing in both 2D images and 3D scenes. Our novel feed-forward architecture models lighting and shadows directly, avoiding the iterative biases of diffusion-based methods. We employ a Hilbert curve\u2013based mapping to align 2D image inputs with 3D Gaussian scene representations seamlessly. To facilitate training and evaluation, we further introduce a large-scale 3D compositing dataset. Experiments demonstrate state-of-the-art harmonized results across standard benchmarks and our dataset, as well as casually captured real-world scenes demonstrate the framework\u2019s robustness and wide generalization.", "arxiv_id": "2505.21483v1", "arxiv_authors": ["Kerui Ren", "Jiayang Bai", "Linning Xu", "Lihan Jiang", "Jiangmiao Pang", "Mulin Yu", "Bo Dai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a32f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4228305, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a79e"}, "filepath": "data/2510.05782v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999575977920042, "type": "Poster", "name": "Mysteries of the Deep: Role of Intermediate Representations in Out of Distribution Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118891", "abstract": "Out-of-distribution (OOD) detection is essential for reliably deploying machine learning models in the wild. Yet, most methods treat large pre-trained models as monolithic encoders and rely solely on their final-layer representations for detection. We challenge this wisdom. We reveal the intermediate layers of pre-trained models, shaped by residual connections that subtly transform input projections, can encode surprisingly rich and diverse signals for detecting distributional shifts. 
Importantly, to exploit latent representation diversity across layers, we introduce an entropy-based criterion to automatically identify layers offering the most complementary information in a training-free setting, without access to OOD data. We show that selectively incorporating these intermediate representations can increase the accuracy of OOD detection by up to $10\\%$ in far-OOD and over $7\\%$ in near-OOD benchmarks compared to state-of-the-art training-free methods across various model architectures and training objectives. Our findings reveal a new avenue for OOD detection research and uncover the impact of various training objectives and model architectures on confidence-based OOD detection methods.", "arxiv_id": "2510.05782v2", "arxiv_authors": ["I. M. De la Jara", "C. Rodriguez-Opazo", "D. Teney", "D. Ranasinghe", "E. Abbasnejad"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a330"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1076774, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a79f"}, "filepath": "data/2506.15684v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992244940844475, "type": "Poster", "name": "Nabla-R2D3: Effective and Efficient 3D Diffusion Alignment with 2D Rewards", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119200", "abstract": "Generating high-quality and photorealistic 3D assets remains a longstanding challenge in 3D vision and computer graphics. Although state-of-the-art generative models, such as diffusion models, have made significant progress in 3D generation, they often fall short of human-designed content due to limited ability to follow instructions, align with human preferences, or produce realistic textures, geometries, and physical attributes. In this paper, we introduce Nabla-R2D3, a highly effective and sample-efficient reinforcement learning alignment framework for 3D-native diffusion models using 2D rewards. Built upon the recently proposed Nabla-GFlowNet method for reward finetuning, our Nabla-R2D3 enables effective adaptation of 3D diffusion models through pure 2D reward feedback. 
Extensive experiments show that, unlike naive finetuning baselines which either fail to converge or suffer from overfitting, Nabla-R2D3 consistently achieves higher rewards and reduced prior forgetting within few finetuning steps.", "arxiv_id": "2506.15684v1", "arxiv_authors": ["Qingming Liu", "Zhen Liu", "Dinghuai Zhang", "Kui Jia"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a331"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1248765, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a0"}, "filepath": "data/2506.03131v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999967223053315, "type": "Poster", "name": "Native-Resolution Image Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118124", "abstract": "We introduce native-resolution image synthesis, a novel paradigm in generative modeling capable of synthesizing images at arbitrary resolutions and aspect ratios. This approach overcomes the limitations of standard fixed-resolution, square-image methods by inherently handling variable-length visual tokens\u2014a core challenge for conventional techniques. To this end, we propose the Native-resolution diffusion Transformer (NiT), an architecture that explicitly models varying resolutions and aspect ratios within its denoising process. Unconstrained by fixed formats, NiT learns intrinsic visual distributions from images encompassing a wide range of resolutions and aspect ratios. Notably, a single NiT model simultaneously achieves the state-of-the-art performance on both ImageNet-256x256 and 512x512 benchmarks. Surprisingly, akin to the robust zero-shot capabilities seen in advanced Large Language Models, NiT, pretrained solely on ImageNet, demonstrates excellent zero-shot generalization performance. It successfully generates high-fidelity images at previously unseen high resolutions (e.g., 1024x1024, 1536x1536) and diverse aspect ratios (e.g., 16:9,3:1, 4:3), as shown in Figure 1. These findings indicate the significant potential of native-resolution modeling as a bridge between visual generative modeling and advanced LLM methodologies. Code and pretrained models will be made publicly available.", "arxiv_id": "2506.03131v1", "arxiv_authors": ["Zidong Wang", "Lei Bai", "Xiangyu Yue", "Wanli Ouyang", "Yiyuan Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a332"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6990051, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a1"}, "filepath": "data/2505.16993v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991353784750667, "type": "Poster", "name": "Native Segmentation Vision Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117718", "abstract": "Uniform downsampling remains the de facto standard for reducing spatial resolution in vision backbones. 
In this work, we propose an alternative design built around a content-aware spatial grouping layer that dynamically assigns tokens to a reduced set based on image boundaries and their semantic content. Stacking our grouping layer across consecutive backbone stages results in hierarchical segmentation that arises *natively* in the feature extraction process, resulting in our coined Native Segmentation Vision Transformer.We show that a careful design of our architecture enables the emergence of strong segmentation masks solely from grouping layers, that is, without additional segmentation-specific heads. This sets the foundation for a new paradigm of *native*, backbone-level segmentation, which enables strong zero-shot results without mask supervision, as well as a minimal and efficient standalone model design for downstream segmentation tasks.", "arxiv_id": "2505.16993v1", "arxiv_authors": ["Guillem Bras\u00f3", "Aljo\u0161a O\u0161ep", "Laura Leal-Taix\u00e9"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a333"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1888896, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a2"}, "filepath": "data/2506.01031v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993420891092921, "type": "Poster", "name": "NavBench: Probing Multimodal Large Language Models for Embodied Navigation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116120", "abstract": "Multimodal Large Language Models (MLLMs) have demonstrated strong generalization in vision-language tasks, yet their ability to understand and act within embodied environments remains underexplored. We present NavBench, a benchmark to evaluate the embodied navigation capabilities of MLLMs under zero-shot settings. NavBench consists of two components: (1) navigation comprehension, assessed through three cognitively grounded tasks including global instruction alignment, temporal progress estimation, and local observation-action reasoning, covering 3,200 question-answer pairs; and (2) step-by-step execution in 432 episodes across 72 indoor scenes, stratified by spatial, cognitive, and execution complexity. To support real-world deployment, we introduce a pipeline that converts MLLMs' outputs into robotic actions. We evaluate both proprietary and open-source models, finding that GPT-4o performs well across tasks, while lighter open-source models succeed in simpler cases. Results also show that models with higher comprehension scores tend to achieve better execution performance. Providing map-based context improves decision accuracy, especially in medium-difficulty scenarios. 
However, most models struggle with temporal understanding, particularly in estimating progress during navigation, which may pose a key challenge.", "arxiv_id": "2506.01031v1", "arxiv_authors": ["Yanyuan Qiao", "Haodong Hong", "Wenqi Lyu", "Dong An", "Siqi Zhang", "Yutong Xie", "Xinyu Wang", "Qi Wu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a334"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.571Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2098498, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a3"}, "filepath": "data/2412.16326v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992686237931795, "type": "Poster", "name": "Navigating the Compression Generation Trade-off In Visual Tokenization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116076", "abstract": "Current image generation methods are based on a two-stage training approach. In stage 1, an auto-encoder is trained to compress an image into a latent space; in stage 2, a generative model is trained to learn a distribution over that latent space. This reveals a fundamental trade-off: do we compress more aggressively to make the latent distribution easier for the stage 2 model to learn even if it makes reconstruction worse? We study this problem in the context of discrete, auto-regressive image generation. Through the lens of scaling laws, we show that smaller stage 2 models can benefit from more compressed stage 1 latents even if reconstruction performance worsens, demonstrating that generation modeling capacity plays a role in this trade-off. Diving deeper, we rigorously study the connection between compute scaling and the stage 1 rate-distortion trade-off. Next, we introduce Causally Regularized Tokenization (CRT), which uses knowledge of the stage 2 generation modeling procedure to embed useful inductive biases in stage 1 latents. This regularization improves stage 2 generation performance by making the tokens easier to model, without affecting the stage 1 compression rate and while only marginally affecting distortion: we are able to improve compute efficiency 2-3$\\times$ over baseline. Finally, we use CRT with further optimizations to the visual tokenizer setup, resulting in a generative pipeline that matches LlamaGen-3B generation performance (2.18 FID) with half the tokens per image (256 vs. 576) and a fourth the total model parameters (775M vs.
3.1B) while using the same architecture and inference procedure.", "arxiv_id": "2412.16326v1", "arxiv_authors": ["Vivek Ramanujan", "Kushal Tirumala", "Armen Aghajanyan", "Luke Zettlemoyer", "Ali Farhadi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a335"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1936478, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a4"}, "filepath": "data/2510.08565v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999599954010207, "type": "Poster", "name": "NaViL: Rethinking Scaling Properties of Native Multimodal Large Language Models with Data Constraint", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119488", "abstract": "Compositional training has been the de-facto paradigm in existing Multimodal Large Language Models (MLLMs), where pre-trained vision encoders are connected with pre-trained LLMs through continuous multimodal pre-training. However, the multimodal scaling property of this paradigm remains difficult to explore due to the separated training. In this paper, we focus on the native training of MLLMs in an end-to-end manner and systematically study its design space and scaling property under a practical setting, i.e., data constraint. Through careful study of various choices in MLLM, we obtain the optimal meta-architecture that best balances performance and training cost. After that, we further explore the scaling properties of the native MLLM and indicate the positively correlated scaling relationship between visual encoders and LLMs. Based on these findings, we propose a native MLLM called NaViL, combined with a simple and cost-effective recipe. Experimental results on 14 multimodal benchmarks confirm the competitive performance of NaViL against existing MLLMs. Besides, our findings and results provide in-depth insights for the future study of native MLLMs.", "arxiv_id": "2510.08565v1", "arxiv_authors": ["Changyao Tian", "Hao Li", "Gen Luo", "Xizhou Zhu", "Weijie Su", "Hanming Deng", "Jinguo Zhu", "Jie Shao", "Ziran Zhu", "Yunpeng Liu", "Lewei Lu", "Wenhai Wang", "Hongsheng Li", "Jifeng Dai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a336"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1069872, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a5"}, "filepath": "data/2508.06044v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996641333597583, "type": "Poster", "name": "NEP: Autoregressive Image Editing via Next Editing Token Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119980", "abstract": "Text-guided image editing involves modifying a source image based on a language instruction and, typically, requires changes to only small local regions. However, existing approaches generate the entire target image rather than selectively regenerate only the intended editing areas. 
This results in (1) unnecessary computational costs and (2) a bias toward reconstructing non-editing regions, which compromises the quality of the intended edits. To resolve these limitations, we propose to formulate image editing as Next Editing-token Prediction (NEP) based on autoregressive image generation, where only regions that need to be edited are regenerated, thus avoiding unintended modification to the non-editing areas. To enable any-region editing, we propose to pre-train an any-order autoregressive text-to-image (T2I) model. Once trained, it is capable of zero-shot image editing and can be easily adapted to NEP for image editing, which achieves a new state-of-the-art on widely used image editing benchmarks. Moreover, our model naturally supports test-time scaling (TTS) through iteratively refining its generation in a zero-shot manner.", "arxiv_id": "2508.06044v2", "arxiv_authors": ["Huimin Wu", "Xiaojian Ma", "Haozhe Zhao", "Yanpeng Zhao", "Qing Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a337"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2284504, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a6"}, "filepath": "data/2509.20745v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992075103165652, "type": "Poster", "name": "Neptune-X: Active X-to-Maritime Generation for Universal Maritime Object Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116049", "abstract": "Maritime object detection is essential for navigation safety, surveillance, and autonomous operations, yet constrained by two key challenges: the scarcity of annotated maritime data and poor generalization across various maritime attributes (e.g., object category, viewpoint, location, and imaging environment). To address these challenges, we propose Neptune-X, a data-centric generative-selection framework that enhances training effectiveness by leveraging synthetic data generation with task-aware sample selection. From the generation perspective, we develop X-to-Maritime, a multi-modality-conditioned generative model that synthesizes diverse and realistic maritime scenes. A key component is the Bidirectional Object-Water Attention module, which captures boundary interactions between objects and their aquatic surroundings to improve visual fidelity. To further improve downstream tasking performance, we propose Attribute-correlated Active Sampling, which dynamically selects synthetic samples based on their task relevance. To support robust benchmarking, we construct the Maritime Generation Dataset, the first dataset tailored for generative maritime learning, encompassing a wide range of semantic conditions. 
Extensive experiments demonstrate that our approach sets a new benchmark in maritime scene synthesis, significantly improving detection accuracy, particularly in challenging and previously underrepresented settings.", "arxiv_id": "2509.20745v2", "arxiv_authors": ["Yu Guo", "Shengfeng He", "Yuxu Lu", "Haonan An", "Yihang Tao", "Huilin Zhu", "Jingxian Liu", "Yuguang Fang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a338"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1086934, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a7"}, "filepath": "data/2406.17345v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998041071718753, "type": "Poster", "name": "NerfBaselines: Consistent and Reproducible Evaluation of Novel View Synthesis Methods", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121436", "abstract": "Novel view synthesis is an important problem with many applications, including AR/VR, gaming, and robotic simulations. With the recent rapid development of Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (3DGS) methods, it is becoming difficult to keep track of the current state of the art (SoTA) due to methods using different evaluation protocols, codebases being difficult to install and use, and methods not generalizing well to novel 3D scenes. In our experiments, we show that even tiny differences in the evaluation protocols of various methods can artificially boost the performance of these methods. This raises questions about the validity of quantitative comparisons performed in the literature. To address these questions, we propose NerfBaselines, an evaluation framework which provides consistent benchmarking tools, ensures reproducibility, and simplifies the installation and use of various methods. We validate our implementation experimentally by reproducing the numbers reported in the original papers. For improved accessibility, we release a web platform that compares commonly used methods on standard benchmarks. 
We strongly believe NerfBaselines is a valuable contribution to the community as it ensures that quantitative results are comparable and thus truly measure progress in the field of novel view synthesis.", "arxiv_id": "2406.17345v1", "arxiv_authors": ["Jonas Kulhanek", "Torsten Sattler"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a339"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1347632, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a8"}, "filepath": "data/2509.16336v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990518167566844, "type": "Poster", "name": "Neural Atlas Graphs for Dynamic Scene Decomposition and Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115926", "abstract": "Learning editable high-resolution scene representations for dynamic scenes is an open problem with applications across the domains from autonomous driving to creative editing - the most successful approaches today make a trade-off between editability and supporting scene complexity: neural atlases represent dynamic scenes as two deforming image layers, foreground and background, which are editable in 2D, but break down when multiple objects occlude and interact. In contrast, scene graph models make use of annotated data such as masks and bounding boxes from autonomous\u2011driving datasets to capture complex 3D spatial relationships, but their implicit volumetric node representations are challenging to edit view-consistently. We propose Neural Atlas Graphs (NAGs), a hybrid high-resolution scene representation, where every graph node is a view\u2011dependent neural atlas, facilitating both 2D appearance editing and 3D ordering and positioning of scene elements. Fit at test\u2011time, NAGs achieve state\u2011of\u2011the\u2011art quantitative results on the Waymo Open Dataset - by 5 dB PSNR increase compared to existing methods - and make environmental editing possible in high resolution and visual quality - creating counterfactual driving scenarios with new backgrounds and edited vehicle appearance. 
We find that the method also generalizes beyond driving scenes and compares favorably - by more than 7 dB in PSNR - to recent matting and video editing baselines on the DAVIS video dataset with a diverse set of human and animal-centric scenes.", "arxiv_id": "2509.16336v1", "arxiv_authors": ["Jan Philipp Schneider", "Pratik Singh Bisht", "Ilya Chugunov", "Andreas Kolb", "Michael Moeller", "Felix Heide"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a33a"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 949403, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7a9"}, "filepath": "data/2503.16980v6.png", "tags": [], "_media_type": "image", "_rand": 0.9995029506017777, "type": "Poster", "name": "Neural Discrete Token Representation Learning for Extreme Token Reduction in Video Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117541", "abstract": "Token-based video representation has emerged as a promising approach for enabling large language models (LLMs) to interpret video content. However, existing token reduction techniques, such as pruning and merging, often disrupt essential positional embeddings and rely on continuous visual tokens sampled from nearby pixels with similar spatial\u2013temporal locations. By removing only a small fraction of tokens, these methods still produce relatively lengthy continuous sequences, which falls short of the extreme compression required to balance computational efficiency and token count in video LLMs.In this paper, we introduce the novel task of **Extreme Short Token Reduction**, which aims to represent entire videos using a minimal set of discrete tokens. We propose **VQToken**, a neural discrete token representation framework that(i) applies adaptive vector quantization to continuous ViT embeddings to learn a compact codebook and (ii) preserves spatial\u2013temporal positions via a token hash function by assigning each grid-level token to its nearest codebook entry.On the Extreme Short Token Reduction task, our VQToken compresses sequences to just **0.07\\%** of their original length while incurring only a **0.66\\%** drop in accuracy on NextQA-MC benchmark. It also achieves comparable performance on ActNet-QA, Long Video Bench, and VideoMME. We further introduce the **Token Information Density** (**TokDense**) metric and formalize fixed-length and adaptive-length subtasks, achieving state-of-the-art results in both settings. 
Our approach dramatically lowers theoretical complexity, increases information density, requires far fewer tokens, and enables efficient video large language models in resource-constrained environments.", "arxiv_id": "2503.16980v6", "arxiv_authors": ["Haichao Zhang", "Yun Fu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a33b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1065872, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7aa"}, "filepath": "data/2507.05397v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992598909647875, "type": "Poster", "name": "Neural-Driven Image Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117788", "abstract": "Traditional image editing typically relies on manual prompting, making it labor-intensive and inaccessible to individuals with limited motor control or language abilities. Leveraging recent advances in brain-computer interfaces (BCIs) and generative models, we propose LoongX, a hands-free image editing approach driven by multimodal neurophysiological signals. LoongX utilizes state-of-the-art diffusion models trained on a comprehensive dataset of 23,928 image editing pairs, each paired with synchronized electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), photoplethysmography (PPG), and head motion signals that capture user intent. To effectively address the heterogeneity of these signals, LoongX integrates two key modules. The cross-scale state space (CS3) module encodes informative modality-specific features. The dynamic gated fusion (DGF) module further aggregates these features into a unified latent space, which is then aligned with edit semantics via fine-tuning on a diffusion transformer (DiT). Additionally, we pre-train the encoders using contrastive learning to align cognitive states with semantic intentions from embedded natural language. Extensive experiments demonstrate that LoongX achieves performance comparable to text-driven methods (CLIP-I: 0.6605 vs. 0.6558; DINO: 0.4812 vs. 0.4637) and outperforms them when neural signals are combined with speech (CLIP-T: 0.2588 vs. 0.2549). These results highlight the promise of neural-driven generative models in enabling accessible, intuitive image editing and open new directions for cognitive-driven creative technologies.
Datasets and code will be released to support future work and foster progress in this emerging area.", "arxiv_id": "2507.05397v2", "arxiv_authors": ["Pengfei Zhou", "Jie Xia", "Xiaopeng Peng", "Wangbo Zhao", "Zilong Ye", "Zekai Li", "Suorong Yang", "Jiadong Pan", "Yuanxiang Chen", "Ziqiao Wang", "Kai Wang", "Qian Zheng", "Xiaojun Chang", "Gang Pan", "Shurong Dong", "Kaipeng Zhang", "Yang You"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a33c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2491359, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ab"}, "filepath": "data/2508.08421v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999426210321823, "type": "Poster", "name": "Neural Tangent Knowledge Distillation for Optical Convolutional Networks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119609", "abstract": "Hybrid Optical Neural Networks (ONNs, typically consisting of an optical frontend and a digital backend) offer an energy-efficient alternative to fully digital deep networks for real-time, power-constrained systems. However, their adoption is limited by two main challenges: the accuracy gap compared to large-scale networks during training, and discrepancies between simulated and fabricated systems that further degrade accuracy. While previous work has proposed end-to-end optimizations for specific datasets (e.g., MNIST) and optical systems, these approaches typically lack generalization across tasks and hardware designs. To address these limitations, we propose a task-agnostic and hardware-agnostic pipeline that supports image classification and segmentation across diverse optical systems. To assist optical system design before training, we estimate achievable model accuracy based on user-specified constraints such as physical size and the dataset. For training, we introduce Neural Tangent Knowledge Distillation (NTKD), which aligns optical models with electronic teacher networks, thereby narrowing the accuracy gap. After fabrication, NTKD also guides fine-tuning of the digital backend to compensate for implementation errors. 
Experiments on multiple datasets (e.g., MNIST, CIFAR, Carvana Masking) and hardware configurations show that our pipeline consistently improves ONN performance and enables practical deployment in both pre-fabrication simulations and physical implementations.", "arxiv_id": "2508.08421v1", "arxiv_authors": ["Jinlin Xiang", "Minho Choi", "Yubo Zhang", "Zhihao Zhou", "Arka Majumdar", "Eli Shlizerman"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a33d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1033743, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ac"}, "filepath": "data/2503.07076v5.png", "tags": [], "_media_type": "image", "_rand": 0.9992683799477098, "type": "Poster", "name": "NFIG: Multi-Scale Autoregressive Image Generation via Frequency Ordering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116151", "abstract": "Autoregressive models have achieved significant success in image generation. However, unlike the inherent hierarchical structure of image information in the spectral domain, standard autoregressive methods typically generate pixels sequentially in a fixed spatial order. To better leverage this spectral hierarchy, we introduce Next-Frequency Image Generation (NFIG). NFIG is a novel framework that decomposes the image generation process into multiple frequency-guided stages. NFIG aligns the generation process with the natural image structure. It does this by first generating low-frequency components, which efficiently capture global structure with significantly fewer tokens, and then progressively adding higher-frequency details. This frequency-aware paradigm offers substantial advantages: it not only improves the quality of generated images but crucially reduces inference cost by efficiently establishing global structure early on. Extensive experiments on the ImageNet-256 benchmark validate NFIG's effectiveness, demonstrating superior performance (FID: 2.81) and a notable 1.25x speedup compared to the strong baseline VAR-d20.", "arxiv_id": "2503.07076v5", "arxiv_authors": ["Zhihao Huang", "Xi Qiu", "Yukuo Ma", "Yifu Zhou", "Junjie Chen", "Hongyuan Zhang", "Chi Zhang", "Xuelong Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a33e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1059614, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ad"}, "filepath": "data/2412.13176v3.png", "tags": [], "_media_type": "image", "_rand": 0.9998327663658632, "type": "Poster", "name": "NFL-BA: Near-Field Light Bundle Adjustment for SLAM in Dynamic Lighting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119436", "abstract": "Simultaneous Localization and Mapping (SLAM) systems typically assume static, distant illumination; however, many real-world scenarios, such as endoscopy, subterranean robotics, and search & rescue in collapsed environments, require agents to operate with a co-located light and camera in the absence of external lighting. 
In such cases, dynamic near-field lighting introduces strong, view-dependent shading that significantly degrades SLAM performance. We introduce Near-Field Lighting Bundle Adjustment Loss (NFL-BA), which explicitly models near-field lighting as part of the Bundle Adjustment loss and enables better performance for scenes captured with dynamic lighting. NFL-BA can be integrated into neural rendering-based SLAM systems with implicit or explicit scene representations. Our evaluations mainly focus on endoscopy procedures, where SLAM can enable autonomous navigation, guidance to unsurveyed regions, blindspot detections, and 3D visualizations, which can significantly improve patient outcomes and the endoscopy experience for both physicians and patients. Replacing the Photometric Bundle Adjustment loss of SLAM systems with NFL-BA leads to significant improvement in camera tracking, 37% for MonoGS and 14% for EndoGSLAM, and leads to state-of-the-art camera tracking and mapping performance on the C3VD colonoscopy dataset. Further evaluation on indoor scenes captured with a phone camera with the flashlight turned on also demonstrates a significant improvement in SLAM performance due to NFL-BA.", "arxiv_id": "2412.13176v3", "arxiv_authors": ["Andrea Dunn Beltran", "Daniel Rho", "Marc Niethammer", "Roni Sengupta"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a33f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.572Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1652055, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ae"}, "filepath": "data/2508.11330v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995739597088334, "type": "Poster", "name": "Noise Matters: Optimizing Matching Noise for Diffusion Classifiers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115516", "abstract": "Although today's pretrained discriminative vision-language models (e.g., CLIP) have demonstrated strong perception abilities, such as zero-shot image classification, they also suffer from the bag-of-words problem and spurious bias. To mitigate these problems, some pioneering studies leverage powerful generative models (e.g., pretrained diffusion models) to realize generalizable image classification, dubbed Diffusion Classifier (DC). Specifically, by randomly sampling a Gaussian noise, DC utilizes the differences of denoising effects with different category conditions to classify categories. Unfortunately, an inherent and notorious weakness of existing DCs is noise instability: different randomly sampled noises lead to significant performance changes. To achieve stable classification performance, existing DCs always ensemble the results of hundreds of sampled noises, which significantly reduces the classification speed. To this end, we first explore the role of noise in DC, and conclude that: there are some ``good noises'' that can relieve the instability. Meanwhile, we argue that these good noises should meet two principles: 1) Frequency Matching: noise should destroy the specific frequency signals; 2) Spatial Matching: noise should destroy the specific spatial areas. Regarding both principles, we propose a novel Noise Optimization method to learn matching (i.e., good) noise for DCs: NoOp.
For Frequency Matching, NoOp first optimizes a dataset-specific noise: given a dataset and a timestep $t$, it optimizes one randomly initialized parameterized noise. For Spatial Matching, NoOp trains a Meta-Network that adopts an image as input and outputs an image-specific noise offset. The sum of the optimized noise and the noise offset is then used in DC to replace the random noise. Extensive ablations on various datasets demonstrate the effectiveness of NoOp. It is worth noting that our noise optimization is orthogonal to existing optimization methods (e.g., prompt tuning); our NoOp can even benefit from these methods to further boost performance.", "arxiv_id": "2508.11330v1", "arxiv_authors": ["Yanghao Wang", "Long Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a340"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.573Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1268775, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7af"}, "filepath": "data/2510.21122v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998766787848327, "type": "Poster", "name": "NoisyGRPO: Incentivizing Multimodal CoT Reasoning via Noise Injection and Bayesian Estimation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115801", "abstract": "Reinforcement learning (RL) has shown promise in enhancing the general Chain-of-Thought (CoT) reasoning capabilities of multimodal large language models (MLLMs). However, when applied to improve general CoT reasoning, existing RL frameworks often struggle to generalize beyond the training distribution. To address this, we propose NoisyGRPO, a systematic multimodal RL framework that introduces controllable noise into visual inputs for enhanced exploration and explicitly models the advantage estimation process via a Bayesian framework. Specifically, NoisyGRPO improves RL training by: (1) \textbf{Noise-Injected Exploration Policy}: Perturbing visual inputs with Gaussian noise to encourage exploration across a wider range of visual scenarios; and (2) \textbf{Bayesian Advantage Estimation}: Formulating advantage estimation as a principled Bayesian inference problem, where the injected noise level serves as a prior and the observed trajectory reward as the likelihood.
This Bayesian modeling fuses both sources of information to compute a robust posterior estimate of trajectory advantage, effectively guiding MLLMs to prefer visually grounded trajectories over noisy ones.Experiments on standard CoT quality, general capability, and hallucination benchmarks demonstrate that NoisyGRPO substantially improves generalization and robustness, especially in RL settings with small-scale MLLMs such as Qwen2.5-VL 3B.", "arxiv_id": "2510.21122v1", "arxiv_authors": ["Longtian Qiu", "Shan Ning", "Jiaxuan Sun", "Xuming He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a341"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.573Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1179943, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b0"}, "filepath": "data/2504.13055v3.png", "tags": [], "_media_type": "image", "_rand": 0.9998683801237577, "type": "Poster", "name": "NoisyRollout: Reinforcing Visual Reasoning with Data Augmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119493", "abstract": "Recent advances in reinforcement learning (RL) have strengthened the reasoning capabilities of vision-language models (VLMs). However, enhancing policy exploration to better scale test-time compute remains largely underexplored. In addition, VLMs continue to struggle with imperfect visual perception, which in turn affects the subsequent reasoning process.To this end, we propose **NoisyRollout**, a simple yet effective data augmentation method that mixes trajectories from both clean and moderately distorted images during RL training. By injecting targeted diversity in visual perception and the resulting reasoning patterns, NoisyRollout promotes better policy exploration through vision-oriented inductive biases, ultimately leading to more robust reasoning behaviors. We further adopt a noise annealing schedule that gradually reduces distortion strength over training, leveraging noisy signals early on while ensuring training stability in later stages.Crucially, our method is easy-to-adopt\u2014**requiring no additional training cost and no modifications to the RL objective**. Extensive experiments on $2$ distinct training datasets demonstrate that NoisyRollout achieves state-of-the-art performance among open-source RL-tuned models across $5$ out-of-domain reasoning and perception benchmarks. 
Furthermore, we validate the effectiveness of NoisyRollout across model sizes ($7$B and $32$B) and data scales (from $1$K to $6$K), highlighting its generalizability and scalability.", "arxiv_id": "2504.13055v3", "arxiv_authors": ["Xiangyan Liu", "Jinjie Ni", "Zijian Wu", "Chao Du", "Longxu Dou", "Haonan Wang", "Tianyu Pang", "Michael Qizhe Shieh"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a342"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.573Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1112426, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b1"}, "filepath": "data/2507.12107v1.png", "tags": [], "_media_type": "image", "_rand": 0.999970362060791, "type": "Poster", "name": "Non-Adaptive Adversarial Face Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117700", "abstract": "Adversarial attacks on face recognition systems (FRSs) pose serious security and privacy threats, especially when these systems are used for identity verification. In this paper, we propose a novel method for generating adversarial faces\u2014synthetic facial images that are visually distinct yet recognized as a target identity by the FRS. Unlike iterative optimization-based approaches (e.g., gradient descent or other iterative solvers), our method leverages the structural characteristics of the FRS feature space. We figure out that individuals sharing the same attribute (e.g., gender or race) form an attributed subsphere. By utilizing such subspheres, our method achieves both non-adaptiveness and a remarkably small number of queries. This eliminates the need for relying on transferability and open-source surrogate models, which have been a typical strategy when repeated adaptive queries to commercial FRSs are impossible. Despite requiring only a single non-adaptive query consisting of 100 face images, our method achieves a high success rate of over 93% against AWS\u2019s CompareFaces API at its default threshold. Furthermore, unlike many existing attacks that perturb a given image, our method can deliberately produce adversarial faces that impersonate the target identity while exhibiting high-level attributes chosen by the adversary.", "arxiv_id": "2507.12107v1", "arxiv_authors": ["Sunpill Kim", "Seunghun Paik", "Chanwoo Hwang", "Minsu Kim", "Jae Hong Seo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a343"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.573Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1119181, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b2"}, "filepath": "data/2505.21179v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992070813408954, "type": "Poster", "name": "Normalized Attention Guidance: Universal Negative Guidance for Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117946", "abstract": "Negative guidance -- explicitly suppressing unwanted attributes -- remains a fundamental challenge in diffusion models, particularly in few-step sampling regimes. 
While Classifier-Free Guidance (CFG) works well in standard settings, it fails under aggressive sampling step compression due to divergent predictions between positive and negative branches. We present Normalized Attention Guidance (NAG), an efficient, training-free mechanism that applies extrapolation in attention space with L1-based normalization and refinement. NAG restores effective negative guidance where CFG collapses while maintaining fidelity. Unlike existing approaches, NAG generalizes across architectures (UNet, DiT), sampling regimes (few-step, multi-step), and modalities (image, video), functioning as a \\textit{universal} plug-in with minimal computational overhead. Through extensive experimentation, we demonstrate consistent improvements in text alignment (CLIP Score), fidelity (FID, PFID), and human-perceived quality (ImageReward). Our ablation studies validate each design component, while user studies confirm significant preference for NAG-guided outputs. As a model-agnostic inference-time approach requiring no retraining, NAG provides effortless negative guidance for all modern diffusion frameworks -- pseudocode in the Appendix!", "arxiv_id": "2505.21179v3", "arxiv_authors": ["Dar-Yen Chen", "Hmrishav Bandyopadhyay", "Kai Zou", "Yi-Zhe Song"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a344"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.573Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3276023, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b3"}, "filepath": "data/2506.04401v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990628249347957, "type": "Poster", "name": "Normalize Filters! Classical Wisdom for Deep Vision", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115369", "abstract": "Classical image filters, such as those for averaging or differencing, are carefully normalized to ensure consistency, interpretability, and to avoid artifacts like intensity shifts, halos, or ringing. In contrast, convolutional filters learned end-to-end in deep networks lack such constraints. Although they may resemble wavelets and blob/edge detectors, they are not normalized in the same or any way. Consequently, when images undergo atmospheric transfer, their responses become distorted, leading to incorrect outcomes. We address this limitation by proposing filter normalization, followed by learnable scaling and shifting, akin to batch normalization. This simple yet effective modification ensures that the filters are atmosphere-equivariant, enabling co-domain symmetry. By integrating classical filtering principles into deep learning (applicable to both convolutional neural networks and convolution-dependent vision transformers), our method achieves significant improvements on artificial and natural intensity variation benchmarks. Our ResNet34 could even outperform CLIP by a large margin. Our analysis reveals that unnormalized filters degrade performance, whereas filter normalization regularizes learning, promotes diversity, and improves robustness and generalization.", "arxiv_id": "2506.04401v2", "arxiv_authors": ["Gustavo Perez", "Stella X. 
Yu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a345"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.573Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1037480, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b4"}, "filepath": "data/2505.14064v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999295979098745, "type": "Poster", "name": "NOVA: A Benchmark for Anomaly Localization and Clinical Reasoning in Brain MRI", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121770", "abstract": "In many real-world applications, deployed models encounter inputs that differ from the data seen during training. Out-of-distribution detection identifies whether an input stems from an unseen distribution, while open-world recognition flags such inputs to ensure the system remains robust as ever-emerging, previously *unknown* categories appear and must be addressed without retraining.Foundation and vision-language models are pre-trained on large and diverse datasets with the expectation of broad generalization across domains including medical imaging.However, benchmarking these models on test sets with only a few common outlier types silently collapses the evaluation back to a closed-set problem, masking failures on rare or truly novel conditions encountered in clinical use.We therefore present *NOVA*, a challenging, real-life *evaluation-only* benchmark of $\\sim$900 brain MRI studies that span 281 rare pathologies and heterogeneous acquisition protocols. Each case includes rich clinical narratives and double-blinded expert bounding-box annotations. Together, these enable joint assessment of anomaly localisation, visual captioning, and diagnostic reasoning. Because NOVA is never used for training, it serves as an \\textit{extreme} stress-test of out-of-distribution generalisation: models must bridge a distribution gap both in sample appearance and in semantic space. Baseline results with leading vision-language models (GPT-4o, Gemini 2.0 Flash, and Qwen2.5-VL-72B) reveal substantial performance drops across all tasks, establishing NOVA as a rigorous testbed for advancing models that can detect, localize, and reason about truly unknown anomalies.", "arxiv_id": "2505.14064v1", "arxiv_authors": ["Cosmin I. Bercea", "Jun Li", "Philipp Raffler", "Evamaria O. Riedel", "Lena Schmitzer", "Angela Kurz", "Felix Bitzer", "Paula Ro\u00dfm\u00fcller", "Julian Canisius", "Mirjam L. Beyrle", "Che Liu", "Wenjia Bai", "Bernhard Kainz", "Julia A. 
Schnabel", "Benedikt Wiestler"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a346"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.574Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1074923, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b5"}, "filepath": "data/2510.13307v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999934255367297, "type": "Poster", "name": "Novel Class Discovery for Point Cloud Segmentation via Joint Learning of Causal Representation and Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117865", "abstract": "In this paper, we focus on Novel Class Discoveryfor Point Cloud Segmentation (3D-NCD), aiming to learn a model that can segment unlabeled(novel) 3D classes using only the supervisionfrom labeled (base) 3D classes. The key of the thistask is to setup the exact correlations between thepoint representations and their base class labels,as well as the representation correlations betweenthe points from base and novel classes. A coarseor statistical correlation learning may lead to theconfusion in novel class inference. lf we imposea casual relationship as a strong correlated constraint upon the learning process, the essentialpoint cloud representations that accurately correspond to the classes should be uncovered. Tothis end, we introduce a structural causal model(SCM) to re-formalize the 3D-NCD problem andpropose a new method, i.e., Joint Learning ofCausal Representation and Reasoning. Specifically, we first analyze hidden confounders in thebase class representations and the causal relationships between the base and novel classes throughSCM. We devise a causal representation prototypethat eliminates confounders to capture the causalrepresentations of base classes. A graph structureis then used to model the causal relationships between the base classes\u2019 casual representation prototypes and the novel class prototypes, enablingcausal reasoning from base to novel classes. Extensive experiments and visualization results on3D and 2D NCD semantic segmentation demonstrate the superiorities of our method.", "arxiv_id": "2510.13307v2", "arxiv_authors": ["Yang Li", "Aming Wu", "Zihao Zhang", "Yahong Han"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a347"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.574Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1028211, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b6"}, "filepath": "data/2506.04227v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992537545600776, "type": "Poster", "name": "Object-centric 3D Motion Field for Robot Learning from Human Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116341", "abstract": "Learning robot control policies from human videos is a promising direction for scaling up robot learning. However, how to extract action knowledge (or action representations) from videos for policy learning remains a key challenge. 
Existing action representations such as video frames, pixelflow, and pointcloud flow have inherent limitations such as modeling complexity or loss of information. In this paper, we propose to use an object-centric 3D motion field to represent actions for robot learning from human videos, and present a novel framework for extracting this representation from videos for zero-shot control. We introduce two novel components. First, a novel training pipeline for training a ``denoising'' 3D motion field estimator that robustly extracts fine object 3D motions from human videos with noisy depth. Second, a dense object-centric 3D motion field prediction architecture that favors both cross-embodiment transfer and policy generalization to background. We evaluate the system in real-world setups. Experiments show that our method reduces 3D motion estimation error by over 50% compared to the latest method, achieves a 55% average success rate in diverse tasks where prior approaches fail ($\lesssim 10$\%), and can even acquire fine-grained manipulation skills like insertion.", "arxiv_id": "2506.04227v1", "arxiv_authors": ["Zhao-Heng Yin", "Sherry Yang", "Pieter Abbeel"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a348"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.574Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2487282, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b7"}, "filepath": "data/2502.14113v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995201315103504, "type": "Poster", "name": "Object-centric binding in Contrastive Language-Image Pretraining", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116977", "abstract": "Recent advances in vision language models (VLM) have been driven by contrastive models such as CLIP, which learn to associate visual information with their corresponding text descriptions. However, these models have limitations in understanding complex compositional scenes involving multiple objects and their spatial relationships. To address these challenges, we propose a novel approach that diverges from commonly used strategies that rely on the design of fine-grained hard-negative augmentations. Instead, our work focuses on integrating inductive biases into the pretraining of CLIP-like models to improve their compositional understanding. To that end, we introduce a binding module that connects a scene graph, derived from a text description, with a slot-structured image representation, facilitating a structured similarity assessment between the two modalities. We also leverage relationships as text-conditioned visual constraints, thereby capturing the intricate interactions between objects and their contextual relationships more effectively.
Our resulting model not only enhances the performance of CLIP-based models in multi-object compositional understanding but also paves the way towards more accurate and sample-efficient image-text matching of complex scenes.", "arxiv_id": "2502.14113v1", "arxiv_authors": ["Rim Assouel", "Pietro Astolfi", "Florian Bordes", "Michal Drozdzal", "Adriana Romero-Soriano"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a349"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.574Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1282702, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b8"}, "filepath": "data/2510.04714v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998844428900163, "type": "Poster", "name": "Object-Centric Representation Learning for Enhanced 3D Semantic Scene Graph Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118536", "abstract": "3D Semantic Scene Graph Prediction aims to detect objects and their semantic relationships in 3D scenes, and has emerged as a crucial technology for robotics and AR/VR applications. While previous research has addressed dataset limitations and explored various approaches including Open-Vocabulary settings, they frequently fail to optimize the representational capacity of object and relationship features, showing excessive reliance on Graph Neural Networks despite insufficient discriminative capability. In this work, we demonstrate through extensive analysis that the quality of object features plays a critical role in determining overall scene graph accuracy. To address this challenge, we design a highly discriminative object feature encoder and employ a contrastive pretraining strategy that decouples object representation learning from the scene graph prediction. This design not only enhances object classification accuracy but also yields direct improvements in relationship prediction. Notably, when plugging in our pretrained encoder into existing frameworks, we observe substantial performance improvements across all evaluation metrics. Additionally, whereas existing approaches have not fully exploited the integration of relationship information, we effectively combine both geometric and semantic features to achieve superior relationship prediction. 
Comprehensive experiments on the 3DSSG dataset demonstrate that our approach significantly outperforms previous state-of-the-art methods.", "arxiv_id": "2510.04714v1", "arxiv_authors": ["KunHo Heo", "GiHyun Kim", "SuYeon Kim", "MyeongAh Cho"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a34a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.574Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1017170, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7b9"}, "filepath": "data/2505.21635v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994862668660166, "type": "Poster", "name": "Object Concepts Emerge from Motion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118157", "abstract": "Object concepts play a foundational role in human visual cognition, enabling perception, memory, and interaction in the physical world. Inspired by findings in developmental neuroscience\u2014where infants are shown to acquire object understanding through observation of motion\u2014we propose a biologically inspired framework for learning object-centric visual representations in an unsupervised manner.Our key insight is that motion boundary serves as a strong signal for object-level grouping, which can be used to derive pseudo instance supervision from raw videos. Concretely, we generate motion-based instance masks using off-the-shelf optical flow and clustering algorithms, and use them to train visual encoders via contrastive learning. Our framework is fully label-free and does not rely on camera calibration, making it scalable to large-scale unstructured video data.We evaluate our approach on three downstream tasks spanning both low-level (monocular depth estimation) and high-level (3D object detection and occupancy prediction) vision. Our models outperform previous supervised and self-supervised baselines and demonstrate strong generalization to unseen scenes. These results suggest that motion-induced object representations offer a compelling alternative to existing vision foundation models, capturing a crucial but overlooked level of abstraction: the visual instance.The corresponding code will be released upon paper acceptance.", "arxiv_id": "2505.21635v1", "arxiv_authors": ["Haoqian Liang", "Xiaohui Wang", "Zhichao Li", "Ya Yang", "Naiyan Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a34b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.574Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2075500, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ba"}, "filepath": "data/2506.04789v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998770255205426, "type": "Poster", "name": "Object-X: Learning to Reconstruct Multi-Modal 3D Object Representations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116149", "abstract": "Learning effective multi-modal 3D representations of objects is essential for numerous applications, such as augmented reality and robotics. 
Existing methods often rely on task-specific embeddings that are tailored either for semantic understanding or geometric reconstruction. As a result, these embeddings typically cannot be decoded into explicit geometry and simultaneously reused across tasks.In this paper, we propose Object-X, a versatile multi-modal object representation framework capable of encoding rich object embeddings (e.g., images, point cloud, text) and decoding them back into detailed geometric and visual reconstructions. Object-X operates by geometrically grounding the captured modalities in a 3D voxel grid and learning an unstructured embedding fusing the information from the voxels with the object attributes. The learned embedding enables 3D Gaussian Splatting-based object reconstruction, while also supporting a range of downstream tasks, including scene alignment, single-image 3D object reconstruction, and localization.Evaluations on two challenging real-world datasets demonstrate that Object-X produces high-fidelity novel-view synthesis comparable to standard 3D Gaussian Splatting, while significantly improving geometric accuracy. Moreover, Object-X achieves competitive performance with specialized methods in scene alignment and localization.Critically, our object-centric descriptors require 3-4 orders of magnitude less storage compared to traditional image- or point cloud-based approaches, establishing Object-X as a scalable and highly practical solution for multi-modal 3D scene representation.", "arxiv_id": "2506.04789v2", "arxiv_authors": ["Gaia Di Lorenzo", "Federico Tombari", "Marc Pollefeys", "Daniel Barath"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a34c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.574Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2403839, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7bb"}, "filepath": "data/2501.00321v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994054720648609, "type": "Poster", "name": "OCRBench v2: An Improved Benchmark for Evaluating Large Multimodal Models on Visual Text Localization and Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121614", "abstract": "Scoring the Optical Character Recognition (OCR) capabilities of Large Multimodal Models (LMMs) has witnessed growing interest. Existing benchmarks have highlighted the impressive performance of LMMs in text recognition; however, their abilities in certain challenging tasks, such as text localization, handwritten content extraction, and logical reasoning, remain underexplored. To bridge this gap, we introduce OCRBench v2, a large-scale bilingual text-centric benchmark with currently the most comprehensive set of tasks ($4\\times$ more tasks than the previous multi-scene benchmark OCRBench), the widest coverage of scenarios ($31$ diverse scenarios), and thorough evaluation metrics, with $10,000$ human-verified question-answering pairs and a high proportion of difficult samples. Moreover, we construct a private test set with $1,500$ manually annotated images. The consistent evaluation trends observed across both public and private test sets validate the \\datasetname's reliability. 
After carefully benchmarking state-of-the-art LMMs, we find that most LMMs score below $50$ ($100$ in total) and suffer from five-type limitations, including less frequently encountered text recognition, fine-grained perception, layout perception, complex element parsing, and logical reasoning. The benchmark and evaluation scripts are available at https://anonymous.4open.science/r/qytest-5FC4.", "arxiv_id": "2501.00321v2", "arxiv_authors": ["Ling Fu", "Zhebin Kuang", "Jiajun Song", "Mingxin Huang", "Biao Yang", "Yuzhe Li", "Linghao Zhu", "Qidi Luo", "Xinyu Wang", "Hao Lu", "Zhang Li", "Guozhi Tang", "Bin Shan", "Chunhui Lin", "Qi Liu", "Binghong Wu", "Hao Feng", "Hao Liu", "Can Huang", "Jingqun Tang", "Wei Chen", "Lianwen Jin", "Yuliang Liu", "Xiang Bai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a34d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1038308, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7bc"}, "filepath": "data/2506.09417v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995536962893433, "type": "Poster", "name": "ODG: Occupancy Prediction Using Dual Gaussians", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119271", "abstract": "3D occupancy provides fine-grained 3D geometry and semantics for scene understanding which is critical for autonomous driving. Most existing methods, however, carry high compute costs, requiring dense 3D feature volume and cross-attention to effectively aggregate information. More recent works have adopted Bird's Eye View (BEV) or sparse points as scene representation with much reduced cost, but still suffer from their respective shortcomings. More concretely, BEV struggles with small objects that often experience significant information loss after being projected to the ground plane. On the other hand, points can flexibly model little objects in 3D, but is inefficient at capturing flat surfaces or large objects. To address these challenges, in this paper, we present a novel 3D occupancy prediction approach, ODG, which combines BEV and sparse points based representations. We propose a dual-branch design: a query-based sparse points branch and a BEV branch. The 3D information learned in the sparse points branch is shared with the BEV stream via cross-attention, which enriches the weakened signals of difficult objects on the BEV plane. The outputs of both branches are finally fused to generate predicted 3D occupancy. We conduct extensive experiments on the Occ3D-nuScenes and Occ3D-Waymo benchmarks that demonstrate the superiority of our proposed ODG. 
Moreover, ODG also delivers competitive inference speed when compared to the latest efficient approaches.", "arxiv_id": "2506.09417v2", "arxiv_authors": ["Yunxiao Shi", "Yinhao Zhu", "Shizhong Han", "Jisoo Jeong", "Amin Ansari", "Hong Cai", "Fatih Porikli"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a34e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060829, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7bd"}, "filepath": "data/2505.18445v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992029800612452, "type": "Poster", "name": "OmniConsistency: Learning Style-Agnostic Consistency from Paired Stylization Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118712", "abstract": "Diffusion models have advanced image stylization significantly, yet two core challenges persist: (1) maintaining consistent stylization in complex scenes, particularly identity, composition, and fine details, and (2) preventing style degradation in image-to-image pipelines with style LoRAs. GPT-4o's exceptional stylization consistency highlights the performance gap between open-source methods and proprietary models. To bridge this gap, we propose \\textbf{OmniConsistency}, a universal consistency plugin leveraging large-scale Diffusion Transformers (DiTs). OmniConsistency contributes: (1) an in-context consistency learning framework trained on aligned image pairs for robust generalization; (2) a two-stage progressive learning strategy decoupling style learning from consistency preservation to mitigate style degradation; and (3) a fully plug-and-play design compatible with arbitrary style LoRAs under the Flux framework. Extensive experiments show that OmniConsistency significantly enhances visual coherence and aesthetic quality, achieving performance comparable to commercial state-of-the-art model GPT-4o.", "arxiv_id": "2505.18445v1", "arxiv_authors": ["Yiren Song", "Cheng Liu", "Mike Zheng Shou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a34f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4862478, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7be"}, "filepath": "data/2510.13660v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999107635191821, "type": "Poster", "name": "OmniGaze: Reward-inspired Generalizable Gaze Estimation In The Wild", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117994", "abstract": "Current 3D gaze estimation methods struggle to generalize across diverse data domains, primarily due to $\\textbf{i)}$ the scarcity of annotated datasets, and $\\textbf{ii)}$ the insufficient diversity of labeled data. In this work, we present OmniGaze, a semi-supervised framework for 3D gaze estimation, which utilizes large-scale unlabeled data collected from diverse and unconstrained real-world environments to mitigate domain bias and generalize gaze estimation in the wild. 
First, we build a diverse collection of unlabeled facial images, varying in facial appearances, background environments, illumination conditions, head poses, and eye occlusions. In order to leverage unlabeled data spanning a broader distribution, OmniGaze adopts a standard pseudo-labeling strategy and devises a reward model to assess the reliability of pseudo labels. Beyond pseudo labels as 3D direction vectors, the reward model also incorporates visual embeddings extracted by an off-the-shelf visual encoder and semantic cues from gaze perspective generated by prompting a Multimodal Large Language Model to compute confidence scores. Then, these scores are utilized to select high-quality pseudo labels and weight them for loss computation. Extensive experiments demonstrate that OmniGaze achieves state-of-the-art performance on five datasets under both in-domain and cross-domain settings. Furthermore, we also evaluate the efficacy of OmniGaze as a scalable data engine to build a foundation model for gaze estimation, which exhibits robust zero-shot generalization performance on four unseen datasets. The source code will be released.", "arxiv_id": "2510.13660v2", "arxiv_authors": ["Hongyu Qu", "Jianan Wei", "Xiangbo Shu", "Yazhou Yao", "Wenguan Wang", "Jinhui Tang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a350"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1042338, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7bf"}, "filepath": "data/2507.09122v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998619582059745, "type": "Poster", "name": "OmniMotion: Human Motion Generation from Expressive Texts", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115940", "abstract": "Text-to-motion generation has experienced remarkable progress in recent years. However, current approaches remain limited to synthesizing motion from short or general text prompts, primarily due to dataset constraints. This limitation undermines fine-grained controllability and generalization to unseen prompts. In this paper, we introduce OmniMotion, a new text-motion dataset featuring high-quality motion capture data paired with accurate, \\textit{expressive} textual annotations. The dataset comprises 20K motion clips totaling 44 hours, accompanied by 122 detailed textual descriptions averaging 48 words per description (vs. 12 words of HumanML3D). Importantly, these motion clips preserve original temporal continuity as they were in long sequences, facilitating research in long-term motion generation and blending. We also improve upon previous generative masked modeling approaches. Our model, MoMask++, transforms motion into \\textbf{multi-scale} token sequences that better exploit the token capacity, and learns to generate all tokens using a single generative masked transformer. MoMask++ achieves state-of-the-art performance on both HumanML3D and OmniMotion benchmarks. 
Additionally, we demonstrate the ability to process casual user prompts by employing an LLM to reformat inputs to align with the expressivity and narration style of OmniMotion.", "arxiv_id": "2507.09122v2", "arxiv_authors": ["Chuan Guo", "Inwoo Hwang", "Jian Wang", "Bing Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a351"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1018306, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c0"}, "filepath": "data/2505.20256v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997344851926602, "type": "Poster", "name": "Omni-R1: Reinforcement Learning for Omnimodal Reasoning via Two-System Collaboration", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119706", "abstract": "Enabling intelligent systems to simultaneously process and reason over information from multiple modalities\u2014such as text, video, and audio\u2014in complex real-world scenarios, while performing planning and precise decision-making, has long been regarded as the ultimate goal in the field of artificial intelligence.Recently, with the rapid advancement of multimodal pretraining and supervised fine-tuning (SFT) techniques, a series of Omni-modal models have emerged, bringing us closer to this vision.However, current Omni-modal models still exhibit significant shortcomings in understanding and reasoning over long video and audio sequences, as well as in fine-grained, pixel-level comprehension tasks.On the other hand, while reinforcement learning (RL) has achieved remarkable success in enhancing the reasoning capabilities of MLLMs in pure-text domains, its application within Omni-modal models remains at an early stage.This is due to two major challenges: first, the lack of effective reasoning datasets and task formulations for training Omni-modal models; second, the effectiveness of existing RL techniques in Omni-modal settings has not yet been thoroughly validated.To address these issues, we focus on two of the most challenging tasks: Referring Audio-Visual Segmentation (RefAVS) and Referring Video Object Segmentation (REVOS).We decouple the complex video understanding task into two key subtasks: (1) identifying long-range keyframes in videos, and (2) generating task-specific re-captions.We propose a fully RL-based framework that leverages large-scale existing datasets to jointly optimize both capabilities of Omni-modal models.Experimental results demonstrate that our model achieves state-of-the-art performance on both the RefAVS and REVOS benchmarks.Furthermore, we show that our RL approach brings significant improvements on other general-purpose understanding tasks as well.", "arxiv_id": "2505.20256v1", "arxiv_authors": ["Hao Zhong", "Muzhi Zhu", "Zongze Du", "Zheng Huang", "Canyu Zhao", "Mingyu Liu", "Wen Wang", "Hao Chen", "Chunhua Shen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a352"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1038546, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 
3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c1"}, "filepath": "data/2505.21724v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999788872397144, "type": "Poster", "name": "OmniResponse: Online Multimodal Conversational Response Generation in Dyadic Interactions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115393", "abstract": "In this paper, we introduce Online Multimodal Conversational Response Generation (OMCRG), a novel task that aims to generate synchronized verbal and non-verbal listener feedback in real-time, conditioned on the speaker's multimodal input. OMCRG reflects natural dyadic interactions and poses new challenges in achieving synchronization between the generated audio and facial responses of the listener.To address these challenges, we innovatively introduce text as an intermediate modality to bridge the audio and facial responses. We hence propose OmniResponse, a Multimodal Large Language Model (MLLM) that autoregressively generates high-quality multi-modal listener responses. OmniResponse leverages a pretrained LLM enhanced with two novel components: Chrono-Text, which temporally anchors generated text tokens, and TempoVoice, a controllable online TTS module that produces speech synchronized with facial reactions. To support further OMCRG research, we present ResponseNet, a new dataset comprising 696 high-quality dyadic interactions featuring synchronized split-screen videos, multichannel audio, transcripts, and facial behavior annotations. Comprehensive evaluations conducted on ResponseNet demonstrate that OmniResponse significantly outperforms baseline models in terms of semantic speech content, audio-visual synchronization, and generation quality.Our dataset, code, and models will be made publicly available.", "arxiv_id": "2505.21724v1", "arxiv_authors": ["Cheng Luo", "Jianghui Wang", "Bing Li", "Siyang Song", "Bernard Ghanem"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a353"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1046021, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c2"}, "filepath": "data/2509.10813v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999515748627645, "type": "Poster", "name": "OmniScenes: A Large-scale Simulatable Indoor Scene Dataset with Realistic Layouts", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121859", "abstract": "The advancement of Embodied AI heavily relies on large-scale, simulatable 3D scene datasets characterized by scene diversity and realistic layouts. However, existing datasets typically suffer from limitations in diversity or simulatability, sanitized layouts lacking small items, and severe object collisions. To address these shortcomings, we introduce \\textbf{OmniScenes}, a novel large-scale simulatable indoor scene dataset comprising approximately 40,000 diverse scenes by integrating three disparate scene sources, \\ie, real-world scans, procedurally generated scenes, and designer-created scenes, including 1.96M objects and 800k CAD models that cover 15 common scene types and 288 object classes, resulting in complex layouts that have most-ever 41.5 objects per region in average. 
Our comprehensive data processing pipeline ensures simulatability by creating real-to-sim replicas for real-world scans, achieves realistic layouts by preserving small items, and enhances interactivity by incorporating interactive objects and resolving collisions. We demonstrate the value of OmniScenes with two benchmark applications: scene layout generation and point-goal navigation. Both show the new challenges posed by the complex and realistic layouts. More importantly, OmniScenes paves the way for scaling up the model training for both tasks, making the generation and navigation in such complex scenes possible. We commit to open-sourcing the data, models, and benchmarks to benefit the community.", "arxiv_id": "2509.10813v2", "arxiv_authors": ["Weipeng Zhong", "Peizhou Cao", "Yichen Jin", "Li Luo", "Wenzhe Cai", "Jingli Lin", "Hanqing Wang", "Zhaoyang Lyu", "Tai Wang", "Bo Dai", "Xudong Xu", "Jiangmiao Pang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a354"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4504970, "mime_type": "image/png", "width": 4134, "height": 5847, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c3"}, "filepath": "data/2509.15096v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990367752844503, "type": "Poster", "name": "OmniSegmentor: A Flexible Multi-Modal Learning Framework for Semantic Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118620", "abstract": "Recent research on representation learning has proved the merits of multi-modal clues for robust semantic segmentation. Nevertheless, a flexible pretrain-and-finetune pipeline for multiple visual modalities remains unexplored. In this paper, we propose a novel multi-modal learning framework, termed OmniSegmentor. It has two key innovations: 1) Based on ImageNet, we assemble a large-scale dataset for multi-modal pretraining, called OmniSegmentor, which contains five popular visual modalities; 2) We provide an efficient pretraining manner to endow the model with the capacity to encode different modality information in the OmniSegmentor. For the first time, we introduce a universal multi-modal pretraining framework that consistently amplifies the model's perceptual capabilities across various scenarios, regardless of the arbitrary combination of the involved modalities. Remarkably, our OmniSegmentor achieves new state-of-the-art records on a wide range of multi-modal semantic segmentation datasets, including NYU Depthv2, EventScape, MFNet, DeLiVER, SUNRGBD, and KITTI-360. 
Data, model checkpoints, and source code will be made publicly available.", "arxiv_id": "2509.15096v1", "arxiv_authors": ["Bo-Wen Yin", "Jiao-Long Cao", "Xuying Zhang", "Yuming Chen", "Ming-Ming Cheng", "Qibin Hou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a355"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1580482, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c4"}, "filepath": "data/2504.06263v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993548265523108, "type": "Poster", "name": "OmniSVG: A Unified Scalable Vector Graphics Generation Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115696", "abstract": "Scalable Vector Graphics (SVG) is an important image format widely adopted in graphic design because of its resolution independence and editability. The study of generating high-quality SVG has continuously drawn attention from both designers and researchers in the AIGC community. However, existing methods either produce unstructured outputs with huge computational costs or are limited to generating monochrome icons of over-simplified structures. To produce high-quality and complex SVG, we propose OmniSVG, a unified framework that leverages pre-trained Vision-Language Models (VLMs) for end-to-end multimodal SVG generation. By parameterizing SVG commands and coordinates into discrete tokens, OmniSVG decouples structural logic from low-level geometry for efficient training while maintaining the expressiveness of complex SVG structures. To further advance the development of SVG synthesis, we introduce MMSVG-2M, a multimodal dataset with two million richly annotated SVG assets, along with a standardized evaluation protocol for conditional SVG generation tasks. Extensive experiments show that OmniSVG outperforms existing methods and demonstrates its potential for integration into professional SVG design workflows.", "arxiv_id": "2504.06263v2", "arxiv_authors": ["Yiying Yang", "Wei Cheng", "Sijin Chen", "Xianfang Zeng", "Fukun Yin", "Jiaxu Zhang", "Liao Wang", "Gang Yu", "Xingjun Ma", "Yu-Gang Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a356"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2071378, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c5"}, "filepath": "data/2505.21448v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997026672433443, "type": "Poster", "name": "OmniSync: Towards Universal Lip Synchronization via Diffusion Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119534", "abstract": "Lip synchronization is the task of aligning a speaker\u2019s lip movements in video with corresponding speech audio, and it is essential for creating realistic, expressive video content. However, existing methods often rely on reference frames and masked-frame inpainting, which limit their robustness to identity consistency, pose variations, facial occlusions, and stylized content. 
In addition, since audio signals provide weaker conditioning than visual cues, lip shape leakage from the original video will affect lip sync quality. In this paper, we present OmniSync, a universal lip synchronization framework for diverse visual scenarios. Our approach introduces a mask-free training paradigm using Diffusion Transformer models for direct frame editing without explicit masks, enabling unlimited-duration inference while maintaining natural facial dynamics and preserving character identity. During inference, we propose a flow-matching-based progressive noise initialization to ensure pose and identity consistency, while allowing precise mouth-region editing. To address the weak conditioning signal of audio, we develop a Dynamic Spatiotemporal Classifier-Free Guidance (DS-CFG) mechanism that adaptively adjusts guidance strength over time and space. We also establish the AIGC-LipSync Benchmark, the first evaluation suite for lip synchronization in diverse AI-generated videos. Extensive experiments demonstrate that OmniSync significantly outperforms prior methods in both visual quality and lip sync accuracy, achieving superior results in both real-world and AI-generated videos.", "arxiv_id": "2505.21448v2", "arxiv_authors": ["Ziqiao Peng", "Jiwen Liu", "Haoxian Zhang", "Xiaoqiang Liu", "Songlin Tang", "Pengfei Wan", "Di Zhang", "Hongyan Liu", "Jun He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a357"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5356913, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c6"}, "filepath": "data/2504.02433v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991828883366786, "type": "Poster", "name": "OmniTalker: One-shot Real-time Text-Driven Talking Audio-Video Generation With Multimodal Style Mimicking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116929", "abstract": "Although significant progress has been made in audio-driven talking head generation, text-driven methods remain underexplored. In this work, we present OmniTalker, a unified framework that jointly generates synchronized talking audio-video content from input text while emulating the target identity's speaking and facial movement styles, including speech characteristics, head motion, and facial dynamics. Our framework adopts a dual-branch diffusion transformer (DiT) architecture, with one branch dedicated to audio generation and the other to video synthesis. At the shallow layers, cross-modal fusion modules are introduced to integrate information between the two modalities. In deeper layers, each modality is processed independently, with the generated audio decoded by a vocoder and the video rendered using a GAN-based high-quality visual renderer. Leveraging DiT\u2019s in-context learning capability through a masked-infilling strategy, our model can simultaneously capture both audio and visual styles without requiring explicit style extraction modules. Thanks to the efficiency of the DiT backbone and the optimized visual renderer, OmniTalker achieves real-time inference at 25 FPS. To the best of our knowledge, OmniTalker is the first one-shot framework capable of jointly modeling speech and facial styles in real time. 
Extensive experiments demonstrate its superiority over existing methods in terms of generation quality, particularly in preserving style consistency and ensuring precise audio-video synchronization, all while maintaining efficient inference.", "arxiv_id": "2504.02433v2", "arxiv_authors": ["Zhongjian Wang", "Peng Zhang", "Jinwei Qi", "Guangyuan Wang", "Chaonan Ji", "Sheng Xu", "Bang Zhang", "Liefeng Bo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a358"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1050553, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c7"}, "filepath": "data/2508.13632v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995952710419831, "type": "Poster", "name": "OmniTry: Virtual Try-On Anything without Masks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116675", "abstract": "Virtual Try-ON (VTON) is a practical and widely applied task, for which most existing works focus on clothes. This paper presents OmniTry, a unified framework that extends VTON beyond garments to encompass any wearable object, e.g., jewelry and accessories, in a mask-free setting for more practical applications. When extending to various types of objects, it is challenging to curate paired images, i.e., the object image and the corresponding try-on result. To tackle this problem, we propose a two-stage pipeline: In the first stage, we leverage large-scale unpaired images, i.e., portraits with any wearable items, to train the model for mask-free localization. Specifically, we repurpose the inpainting model to automatically draw objects in suitable positions given an empty mask. In the second stage, the model is further fine-tuned with paired images to transfer the consistency of object appearance. We observed that the model after the first stage shows quick convergence even with few paired samples. OmniTry is evaluated on a comprehensive benchmark consisting of 12 common classes of wearable objects, with both in-shop and in-the-wild images. Experimental results suggest that OmniTry shows better performance on both object localization and ID-preservation compared with existing methods. 
The code, model weights, and evaluation benchmark of OmniTry will be made publicly available.", "arxiv_id": "2508.13632v1", "arxiv_authors": ["Yutong Feng", "Linlin Zhang", "Hengyuan Cao", "Yiming Chen", "Xiaoduan Feng", "Jian Cao", "Yuxiong Wu", "Bin Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a359"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3797330, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c8"}, "filepath": "data/2506.23361v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997368002938305, "type": "Poster", "name": "OmniVCus: Feedforward Subject-driven Video Customization with Multimodal Control Conditions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116563", "abstract": "Existing feedforward subject-driven video customization methods mainly study single-subject scenarios due to the difficulty of constructing multi-subject training data pairs. Another challenging and still underexplored problem is how to use signals such as depth, mask, camera, and text prompts to control and edit the subject in the customized video. In this paper, we first propose a data construction pipeline, VideoCus-Factory, to produce training data pairs for multi-subject customization from raw videos without labels and control signals such as depth-to-video and mask-to-video pairs. Based on our constructed data, we develop an Image-Video Transfer Mixed (IVTM) training with image editing data to enable instructive editing for the subject in the customized video. Then we propose a diffusion Transformer framework, OmniVCus, with two embedding mechanisms, Lottery Embedding (LE) and Temporally Aligned Embedding (TAE). LE enables inference with more subjects by using the training subjects to activate more frame embeddings. TAE encourages the generation process to extract guidance from temporally aligned control signals by assigning the same frame embeddings to the control and noise tokens. Experiments demonstrate that our method significantly surpasses state-of-the-art methods in both quantitative and qualitative evaluations.", "arxiv_id": "2506.23361v2", "arxiv_authors": ["Yuanhao Cai", "He Zhang", "Xi Chen", "Jinbo Xing", "Yiwei Hu", "Yuqian Zhou", "Kai Zhang", "Zhifei Zhang", "Soo Ye Kim", "Tianyu Wang", "Yulun Zhang", "Xiaokang Yang", "Zhe Lin", "Alan Yuille"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a35a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.575Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1128069, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7c9"}, "filepath": "data/2506.07977v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990614861809214, "type": "Poster", "name": "OneIG-Bench: Omni-dimensional Nuanced Evaluation for Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121649", "abstract": "Text-to-image (T2I) models have garnered significant attention for generating high-quality images aligned with text prompts. 
However, rapid advancements in T2I models reveal limitations in early benchmarks, which lack comprehensive evaluations, especially for text rendering and style. Notably, recent state-of-the-art models, with their rich knowledge modeling capabilities, show potential in reasoning-driven image generation, yet existing evaluation systems have not adequately addressed this frontier. To systematically address these gaps, we introduce $\\textbf{OneIG-Bench}$, a meticulously designed comprehensive benchmark framework for fine-grained evaluation of T2I models across multiple dimensions, including subject-element alignment, text rendering precision, reasoning-generated content, stylization, and diversity. By structuring the evaluation, this benchmark enables in-depth analysis of model performance, helping researchers and practitioners pinpoint strengths and bottlenecks in the full pipeline of image generation. Our codebase and dataset are now publicly available to facilitate reproducible evaluation studies and cross-model comparisons within the T2I research community.", "arxiv_id": "2506.07977v3", "arxiv_authors": ["Jingjing Chang", "Yixiao Fang", "Peng Xing", "Shuhan Wu", "Wei Cheng", "Rui Wang", "Xianfang Zeng", "Gang Yu", "Hai-Bao Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a35b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1025457, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ca"}, "filepath": "data/2510.09008v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994062703948495, "type": "Poster", "name": "On Epistemic Uncertainty of Visual Tokens for Object Hallucinations in Large Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116514", "abstract": "Large vision-language models (LVLMs), which integrate a vision encoder (VE) with a large language model, have achieved remarkable success across various tasks. However, there are still crucial challenges in LVLMs such as object hallucination, generating descriptions of objects that are not in the input image. Here, we argue that uncertain visual tokens within the VE are a key factor that contributes to object hallucination. Our statistical analysis found that there are positive correlations between visual tokens with high epistemic uncertainty and the occurrence of hallucinations. Furthermore, we show theoretically and empirically that visual tokens in early VE layers that exhibit large representation deviations under small adversarial perturbations indicate high epistemic uncertainty. Based on these findings, we propose a simple yet effective strategy to mitigate object hallucination by modifying the VE only. Our method comprises a proxy method with adversarial perturbations for identifying uncertain visual tokens efficiently and a method to mask these uncertain visual tokens during the self-attention process in the middle layers of the VE, suppressing their influence on visual encoding and thus alleviating hallucinations. 
Extensive experiments show that our method significantly reduces object hallucinations in LVLMs and can synergistically work with other prior arts.", "arxiv_id": "2510.09008v1", "arxiv_authors": ["Hoigi Seo", "Dong Un Kang", "Hyunjin Cho", "Joohoon Lee", "Se Young Chun"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a35c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1069455, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7cb"}, "filepath": "data/2505.16687v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990243592294423, "type": "Poster", "name": "One-Step Diffusion-Based Image Compression with Semantic Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117982", "abstract": "While recent diffusion-based generative image codecs have shown impressive performance, their iterative sampling process introduces unpleasing latency. In this work, we revisit the design of a diffusion-based codec and argue that multi-step sampling is not necessary for generative compression. Based on this insight, we propose OneDC, a One-step Diffusion-based generative image Codec\u2014that integrates a latent compression module with a one-step diffusion generator. Recognizing the critical role of semantic guidance in one-step diffusion, we propose using the hyperprior as a semantic signal, overcoming the limitations of text prompts in representing complex visual content. To further enhance the semantic capability of the hyperprior, we introduce a semantic distillation mechanism that transfers knowledge from a pretrained generative tokenizer to the hyperprior codec. Additionally, we adopt a hybrid pixel- and latent-domain optimization to jointly enhance both reconstruction fidelity and perceptual realism. Extensive experiments demonstrate that OneDC achieves SOTA perceptual quality even with one-step generation, offering over 40% bitrate reduction and 20$\\times$ faster decoding compared to prior multi-step diffusion-based codecs. Code will be released later.", "arxiv_id": "2505.16687v1", "arxiv_authors": ["Naifu Xue", "Zhaoyang Jia", "Jiahao Li", "Bin Li", "Yuan Zhang", "Yan Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a35d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1036567, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7cc"}, "filepath": "data/2506.15591v3.png", "tags": [], "_media_type": "image", "_rand": 0.9997328474035531, "type": "Poster", "name": "One-Step Diffusion for Detail-Rich and Temporally Consistent Video Super-Resolution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117234", "abstract": "It is a challenging problem to reproduce rich spatial details while maintaining temporal consistency in real-world video super-resolution (Real-VSR), especially when we leverage pre-trained generative models such as stable diffusion (SD) for realistic details synthesis. 
Existing SD-based Real-VSR methods often compromise spatial details for temporal coherence, resulting in suboptimal visual quality. We argue that the key lies in how to effectively extract the degradation-robust temporal consistency priors from the low-quality (LQ) input video and enhance the video details while maintaining the extracted consistency priors. To achieve this, we propose a Dual LoRA Learning (DLoRAL) paradigm to train an effective SD-based one-step diffusion model, achieving realistic frame details and temporal consistency simultaneously. Specifically, we introduce a Cross-Frame Retrieval (CFR) module to aggregate complementary information across frames, and train a Consistency-LoRA (C-LoRA) to learn robust temporal representations from degraded inputs. After consistency learning, we fix the CFR and C-LoRA modules and train a Detail-LoRA (D-LoRA) to enhance spatial details while aligning with the temporal space defined by C-LoRA to keep temporal coherence. The two phases alternate iteratively for optimization, collaboratively delivering consistent and detail-rich outputs. During inference, the two LoRA branches are merged into the SD model, allowing efficient and high-quality video restoration in a single diffusion step. Experiments show that DLoRAL achieves strong performance in both accuracy and speed. Code and models will be released.", "arxiv_id": "2506.15591v3", "arxiv_authors": ["Yujing Sun", "Lingchen Sun", "Shuaizheng Liu", "Rongyuan Wu", "Zhengqiang Zhang", "Lei Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a35e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1097157, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7cd"}, "filepath": "data/2510.08273v5.png", "tags": [], "_media_type": "image", "_rand": 0.9995600190806213, "type": "Poster", "name": "One Stone with Two Birds: A Null-Text-Null Frequency-Aware Diffusion Models for Text-Guided Image Inpainting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119863", "abstract": "Text-guided image inpainting aims at reconstructing the masked regions as per text prompts, where the longstanding challenges lie in preserving the unmasked regions while achieving semantic consistency between the unmasked and inpainted masked regions. Previous works have failed to address both, always leaving one of them unresolved. As we observe, such failures stem from the entanglement of the hybrid (e.g., mid-and-low) frequency bands that encode varied image properties, which exhibit different robustness to text prompts during the denoising process. In this paper, we propose a null-text-null frequency-aware diffusion model, dubbed NTN-Diff, for text-guided image inpainting, which decomposes the semantic consistency across masked and unmasked regions into per-frequency-band consistencies while preserving the unmasked regions, thereby addressing both challenges at once. Based on the diffusion process, we further divide the denoising process into early (high-level noise) and late (low-level noise) stages, where the mid-and-low frequency bands are disentangled during the denoising process. 
We observe that the stable mid-frequency band is progressively denoised to become semantically aligned during the text-guided denoising process, which in turn guides the null-text denoising process in denoising the low-frequency band for the masked regions; a subsequent text-guided denoising process at the late stage then achieves semantic consistency for the mid-and-low frequency bands across masked and unmasked regions while preserving the unmasked regions. Extensive experiments validate the superiority of NTN-Diff over state-of-the-art text-guided diffusion models. Our code can be accessed in the Appendix.", "arxiv_id": "2510.08273v5", "arxiv_authors": ["Haipeng Liu", "Yang Wang", "Meng Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a35f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1023043, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ce"}, "filepath": "data/2505.22444v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999017506095794, "type": "Poster", "name": "On Geometry-Enhanced Parameter-Efficient Fine-Tuning for 3D Scene Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119729", "abstract": "The emergence of large-scale pretrained point cloud models has significantly advanced 3D scene understanding, but adapting these models to specific downstream tasks typically demands full fine-tuning, incurring high computational and storage costs. Parameter-efficient fine-tuning (PEFT) techniques, successful in natural language processing and 2D vision tasks, underperform when naively applied to 3D point cloud models due to significant geometric and spatial distribution shifts. Existing PEFT methods commonly treat points as orderless tokens, neglecting important local spatial structures and global geometric contexts in 3D modeling. To bridge this gap, we introduce the Geometric Encoding Mixer (GEM), a novel geometry-aware PEFT module specifically designed for 3D point cloud transformers. GEM explicitly integrates fine-grained local positional encodings with a lightweight latent attention mechanism to capture comprehensive global context, thereby effectively addressing the spatial and geometric distribution mismatch. Extensive experiments demonstrate that GEM achieves performance comparable to or sometimes even exceeding full fine-tuning, while only updating 1.6\\% of the model's parameters, fewer than other PEFT methods. With significantly reduced training time and memory requirements, our approach thus sets a new benchmark for efficient, scalable, and geometry-aware fine-tuning of large-scale 3D point cloud models. 
Code will be released.", "arxiv_id": "2505.22444v1", "arxiv_authors": ["Liyao Tang", "Zhe Chen", "Dacheng Tao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a360"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1094194, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7cf"}, "filepath": "data/2410.21273v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991819666105649, "type": "Poster", "name": "On Inductive Biases That Enable Generalization in Diffusion Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116300", "abstract": "Recent work studying the generalization of diffusion models with UNet-based denoisers reveals inductive biases that can be expressed via geometry-adaptive harmonic bases. In practice, more recent denoising networks are often based on transformers, e.g., the diffusion transformer (DiT). This raises the question: do transformer-based denoising networks exhibit inductive biases that can also be expressed via geometry-adaptive harmonic bases? To our surprise, we find that this is not the case. This discrepancy motivates our search for the inductive bias that can lead to good generalization in DiT models. Investigating a DiT\u2019s pivotal attention modules, we find that locality of attention maps in a DiT\u2019s early layers are closely associated with generalization. To verify this finding, we modify the generalization of a DiT by restricting its attention windows. We inject local attention windows in early layers of a DiT and observe an improvement in generalization. Furthermore, we empirically find that both the placement and the effective attention size of these local attention windows are crucial factors. Experimental results on the CelebA, ImageNet, MSCOCO, and LSUN data show that strengthening the inductive bias of a DiT can improve both generalization and generation quality when less training data is available. Source code will be released publicly upon paper publication.", "arxiv_id": "2410.21273v1", "arxiv_authors": ["Jie An", "De Wang", "Pengsheng Guo", "Jiebo Luo", "Alexander Schwing"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a361"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2785434, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d0"}, "filepath": "data/2510.20605v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996157131711958, "type": "Poster", "name": "OnlineSplatter: Pose-Free Online 3D Reconstruction for Free-Moving Objects", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117447", "abstract": "Free-moving object reconstruction from monocular video remains challenging, particularly without reliable pose or depth cues and under arbitrary object motion. We introduce OnlineSplatter, a novel online feed-forward framework generating high-quality, object-centric 3D Gaussians directly from RGB frames without requiring camera pose, depth priors, or bundle optimization. 
Our approach anchors reconstruction using the first frame and progressively refines the object representation through a dense Gaussian primitive field, maintaining constant computational cost regardless of sequence length. Our core contribution is a dual-key memory module combining latent appearance-geometry keys with explicit directional keys, robustly fusing current frame features with temporally aggregated object states. This design enables effective handling of free-moving objects via spatial-guided memory readout and an efficient sparsification mechanism, ensuring comprehensive yet compact object coverage. Evaluations on real-world datasets demonstrate that OnlineSplatter significantly outperforms state-of-the-art pose-free reconstruction baselines, consistently improving with more observations while maintaining constant memory and runtime, ideal for online robotics applications.", "arxiv_id": "2510.20605v1", "arxiv_authors": ["Mark He Huang", "Lin Geng Foo", "Christian Theobalt", "Ying Sun", "De Wen Soh"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a362"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1287855, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d1"}, "filepath": "data/2501.17356v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993128681355851, "type": "Poster", "name": "On the Coexistence and Ensembling of Watermarks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115600", "abstract": "Watermarking, the practice of embedding imperceptible information into media such as images, videos, audio, and text, is essential for intellectual property protection, content provenance and attribution. The growing complexity of digital ecosystems necessitates watermarks for different uses to be embedded in the same media. However, to detect and decode all watermarks, they need to coexist well with one another. We perform the first study of coexistence of deep image watermarking methods and, contrary to intuition, we find that various open-source watermarks can coexist with only minor impacts on image quality and decoding robustness. The coexistence of watermarks also opens the avenue for ensembling watermarking methods. We show how ensembling can increase the overall message capacity and enable new trade-offs between capacity, accuracy, robustness and image quality, without needing to retrain the base models.", "arxiv_id": "2501.17356v1", "arxiv_authors": ["Aleksandar Petrov", "Shruti Agarwal", "Philip H. S. 
Torr", "Adel Bibi", "John Collomosse"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a363"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1615330, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d2"}, "filepath": "data/2507.03683v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996992114297396, "type": "Poster", "name": "On the rankability of visual embeddings", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115697", "abstract": "We study whether visual embedding models capture continuous, ordinal attributes along linear directions, which we term _rank axes_. We define a model as _rankable_ for an attribute if projecting embeddings onto such an axis preserves the attribute's order. Across 7 popular encoders and 9 datasets with attributes like age, crowd count, head pose, aesthetics, and recency, we find that many embeddings are inherently rankable. Surprisingly, a small number of samples, or even just two extreme examples, often suffice to recover meaningful rank axes, without full-scale supervision. These findings open up new use cases for image ranking in vector databases and motivate further study into the structure and learning of rankable embeddings.", "arxiv_id": "2507.03683v1", "arxiv_authors": ["Ankit Sonthalia", "Arnas Uselis", "Seong Joon Oh"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a364"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1113438, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d3"}, "filepath": "data/2411.17761v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992696221580839, "type": "Poster", "name": "OpenAD: Open-World Autonomous Driving Benchmark for 3D Object Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121632", "abstract": "Open-world perception aims to develop a model adaptable to novel domains and various sensor configurations and can understand uncommon objects and corner cases. However, current research lacks sufficiently comprehensive open-world 3D perception benchmarks and robust generalizable methodologies. This paper introduces OpenAD, the first real open-world autonomous driving benchmark for 3D object detection. OpenAD is built upon a corner case discovery and annotation pipeline that integrates with a multimodal large language model (MLLM). The proposed pipeline annotates corner case objects in a unified format for five autonomous driving perception datasets with 2000 scenarios. In addition, we devise evaluation methodologies and evaluate various open-world and specialized 2D and 3D models. Moreover, we propose a vision-centric 3D open-world object detection baseline and further introduce an ensemble method by fusing general and specialized models to address the issue of lower precision in existing open-world methods for the OpenAD benchmark. We host an online challenge on EvalAI. 
Data, toolkit codes, and evaluation codes are available at https://github.com/VDIGPKU/OpenAD.", "arxiv_id": "2411.17761v2", "arxiv_authors": ["Zhongyu Xia", "Jishuo Li", "Zhiwei Lin", "Xinhao Wang", "Yongtao Wang", "Ming-Hsuan Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a365"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1024536, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d4"}, "filepath": "data/2505.18947v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993897134921289, "type": "Poster", "name": "OpenHOI: Open-World Hand-Object Interaction Synthesis with Multimodal Large Language Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120303", "abstract": "Understanding and synthesizing realistic 3D hand-object interactions (HOI) is critical for applications ranging from immersive AR/VR to dexterous robotics. Existing methods struggle with generalization, performing well on closed-set objects and predefined tasks but failing to handle unseen objects or open-vocabulary instructions. We introduce OpenHOI, the first framework for open-world HOI synthesis, capable of generating long-horizon manipulation sequences for novel objects guided by free-form language commands. Our approach integrates a 3D Multimodal Large Language Model (MLLM) fine-tuned for joint affordance grounding and semantic task decomposition, enabling precise localization of interaction regions (e.g., handles, buttons) and breakdown of complex instructions (e.g., \u201cFind a water bottle and take a sip\u201d) into executable sub-tasks. To synthesize physically plausible interactions, we propose an affordance-driven diffusion model paired with a training-free physics refinement stage that minimizes penetration and optimizes affordance alignment.Evaluations across diverse scenarios demonstrate OpenHOI\u2019s superiority over state-of-the-art methods in generalizing to novel object categories, multi-stage tasks, and complex language instructions.", "arxiv_id": "2505.18947v1", "arxiv_authors": ["Zhenhao Zhang", "Ye Shi", "Lingxiao Yang", "Suting Ni", "Qi Ye", "Jingya Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a366"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1017765, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d5"}, "filepath": "data/2510.21441v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990157309178135, "type": "Poster", "name": "OpenHype: Hyperbolic Embeddings for Hierarchical Open-Vocabulary Radiance Fields", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115048", "abstract": "Modeling the inherent hierarchical structure of 3D objects and 3D scenes is highly desirable, as it enables a more holistic understanding of environments for autonomous agents. Accomplishing this with implicit representations, such as Neural Radiance Fields, remains an unexplored challenge. 
Existing methods that explicitly model hierarchical structures often face significant limitations: they either require multiple rendering passes to capture embeddings at different levels of granularity, significantly increasing inference time, or rely on predefined, closed-set discrete hierarchies that generalize poorly to the diverse and nuanced structures encountered by agents in the real world. To address these challenges, we propose OpenHype, a novel approach that represents scene hierarchies using a continuous hyperbolic latent space. By leveraging the properties of hyperbolic geometry, OpenHype naturally encodes multi-scale relationships and enables smooth traversal of hierarchies through geodesic paths in latent space. Our method outperforms state-of-the-art approaches on standard benchmarks, demonstrating superior efficiency and adaptability in 3D scene understanding.", "arxiv_id": "2510.21441v1", "arxiv_authors": ["Lisa Weijler", "Sebastian Koch", "Fabio Poiesi", "Timo Ropinski", "Pedro Hermosilla"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a367"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 954140, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d6"}, "filepath": "data/2503.01691v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993183235636585, "type": "Poster", "name": "Open-Insect: Benchmarking Open-Set Recognition of Novel Species in Biodiversity Monitoring", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121829", "abstract": "Global biodiversity is declining at an unprecedented rate, yet little information is known about most species and how their populations are changing. Indeed, some 90% of Earth\u2019s species are estimated to be completely unknown. Machine learning has recently emerged as a promising tool to facilitate long-term, large-scale biodiversity monitoring, including algorithms for fine-grained classification of species from images. However, such algorithms typically are not designed to detect examples from categories unseen during training \u2013 the problem of open-set recognition (OSR) \u2013 limiting their applicability for highly diverse, poorly studied taxa such as insects. To address this gap, we introduce Open-Insect, a large-scale, fine-grained dataset to evaluate unknown species detection across different geographic regions with varying difficulty. We benchmark 38 OSR algorithms across three categories: post-hoc, training-time regularization, and training with auxiliary data, finding that simple post-hoc approaches remain a strong baseline. We also demonstrate how to leverage auxiliary data to improve species discovery in regions with limited data. Our results provide timely insights to guide the development of computer vision methods for biodiversity monitoring and species discovery.", "arxiv_id": "2503.01691v1", "arxiv_authors": ["Yuyan Chen", "Nico Lang", "B. 
Christian Schmidt", "Aditya Jain", "Yves Basset", "Sara Beery", "Maxim Larriv\u00e9e", "David Rolnick"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a368"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.576Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1925351, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d7"}, "filepath": "data/2503.19764v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998151972741446, "type": "Poster", "name": "OpenLex3D: A Tiered Benchmark for Open-Vocabulary 3D Scene Representations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121593", "abstract": "3D scene understanding has been transformed by open-vocabulary language models that enable interaction via natural language. However, at present the evaluation of these representations is limited to datasets with closed-set semantics that do not capture the richness of language. This work presents OpenLex3D, a dedicated benchmark for evaluating 3D open-vocabulary scene representations. OpenLex3D provides entirely new label annotations for scenes from Replica, ScanNet++, and HM3D, which capture real-world linguistic variability by introducing synonymical object categories and additional nuanced descriptions. Our label sets provide 13 times more labels per scene than the original datasets. By introducing an open-set 3D semantic segmentation task and an object retrieval task, we evaluate various existing 3D open-vocabulary methods on OpenLex3D, showcasing failure cases, and avenues for improvement. Our experiments provide insights on feature precision, segmentation, and downstream capabilities. The benchmark is publicly available at: https://openlex3d.github.io/.", "arxiv_id": "2503.19764v2", "arxiv_authors": ["Christina Kassab", "Sacha Morin", "Martin B\u00fcchner", "Mat\u00edas Mattamala", "Kumaraditya Gupta", "Abhinav Valada", "Liam Paull", "Maurice Fallon"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a369"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2641687, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d8"}, "filepath": "data/2505.20292v4.png", "tags": [], "_media_type": "image", "_rand": 0.9993494928442067, "type": "Poster", "name": "OpenS2V-Nexus: A Detailed Benchmark and Million-Scale Dataset for Subject-to-Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121595", "abstract": "Subject-to-Video (S2V) generation aims to create videos that faithfully incorporate reference content, providing enhanced flexibility in the production of videos. 
To establish the infrastructure for S2V generation, we propose **OpenS2V-Nexus**, consisting of (i) **OpenS2V\u2011Eval**, a fine\u2011grained benchmark, and (ii) **OpenS2V\u20115M**, a million\u2011scale dataset.In contrast to existing S2V benchmarks inherited from VBench that focus on global and coarse-grained assessment of generated videos, *OpenS2V-Eval* focuses on the model's ability to generate subject-consistent videos with natural subject appearance and identity fidelity. For these purposes, *OpenS2V-Eval* introduces 180 prompts from seven major categories of S2V, which incorporate both real and synthetic test data. Furthermore, to accurately align human preferences with S2V benchmarks, we propose three automatic metrics, NexusScore, NaturalScore and GmeScore, to separately quantify subject consistency, naturalness, and text relevance in generated videos. Building on this, we conduct a comprehensive evaluation of 11 representative S2V models, highlighting their strengths and weaknesses across different content. Moreover, we create the first open-source large-scale S2V generation dataset *OpenS2V-5M*, which consists of five million high-quality 720P subject-text-video triplets. Specifically, we ensure subject\u2010information diversity in our dataset by (1) segmenting subjects and building pairing information via cross\u2010video associations and (2) prompting GPT-4o on raw frames to synthesize multi-view representations. Through *OpenS2V-Nexus*, we deliver a robust infrastructure to accelerate future S2V generation research.", "arxiv_id": "2505.20292v4", "arxiv_authors": ["Shenghai Yuan", "Xianyi He", "Yufan Deng", "Yang Ye", "Jinfa Huang", "Bin Lin", "Jiebo Luo", "Li Yuan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a36a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1159736, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7d9"}, "filepath": "data/2507.05255v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995729079945086, "type": "Poster", "name": "Open Vision Reasoner: Transferring Linguistic Cognitive Behavior for Visual Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116059", "abstract": "Cognitive behaviors such as subgoal decomposition, reflection, and verification have emerged as key drivers of reasoning performance in large language models (LLMs) trained with reinforcement learning (RL). While recent studies have formalized these patterns and demonstrated their impact on generalization, it remains unclear whether such behaviors can transfer from language to vision in multimodal settings.In this work, we present a simple yet revealing observation: a vision-language model trained with language-only cold start exhibits surprising gains on visual reasoning tasks, along with emergent visual reflection behaviors. 
Motivated by DeepSeek-R1 and Open-Reasoner-Zero, we propose a three-stage training pipeline to systematically study cross-modal cognitive behavior transfer: (1) supervised fine-tuning on language-only reasoning data, (2) language-only RL to reinforce cognitive traits, and (3) multimodal RL for cross-modal adaptation.Our 7B model, \\textbf{Open-Vision-Reasoner}, achieves strong performance across both language and vision benchmarks, including 94.1% on MATH500, 50.0% on MathVision, and 52.9% on MathVerse. Our visual cognitive behavior analysis further reveals the mechanisms behind these gains and provides a promising path toward general-purpose multimodal reasoning.", "arxiv_id": "2507.05255v2", "arxiv_authors": ["Yana Wei", "Liang Zhao", "Jianjian Sun", "Kangheng Lin", "Jisheng Yin", "Jingcheng Hu", "Yinmin Zhang", "En Yu", "Haoran Lv", "Zejia Weng", "Jia Wang", "Chunrui Han", "Yuang Peng", "Qi Han", "Zheng Ge", "Xiangyu Zhang", "Daxin Jiang", "Vishal M. Patel"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a36b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1041119, "mime_type": "image/png", "width": 4134, "height": 5847, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7da"}, "filepath": "data/2503.17352v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997073992163988, "type": "Poster", "name": "OpenVLThinker: Complex Vision-Language Reasoning via Iterative SFT-RL Cycles", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116720", "abstract": "We introduce *OpenVLThinker*, one of the first open-source large vision\u2013language models (LVLMs) to exhibit sophisticated chain-of-thought reasoning, achieving notable performance gains on challenging visual reasoning tasks. While text-based reasoning models (e.g., Deepseek R1) show promising results in text-only tasks, distilling their reasoning into LVLMs via supervised fine-tuning (SFT) often results in performance degradation due to imprecise visual grounding. Conversely, purely reinforcement learning (RL)-based methods face a large search space, hindering the emergence of reflective behaviors in smaller models (e.g., 7B LVLMs). Surprisingly, alternating between SFT and RL ultimately results in significant performance improvements after a few iterations. Our analysis reveals that the base model rarely exhibits reasoning behaviors initially, but SFT effectively surfaces these latent actions and narrows the RL search space, accelerating the development of reasoning capabilities. Each subsequent RL stage further refines the model's reasoning skills, producing higher-quality SFT data for continued self-improvement. OpenVLThinker-7B consistently advances performance across six benchmarks demanding mathematical and general reasoning, notably improving MathVista by 3.2\\%, EMMA by 1.4\\%, and HallusionBench by 2.7\\%. 
Beyond demonstrating the synergy between SFT and RL for complex reasoning tasks, our findings provide early evidence towards achieving R1-style reasoning in multimodal contexts.", "arxiv_id": "2503.17352v2", "arxiv_authors": ["Yihe Deng", "Hritik Bansal", "Fan Yin", "Nanyun Peng", "Wei Wang", "Kai-Wei Chang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a36c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 915960, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7db"}, "filepath": "data/2412.00744v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993373785857803, "type": "Poster", "name": "Open-World Drone Active Tracking with Goal-Centered Rewards", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118512", "abstract": "Drone Visual Active Tracking aims to autonomously follow a target object by controlling the motion system based on visual observations, providing a more practical solution for effective tracking in dynamic environments. However, accurate Drone Visual Active Tracking using reinforcement learning remains challenging due to the absence of a unified benchmark and the complexity of open-world environments with frequent interference. To address these issues, we pioneer a systematic solution. First, we propose DAT, the first open-world drone active air-to-ground tracking benchmark. It encompasses 24 city-scale scenes, featuring targets with human-like behaviors and high-fidelity dynamics simulation. DAT also provides a digital twin tool for unlimited scene generation. Additionally, we propose a novel reinforcement learning method called GC-VAT, which aims to improve the performance of drone tracking targets in complex scenarios. Specifically, we design a Goal-Centered Reward to provide precise feedback across viewpoints to the agent, enabling it to expand perception and movement range through unrestricted perspectives. Inspired by curriculum learning, we introduce a Curriculum-Based Training strategy that progressively enhances the tracking performance in complex environments. 
Besides, experiments on simulator and real-world images demonstrate the superior performance of GC-VAT, achieving an approximately 400% improvement over the SOTA methods in terms of the cumulative reward metric.", "arxiv_id": "2412.00744v2", "arxiv_authors": ["Haowei Sun", "Jinwu Hu", "Zhirui Zhang", "Haoyuan Tian", "Xinze Xie", "Yufeng Wang", "Xiaohua Xie", "Yun Lin", "Zhuliang Yu", "Mingkui Tan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a36d"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1120933, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7dc"}, "filepath": "data/2507.05427v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997617374582981, "type": "Poster", "name": "OpenWorldSAM: Extending SAM2 for Universal Image Segmentation with Language Prompts", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116847", "abstract": "The ability to segment objects based on open-ended language prompts remains a critical challenge, requiring models to ground textual semantics into precise spatial masks while handling diverse and unseen categories. We present OpenWorldSAM, a framework that extends the prompt-driven Segment Anything Model v2 (SAM2) to open-vocabulary scenarios by integrating multi-modal embeddings extracted from a lightweight vision-language model (VLM). Our approach is guided by four key principles: i) Unified prompting: OpenWorldSAM supports a diverse range of prompts, including category-level and sentence-level language descriptions, providing a flexible interface for various segmentation tasks. ii) Efficiency: By freezing the pre-trained components of SAM2 and the VLM, we train only 4.5 million parameters on the COCO-stuff dataset, achieving remarkable resource efficiency. iii) Instance Awareness: We enhance the model's spatial understanding through novel positional tie-breaker embeddings and cross-attention layers, enabling effective segmentation of multiple instances. iv) Generalization: OpenWorldSAM exhibits strong zero-shot capabilities, generalizing well on unseen categories and an open vocabulary of concepts without additional training. 
Extensive experiments demonstrate that OpenWorldSAM achieves state-of-the-art performance in open-vocabulary semantic, instance, and panoptic segmentation across multiple benchmarks, including ADE20k, PASCAL, ScanNet, and SUN-RGBD.", "arxiv_id": "2507.05427v2", "arxiv_authors": ["Shiting Xiao", "Rishabh Kabra", "Yuhang Li", "Donghyun Lee", "Joao Carreira", "Priyadarshini Panda"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a36e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1231534, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7dd"}, "filepath": "data/2503.16924v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992072208105222, "type": "Poster", "name": "Optimized Minimal 3D Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118972", "abstract": "3D Gaussian Splatting (3DGS) has emerged as a powerful representation for real-time, high-performance rendering, enabling a wide range of applications. However, representing 3D scenes with numerous explicit Gaussian primitives imposes significant storage and memory overhead. Recent studies have shown that high-quality rendering can be achieved with a substantially reduced number of Gaussians when represented with high-precision attributes. Nevertheless, existing 3DGS compression methods still rely on a relatively large number of Gaussians, focusing primarily on attribute compression. This is because a smaller set of Gaussians becomes increasingly sensitive to lossy attribute compression, leading to severe quality degradation. Since the number of Gaussians is directly tied to computational costs, it is essential to reduce the number of Gaussians effectively rather than only optimizing storage. In this paper, we propose Optimized Minimal Gaussians representation (OMG), which significantly reduces storage while using a minimal number of primitives. First, we determine the distinct Gaussian from the near ones, minimizing redundancy without sacrificing quality. Second, we propose a compact and precise attribute representation that efficiently captures both continuity and irregularity among primitives. Additionally, we propose a sub-vector quantization technique for improved irregularity representation, maintaining fast training with a negligible codebook size. 
Extensive experiments demonstrate that OMG reduces storage requirements by nearly 50% compared to the previous state-of-the-art and enables 600+ FPS rendering while maintaining high rendering quality.", "arxiv_id": "2503.16924v1", "arxiv_authors": ["Joo Chan Lee", "Jong Hwan Ko", "Eunbyung Park"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a36f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5230036, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7de"}, "filepath": "data/2412.12772v3.png", "tags": [], "_media_type": "image", "_rand": 0.999553064511661, "type": "Poster", "name": "Optimize the Unseen - Fast NeRF Cleanup with Free Space Prior", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119463", "abstract": "Neural Radiance Fields (NeRF) have advanced photorealistic novel view synthesis, but their reliance on photometric reconstruction introduces artifacts, commonly known as \"floaters\". These artifacts degrade novel view quality, particularly in unseen regions where NeRF optimization is unconstrained. We propose a fast, post-hoc NeRF cleanup method that eliminates such artifacts by enforcing a Free Space Prior, ensuring that unseen regions remain empty while preserving the structure of observed areas. Unlike existing approaches that rely on Maximum Likelihood (ML) estimation or complex, data-driven priors, our method adopts a Maximum-a-Posteriori (MAP) approach with a simple yet effective global prior. This enables our method to clean artifacts in both seen and unseen areas, significantly improving novel view quality even in challenging scene regions. Our approach generalizes across diverse NeRF architectures and datasets while requiring no additional memory beyond the original NeRF. Compared to state-of-the-art cleanup methods, our method is 2.5x faster in inference and completes cleanup training in under 30 seconds. 
Our code will be made publicly available.", "arxiv_id": "2412.12772v3", "arxiv_authors": ["Leo Segre", "Shai Avidan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a370"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1028976, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7df"}, "filepath": "data/2509.23492v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992222813723024, "type": "Poster", "name": "Orientation-anchored Hyper-Gaussian for 4D Reconstruction from Casual Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116691", "abstract": "We present Orientation-anchored Gaussian Splatting (OriGS), a novel framework for high-quality 4D reconstruction from casually captured monocular videos. While recent advances extend 3D Gaussian Splatting to dynamic scenes via various motion anchors, such as graph nodes or spline control points, they often rely on low-rank assumptions and fall short in modeling complex, region-specific deformations inherent to unconstrained dynamics. OriGS addresses this by introducing a hyperdimensional representation grounded in scene orientation. We first estimate a Global Orientation Field that propagates principal forward directions across space and time, serving as stable structural guidance for dynamic modeling. Built upon this, we propose Orientation-aware Hyper-Gaussian, a unified formulation that embeds time, space, geometry, and orientation into a coherent probabilistic state. This enables inferring region-specific deformation through principled conditioned slicing, adaptively capturing diverse local dynamics in alignment with global motion intent. Experiments demonstrate the superior reconstruction fidelity of OriGS over mainstream methods in challenging real-world dynamic scenes.", "arxiv_id": "2509.23492v1", "arxiv_authors": ["Junyi Wu", "Jiachen Tao", "Haoxuan Wang", "Gaowen Liu", "Ramana Rao Kompella", "Yan Yan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a371"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1152357, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e0"}, "filepath": "data/2506.08640v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995950072739171, "type": "Poster", "name": "Orientation Matters: Making 3D Generative Models Orientation-Aligned", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117417", "abstract": "Humans intuitively perceive object shape and orientation from a single image, guided by strong priors about canonical poses. However, existing 3D generative models often produce misaligned results due to inconsistent training data, limiting their usability in downstream tasks. To address this gap, we introduce the task of orientation-aligned 3D object generation: producing 3D objects from single images with consistent orientations across categories. To facilitate this, we construct Objaverse-OA, a dataset of 14,832 orientation-aligned 3D models spanning 1,008 categories. 
Leveraging Objaverse-OA, we fine-tune two representative 3D generative models based on multi-view diffusion and 3D variational autoencoder frameworks to produce aligned objects that generalize well to unseen objects across various categories. Experimental results demonstrate the superiority of our method over post-hoc alignment approaches. Furthermore, we showcase downstream applications enabled by our aligned object generation, including zero-shot object orientation estimation via analysis-by-synthesis and efficient arrow-based object insertion.", "arxiv_id": "2506.08640v1", "arxiv_authors": ["Yichong Lu", "Yuzhuo Tian", "Zijin Jiang", "Yikun Zhao", "Yuanbo Yang", "Hao Ouyang", "Haoji Hu", "Huimin Yu", "Yujun Shen", "Yiyi Liao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a372"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 995192, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e1"}, "filepath": "data/2503.22194v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992682212788758, "type": "Poster", "name": "ORIGEN: Zero-Shot 3D Orientation Grounding in Text-to-Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119291", "abstract": "We introduce ORIGEN, the first zero-shot method for 3D orientation grounding in text-to-image generation across multiple objects and diverse categories. While previous work on spatial grounding in image generation has mainly focused on 2D positioning, it lacks control over 3D orientation. To address this, we propose a reward-guided sampling approach using a pretrained discriminative model for 3D orientation estimation and a one-step text-to-image generative flow model. While gradient-ascent-based optimization is a natural choice for reward-based guidance, it struggles to maintain image realism. Instead, we adopt a sampling-based approach using Langevin dynamics, which extends gradient ascent by simply injecting random noise\u2014requiring just a single additional line of code. Additionally, we introduce adaptive time rescaling based on the reward function to accelerate convergence. 
Our experiments show that \\textsc{Origen} outperforms both training-based and test-time guidance methods across quantitative metrics and user studies.", "arxiv_id": "2503.22194v3", "arxiv_authors": ["Yunhong Min", "Daehyeon Choi", "Kyeongmin Yeo", "Jihyun Lee", "Minhyuk Sung"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a373"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4192872, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e2"}, "filepath": "data/2509.18350v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998608634559044, "type": "Poster", "name": "OrthoLoC: UAV 6-DoF Localization and Calibration Using Orthographic Geodata", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121611", "abstract": "Accurate visual localization from aerial views is a fundamental problem with applications in mapping, large-area inspection, and search-and-rescue operations. In many scenarios, these systems require high-precision localization while operating with limited resources (e.g., no internet connection or GNSS/GPS support), making large image databases or heavy 3D models impractical. Surprisingly, little attention has been given to leveraging orthographic geodata as an alternative paradigm, which is lightweight and increasingly available through free releases by governmental authorities (e.g., the European Union). To fill this gap, we propose OrthoLoC, the first large-scale dataset comprising 16,425 UAV images from Germany and the United States with multiple modalities. The dataset addresses domain shifts between UAV imagery and geospatial data. Its paired structure enables fair benchmarking of existing solutions by decoupling image retrieval from feature matching, allowing isolated evaluation of localization and calibration performance. Through comprehensive evaluation, we examine the impact of domain shifts, data resolutions, and covisibility on localization accuracy. Finally, we introduce a refinement technique called AdHoP, which can be integrated with any feature matcher, improving matching by up to 95% and reducing translation error by up to 63%. The dataset and code are available at: https://deepscenario.github.io/OrthoLoC .", "arxiv_id": "2509.18350v2", "arxiv_authors": ["Oussema Dhaouadi", "Riccardo Marin", "Johannes Meier", "Jacques Kaiser", "Daniel Cremers"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a374"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.577Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1114655, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e3"}, "filepath": "data/2505.16091v6.png", "tags": [], "_media_type": "image", "_rand": 0.9999415148298445, "type": "Poster", "name": "OSCAR: One-Step Diffusion Codec Across Multiple Bit-rates", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115464", "abstract": "Pretrained latent diffusion models have shown strong potential for lossy image compression, owing to their powerful generative priors. 
Most existing diffusion-based methods reconstruct images by iteratively denoising from random noise, guided by compressed latent representations. While these approaches have achieved high reconstruction quality, their multi-step sampling process incurs substantial computational overhead. Moreover, they typically require training separate models for different compression bit-rates, leading to significant training and storage costs. To address these challenges, we propose a one-step diffusion codec across multiple bit-rates, termed OSCAR. Specifically, our method views compressed latents as noisy variants of the original latents, where the level of distortion depends on the bit-rate. This perspective allows them to be modeled as intermediate states along a diffusion trajectory. By establishing a mapping from the compression bit-rate to a pseudo diffusion timestep, we condition a single generative model to support reconstructions at multiple bit-rates. Meanwhile, we argue that the compressed latents retain rich structural information, thereby making one-step denoising feasible. Thus, OSCAR replaces iterative sampling with a single denoising pass, significantly improving inference efficiency. Extensive experiments demonstrate that OSCAR achieves superior performance in both quantitative and visual quality metrics. The code and models will be publicly available.", "arxiv_id": "2505.16091v6", "arxiv_authors": ["Jinpei Guo", "Yifei Ji", "Zheng Chen", "Kai Liu", "Min Liu", "Wang Rao", "Wenbo Li", "Yong Guo", "Yulun Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a375"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1109514, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e4"}, "filepath": "data/2507.07984v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994256518973355, "type": "Poster", "name": "OST-Bench: Evaluating the Capabilities of MLLMs in Online Spatio-temporal Scene Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121406", "abstract": "Recent advances in multimodal large language models (MLLMs) have shown remarkable capabilities in integrating vision and language for complex reasoning. While most existing benchmarks evaluate models under offline settings with a fixed set of pre-recorded inputs, we introduce OST-Bench, a benchmark designed to evaluate Online Spatio-Temporal understanding from the perspective of an agent actively exploring a scene. The \u201cOnline\u201d aspect emphasizes the need to process and reason over incrementally acquired observations, while the \u201cSpatio-Temporal\u201d component requires integrating current visual inputs with historical memory to support dynamic spatial reasoning. OST-Bench better reflects the challenges of real-world embodied perception. Built on an efficient data collection pipeline, OST-Bench consists of 1.4k scenes and 10k question-answer pairs collected from ScanNet, Matterport3D, and ARKitScenes. We evaluate several leading MLLMs on OST-Bench and observe that they fall short on tasks requiring complex spatio-temporal reasoning. Under the online setting, their accuracy declines as the exploration horizon extends and the memory grows. 
Through further experimental analysis, we identify common error patterns across models and find that both complex clue-based spatial reasoning demands and long-term memory retrieval requirements significantly drop model performance along two separate axes, highlighting the core challenges that must be addressed to improve online embodied reasoning. To foster further research and development in the field, our codes, dataset, and benchmark are available at https://github.com/rbler1234/OST-Bench.", "arxiv_id": "2507.07984v2", "arxiv_authors": ["Jingli Lin", "Chenming Zhu", "Runsen Xu", "Xiaohan Mao", "Xihui Liu", "Tai Wang", "Jiangmiao Pang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a376"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2642360, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e5"}, "filepath": "data/2505.20425v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999886546839357, "type": "Poster", "name": "OSVI-WM: One-Shot Visual Imitation for Unseen Tasks using World-Model-Guided Trajectory Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116908", "abstract": "Visual imitation learning enables robotic agents to acquire skills by observing expert demonstration videos. In the one-shot setting, the agent generates a policy after observing a single expert demonstration without additional fine-tuning. Existing approaches typically train and evaluate on the same set of tasks, varying only object configurations, and struggle to generalize to unseen tasks with different semantic or structural requirements. While some recent methods attempt to address this, they exhibit low success rates on hard test tasks that, despite being visually similar to some training tasks, differ in context and require distinct responses. Additionally, most existing methods lack an explicit model of environment dynamics, limiting their ability to reason about future states. To address these limitations, we propose a novel framework for one-shot visual imitation learning via world-model-guided trajectory generation. Given an expert demonstration video and the agent\u2019s initial observation, our method leverages a learned world model to predict a sequence of latent states and actions. This latent trajectory is then decoded into physical waypoints that guide the agent\u2019s execution. 
Our method is evaluated on two simulated benchmarks and three real-world robotic platforms, where it consistently outperforms prior approaches, with over 30% improvement in some cases.", "arxiv_id": "2505.20425v1", "arxiv_authors": ["Raktim Gautam Goswami", "Prashanth Krishnamurthy", "Yann LeCun", "Farshad Khorrami"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a377"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060291, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e6"}, "filepath": "data/2507.13162v1.png", "tags": [], "_media_type": "image", "_rand": 0.999636343454002, "type": "Poster", "name": "Overcoming Challenges of Long-Horizon Prediction in Driving World Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118316", "abstract": "Existing world models for autonomous driving struggle with long-horizon generation and generalization to challenging scenarios. In this work, we develop a model using simple design choices, and without additional supervision or sensors, such as maps, depth, or multiple cameras.We show that our model yields state-of-the-art performance, despite having only 469M parameters and being trained on 280h of video data. It particularly stands out in difficult scenarios like turning maneuvers and urban traffic. We test whether discrete token models possibly have advantages over continuous models based on flow matching. To this end, we set up a hybrid tokenizer that is compatible with both approaches and allows for a side-by-side comparison. Our study concludes in favor of the continuous autoregressive model, which is less brittle on individual design choices and more powerful than the model built on discrete tokens. We will open source our model upon publication.", "arxiv_id": "2507.13162v1", "arxiv_authors": ["Arian Mousakhan", "Sudhanshu Mittal", "Silvio Galesso", "Karim Farid", "Thomas Brox"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a378"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 985231, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e7"}, "filepath": "data/2509.19282v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994290126124423, "type": "Poster", "name": "OverLayBench: A Benchmark for Layout-to-Image Generation with Dense Overlaps", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121763", "abstract": "Despite steady progress in layout-to-image generation, current methods still struggle with layouts containing significant overlap between bounding boxes. We identify two primary challenges: (1) large overlapping regions and (2) overlapping instances with minimal semantic distinction. Through both qualitative examples and quantitative analysis, we demonstrate how these factors degrade generation quality. To systematically assess this issue, we introduce OverLayScore, a novel metric that quantifies the complexity of overlapping bounding boxes. 
Our analysis reveals that existing benchmarks are biased toward simpler cases with low OverLayScore values, limiting their effectiveness in evaluating models under more challenging conditions. To reduce this gap, we present OverLayBench, a new benchmark featuring balanced OverLayScore distributions and high-quality annotations. As an initial step toward improved performance on complex overlaps, we also propose CreatiLayout-AM, a model trained on a curated amodal mask dataset. Together, our contributions establish a foundation for more robust layout-to-image generation under realistic and challenging scenarios.", "arxiv_id": "2509.19282v1", "arxiv_authors": ["Bingnan Li", "Chen-Yu Wang", "Haiyang Xu", "Xiang Zhang", "Ethan Armand", "Divyansh Srivastava", "Xiaojun Shan", "Zeyuan Chen", "Jianwen Xie", "Zhuowen Tu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a379"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4254960, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e8"}, "filepath": "data/2410.11536v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994477915020417, "type": "Poster", "name": "OVS Meets Continual Learning: Towards Sustainable Open-Vocabulary Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115181", "abstract": "Open-Vocabulary Segmentation (OVS) aims to segment classes that are not present in the training dataset. However, most existing studies assume that the training data is fixed in advance, overlooking more practical scenarios where new datasets are continuously collected over time. To address this, we first analyze how existing OVS models perform under such conditions. In this context, we explore several approaches such as retraining, fine-tuning, and continual learning but find that each of them has clear limitations. To address these issues, we propose ConOVS, a novel continual learning method based on a Mixture-of-Experts framework. ConOVS dynamically combines expert decoders based on the probability that an input sample belongs to the distribution of each incremental dataset. 
Through extensive experiments, we show that ConOVS consistently outperforms existing methods across pre-training, incremental, and zero-shot test datasets, effectively expanding the recognition capabilities of OVS models when data is collected sequentially.", "arxiv_id": "2410.11536v2", "arxiv_authors": ["Dongjun Hwang", "Yejin Kim", "Minyoung Lee", "Seong Joon Oh", "Junsuk Choe"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a37a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1076388, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7e9"}, "filepath": "data/2506.04217v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992950010860141, "type": "Poster", "name": "OWMM-Agent: Open World Mobile Manipulation With Multi-modal Agentic Data Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115408", "abstract": "The rapid progress of navigation, manipulation, and vision models has made mobile manipulators capable in many specialized tasks. However, the open-world mobile manipulation (OWMM) task remains a challenge due to the need for generalization to open-ended instructions and environments, as well as the systematic complexity to integrate high-level decision making with low-level robot control based on both global scene understanding and current agent state. To address this complexity, we propose a novel multi-modal agent architecture that maintains multi-view scene frames and agent states for decision-making and controls the robot by function calling. A second challenge is the hallucination from domain shift. To enhance the agent performance, we further introduce an agentic data synthesis pipeline for the OWMM task to adapt the VLM model to our task domain with instruction fine-tuning. We highlight our fine-tuned OWMM-VLM as the first dedicated foundation model for mobile manipulators with global scene understanding, robot state tracking, and multi-modal action generation in a unified model. 
Through extensive experiments, we demonstrate that our model achieves state-of-the-art performance compared to other models. The project page is at https://owmm-vlm-project.github.io", "arxiv_id": "2506.04217v2", "arxiv_authors": ["Junting Chen", "Haotian Liang", "Lingxiao Du", "Weiyun Wang", "Mengkang Hu", "Yao Mu", "Wenhai Wang", "Jifeng Dai", "Ping Luo", "Wenqi Shao", "Lin Shao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a37b"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2310007, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ea"}, "filepath": "data/2506.23725v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992974885256656, "type": "Poster", "name": "PAC Bench: Do Foundation Models Understand Prerequisites for Executing Manipulation Policies?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121553", "abstract": "Vision-Language Models (VLMs) are increasingly pivotal for generalist robot manipulation, enabling tasks such as physical reasoning, policy generation, and failure detection. However, their proficiency in these high-level applications often assumes a deep understanding of low-level physical prerequisites, a capability that is largely unverified. To perform actions reliably, robots must comprehend intrinsic object properties (e.g., material, weight), action affordances (e.g., graspable, stackable), and physical constraints (e.g., stability, reachability, or an object\u2019s state like being closed). Despite their ubiquitous use in manipulation, we argue that off-the-shelf VLMs may lack this granular, physically-grounded understanding, as these specific prerequisites are often overlooked in their pre-training. Addressing this critical gap, we introduce PAC Bench, a comprehensive benchmark designed to systematically evaluate VLM comprehension of these core Properties, Affordances, and Constraints (PAC) from a task executability perspective. PAC Bench features a diverse dataset with over 30,000 annotations, comprising 673 real-world images (115 object classes, 15 property types, 1\u20133 affordances defined per class), 100 real-world humanoid-view scenarios and 120 unique simulated constraint scenarios across four tasks. 
Our evaluations reveal significant gaps in the ability of VLMs to grasp fundamental physical concepts, underscoring their current limitations for reliable robot manipulation and pointing to key areas that require targeted research. PAC Bench also serves as a standardized benchmark for rigorously evaluating VLM physical reasoning and guiding the development of more robust and physically grounded models for robotic manipulation.", "arxiv_id": "2506.23725v1", "arxiv_authors": ["Atharva Gundawar", "Som Sagar", "Ransalu Senanayake"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a37c"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1039140, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7eb"}, "filepath": "data/2506.02453v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999874051438367, "type": "Poster", "name": "PAID: Pairwise Angular-Invariant Decomposition for Continual Test-Time Adaptation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118888", "abstract": "Continual Test-Time Adaptation (CTTA) aims to online adapt a pre-trained model to changing environments during inference. Most existing methods focus on exploiting target data, while overlooking another crucial source of information, the pre-trained weights, which encode underutilized domain-invariant priors. This paper takes the geometric attributes of pre-trained weights as a starting point, systematically analyzing three key components: magnitude, absolute angle, and pairwise angular structure. We find that the pairwise angular structure remains stable across diverse corrupted domains and encodes domain-invariant semantic information, suggesting it should be preserved during adaptation. Based on this insight, we propose PAID (Pairwise Angular Invariant Decomposition), a prior-driven CTTA method that decomposes weight into magnitude and direction, and introduces a learnable orthogonal matrix via Householder reflections to globally rotate direction while preserving the pairwise angular structure. During adaptation, only the magnitudes and the orthogonal matrices are updated. 
PAID achieves consistent improvements over recent SOTA methods on four widely used CTTA benchmarks, demonstrating that preserving pairwise angular structure offers a simple yet effective principle for CTTA.", "arxiv_id": "2506.02453v2", "arxiv_authors": ["Kunyu Wang", "Xueyang Fu", "Yuanfei Bao", "Chengjie Ge", "Chengzhi Cao", "Wei Zhai", "Zheng-Jun Zha"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a37d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1001112, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ec"}, "filepath": "data/2506.07992v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997322866115412, "type": "Poster", "name": "PairEdit: Learning Semantic Variations for Exemplar-based Image Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118093", "abstract": "Recent advancements in text-guided image editing have achieved notable success by leveraging natural language prompts for fine-grained semantic control. However, certain editing semantics are challenging to specify precisely using textual descriptions alone. A practical alternative involves learning editing semantics from paired examples. Existing methods using paired images typically handle only simple semantic changes or require extensive training on large-scale datasets. In this paper, we introduce PairEdit, a novel visual editing method designed to effectively learn complex editing semantics from a limited number of image pairs or even a single image pair. We propose a guidance-based noise prediction that explicitly models semantic variations within paired images through the guidance direction term. Moreover, we introduce a content-preserving noise schedule to facilitate more effective semantic learning. We also propose optimizing distinct LoRAs to disentangle the learning of semantic variations and content. Extensive qualitative and quantitative evaluations demonstrate that PairEdit successfully learns intricate semantics while significantly improving content consistency compared to baseline methods. Code will be made publicly available.", "arxiv_id": "2506.07992v1", "arxiv_authors": ["Haoguang Lu", "Jiacheng Chen", "Zhenguo Yang", "Aurele Tohokantche Gnanha", "Fu Lee Wang", "Li Qing", "Xudong Mao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a37e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3749909, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ed"}, "filepath": "data/2509.26386v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992016990650129, "type": "Poster", "name": "PANDA: Towards Generalist Video Anomaly Detection via Detective-like Agent", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115891", "abstract": "Video anomaly detection (VAD) is a critical yet challenging task due to the complex and diverse nature of real-world scenarios. 
Previous methods typically rely on domain-specific training data and manual adjustments when applied to new scenarios and unseen anomaly types, suffering from high labor costs and limited generalization. Therefore, we aim to achieve generalist VAD, \\ie, automatically handle any scene and any anomaly types without training data or human involvement. In this work, we propose PANDA, a detective-like agent based on MLLMs. Specifically, we achieve PANDA by comprehensively devising four key capabilities: (1) self-adaption scene-aware strategy planning, (2) goal-driven heuristic reasoning, (3) tool-augmented self-reflection, and (4) self-improving chain-of-memory. Concretely, we develop a self-adaption scene-aware RAG mechanism, enabling PANDA to retrieve anomaly-specific knowledge for anomaly detection strategy planning. Next, we introduce a latent anomaly-guided heuristic prompt strategy to enhance reasoning precision. Furthermore, PANDA employs a progressive reflection mechanism alongside a suite of context-aware tools to iteratively refine decision-making in complex scenarios. Finally, a chain-of-memory mechanism enables PANDA to leverage historical experiences for continual performance improvement. Extensive experiments demonstrate that PANDA achieves state-of-the-art performance in multi-scenario, open-set, and complex scenario settings without training and manual involvement, validating its generalizable and robust anomaly detection capability.", "arxiv_id": "2509.26386v1", "arxiv_authors": ["Zhiwei Yang", "Chen Gao", "Mike Zheng Shou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a37f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1097619, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ee"}, "filepath": "data/2503.23793v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995784677963433, "type": "Poster", "name": "Pan-LUT: Efficient Pan-sharpening via Learnable Look-Up Tables", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118250", "abstract": "Recently, deep learning-based pan-sharpening algorithms have achieved notable advancements over traditional methods. However, many deep learning-based approaches incur substantial computational overhead during inference, especially with high-resolution images. This excessive computational demand limits the applicability of these methods in real-world scenarios, particularly in the absence of dedicated computing devices such as GPUs and TPUs. To address these challenges, we propose Pan-LUT, a novel learnable look-up table (LUT) framework for pan-sharpening that strikes a balance between performance and computational efficiency for high-resolution remote sensing images. To finely control the spectral transformation, we devise the PAN-guided look-up table (PGLUT) for channel-wise spectral mapping. To effectively capture fine-grained spatial details, we introduce the spatial details look-up table (SDLUT). Furthermore, to adaptively aggregate channel information for generating high-resolution multispectral images, we design an adaptive output look-up table (AOLUT). 
Our smallest model variant, Pan-LUT-1, contains fewer than 1M parameters and processes a 4K-resolution image in under 1 ms using a single NVIDIA GeForce RTX 2080 Ti GPU, demonstrating significantly faster performance compared to other methods. Experiments reveal that Pan-LUT efficiently processes large remote sensing images in a lightweight manner, bridging the gap to real-world applications. Furthermore, our model surpasses SOTA methods in full-resolution scenes under real-world conditions, highlighting its effectiveness and efficiency.", "arxiv_id": "2503.23793v1", "arxiv_authors": ["Zhongnan Cai", "Yingying Wang", "Yunlong Lin", "Hui Zheng", "Ge Meng", "Zixu Lin", "Jiaxin Xie", "Junbin Lu", "Yue Huang", "Xinghao Ding"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a380"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.578Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1741578, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ef"}, "filepath": "data/2505.16334v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990197136722735, "type": "Poster", "name": "Panoptic Captioning: Seeking An Equivalency Bridge for Image and Text", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118605", "abstract": "This work introduces panoptic captioning, a novel task striving to seek the minimum text equivalence of images. We take the first step towards panoptic captioning by formulating it as a task of generating a comprehensive textual description for an image, which encapsulates all entities, their respective locations and attributes, relationships among entities, as well as global image state. Through an extensive evaluation, our work reveals that state-of-the-art Multi-modal Large Language Models (MLLMs) have limited performance in solving panoptic captioning. To address this, we propose an effective data engine named PancapEngine to produce high-quality data and a novel method named PancapChain to improve panoptic captioning. Specifically, our PancapEngine first detects diverse categories of entities in images by an elaborate detection suite, and then generates required panoptic captions using entity-aware prompts. Additionally, our PancapChain explicitly decouples the challenging panoptic captioning task into multiple stages and generates panoptic captions step by step. More importantly, we contribute a comprehensive metric named PancapScore and a human-curated test set for reliable model evaluation. 
Experiments show that our PancapChain-13B model can beat state-of-the-art open-source MLLMs like InternVL-2.5-78B and even surpass proprietary models like GPT-4o and Gemini-2.0-Pro, demonstrating the effectiveness of our data engine and method.", "arxiv_id": "2505.16334v2", "arxiv_authors": ["Kun-Yu Lin", "Hongjun Wang", "Weining Ren", "Kai Han"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a381"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.579Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1122312, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f0"}, "filepath": "data/2505.22016v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991218906163867, "type": "Poster", "name": "PanoWan: Lifting Diffusion Video Generation Models to 360$^\\circ$ with Latitude/Longitude-aware Mechanisms", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119701", "abstract": "Panoramic video generation enables immersive 360$^\\circ$ content creation, valuable in applications that demand scene-consistent world exploration. However, existing panoramic video generation models struggle to leverage pre-trained generative priors from conventional text-to-video models for high-quality and diverse panoramic videos generation, due to limited dataset scale and the gap in spatial feature representations. In this paper, we introduce PanoWan to effectively lift pre-trained text-to-video models to the panoramic domain, equipped with minimal modules. PanoWan employs latitude-aware sampling to avoid latitudinal distortion, while its rotated semantic denoising and padded pixel-wise decoding ensure seamless transitions at longitude boundaries. To provide sufficient panoramic videos for learning these lifted representations, we contribute PanoVid, a high-quality panoramic video dataset with captions and diverse scenarios. Consequently, PanoWan achieves state-of-the-art performance in panoramic video generation and demonstrates robustness for zero-shot downstream tasks.", "arxiv_id": "2505.22016v2", "arxiv_authors": ["Yifei Xia", "Shuchen Weng", "Siqi Yang", "Jingqi Liu", "Chengxuan Zhu", "Minggui Teng", "Zijian Jia", "Han Jiang", "Boxin Shi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a382"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.579Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1055844, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f1"}, "filepath": "data/2507.01291v1.png", "tags": [], "_media_type": "image", "_rand": 0.999943949912774, "type": "Poster", "name": "PanTS: The Pancreatic Tumor Segmentation Dataset", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121872", "abstract": "PanTS is a large-scale, multi-institutional dataset curated to advance research in pancreatic CT analysis. 
It contains 36,390 CT scans from 145 medical centers, with expert-validated, voxel-wise annotations of over 993,000 anatomical structures, covering pancreatic tumors, pancreas head, body, and tail, and 24 surrounding anatomical structures such as vascular/skeletal structures and abdominal/thoracic organs. Each scan includes metadata such as patient age, sex, scan phase, diagnosis, voxel spacing, etc. AI models trained on PanTS achieve significantly better performance in pancreatic tumor detection, localization, and segmentation compared to those trained on existing public datasets. Our analysis indicates that these gains are directly attributable to the 12x larger-scale tumor annotations and indirectly supported by the 24 additional surrounding anatomical structures. As the largest and most comprehensive resource of its kind, PanTS offers a new benchmark for developing and evaluating AI models in pancreatic CT analysis.", "arxiv_id": "2507.01291v1", "arxiv_authors": ["Wenxuan Li", "Xinze Zhou", "Qi Chen", "Tianyu Lin", "Pedro R. A. S. Bassi", "Szymon Plotka", "Jaroslaw B. Cwikla", "Xiaoxi Chen", "Chen Ye", "Zheren Zhu", "Kai Ding", "Heng Li", "Kang Wang", "Yang Yang", "Yucheng Tang", "Daguang Xu", "Alan L. Yuille", "Zongwei Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a383"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.579Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1032990, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f2"}, "filepath": "data/2506.16054v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999972447635886, "type": "Poster", "name": "PAROAttention: Pattern-Aware ReOrdering for Efficient Sparse and Quantized Attention in Visual Generation Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117769", "abstract": "In visual generation, the quadratic complexity of attention mechanisms results in high memory and computational costs, especially for longer token sequences required in high-resolution image or multi-frame video generation. To address this, prior research has explored techniques such as sparsification and quantization. However, these techniques face significant challenges under low density and reduced bitwidths. Through systematic analysis, we identify that the core difficulty stems from the dispersed and irregular characteristics of visual attention patterns. Therefore, instead of introducing specialized sparsification and quantization design to accommodate such patterns, we propose an alternative strategy: \"reorganizing\" the attention pattern to alleviate the challenges. Inspired by the local aggregation nature of visual feature extraction, we design a novel **P**attern-**A**ware token **R**e**O**rdering (**PARO**) technique, which unifies the diverse attention patterns into a hardware-friendly block-wise pattern. 
This unification substantially simplifies and enhances both sparsification and quantization. We evaluate the performance-efficiency trade-offs of various design choices and finalize a methodology tailored for the unified pattern. Our approach, **PAROAttention**, achieves video and image generation with lossless metrics, and nearly identical results to full-precision (FP) baselines, while operating at notably lower density (**20%-30%**) and bitwidth (**INT8/INT4**), achieving a **1.9 - 2.7x** end-to-end latency speedup.", "arxiv_id": "2506.16054v1", "arxiv_authors": ["Tianchen Zhao", "Ke Hong", "Xinhao Yang", "Xuefeng Xiao", "Huixia Li", "Feng Ling", "Ruiqi Xie", "Siqi Chen", "Hongyu Zhu", "Yichong Zhang", "Yu Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a384"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.579Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1719063, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f3"}, "filepath": "data/2510.20155v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992757674091072, "type": "Poster", "name": "PartNeXt: A Next-Generation Dataset for Fine-Grained and Hierarchical 3D Part Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121726", "abstract": "Understanding objects at the level of their constituent parts is fundamental to advancing computer vision, graphics, and robotics. While datasets like PartNet have driven progress in 3D part understanding, their reliance on untextured geometries and expert-dependent annotation limits scalability and usability. We introduce PartNeXt, a next-generation dataset addressing these gaps with over 23000 high-quality, textured 3D models annotated with fine-grained, hierarchical part labels across 50 categories. We benchmark PartNeXt on two tasks: (1) class-agnostic part segmentation, where state-of-the-art methods (e.g., PartField, SAMPart3D) struggle with fine-grained and leaf-level parts, and (2) 3D part-centric question answering, a new benchmark for 3D-LLMs that reveals significant gaps in open-vocabulary part grounding. Additionally, training Point-SAM on PartNeXt yields substantial gains over PartNet, underscoring the dataset\u2019s superior quality and diversity. 
By combining scalable annotation, texture-aware labels, and multi-task evaluation, PartNeXt opens new avenues for research in structured 3D understanding.", "arxiv_id": "2510.20155v1", "arxiv_authors": ["Penghao Wang", "Yiyang He", "Xin Lv", "Yukai Zhou", "Lan Xu", "Jingyi Yu", "Jiayuan Gu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a385"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.633Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3096809, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f4"}, "filepath": "data/2505.20759v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994115328419827, "type": "Poster", "name": "PARTONOMY: Large Multimodal Models with Part-Level Visual Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115115", "abstract": "Real-world objects are composed of distinctive, object-specific parts. Identifying these parts is key to performing fine-grained, compositional reasoning\u2014yet, large multimodal models (LMMs) struggle to perform this seemingly straightforward task. In this work, we introduce PARTONOMY, an LMM benchmark designed for pixel-level part grounding. We construct PARTONOMY from existing part datasets and our own rigorously annotated set of images, encompassing 862 parts and 5346 objects for evaluation. Unlike existing datasets that simply ask models to identify generic parts, PARTONOMY utilizes highly technical concepts and challenges models to compare objects\u2019 parts, consider part-whole relationships, and justify textual predictions with visual segmentations. Our experiments demonstrate significant limitations in state-of-the-art LMMs (e.g., LISA-13B achieves only 5.9% gIoU), highlighting a critical gap in their part grounding abilities. We note that existing segmentation-enabled LMMs (segmenting LMMs) have two key architectural shortcomings: they use special [SEG] tokens not seen during pretraining which induce distribution shift, and they discard predicted segmentations instead of using past predictions to guide future ones. To address these deficiencies, we train several part-centric LMMs and propose PLUM, a novel segmenting LMM that utilizes span tagging instead of segmentation tokens and that conditions on prior predictions in a feedback loop. We find that pretrained PLUM dominates existing segmenting LMMs on reasoning segmentation, VQA, and visual hallucination benchmarks. In addition, PLUM finetuned on our proposed Explanatory Part Segmentation task is competitive with segmenting LMMs trained on significantly more segmentation data. 
Our work opens up new avenues towards enabling fine-grained, grounded visual understanding in LMMs.", "arxiv_id": "2505.20759v3", "arxiv_authors": ["Ansel Blume", "Jeonghwan Kim", "Hyeonjeong Ha", "Elen Chatikyan", "Xiaomeng Jin", "Khanh Duy Nguyen", "Nanyun Peng", "Kai-Wei Chang", "Derek Hoiem", "Heng Ji"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a386"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.633Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1165116, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f5"}, "filepath": "data/2409.16953v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998827396499509, "type": "Poster", "name": "PASS: Path-selective State Space Model for Event-based Recognition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117820", "abstract": "Event cameras are bio-inspired sensors that capture intensity changes asynchronously with distinct advantages, such as high temporal resolution. Existing methods for event-based object/action recognition predominantly sample and convert event representation at every fixed temporal interval (or frequency). However, they are constrained to processing a limited number of event lengths and show poor frequency generalization, thus not fully leveraging the event's high temporal resolution. In this paper, we present our PASS framework, exhibiting superior capacity for spatiotemporal event modeling towards a larger number of event lengths and generalization across varying inference temporal frequencies. Our key insight is to learn adaptively encoded event features via the state space models (SSMs), whose linear complexity and generalization on input frequency make them ideal for processing high temporal resolution events. Specifically, we propose a Path-selective Event Aggregation and Scan (PEAS) module to encode events into features with fixed dimensions by adaptively scanning and selecting aggregated event presentation. On top of it, we introduce a novel Multi-faceted Selection Guiding (MSG) loss to minimize the randomness and redundancy of the encoded features during the PEAS selection process. Our method outperforms prior methods on five public datasets and shows strong generalization across varying inference frequencies with less accuracy drop (ours -8.62% v.s. -20.69% for the baseline). Moreover, our model exhibits strong long spatiotemporal modeling for a broader distribution of event length (1-10^9), precise temporal perception, and effective generalization for real-world scenarios. 
Code and checkpoints will be released upon acceptance.", "arxiv_id": "2409.16953v2", "arxiv_authors": ["Jiazhou Zhou", "Kanghao Chen", "Lei Zhang", "Lin Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a387"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.633Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1039890, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f6"}, "filepath": "data/2503.06482v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990519376203525, "type": "Poster", "name": "PathVQ: Reforming Computational Pathology Foundation Model for Whole Slide Image Analysis via Vector Quantization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119104", "abstract": "Pathology whole slide image (WSI) analysis is vital for disease diagnosis and understanding. While foundation models (FMs) have driven recent advances, their scalability in pathology remains a key challenge. In particular, vision-language (VL) pathology FMs align visual features with language annotation for downstream tasks, but they rely heavily on large-scale image-text paired data, which is scarce, thus limiting generalization. On the other hand, vision-only pathology FMs can leverage abundant unlabeled data via self-supervised learning (SSL). However, current approaches often use the [CLS] token from tile-level ViTs as slide-level input for efficiency (a tile with 224\u00d7224 pixels composed of 196 patches with 16\u00d716 pixels). This SSL pretrained [CLS] token lacks alignment with downstream objectives, limiting effectiveness. We find that spatial patch tokens retain a wealth of informative features beneficial for downstream tasks, but utilizing all of them incurs up to 200\u00d7 higher computation and storage costs compared to the [CLS] token only (e.g., 196 tokens per ViT$_{224}$). This highlights a fundamental trade-off between efficiency and representational richness to build scalable pathology FMs. To address this, we propose a feature distillation framework via vector-quantization (VQ) that compresses patch tokens into discrete indices and reconstructs them via a decoder, achieving 64\u00d7 compression (1024 \u2192 16 dimensions) while preserving fidelity. We further introduce a multi-scale VQ (MSVQ) strategy, enhancing both reconstruction and providing SSL supervision for slide-level pretraining. Built upon MSVQ features and supervision signals, we design a progressive convolutional module and a slide-level SSL objective to learn spatially rich representations for downstream WSI tasks. 
Extensive experiments across multiple datasets demonstrate that our approach achieves state-of-the-art performance, offering a scalable and effective solution for high-performing pathology FMs in WSI analysis.", "arxiv_id": "2503.06482v1", "arxiv_authors": ["Honglin Li", "Zhongyi Shui", "Yunlong Zhang", "Chenglu Zhu", "Lin Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a388"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.633Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1541227, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f7"}, "filepath": "data/2506.02846v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994507340616481, "type": "Poster", "name": "PBR-SR: Mesh PBR Texture Super Resolution from 2D Image Priors", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119211", "abstract": "We present PBR-SR, a novel method for physically based rendering (PBR) texture super resolution (SR). It outputs high-resolution, high-quality PBR textures from low-resolution (LR) PBR input in a zero-shot manner. PBR-SR leverages an off-the-shelf super-resolution model trained on natural images, and iteratively minimizes the deviations between super-resolution priors and differentiable renderings. These enhancements are then back-projected into the PBR map space in a differentiable manner to produce refined, high-resolution textures.To mitigate view inconsistencies and lighting sensitivity, which is common in view-based super-resolution, our method applies 2D prior constraints across multi-view renderings, iteratively refining the shared, upscaled textures. In parallel, we incorporate identity constraints directly in the PBR texture domain to ensure the upscaled textures remain faithful to the LR input. PBR-SR operates without any additional training or data requirements, relying entirely on pretrained image priors. We demonstrate that our approach produces high-fidelity PBR textures for both artist-designed and AI-generated meshes, outperforming both direct SR models application and prior texture optimization methods. 
Our results show high-quality outputs in both PBR and rendering evaluations, supporting advanced applications such as relighting.", "arxiv_id": "2506.02846v1", "arxiv_authors": ["Yujin Chen", "Yinyu Nie", "Benjamin Ummenhofer", "Reiner Birkl", "Michael Paulitsch", "Matthias Nie\u00dfner"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a389"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.633Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3708457, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f8"}, "filepath": "data/2506.05302v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994451299147253, "type": "Poster", "name": "Perceive Anything: Recognize, Explain, Caption, and Segment Anything in Images and Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115989", "abstract": "While powerful vision foundation models like Segment Anything Model 2 (SAM 2) excel at object segmentation in images and videos, how to achieve deep semantic understanding of these regions remains a critical challenge. To address this, we introduce Perceive Anything Model (PAM), an end-to-end vision-language model that efficiently integrates SAM 2 and Large Language Models (LLMs), enabling the simultaneous segmentation of objects while generating diverse semantic outputs for each region in both images and videos. Specifically, PAM introduces a Semantic Perceiver that acts as a crucial bridge. This component efficiently utilizes rich intermediate features from the SAM 2 backbone, thereby incorporating general vision, localization, and semantic priors into the visual tokens, which are subsequently fed into LLMs for understanding. To ensure PAM's robustness in understanding multi-dimensional semantic granularity, we develop a dedicated data augmentation and refinement pipeline, which yields 1.8M high-quality image data and 0.6M video data. Experimental results demonstrate that, even with a lightweight 1.5/3B LLM as the semantic decoder, PAM achieves strong performance across a diverse range of tasks, including category prediction, brief and detailed regional captioning, video captioning, and streaming region captioning. 
Furthermore, PAM exhibits significant inference efficiency, running 1.2$-$2.4$\\times$ faster while consuming less GPU memory compared to prior approaches, marking a significant advancement for real-world applications.", "arxiv_id": "2506.05302v1", "arxiv_authors": ["Weifeng Lin", "Xinyu Wei", "Ruichuan An", "Tianhe Ren", "Tingwei Chen", "Renrui Zhang", "Ziyu Guo", "Wentao Zhang", "Lei Zhang", "Hongsheng Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a38a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.633Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4756972, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7f9"}, "filepath": "data/2504.13181v2.png", "tags": [], "_media_type": "image", "_rand": 0.999595596598335, "type": "Poster", "name": "Perception Encoder: The best visual embeddings are not at the output of the network", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118805", "abstract": "We introduce Perception Encoder (PE), a family of state-of-the-art vision encoders for image and video understanding. Traditionally, vision encoders have relied on a variety of pretraining objectives, each excelling at different downstream tasks. Surprisingly, after scaling a carefully tuned image pretraining recipe and refining with a robust video data engine, we find that contrastive vision-language training alone can produce strong, general embeddings for all of these downstream tasks. There is only one caveat: these embeddings are hidden within the intermediate layers of the network. To draw them out, we introduce two alignment methods: language alignment for multimodal language modeling, and spatial alignment for dense prediction. Together, our PE family of models achieves state-of-the-art results on a wide variety of tasks, including zero-shot image and video classification and retrieval; document, image, and video Q&A; and spatial tasks such as detection, tracking, and depth estimation. 
To foster further research, we will release our models, code, and novel dataset of synthetically and human-annotated videos.", "arxiv_id": "2504.13181v2", "arxiv_authors": ["Daniel Bolya", "Po-Yao Huang", "Peize Sun", "Jang Hyun Cho", "Andrea Madotto", "Chen Wei", "Tengyu Ma", "Jiale Zhi", "Jathushan Rajasegaran", "Hanoona Rasheed", "Junke Wang", "Marco Monteiro", "Hu Xu", "Shiyu Dong", "Nikhila Ravi", "Daniel Li", "Piotr Doll\u00e1r", "Christoph Feichtenhofer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a38b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.633Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1333180, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7fa"}, "filepath": "data/2504.13180v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993915249888083, "type": "Poster", "name": "PerceptionLM: Open-Access Data and Models for Detailed Visual Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119876", "abstract": "Vision-language models are integral to computer vision research, yet many high-performing models remain closed-source, obscuring their data, design and training recipe. The research community has responded by using distillation from black-box models to label training data, achieving strong benchmark results, at the cost of measurable scientific progress. However, without knowing the details of the teacher model and its data sources, scientific progress remains difficult to measure. In this paper, we study building a Perception Language Model (PLM) in a fully open and reproducible framework for transparent research in image and video understanding. We analyze standard training pipelines without distillation from proprietary models and explore large-scale synthetic data to identify critical data gaps, particularly in detailed video understanding. To bridge these gaps, we release 2.8M human-labeled instances of fine-grained video question-answer pairs and spatio-temporally grounded video captions. Additionally, we introduce PLM\u2013VideoBench, a suite for evaluating challenging video understanding tasks focusing on the ability to reason about ''what'', ''where'', ''when'', and ''how'' of a video. 
We make our work fully reproducible by providing data, training recipes, code & models.", "arxiv_id": "2504.13180v3", "arxiv_authors": ["Jang Hyun Cho", "Andrea Madotto", "Effrosyni Mavroudi", "Triantafyllos Afouras", "Tushar Nagarajan", "Muhammad Maaz", "Yale Song", "Tengyu Ma", "Shuming Hu", "Suyog Jain", "Miguel Martin", "Huiyu Wang", "Hanoona Rasheed", "Peize Sun", "Po-Yao Huang", "Daniel Bolya", "Nikhila Ravi", "Shashank Jain", "Tammy Stark", "Shane Moon", "Babak Damavandi", "Vivian Lee", "Andrew Westbury", "Salman Khan", "Philipp Kr\u00e4henb\u00fchl", "Piotr Doll\u00e1r", "Lorenzo Torresani", "Kristen Grauman", "Christoph Feichtenhofer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a38c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.633Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1087990, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7fb"}, "filepath": "data/2504.07954v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990067196173644, "type": "Poster", "name": "Perception-R1: Pioneering Perception Policy with Reinforcement Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119361", "abstract": "Inspired by the success of DeepSeek-R1, we explore the potential of rule-based reinforcement learning (RL) in MLLM post-training for perception policy learning. While promising, our initial experiments reveal that incorporating a thinking process through RL does not consistently lead to performance gains across all visual perception tasks. This leads us to delve into the essential role of RL in the context of visual perception. In this work, we return to the fundamentals and explore the effects of RL on different perception tasks. We observe that the perceptual perplexity is a major factor in determining the effectiveness of RL. We also observe that reward design plays a crucial role in further approaching the upper limit of model perception. To leverage these findings, we propose Perception-R1, a scalable RL framework using GRPO during MLLM post-training. 
With a standard Qwen2-VL-2B-Instruct, Perception-R1 achieves +4.2% on RefCOCO+, +17.9% on PixMo-Count, +4.2% on PageOCR, and notably, 31.9% AP on COCO2017 val for the first time, establishing a strong baseline for perception policy learning.", "arxiv_id": "2504.07954v1", "arxiv_authors": ["En Yu", "Kangheng Lin", "Liang Zhao", "Jisheng Yin", "Yana Wei", "Yuang Peng", "Haoran Wei", "Jianjian Sun", "Chunrui Han", "Zheng Ge", "Xiangyu Zhang", "Daxin Jiang", "Jingyu Wang", "Wenbing Tao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a38d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.633Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1005303, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7fc"}, "filepath": "data/2506.14907v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993372502623149, "type": "Poster", "name": "PeRL: Permutation-Enhanced Reinforcement Learning for Interleaved Vision-Language Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116647", "abstract": "Inspired by the impressive reasoning capabilities demonstrated by reinforcement learning approaches like DeepSeek-R1, recent emerging research has begun exploring the use of reinforcement learning (RL) to enhance vision-language models (VLMs) for multimodal reasoning tasks. However, most existing multimodal reinforcement learning approaches remain limited to spatial reasoning within single-image contexts, yet still struggle to generalize to more complex and real-world scenarios involving multi-image positional reasoning, where understanding the relationships across images is crucial. To address this challenge, we propose a general reinforcement learning approach PeRL tailored for interleaved multimodal tasks, and a multi-stage strategy designed to enhance the exploration-exploitation trade-off, thereby improving learning efficiency and task performance. Specifically, we introduce permutation of image sequences to simulate varied positional relationships to explore more spatial and positional diversity. Furthermore, we design a rollout filtering mechanism for resampling to focus on trajectories that contribute most to learning optimal behaviors to exploit learned policies effectively. We evaluate our model on 5 widely-used multi-image benchmarks and 3 single-image benchmarks. 
Our experiments confirm that PeRL trained model consistently surpasses R1-related and interleaved VLM baselines by a large margin, achieving state-of-the-art performance on multi-image benchmarks, while preserving comparable performance on single-image tasks.", "arxiv_id": "2506.14907v1", "arxiv_authors": ["Yizhen Zhang", "Yang Ding", "Shuoshuo Zhang", "Xinchen Zhang", "Haoling Li", "Zhong-zhi Li", "Peijie Wang", "Jie Wu", "Lei Ji", "Yelong Shen", "Yujiu Yang", "Yeyun Gong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a38e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1039566, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7fd"}, "filepath": "data/2505.20655v4.png", "tags": [], "_media_type": "image", "_rand": 0.9995805043270619, "type": "Poster", "name": "Photography Perspective Composition: Towards Aesthetic Perspective Recommendation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115849", "abstract": "Traditional photography composition approaches are dominated by 2D cropping-based methods. However, these methods fall short when scenes contain poorly arranged subjects. Professional photographers often employ perspective adjustment as a form of 3D recomposition, modifying the projected 2D relationships between subjects while maintaining their actual spatial positions to achieve better compositional balance. Inspired by this artistic practice, we propose photography perspective composition (PPC), extending beyond traditional cropping-based methods. However, implementing the PPC faces significant challenges: the scarcity of perspective transformation datasets and undefined assessment criteria for perspective quality. To address these challenges, we present three key contributions: (1) An automated framework for building PPC datasets through expert photographs. (2) A video generation approach that demonstrates the transformation process from suboptimal to optimal perspectives. (3) A perspective quality assessment (PQA) model constructed based on human performance. 
Our approach is concise and requires no additional prompt instructions or camera trajectories, helping and guiding ordinary users to enhance their composition skills.", "arxiv_id": "2505.20655v4", "arxiv_authors": ["Lujian Yao", "Siming Zheng", "Xinbin Yuan", "Zhuoxuan Cai", "Pu Wu", "Jinwei Chen", "Bo Li", "Peng-Tao Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a38f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1015697, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7fe"}, "filepath": "data/2506.08708v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992602427221872, "type": "Poster", "name": "PhyBlock: A Progressive Benchmark for Physical Understanding and Planning via 3D Block Assembly", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121375", "abstract": "While vision-language models (VLMs) have demonstrated promising capabilities in reasoning and planning for embodied agents, their ability to comprehend physical phenomena, particularly within structured 3D environments, remains severely limited. To close this gap, we introduce PhyBlock, a progressive benchmark designed to assess VLMs on physical understanding and planning through robotic 3D block assembly tasks. PhyBlock integrates a novel four-level cognitive hierarchy assembly task alongside targeted Visual Question Answering (VQA) samples, collectively aimed at evaluating progressive spatial reasoning and fundamental physical comprehension, including object properties, spatial relationships, and holistic scene understanding. PhyBlock includes 2600 block tasks (400 assembly tasks, 2200 VQA tasks) and evaluates models across three key dimensions: partial completion, failure diagnosis, and planning robustness. We benchmark 21 state-of-the-art VLMs, highlighting their strengths and limitations in physically grounded, multi-step planning. Our empirical findings indicate that the performance of VLMs exhibits pronounced limitations in high-level planning and reasoning capabilities, leading to a notable decline in performance as task complexity grows. Error analysis reveals persistent difficulties in spatial orientation and dependency reasoning. Surprisingly, chain-of-thought prompting offers minimal improvements, suggesting spatial tasks heavily rely on intuitive model comprehension. 
We position PhyBlock as a unified testbed to advance embodied reasoning, bridging vision-language understanding and real-world physical problem-solving.", "arxiv_id": "2506.08708v1", "arxiv_authors": ["Liang Ma", "Jiajun Wen", "Min Lin", "Rongtao Xu", "Xiwen Liang", "Bingqian Lin", "Jun Ma", "Yongxin Wang", "Ziming Wei", "Haokun Lin", "Mingfei Han", "Meng Cao", "Bokui Chen", "Ivan Laptev", "Xiaodan Liang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a390"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1100865, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a7ff"}, "filepath": "data/2509.20358v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992053156271156, "type": "Poster", "name": "PhysCtrl: Generative Physics for Controllable and Physics-Grounded Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119471", "abstract": "Existing video generation models excel at producing photo-realistic videos from text or images, but often lack physical plausibility and 3D controllability. To overcome these limitations, we introduce PhysCtrl, a novel framework for physics-grounded image-to-video generation with physical parameters and force control. At its core is a generative physics network that learns the distribution of physical dynamics across four materials (elastic, sand, plasticine, and rigid) via a diffusion model conditioned on physics parameters and applied forces. We represent physical dynamics as 3D point trajectories and train on a large-scale synthetic dataset of 550K animations generated by physics simulators. We enhance the diffusion model with a novel spatiotemporal attention block that emulates particle interactions and incorporates physics-based constraints during training to enforce physical plausibility. Experiments show that PhysCtrl generates realistic, physics-grounded motion trajectories which, when used to drive image-to-video models, yield high-fidelity, controllable videos that outperform existing methods in both visual quality and physical plausibility. Our code, model and data will be made publicly available upon publication.", "arxiv_id": "2509.20358v1", "arxiv_authors": ["Chen Wang", "Chuhao Chen", "Yiming Huang", "Zhiyang Dou", "Yuan Liu", "Jiatao Gu", "Lingjie Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a391"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3318526, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a800"}, "filepath": "data/2510.08073v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998815144150399, "type": "Poster", "name": "Physics-Driven Spatiotemporal Modeling for AI-Generated Video Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118877", "abstract": "AI-generated videos have achieved near-perfect visual realism (e.g., Sora), urgently necessitating reliable detection mechanisms. 
However, detecting such videos faces significant challenges in modeling high-dimensional spatiotemporal dynamics and identifying subtle anomalies that violate physical laws. In this paper, we propose a physics-driven AI-generated video detection paradigm based on probability flow conservation principles. Specifically, we propose a statistic called Normalized Spatiotemporal Gradient (NSG), which quantifies the ratio of spatial probability gradients to temporal density changes, explicitly capturing deviations from natural video dynamics. Leveraging pre-trained diffusion models, we develop an NSG estimator through spatial gradients approximation and motion-aware temporal modeling without complex motion decomposition while preserving physical constraints. Building on this, we propose an NSG-based video detection method (NSG-VD) that computes the Maximum Mean Discrepancy (MMD) between NSG features of the test and real videos as a detection metric. Last, we derive an upper bound of NSG feature distances between real and generated videos, proving that generated videos exhibit amplified discrepancies due to distributional shifts. Extensive experiments confirm that NSG-VD outperforms state-of-the-art baselines by 16.00\\% in Recall and 10.75\\% in F1-Score, validating the superior performance of NSG-VD.", "arxiv_id": "2510.08073v1", "arxiv_authors": ["Shuhai Zhang", "ZiHao Lian", "Jiahao Yang", "Daiyuan Li", "Guoxuan Pang", "Feng Liu", "Bo Han", "Shutao Li", "Mingkui Tan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a392"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1138397, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a801"}, "filepath": "data/2507.12465v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994219937886575, "type": "Poster", "name": "PhysX: Physical-Grounded 3D Asset Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116660", "abstract": "3D modeling is moving from virtual to physical. Existing 3D generation primarily emphasizes geometries and textures while neglecting physical-grounded modeling. Consequently, despite the rapid development of 3D generative models, the synthesized 3D assets often overlook rich and important physical properties, hampering their real-world application in physical domains like simulation and embodied AI. As an initial attempt to address this challenge, we propose \\textbf{PhysX}, an end-to-end paradigm for physical-grounded 3D asset generation.\\textbf{1)} To bridge the critical gap in physics-annotated 3D datasets, we present \\textbf{\\ourname}\\ - the first physics-grounded 3D dataset systematically annotated across five foundational dimensions:\\textbf{\\textcolor{color2}{absolute scale}}, \\textbf{\\textcolor{color3}{material}}, \\textbf{\\textcolor{color1}{affordance}}, \\textbf{\\textcolor{color4}{kinematics}}, and \\textbf{\\textcolor{color5}{function description}}. In particular, we devise a scalable human-in-the-loop annotation pipeline based on vision-language models, which enables efficient creation of physics-first assets from raw 3D assets. 
\\textbf{2)} Furthermore, we propose \\textbf{PhysXGen}, a feed-forward framework for physics-grounded 3D asset generation, injecting physical knowledge into the pre-trained 3D structural space. Specifically, PhysXGen employs a dual-branch architecture to explicitly model the latent correlations between 3D structures and physical properties, thereby producing 3D assets with plausible physical predictions while preserving the native geometry quality. Extensive experiments validate the superior performance and promising generalization capability of our framework. All the code, data, and models will be released to facilitate future research in generative physical AI.", "arxiv_id": "2507.12465v3", "arxiv_authors": ["Ziang Cao", "Zhaoxi Chen", "Liang Pan", "Ziwei Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a393"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2625911, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a802"}, "filepath": "data/2405.14430v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993422850784935, "type": "Poster", "name": "PipeFusion: Patch-level Pipeline Parallelism for Diffusion Transformers Inference", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119821", "abstract": "This paper presents PipeFusion, an innovative parallel methodology to tackle the high latency issues associated with generating high-resolution images using diffusion transformers (DiTs) models. PipeFusion partitions images into patches and the model layers across multiple GPUs. It employs a patch-level pipeline parallel strategy to orchestrate communication and computation efficiently. By capitalizing on the high similarity between inputs from successive diffusion steps, PipeFusion reuses one-step stale feature maps to provide context for the current pipeline step. This approach notably reduces communication costs compared to existing DiTs inference parallelism, including tensor parallel, sequence parallel and DistriFusion. PipeFusion also exhibits superior memory efficiency, because it can distribute model parameters across multiple devices, making it more suitable for DiTs with large parameter sizes, such as Flux.1. 
Experimental results demonstrate that PipeFusion achieves state-of-the-art performance on 8$\\times$L40 PCIe GPUs for Pixart, Stable-Diffusion 3 and Flux.1 models.", "arxiv_id": "2405.14430v3", "arxiv_authors": ["Jiarui Fang", "Jinzhe Pan", "Jiannan Wang", "Aoyu Li", "Xibo Sun"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a394"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1525184, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a803"}, "filepath": "data/2510.07316v1.png", "tags": [], "_media_type": "image", "_rand": 0.999568698798745, "type": "Poster", "name": "Pixel-Perfect Depth with Semantics-Guided Diffusion Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115793", "abstract": "This paper presents **Pixel-Perfect Depth**, a monocular depth estimation model based on pixel-space diffusion generation that produces high-quality, flying-pixel-free point clouds from estimated depth maps. Current generative depth estimation models fine-tune Stable Diffusion and achieve impressive performance. However, they require a VAE to compress depth maps into latent space, which inevitably introduces flying pixels at edges and details. Our model addresses this challenge by directly performing diffusion generation in the pixel space, avoiding VAE-induced artifacts. To tackle the resulting complexity of high-resolution generation, we introduce two novel designs: 1) **Semantics-Guided Diffusion Transformers (DiT)** that extracts high-level semantic representations from vision foundation models to guide the diffusion process, enabling accurate modeling of both global image structures and fine-grained details; and 2) **Cascade DiT Design** that progressively increases the number of patches to further enhance efficiency and accuracy. Our model achieves the best performance among all published generative models across five benchmarks, and significantly outperforms all other models in edge-aware point cloud evaluation. Code will be released for reproducibility.", "arxiv_id": "2510.07316v1", "arxiv_authors": ["Gangwei Xu", "Haotong Lin", "Hongcheng Luo", "Xianqi Wang", "Jingfeng Yao", "Lianghui Zhu", "Yuechuan Pu", "Cheng Chi", "Haiyang Sun", "Bing Wang", "Guang Chen", "Hangjun Ye", "Sida Peng", "Xin Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a395"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4627290, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a804"}, "filepath": "data/2505.15966v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994733858117285, "type": "Poster", "name": "Pixel Reasoner: Incentivizing Pixel Space Reasoning via Reinforcement Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117667", "abstract": "Chain-of-thought reasoning has significantly improved the performance of Large Language Models (LLMs) across various domains. 
However, this reasoning process has been confined exclusively to textual space, limiting its effectiveness in visually intensive tasks. To address this limitation, we introduce the concept of pixel-space reasoning. Within this novel framework, Vision-Language Models (VLMs) are equipped with a suite of visual reasoning operations, such as zoom-in and select-frame. These operations enable VLMs to directly inspect, interrogate, and infer from visual evidences, thereby enhancing reasoning fidelity for visual tasks. Cultivating such pixel-space reasoning capabilities in VLMs presents notable challenges, including the model\u2019s initially imbalanced competence and its reluctance to adopt the newly introduced pixel-space operations. We address these challenges through a two-phase training approach. The first phase employs instruction tuning on synthesized reasoning traces to familiarize the model with the novel visual operations. Following this, a reinforcement learning (RL) phase leverages a curiosity-driven reward scheme to balance exploration between pixel-space reasoning and textual reasoning. With these visual operations, VLMs can interact with complex visual inputs, such as information-rich images or videos to proactively gather necessary information. We demonstrate that this approach significantly improves VLM performance across diverse visual reasoning benchmarks. Our 7B model, Pixel-Reasoner, achieves 84% on V* bench, 74% on TallyQA-Complex, and 84% on InfographicsVQA, marking the highest accuracy achieved by any open-source model to date. These results highlight the importance of pixel-space reasoning and the effectiveness of our framework.", "arxiv_id": "2505.15966v3", "arxiv_authors": ["Haozhe Wang", "Alex Su", "Weiming Ren", "Fangzhen Lin", "Wenhu Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a396"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1129617, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a805"}, "filepath": "data/2510.18714v1.png", "tags": [], "_media_type": "image", "_rand": 0.999493923378779, "type": "Poster", "name": "PLANA3R: Zero-shot Metric Planar 3D Reconstruction via Feed-forward Planar Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117418", "abstract": "This paper addresses metric 3D reconstruction of indoor scenes by exploiting their inherent geometric regularities with compact representations. Using planar 3D primitives -- a well-suited representation for man-made environments -- we introduce PLANA3R, a pose-free framework for metric $\\underline{Plana}$r $\\underline{3}$D $\\underline{R}$econstruction from unposed two-view images. Our approach employs Vision Transformers to extract a set of sparse planar primitives, estimate relative camera poses, and supervise geometry learning via planar splatting, where gradients are propagated through high-resolution rendered depth and normal maps of primitives. Unlike prior feedforward methods that require 3D plane annotations during training, PLANA3R learns planar 3D structures without explicit plane supervision, enabling scalable training on large-scale stereo datasets using only depth and normal annotations. 
We validate PLANA3R on multiple indoor-scene datasets with metric supervision and demonstrate strong generalization to out-of-domain indoor environments across diverse tasks under metric evaluation protocols, including 3D surface reconstruction, depth estimation, and relative pose estimation. Furthermore, by formulating with planar 3D representation, our method emerges with the ability for accurate plane segmentation.", "arxiv_id": "2510.18714v1", "arxiv_authors": ["Changkun Liu", "Bin Tan", "Zeran Ke", "Shangzhan Zhang", "Jiachen Liu", "Ming Qian", "Nan Xue", "Yujun Shen", "Tristan Braud"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a397"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2526254, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a806"}, "filepath": "data/2506.09995v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992404187832405, "type": "Poster", "name": "PlayerOne: Egocentric World Simulator", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118937", "abstract": "We introduce PlayerOne, the first egocentric realistic world simulator, facilitating immersive and unrestricted exploration within vividly dynamic environments. Given an egocentric scene image from the user, PlayerOne can accurately construct the corresponding world and generate egocentric videos that are strictly aligned with the real-scene human motion of the user captured by an exocentric camera. PlayerOne is trained in a coarse-to-fine pipeline that first performs pretraining on large-scale egocentric text-video pairs for coarse-level egocentric understanding, followed by finetuning on synchronous motion-video data extracted from egocentric-exocentric video datasets with our automatic construction pipeline. Besides, considering the varying importance of different components, we design a part-disentangled motion injection scheme, enabling precise control of part-level movements. In addition, we devise a joint reconstruction framework that progressively models both the 4D scene and video frames, ensuring scene consistency in the long-form video generation. Experimental results demonstrate its great generalization ability in precise control of varying human movements and world-consistent modeling of diverse scenarios. 
It marks the first endeavor into egocentric real-world simulation and can pave the way for the community to delve into fresh frontiers of world modeling and its diverse applications.", "arxiv_id": "2506.09995v1", "arxiv_authors": ["Yuanpeng Tu", "Hao Luo", "Xi Chen", "Xiang Bai", "Fan Wang", "Hengshuang Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a398"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4482596, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a807"}, "filepath": "data/2505.21258v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994768606470327, "type": "Poster", "name": "Plenodium: UnderWater 3D Scene Reconstruction with Plenoptic Medium Representation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117833", "abstract": "We present *Plenodium* (*plenoptic medium*), an effective and efficient 3D representation framework capable of jointly modeling both objects and participating media. In contrast to existing medium representations that rely solely on view-dependent modeling, our novel plenoptic medium representation incorporates both directional and positional information through spherical harmonics encoding, enabling highly accurate underwater scene reconstruction. To address the initialization challenge in degraded underwater environments, we propose the pseudo-depth Gaussian complementation to augment COLMAP-derived point clouds with robust depth priors. In addition, a depth ranking regularized loss is developed to optimize the geometry of the scene and improve the ordinal consistency of the depth maps. Extensive experiments on real-world underwater datasets demonstrate that our method achieves significant improvements in 3D reconstruction. Furthermore, we construct a simulated dataset with ground truth and the controllable scattering medium to demonstrate the restoration capability of our method in underwater scenarios. Our code is available at: https://anonymous.4open.science/r/plenodium-1119.", "arxiv_id": "2505.21258v1", "arxiv_authors": ["Changguanng Wu", "Jiangxin Dong", "Chengjian Li", "Jinhui Tang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a399"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.634Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1326353, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a808"}, "filepath": "data/2505.19089v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992828871156376, "type": "Poster", "name": "Plug-and-Play Context Feature Reuse for Efficient Masked Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116393", "abstract": "Masked generative models (MGMs) have emerged as a powerful framework for image synthesis, combining parallel decoding with strong bidirectional context modeling. However, generating high-quality samples typically requires many iterative decoding steps, resulting in high inference costs. A straightforward way to speed up generation is by decoding more tokens in each step, thereby reducing the total number of steps. 
However, when many tokens are decoded simultaneously, the model can only estimate the univariate marginal distributions independently, failing to capture the dependency among them. As a result, reducing the number of steps significantly compromises generation fidelity. In this work, we introduce ReCAP (Reused Context-Aware Prediction), a plug-and-play module that accelerates inference in MGMs by constructing low-cost steps via reusing feature embeddings from previously decoded context tokens. ReCAP interleaves standard full evaluations with lightweight steps that cache and reuse context features, substantially reducing computation while preserving the benefits of fine-grained, iterative generation. We demonstrate its effectiveness on top of three representative MGMs (MaskGIT, MAGE, and MAR), including both discrete and continuous token spaces and covering diverse architectural designs. In particular, on ImageNet256 class-conditional generation, ReCAP achieves up to 2.4$\\times$ faster inference than the base model with minimal performance drop, and consistently delivers better efficiency\u2013fidelity trade-offs under various generation settings.", "arxiv_id": "2505.19089v1", "arxiv_authors": ["Xuejie Liu", "Anji Liu", "Guy Van den Broeck", "Yitao Liang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a39a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1088027, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a809"}, "filepath": "data/2505.12266v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994260138953276, "type": "Poster", "name": "PMQ-VE: Progressive Multi-Frame Quantization for Video Enhancement", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118383", "abstract": "Multi-frame video enhancement tasks aim to improve the spatial and temporal resolution and quality of video sequences by leveraging temporal information from multiple frames, which are widely used in streaming video processing, surveillance, and generation. Although numerous Transformer-based enhancement methods have achieved impressive performance, their computational and memory demands hinder deployment on edge devices. Quantization offers a practical solution by reducing the bit-width of weights and activations to improve efficiency. However, directly applying existing quantization methods to video enhancement tasks often leads to significant performance degradation and loss of fine details. This stems from two limitations: (a) inability to allocate varying representational capacity across frames, which results in suboptimal dynamic range adaptation; (b) over-reliance on full-precision teachers, which limits the learning of low-bit student models. To tackle these challenges, we propose a novel quantization method for video enhancement: Progressive Multi-Frame Quantization for Video Enhancement (PMQ-VE). This framework features a coarse-to-fine two-stage process: Backtracking-based Multi-Frame Quantization (BMFQ) and Progressive Multi-Teacher Distillation (PMTD). BMFQ utilizes a percentile-based initialization and iterative search with pruning and backtracking for robust clipping bounds. 
PMTD employs a progressive distillation strategy with both full-precision and multiple high-bit (INT) teachers to enhance low-bit models' capacity and quality. Extensive experiments demonstrate that our method outperforms existing approaches, achieving state-of-the-art performance across multiple tasks and benchmarks. The code will be made publicly available.", "arxiv_id": "2505.12266v2", "arxiv_authors": ["ZhanFeng Feng", "Long Peng", "Xin Di", "Yong Guo", "Wenbo Li", "Yulun Zhang", "Renjing Pei", "Yang Wang", "Yang Cao", "Zheng-Jun Zha"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a39b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1115862, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a80a"}, "filepath": "data/2510.03012v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995298101288805, "type": "Poster", "name": "PocketSR: The Super-Resolution Expert in Your Pocket Mobiles", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119842", "abstract": "Real-world image super-resolution (RealSR) aims to enhance the visual quality of in-the-wild images, such as those captured by mobile phones. While existing methods leveraging large generative models demonstrate impressive results, the high computational cost and latency make them impractical for edge deployment. In this paper, we introduce PocketSR, an ultra-lightweight, single-step model that brings generative modeling capabilities to RealSR while maintaining high fidelity. To achieve this, we design LiteED, a highly efficient alternative to the original computationally intensive VAE in SD, reducing parameters by 97.5\\% while preserving high-quality encoding and decoding. Additionally, we propose online annealing pruning for the U-Net, which progressively shifts generative priors from heavy modules to lightweight counterparts, ensuring effective knowledge transfer and further optimizing efficiency. To mitigate the loss of prior knowledge during pruning, we incorporate a multi-layer feature distillation loss. Through an in-depth analysis of each design component, we provide valuable insights for future research. PocketSR, with a model size of 146M parameters, processes 4K images in just 0.8 seconds, achieving a remarkable speedup over previous methods. 
Notably, it delivers performance on par with state-of-the-art single-step and even multi-step RealSR models, making it a highly practical solution for edge-device applications.", "arxiv_id": "2510.03012v1", "arxiv_authors": ["Haoze Sun", "Linfeng Jiang", "Fan Li", "Renjing Pei", "Zhixin Wang", "Yong Guo", "Jiaqi Xu", "Haoyu Chen", "Jin Han", "Fenglong Song", "Yujiu Yang", "Wenbo Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a39c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1095907, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a80b"}, "filepath": "data/2507.02863v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996090264886641, "type": "Poster", "name": "Point3R: Streaming 3D Reconstruction with Explicit Spatial Pointer Memory", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115114", "abstract": "Dense 3D scene reconstruction from an ordered sequence or unordered image collections is a critical step when bringing research in computer vision into practical scenarios. Following the paradigm introduced by DUSt3R, which unifies an image pair densely into a shared coordinate system, subsequent methods maintain an implicit memory to achieve dense 3D reconstruction from more images. However, such implicit memory is limited in capacity and may suffer from information loss of earlier frames. We propose Point3R, an online framework targeting dense streaming 3D reconstruction. To be specific, we maintain an explicit spatial pointer memory directly associated with the 3D structure of the current scene. Each pointer in this memory is assigned a specific 3D position and aggregates scene information nearby in the global coordinate system into a changing spatial feature. Information extracted from the latest frame interacts explicitly with this pointer memory, enabling dense integration of the current observation into the global coordinate system. We design a 3D hierarchical position embedding to promote this interaction and design a simple yet effective fusion mechanism to ensure that our pointer memory is uniform and efficient. Our method achieves competitive or state-of-the-art performance on various tasks with low training costs.", "arxiv_id": "2507.02863v1", "arxiv_authors": ["Yuqi Wu", "Wenzhao Zheng", "Jie Zhou", "Jiwen Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a39d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1033631, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a80c"}, "filepath": "data/2410.18987v4.png", "tags": [], "_media_type": "image", "_rand": 0.9994020654071881, "type": "Poster", "name": "Point Cloud Synthesis Using Inner Product Transforms", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115942", "abstract": "Point-cloud synthesis, i.e. the generation of novel point clouds from an input distribution, remains a challenging task, for which numerous complex machine-learning models have been devised. 
We develop a novel method that encodes geometrical-topological characteristics of point clouds using inner products, leading to a highly-efficient point cloud representation with provable expressivity properties. Integrated into deep learning models, our encoding exhibits high quality in typical tasks like reconstruction, generation, and interpolation, with inference times orders of magnitude faster than existing methods.", "arxiv_id": "2410.18987v4", "arxiv_authors": ["Ernst R\u00f6ell", "Bastian Rieck"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a39e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1050718, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a80d"}, "filepath": "data/2510.10365v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998959447825148, "type": "Poster", "name": "PointMAC: Meta-Learned Adaptation for Robust Test-Time Point Cloud Completion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119905", "abstract": "Point cloud completion is essential for robust 3D perception in safety-critical applications such as robotics and augmented reality. However, existing models perform static inference and rely heavily on inductive biases learned during training, limiting their ability to adapt to novel structural patterns and sensor-induced distortions at test time.To address this limitation, we propose PointMAC, a meta-learned framework for robust test-time adaptation in point cloud completion. It enables sample-specific refinement without requiring additional supervision.Our method optimizes the completion model under two self-supervised auxiliary objectives that simulate structural and sensor-level incompleteness.A meta-auxiliary learning strategy based on Model-Agnostic Meta-Learning (MAML) ensures that adaptation driven by auxiliary objectives is consistently aligned with the primary completion task.During inference, we adapt the shared encoder on-the-fly by optimizing auxiliary losses, with the decoder kept fixed. To further stabilize adaptation, we introduce Adaptive $\\lambda$-Calibration, a meta-learned mechanism for balancing gradients between primary and auxiliary objectives. Extensive experiments on synthetic, simulated, and real-world datasets demonstrate that PointMAC achieves state-of-the-art results by refining each sample individually to produce high-quality completions. 
To the best of our knowledge, this is the first work to apply meta-auxiliary test-time adaptation to point cloud completion.", "arxiv_id": "2510.10365v1", "arxiv_authors": ["Linlian Jiang", "Rui Ma", "Li Gu", "Ziqiang Wang", "Xinxin Zuo", "Yang Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a39f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1074091, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a80e"}, "filepath": "data/2510.20406v1.png", "tags": [], "_media_type": "image", "_rand": 0.999127735764738, "type": "Poster", "name": "PointMapPolicy: Structured Point Cloud Processing for Multi-Modal Imitation Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117358", "abstract": "Robotic manipulation systems benefit from complementary sensing modalities, where each provides unique environmental information.Point clouds capture detailed geometric structure, while RGB images provide rich semantic context. Current point cloud methods struggle to capture fine-grained detail, especially for complex tasks, while RGB methods lack geometric awareness, which hinders their precision and generalization. We introduce PointMapPolicy, a novel approach that conditions diffusion policies on structured grids of points without downsampling. The resulting data type makes it easier to extract shape and spatial relationships from observations, and can be transformed between reference frames. Yet due to their structure in a regular grid, we can apply established computer vision techniques directly to 3D data. Using xLSTM as a backbone, our model efficiently fuses the point maps with RGB data for enhanced multi-modal perception.Through extensive experiments on the RoboCasa and CALVIN benchmarks and real robot evaluations, we demonstrate that our method achieves state-of-the-art performance across diverse manipulation tasks. The overview and demos are available on our project page: https://point-map.github.io/Point-Map/", "arxiv_id": "2510.20406v1", "arxiv_authors": ["Xiaogang Jia", "Qian Wang", "Anrui Wang", "Han A. Wang", "Bal\u00e1zs Gyenes", "Emiliyan Gospodinov", "Xinkai Jiang", "Ge Li", "Hongyi Zhou", "Weiran Liao", "Xi Huang", "Maximilian Beck", "Moritz Reuss", "Rudolf Lioutikov", "Gerhard Neumann"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a0"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1047668, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a80f"}, "filepath": "data/2505.23395v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992474403302403, "type": "Poster", "name": "Point or Line? 
Using Line-based Representation for Panoptic Symbol Spotting in CAD Drawings", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116604", "abstract": "We study the task of panoptic symbol spotting, which involves identifying both individual instances of countable \\textit{things} and the semantic regions of uncountable \\textit{stuff} in computer-aided design (CAD) drawings composed of vector graphical primitives.Existing methods typically rely on image rasterization, graph construction, or point-based representation, but these approaches often suffer from high computational costs, limited generality, and loss of geometric structural information. In this paper, we propose \\textit{VecFormer}, a novel method that addresses these challenges through \\textit{line-based representation} of primitives. This design preserves the geometric continuity of the original primitive, enabling more accurate shape representation while maintaining a computation-friendly structure, making it well-suited for vector graphic understanding tasks. To further enhance prediction reliability, we introduce a \\textit{Branch Fusion Refinement} module that effectively integrates instance and semantic predictions, resolving their inconsistencies for more coherent panoptic outputs. Extensive experiments demonstrate that our method establishes a new state-of-the-art, achieving 91.1 PQ, with Stuff-PQ improved by 9.6 and 21.2 points over the second-best results under settings with and without prior information, respectively\u2014highlighting the strong potential of line-based representation as a foundation for vector graphic understanding.", "arxiv_id": "2505.23395v3", "arxiv_authors": ["Xingguang Wei", "Haomin Wang", "Shenglong Ye", "Ruifeng Luo", "Yanting Zhang", "Lixin Gu", "Jifeng Dai", "Yu Qiao", "Wenhai Wang", "Hongjie Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1102886, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a810"}, "filepath": "data/2505.19702v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990024188209728, "type": "Poster", "name": "Point-RFT: Improving Multimodal Reasoning with Visually Grounded Reinforcement Finetuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115309", "abstract": "Recent advances in large language models have significantly improved textual reasoning through the effective use of Chain-of-Thought (CoT) and reinforcement learning. However, extending these successes to vision-language tasks remains challenging due to inherent limitations in text-only CoT, such as visual hallucinations and insufficient multimodal integration. In this paper, we introduce Point-RFT, a multimodal reasoning framework explicitly designed to leverage visually grounded CoT reasoning for visual document understanding. Our approach consists of two stages: First, we conduct format finetuning using a curated dataset of 71K diverse visual reasoning problems, each annotated with detailed, step-by-step rationales explicitly grounded to corresponding visual elements. Second, we employ reinforcement finetuning targeting visual document understanding. 
On ChartQA, our approach improves accuracy from 70.88% (format-finetuned baseline) to 90.04%, surpassing the 83.92% accuracy achieved by reinforcement finetuning relying solely on text-based CoT. The result shows that our grounded CoT is more effective for multimodal reasoning compared with the text-only CoT. Moreover, Point-RFT exhibits superior generalization capability across several out-of-domain visual document reasoning benchmarks, including CharXiv, PlotQA, IconQA, TabMWP, etc., and highlights its potential in complex real-world scenarios.", "arxiv_id": "2505.19702v1", "arxiv_authors": ["Minheng Ni", "Zhengyuan Yang", "Linjie Li", "Chung-Ching Lin", "Kevin Lin", "Wangmeng Zuo", "Lijuan Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1415824, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a811"}, "filepath": "data/2501.19164v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993195502689818, "type": "Poster", "name": "Poison as Cure: Visual Noise for Mitigating Object Hallucinations in LVMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115828", "abstract": "Large vision-language models (LVMs) extend large language models (LLMs) with visual perception capabilities, enabling them to process and interpret visual information. A major challenge compromising their reliability is object hallucination that LVMs may generate plausible but factually inaccurate information. We propose a novel \\textit{visual adversarial perturbation (VAP)} method to mitigate this hallucination issue. VAP alleviates LVM hallucination by applying strategically optimized visual noise without altering the base model. Our approach formulates hallucination suppression as an optimization problem, leveraging adversarial strategies to generate beneficial visual perturbations that enhance the model's factual grounding and reduce parametric knowledge bias. Extensive experimental results demonstrate that our method consistently reduces object hallucinations across 8 state-of-the-art LVMs, validating its efficacy across diverse evaluations. Code is available at https://anonymous.4open.science/r/VAP-744.", "arxiv_id": "2501.19164v2", "arxiv_authors": ["Kejia Zhang", "Keda Tao", "Jiasheng Tang", "Huan Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1993755, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a812"}, "filepath": "data/2505.21478v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999023743517128, "type": "Poster", "name": "Policy Optimized Text-to-Image Pipeline Design", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115171", "abstract": "Text-to-image generation has evolved beyond single monolithic models to complex multi-component pipelines that combine various enhancement tools. 
While these pipelines significantly improve image quality, their effective design requires substantial expertise. Recent approaches automating this process through large language models (LLMs) have shown promise but suffer from two critical limitations: extensive computational requirements from generating images with hundreds of predefined pipelines, and poor generalization beyond memorized training examples. We introduce a novel reinforcement learning-based framework that addresses these inefficiencies. Our approach first trains an ensemble of reward models capable of predicting image quality scores directly from prompt-workflow combinations, eliminating the need for costly image generation during training. We then implement a two-phase training strategy: initial workflow prediction training followed by GRPO-based optimization that guides the model toward higher-performing regions of the workflow space. Additionally, we incorporate a classifier-free guidance based enhancement technique that extrapolates along the path between the initial and GRPO-tuned models, further improving output quality. We validate our approach through a set of comparisons, showing that it can successfully create new flows with greater diversity and lead to superior image quality compared to existing baselines.", "arxiv_id": "2505.21478v1", "arxiv_authors": ["Uri Gadot", "Rinon Gal", "Yftah Ziser", "Gal Chechik", "Shie Mannor"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1075176, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a813"}, "filepath": "data/2506.15940v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999000801529371, "type": "Poster", "name": "Polyline Path Masked Attention for Vision Transformer", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119105", "abstract": "Global dependency modeling and spatial position modeling are two core issues of the foundational architecture design in current deep learning frameworks. Recently, Vision Transformers (ViTs) have achieved remarkable success in computer vision, leveraging the powerful global dependency modeling capability of the self-attention mechanism. Furthermore, Mamba2 has demonstrated its significant potential in natural language processing tasks by explicitly modeling the spatial adjacency prior through the structured mask. In this paper, we propose Polyline Path Masked Attention (PPMA) that integrates the self-attention mechanism of ViTs with an enhanced structured mask of Mamba2, harnessing the complementary strengths of both architectures.Specifically, we first ameliorate the traditional structured mask of Mamba2 by introducing a 2D polyline path scanning strategy and derive its corresponding structured mask, polyline path mask, which better preserves the adjacency relationships among image tokens. Notably, we conduct a thorough theoretical analysis of the structural characteristics of the proposed polyline path mask and design an efficient algorithm for the computation of the polyline path mask. Next, we embed the polyline path mask into the self-attention mechanism of ViTs, enabling explicit modeling of spatial adjacency prior. 
Extensive experiments on standard benchmarks, including image classification, object detection, and segmentation, demonstrate that our model outperforms previous state-of-the-art approaches based on both state-space models and Transformers. For example, our proposed PPMA-T/S/B models achieve 48.7\\%/51.1\\%/52.3\\% mIoU on the ADE20K semantic segmentation task, surpassing RMT-T/S/B by 0.7\\%/1.3\\%/0.3\\%, respectively. Code is available at \\url{https://anonymous.4open.science/r/PPMA-3948}.", "arxiv_id": "2506.15940v2", "arxiv_authors": ["Zhongchen Zhao", "Chaodong Xiao", "Hui Lin", "Qi Xie", "Lei Zhang", "Deyu Meng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1156221, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a814"}, "filepath": "data/2506.07848v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994039921739823, "type": "Poster", "name": "PolyVivid: Vivid Multi-Subject Video Generation with Cross-Modal Interaction and Enhancement", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115924", "abstract": "Despite recent advances in video generation, existing models still lack fine-grained controllability, especially for multi-subject customization with consistent identity and interaction.In this paper, we propose PolyVivid, a multi-subject video customization framework that enables flexible and identity-consistent generation. To establish accurate correspondences between subject images and textual entities, we design a VLLM-based text-image fusion module that embeds visual identities into the textual space for precise grounding. To further enhance identity preservation and subject interaction, we propose a 3D-RoPE-based enhancement module that enables structured bidirectional fusion between text and image embeddings. Moreover, we develop an attention-inherited identity injection module to effectively inject fused identity features into the video generation process, mitigating identity drift. Finally, we construct an MLLM-based data pipeline that combines MLLM-based grounding, segmentation, and a clique-based subject consolidation strategy to produce high-quality multi-subject data, effectively enhancing subject distinction and reducing ambiguity in downstream video generation.Extensive experiments demonstrate that PolyVivid achieves superior performance in identity fidelity, video realism, and subject alignment, outperforming existing open-source and commercial baselines. 
More comprehensive video results and comparisons are shown on the project page in the supplementary material.", "arxiv_id": "2506.07848v1", "arxiv_authors": ["Teng Hu", "Zhentao Yu", "Zhengguang Zhou", "Jiangning Zhang", "Yuan Zhou", "Qinglin Lu", "Ran Yi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.635Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4345707, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a815"}, "filepath": "data/2510.19527v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996986824396359, "type": "Poster", "name": "PoseCrafter: Extreme Pose Estimation with Hybrid Video Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119117", "abstract": "Pairwise camera pose estimation from sparsely overlapping image pairs remains a critical and unsolved challenge in 3D vision. Most existing methods struggle with image pairs that have small or no overlap. Recent approaches attempt to address this by synthesizing intermediate frames using video interpolation and selecting key frames via a self-consistency score. However, the generated frames are often blurry due to small overlap inputs, and the selection strategies are slow and not explicitly aligned with pose estimation.To solve these cases, we propose Hybrid Video Generation (HVG) to synthesize clearer intermediate frames by coupling a video interpolation model with a pose-conditioned novel view synthesis model, where we also propose a Feature Matching Selector (FMS) based on feature correspondence to select intermediate frames appropriate for pose estimation from the synthesized results. Extensive experiments on Cambridge Landmarks, ScanNet, DL3DV-10K, and NAVI demonstrate that, compared to existing SOTA methods, PoseCrafter can obviously enhance the pose estimation performances, especially on examples with small or no overlap.", "arxiv_id": "2510.19527v1", "arxiv_authors": ["Qing Mao", "Tianxin Huang", "Yu Zhu", "Jinqiu Sun", "Yanning Zhang", "Gim Hee Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1040336, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a816"}, "filepath": "data/2505.18342v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996276372995082, "type": "Poster", "name": "Pose Splatter: A 3D Gaussian Splatting Model for Quantifying Animal Pose and Appearance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118596", "abstract": "Accurate and scalable quantification of animal pose and appearance is crucial for studying behavior. Current 3D pose estimation techniques, such as keypoint- and mesh-based techniques, often face challenges including limited representational detail, labor-intensive annotation requirements, and expensive per-frame optimization. These limitations hinder the study of subtle movements and can make large-scale analyses impractical. 
We propose \"Pose Splatter\", a novel framework leveraging shape carving and 3D Gaussian splatting to model the complete pose and appearance of laboratory animals without prior knowledge of animal geometry, per-frame optimization, or manual annotations. We also propose a novel rotation-invariant visual embedding technique for encoding pose and appearance, designed to be a plug-in replacement for 3D keypoint data in downstream behavioral analyses. Experiments on datasets of mice, rats, and zebra finches show Pose Splatter learns accurate 3D animal geometries. Notably, Pose Splatter represents subtle variations in pose, provides better low-dimensional pose embeddings over state-of-the-art as evaluated by humans, and generalizes to unseen data. By eliminating annotation and per-frame optimization bottlenecks, Pose Splatter enables analysis of large-scale, longitudinal behavior needed to map genotype, neural activity, and micro-behavior at unprecedented resolution.", "arxiv_id": "2505.18342v1", "arxiv_authors": ["Jack Goffinet", "Youngjo Min", "Carlo Tomasi", "David E. Carlson"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1013048, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a817"}, "filepath": "data/2510.20178v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999321097950922, "type": "Poster", "name": "PPMStereo: Pick-and-Play Memory Construction for Consistent Dynamic Stereo Matching", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118454", "abstract": "Temporally consistent depth estimation from stereo video is critical for real-world applications such as augmented reality, where inconsistent depth estimation disrupts the immersion of users.Despite its importance, this task remains challenging due to the difficulty in modeling long-term temporal consistency in a computationally efficient manner.Previous methods attempt to address this by aggregating spatio-temporal information but face a fundamental trade-off: limited temporal modeling provides only modest gains, whereas capturing long-range dependencies significantly increases computational cost.To address this limitation, we introduce a memory buffer for modeling long-range spatio-temporal consistency while achieving efficient dynamic stereo matching.Inspired by the two-stage decision-making process in humans, we propose a Pick-and-Play Memory (PPM) construction module for dynamic Stereo matching, dubbed as PPMStereo. 
PPM consists of a pick process that identifies the most relevant frames and a play process that weights the selected frames adaptively for spatio-temporal aggregation.This two-stage collaborative process maintains a compact yet highly informative memory buffer while achieving temporally consistent information aggregation.Extensive experiments validate the effectiveness of PPMStereo, demonstrating state-of-the-art performance in both accuracy and temporal consistency.", "arxiv_id": "2510.20178v1", "arxiv_authors": ["Yun Wang", "Junjie Hu", "Qiaole Dong", "Yongjian Zhang", "Yanwei Fu", "Tin Lun Lam", "Dapeng Wu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3a9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1045502, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a818"}, "filepath": "data/2510.19618v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994034581297049, "type": "Poster", "name": "Pragmatic Heterogeneous Collaborative Perception via Generative Communication Mechanism", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119282", "abstract": "Multi-agent collaboration enhances the perception capabilities of individual agents through information sharing. However, in real-world applications, differences in sensors and models across heterogeneous agents inevitably lead to domain gaps during collaboration. Existing approaches based on adaptation and reconstruction fail to support *pragmatic heterogeneous collaboration* due to two key limitations: (1) retraining the encoder or core modules of agents disrupts the consistency of the semantic space of each agent; and (2) accommodating new agents incurs high computational costs, limiting scalability. To address these challenges, we present a novel **Gen**erative **Comm**unication mechanism (GenComm) that facilitates seamless perception across heterogeneous multi-agent systems through feature generation, without altering the original network, and employs lightweight numerical alignment of spatial information to efficiently integrate new agents at minimal cost. Specifically, a tailored Deformable Message Extractor is used to extract spatial information for each collaborator, which is then transmitted in place of intermediate features. The Spatial-Aware Feature Generator, utilizing a conditional diffusion model, generates features aligned with the ego agent\u2019s semantic space while preserving the spatial information of the collaborators. These features are further refined by a Channel Enhancer before fusion. 
Experiments conducted on the OPV2V-H and DAIR-V2X datasets demonstrate that GenComm outperforms existing state-of-the-art methods, achieving an 81\\% reduction in both computational cost and parameter count when incorporating new agents.", "arxiv_id": "2510.19618v2", "arxiv_authors": ["Junfei Zhou", "Penglin Dai", "Quanmin Wei", "Bingyi Liu", "Xiao Wu", "Jianping Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3aa"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1096277, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a819"}, "filepath": "data/2412.03409v4.png", "tags": [], "_media_type": "image", "_rand": 0.9997837152798311, "type": "Poster", "name": "PrefixKV: Adaptive Prefix KV Cache is What Vision Instruction-Following Models Need for Efficient Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115615", "abstract": "Recently, large vision-language models (LVLMs) have rapidly gained popularity for their strong generation and reasoning capabilities given diverse multimodal inputs. However, these models incur significant computational and memory overhead during inference, which greatly hinders the efficient deployment in practical scenarios. The extensive key-value (KV) cache, necessitated by the lengthy input and output sequences, notably contributes to the high inference cost. Based on this, recent works have investigated ways to reduce the KV cache size for higher efficiency. Although effective, they generally overlook the distinct importance distributions of KV vectors across layers and maintain the same cache size for each layer during the next token prediction. This results in the significant contextual information loss for certain layers, leading to notable performance decline. To address this, we present PrefixKV. It reframes the challenge of determining KV cache sizes for all layers into the task of searching for the optimal global prefix configuration. With an adaptive layer-wise KV retention recipe based on binary search, the maximum contextual information can thus be preserved in each layer, facilitating the generation. Extensive experiments demonstrate that our method achieves the state-of-the-art performance compared with others. It exhibits superior inference efficiency and generation quality trade-offs, showing promising potential for practical applications. 
Code will be publicly available.", "arxiv_id": "2412.03409v4", "arxiv_authors": ["Ao Wang", "Hui Chen", "Jiaxin Li", "Jianchao Tan", "Kefeng Zhang", "Xunliang Cai", "Zijia Lin", "Jungong Han", "Guiguang Ding"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ab"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1042167, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a81a"}, "filepath": "data/2505.23155v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994223412712454, "type": "Poster", "name": "PreFM: Online Audio-Visual Event Parsing via Predictive Future Modeling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116375", "abstract": "Audio-visual event parsing plays a crucial role in understanding multimodal video content, but existing methods typically rely on offline processing of entire videos with huge model sizes, limiting their real-time applicability. We introduce Online Audio-Visual Event Parsing (On-AVEP), a novel paradigm for parsing audio, visual, and audio-visual events by sequentially analyzing incoming video streams. The On-AVEP task necessitates models with two key capabilities: (1) Accurate online inference, to effectively distinguish events with unclear and limited context in online settings, and (2) Real-time efficiency, to balance high performance with computational constraints. To cultivate these, we propose the $\\textbf{Pre}$dictive $\\textbf{F}$uture $\\textbf{M}$odeling (PreFM) framework featured by (a) predictive multimodal future modeling to infer and integrate beneficial future audio-visual cues, thereby enhancing contextual understanding and (b) modality-agnostic robust representation along with focal temporal prioritization to improve precision and generalization. Extensive experiments on the UnAV-100 and LLP datasets show PreFM significantly outperforms state-of-the-art methods by a large margin with significantly fewer parameters, offering an insightful approach for real-time multimodal video understanding.", "arxiv_id": "2505.23155v2", "arxiv_authors": ["Xiao Yu", "Yan Fang", "Xiaojie Jin", "Yao Zhao", "Yunchao Wei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ac"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1109573, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a81b"}, "filepath": "data/2510.20887v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999790006134903, "type": "Poster", "name": "Preventing Shortcuts in Adapter Training via .. well .. Providing the Shortcuts", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117083", "abstract": "Adapter-based training has emerged as a key mechanism for extending the capabilities of powerful foundation image generators, enabling personalized and stylized text-to-image synthesis.These adapters are typically trained to capture a specific target attribute, such as subject identity, using single-image reconstruction objectives. 
However, because the input image inevitably contains a mixture of visual factors, adapters are prone to entangling the target attribute with incidental ones such as pose, expression, or lighting. This spurious correlation problem limits generalization and undermines modularity: an identity adapter unwittingly encodes pose and expression, failing to adhere to prompts that require changing the subject's pose and expression. In this work, we uncover a simple yet effective solution: provide the very shortcuts we wish to eliminate. In *Shortcut-Rerouted Adapter Training*, confounding factors are routed through auxiliary modules, such as ControlNet or LoRA, during training, eliminating the incentive for the adapter to internalize them. Applied to tasks like facial and full-body identity injection, our approach improves generation quality, diversity, and prompt adherence. These results point to a general design principle in the era of large models: when seeking disentangled representations, the most effective path may be to establish shortcuts for what should *not* be learned.", "arxiv_id": "2510.20887v1", "arxiv_authors": ["Anujraaj Argo Goyal", "Guocheng Gordon Qian", "Huseyin Coskun", "Aarush Gupta", "Himmy Tam", "Daniil Ostashev", "Ju Hu", "Dhritiman Sagar", "Sergey Tulyakov", "Kfir Aberman", "Kuan-Chieh Jackson Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ad"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1031252, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a81c"}, "filepath": "data/2509.15607v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994046469134357, "type": "Poster", "name": "PRIMT: Preference-based Reinforcement Learning with Multimodal Feedback and Trajectory Synthesis from Foundation Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119903", "abstract": "Preference-based reinforcement learning (PbRL) has emerged as a promising paradigm for teaching robots complex behaviors without reward engineering. However, its effectiveness is often limited by two critical challenges: the reliance on extensive human input and the inherent difficulties in resolving query ambiguity and credit assignment during reward learning. In this paper, we introduce PRIMT, a PbRL framework designed to overcome these challenges by leveraging foundation models (FMs) for multimodal synthetic feedback and trajectory synthesis. Unlike prior approaches that rely on single-modality FM evaluations, PRIMT employs a hierarchical neuro-symbolic fusion strategy, integrating the complementary strengths of vision-language models (VLMs) and large language models (LLMs) in evaluating robot behaviors for more reliable and comprehensive feedback. PRIMT also incorporates foresight trajectory generation to warm-start the trajectory buffer with bootstrapped samples, reducing early-stage query ambiguity, and hindsight trajectory augmentation for counterfactual reasoning with a causal auxiliary loss to improve credit assignment. We evaluate PRIMT on 2 locomotion and 6 manipulation tasks on various benchmarks, demonstrating superior performance over FM-based and scripted baselines. 
Website at https://sites.google.com/view/PRIMT.", "arxiv_id": "2509.15607v1", "arxiv_authors": ["Ruiqi Wang", "Dezhong Zhao", "Ziqin Yuan", "Tianyu Shao", "Guohua Chen", "Dominic Kao", "Sungeun Hong", "Byung-Cheol Min"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ae"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1082357, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a81d"}, "filepath": "data/2506.00512v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993728181248019, "type": "Poster", "name": "Pro3D-Editor: A Progressive Framework for Consistent and Precise 3D Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118793", "abstract": "Text-guided 3D editing aims to locally modify 3D objects based on editing prompts, which has significant potential for applications in 3D game and film domains. Existing methods typically follow a view-agnostic paradigm: editing 2D view images indiscriminately and projecting them back into 3D space. However, the view-agnostic paradigm neglects view consistency and view-specific characteristics, resulting in spatial inconsistencies and imprecise control over edited regions. In this study, we argue that a progressive view-oriented paradigm can effectively address these issues by projecting the editing information from an editing-sensitive view to other editing-insensitive views. Based on this paradigm, we design Pro3D-Editor, a new framework. Extensive experiments demonstrate that our method outperforms existing approaches in terms of editing accuracy and spatial consistency.", "arxiv_id": "2506.00512v2", "arxiv_authors": ["Yang Zheng", "Mengqi Huang", "Nan Chen", "Zhendong Mao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3af"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1181207, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a81e"}, "filepath": "data/2501.01999v3.png", "tags": [], "_media_type": "image", "_rand": 0.9997521432677554, "type": "Poster", "name": "Probing Equivariance and Symmetry Breaking in Convolutional Networks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116715", "abstract": "In this work, we explore the trade-offs of explicit structural priors, particularly group-equivariance. We address this through theoretical analysis and a comprehensive empirical study. To enable controlled and fair comparisons, we introduce \texttt{Rapidash}, a unified group convolutional architecture that allows for different variants of equivariant and non-equivariant models. Our results suggest that more constrained equivariant models outperform less constrained alternatives when aligned with the geometry of the task, and increasing representation capacity does not fully eliminate performance gaps. We see improved performance of models with equivariance and symmetry-breaking through tasks like segmentation, regression, and generation across diverse datasets. 
Explicit \\textit{symmetry breaking} via geometric reference frames consistently improves performance, while \\textit{breaking equivariance} through geometric input features can be helpful when aligned with task geometry. Our results provide task-specific performance trends that offer a more nuanced way for model selection.", "arxiv_id": "2501.01999v3", "arxiv_authors": ["Sharvaree Vadgama", "Mohammad Mohaiminul Islam", "Domas Buracas", "Christian Shewmake", "Artem Moskalev", "Erik Bekkers"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1003228, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a81f"}, "filepath": "data/2509.17864v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993066062553583, "type": "Poster", "name": "ProDyG: Progressive Dynamic Scene Reconstruction via Gaussian Splatting from Monocular Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120014", "abstract": "Achieving truly practical dynamic 3D reconstruction requires online operation, global pose and map consistency, detailed appearance modeling, and the flexibility to handle both RGB and RGB-D inputs. However, existing SLAM methods typically merely remove the dynamic parts or require RGB-D input, while offline methods are not scalable to long video sequences, and current transformer-based feedforward methods lack global consistency and appearance details. To this end, we achieve online dynamic scene reconstruction by disentangling the static and dynamic parts within a SLAM system. The poses are tracked robustly with a novel motion masking strategy, and dynamic parts are reconstructed leveraging a progressive adaptation of a Motion Scaffolds graph. Our method yields novel view renderings competitive to offline methods and achieves on-par tracking with state-of-the-art dynamic SLAM methods.", "arxiv_id": "2509.17864v1", "arxiv_authors": ["Shi Chen", "Erik Sandstr\u00f6m", "Sandro Lombardi", "Siyuan Li", "Martin R. Oswald"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060031, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a820"}, "filepath": "data/2412.01930v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995456302428612, "type": "Poster", "name": "PROFIT: A Specialized Optimizer for Deep Fine Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117387", "abstract": "Fine-tuning pre-trained models has become invaluable in computer vision and robotics. Recent fine-tuning approaches focus on improving efficiency rather than accuracy by using a mixture of smaller learning rates or frozen backbones. To return the spotlight to model accuracy, we present PROFIT (Proximally Restricted Optimizer For Iterative Training), one of the first optimizers specifically designed for incrementally fine-tuning converged models on new tasks or datasets. 
Unlike traditional optimizers such as SGD or Adam, which make minimal assumptions due to random initialization, PROFIT leverages the structure of a converged model to regularize the optimization process, leading to improved results. By employing a simple temporal gradient orthogonalization process, PROFIT outperforms traditional fine-tuning methods across various tasks: image classification, representation learning, and large-scale motion prediction. Moreover, PROFIT is encapsulated within the optimizer logic, making it easily integrated into any training pipeline with minimal engineering effort. A new class of fine-tuning optimizers like PROFIT can drive advancements as fine-tuning and incremental training become increasingly prevalent, reducing reliance on costly model training from scratch.", "arxiv_id": "2412.01930v2", "arxiv_authors": ["Anirudh S Chakravarthy", "Shuai Kyle Zheng", "Xin Huang", "Sachithra Hemachandra", "Xiao Zhang", "Yuning Chai", "Zhao Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1587632, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a821"}, "filepath": "data/2505.22342v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996678891321139, "type": "Poster", "name": "Progressive Data Dropout: An Embarrassingly Simple Approach to Train Faster", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117780", "abstract": "Training deep neural networks is computationally expensive, often requiring significant time and energy to converge on large datasets. In this paper, we propose Progressive Data Dropout, an embarrassingly simple yet effective approach to improve training efficiency by progressively discarding subsets of data across epochs. We explore three variants: (1) a curriculum-inspired method that initially focuses on hard examples\u2014those misclassified or predicted with low confidence; (2) a scalar-based decay method that randomly drops a fixed proportion of data each epoch; and (3) a hybrid approach that mimics the schedule of the first method but drops data at random rather than based on difficulty. All three approaches use the full training dataset only in the last epochs. Despite its simplicity, the third variant achieves the best performance in our experiments. Remarkably, our approach reduces the number of effective epochs to as little as 12.4\\% of the baseline measured in 'Effective Epochs', a hardware-independent proxy for backpropagation effort while improving accuracy by up to 4.82\\%, depending on the model and dataset. Our approach requires no changes to model architecture or optimizer, and can be applied across standard training pipelines. These results demonstrate that carefully designed data dropout strategies can substantially reduce training costs while enhancing generalization. 
Code: https://anonymous.4open.science/r/LearningWithRevision-1B1B.", "arxiv_id": "2505.22342v3", "arxiv_authors": ["Shriram M Sathiyanarayanan", "Xinyue Hao", "Shihao Hou", "Yang Lu", "Laura Sevilla-Lara", "Anurag Arnab", "Shreyank N Gowda"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.636Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1031988, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a822"}, "filepath": "data/2502.01218v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997552131644557, "type": "Poster", "name": "Provable Ordering and Continuity in Vision-Language Pretraining for Generalizable Embodied Agents", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120019", "abstract": "Pre-training vision-language representations on human action videos has emerged as a promising approach to reduce reliance on large-scale expert demonstrations for training embodied agents. However, prior methods often employ time contrastive learning based on goal-reaching heuristics, progressively aligning language instructions from the initial to the final frame. This overemphasis on future frames can result in erroneous vision-language associations, as actions may terminate early or include irrelevant moments in the end. To address this issue, we propose Action Temporal Coherence Learning (AcTOL) to learn ordered and continuous vision-language representations without rigid goal-based constraint. AcTOL treats a video as a continuous trajectory where it (1) contrasts semantic differences between frames to reflect their natural ordering, and (2) imposes a local Brownian bridge constraint to ensure smooth transitions across intermediate frames. Extensive imitation learning experiments on both simulated and real robots show that the pretrained features significantly enhance downstream manipulation tasks with high robustness to different linguistic styles of instructions, offering a viable pathway toward generalized embodied agents. We provide source code and demo videos in the supplemental material for reference.", "arxiv_id": "2502.01218v2", "arxiv_authors": ["Zhizhen Zhang", "Lei Zhu", "Zhen Fang", "Zi Huang", "Yadan Luo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b4"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1386070, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a823"}, "filepath": "data/2508.10898v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998903006702087, "type": "Poster", "name": "Puppeteer: Rig and Animate Your 3D Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117585", "abstract": "Modern interactive applications increasingly demand dynamic 3D content, yet the transformation of static 3D models into animated assets constitutes a significant bottleneck in content creation pipelines. While recent advances in generative AI have revolutionized static 3D model creation, rigging and animation continue to depend heavily on expert intervention. 
We present Puppeteer, a comprehensive framework that addresses both automatic rigging and animation for diverse 3D objects. Our system first predicts plausible skeletal structures via an auto-regressive transformer that introduces a joint-based tokenization strategy for compact representation and a hierarchical ordering methodology with stochastic perturbation that enhances bidirectional learning capabilities. It then infers skinning weights via an attention-based architecture incorporating topology-aware joint attention that explicitly encodes skeletal hierarchical relationships. Finally, we complement these rigging advances with a differentiable optimization-based animation pipeline that generates stable, high-fidelity animations while requiring only a fraction of the computational resources demanded by existing approaches. Extensive evaluations across multiple benchmarks demonstrate that our method significantly outperforms state-of-the-art techniques in both skeletal prediction accuracy and skinning quality. The system robustly processes diverse 3D content, ranging from professionally designed game assets to AI-generated shapes, producing temporally coherent animations devoid of jittering prevalent in existing methods.", "arxiv_id": "2508.10898v1", "arxiv_authors": ["Chaoyue Song", "Xiu Li", "Fan Yang", "Zhongcong Xu", "Jiacheng Wei", "Fayao Liu", "Jiashi Feng", "Guosheng Lin", "Jianfeng Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2041561, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a824"}, "filepath": "data/2506.23863v1.png", "tags": [], "_media_type": "image", "_rand": 0.999776994668079, "type": "Poster", "name": "Puzzles: Unbounded Video-Depth Augmentation for Scalable End-to-End 3D Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119850", "abstract": "Multi-view 3D reconstruction remains a core challenge in computer vision. Recent methods, such as DUSt3R and its successors, directly regress pointmaps from image pairs without relying on known scene geometry or camera parameters. However, the performance of these models is constrained by the diversity and scale of available training data. In this work, we introduce Puzzles, a data augmentation strategy that synthesizes an unbounded volume of high-quality, posed video-depth data from just a single image or video clip. By simulating diverse camera trajectories and realistic scene geometry through targeted image transformations, Puzzles significantly enhances data variety. Extensive experiments show that integrating Puzzles into existing video\u2011based 3D reconstruction pipelines consistently boosts performance, all without modifying the underlying network architecture. 
Notably, models trained on only 10% of the original data, augmented with Puzzles, achieve accuracy comparable to those trained on the full dataset.", "arxiv_id": "2506.23863v1", "arxiv_authors": ["Jiahao Ma", "Lei Wang", "Miaomiao liu", "David Ahmedt-Aristizabal", "Chuong Nguyen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1135241, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a825"}, "filepath": "data/2503.22679v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999910793715879, "type": "Poster", "name": "Q-Insight: Understanding Image Quality via Visual Reinforcement Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119363", "abstract": "Image quality assessment (IQA) focuses on the perceptual visual quality of images, playing a crucial role in downstream tasks such as image reconstruction, compression, and generation. The rapid advancement of multi-modal large language models (MLLMs) has significantly broadened the scope of IQA, moving toward comprehensive image quality understanding that incorporates content analysis, degradation perception, and comparison reasoning beyond mere numerical scoring. Previous MLLM-based methods typically either generate numerical scores lacking interpretability or heavily rely on supervised fine-tuning (SFT) using large-scale annotated datasets to provide descriptive assessments, limiting their flexibility and applicability. In this paper, we propose Q-Insight, a reinforcement learning-based model built upon group relative policy optimization (GRPO), which demonstrates strong visual reasoning capability for image quality understanding while requiring only a limited amount of rating scores and degradation labels. By jointly optimizing score regression and degradation perception tasks with carefully designed reward functions, our approach effectively exploits their mutual benefits for enhanced performance. Extensive experiments demonstrate that Q-Insight substantially outperforms existing state-of-the-art methods on both score regression and degradation perception tasks, while exhibiting impressive zero-shot generalization and superior comparison reasoning capability. 
The code and models will be made available.", "arxiv_id": "2503.22679v2", "arxiv_authors": ["Weiqi Li", "Xuanyu Zhang", "Shijie Zhao", "Yabin Zhang", "Junlin Li", "Li Zhang", "Jian Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1224120, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a826"}, "filepath": "data/2506.10977v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994688567041913, "type": "Poster", "name": "QuadricFormer: Scene as Superquadrics for 3D Semantic Occupancy Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116903", "abstract": "3D occupancy prediction is crucial for robust autonomous driving systems due to its comprehensive perception of environmental structures and semantics. Most existing methods employ dense grid-based scene representations, which ignore the inherent sparsity of driving scenes and suffer from low efficiency. Recent works explore object-centric representations based on sparse Gaussians, but the ellipsoidal shape prior of Gaussians limits their ability to model diverse structures. In real-world driving scenes, objects exhibit rich geometries (e.g., planes, cuboids, and irregular shapes), requiring excessive overlapping ellipsoidal Gaussians for accurate representation, which leads to inefficient scene representation. To address this, we propose to use geometrically expressive superquadrics as scene representation primitives instead of Gaussians, which can efficiently represent complex structures without much overlap. We develop a probabilistic superquadric mixture model, which interprets each superquadric as an occupancy probability distribution of its neighborhood with corresponding geometry, and calculates semantics through probabilistic mixture. We then employ a superquadric-based model (QuadricFormer) for efficient 3D occupancy prediction, and design a pruning-and-splitting module to further improve modeling efficiency by concentrating superquadrics in occupied regions. Extensive experiments on the nuScenes dataset demonstrate that QuadricFormer achieves state-of-the-art performance while maintaining superior efficiency.", "arxiv_id": "2506.10977v1", "arxiv_authors": ["Sicheng Zuo", "Wenzhao Zheng", "Xiaoyong Han", "Longchao Yang", "Yong Pan", "Jiwen Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3091213, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a827"}, "filepath": "data/2503.00743v2.png", "tags": [], "_media_type": "image", "_rand": 0.999721659535619, "type": "Poster", "name": "Quality-Driven Curation of Remote Sensing Vision-Language Data via Learned Scoring Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118657", "abstract": "Vision-Language Models (VLMs) have demonstrated great potential in interpreting remote sensing (RS) images through language-guided semantic. 
However, the effectiveness of these VLMs critically depends on high-quality image-text training data that captures rich semantic relationships between visual content and language descriptions. Unlike natural images, RS lacks large-scale interleaved image-text pairs from web data, making data collection challenging. While current approaches rely primarily on rule-based methods or flagship VLMs for data synthesis, a systematic framework for automated quality assessment of such synthetically generated RS vision-language data is notably absent. To fill this gap, we propose a novel score model trained on large-scale RS vision-language preference data for automated quality assessment. Our empirical results demonstrate that fine-tuning CLIP or advanced VLMs (e.g., Qwen2-VL) with the top 30% of data ranked by our score model achieves superior accuracy compared to both full-data fine-tuning and CLIP-score-based ranking approaches. Furthermore, we demonstrate applications of our scoring model for reinforcement learning (RL) training and best-of-N (BoN) test-time scaling, enabling significant improvements in VLM performance for RS tasks. Our code, model, and dataset will be made publicly available.", "arxiv_id": "2503.00743v2", "arxiv_authors": ["Dilxat Muhtar", "Enzhuo Zhang", "Zhenshi Li", "Feng Gu", "Yanglangxing He", "Pengfeng Xiao", "Xueliang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3b9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1001197, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a828"}, "filepath": "data/2508.12720v3.png", "tags": [], "_media_type": "image", "_rand": 0.9997481076714456, "type": "Poster", "name": "Quantifying and Alleviating Co-Adaptation in Sparse-View 3D Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118933", "abstract": "3D Gaussian Splatting (3DGS) has demonstrated impressive performance in novel view synthesis under dense-view settings. However, in sparse-view scenarios, despite the realistic renderings in training views, 3DGS occasionally manifests appearance artifacts in novel views. This paper investigates the appearance artifacts in sparse-view 3DGS and uncovers a core limitation of current approaches: the optimized Gaussians are overly-entangled with one another to aggressively fit the training views, which leads to a neglect of the real appearance distribution of the underlying scene and results in appearance artifacts in novel views. The analysis is based on a proposed metric, termed Co-Adaptation Score (CA), which quantifies the entanglement among Gaussians, i.e., co-adaptation, by computing the pixel-wise variance across multiple renderings of the same viewpoint, with different random subsets of Gaussians. The analysis reveals that the degree of co-adaptation is naturally alleviated as the number of training views increases. Based on the analysis, we propose two lightweight strategies to explicitly mitigate the co-adaptation in sparse-view 3DGS: (1) random gaussian dropout; (2) multiplicative noise injection to the opacity. Both strategies are designed to be plug-and-play, and their effectiveness is validated across various methods and benchmarks. 
We hope that our insights into the co-adaptation effect will inspire the community to achieve a more comprehensive understanding of sparse-view 3DGS.", "arxiv_id": "2508.12720v3", "arxiv_authors": ["Kangjie Chen", "Yingji Zhong", "Zhihao Li", "Jiaqi Lin", "Youyu Chen", "Minghan Qin", "Haoqian Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ba"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1076676, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a829"}, "filepath": "data/2506.05198v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994469523372429, "type": "Poster", "name": "Quantifying Cross-Modality Memorization in Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117170", "abstract": "Understanding what and how neural networks memorize during training is crucial, both from the perspective of unintentional memorization of potentially sensitive information and from the standpoint of effective knowledge acquisition for real-world, knowledge-intensive tasks. While previous studies primarily investigate memorization within a single modality, such as text memorization in large language models or image memorization in diffusion models, unified multimodal models are becoming increasingly prevalent in practical applications. In this work, we focus on the unique characteristics of cross-modality memorization and conduct a systematic study centered on vision-language models. To facilitate controlled experiments, we first introduce a synthetic persona dataset comprising diverse synthetic person images and textual descriptions. We quantify factual knowledge memorization and cross-modal transferability by training models on a single modality and evaluating their performance in the other. Our results reveal that facts learned in one modality transfer to the other, but a significant gap exists between recalling information in the source and target modalities. Furthermore, we observe that this gap exists across various scenarios, including more capable models, machine unlearning, and the multi-hop case. At the end, we propose a baseline method to mitigate this challenge. 
We hope our study can inspire future research on developing more robust multimodal learning techniques to enhance cross-modal transferability.", "arxiv_id": "2506.05198v1", "arxiv_authors": ["Yuxin Wen", "Yangsibo Huang", "Tom Goldstein", "Ravi Kumar", "Badih Ghazi", "Chiyuan Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3bb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1050492, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a82a"}, "filepath": "data/2506.02164v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995229355361459, "type": "Poster", "name": "Quantifying Task-relevant Similarities in Representations Using Decision Variable Correlations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117081", "abstract": "Previous studies have compared the brain and deep neural networks trained on image classification. Intriguingly, while some suggest that their representations are highly similar, others argued the opposite. Here, we propose a new approach to characterize the similarity of the decision strategies of two observers (models or brains) using decision variable correlation (DVC). DVC quantifies the correlation between decoded decisions on individual samples in a classification task and thus can capture task-relevant information rather than general representational alignment. We evaluate this method using monkey V4/IT recordings and models trained on image classification tasks.We find that model\u2013model similarity is comparable to monkey-monkey similarity, whereas model\u2013monkey similarity is consistently lower and, surprisingly, decreases with increasing ImageNet-1k performance. While adversarial training enhances robustness, it does not improve model\u2013monkey similarity in task-relevant dimensions; however, it markedly increases model\u2013model similarity. Similarly, pre-training on larger datasets does not improve model\u2013monkey similarity. These results suggest a fundamental divergence between the task-relevant representations in monkey V4/IT and those learned by models trained on image classification tasks.", "arxiv_id": "2506.02164v1", "arxiv_authors": [" Yu", " Qian", "Wilson S. Geisler", "Xue-Xin Wei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3bc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 933353, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a82b"}, "filepath": "data/2505.21647v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999794618178862, "type": "Poster", "name": "QuARI: Query Adaptive Retrieval Improvement", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118001", "abstract": "Massive-scale pretraining has made vision-language models increasingly popular for image-to-image and text-to-image retrieval across a broad collection of domains. However, these models do not perform well when used for challenging retrieval tasks, such as instance retrieval in very large-scale image collections. 
Recent work has shown that linear transformations of VLM features trained for instance retrieval can improve performance by emphasizing subspaces that relate to the domain of interest. In this paper, we explore a more extreme version of this specialization by learning to map a given query to a query-specific feature space transformation. Because this transformation is linear, it can be applied with minimal computational cost to millions of image embeddings, making it effective for large-scale retrieval or re-ranking. Results show that this method consistently outperforms state-of-the-art alternatives, including those that require many orders of magnitude more computation at query time.", "arxiv_id": "2505.21647v1", "arxiv_authors": ["Eric Xing", "Abby Stylianou", "Robert Pless", "Nathan Jacobs"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3bd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1021037, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a82c"}, "filepath": "data/2505.16673v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996320151593414, "type": "Poster", "name": "R1-Share: Incentivizing Reasoning Capabilities of Multimodal Large Language Models via Share-GRPO", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116391", "abstract": "In this work, we aim to incentivize the reasoning ability of Multimodal Large Language Models (MLLMs) via reinforcement learning (RL) and develop an effective approach that mitigates the sparse reward and advantage vanishing issues during RL. To this end, we propose Share-GRPO, a novel RL approach that tackles these issues by exploring and sharing diverse reasoning trajectories over an expanded question space. Specifically, Share-GRPO first expands the question space for a given question via data transformation techniques, and then encourages the MLLM to effectively explore diverse reasoning trajectories over the expanded question space and shares the discovered reasoning trajectories across the expanded questions during RL. In addition, Share-GRPO also shares reward information during advantage computation, which estimates solution advantages hierarchically across and within question variants, allowing more accurate estimation of relative advantages and improving the stability of policy training. 
Extensive evaluations over 6 widely-used reasoning benchmarks showcase the superior performance of our method.", "arxiv_id": "2505.16673v1", "arxiv_authors": ["Huanjin Yao", "Qixiang Yin", "Jingyi Zhang", "Min Yang", "Yibo Wang", "Wenhao Wu", "Fei Su", "Li Shen", "Minghui Qiu", "Dacheng Tao", "Jiaxing Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3be"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.637Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1077575, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a82d"}, "filepath": "data/2502.13144v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994061939401884, "type": "Poster", "name": "RAD: Training an End-to-End Driving Policy via Large-Scale 3DGS-based Reinforcement Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119535", "abstract": "Existing end-to-end autonomous driving (AD) algorithms typically follow the Imitation Learning (IL) paradigm, which faces challenges such as causal confusion and an open-loop gap. In this work, we establish a 3DGS-based closed-loop Reinforcement Learning (RL) training paradigm. By leveraging 3DGS techniques, we construct a photorealistic digital replica of the real physical world, enabling the AD policy to extensively explore the state space and learn to handle out-of-distribution scenarios through large-scale trial and error. To enhance safety, we design specialized rewards to guide the policy in effectively responding to safety-critical events and understanding real-world causal relationships. To better align with human driving behavior, we incorporate IL into RL training as a regularization term. We introduce a closed-loop evaluation benchmark consisting of diverse, previously unseen 3DGS environments. Compared to IL-based methods, RAD achieves stronger performance in most closed-loop metrics, particularly exhibiting a 3x lower collision rate. Abundant closed-loop results are presented in the supplementary material. Code will be released to facilitate future research.", "arxiv_id": "2502.13144v2", "arxiv_authors": ["Hao Gao", "Shaoyu Chen", "Bo Jiang", "Bencheng Liao", "Yiang Shi", "Xiaoyang Guo", "Yuechuan Pu", "Haoran Yin", "Xiangyu Li", "Xinbang Zhang", "Ying Zhang", "Wenyu Liu", "Qian Zhang", "Xinggang Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3bf"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1110673, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a82e"}, "filepath": "data/2504.07416v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991225157652558, "type": "Poster", "name": "RadZero: Similarity-Based Cross-Attention for Explainable Vision-Language Alignment in Radiology with Zero-Shot Multi-Task Capability", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117595", "abstract": "Recent advancements in multi-modal models have significantly improved vision-language (VL) alignment in radiology. 
However, existing approaches struggle to effectively utilize complex radiology reports for learning and offer limited interpretability through attention probability visualizations. To address these challenges, we introduce $\textbf{RadZero}$, a novel framework for VL alignment in radiology with zero-shot multi-task capability. A key component of our approach is $\textbf{VL-CABS}$ ($\textbf{V}$ision-$\textbf{L}$anguage $\textbf{C}$ross-$\textbf{A}$ttention $\textbf{B}$ased on $\textbf{S}$imilarity), which aligns text embeddings with local image features for interpretable, fine-grained VL reasoning. RadZero leverages large language models to extract concise semantic sentences from radiology reports and employs multi-positive contrastive training to effectively capture relationships between images and multiple relevant textual descriptions. It uses a pre-trained vision encoder with additional trainable Transformer layers, allowing efficient high-resolution image processing. By computing similarity between text embeddings and local image patch features, VL-CABS enables zero-shot inference with similarity probability for classification, and pixel-level VL similarity maps for grounding and segmentation. Experimental results on public chest radiograph benchmarks show that RadZero outperforms state-of-the-art methods in zero-shot classification, grounding, and segmentation. Furthermore, VL similarity map analysis highlights the potential of VL-CABS for improving explainability in VL alignment. Additionally, qualitative evaluation demonstrates RadZero\u2019s capability for open-vocabulary semantic segmentation, further validating its effectiveness in medical imaging.", "arxiv_id": "2504.07416v2", "arxiv_authors": ["Jonggwon Park", "Soobum Kim", "Byungmu Yoon", "Kyoyun Choi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1039173, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a82f"}, "filepath": "data/2507.05193v3.png", "tags": [], "_media_type": "image", "_rand": 0.999376289398474, "type": "Poster", "name": "RAM-W600: A Multi-Task Wrist Dataset and Benchmark for Rheumatoid Arthritis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121569", "abstract": "Rheumatoid arthritis (RA) is a common autoimmune disease that has been the focus of research in computer-aided diagnosis (CAD) and disease monitoring. In clinical settings, conventional radiography (CR) is widely used for the screening and evaluation of RA due to its low cost and accessibility. The wrist is a critical region for the diagnosis of RA. However, CAD research in this area remains limited, primarily due to the challenges in acquiring high-quality instance-level annotations. (i) The wrist comprises numerous small bones with narrow joint spaces, complex structures, and frequent overlaps, requiring detailed anatomical knowledge for accurate annotation. 
(ii) Disease progression in RA often leads to osteophyte, bone erosion (BE), and even bony ankylosis, which alter bone morphology and increase annotation difficulty, necessitating expertise in rheumatology. This work presents a multi-task dataset for wrist bones in CR, including two tasks: (i) wrist bone instance segmentation and (ii) Sharp/van der Heijde (SvdH) BE scoring, which is the first public resource for wrist bone instance segmentation. This dataset comprises 621 wrist conventional radiographs of 227 patients from four medical centers, with pixel-level instance segmentation annotations for 443 images and SvdH BE scores for 548 images. This dataset can potentially support a wide range of research tasks related to RA, including joint space narrowing (JSN) progression quantification, BE detection, bone deformity evaluation, and osteophyte detection. It may also be applied to other wrist-related tasks, such as carpal bone fracture localization. We hope this dataset will significantly lower the barrier to research on wrist RA and accelerate progress in CAD research within the RA-related domain. Benchmark \\& Code: https://github.com/YSongxiao/RAM-W600 Data \\& Dataset Card: https://huggingface.co/datasets/TokyoTechMagicYang/RAM-W600", "arxiv_id": "2507.05193v3", "arxiv_authors": ["Songxiao Yang", "Haolin Wang", "Yao Fu", "Ye Tian", "Tamotsu Kamishima", "Masayuki Ikebe", "Yafei Ou", "Masatoshi Okutomi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c1"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1107314, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a830"}, "filepath": "data/2510.18353v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993102196601626, "type": "Poster", "name": "Ranking-based Preference Optimization for Diffusion Models from Implicit User Feedback", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118987", "abstract": "Direct preference optimization (DPO) methods have shown strong potential in aligning text-to-image diffusion models with human preferences by training on paired comparisons. These methods improve training stability by avoiding the REINFORCE algorithm but still struggle with challenges such as accurately estimating image probabilities due to the non-linear nature of the sigmoid function and the limited diversity of offline datasets. In this paper, we introduce Diffusion Denoising Ranking Optimization (Diffusion-DRO), a new preference learning framework grounded in inverse reinforcement learning. Diffusion-DRO removes the dependency on a reward model by casting preference learning as a ranking problem, thereby simplifying the training objective into a denoising formulation and overcoming the non-linear estimation issues found in prior methods. Moreover, Diffusion-DRO uniquely integrates offline expert demonstrations with online policy-generated negative samples, enabling it to effectively capture human preferences while addressing the limitations of offline data. 
Comprehensive experiments show that Diffusion-DRO delivers improved generation quality across a range of challenging and unseen prompts, outperforming state-of-the-art baselines in both quantitative metrics and user studies.", "arxiv_id": "2510.18353v1", "arxiv_authors": ["Yi-Lun Wu", "Bo-Kai Ruan", "Chiang Tseng", "Hong-Han Shuai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1087432, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a831"}, "filepath": "data/2506.07490v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998753563287096, "type": "Poster", "name": "RAPID Hand: Robust, Affordable, Perception-Integrated, Dexterous Manipulation Platform for Embodied Intelligence", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117884", "abstract": "This paper addresses the scarcity of low-cost but high-dexterity platforms for collecting real-world multi-fingered robot manipulation data towards generalist robot autonomy. To this end, we propose RAPID Hand, a co-optimized hardware and software platform in which the compact 20-DoF hand, robust whole-hand perception, and high-DoF teleoperation interface are jointly designed, yielding a robust, affordable, and perception-integrated robotic hand that meets both the data and algorithmic requirements for training general-purpose manipulation policies. Specifically, RAPID Hand adopts a compact and practical hand ontology and a hardware-level perception framework that stably integrates wrist-mounted vision, fingertip tactile sensing, and proprioception with sub-7 ms latency and spatial alignment. Collecting high-quality demonstrations on high-DoF hands is challenging: existing teleoperation methods, primarily designed for under-actuated or low-DoF hands, struggle with precision and stability when applied to complex multi-fingered systems. We address this by co-optimizing hand design, perception integration, and the teleoperation interface through a universal actuation scheme, custom perception electronics, and two retargeting constraints. We evaluate the platform\u2019s hardware, perception, and teleoperation interface. Training a diffusion policy on the collected data shows superior performance over prior works, validating the system\u2019s capability for reliable, high-quality data collection. 
The platform is constructed from low-cost and off-the-shelf components and will be made public to ensure reproducibility and ease of adoption.", "arxiv_id": "2506.07490v1", "arxiv_authors": ["Zhaoliang Wan", "Zetong Bi", "Zida Zhou", "Hao Ren", "Yiming Zeng", "Yihan Li", "Lu Qi", "Xu Yang", "Ming-Hsuan Yang", "Hui Cheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c3"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2912682, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a832"}, "filepath": "data/2505.16394v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995752930002428, "type": "Poster", "name": "Raw2Drive: Reinforcement Learning with Aligned World Models for End-to-End Autonomous Driving (in CARLA v2)", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119310", "abstract": "Reinforcement Learning (RL) can mitigate the causal confusion and distribution shift inherent to imitation learning (IL). However, applying RL to end-to-end autonomous driving (E2E-AD) remains an open problem due to its training difficulty, and IL is still the mainstream paradigm in both academia and industry. Recently, Model-based Reinforcement Learning (MBRL) has demonstrated promising results in neural planning; however, these methods typically require privileged information as input rather than raw sensor data. We fill this gap by designing Raw2Drive, a dual-stream MBRL approach. Initially, we efficiently train an auxiliary privileged world model paired with a neural planner that uses privileged information as input. Subsequently, we introduce a raw sensor world model trained via our proposed Guidance Mechanism, which ensures consistency between the raw sensor world model and the privileged world model during rollouts. Finally, the raw sensor world model combines the prior knowledge embedded in the heads of the privileged world model to effectively guide the training of the raw sensor policy. Raw2Drive is so far the only RL-based end-to-end method on CARLA Leaderboard 2.0 and Bench2Drive, and it achieves state-of-the-art performance.", "arxiv_id": "2505.16394v2", "arxiv_authors": ["Zhenjie Yang", "Xiaosong Jia", "Qifeng Li", "Xue Yang", "Maoqing Yao", "Junchi Yan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c4"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1132182, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a833"}, "filepath": "data/2510.08017v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991610800844313, "type": "Poster", "name": "RayFusion: Ray Fusion Enhanced Collaborative Visual Perception", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116928", "abstract": "Collaborative visual perception methods have gained widespread attention in the autonomous driving community in recent years due to their ability to address sensor limitation problems. 
However, the absence of explicit depth information often makes it difficult for camera-based perception systems, e.g., 3D object detection, to generate accurate predictions. To alleviate the ambiguity in depth estimation, we propose RayFusion, a ray-based fusion method for collaborative visual perception. Using ray occupancy information from collaborators, RayFusion reduces redundancy and false positive predictions along camera rays, enhancing the detection performance of purely camera-based collaborative perception systems. Comprehensive experiments show that our method consistently outperforms existing state-of-the-art models, substantially advancing the performance of collaborative visual perception. Our code will be made publicly available.", "arxiv_id": "2510.08017v1", "arxiv_authors": ["Shaohong Wang", "Bin Lu", "Xinyu Xiao", "Hanzhi Zhong", "Bowen Pang", "Tong Wang", "Zhiyu Xiang", "Hangguan Shan", "Eryun Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1334170, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a834"}, "filepath": "data/2506.05285v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993866898377952, "type": "Poster", "name": "RaySt3R: Predicting Novel Depth Maps for Zero-Shot Object Completion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118332", "abstract": "3D shape completion has broad applications in robotics, digital twin reconstruction, and extended reality (XR). Although recent advances in 3D object and scene completion have achieved impressive results, existing methods lack 3D consistency, are computationally expensive, and struggle to capture sharp object boundaries. Our work (RaySt3R) addresses these limitations by recasting 3D shape completion as a novel view synthesis problem. Specifically, given a single RGB-D image, and a novel viewpoint (encoded as a collection of query rays), we train a feedforward transformer to predict depth maps, object masks, and per-pixel confidence scores for those query rays. RaySt3R fuses these predictions across multiple query views to reconstruct complete 3D shapes. We evaluate RaySt3R on synthetic and real-world datasets, and observe it achieves state-of-the-art performance, outperforming the baselines on all datasets by up to 44% in 3D chamfer distance.", "arxiv_id": "2506.05285v1", "arxiv_authors": ["Bardienus P. 
Duisterhof", "Jan Oberst", "Bowen Wen", "Stan Birchfield", "Deva Ramanan", "Jeffrey Ichnowski"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2625328, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a835"}, "filepath": "data/2505.16770v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995088298922973, "type": "Poster", "name": "RBench-V: A Primary Assessment for Visual Reasoning Models with Multimodal Outputs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121474", "abstract": "The rapid advancement of native multi-modal models and omni-models, exemplified by GPT-4o, Gemini and o3 with their capability to process and generate content across modalities such as text and images, marks a significant milestone in the evolution of intelligence. Systematic evaluation of their multi-modal output capabilities in visual thinking process (a.k.a., multi-modal chain of thought, M-CoT) becomes critically important. However, existing benchmarks for evaluating multi-modal models primarily focus on assessing multi-modal inputs and text-only reasoning process while neglecting the importance of reasoning through multi-modal outputs. In this paper, we present a benchmark, dubbed as RBench-V, designed to assess models\u2019 vision-indispensable reasoning. To conduct RBench-V, we carefully hand-pick 803 questions covering math, physics, counting and games. Unlike problems in previous benchmarks, which typically specify certain input modalities, RBench-V presents problems centered on multi-modal outputs, which require image manipulation, such as generating novel images and constructing auxiliary lines to support reasoning process. We evaluate numerous open- and closed-source models on RBench-V, including o3, Gemini 2.5 pro, Qwen2.5-VL, etc. Even the best-performing model, o3, achieves only 25.8% accuracy on RBench-V, far below the human score of 82.3%, which shows current models struggle to leverage multi-modal reasoning. 
Data and code are available at https://evalmodels.github.io/rbenchv.", "arxiv_id": "2505.16770v2", "arxiv_authors": ["Meng-Hao Guo", "Xuanyu Chu", "Qianrui Yang", "Zhe-Han Mo", "Yiqing Shen", "Pei-lin Li", "Xinjie Lin", "Jinnian Zhang", "Xin-Sheng Chen", "Yi Zhang", "Kiyohiro Nakayama", "Zhengyang Geng", "Houwen Peng", "Han Hu", "Shi-Min Hu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 997663, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a836"}, "filepath": "data/2510.14968v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990355887440866, "type": "Poster", "name": "RDD: Retrieval-Based Demonstration Decomposer for Planner Alignment in Long-Horizon Tasks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115042", "abstract": "To enable robots to achieve long-horizon tasks, recent hierarchical vision-language-action (VLAs) frameworks typically adopt vision-language model (VLM)-based planners to decompose complex manipulation tasks into simple sub-tasks that low-level visuomotor policies can easily handle. Typically, to finetune the VLM planner and let it learn to decompose the target task, a few human demonstrations are provided and will be segmented into sub-tasks by either human annotation or heuristic rules, which are less efficient, and the heuristic sub-tasks could largely deviate from the training data of visuomotor policy, which degrades the task performance. To address these issues, we propose a Retrieval-based Demonstration Decomposer (RDD) that automatically decomposes demonstrations into sub-tasks by aligning the visual features of decomposed sub-task intervals with the training data of low-level visuomotor policies to fully exploit its capability. Our method shows superior performance compared to the state-of-the-art sub-task decomposer on the RLBench benchmark and demonstrates robustness under various settings. Our code and demo videos are available in the supplementary materials.", "arxiv_id": "2510.14968v1", "arxiv_authors": ["Mingxuan Yan", "Yuping Wang", "Zechun Liu", "Jiachen Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c8"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1068870, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a837"}, "filepath": "data/2505.24848v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994232728498558, "type": "Poster", "name": "Reading Recognition in the Wild", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117940", "abstract": "To enable egocentric contextual AI in always-on smart glasses, it is crucial to be able to keep a record of the user's interactions with the world, including during reading. In this paper, we introduce a new task of reading recognition to determine when the user is reading. 
We first introduce the first-of-its-kind large-scale multimodal Reading in the Wild dataset, containing 100 hours of reading and non-reading videos in diverse and realistic scenarios. We then identify three modalities (egocentric RGB, eye gaze, head pose) that can be used to solve the task, and present a flexible transformer model that performs the task using these modalities, either individually or combined. We show that these modalities are relevant and complementary to the task, and investigate how to efficiently and effectively encode each modality. Additionally, we show the usefulness of this dataset towards classifying types of reading, extending current reading understanding studies conducted in constrained settings to larger scale, diversity and realism. Code, model, and data will be public.", "arxiv_id": "2505.24848v2", "arxiv_authors": ["Charig Yang", "Samiul Alam", "Shakhrul Iman Siam", "Michael J. Proulx", "Lambert Mathias", "Kiran Somasundaram", "Luis Pesqueira", "James Fort", "Sheroze Sheriffdeen", "Omkar Parkhi", "Carl Ren", "Mi Zhang", "Yuning Chai", "Richard Newcombe", "Hyo Jin Kim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3c9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2517688, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a838"}, "filepath": "data/2506.01300v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998981526919481, "type": "Poster", "name": "ReAgent-V: A Reward-Driven Multi-Agent Framework for Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119251", "abstract": "Video understanding is fundamental to tasks such as action recognition, video reasoning, and robotic control. Early video understanding methods based on large vision-language models (LVLMs) typically adopt a single-pass reasoning paradigm without dynamic feedback, limiting the model\u2019s capacity to self-correct and adapt in complex scenarios. Recent efforts have attempted to address this limitation by incorporating reward models and reinforcement learning to enhance reasoning, or by employing tool-agent frameworks. However, these approaches face several challenges, including high annotation costs, reward signals that fail to capture real-time reasoning states, and low inference efficiency. To overcome these issues, we propose ReAgent-V, a novel agentic video understanding framework that integrates efficient frame selection with real-time reward generation during inference. These reward signals not only guide iterative answer refinement through a multi-perspective reflection mechanism\u2014adjusting predictions from conservative, neutral, and aggressive viewpoints\u2014but also enable automatic filtering of high-quality data for supervised fine-tuning (SFT), direct preference optimization (DPO), and group relative policy optimization (GRPO). ReAgent-V is lightweight, modular, and extensible, supporting flexible tool integration tailored to diverse tasks. 
Extensive experiments on 12 datasets across three core applications\u2014video understanding, video reasoning enhancement, and vision-language-action model alignment\u2014demonstrate significant gains in generalization and reasoning, with improvements of up to 6.9%, 2.1%, and 9.8%, respectively, highlighting the effectiveness and versatility of the proposed framework.", "arxiv_id": "2506.01300v1", "arxiv_authors": ["Yiyang Zhou", "Yangfan He", "Yaofeng Su", "Siwei Han", "Joel Jang", "Gedas Bertasius", "Mohit Bansal", "Huaxiu Yao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ca"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.638Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1007269, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a839"}, "filepath": "data/2506.07339v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996962879206778, "type": "Poster", "name": "Real-Time Execution of Action Chunking Flow Policies", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117747", "abstract": "Modern AI systems, especially those interacting with the physical world, increasingly require real-time performance. However, the high latency of state-of-the-art generalist models, including recent vision-language-action models (VLAs), poses a significant challenge. While action chunking has enabled temporal consistency in high-frequency control tasks, it does not fully address the latency problem, leading to pauses or out-of-distribution jerky movements at chunk boundaries. This paper presents a novel inference-time algorithm that enables smooth asynchronous execution of action chunking policies. Our method, real-time chunking (RTC), is applicable to any diffusion- or flow-based VLA out of the box with no retraining. It generates the next action chunk while executing the current one, \"freezing\" actions guaranteed to execute and \"inpainting\" the rest. To test RTC, we introduce a new benchmark of 12 highly dynamic tasks in the Kinetix simulator, as well as evaluate 6 challenging real-world bimanual manipulation tasks. Results demonstrate that RTC is fast, performant, and uniquely robust to inference delay, significantly improving task throughput and enabling success in precise tasks --- such as lighting a match --- even in the presence of extreme latency.", "arxiv_id": "2506.07339v1", "arxiv_authors": ["Kevin Black", "Manuel Y. 
Galliker", "Sergey Levine"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3cb"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3477171, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a83a"}, "filepath": "data/2503.06677v4.png", "tags": [], "_media_type": "image", "_rand": 0.9998453747045155, "type": "Poster", "name": "REArtGS: Reconstructing and Generating Articulated Objects via 3D Gaussian Splatting with Geometric and Motion Constraints", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119798", "abstract": "Articulated objects, as prevalent entities in human life, their 3D representations play crucial roles across various applications. However, achieving both high-fidelity textured surface reconstruction and dynamic generation for articulated objects remains challenging for existing methods. In this paper, we present REArtGS, a novel framework that introduces additional geometric and motion constraints to 3D Gaussian primitives, enabling realistic surface reconstruction and generation for articulated objects. Specifically, given multi-view RGB images of arbitrary two states of articulated objects, we first introduce an unbiased Signed Distance Field (SDF) guidance to regularize Gaussian opacity fields, enhancing geometry constraints and improving surface reconstruction quality. Then we establish deformable fields for 3D Gaussians constrained by the kinematic structures of articulated objects, achieving unsupervised generation of surface meshes in unseen states. Extensive experiments on both synthetic and real datasets demonstrate our approach achieves high-quality textured surface reconstruction for given states, and enables high-fidelity surface generation for unseen states. Codes can be found in the supplementary materials and will be made publicly available.", "arxiv_id": "2503.06677v4", "arxiv_authors": ["Di Wu", "Liu Liu", "Zhou Linli", "Anran Huang", "Liangtu Song", "Qiaojun Yu", "Qi Wu", "Cewu Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3cc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1032129, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a83b"}, "filepath": "data/2505.24225v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994167873897323, "type": "Poster", "name": "Reasoning Can Hurt the Inductive Abilities of Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115140", "abstract": "Large Language Models (LLMs) have shown remarkable progress across domains, yet their ability to perform inductive reasoning\u2014inferring latent rules from sparse examples\u2014remains limited. It is often assumed that chain-of-thought (CoT) prompting, as used in Large Reasoning Models (LRMs), enhances such reasoning. We investigate this assumption with creating four controlled, diagnostic game-based tasks\u2014chess, Texas Hold\u2019em, dice games, and blackjack\u2014with hidden human-defined rules. 
We find that CoT reasoning can degrade inductive performance, with LRMs often underperforming their non-reasoning counterparts. To explain this, we present a theoretical framework that reveals how reasoning steps can amplify error through three failure modes: incorrect sub-task decomposition, incorrect sub-task solving, and incorrect final answer summarization. Based on our theoretical and empirical analysis, we introduce structured interventions that adapt CoT generation according to our identified failure types. These interventions improve inductive accuracy without retraining. Our findings suggest that effective (CoT) reasoning depends not only on taking more steps but also on ensuring those steps are well-structured.", "arxiv_id": "2505.24225v1", "arxiv_authors": ["Haibo Jin", "Peiyan Zhang", "Man Luo", "Haohan Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3cd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 972393, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a83c"}, "filepath": "data/2503.20752v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994594844553287, "type": "Poster", "name": "Reason-RFT: Reinforcement Fine-Tuning for Visual Reasoning of Vision Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118345", "abstract": "Visual reasoning abilities play a crucial role in understanding complex multimodal data, advancing both domain-specific applications and artificial general intelligence (AGI). Existing methods enhance Vision-Language Models (VLMs) through Chain-of-Thought (CoT) supervised fine-tuning using meticulously annotated data. However, this approach may lead to overfitting and cognitive rigidity, limiting the model\u2019s generalization ability under domain shifts and reducing real-world applicability. To overcome these limitations, we propose Reason-RFT, a two-stage reinforcement fine-tuning framework for visual reasoning. First, Supervised Fine-Tuning (SFT) with curated CoT data activates the reasoning potential of VLMs. This is followed by reinforcement learning based on Group Relative Policy Optimization (GRPO), which generates multiple reasoning-response pairs to enhance adaptability to domain shifts. To evaluate Reason-RFT, we reconstructed a comprehensive dataset covering visual counting, structural perception, and spatial transformation, serving as a benchmark for systematic assessment across three key dimensions. Experimental results highlight three advantages: (1) performance enhancement, with Reason-RFT achieving state-of-the-art results and outperforming both open-source and proprietary models; (2) generalization superiority, maintaining robust performance under domain shifts across various tasks; and (3) data efficiency, excelling in few-shot learning scenarios and surpassing full-dataset SFT baselines. 
Reason-RFT introduces a novel training paradigm for visual reasoning and marks a significant step forward in multimodal research.", "arxiv_id": "2503.20752v3", "arxiv_authors": ["Huajie Tan", "Yuheng Ji", "Xiaoshuai Hao", "Xiansheng Chen", "Pengwei Wang", "Zhongyuan Wang", "Shanghang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ce"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1155819, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a83d"}, "filepath": "data/2505.12499v5.png", "tags": [], "_media_type": "image", "_rand": 0.9993057697212681, "type": "Poster", "name": "Rebalancing Contrastive Alignment with Learnable Semantic Gaps in Text-Video Retrieval", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119628", "abstract": "Recent advances in text-video retrieval have been largely driven by contrastive learning frameworks. However, existing methods overlook a key source of optimization tension: the separation between text and video distributions in the representation space\u2014referred to as the modality gap\u2014and the prevalence of false negatives in batch sampling. These factors lead to conflicting gradients under the InfoNCE loss, impeding stable alignment. To mitigate this, we propose GARE\u2014a Gap-Aware Retrieval framework that introduces a learnable, pair-specific increment $\\Delta_{ij}$ between text $t_i$ and video $v_j$ to offload the tension from the global anchor representation. We first derive the ideal form of $\\Delta_{ij}$ via a coupling multivariate first-order Taylor approximation of the InfoNCE loss under a trust-region constraint, revealing it as a key mechanism for resolving gradient conflicts by guiding updates along a locally optimal descent direction in the coupled optimization landscape. Due to the expensive cost of directly approximating $\\Delta_{ij}$, we introduce a lightweight neural module conditioned on the semantic gap between each video-text pair, enabling structure-aware correction guided by gradient supervision. To further stabilize learning and promote interpretability, we regularize $\\Delta$ via three components: a trust-region constraint regularization to prevent oscillations, a directional diversity term to expand the semantic difference space, and an information bottleneck over $\\Delta$ to restrict redundant information. 
Experiments across four retrieval benchmarks show that GARE consistently improves alignment accuracy and robustness to noisy supervision, confirming the effectiveness of gap-aware tension unloading.", "arxiv_id": "2505.12499v5", "arxiv_authors": ["Jian Xiao", "Zijie Song", "Jialong Hu", "Hao Cheng", "Jia Li", "Zhenzhen Hu", "Richang Hong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3cf"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1102449, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a83e"}, "filepath": "data/2506.14674v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995294341475188, "type": "Poster", "name": "Recognition through Reasoning: Reinforcing Image Geo-localization with Large Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115287", "abstract": "Previous methods for image geo-localization have typically treated the task as either classification or retrieval, often relying on black-box decisions that lack interpretability. The rise of large vision-language models (LVLMs) has enabled a rethinking of geo-localization as a reasoning-driven task grounded in visual cues. However, two major challenges persist. On the data side, existing reasoning-focused datasets are primarily based on street-view imagery, offering limited scene diversity and constrained viewpoints. On the modeling side, current approaches predominantly rely on supervised fine-tuning, which yields only marginal improvements in reasoning capabilities. To address these challenges, we propose a novel pipeline that constructs a reasoning-oriented geo-localization dataset, $\\textit{MP16-Reason}$, using diverse social media images. We introduce $\\textit{GLOBE}$, $\\textbf{G}$roup-relative policy optimization for $\\textbf{L}$ocatability assessment and $\\textbf{O}$ptimized visual-clue reasoning, yielding $\\textbf{B}$i-objective geo-$\\textbf{E}$nhancement for the VLM in recognition and reasoning. $\\textit{GLOBE}$ incorporates task-specific rewards that jointly enhance locatability assessment, visual clue reasoning, and geolocation accuracy. 
Both qualitative and quantitative results demonstrate that $\\textit{GLOBE}$ outperforms state-of-the-art open-source LVLMs on geo-localization tasks, particularly in diverse visual scenes, while also generating more insightful and interpretable reasoning trajectories.", "arxiv_id": "2506.14674v2", "arxiv_authors": ["Ling Li", "Yao Zhou", "Yuxuan Liang", "Fugee Tsung", "Jiaheng Wei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1112705, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a83f"}, "filepath": "data/2509.24325v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991871725865018, "type": "Poster", "name": "ReCon-GS: Continuum-Preserved Gaussian Streaming for Fast and Compact Reconstruction of Dynamic Scenes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117173", "abstract": "Online free-viewpoint video (FVV) reconstruction is challenged by slow per-frame optimization, inconsistent motion estimation, and unsustainable storage demands. To address these challenges, we propose the Reconfigurable Continuum Gaussian Stream, dubbed ReCon-GS, a novel storage-aware framework that enables high-fidelity online dynamic scene reconstruction and real-time rendering. Specifically, we dynamically allocate multi-level Anchor Gaussians in a density-adaptive fashion to capture inter-frame geometric deformations, thereby decomposing scene motion into compact coarse-to-fine representations. Then, we design a dynamic hierarchy reconfiguration strategy that preserves localized motion expressiveness through on-demand anchor re-hierarchization, while ensuring temporal consistency through intra-hierarchical deformation inheritance that confines transformation priors to their respective hierarchy levels. Furthermore, we introduce a storage-aware optimization mechanism that flexibly adjusts the density of Anchor Gaussians at different hierarchy levels, enabling a controllable trade-off between reconstruction fidelity and memory usage. Extensive experiments on three widely used datasets demonstrate that, compared to state\u2010of\u2010the\u2010art methods, ReCon-GS improves training efficiency by approximately 15% and achieves superior FVV synthesis quality with enhanced robustness and stability. 
Moreover, at equivalent rendering quality, ReCon-GS slashes memory requirements by over 50% compared to leading state\u2011of\u2011the\u2011art methods.", "arxiv_id": "2509.24325v1", "arxiv_authors": ["Jiaye Fu", "Qiankun Gao", "Chengxiang Wen", "Yanmin Wu", "Siwei Ma", "Jiaqi Zhang", "Jian Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d1"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1720997, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a840"}, "filepath": "data/2510.15783v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994371002552491, "type": "Poster", "name": "ReCon: Region-Controllable Data Augmentation with Rectification and Alignment for Object Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120077", "abstract": "The scale and quality of datasets are crucial for training robust perception models. However, obtaining large-scale annotated data is both costly and time-consuming. Generative models have emerged as a powerful tool for data augmentation by synthesizing samples that adhere to desired distributions. However, current generative approaches often rely on complex post-processing or extensive fine-tuning on massive datasets to achieve satisfactory results, and they remain prone to content\u2013position mismatches and semantic leakage. To overcome these limitations, we introduce ReCon, a novel augmentation framework that enhances the capacity of structure-controllable generative models for object detection. ReCon integrates region-guided rectification into the diffusion sampling process, using feedback from a pre-trained perception model to rectify misgenerated regions within the diffusion sampling process. We further propose region-aligned cross-attention to enforce spatial\u2013semantic alignment between image regions and their textual cues, thereby improving both semantic consistency and overall image fidelity. Extensive experiments demonstrate that ReCon substantially improves the quality and trainability of generated data, achieving consistent performance gains across various datasets, backbone architectures, and data scales.", "arxiv_id": "2510.15783v1", "arxiv_authors": ["Haowei Zhu", "Tianxiang Pan", "Rui Qin", "Jun-Hai Yong", "Bin Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1054243, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a841"}, "filepath": "data/2507.12646v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996716383135307, "type": "Poster", "name": "Reconstruct, Inpaint, Finetune: Dynamic Novel-view Synthesis from Monocular Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117239", "abstract": "We explore novel-view synthesis for dynamic scenes from monocular videos. Prior approaches rely on costly test-time optimization of 4D representations or do not preserve scene geometry when trained in a feed-forward manner. 
Our approach is based on three key insights: (1) covisible pixels (that are visible in both the input and target views) can be rendered by first reconstructing the dynamic 3D scene and rendering the reconstruction from the novel views, and (2) hidden pixels in novel views can be ``inpainted'' with feed-forward 2D video diffusion models. Notably, our video inpainting diffusion model (CogNVS) can be self-supervised from 2D videos, allowing us to train it on a large corpus of in-the-wild videos. This in turn allows for (3) CogNVS to be applied zero-shot to novel test videos via test-time finetuning. We empirically verify that CogNVS outperforms almost all prior art for novel-view synthesis of dynamic scenes from monocular videos.", "arxiv_id": "2507.12646v1", "arxiv_authors": ["Kaihua Chen", "Tarasha Khurana", "Deva Ramanan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3465304, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a842"}, "filepath": "data/2510.07631v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999347705295034, "type": "Poster", "name": "Rectified CFG++ for Flow Based Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118333", "abstract": "Classifier\u2011free guidance (CFG) is the workhorse for steering large diffusion models toward text\u2011conditioned targets, yet its na\u00efve application to rectified flow (RF) based models provokes severe off\u2013manifold drift, yielding visual artifacts, text misalignment, and brittle behaviour. We present Rectified-CFG++, an adaptive predictor\u2013corrector guidance that couples the deterministic efficiency of rectified flows with a geometry\u2011aware conditioning rule. Each inference step first executes a conditional RF update that anchors the sample near the learned transport path, then applies a weighted conditional correction that interpolates between conditional and unconditional velocity fields. We prove that the resulting velocity field is marginally consistent and that its trajectories remain within a bounded tubular neighbourhood of the data manifold, ensuring stability across a wide range of guidance strengths. Extensive experiments on large\u2011scale text\u2011to\u2011image models (Flux, Stable Diffusion 3/3.5, Lumina) show that Rectified-CFG++ consistently outperforms standard CFG on benchmark datasets such as MS\u2011COCO, LAION\u2011Aesthetic, and T2I\u2011CompBench. The code will be released upon publication.", "arxiv_id": "2510.07631v1", "arxiv_authors": ["Shreshth Saini", "Shashank Gupta", "Alan C. 
Bovik"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2826172, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a843"}, "filepath": "data/2506.05282v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999895145386363, "type": "Poster", "name": "Rectified Point Flow: Generic Point Cloud Pose Estimation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117185", "abstract": "We present Rectified Point Flow, a unified parameterization that formulates pairwise point cloud registration and multi-part shape assembly as a single conditional generative problem. Given unposed point clouds, our method learns a continuous point-wise velocity field that transports noisy points toward their target positions, from which part poses are recovered. In contrast to prior work that regresses part-wise poses with ad-hoc symmetry handling, our method intrinsically learns assembly symmetries without symmetry labels. Together with a self-supervised encoder focused on overlapping points, Rectified Point Flow achieves a new state-of-the-art performance on six benchmarks spanning pairwise registration and shape assembly. Notably, our unified formulation enables effective joint training on diverse datasets, facilitating the learning of shared geometric priors and consequently boosting accuracy. Our code and models will be made publicly available.", "arxiv_id": "2506.05282v2", "arxiv_authors": ["Tao Sun", "Liyuan Zhu", "Shengyu Huang", "Shuran Song", "Iro Armeni"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.639Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1001194, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a844"}, "filepath": "data/2510.17364v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991369692430734, "type": "Poster", "name": "Recurrent Attention-based Token Selection for Efficient Streaming Video-LLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120237", "abstract": "Video Large Language Models (Video-LLMs) excel at understanding videos in-context, assuming full access to the video when answering queries. However, these models face challenges in streaming scenarios where hour-long videos must be processed online, and questions need timely responses. In this work, we propose a training-free approach compatible with standard Video-LLMs, leveraging three key concepts: 1) LLM-informed selection of visual tokens to identify those that the LLM has attended to and contributed to its understanding of each short clip. Our attention-based selection allows us to discard up to ~95\\% of unimportant visual tokens with minimal performance loss; 2) Hierarchical selection of tokens combined with natural language understanding of each processed clip; 3) Caption-based question answering for lightweight and accurate responses. 
Our method achieves state-of-the-art performance on streaming video benchmarks, striking a balance between efficiency and effectiveness.", "arxiv_id": "2510.17364v1", "arxiv_authors": ["Vaggelis Dorovatas", "Soroush Seifi", "Gunshi Gupta", "Rahaf Aljundi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 995664, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a845"}, "filepath": "data/2505.18880v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992954561757639, "type": "Poster", "name": "REGen: Multimodal Retrieval-Embedded Generation for Long-to-Short Video Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119175", "abstract": "Short videos are an effective tool for promoting contents and improving knowledge accessibility. While existing extractive video summarization methods struggle to produce a coherent narrative, existing abstractive methods cannot `quote' from the input videos, i.e., inserting short video clips in their outputs. In this work, we explore novel video editing models for generating shorts that feature a coherent narrative with embedded video insertions extracted from a long input video. We propose a novel retrieval-embedded generation framework that allows a large language model to quote multimodal resources while maintaining a coherent narrative. Our proposed REGen system first generates the output story script with quote placeholders using a finetuned large language model, and then uses a novel retrieval model to replace the quote placeholders by selecting a video clip that best supports the narrative from a pool of candidate quotable video clips. We examine the proposed method on the task of documentary teaser generation, where short interview insertions are commonly used to support the narrative of a documentary. Our objective evaluations show that the proposed method can effectively insert short video clips while maintaining a coherent narrative. 
In a subjective survey, we show that our proposed method outperforms existing abstractive and extractive approaches in terms of coherence, alignment, and realism in teaser generation.", "arxiv_id": "2505.18880v1", "arxiv_authors": ["Weihan Xu", "Yimeng Ma", "Jingyue Huang", "Yang Li", "Wenye Ma", "Taylor Berg-Kirkpatrick", "Julian McAuley", "Paul Pu Liang", "Hao-Wen Dong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 997992, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a846"}, "filepath": "data/2505.05892v2.png", "tags": [], "_media_type": "image", "_rand": 0.999438483557598, "type": "Poster", "name": "Register and [CLS] tokens induce a decoupling of local and global features in large ViTs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118616", "abstract": "Recent work has shown that the attention maps of the widely popular DINOv2 model exhibit artifacts, which hurt both model interpretability and performance on dense image tasks. These artifacts emerge due to the model repurposing patch tokens with redundant local information for the storage of global image information. To address this problem, additional register tokens have been incorporated in which the model can store such information instead. We carefully examine the influence of these register tokens on the relationship between global and local image features, showing that while register tokens yield cleaner attention maps, these maps do not accurately reflect the integration of local image information in large models. Instead, global information is dominated by information extracted from register tokens, leading to a disconnect between local and global features. Inspired by these findings, we show that the [CLS] token itself, which can be interpreted as a register, leads to a very similar phenomenon in models without explicit register tokens. Our work shows that care must be taken when interpreting attention maps of large ViTs. Further, by clearly attributing the faulty behaviour to register and [CLS] tokens, we show a path towards more interpretable vision models.", "arxiv_id": "2505.05892v2", "arxiv_authors": ["Alexander Lappe", "Martin A. Giese"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1563998, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a847"}, "filepath": "data/2510.16865v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991865105305919, "type": "Poster", "name": "Registration is a Powerful Rotation-Invariance Learner for 3D Anomaly Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118075", "abstract": "3D anomaly detection in point-cloud data is critical for industrial quality control, aiming to identify structural defects with high reliability. 
However, current memory bank-based methods often suffer from inconsistent feature transformations and limited discriminative capacity, particularly in capturing local geometric details and achieving rotation invariance. These limitations become more pronounced when registration fails, leading to unreliable detection results. We argue that point-cloud registration plays an essential role not only in aligning geometric structures but also in guiding feature extraction toward rotation-invariant and locally discriminative representations. To this end, we propose a registration-induced, rotation-invariant feature extraction framework that integrates the objectives of point-cloud registration and memory-based anomaly detection. Our key insight is that both tasks rely on modeling local geometric structures and leveraging feature similarity across samples. By embedding feature extraction into the registration learning process, our framework jointly optimizes alignment and representation learning. This integration enables the network to acquire features that are both robust to rotations and highly effective for anomaly detection. Extensive experiments on the Anomaly-ShapeNet and Real3D-AD datasets demonstrate that our method consistently outperforms existing approaches in effectiveness and generalizability.", "arxiv_id": "2510.16865v1", "arxiv_authors": ["Yuyang Yu", "Zhengwei Chen", "Xuemiao Xu", "Lei Zhang", "Haoxin Yang", "Yongwei Nie", "Shengfeng He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3d9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1031725, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a848"}, "filepath": "data/2506.09385v2.png", "tags": [], "_media_type": "image", "_rand": 0.999040588532993, "type": "Poster", "name": "ReID5o: Achieving Omni Multi-modal Person Re-identification in a Single Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118107", "abstract": "In real-world scenarios, person re-identification (ReID) expects to identify a person-of-interest via the descriptive query, regardless of whether the query is a single modality or a combination of multiple modalities. However, existing methods and datasets remain constrained to limited modalities, failing to meet this requirement. Therefore, we investigate a new challenging problem called Omni Multi-modal Person Re-identification (OM-ReID), which aims to achieve effective retrieval with varying multi-modal queries. To address dataset scarcity, we construct ORBench, the first high-quality multi-modal dataset comprising 1,000 unique identities across five modalities: RGB, infrared, color pencil, sketch, and textual description. This dataset also has significant superiority in terms of diversity, such as the painting perspectives and textual information. It could serve as an ideal platform for follow-up investigations in OM-ReID. Moreover, we propose ReID5o, a novel multi-modal learning framework for person ReID. It enables synergistic fusion and cross-modal alignment of arbitrary modality combinations in a single model, with a unified encoding and multi-expert routing mechanism proposed. Extensive experiments verify the advancement and practicality of our ORBench. 
A wide range of possible models have been evaluated and compared on it, and our proposed ReID5o model gives the best performance. The dataset and code will be made publicly available.", "arxiv_id": "2506.09385v2", "arxiv_authors": ["Jialong Zuo", "Yongtai Deng", "Mengdan Tan", "Rui Jin", "Dongyue Wu", "Nong Sang", "Liang Pan", "Changxin Gao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3da"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1143864, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a849"}, "filepath": "data/2505.22094v5.png", "tags": [], "_media_type": "image", "_rand": 0.9993775530520882, "type": "Poster", "name": "ReinFlow: Fine-tuning Flow Matching Policy with Online Reinforcement Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119473", "abstract": "We propose ReinFlow, a simple and effective online reinforcement learning (RL) framework that fine-tunes a family of flow matching policies for continuous robotic control. ReinFlow injects a bounded, learnable noise into a flow policy's deterministic path, converting the flow into a discrete-time Markov Process for simple and tractable likelihood computation. This conversion aids exploration and secures training stability, allowing ReinFlow to stably fine-tune diverse flow model variants, including Rectified Flow and Shortcut Models, especially at very few or even one denoising step. We benchmark ReinFlow in representative locomotion and manipulation tasks, including long-horizon planning with visual input and sparse reward. The episode reward of Rectified Flow policies increased by an average of 162.28\\% after fine-tuning in challenging legged locomotion tasks, while saving 82.87\\% of wall-time compared to the state-of-the-art diffusion RL method DPPO. The success rate of the Shortcut Model policies in state and visual manipulation tasks increased by 39.86\\% on average after fine-tuning with ReinFlow at four or even one denoising step, achieving performance comparable to fine-tuned DDIM policies while saving 48.78\\% of the simulation time.", "arxiv_id": "2505.22094v5", "arxiv_authors": ["Tonghe Zhang", "Chao Yu", "Sichang Su", "Yu Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3db"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1081848, "mime_type": "image/png", "width": 4134, "height": 5847, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a84a"}, "filepath": "data/2510.13418v1.png", "tags": [], "_media_type": "image", "_rand": 0.99906223153304, "type": "Poster", "name": "Reinforcement Learning Meets Masked Generative Models: Mask-GRPO for Text-to-Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119326", "abstract": "Reinforcement learning (RL) has garnered increasing attention in text-to-image (T2I) generation. However, most existing RL approaches are tailored to either diffusion models or autoregressive models, overlooking an important alternative: mask generative models. 
In this work, we propose Mask-GRPO, the first method to incorporate Group Relative Policy Optimization (GRPO)-based online RL into this overlooked paradigm. Our core insight is to redefine the transition probability, which is different from current approaches, and formulate the unmasking process as a multi-step decision-making problem. To further enhance our method, we explore several useful strategies, including removing the Kullback\u2013Leibler constraint, applying the reduction strategy, and filtering out low-quality samples. Using Mask-GRPO, we improve a base model, Show-o, with significant results: a 38\\% improvement on the GenEval benchmark and 10\\% on MSCOCO-30K FID, outperforming existing state-of-the-art approaches.", "arxiv_id": "2510.13418v1", "arxiv_authors": ["Yifu Luo", "Xinhao Hu", "Keyu Fan", "Haoyuan Sun", "Zeyu Chen", "Bo Xia", "Tiantian Zhang", "Yongzhe Chang", "Xueqian Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3dc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1108157, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a84b"}, "filepath": "data/2506.09965v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999724203040268, "type": "Poster", "name": "Reinforcing Spatial Reasoning in Vision-Language Models with Interwoven Thinking and Visual Drawing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115095", "abstract": "As textual reasoning with large language models (LLMs) has advanced significantly, there has been growing interest in enhancing the multimodal reasoning capabilities of large vision-language models (LVLMs). However, existing methods primarily approach multimodal reasoning in a straightforward, text-centric manner, where both reasoning and answer derivation are conducted purely through text, with the only difference being the presence of multimodal input. As a result, these methods often encounter fundamental limitations in spatial reasoning tasks that demand precise geometric understanding and continuous spatial tracking\\textemdash capabilities that humans achieve through mental visualization and manipulation. To address these limitations, we propose drawing to reason in space, a novel paradigm that enables LVLMs to reason through elementary drawing operations in the visual space. By equipping models with basic drawing operations including annotating bounding boxes and drawing auxiliary lines, we empower them to express and analyze spatial relationships through direct visual manipulation, meanwhile avoiding the performance ceiling imposed by specialized perception tools in previous tool-integrated reasoning approaches. To cultivate this capability, we develop a three-stage training framework: cold-start training with synthetic data to establish basic drawing abilities, reflective rejection sampling to enhance self-reflection behaviors, and reinforcement learning to directly optimize for target rewards. Extensive experiments demonstrate that our model, named \\textsc{Spark}, consistently outperforms existing methods across diverse spatial reasoning benchmarks involving maze navigation, static spatial reasoning, video-based reasoning and multi-view-based reasoning tasks, with an average improvement of 11.5\\%. 
Ablation studies reveal the critical role of each training stage, with reflective rejection sampling particularly enhancing the model's self-correction capabilities and reasoning potential.", "arxiv_id": "2506.09965v2", "arxiv_authors": ["Junfei Wu", "Jian Guan", "Kaituo Feng", "Qiang Liu", "Shu Wu", "Liang Wang", "Wei Wu", "Tieniu Tan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3dd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1000173, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a84c"}, "filepath": "data/2506.02528v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998976185969541, "type": "Poster", "name": "RelationAdapter: Learning and Transferring Visual Relation with Diffusion Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119221", "abstract": "Inspired by the in-context learning mechanism of large language models (LLMs), a new paradigm of generalizable visual prompt-based image editing is emerging. Existing single-reference methods typically focus on style or appearance adjustments and struggle with non-rigid transformations. To address these limitations, we propose leveraging source-target image pairs to extract and transfer content-aware editing intent to novel query images. To this end, we introduce RelationAdapter, a lightweight module that enables Diffusion Transformer (DiT) based models to effectively capture and apply visual transformations from minimal examples. We also introduce Relation252K, a comprehensive dataset comprising 218 diverse editing tasks, to evaluate model generalization and adaptability in visual prompt-driven scenarios. Experiments on Relation252K show that RelationAdapter significantly improves the model\u2019s ability to understand and transfer editing intent, leading to notable gains in generation quality and overall editing performance.", "arxiv_id": "2506.02528v1", "arxiv_authors": ["Yan Gong", "Yiren Song", "Yicheng Li", "Chenglin Li", "Yin Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3de"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2825654, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a84d"}, "filepath": "data/2506.09981v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990002180928894, "type": "Poster", "name": "Reliable World Simulation for Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117887", "abstract": "How can we reliably simulate future driving scenarios under a wide range of ego driving behaviors? Recent driving world models, developed exclusively on real-world driving data composed mainly of safe expert trajectories, struggle to follow hazardous or non-expert behaviors, which are rare in such data. This limitation restricts their applicability to tasks such as policy evaluation. 
In this work, we address this challenge by enriching real-world human demonstrations with diverse non-expert data collected from a driving simulator (e.g., CARLA), and building a controllable world model trained on this heterogeneous corpus. Starting with a video generator featuring diffusion transformer architecture, we devise several strategies to effectively integrate conditioning signals and improve prediction controllability and fidelity. The resulting model, ReSim, enables Reliable Simulation of diverse open-world driving scenarios under various actions, including hazardous non-expert ones. To close the gap between high-fidelity simulation and applications that require reward signals to judge different actions, we introduce a Video2Reward module that estimates reward from ReSim\u2019s simulated future. Our ReSim paradigm achieves up to 44% higher visual fidelity, improves controllability for both expert and non-expert actions by over 50%, and boosts planning and policy selection performance on NAVSIM by 2% and 25%, respectively. Our code, model, and dataset will be released.", "arxiv_id": "2506.09981v1", "arxiv_authors": ["Jiazhi Yang", "Kashyap Chitta", "Shenyuan Gao", "Long Chen", "Yuqian Shao", "Xiaosong Jia", "Hongyang Li", "Andreas Geiger", "Xiangyu Yue", "Li Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3df"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1119783, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a84e"}, "filepath": "data/2505.20793v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997697955430465, "type": "Poster", "name": "Rendering-Aware Reinforcement Learning for Vector Graphics Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120143", "abstract": "Scalable Vector Graphics (SVG) offer a powerful format for representing visual designs as interpretable code. Recent advances in vision-language models (VLMs) have enabled high-quality SVG generation by framing the problem as a code generation task and leveraging large-scale pretraining. VLMs are particularly suitable for this task as they capture both global semantics and fine-grained visual patterns, while transferring knowledge across vision, natural language, and code domains. However, existing VLM approaches often struggle to produce faithful and efficient SVGs because they never observe the rendered images during training. Although differentiable rendering for autoregressive SVG code generation remains unavailable, rendered outputs can still be compared to original inputs, enabling evaluative feedback suitable for reinforcement learning (RL). In this work, we introduce RLVG, a reinforcement learning approach for SVG generation with autoregressive VLMs. Given an input image, the model generates SVG rollouts that are rendered and compared to the original image to compute a reward. This visual fidelity feedback guides the model toward producing more accurate, efficient, and semantically coherent SVGs. RLVG significantly outperforms supervised fine-tuning, addressing common failure modes and enabling precise, high-quality SVG generation with strong structural understanding and generalization.", "arxiv_id": "2505.20793v1", "arxiv_authors": ["Juan A. 
Rodriguez", "Haotian Zhang", "Abhay Puri", "Aarash Feizi", "Rishav Pramanik", "Pascal Wichmann", "Arnab Mondal", "Mohammad Reza Samsami", "Rabiul Awal", "Perouz Taslakian", "Spandana Gella", "Sai Rajeswar", "David Vazquez", "Christopher Pal", "Marco Pedersoli"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1177749, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a84f"}, "filepath": "data/2505.18153v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994745890568313, "type": "Poster", "name": "REN: Fast and Efficient Region Encodings from Patch-Based Image Encoders", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115859", "abstract": "We introduce the Region Encoder Network (REN), a fast and effective model for generating region-based image representations using point prompts. Recent methods combine class-agnostic segmenters (e.g., SAM) with patch-based image encoders (e.g., DINO) to produce compact and effective region representations, but they suffer from high computational cost due to the segmentation step. REN bypasses this bottleneck using a lightweight module that directly generates region tokens, enabling 60x faster token generation with 35x less memory, while also improving token quality. It uses a few cross-attention blocks that take point prompts as queries and features from a patch-based image encoder as keys and values to produce region tokens that correspond to the prompted objects. We train REN with three popular encoders\u2014DINO, DINOv2, and OpenCLIP\u2014and show that it can be extended to other encoders without dedicated training. We evaluate REN on semantic segmentation and retrieval tasks, where it consistently outperforms the original encoders in both performance and compactness, and matches or exceeds SAM-based region methods while being significantly faster. Notably, REN achieves state-of-the-art results on the challenging Ego4D VQ2D benchmark and outperforms proprietary LMMs on Visual Haystacks' single-needle challenge. We will release our models and code to support further research.", "arxiv_id": "2505.18153v1", "arxiv_authors": ["Savya Khosla", "Sethuraman TV", "Barnett Lee", "Alexander Schwing", "Derek Hoiem"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1087720, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a850"}, "filepath": "data/2505.16793v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993086220747777, "type": "Poster", "name": "REOBench: Benchmarking Robustness of Earth Observation Foundation Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121683", "abstract": "Earth observation foundation models have shown strong generalization across multiple Earth observation tasks, but their robustness under real-world perturbations remains underexplored. 
To bridge this gap, we introduce REOBench, the first comprehensive benchmark for evaluating the robustness of Earth observation foundation models across six tasks and twelve types of image corruptions, including both appearance-based and geometric perturbations. To ensure realistic and fine-grained evaluation, our benchmark focuses on high-resolution optical remote sensing images, which are widely used in critical applications such as urban planning and disaster response. We conduct a systematic evaluation of a broad range of models trained using masked image modeling, contrastive learning, and vision-language pre-training paradigms. Our results reveal that (1) existing Earth observation foundation models experience significant performance degradation when exposed to input corruptions. (2) The severity of degradation varies across tasks, model architectures, backbone sizes, and types of corruption, with performance drop varying from less than 1% to over 20%. (3) Vision-language models show enhanced robustness, particularly in multimodal tasks. REOBench underscores the vulnerability of current Earth observation foundation models to real-world corruptions and provides actionable insights for developing more robust and reliable models.", "arxiv_id": "2505.16793v2", "arxiv_authors": ["Xiang Li", "Yong Tao", "Siyuan Zhang", "Siwei Liu", "Zhitong Xiong", "Chunbo Luo", "Lu Liu", "Mykola Pechenizkiy", "Xiao Xiang Zhu", "Tianjin Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.640Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1032869, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a851"}, "filepath": "data/2505.16792v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992308401021223, "type": "Poster", "name": "REPA Works Until It Doesn\u2019t: Early-Stopped, Holistic Alignment Supercharges Diffusion Training", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118901", "abstract": "Diffusion Transformers (DiTs) deliver state-of-the-art image quality, yet their training remains notoriously slow. A recent remedy---representation alignment (REPA) that matches DiT hidden features to those of a non-generative teacher (e.g. DINO)---dramatically accelerates the early epochs but plateaus or even degrades performance later. We trace this failure to a capacity mismatch: once the generative student begins modelling the joint data distribution, the teacher's lower-dimensional embeddings and attention patterns become a straitjacket rather than a guide. We then introduce HASTE (Holistic Alignment with Stage-wise Termination for Efficient training), a two-phase schedule that keeps the help and drops the hindrance. Phase I applies a holistic alignment loss that simultaneously distills attention maps (relational priors) and feature projections (semantic anchors) from the teacher into mid-level layers of the DiT, yielding rapid convergence. Phase II then performs one-shot termination that deactivates the alignment loss, once a simple trigger such as a fixed iteration is hit, freeing the DiT to focus on denoising and exploit its generative capacity. HASTE speeds up training of diverse DiTs without architecture changes. 
On ImageNet $256{\\times}256$, it reaches the vanilla SiT-XL/2 baseline FID in 50 epochs and matches REPA\u2019s best FID in 500 epochs, amounting to a $\\boldsymbol{28\\times}$ reduction in optimization steps. HASTE also improves text-to-image DiTs on MS-COCO, proving to be a simple yet principled recipe for efficient diffusion training across various tasks.", "arxiv_id": "2505.16792v1", "arxiv_authors": ["Ziqiao Wang", "Wangbo Zhao", "Yuhao Zhou", "Zekai Li", "Zhiyuan Liang", "Mingjia Shi", "Xuanlei Zhao", "Pengfei Zhou", "Kaipeng Zhang", "Zhangyang Wang", "Kai Wang", "Yang You"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1135798, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a852"}, "filepath": "data/2506.18369v4.png", "tags": [], "_media_type": "image", "_rand": 0.9994864071898442, "type": "Poster", "name": "RePIC: Reinforced Post-Training for Personalizing Multi-Modal Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119231", "abstract": "Recent multi-modal large language models (MLLMs) often struggle to generate personalized image captions, even when trained on high-quality captions. In this work, we observe that such limitations persist in existing post-training-based MLLM personalization methods. Specifically, despite being post-tuned with large-scale caption data through supervised fine-tuning (SFT), these models frequently fail to produce faithful descriptions in real-world scenarios, such as multi-concept image captioning. However, acquiring large-scale, high-quality captions for such complex settings is both costly and difficult. To address the data-centric nature of SFT, we propose a reinforcement learning (RL)-based post-training framework. To the best of our knowledge, this is the first RL-based approach to post-train MLLMs for personalized image captioning. Our method significantly enhances both visual recognition and personalized generation capabilities of MLLMs, and consistently outperforms existing SFT-based baselines, especially in the challenging multi-concept image captioning task.", "arxiv_id": "2506.18369v4", "arxiv_authors": ["Yeongtak Oh", "Dohyun Chung", "Juhyeon Shin", "Sangha Park", "Johan Barthelemy", "Jisoo Mok", "Sungroh Yoon"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1111228, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a853"}, "filepath": "data/2505.23917v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992102331275691, "type": "Poster", "name": "Representational Difference Explanations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116114", "abstract": "We propose a method for discovering and visualizing the differences between two learned representations, enabling more direct and interpretable model comparisons. 
We validate our method, which we call Representational Differences Explanations (RDX), by using it to compare models with known conceptual differences and demonstrate that it recovers meaningful distinctions where existing explainable AI (XAI) techniques fail. Applied to state-of-the-art models on challenging subsets of the ImageNet and iNaturalist datasets, RDX reveals both insightful representational differences and subtle patterns in the data. Although comparison is a cornerstone of scientific analysis, current tools in machine learning, namely post hoc XAI methods, struggle to support model comparison effectively. Our work addresses this gap by introducing an effective and explainable tool for contrasting model representations.", "arxiv_id": "2505.23917v2", "arxiv_authors": ["Neehar Kondapaneni", "Oisin Mac Aodha", "Pietro Perona"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1106196, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a854"}, "filepath": "data/2507.01467v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995803867545268, "type": "Poster", "name": "Representation Entanglement for Generation: Training Diffusion Transformers Is Much Easier Than You Think", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116344", "abstract": "REPA and its variants effectively mitigate training challenges in diffusion models by incorporating external visual representations from pretrained models, through alignment between the noisy hidden projections of denoising networks and foundational clean image representations. We argue that the external alignment, which is absent during the entire denoising inference process, falls short of fully harnessing the potential of discriminative representations. In this work, we propose a straightforward method called \\textit{\\textbf{R}epresentation \\textbf{E}ntanglement for \\textbf{G}eneration} (\\textbf{REG}), which entangles low-level image latents with a single high-level class token from pretrained foundation models for denoising. REG acquires the capability to produce coherent image-class pairs directly from pure noise, substantially improving both generation quality and training efficiency. This is accomplished with negligible additional inference overhead, requiring only one single additional token for denoising (<0.5\\% increase in FLOPs and latency). The inference process concurrently reconstructs both image latents and their corresponding global semantics, where the acquired semantic knowledge actively guides and enhances the image generation process. On ImageNet 256$\\times$256, SiT-XL/2 + REG demonstrates remarkable convergence acceleration, achieving $\\textbf{63}\\times$ and $\\textbf{23}\\times$ faster training than SiT-XL/2 and SiT-XL/2 + REPA, respectively. More impressively, SiT-L/2 + REG trained for merely 400K iterations outperforms SiT-XL/2 + REPA trained for 4M iterations ($\\textbf{10}\\times$ longer). 
Code is available at: \\url{https://anonymous.4open.science/r/REG-6C5B}.", "arxiv_id": "2507.01467v2", "arxiv_authors": ["Ge Wu", "Shen Zhang", "Ruijing Shi", "Shanghua Gao", "Zhenyuan Chen", "Lei Wang", "Zhaowei Chen", "Hongcheng Gao", "Yao Tang", "Jian Yang", "Ming-Ming Cheng", "Xiang Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5136096, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a855"}, "filepath": "data/2505.17358v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990292074407421, "type": "Poster", "name": "Repurposing Marigold for Zero-Shot Metric Depth Estimation via Defocus Blur Cues", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116529", "abstract": "Recent monocular metric depth estimation (MMDE) methods have made notable progress towards zero-shot generalization. However, they still exhibit a significant performance drop on out-of-distribution datasets. We address this limitation by injecting defocus blur cues at inference time into Marigold, a \\textit{pre-trained} diffusion model for zero-shot, scale-invariant monocular depth estimation (MDE). Our method effectively turns Marigold into a metric depth predictor in a training-free manner. To incorporate defocus cues, we capture two images with a small and a large aperture from the same viewpoint. To recover metric depth, we then optimize the metric depth scaling parameters and the noise latents of Marigold at inference time using gradients from a loss function based on the defocus-blur image formation model. We compare our method against existing state-of-the-art zero-shot MMDE methods on a self-collected real dataset, showing quantitative and qualitative improvements.", "arxiv_id": "2505.17358v1", "arxiv_authors": ["Chinmay Talegaonkar", "Nikhil Gandudi Suresh", "Zachary Novack", "Yash Belhe", "Priyanka Nagasamudra", "Nicholas Antipa"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1113050, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a856"}, "filepath": "data/2505.02867v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992929522630946, "type": "Poster", "name": "RESAnything: Attribute Prompting for Arbitrary Referring Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119834", "abstract": "We present an open-vocabulary and zero-shot method for arbitrary referring expression segmentation (RES), targeting input expressions that are more general than what prior works were designed to handle. Specifically, our inputs encompass both object- and part-level labels as well as implicit references pointing to properties or qualities of object/part function, design, style, material, etc. Our model, coined RESAnything, leverages Chain-of-Thoughts (CoT) reasoning, where the key idea is attribute prompting. 
We generate detailed descriptions of object/part attributes including shape, color, and location for potential segment proposals through systematic prompting of a large language model (LLM), where the proposals are produced by a foundational image segmentation model. Our approach encourages deep reasoning about object or part attributes related to function, style, design, etc., enabling the system to handle implicit queries without any part annotations for training or fine-tuning. As the first zero-shot and LLM-based RES method, RESAnything achieves clearly superior performance among zero-shot methods on traditional RES benchmarks and significantly outperforms existing methods on challenging scenarios involving implicit queries and complex part-level relations. Finally, we contribute a new benchmark dataset to offer ~3K carefully curated RES instances to assess part-level, arbitrary RES solutions.", "arxiv_id": "2505.02867v1", "arxiv_authors": ["Ruiqi Wang", "Hao Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4976250, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a857"}, "filepath": "data/2505.14511v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996024928431634, "type": "Poster", "name": "ReservoirTTA: Prolonged Test-time Adaptation for Evolving and Recurring Domains", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117490", "abstract": "This paper introduces **ReservoirTTA**, a novel plug\u2013in framework designed for prolonged test\u2013time adaptation (TTA) in scenarios where the test domain continuously shifts over time, including cases where domains recur or evolve gradually. At its core, ReservoirTTA maintains a reservoir of domain-specialized models\u2014an adaptive test-time model ensemble\u2014that both detects new domains via online clustering over style features of incoming samples and routes each sample to the appropriate specialized model, and thereby enables domain-specific adaptation. This multi-model strategy overcomes key limitations of single model adaptation, such as catastrophic forgetting, inter-domain interference, and error accumulation, ensuring robust and stable performance on sustained non-stationary test distributions. Our theoretical analysis reveals key components that bound parameter variance and prevent model collapse, while our plug\u2013in TTA module mitigates catastrophic forgetting of previously encountered domains. Extensive experiments on the classification corruption benchmarks, including ImageNet-C and CIFAR-10/100-C, as well as the Cityscapes\u2192ACDC semantic segmentation task, covering recurring and continuously evolving domain shifts, demonstrate that ReservoirTTA significantly improves adaptation accuracy and maintains stable performance across prolonged, recurring shifts, outperforming state-of-the-art methods. 
The code will be released upon acceptance.", "arxiv_id": "2505.14511v3", "arxiv_authors": ["Guillaume Vray", "Devavrat Tomar", "Xufeng Gao", "Jean-Philippe Thiran", "Evan Shelhamer", "Behzad Bozorgtabar"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3e9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1090838, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a858"}, "filepath": "data/2509.15257v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998862246734181, "type": "Poster", "name": "RespoDiff: Dual-Module Bottleneck Transformation for Responsible & Faithful T2I Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120008", "abstract": "The rapid advancement of diffusion models has enabled high-fidelity and semantically rich text-to-image generation; however, ensuring fairness and safety remains an open challenge. Existing methods typically improve fairness and safety at the expense of semantic fidelity and image quality. In this work, we propose RespoDiff, a novel framework for responsible text-to-image generation that incorporates a dual-module transformation on the intermediate bottleneck representations of diffusion models. Our approach introduces two distinct learnable modules: one focused on capturing and enforcing responsible concepts, such as fairness and safety, and the other dedicated to maintaining semantic alignment with neutral prompts. To facilitate the dual learning process, we introduce a novel score-matching objective that enables effective coordination between the modules. Our method outperforms state-of-the-art methods in responsible generation by ensuring semantic alignment while optimizing both objectives without compromising image fidelity. Our approach improves responsible and semantically coherent generation by $\\textasciitilde20\\%$ across diverse, unseen prompts. Moreover, it integrates seamlessly into large-scale models like SDXL, enhancing fairness and safety. Code will be released upon acceptance.", "arxiv_id": "2509.15257v2", "arxiv_authors": ["Silpa Vadakkeeveetil Sreelatha", "Sauradip Nag", "Muhammad Awais", "Serge Belongie", "Anjan Dutta"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ea"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1047636, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a859"}, "filepath": "data/2508.06715v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993533624736137, "type": "Poster", "name": "Restage4D: Reanimating Deformable 3D Reconstruction from a Single Video", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116957", "abstract": "Motion is one of the key components in deformable 3D scenes. Generative video models allow users to animate static scenes with text prompts for novel motion, but when it comes to 4D reconstruction, such reanimations often fall apart. 
The generated videos often suffer from geometric artifacts, implausible motion, and occlusions, which hinder physically consistent 4D reanimation. In this work, we introduce \\textbf{Restage4D}, a geometry-preserving pipeline for deformable scene reconstruction from a single edited video. Our key insight is to leverage the unedited original video as an additional source of supervision, allowing the model to propagate accurate structure into occluded and disoccluded regions. To achieve this, we propose a video-rewinding training scheme that temporally bridges the edited and original sequences via a shared motion representation. We further introduce an occlusion-aware ARAP regularization to preserve local rigidity, and a disocclusion backtracing mechanism that supplements missing geometry in the canonical space. Together, these components enable robust reconstruction even when the edited input contains hallucinated content or inconsistent motion. We validate Restage4D on DAVIS and PointOdyssey, demonstrating improved geometry consistency, motion quality, and 3D tracking performance. Our method not only preserves deformable structure under novel motion, but also automatically corrects errors introduced by generative models, bridging the gap between flexible video synthesis and physically grounded 4D reconstruction.", "arxiv_id": "2508.06715v1", "arxiv_authors": ["Jixuan He", "Chieh Hubert Lin", "Lu Qi", "Ming-Hsuan Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3eb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1053704, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a85a"}, "filepath": "data/2509.16888v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999392368401769, "type": "Poster", "name": "Rethinking Evaluation of Infrared Small Target Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121695", "abstract": "As an essential vision task, infrared small target detection (IRSTD) has seen significant advancements through deep learning. However, critical limitations in current evaluation protocols impede further progress. First, existing methods rely on fragmented pixel- and target-level specific metrics, which fails to provide a comprehensive view of model capabilities. Second, an excessive emphasis on overall performance scores obscures crucial error analysis, which is vital for identifying failure modes and improving real-world system performance. Third, the field predominantly adopts dataset-specific training-testing paradigms, hindering the understanding of model robustness and generalization across diverse infrared scenarios. This paper addresses these issues by introducing a hybrid-level metric incorporating pixel- and target-level performance, proposing a systematic error analysis method, and emphasizing the importance of cross-dataset evaluation. These aim to offer a more thorough and rational hierarchical analysis framework, ultimately fostering the development of more effective and robust IRSTD models.
An open-source toolkit has been released to facilitate standardized benchmarking.", "arxiv_id": "2509.16888v2", "arxiv_authors": ["Youwei Pang", "Xiaoqi Zhao", "Lihe Zhang", "Huchuan Lu", "Georges El Fakhri", "Xiaofeng Liu", "Shijian Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ec"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1039626, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a85b"}, "filepath": "data/2502.20120v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993648862427064, "type": "Poster", "name": "Rethinking Multimodal Learning from the Perspective of Mitigating Classification Ability Disproportion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118166", "abstract": "Multimodal learning (MML) is significantly constrained by modality imbalance, leading to suboptimal performance in practice. While existing approaches primarily focus on balancing the learning of different modalities to address this issue, they fundamentally overlook the inherent disproportion in model classification ability, which serves as the primary cause of this phenomenon. In this paper, we propose a novel multimodal learning approach to dynamically balance the classification ability of weak and strong modalities by designing a sustained boosting algorithm. Concretely, we first propose a sustained boosting algorithm in multimodal learning by simultaneously optimizing the classification and residual errors. Subsequently, we introduce an adaptive classifier assignment strategy to dynamically facilitate the classification performance of the weak modality. Furthermore, we theoretically analyze the convergence property of the cross-modal gap function, ensuring the effectiveness of the proposed boosting scheme. To this end, the classification ability of strong and weak modalities is expected to be balanced, thereby mitigating the imbalance issue. Empirical experiments on widely used datasets reveal the superiority of our method through comparison with various state-of-the-art~(SOTA) multimodal learning baselines.
The source code is available at https://anonymous.4open.science/r/Our_NeurIPS25-A4C7.", "arxiv_id": "2502.20120v3", "arxiv_authors": ["QingYuan Jiang", "Longfei Huang", "Yang Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ed"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061510, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a85c"}, "filepath": "data/2510.17440v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992164794519408, "type": "Poster", "name": "Rethinking Nighttime Image Deraining via Learnable Color Space Transformation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118470", "abstract": "Compared to daytime image deraining, nighttime image deraining poses significant challenges due to inherent complexities of nighttime scenarios and the lack of high-quality datasets that accurately represent the coupling effect between rain and illumination. In this paper, we rethink the task of nighttime image deraining and contribute a new high-quality benchmark, HQ-NightRain, which offers higher harmony and realism compared to existing datasets. In addition, we develop an effective color space transformation framework (CST-Net) for better removing complex rain from nighttime scenes. Specifically, we propose a learnable color space converter (CSC) to better facilitate rain removal in the Y channel, as nighttime rain is more pronounced in the Y channel compared to the RGB color space. To capture illumination information for guiding nighttime deraining, implicit illumination guidance is introduced enabling the learned features to improve the model's robustness in complex scenarios. Extensive experiments show the value of our dataset and the effectiveness of our method. The code will be released soon.", "arxiv_id": "2510.17440v1", "arxiv_authors": ["Qiyuan Guan", "Xiang Chen", "Guiyue Jin", "Jiyu Jin", "Shumin Fan", "Tianyu Song", "Jinshan Pan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ee"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 947191, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a85d"}, "filepath": "data/2506.05872v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995107736050922, "type": "Poster", "name": "Retrieval-Guided Compositional Image Generation for Cross-Domain Few-Shot Object Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115994", "abstract": "Cross-Domain Few-Shot Object Detection (CD-FSOD) aims to detect novel objects with only a handful of labeled samples from previously unseen domains. While data augmentation and generative methods have shown promise in few-shot learning, their effectiveness for CD-FSOD remains unclear due to the need for both visual realism and domain alignment. 
Existing strategies, such as copy-paste augmentation and text-to-image generation, often fail to preserve the correct object category or produce backgrounds coherent with the target domain, making them non-trivial to apply directly to CD-FSOD. To address these challenges, we propose Domain-RAG, a training-free, retrieval-guided compositional image generation framework tailored for CD-FSOD. Domain-RAG consists of three stages: domain-aware background retrieval, domain-guided background generation, and foreground-background composition. Specifically, the input image is first decomposed into foreground and background regions. We then retrieve semantically and stylistically similar images to guide a generative model in synthesizing a new background, conditioned on both the original and retrieved contexts. Finally, the preserved foreground is composed with the newly generated domain-aligned background to form the generated image. Without requiring any additional supervision or training, Domain-RAG produces high-quality, domain-consistent samples across diverse tasks, including CD-FSOD, remote sensing FSOD, and camouflaged FSOD. Extensive experiments show consistent improvements over strong baselines and establish new state-of-the-art results. Codes will be released upon acceptance.", "arxiv_id": "2506.05872v1", "arxiv_authors": ["Yu Li", "Xingyu Qiu", "Yuqian Fu", "Jie Chen", "Tianwen Qian", "Xu Zheng", "Danda Pani Paudel", "Yanwei Fu", "Xuanjing Huang", "Luc Van Gool", "Yu-Gang Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ef"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1057787, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a85e"}, "filepath": "data/2510.02745v2.png", "tags": [], "_media_type": "image", "_rand": 0.999176502085946, "type": "Poster", "name": "Retrv-R1: A Reasoning-Driven MLLM Framework for Universal and Efficient Multimodal Retrieval", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119582", "abstract": "The success of DeepSeek-R1 demonstrates the immense potential of using reinforcement learning (RL) to enhance LLMs' reasoning capabilities. This paper introduces Retrv-R1, the first R1-style MLLM specifically designed for multimodal universal retrieval, achieving higher performance by employing step-by-step reasoning to produce more accurate retrieval results. We find that directly applying the methods of DeepSeek-R1 to retrieval tasks is not feasible, mainly due to (1) the high computational cost caused by the large token consumption required for multiple candidates with reasoning processes, and (2) the instability and suboptimal results when directly applying RL to train for retrieval tasks. To address these issues, Retrv-R1 introduces an information compression module with a details inspection mechanism, which enhances computational efficiency by reducing the number of tokens while ensuring that critical information for challenging candidates is preserved. Additionally, a new training paradigm is proposed, including an activation stage using a retrieval-tailored synthetic CoT dataset for more effective optimization, followed by RL with a novel curriculum reward to improve both performance and efficiency. 
Incorporating these novel designs, Retrv-R1 achieves SOTA performance, high efficiency, and strong generalization ability, as demonstrated by extensive experiments across multiple benchmarks and tasks.", "arxiv_id": "2510.02745v2", "arxiv_authors": ["Lanyun Zhu", "Deyi Ji", "Tianrun Chen", "Haiyang Wu", "Shiqi Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.641Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1101650, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a85f"}, "filepath": "data/2505.22918v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996939329121411, "type": "Poster", "name": "Re-ttention: Ultra Sparse Visual Generation via Attention Statistical Reshape", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118971", "abstract": "Diffusion Transformers (DiT) have become the de-facto model for generating high-quality visual content like videos and images. A huge bottleneck is the attention mechanism where complexity scales quadratically with resolution and video length. One logical way to lessen this burden is sparse attention, where only a subset of tokens or patches are included in the calculation. However, existing techniques fail to preserve visual quality at extremely high sparsity levels and might even incur non-negligible compute overheads. To address this concern, we propose Re-ttention, which implements very high sparse attention for visual generation models by leveraging the temporal redundancy of Diffusion Models to overcome the probabilistic normalization shift within the attention mechanism. Specifically, Re-ttention reshapes attention scores based on the prior softmax distribution history in order to preserve the visual quality of the full quadratic attention at very high sparsity levels. Experimental results on T2V/T2I models such as CogVideoX and the PixArt DiTs demonstrate that Re-ttention requires as few as 3.1\\% of the tokens during inference, outperforming contemporary methods like FastDiTAttn, Sparse VideoGen and MInference. Further, we measure latency to show that our method can attain over 45\\% end-to-end and over 92\\% self-attention latency reduction on an H100 GPU at negligible overhead cost.",
"arxiv_id": "2505.22918v3", "arxiv_authors": ["Ruichen Chen", "Keith G. Mills", "Liyao Jiang", "Chao Gao", "Di Niu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 988813, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a860"}, "filepath": "data/2506.02408v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992436690774402, "type": "Poster", "name": "Revisiting End-to-End Learning with Slide-level Supervision in Computational Pathology", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119778", "abstract": "Pre-trained encoders for offline feature extraction followed by multiple instance learning (MIL) aggregators have become the dominant paradigm in computational pathology (CPath), benefiting cancer diagnosis and prognosis. However, performance limitations arise from the absence of encoder fine-tuning for downstream tasks and disjoint optimization with MIL. While slide-level supervised end-to-end (E2E) learning is an intuitive solution to this issue, it faces challenges such as high computational demands and suboptimal results. These limitations motivate us to revisit E2E learning. We argue that prior work neglects inherent E2E optimization challenges, leading to performance disparities compared to traditional two-stage methods. In this paper, we pioneer the elucidation of the optimization challenge caused by sparse-attention MIL and propose a novel MIL called ABMILX. ABMILX mitigates this problem through global correlation-based attention refinement and multi-head mechanisms. With the efficient multi-scale random patch sampling strategy, an E2E trained ResNet with ABMILX surpasses SOTA foundation models under the two-stage paradigm across multiple challenging benchmarks, while remaining computationally efficient ($<$ 10 RTX3090 GPU hours). We demonstrate the potential of E2E learning in CPath and call for greater research focus in this area. The code is~\\href{https://anonymous.4open.science/r/ABMILX-E480}{here}.", "arxiv_id": "2506.02408v2", "arxiv_authors": ["Wenhao Tang", "Rong Qin", "Heng Fang", "Fengtao Zhou", "Hao Chen", "Xiang Li", "Ming-Ming Cheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 987864, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a861"}, "filepath": "data/2510.20134v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990253397162274, "type": "Poster", "name": "Revisiting Logit Distributions for Reliable Out-of-Distribution Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119045", "abstract": "Out-of-distribution (OOD) detection is critical for ensuring the reliability of deep learning models in open-world applications. While post-hoc methods are favored for their efficiency and ease of deployment, existing approaches often underexploit the rich information embedded in the model\u2019s logits space. 
In this paper, we propose LogitGap, a novel post-hoc OOD detection method that explicitly exploits the relationship between the maximum logit and the remaining logits to enhance the separability between in-distribution (ID) and OOD samples. To further improve its effectiveness, we refine LogitGap by focusing on a more compact and informative subset of the logit space. Specifically, we introduce a training-free strategy that automatically identifies the most informative logits for scoring. We provide both theoretical analysis and empirical evidence to validate the effectiveness of our approach. Extensive experiments on both vision-language and vision-only models demonstrate that LogitGap consistently achieves state-of-the-art performance across diverse OOD detection scenarios and benchmarks.", "arxiv_id": "2510.20134v1", "arxiv_authors": ["Jiachen Liang", "Ruibing Hou", "Minyang Hu", "Hong Chang", "Shiguang Shan", "Xilin Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1020423, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a862"}, "filepath": "data/2505.11881v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993415546093689, "type": "Poster", "name": "Revisiting Residual Connections: Orthogonal Updates for Stable and Efficient Deep Networks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118548", "abstract": "Residual connections are pivotal for deep neural networks, enabling greater depth by mitigating vanishing gradients. However, in standard residual updates, the module's output is directly added to the input stream. 
This can lead to updates that predominantly reinforce or modulate the existing stream direction, potentially underutilizing the module's capacity for learning entirely novel features. In this work, we introduce _Orthogonal Residual Update_: we decompose the module's output relative to the input stream and add only the component orthogonal to this stream. This design aims to guide modules to contribute primarily new representational directions, fostering richer feature learning and more efficient training. We demonstrate that our orthogonal update strategy improves generalization accuracy and training stability across diverse architectures (ResNetV2, Vision Transformers) and datasets (CIFARs, TinyImageNet, ImageNet-1k), achieving, for instance, a +4.3\\%p top-1 accuracy gain for ViT-B on ImageNet-1k.", "arxiv_id": "2505.11881v2", "arxiv_authors": ["Giyeong Oh", "Woohyun Cho", "Siyeol Kim", "Suhwan Choi", "Youngjae Yu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1065854, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a863"}, "filepath": "data/2503.13070v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999671579471684, "type": "Poster", "name": "Reward-Instruct: A Reward-Centric Approach to Fast Photo-Realistic Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115833", "abstract": "This paper addresses the challenge of achieving high-quality and fast image generation that aligns with complex human preferences. While recent advancements in diffusion models and distillation have enabled rapid generation, the effective integration of reward feedback for improved abilities like controllability and preference alignment remains a key open problem. Existing reward-guided post-training approaches targeting accelerated few-step generation often deem diffusion distillation losses indispensable. However, in this paper, we identify an interesting yet fundamental paradigm shift: as conditions become more specific, well-designed reward functions emerge as the primary driving force in training strong, few-step image generative models. Motivated by this insight, we introduce Reward-Instruct, a novel and surprisingly simple reward-centric approach for converting pre-trained base diffusion models into reward-enhanced few-step generators. Unlike existing methods, Reward-Instruct does not rely on expensive yet tricky diffusion distillation losses. Instead, it iteratively updates the few-step generator's parameters by directly sampling from a reward-tilted parameter distribution. Such a training approach entirely bypasses the need for expensive diffusion distillation losses, making it favorable to scale in high image resolutions. Despite its simplicity, Reward-Instruct yields surprisingly strong performance.
Our extensive experiments on text-to-image generation have demonstrated that Reward-Instruct achieves state-of-the-art results in visual quality and quantitative metrics compared to distillation-reliant methods, while also exhibiting greater robustness to the choice of reward function.", "arxiv_id": "2503.13070v2", "arxiv_authors": ["Yihong Luo", "Tianyang Hu", "Weijian Luo", "Kenji Kawaguchi", "Jing Tang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1043777, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a864"}, "filepath": "data/2505.13050v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996713856470447, "type": "Poster", "name": "RGB-to-Polarization Estimation: A New Task and Benchmark Study", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121777", "abstract": "Polarization images provide rich physical information that is fundamentally absent from standard RGB images, benefiting a wide range of computer vision applications such as reflection separation and material classification. However, the acquisition of polarization images typically requires additional optical components, which increases both the cost and the complexity of the applications. To bridge this gap, we introduce a new task: RGB-to-polarization image estimation, which aims to infer polarization information directly from RGB images. In this work, we establish the first comprehensive benchmark for this task by leveraging existing polarization datasets and evaluating a diverse set of state-of-the-art deep learning models, including both restoration-oriented and generative architectures. Through extensive quantitative and qualitative analysis, our benchmark not only establishes the current performance ceiling of RGB-to-polarization estimation, but also systematically reveals the respective strengths and limitations of different model families \u2014 such as direct reconstruction versus generative synthesis, and task-specific training versus large-scale pre-training. In addition, we provide some potential directions for future research. 
This benchmark is intended to serve as a foundational resource to facilitate the design and evaluation of future methods for polarization estimation from standard RGB inputs.", "arxiv_id": "2505.13050v2", "arxiv_authors": ["Beibei Lin", "Zifeng Yuan", "Tingting Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1025748, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a865"}, "filepath": "data/2506.02265v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998530244179691, "type": "Poster", "name": "Rig3R: Rig-Aware Conditioning and Discovery for 3D Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115429", "abstract": "Estimating agent pose and 3D scene structure from multi-camera rigs is a central task in embodied AI applications such as autonomous driving. Recent learned approaches such as DUSt3R have shown impressive performance in multiview settings. However, these models treat images as unstructured collections, limiting effectiveness in scenarios where frames are captured from synchronized rigs with known or inferable structure. To this end, we introduce Rig3R, a generalization of prior multiview reconstruction models that incorporates rig structure when available, and learns to infer it when not. Rig3R conditions on optional rig metadata including camera ID, time, and rig poses to develop a rig-aware latent space that remains robust to missing information. It jointly predicts pointmaps and two types of raymaps: a pose raymap relative to a global frame, and a rig raymap relative to a rig-centric frame consistent across time. Rig raymaps allow the model to infer rig structure directly from input images when metadata is missing. Rig3R achieves state-of-the-art performance in 3D reconstruction, camera pose estimation, and rig discovery -- outperforming both traditional and learned methods by 17-45% mAA across diverse real-world rig datasets, all in a single forward pass without post-processing or iterative refinement.", "arxiv_id": "2506.02265v1", "arxiv_authors": ["Samuel Li", "Pujith Kachana", "Prajwal Chidananda", "Saurabh Nair", "Yasutaka Furukawa", "Matthew Brown"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2063414, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a866"}, "filepath": "data/2505.22535v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991793999902532, "type": "Poster", "name": "RiverMamba: A State Space Model for Global River Discharge and Flood Forecasting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118456", "abstract": "Recent deep learning approaches for river discharge forecasting have improved the accuracy and efficiency in flood forecasting, enabling more reliable early warning systems for risk management. 
Nevertheless, existing deep learning approaches in hydrology remain largely confined to local-scale applications and do not leverage the inherent spatial connections of bodies of water. Thus, there is a strong need for new deep learning methodologies that are capable of modeling spatio-temporal relations to improve river discharge and flood forecasting for scientific and operational applications. To address this, we present RiverMamba, a novel deep learning model that is pretrained with long-term reanalysis data and that can forecast global river discharge and floods on a $0.05^\\circ$ grid up to 7 days lead time, which is of high relevance in early warning. To achieve this, RiverMamba leverages efficient Mamba blocks that enable the model to capture global-scale channel network routing and enhance its forecast capability for longer lead times. The forecast blocks integrate ECMWF HRES meteorological forecasts, while accounting for their inaccuracies through spatio-temporal modeling. Our analysis demonstrates that RiverMamba delivers reliable predictions of river discharge, including extreme floods across return periods and lead times, surpassing both operational AI- and physics-based models.", "arxiv_id": "2505.22535v3", "arxiv_authors": ["Mohamad Hakam Shams Eddin", "Yikui Zhang", "Stefan Kollet", "Juergen Gall"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 982976, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a867"}, "filepath": "data/2509.16500v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993079913084296, "type": "Poster", "name": "RLGF: Reinforcement Learning with Geometric Feedback for Autonomous Driving Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119158", "abstract": "Synthetic data is crucial for advancing autonomous driving (AD) systems, yet current state-of-the-art video generation models, despite their visual realism, suffer from subtle geometric distortions that limit their utility for downstream perception tasks. We identify and quantify this critical issue, demonstrating a significant performance gap in 3D object detection when using synthetic versus real data. To address this, we introduce Reinforcement Learning with Geometric Feedback (RLGF). RLGF uniquely refines video diffusion models by incorporating rewards from specialized latent-space AD perception models. Its core components include an efficient Latent-Space Windowing Optimization technique for targeted feedback during diffusion, and a Hierarchical Geometric Reward (HGR) system providing multi-level rewards for point-line-plane alignment, and scene occupancy coherence. To quantify these distortions, we propose GeoScores. Applied to models like DiVE on nuScenes, RLGF substantially reduces geometric errors (e.g., VP error by 21\\%, Depth error by 57\\%) and dramatically improves 3D object detection mAP by 12.7\\%, narrowing the gap to real-data performance. 
RLGF offers a plug-and-play solution for generating geometrically sound and reliable synthetic videos for AD development.", "arxiv_id": "2509.16500v2", "arxiv_authors": ["Tianyi Yan", "Wencheng Han", "Xia Zhou", "Xueyang Zhang", "Kun Zhan", "Cheng-zhong Xu", "Jianbing Shen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3f9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1150234, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a868"}, "filepath": "data/2505.15517v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997572207172953, "type": "Poster", "name": "Robo2VLM: Visual Question Answering from Large-Scale In-the-Wild Robot Manipulation Datasets", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121678", "abstract": "Vision-Language Models (VLMs) acquire real-world knowledge and general reasoning ability through internet-scale image-text corpora. They have potential to augment robotic systems with scene understanding and task planning, and to assist visuomotor policies trained on robot trajectory data. We explore the reverse paradigm \u2014 using rich, real, multi-modal robot trajectory data to enhance and evaluate VLMs. In this paper, we present Robo2VLM, a Visual Question Answering (VQA) dataset generation framework for VLMs. Given a human tele-operated robot trajectory, Robo2VLM derives ground-truth from non-visual and non-descriptive sensory modalities, such as end-effector pose, gripper aperture, and force sensing. Based on these modalities, it segments the robot trajectory into a sequence of manipulation phases. At each phase, Robo2VLM uses scene and interaction understanding to identify 3D properties of the robot, task goal, and the target object. The properties are used to generate representative VQA queries \u2013 images with textural multiple-choice questions \u2013 based on spatial, goal-conditioned, and interaction reasoning question templates. We create Robo2VLM-1, a large-scale in-the-wild dataset with 684,710 questions covering 463 distinct scenes and 3,396 robotic manipulation tasks from 176k real robot trajectories. Results suggest that Robo2VLM-1 can benchmark and improve VLM capabilities in spatial and interaction reasoning.", "arxiv_id": "2505.15517v2", "arxiv_authors": ["Kaiyuan Chen", "Shuangyu Xie", "Zehan Ma", "Pannag R Sanketi", "Ken Goldberg"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3fa"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083339, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a869"}, "filepath": "data/2506.06677v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992640867825054, "type": "Poster", "name": "RoboCerebra: A Large-scale Benchmark for Long-horizon Robotic Manipulation Evaluation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121871", "abstract": "Recent advances in vision-language models (VLMs) have enabled instruction-conditioned robotic systems with improved generalization. 
However, most existing work focuses on reactive System 1 policies, underutilizing VLMs\u2019 strengths in semantic reasoning and long-horizon planning. These System 2 capabilities\u2014characterized by deliberative, goal-directed thinking\u2014remain underexplored due to the limited temporal scale and structural complexity of current benchmarks. To address this gap, we introduce RoboCerebra, a benchmark for evaluating high-level reasoning in long-horizon robotic manipulation. RoboCerebra includes: (1) a large-scale simulation dataset with extended task horizons and diverse subtask sequences in household environments; (2) a hierarchical framework combining a high-level VLM planner with a low-level vision-language-action (VLA) controller; and (3) an evaluation protocol targeting planning, reflection, and memory through structured System 1\u2013System 2 interaction. The dataset is constructed via a top-down pipeline, where GPT generates task instructions and decomposes them into subtask sequences. Human operators execute the subtasks in simulation, yielding high-quality trajectories with dynamic object variations. Compared to prior benchmarks, RoboCerebra features significantly longer action sequences and denser annotations. We further benchmark state-of-the-art VLMs as System 2 modules and analyze their performance across key cognitive dimensions, advancing the development of more capable and generalizable robotic planners.", "arxiv_id": "2506.06677v1", "arxiv_authors": ["Songhao Han", "Boxiang Qiu", "Yue Liao", "Siyuan Huang", "Chen Gao", "Shuicheng Yan", "Si Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3fb"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.642Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1787201, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a86a"}, "filepath": "data/2505.20612v4.png", "tags": [], "_media_type": "image", "_rand": 0.999591138161997, "type": "Poster", "name": "Roboflow100-VL: A Multi-Domain Object Detection Benchmark for Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121741", "abstract": "Vision-language models (VLMs) trained on internet-scale data achieve remarkable zero-shot detection performance on common objects like car, truck, and pedestrian. However, state-of-the-art models still struggle to generalize to out-of-distribution classes, tasks and imaging modalities not typically found in their pre-training. Rather than simply re-training VLMs on more visual data, we argue that one should align VLMs to new concepts with annotation instructions containing a few visual examples and rich textual descriptions. To this end, we introduce Roboflow100-VL, a large-scale collection of 100 multi-modal object detection datasets with diverse concepts not commonly found in VLM pre-training. We evaluate state-of-the-art models on our benchmark in zero-shot, few-shot, semi-supervised, and fully-supervised settings, allowing for comparison across data regimes. Notably, we find that VLMs like GroundingDINO and Qwen2.5-VL achieve less than 2% zero-shot accuracy on challenging medical imaging datasets within Roboflow100-VL, demonstrating the need for few-shot concept alignment. 
Our code and dataset are available on GitHub and Roboflow.", "arxiv_id": "2505.20612v4", "arxiv_authors": ["Peter Robicheaux", "Matvei Popov", "Anish Madan", "Isaac Robinson", "Joseph Nelson", "Deva Ramanan", "Neehar Peri"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3fc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1103244, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a86b"}, "filepath": "data/2506.04308v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990071065432244, "type": "Poster", "name": "RoboRefer: Towards Spatial Referring with Reasoning in Vision-Language Models for Robotics", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118306", "abstract": "Spatial referring is a fundamental capability of embodied robots to interact with the 3D physical world. However, even with the powerful pretrained VLMs, recent approaches are still not qualified to accurately understand the complex 3D scenes and dynamically reason about the instruction-indicated locations for interaction. To this end, we propose RoboRefer, a 3D-aware vision language model (VLM) that can first achieve precise spatial understanding by integrating a disentangled but dedicated depth encoder via supervised fine-tuning (SFT). Moreover, RoboRefer advances generalized multi-step spatial reasoning via reinforcement fine-tuning (RFT), with metric-sensitive process reward functions tailored for spatial referring tasks. To support SFT and RFT training, we introduce RefSpatial, a large-scale dataset of 20M QA pairs (2x prior), covering 31 spatial relations (vs. 15 prior) and supporting complex reasoning processes (up to 5 steps). In addition, we introduce RefSpatial-Bench, a challenging benchmark filling the gap in evaluating spatial referring with multi-step reasoning. Experiments show that SFT-trained RoboRefer achieves state-of-the-art spatial understanding, with an average success rate of 89.6%. RFT-trained RoboRefer further outperforms all other baselines by a large margin, even surpassing Gemini-2.5-Pro by 12.4% in average accuracy on RefSpatial-Bench. 
Notably, RoboRefer can be integrated with various control policies to execute long-horizon, dynamic tasks across diverse robots (e.g., UR5, G1 humanoid) in cluttered real-world scenes.", "arxiv_id": "2506.04308v3", "arxiv_authors": ["Enshen Zhou", "Jingkun An", "Cheng Chi", "Yi Han", "Shanyu Rong", "Chi Zhang", "Pengwei Wang", "Zhongyuan Wang", "Tiejun Huang", "Lu Sheng", "Shanghang Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3fd"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4141286, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a86c"}, "filepath": "data/2506.23135v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993721124140658, "type": "Poster", "name": "RoboScape: Physics-informed Embodied World Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115313", "abstract": "World models have become indispensable tools for embodied intelligence, serving as powerful simulators capable of generating realistic robotic videos while addressing critical data scarcity challenges. However, current embodied world models exhibit limited physical awareness, particularly in modeling 3D geometry and motion dynamics, resulting in unrealistic video generation for contact-rich robotic scenarios. In this paper, we present RoboScape, a unified physics-informed world model that jointly learns RGB video generation and physics knowledge within an integrated framework. We introduce two key physics-informed joint training tasks: temporal depth prediction that enhances 3D geometric consistency in video rendering, and keypoint dynamics learning that implicitly encodes physical properties (e.g., object shape and material characteristics) while improving complex motion modeling. Extensive experiments demonstrate that RoboScape generates videos with superior visual fidelity and physical plausibility across diverse robotic scenarios. We further validate its practical utility through downstream applications including robotic policy training with synthetic data and policy evaluation. We hope this work provides new insights for building efficient physics-informed world models to advance embodied intelligence research. Our code is available at: https://anonymous.4open.science/r/RoboScape-3652.", "arxiv_id": "2506.23135v1", "arxiv_authors": ["Yu Shang", "Xin Zhang", "Yinzhou Tang", "Lei Jin", "Chen Gao", "Wei Wu", "Yong Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3fe"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1006874, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a86d"}, "filepath": "data/2506.07127v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995236373725804, "type": "Poster", "name": "Robotic Policy Learning via Human-assisted Action Preference Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116985", "abstract": "Establishing a reliable and iteratively refined robotic system is essential for deploying real-world applications. 
While Vision-Language-Action (VLA) models are widely recognized as the foundation model for such robotic deployment, their dependence on expert demonstrations hinders the crucial capabilities of correction and learning from failures. To mitigate this limitation, we introduce a Human-assisted Action Preference Optimization method named HAPO, designed to correct deployment failures and foster effective adaptation through preference alignment for VLA models. This method begins with a human-robot collaboration framework for reliable failure correction and interaction trajectory collection through human intervention. These human-intervention trajectories are further employed within the action preference optimization process, facilitating VLA models to mitigate failure action occurrences while enhancing corrective action adaptation. Specifically, we propose an adaptive reweighting algorithm to address the issues of irreversible interactions and token probability mismatch when introducing preference optimization into VLA models, facilitating model learning from binary desirability signals derived from interactions. Through combining these modules, our human-assisted action preference optimization method ensures reliable deployment and effective learning from failure for VLA models. The experiments conducted in simulation and real-world scenarios prove superior generalization and robustness of our framework across a variety of manipulation tasks.", "arxiv_id": "2506.07127v2", "arxiv_authors": ["Wenke Xia", "Yichu Yang", "Hongtao Wu", "Xiao Ma", "Tao Kong", "Di Hu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a3ff"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1003975, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a86e"}, "filepath": "data/2506.00070v1.png", "tags": [], "_media_type": "image", "_rand": 0.99916235478993, "type": "Poster", "name": "Robot-R1: Reinforcement Learning for Enhanced Embodied Reasoning in Robotics", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118410", "abstract": "Large Vision-Language Models (LVLMs) have recently shown great promise in advancing robotics by combining embodied reasoning with robot control. A common approach involves training on embodied reasoning tasks related to robot control using Supervised Fine-Tuning (SFT). However, SFT datasets are often heuristically constructed and not explicitly optimized for improving robot control. Furthermore, SFT often leads to issues such as catastrophic forgetting and reduced generalization performance. To address these limitations, we introduce Robot-R1, a novel framework that leverages reinforcement learning to enhance embodied reasoning specifically for robot control. Robot-R1 learns to predict the next keypoint state required for task completion, conditioned on the current scene image and environment metadata derived from expert demonstrations. Inspired by the DeepSeek-R1 learning approach, Robot-R1 samples reasoning-based responses and reinforces those that lead to more accurate predictions. Our experiments show that models trained with Robot-R1 outperform SFT methods on embodied reasoning tasks. 
Despite having only 7B parameters, Robot-R1 even surpasses GPT-4o on reasoning tasks related to low-level action control, such as spatial and primitive movement reasoning.", "arxiv_id": "2506.00070v1", "arxiv_authors": ["Dongyoung Kim", "Sumin Park", "Huiwon Jang", "Jinwoo Shin", "Jaehyung Kim", "Younggyo Seo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a400"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1029787, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a86f"}, "filepath": "data/2506.14763v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999806659035717, "type": "Poster", "name": "RobotSmith: Generative Robotic Tool Design for Acquisition of Complex Manipulation Skills", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117675", "abstract": "Endowing robots with tool design abilities is critical for enabling them to solve complex manipulation tasks that would otherwise be intractable. While recent generative frameworks can automatically synthesize task settings\u2014such as 3D scenes and reward functions\u2014they have not yet addressed the challenge of tool-use scenarios. Simply retrieving human-designed tools might not be ideal since many tools (e.g., a rolling pin) are difficult for robotic manipulators to handle. Furthermore, existing tool design approaches either rely on predefined templates with limited parameter tuning or apply generic 3D generation methods that are not optimized for tool creation.To address these limitations, we propose **RobotSmith**, an automated pipeline that leverages the implicit physical knowledge embedded in vision-language models (VLMs) alongside the more accurate physics provided by physics simulations to design and use tools for robotic manipulation. Our system (1) iteratively proposes tool designs using collaborative VLM agents, (2) generates low-level robot trajectories for tool use, and (3) jointly optimizes tool geometry and usage for task performance.We evaluate our approach across a wide range of manipulation tasks involving rigid, deformable, and fluid objects. Experiments show that our method consistently outperforms strong baselines in both task success rate and overall performance. Notably, our approach achieves a 50.0\\% average success rate, significantly surpassing other baselines such as 3D generation (21.4\\%) and tool retrieval (11.1\\%). 
Finally, we deploy our system in real-world settings, demonstrating that the generated tools and their usage plans transfer effectively to physical execution, validating the practicality and generalization capabilities of our approach.", "arxiv_id": "2506.14763v1", "arxiv_authors": ["Chunru Lin", "Haotian Yuan", "Yian Wang", "Xiaowen Qiu", "Tsun-Hsuan Wang", "Minghao Guo", "Bohan Wang", "Yashraj Narang", "Dieter Fox", "Chuang Gan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a401"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1057711, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a870"}, "filepath": "data/2510.11417v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993591275299829, "type": "Poster", "name": "Robust Ego-Exo Correspondence with Long-Term Memory", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117412", "abstract": "Establishing object-level correspondence between egocentric and exocentric views is essential for AI assistants to deliver precise and intuitive visual guidance. However, this task presents numerous challenges, including extreme viewpoint variations, occlusions, and the presence of many small objects. Existing methods usually borrow solutions from video object segmentation, such as XSegTx and XView-XMem, which struggle to address aforementioned difficulties effectively. Recently, the Segment Anything Model 2 (SAM 2) has demonstrated strong generalization capabilities and shown excellent performance in video object segmentation. Nevertheless, SAM 2 faces significant challenges when directly applied to the ego-exo correspondence (EEC) task, including suboptimal feature fusion between views and inefficient memory management for long video sequences. To address these limitations, we propose a novel EEC framework based on SAM 2 with long-term memories by presenting a dual-memory system and an adaptive Mixture-of-Experts module. Specifically, our approach features (1) a Memory-View Mixture-of-Experts module which consists of a dual-branch routing mechanism to adaptively assign contribution weights to each expert feature along both channel and spatial dimensions, and (2) a dual-memory bank system with a dedicated compression strategy to retain critical long-term information while eliminating redundancy. In the extensive experiments on the challenging EgoExo4D benchmark, our method, dubbed LM-EEC, achieves new state-of-the-art results and significantly outperforms existing methods and the SAM 2 baseline, showcasing its strong generalization across diverse scenarios. 
Our code will be released.", "arxiv_id": "2510.11417v1", "arxiv_authors": ["Yijun Hu", "Bing Fan", "Xin Gu", "Haiqing Ren", "Dongfang Liu", "Heng Fan", "Libo Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a402"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1083816, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a871"}, "filepath": "data/2405.13979v3.png", "tags": [], "_media_type": "image", "_rand": 0.9998262416524917, "type": "Poster", "name": "Robust Hyperbolic Learning with Curvature-Aware Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116292", "abstract": "Hyperbolic deep learning has become a growing research direction in computer vision due to the unique properties afforded by the alternate embedding space. The negative curvature and exponentially growing distance metric provide a natural framework for capturing hierarchical relationships between datapoints and allowing for finer separability between their embeddings. However, current hyperbolic learning approaches are still prone to overfitting, computationally expensive, and prone to instability, especially when attempting to learn the manifold curvature to adapt to tasks and different datasets. To address these issues, our paper presents a derivation for Riemannian AdamW that helps increase hyperbolic generalization ability. For improved stability, we introduce a novel fine-tunable hyperbolic scaling approach to constrain hyperbolic embeddings and reduce approximation errors. Using this along with our curvature-aware learning schema for Riemannian Optimizers enables the combination of curvature and non-trivialized hyperbolic parameter learning. Our approach demonstrates consistent performance improvements across Computer Vision, EEG classification, and hierarchical metric learning tasks while greatly reducing runtime.", "arxiv_id": "2405.13979v3", "arxiv_authors": ["Ahmad Bdeir", "Johannes Burchert", "Lars Schmidt-Thieme", "Niels Landwehr"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a403"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1640973, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a872"}, "filepath": "data/2506.03538v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999565033079492, "type": "Poster", "name": "Robust Neural Rendering in the Wild with Asymmetric Dual 3D Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116470", "abstract": "3D reconstruction from in-the-wild images remains a challenging task due to inconsistent lighting conditions and transient distractors. Existing methods typically rely on heuristic strategies to handle the low-quality training data, which often struggle to produce stable and consistent reconstructions, frequently resulting in visual artifacts. 
In this work, we propose Asymmetric Dual 3DGS, a novel framework that leverages the stochastic nature of these artifacts: they tend to vary across different training runs due to minor randomness. Specifically, our method trains two 3D Gaussian Splatting (3DGS) models in parallel, enforcing a consistency constraint that encourages convergence on reliable scene geometry while suppressing inconsistent artifacts. To prevent the two models from collapsing into similar failure modes due to confirmation bias, we introduce a divergent masking strategy that applies two complementary masks: a multi-cue adaptive mask and a self-supervised soft mask, which leads to an asymmetric training process of the two models, reducing shared error modes. In addition, to improve the efficiency of model training, we introduce a lightweight variant called Dynamic EMA Proxy, which replaces one of the two models with a dynamically updated Exponential Moving Average (EMA) proxy, and employs an alternating masking strategy to preserve divergence. Extensive experiments on challenging real-world datasets demonstrate that our method consistently outperforms existing approaches while achieving high efficiency. Codes and trained models will be released.", "arxiv_id": "2506.03538v3", "arxiv_authors": ["Chengqi Li", "Zhihao Shi", "Yangdi Lu", "Wenbo He", "Xiangyu Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a404"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1077481, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a873"}, "filepath": "data/2507.12201v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990852023700463, "type": "Poster", "name": "RODS: Robust Optimization Inspired Diffusion Sampling for Detecting and Reducing Hallucination in Generative Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116819", "abstract": "Diffusion models have achieved state-of-the-art performance in generative modeling, yet their sampling procedures remain vulnerable to hallucinations\u2014often stemming from inaccuracies in score approximation. In this work, we reinterpret diffusion sampling through the lens of optimization and introduce RODS (Robust Optimization\u2013inspired Diffusion Sampler), a novel method that detects and corrects high-risk sampling steps using geometric cues from the loss landscape. RODS enforces smoother sampling trajectories and \\textit{adaptively} adjusts perturbations, reducing hallucinations without retraining and at minimal additional inference cost. 
Experiments on AFHQv2, FFHQ, and 11k-hands demonstrate that RODS improves both sampling fidelity and robustness, detecting over 70\\% of hallucinated samples and correcting more than 25\\%, all while avoiding the introduction of new artifacts.", "arxiv_id": "2507.12201v2", "arxiv_authors": ["Yiqi Tian", "Pengfei Jin", "Mingze Yuan", "Na Li", "Bo Zeng", "Quanzheng Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a405"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1062665, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a874"}, "filepath": "data/2510.03163v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993267260716582, "type": "Poster", "name": "ROGR: Relightable 3D Objects using Generative Relighting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117687", "abstract": "We introduce ROGR, a novel approach that reconstructs a relightable 3D model of an object captured from multiple views, driven by a generative relighting model that simulates the effects of placing the object under novel environment illuminations. Our method samples the appearance of the object under multiple lighting environments, creating a dataset that is used to train a lighting-conditioned Neural Radiance Field (NeRF) that outputs the object's appearance under any input environmental lighting. The lighting-conditioned NeRF uses a novel dual-branch architecture to encode the general lighting effects and specularities separately. The optimized lighting-conditioned NeRF enables efficient feed-forward relighting under arbitrary environment maps without requiring per-illumination optimization or light transport simulation. We evaluate our approach on the established TensoIR and Stanford-ORB datasets, where it improves upon the state-of-the-art on most metrics, and showcase our approach on real-world object captures.", "arxiv_id": "2510.03163v1", "arxiv_authors": ["Jiapeng Tang", "Matthew Lavine", "Dor Verbin", "Stephan J. Garbin", "Matthias Nie\u00dfner", "Ricardo Martin Brualla", "Pratul P. Srinivasan", "Philipp Henzler"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a406"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3408175, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a875"}, "filepath": "data/2503.10037v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991015449264751, "type": "Poster", "name": "Role Bias in Text-to-Image Diffusion Models: Diagnosing and Mitigating Compositional Failures through Intermediate Decomposition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115203", "abstract": "Text-to-image (T2I) diffusion models exhibit impressive photorealistic image generation capabilities, yet they struggle in compositional image generation. In this work, we introduce RoleBench, a benchmark focused on evaluating compositional generalization in action-based relations (e.g., mouse chasing cat). 
We show that state-of-the-art T2I models and compositional approaches consistently default to frequent reversed relations (i.e., cat chasing mouse), a phenomenon we call Role-Collapse. Related works attribute this to the model's architectural limitation or being underrepresented in the data. Our key insight reveals that while models fail on rare compositions when their inversions are common, they can successfully generate similar intermediate compositions (e.g., ``mouse chasing boy\"), suggesting that this limitation is due to the presence of frequent counterparts rather than the absence of rare compositions. Motivated by this, we hypothesize that directional decomposition can gradually mitigate role collapse. We test this via ReBind, a lightweight framework that teaches role bindings using carefully selected active/passive intermediaries. Experiments suggest that intermediate compositions through intermediate fine-tuning can significantly mitigate role bias, with humans preferring more than 78% compared to state-of-the-art methods. Our findings highlight the role of distributional asymmetries in compositional failures and offer a simple, effective path to improving generalization.", "arxiv_id": "2503.10037v2", "arxiv_authors": ["Sina Malakouti", "Adriana Kovashka"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a407"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1050023, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a876"}, "filepath": "data/2503.10392v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994373178774171, "type": "Poster", "name": "RoMA: Scaling up Mamba-based Foundation Models for Remote Sensing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118077", "abstract": "Recent advances in self-supervised learning for Vision Transformers (ViTs) have fueled breakthroughs in remote sensing (RS) foundation models. However, the quadratic complexity of self-attention poses a significant barrier to scalability, particularly for large models and high-resolution images. While the linear-complexity Mamba architecture offers a promising alternative, existing RS applications of Mamba remain limited to supervised tasks on small, domain-specific datasets. To address these challenges, we propose RoMA, a framework that enables scalable self-supervised pretraining of Mamba-based RS foundation models using large-scale, diverse, unlabeled data. RoMA enhances scalability for high-resolution images through a tailored auto-regressive learning strategy, incorporating two key innovations: 1) a rotation-aware pretraining mechanism combining adaptive cropping with angular embeddings to handle sparsely distributed objects with arbitrary orientations, and 2) multi-scale token prediction objectives that address the extreme variations in object scales inherent to RS imagery. Systematic empirical studies validate that Mamba adheres to RS data and parameter scaling laws, with performance scaling reliably as model and data size increase. Furthermore, experiments across scene classification, object detection, and semantic segmentation tasks demonstrate that RoMA-pretrained Mamba models consistently outperform ViT-based counterparts in both accuracy and computational efficiency. 
The source code and pretrained models have been released on Anonymous Github.", "arxiv_id": "2503.10392v1", "arxiv_authors": ["Fengxiang Wang", "Hongzhen Wang", "Yulin Wang", "Di Wang", "Mingshuo Chen", "Haiyan Zhao", "Yangang Sun", "Shuo Wang", "Long Lan", "Wenjing Yang", "Jing Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a408"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.643Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1680369, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a877"}, "filepath": "data/2505.23756v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997970946729302, "type": "Poster", "name": "Rooms from Motion: Un-posed Indoor 3D Object Detection as Localization and Mapping", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120323", "abstract": "We revisit scene-level 3D object detection as the output of an object-centric framework capable of both localization and mapping using 3D oriented boxes as the underlying geometric primitive. While existing 3D object detection approaches operate globally and implicitly rely on the a priori existence of metric camera poses, our method, Rooms from Motion (RfM) operates on a collection of un-posed images. By replacing the standard 2D keypoint-based matcher of structure-from-motion with an object-centric matcher based on image-derived 3D boxes, we estimate metric camera poses, object tracks, and finally produce a global, semantic 3D object map. When a priori pose is available, we can significantly improve map quality through optimization of global 3D boxes against individual observations. RfM shows strong localization performance and subsequently produces maps of higher quality than leading point-based and multi-view 3D object detection methods on CA-1M and ScanNet++, despite these global methods relying on overparameterization through point clouds or dense volumes. Rooms from Motion achieves a general, object-centric representation which not only extends the work of Cubify Anything to full scenes but also allows for inherently sparse localization and parametric mapping proportional to the number of objects in a scene.", "arxiv_id": "2505.23756v1", "arxiv_authors": ["Justin Lazarow", "Kai Kang", "Afshin Dehghan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a409"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4562367, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a878"}, "filepath": "data/2505.13344v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990977954619518, "type": "Poster", "name": "RoPECraft: Training-Free Motion Transfer with Trajectory-Guided RoPE Optimization on Diffusion Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119588", "abstract": "We propose RoPECraft, a training-free video motion transfer method for diffusion transformers that operates solely by modifying their rotary positional embeddings (RoPE). 
We first extract dense optical flow from a reference video, and utilize the resulting motion offsets to warp the complex-exponential tensors of RoPE, effectively encoding motion into the generation process. These embeddings are then further optimized during denoising time steps via trajectory alignment between the predicted and target velocities using a flow-matching objective. To keep the output faithful to the text prompt and prevent duplicate generations, we incorporate a regularization term based on the phase components of the reference video\u2019s Fourier transform, projecting the phase angles onto a smooth manifold to suppress high-frequency artifacts. Experiments on benchmarks reveal that RoPECraft outperforms all recently published methods, both qualitatively and quantitatively. Code will be released.", "arxiv_id": "2505.13344v1", "arxiv_authors": ["Ahmet Berke Gokmen", "Yigit Ekin", "Bahri Batuhan Bilecen", "Aysegul Dundar"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a40a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5807829, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a879"}, "filepath": "data/2508.18633v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998914949075701, "type": "Poster", "name": "ROSE: Remove Objects with Side Effects in Videos", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115238", "abstract": "Video object removal has achieved advanced performance due to the recent success of video generative models. However, when addressing the side effects of objects, \\textit{e.g.,} their shadows and reflections, existing works struggle to eliminate these effects due to the scarcity of paired video data as supervision. This paper presents \\method, termed \\textbf{R}emove \\textbf{O}bjects with \\textbf{S}ide \\textbf{E}ffects, a framework that systematically studies the object's effects on the environment, which can be categorized into five common cases: shadows, reflections, light, translucency and mirror. Given the challenges of curating paired videos exhibiting the aforementioned effects, we leverage a 3D rendering engine for synthetic data generation. We carefully construct a fully-automatic pipeline for data preparation, which simulates a large-scale paired dataset with diverse scenes, objects, shooting angles, and camera trajectories. ROSE is implemented as a video inpainting model built on a diffusion transformer. To localize all object-correlated areas, the entire video is fed into the model for reference-based erasing. Moreover, additional supervision is introduced to explicitly predict the areas affected by side effects, which can be revealed through the differential mask between the paired videos. To fully investigate the model performance on various side effect removal, we present a new benchmark, dubbed ROSE-Bench, incorporating both common scenarios and the five special side effects for comprehensive evaluation. 
Experimental results demonstrate that \\method achieves superior performance compared to existing video object erasing models and generalizes well to real-world video scenarios.", "arxiv_id": "2508.18633v1", "arxiv_authors": ["Chenxuan Miao", "Yutong Feng", "Jianshu Zeng", "Zixiang Gao", "Hantang Liu", "Yunfeng Yan", "Donglian Qi", "Xi Chen", "Bin Wang", "Hengshuang Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a40b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1051189, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a87a"}, "filepath": "data/2509.23991v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990231288563446, "type": "Poster", "name": "RPG360: Robust 360 Depth Estimation with Perspective Foundation Models and Graph Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116329", "abstract": "The increasing use of 360$^\\circ$ images across various domains has emphasized the need for robust depth estimation techniques tailored for omnidirectional images. However, obtaining large-scale labeled datasets for 360$^\\circ$ depth estimation remains a significant challenge. In this paper, we propose RPG360, a training-free robust 360$^\\circ$ monocular depth estimation method that leverages perspective foundation models and graph optimization. Our approach converts 360$^\\circ$ images into six- face cubemap representations, where a perspective foundation model is employed to estimate depth and surface normals. To address depth scale inconsistencies across different faces of the cubemap, we introduce a novel depth scale alignment technique using graph-based optimization, which parameterizes the predicted depth and normal maps while incorporating an additional per-face scale parameter. This optimization ensures depth scale consistency across the six-face cubemap while preserving 3D structural integrity. Furthermore, as foundation models exhibit inherent robustness in zero-shot settings, our method achieves superior performance across diverse datasets, including Matterport3D, Stanford2D3D, and 360Loc. 
We also demonstrate the versatility of our depth estimation approach by validating its benefits in downstream tasks such as feature matching (3.2 \u223c 5.4%) and Structure from Motion (0.2 \u223c 9.7%) in AUC@5$^\circ$.", "arxiv_id": "2509.23991v1", "arxiv_authors": ["Dongki Jung", "Jaehoon Choi", "Yonghan Lee", "Dinesh Manocha"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a40c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2970478, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a87b"}, "filepath": "data/2509.01907v4.png", "tags": [], "_media_type": "image", "_rand": 0.9998232800854339, "type": "Poster", "name": "RSCC: A Large-Scale Remote Sensing Change Caption Dataset for Disaster Events", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121378", "abstract": "Remote sensing is critical for disaster monitoring, yet existing datasets lack temporal image pairs and detailed textual annotations. While single-snapshot imagery dominates current resources, it fails to capture dynamic disaster impacts over time. To address this gap, we introduce the Remote Sensing Change Caption (RSCC) dataset, a large-scale benchmark comprising 62,315 pre-/post-disaster image pairs (spanning earthquakes, floods, wildfires, and more) paired with rich, human-like change captions. By bridging the temporal and semantic divide in remote sensing data, RSCC enables robust training and evaluation of vision-language models for disaster-aware bi-temporal understanding. Our results highlight RSCC\u2019s ability to facilitate detailed disaster-related analysis, paving the way for more accurate, interpretable, and scalable vision-language applications in remote sensing. Code and dataset are available at https://github.com/Bili-Sakura/RSCC.", "arxiv_id": "2509.01907v4", "arxiv_authors": ["Zhenyuan Chen", "Chenxi Wang", "Ningyu Zhang", "Feng Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a40d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 975600, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a87c"}, "filepath": "data/2505.02064v3.png", "tags": [], "_media_type": "image", "_rand": 0.9995808087120825, "type": "Poster", "name": "RTV-Bench: Benchmarking MLLM Continuous Perception, Understanding and Reasoning through Real-Time Video", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121690", "abstract": "Multimodal Large Language Models (MLLMs) increasingly excel at perception, understanding, and reasoning. However, current benchmarks inadequately evaluate their ability to perform these tasks continuously in dynamic, real-world environments. To bridge this gap, we introduce RTV-Bench, a fine-grained benchmark for MLLM real-time video analysis. 
RTV-Bench includes three key principles: (1) Multi-Timestamp Question Answering (MTQA), where answers evolve with scene changes; (2) Hierarchical Question Structure, combining basic and advanced queries; and (3) Multi-dimensional Evaluation, assessing the ability of continuous perception, understanding, and reasoning. RTV-Bench contains 552 diverse videos (167.2 hours) and 4,631 high-quality QA pairs. We evaluated leading MLLMs, including proprietary (GPT-4o, Gemini 2.0), open-source offline (Qwen2.5-VL, VideoLLaMA3), and open-source real-time (VITA-1.5, InternLM-XComposer2.5-OmniLive) models. Experimental results show open-source real-time models largely outperform offline ones but still trail top proprietary models. Our analysis also reveals that larger model size or higher frame sampling rates do not significantly boost RTV-Bench performance, sometimes causing slight decreases. This underscores the need for better model architectures optimized for video stream processing and long sequences to advance real-time video analysis with MLLMs.", "arxiv_id": "2505.02064v3", "arxiv_authors": ["Shuhang Xun", "Sicheng Tao", "Jungang Li", "Yibo Shi", "Zhixin Lin", "Zhanhui Zhu", "Yibo Yan", "Hanqian Li", "Linghao Zhang", "Shikang Wang", "Yixin Liu", "Hanbo Zhang", "Ying Ma", "Xuming Hu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a40e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1094206, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a87d"}, "filepath": "data/2509.24266v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994099907720708, "type": "Poster", "name": "S$^2$NN: Sub-bit Spiking Neural Networks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116668", "abstract": "Spiking Neural Networks (SNNs) offer an energy-efficient paradigm for machine intelligence, but their continued scaling poses challenges for resource-limited deployment. Despite recent advances in binary SNNs, the storage and computational demands remain substantial for large-scale networks. To further explore the compression and acceleration potential of SNNs, we propose Sub-bit Spiking Neural Networks (S$^2$NNs) that represent weights with less than one bit. Specifically, we first establish an S$^2$NN baseline by leveraging the clustering patterns of kernels in well-trained binary SNNs. This baseline is highly efficient but suffers from \textit{outlier-induced codeword selection bias} during training. To mitigate this issue, we propose an \textit{outlier-aware sub-bit weight quantization} (OS-Quant) method, which optimizes codeword selection by identifying and adaptively scaling outliers. Furthermore, we propose a \textit{membrane potential-based feature distillation} (MPFD) method, improving the performance of highly compressed S$^2$NN via more precise guidance from a teacher model. 
Extensive results on vision and non-vision tasks reveal that S$^2$NN outperforms existing quantized SNNs in both performance and efficiency, making it promising for edge computing applications.", "arxiv_id": "2509.24266v2", "arxiv_authors": ["Wenjie Wei", "Malu Zhang", "Jieyuan Zhang", "Ammar Belatreche", "Shuai Wang", "Yimeng Shan", "Hanwen Liu", "Honglin Cao", "Guoqing Wang", "Yang Yang", "Haizhou Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a40f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1144131, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a87e"}, "filepath": "data/2506.09937v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995248223189782, "type": "Poster", "name": "SAFE: Scalable Failure Estimation for Vision-Language-Action Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117510", "abstract": "While vision-language-action models (VLAs) have shown promising robotic behaviors across a diverse set of manipulation tasks, they achieve limited success rates when deployed on novel tasks out-of-the-box. To allow these policies to safely interact with their environments, we need a failure detector that gives a timely alert such that the robot can stop, backtrack, or ask for help. However, existing failure detectors are trained and tested only on one or a few specific tasks, while VLAs require the detector to generalize and detect failures also in unseen tasks and novel environments. In this paper, we introduce the multitask failure detection problem and propose SAFE, a failure detector for generalist robot policies such as VLAs. We analyze the VLA feature space and find that VLAs have sufficient high-level knowledge about task success and failure, which is generic across different tasks. Based on this insight, we design SAFE to learn from VLA internal features and predict a single scalar indicating the likelihood of task failure. SAFE is trained on both successful and failed rollouts, and is evaluated on unseen tasks. SAFE is compatible with different policy architectures. We test it on OpenVLA, $\\pi_0$, and $\\pi_0$-FAST in both simulated and real-world environments extensively. 
We compare SAFE with diverse baselines and show that SAFE achieves state-of-the-art failure detection performance and the best trade-off between accuracy and detection time using conformal prediction.", "arxiv_id": "2506.09937v1", "arxiv_authors": ["Qiao Gu", "Yuanliang Ju", "Shengxiang Sun", "Igor Gilitschenski", "Haruki Nishimura", "Masha Itkina", "Florian Shkurti"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a410"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1052956, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a87f"}, "filepath": "data/2505.12667v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994251010999526, "type": "Poster", "name": "Safe-Sora: Safe Text-to-Video Generation via Graphical Watermarking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115956", "abstract": "The explosive growth of generative video models has amplified the demand for reliable copyright preservation of AI-generated content. Despite its popularity in image synthesis, invisible generative watermarking remains largely underexplored in video generation. To address this gap, we propose Safe-Sora, the first framework to embed graphical watermarks directly into the video generation process. Motivated by the observation that watermarking performance is closely tied to the visual similarity between the watermark and cover content, we introduce a hierarchical coarse-to-fine adaptive matching mechanism. Specifically, the watermark image is divided into patches, each assigned to the most visually similar video frame, and further localized to the optimal spatial region for seamless embedding. To enable spatiotemporal fusion of watermark patches across video frames, we develop a 3D wavelet transform-enhanced Mamba architecture with a novel scanning strategy, effectively modeling long-range dependencies during watermark embedding and retrieval. To the best of our knowledge, this is the first attempt to apply state space models to watermarking, opening new avenues for efficient and robust watermark protection. Extensive experiments demonstrate that Safe-Sora achieves state-of-the-art performance in terms of video quality, watermark fidelity, and robustness, which is largely attributed to our proposals. 
Code and additional supporting materials are provided in the supplementary.", "arxiv_id": "2505.12667v2", "arxiv_authors": ["Zihan Su", "Xuerui Qiu", "Hongbin Xu", "Tangyu Jiang", "Junhao Zhuang", "Chun Yuan", "Ming Li", "Shengfeng He", "Fei Richard Yu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a411"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1100431, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a880"}, "filepath": "data/2505.11926v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996140548138066, "type": "Poster", "name": "SafeVid: Toward Safety Aligned Video Large Multimodal Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121640", "abstract": "As Video Large Multimodal Models (VLMMs) rapidly advance, their inherent complexity introduces significant safety challenges, particularly the issue of mismatched generalization where static safety alignments fail to transfer to dynamic video contexts. We introduce SafeVid, a framework designed to instill video-specific safety principles in VLMMs. SafeVid uniquely transfers robust textual safety alignment capabilities to the video domain by employing detailed textual video descriptions as an interpretive bridge, facilitating LLM-based rule-driven safety reasoning. This is achieved through a closed-loop system comprising: 1) generation of SafeVid-350K, a novel 350,000-pair video-specific safety preference dataset; 2) targeted alignment of VLMMs using Direct Preference Optimization (DPO); and 3) comprehensive evaluation via our new SafeVidBench benchmark. Alignment with SafeVid-350K significantly enhances VLMM safety, with models like LLaVA-NeXT-Video demonstrating substantial improvements (e.g., up to 42.39%) on SafeVidBench. SafeVid provides critical resources and a structured approach, demonstrating that leveraging textual descriptions as a conduit for safety reasoning markedly improves the safety alignment of VLMMs in complex multimodal scenarios.", "arxiv_id": "2505.11926v1", "arxiv_authors": ["Yixu Wang", "Jiaxin Song", "Yifeng Gao", "Xin Wang", "Yang Yao", "Yan Teng", "Xingjun Ma", "Yingchun Wang", "Yu-Gang Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a412"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1131867, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a881"}, "filepath": "data/2503.03480v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990391203564793, "type": "Poster", "name": "SafeVLA: Towards Safety Alignment of Vision-Language-Action Model via Constrained Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116975", "abstract": "Vision-language-action models (VLAs) show potential as generalist robot policies. However, these models pose extreme safety challenges during real-world deployment, including the risk of harm to the environment, the robot itself, and humans. 
*How can safety constraints be explicitly integrated into VLAs?* We address this by exploring an integrated safety approach (ISA), systematically **modeling** safety requirements, then actively **eliciting** diverse unsafe behaviors, effectively **constraining** VLA policies via safe reinforcement learning, and rigorously **assuring** their safety through targeted evaluations. Leveraging the constrained Markov decision process (CMDP) paradigm, ISA optimizes VLAs from a min-max perspective against elicited safety risks. Thus, policies aligned through this comprehensive approach achieve the following key features: (I) effective **safety-performance trade-offs**, this exploration yields an 83.58\\% safety improvement compared to the current state-of-the-art method, while also maintaining task performance (+3.85\\%). (II) strong **safety assurance**, with the ability to mitigate long-tail risks and handle extreme failure scenarios. (III) robust **generalization** of learned safety behaviors to various out-of-distribution perturbations.", "arxiv_id": "2503.03480v2", "arxiv_authors": ["Borong Zhang", "Yuhao Zhang", "Jiaming Ji", "Yingshan Lei", "Josef Dai", "Yuanpei Chen", "Yaodong Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a413"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1181351, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a882"}, "filepath": "data/2510.10160v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998388921315406, "type": "Poster", "name": "SaFiRe: Saccade-Fixation Reiteration with Mamba for Referring Image Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117451", "abstract": "Referring Image Segmentation (RIS) aims to segment the target object in an image given a natural language expression. While recent methods leverage pre-trained vision backbones and more training corpus to achieve impressive results, they predominantly focus on simple expressions\u2014short, clear noun phrases like \u201cred car\u201d or \u201cleft girl\u201d. This simplification often reduces RIS to a key word/concept matching problem, limiting the model\u2019s ability to handle referential ambiguity in expressions. In this work, we identify two challenging real-world scenarios: object-distracting expressions, which involve multiple entities with contextual cues, and category-implicit expressions, where the object class is not explicitly stated. To address the challenges, we propose a novel framework, SaFiRe, which mimics the human two-phase cognitive process\u2014first forming a global understanding, then refining it through detail-oriented inspection. This is naturally supported by Mamba\u2019s scan-then-update property, which aligns with our phased design and enables efficient multi-cycle refinement with linear complexity. We further introduce aRefCOCO, a new benchmark designed to evaluate RIS models under ambiguous referring expressions. 
Extensive experiments on both standard and proposed datasets demonstrate the superiority of SaFiRe over state-of-the-art baselines.", "arxiv_id": "2510.10160v1", "arxiv_authors": ["Zhenjie Mao", "Yuhuan Yang", "Chaofan Ma", "Dongsheng Jiang", "Jiangchao Yao", "Ya Zhang", "Yanfeng Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a414"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1096283, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a883"}, "filepath": "data/2510.15194v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994470461778541, "type": "Poster", "name": "Salient Concept-Aware Generative Data Augmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115341", "abstract": "Recent generative data augmentation methods conditioned on both image and text prompts struggle to balance between fidelity and diversity, as it is challenging to preserve essential image details while aligning with varied text prompts. This challenge arises because representations in the synthesis process often become entangled with non-essential input image attributes such as environmental contexts, creating conflicts with text prompts intended to modify these elements. To address this, we propose a personalized image generation framework that uses a salient concept-aware image embedding model to reduce the influence of irrelevant visual details during the synthesis process, thereby maintaining intuitive alignment between image and text inputs. By generating images that better preserve class-discriminative features with additional controlled variations, our framework effectively enhances the diversity of training datasets and thereby improves the robustness of downstream models. Our approach demonstrates superior performance across eight fine-grained vision datasets, outperforming state-of-the-art augmentation methods with averaged classification accuracy improvements by 0.73\\% and 6.5\\% under conventional and long-tail settings, respectively.", "arxiv_id": "2510.15194v1", "arxiv_authors": ["Tianchen Zhao", "Xuanbai Chen", "Zhihua Li", "Jun Fang", "Dongsheng An", "Xiang Xu", "Zhuowen Tu", "Yifan Xing"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a415"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.644Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061012, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a884"}, "filepath": "data/2505.18812v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991499902324447, "type": "Poster", "name": "SAMA: Towards Multi-Turn Referential Grounded Video Chat with Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117034", "abstract": "Achieving fine-grained spatio-temporal understanding in videos remains a major challenge for current Video Large Multimodal Models (Video LMMs). 
Addressing this challenge requires mastering two core capabilities: video referring understanding, which captures the semantics of video regions, and video grounding, which segments object regions based on natural language descriptions. However, most existing approaches tackle these tasks in isolation, limiting progress toward unified, referentially grounded video interaction. We identify a key bottleneck in the lack of high-quality, unified video instruction data and a comprehensive benchmark for evaluating referentially grounded video chat. To address these challenges, we contribute in three core aspects: dataset, model, and benchmark. First, we introduce SAMA-239K, a large-scale dataset comprising 15K videos specifically curated to enable joint learning of video referring understanding, grounding, and multi-turn video chat. Second, we propose the SAMA model, which incorporates a versatile spatio-temporal context aggregator and a Segment Anything Model to jointly enhance fine-grained video comprehension and precise grounding capabilities. Finally, we establish SAMA-Bench, a meticulously designed benchmark consisting of 5,067 questions from 522 videos, to comprehensively evaluate the integrated capabilities of Video LMMs in multi-turn, spatio-temporal referring understanding and grounded dialogue. Extensive experiments and benchmarking results show that SAMA not only achieves strong performance on SAMA-Bench but also sets a new state-of-the-art on general grounding benchmarks, while maintaining highly competitive performance on standard visual understanding benchmarks.", "arxiv_id": "2505.18812v2", "arxiv_authors": ["Ye Sun", "Hao Zhang", "Henghui Ding", "Tiehua Zhang", "Xingjun Ma", "Yu-Gang Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a416"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061826, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a885"}, "filepath": "data/2509.15536v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990918792225161, "type": "Poster", "name": "SAMPO: Scale-wise Autoregression with Motion Prompt for Generative World Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118228", "abstract": "World models allow agents to simulate the consequences of actions in imagined environments for planning, control, and long-horizon decision-making. However, existing autoregressive world models struggle with visually coherent predictions due to disrupted spatial structure, inefficient decoding, and inadequate motion modeling. In response, we propose Scale-wise Autoregression with Motion PrOmpt (SAMPO), a hybrid framework that combines visual autoregressive modeling for intra-frame generation with causal modeling for next-frame generation. Specifically, SAMPO integrates temporal causal decoding with bidirectional spatial attention, which preserves spatial locality and supports parallel decoding within each scale. This design significantly enhances both temporal consistency and rollout efficiency. To further improve dynamic scene understanding, we devise an asymmetric multi-scale tokenizer that preserves spatial details in observed frames and extracts compact dynamic representations for future frames, optimizing both memory usage and model performance. 
Additionally, we introduce a trajectory-aware motion prompt module that injects spatiotemporal cues about object and robot trajectories, focusing attention on dynamic regions and improving temporal consistency and physical realism. Extensive experiments show that SAMPO achieves competitive performance in action-conditioned video prediction and model-based control, improving generation quality with 4.4\u00d7 faster inference. We also evaluate SAMPO's zero-shot generalization and scaling behavior, demonstrating its ability to generalize to unseen tasks and benefit from larger model sizes.", "arxiv_id": "2509.15536v2", "arxiv_authors": ["Sen Wang", "Jingyi Tian", "Le Wang", "Zhimin Liao", "Jiayi Li", "Huaiyi Dong", "Kun Xia", "Sanping Zhou", "Wei Tang", "Hua Gang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a417"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1044891, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a886"}, "filepath": "data/2505.22596v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998033062822299, "type": "Poster", "name": "SAM-R1: Leveraging SAM for Reward Feedback in Multimodal Segmentation via Reinforcement Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117014", "abstract": "Leveraging multimodal large models for image segmentation has become a prominent research direction. However, existing approaches typically rely heavily on manually annotated datasets that include explicit reasoning processes, which are costly and time-consuming to produce. Recent advances suggest that reinforcement learning (RL) can endow large models with reasoning capabilities without requiring such reasoning-annotated data. In this paper, we propose SAM-R1, a novel framework that enables multimodal large models to perform fine-grained reasoning in image understanding tasks. Our approach is the first to incorporate fine-grained segmentation settings during the training of multimodal reasoning models. By integrating task-specific, fine-grained rewards with a tailored optimization objective, we further enhance the model's reasoning and segmentation alignment. 
We also leverage the Segment Anything Model (SAM) as a strong and flexible reward provider to guide the learning process. With only 3k training samples, SAM-R1 achieves strong performance across multiple benchmarks, demonstrating the effectiveness of reinforcement learning in equipping multimodal models with segmentation-oriented reasoning capabilities.", "arxiv_id": "2505.22596v1", "arxiv_authors": ["Jiaqi Huang", "Zunnan Xu", "Jun Zhou", "Ting Liu", "Yicheng Xiao", "Mingwen Ou", "Bowen Ji", "Xiu Li", "Kehong Yuan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a418"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1044068, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a887"}, "filepath": "data/2505.21795v1.png", "tags": [], "_media_type": "image", "_rand": 0.999730027726684, "type": "Poster", "name": "SANSA: Unleashing the Hidden Semantics in SAM2 for Few-Shot Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116074", "abstract": "Few-shot segmentation aims to segment unseen object categories from just a handful of annotated examples. This requires mechanisms that can both identify semantically related objects across images and accurately produce segmentation masks. We note that Segment Anything 2 (SAM2), with its prompt-and-propagate mechanism, offers both strong segmentation capabilities and a built-in feature matching process. However, we show that its representations are entangled with task-specific cues optimized for object tracking, which impairs its use for tasks requiring higher level semantic understanding. Our key insight is that, despite its class-agnostic pretraining, SAM2 already encodes rich semantic structure in its features. We propose SANSA (Semantically AligNed SegmentAnything 2), a framework that makes this latent structure explicit, and repurposes SAM2 for few-shot segmentation through minimal task-specific modifications. 
SANSA achieves state-of-the-art performance on few-shot segmentation benchmarks specifically designed to assess generalization, outperforms generalist methods in the popular in-context setting, supports flexible promptable interaction via points, boxes, or scribbles, and remains significantly faster and more compact than prior approaches.", "arxiv_id": "2505.21795v1", "arxiv_authors": ["Claudia Cuttano", "Gabriele Trivigno", "Giuseppe Averta", "Carlo Masone"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a419"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1120363, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a888"}, "filepath": "data/2505.15870v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997367913313998, "type": "Poster", "name": "Satellites Reveal Mobility: A Commuting Origin-destination Flow Generator for Global Cities", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121846", "abstract": "Commuting Origin-destination (OD) flows, capturing daily population mobility of citizens, are vital for sustainable development across cities around the world. However, it is challenging to obtain the data due to the high cost of travel surveys and privacy concerns. Surprisingly, we find that satellite imagery, publicly available across the globe, contains rich urban semantic signals to support high-quality OD flow generation, with over 98\\% expressiveness of traditional multisource hard-to-collect urban sociodemographic, economics, land use, and point of interest data. This inspires us to design a novel data generator, GlODGen, which can generate OD flow data for any cities of interest around the world. Specifically, GlODGen first leverages Vision-Language Geo-Foundation Models to extract urban semantic signals related to human mobility from satellite imagery. These features are then combined with population data to form region-level representations, which are used to generate OD flows via graph diffusion models. Extensive experiments on 4 continents and 6 representative cities show that GlODGen has great generalizability across diverse urban environments on different continents and can generate OD flow data for global cities highly consistent with real-world mobility data. We implement GlODGen as an automated tool, seamlessly integrating data acquisition and curation, urban semantic feature extraction, and OD flow generation together. 
It has been released at https://github.com/tsinghua-fib-lab/generate-od-pubtools.", "arxiv_id": "2505.15870v1", "arxiv_authors": ["Can Rong", "Xin Zhang", "Yanxin Xi", "Hongjie Sui", "Jingtao Ding", "Yong Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a41a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1095650, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a889"}, "filepath": "data/2506.05414v1.png", "tags": [], "_media_type": "image", "_rand": 0.999652836182507, "type": "Poster", "name": "SAVVY: Spatial Awareness via Audio-Visual LLMs through Seeing and Hearing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115001", "abstract": "3D spatial reasoning in dynamic, audio-visual environments is a cornerstone of human cognition yet remains largely unexplored by existing Audio-Visual Large Language Models (AV-LLMs) and benchmarks, which predominantly focus on static or 2D scenes. We introduce SAVVY-Bench, the first benchmark for 3D spatial reasoning in dynamic scenes with synchronized spatial audio. SAVVY-Bench is comprised of thousands of carefully curated question\u2013answer pairs probing both directional and distance relationships involving static and moving objects, and requires fine-grained temporal grounding, consistent 3D localization, and multi-modal annotation. To tackle this challenge, we propose SAVVY, a novel training-free reasoning pipeline that consists of two stages: (i) Egocentric Spatial Tracks Estimation, which leverages AV-LLMs as well as other audio-visual methods to track the trajectories of key objects related to the query using both visual and spatial audio cues, and (ii) Dynamic Global Map Construction, which aggregates multi-modal queried object trajectories and converts them into a unified global dynamic map. Using the constructed map, a final QA answer is obtained through a coordinate transformation that aligns the global map with the queried viewpoint. Empirical evaluation demonstrates that SAVVY substantially enhances performance of state-of-the-art AV-LLMs, setting a new standard and stage for approaching dynamic 3D spatial reasoning in AV-LLMs.", "arxiv_id": "2506.05414v1", "arxiv_authors": ["Mingfei Chen", "Zijun Cui", "Xiulong Liu", "Jinlin Xiang", "Caleb Zheng", "Jingyuan Li", "Eli Shlizerman"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a41b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1934215, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a88a"}, "filepath": "data/2506.19212v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994860373856883, "type": "Poster", "name": "Scaffolding Dexterous Manipulation with Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118204", "abstract": "Dexterous robotic hands are essential for performing complex manipulation tasks, yet remain difficult to train due to the challenges of demonstration collection and high-dimensional control. 
While reinforcement learning (RL) can alleviate the data bottleneck by generating experience in simulation, it typically relies on carefully designed, task-specific reward functions, which hinder scalability and generalization. Thus, contemporary works in dexterous manipulation have often bootstrapped from reference trajectories. These trajectories specify target hand poses that guide the exploration of RL policies and object poses that enable dense, task-agnostic rewards. However, sourcing suitable trajectories---particularly for dexterous hands---remains a significant challenge. Yet, the precise details in explicit reference trajectories are often unnecessary, as RL ultimately refines the motion. Our key insight is that modern vision-language models (VLMs) already encode the commonsense spatial and semantic knowledge needed to specify tasks and guide exploration effectively. Given a task description (e.g., \u201copen the cabinet\u201d) and a visual scene, our method uses an off-the-shelf VLM to first identify task-relevant keypoints (e.g., handles, buttons) and then synthesize 3D trajectories for hand motion and object motion. Subsequently, we train a low-level residual RL policy in simulation to track these coarse trajectories or ``scaffolds'' with high fidelity. Across a number of simulated tasks involving articulated objects and semantic understanding, we demonstrate that our method is able to learn robust dexterous manipulation policies. Moreover, we showcase that our method transfers to real-world robotic hands without any human demonstrations or handcrafted rewards.", "arxiv_id": "2506.19212v1", "arxiv_authors": ["Vincent de Bakker", "Joey Hejna", "Tyler Ga Wei Lum", "Onur Celik", "Aleksandar Taranovic", "Denis Blessing", "Gerhard Neumann", "Jeannette Bohg", "Dorsa Sadigh"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a41c"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1013718, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a88b"}, "filepath": "data/2510.22994v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996966609412211, "type": "Poster", "name": "SceneDecorator: Towards Scene-Oriented Story Generation with Scene Planning and Scene Consistency", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117381", "abstract": "Recent text-to-image models have revolutionized image generation, but they still struggle with maintaining narrative consistency across images. While existing solutions focus on character consistency, they overlook the crucial role of scenes in storytelling, restricting their creativity in practice. This paper introduces scene-oriented story generation, addressing two key challenges: (i) scene planning, where current methods fail to ensure narrative coherence across scenes due to independent scene generation, and (ii) scene consistency, which remains largely unexplored in terms of maintaining coherence across multiple storylines. We propose SceneDecorator, a training-free framework that employs VLM-Guided Scene Planning to ensure narrative coherence between different scenes in a ``global-to-local\" manner, and Long-Term Scene-Sharing Attention to maintain scene consistency and subject style diversity across different stories.
Comprehensive experiments demonstrate the superior performance of SceneDecorator, highlighting its potential to unleash creativity in the fields of arts, films, and games. Code will be released.", "arxiv_id": "2510.22994v1", "arxiv_authors": ["Quanjian Song", "Donghao Zhou", "Jingyu Lin", "Fei Shen", "Jiaze Wang", "Xiaowei Hu", "Cunjian Chen", "Pheng-Ann Heng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a41d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3931255, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a88c"}, "filepath": "data/2509.15693v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991225904773432, "type": "Poster", "name": "SceneForge: Enhancing 3D-text alignment with Structured Scene Compositions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119379", "abstract": "The whole is greater than the sum of its parts\u2014even in 3D-text contrastive learning. We introduce SceneForge, a novel framework that enhances contrastive alignment between 3D point clouds and text through structured multi-object scene compositions. SceneForge leverages individual 3D shapes to construct multi-object scenes with explicit spatial relations, pairing them with coherent multi-object descriptions refined by a large language model. By augmenting contrastive training with these structured, compositional samples, SceneForge effectively addresses the scarcity of large-scale 3D-text datasets, significantly enriching data complexity and diversity. We systematically investigate critical design elements, such as the optimal number of objects per scene, the proportion of compositional samples in training batches, and scene construction strategies. Extensive experiments demonstrate that SceneForge delivers substantial performance gains across multiple tasks, including zero-shot classification on ModelNet, ScanObjNN, Objaverse-LVIS, and ScanNet, as well as few-shot part segmentation on ShapeNetPart. SceneForge\u2019s compositional augmentations are model-agnostic, consistently improving performance across multiple encoder architectures. 
Moreover, SceneForge improves 3D visual question answering on ScanQA, generalizes robustly to retrieval scenarios with increasing scene complexity, and showcases spatial reasoning capabilities by adapting spatial configurations to align precisely with textual instructions.", "arxiv_id": "2509.15693v2", "arxiv_authors": ["Cristian Sbrolli", "Matteo Matteucci"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a41e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1100145, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a88d"}, "filepath": "data/2509.20414v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998122358776346, "type": "Poster", "name": "SceneWeaver: All-in-One 3D Scene Synthesis with an Extensible and Self-Reflective Agent", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115278", "abstract": "Indoor scene synthesis has become increasingly important with the rise of Embodied AI, which requires 3D environments that are not only visually realistic but also physically plausible and functionally diverse. While recent approaches have advanced visual fidelity, they often remain constrained to fixed scene categories, lack sufficient object-level detail and physical consistency, and struggle to align with complex user instructions. In this work, we present SceneWeaver, a reflective agentic framework that unifies diverse scene synthesis paradigms through tool-based iterative refinement. At its core, SceneWeaver employs a language model-based planner to select from a suite of extensible scene generation tools, ranging from data-driven generative models to visual- and LLM-based methods, guided by self-evaluation of physical plausibility, visual realism, and semantic alignment with user input. This closed-loop reason-act-reflect design enables the agent to identify semantic inconsistencies, invoke targeted tools, and update the environment over successive iterations. 
Extensive experiments on both common and open-vocabulary room types demonstrate that SceneWeaver not only outperforms prior methods on physical, visual, and semantic metrics, but also generalizes effectively to complex scenes with diverse instructions, marking a step toward general-purpose 3D environment generation.", "arxiv_id": "2509.20414v2", "arxiv_authors": ["Yandan Yang", "Baoxiong Jia", "Shujie Zhang", "Siyuan Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a41f"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2467156, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a88e"}, "filepath": "data/2506.08997v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997229513186213, "type": "Poster", "name": "SDTagNet: Leveraging Text-Annotated Navigation Maps for Online HD Map Construction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118408", "abstract": "Autonomous vehicles rely on detailed and accurate environmental information to operate safely. High definition (HD) maps offer a promising solution, but their high maintenance cost poses a significant barrier to scalable deployment. This challenge is addressed by online HD map construction methods, which generate local HD maps from live sensor data. However, these methods are inherently limited by the short perception range of onboard sensors. To overcome this limitation and improve general performance, recent approaches have explored the use of standard definition (SD) maps as a prior, which are significantly easier to maintain. We propose SDTagNet, the first online HD map construction method that fully utilizes the information of widely available SD maps, like OpenStreetMap, to enhance far range detection accuracy. Our approach introduces two key innovations. First, in contrast to previous work, we incorporate not only polyline SD map data with manually selected classes, but additional semantic information in the form of textual annotations. In this way, we enrich SD vector map tokens with NLP-derived features, eliminating the dependency on predefined specifications or exhaustive class taxonomies. Second, we introduce a point-level SD map encoder together with orthogonal element identifiers to uniformly integrate all types of map elements. Experiments on Argoverse 2 and nuScenes show that this boosts map perception performance by up to +5.9 mAP (+45%) w.r.t. map construction without priors and up to +3.2 mAP (+20%) w.r.t.
previous approaches that already use SD map priors.", "arxiv_id": "2506.08997v2", "arxiv_authors": ["Fabian Immel", "Jan-Hendrik Pauls", "Richard Fehler", "Frank Bieder", "Jonas Merkert", "Christoph Stiller"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a420"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.645Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1057297, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a88f"}, "filepath": "data/2509.17664v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996808257620473, "type": "Poster", "name": "SD-VLM: Spatial Measuring and Understanding with Depth-Encoded Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118216", "abstract": "While vision language models (VLMs) excel in 2D semantic visual understanding, their ability to quantitatively reason about 3D spatial relationships remains under-explored, due to the deficiency of 2D images' spatial representation ability. In this paper, we analyze the problem hindering VLMs' spatial understanding abilities and propose SD-VLM, a novel framework that significantly enhances fundamental spatial perception abilities of VLMs through two key contributions: (1) propose Massive Spatial Measuring and Understanding (MSMU) dataset with precise spatial annotations, and (2) introduce a simple depth positional encoding method strengthening VLMs' spatial awareness. MSMU dataset covers massive quantitative spatial tasks with 700K QA pairs, 2.5M physical numerical annotations, and 10K chain-of-thought augmented samples. We have trained SD-VLM, a strong generalist VLM which shows superior quantitative spatial measuring and understanding capability. SD-VLM not only achieves state-of-the-art performance on our proposed MSMU-Bench, but also shows spatial generalization abilities on other spatial understanding benchmarks including Q-Spatial and SpatialRGPT-Bench. Extensive experiments demonstrate that SD-VLM outperforms GPT-4o and Intern-VL3-78B by 26.91% and 25.56% respectively on MSMU-Bench. We will release MSMU dataset and SD-VLM to facilitate future research in quantitative spatial measuring and understanding.", "arxiv_id": "2509.17664v1", "arxiv_authors": ["Pingyi Chen", "Yujing Lou", "Shen Cao", "Jinhui Guo", "Lubin Fan", "Yue Wu", "Lin Yang", "Lizhuang Ma", "Jieping Ye"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a421"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1149122, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a890"}, "filepath": "data/2510.18740v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990717462063189, "type": "Poster", "name": "SEAL: Semantic-Aware Hierarchical Learning for Generalized Category Discovery", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119406", "abstract": "This paper investigates the problem of Generalized Category Discovery (GCD). 
Given a partially labelled dataset, GCD aims to categorize all unlabelled images, regardless of whether they belong to known or unknown classes. Existing approaches typically depend on either single-level semantics or manually designed abstract hierarchies, which limit their generalizability and scalability. To address these limitations, we introduce a SEmantic-aware hierArchical Learning framework (SEAL), guided by naturally occurring and easily accessible hierarchical structures. Within SEAL, we propose a Hierarchical Semantic-Guided Soft Contrastive Learning approach that exploits hierarchical similarity to generate informative soft negatives, addressing the limitations of conventional contrastive losses that treat all negatives equally. Furthermore, a Cross-Granularity Consistency (CGC) module is designed to align the predictions from different levels of granularity. SEAL consistently achieves state-of-the-art performance on fine-grained benchmarks, including the SSB benchmark, Oxford-Pet, and the Herbarium19 dataset, and further demonstrates generalization on coarse-grained datasets.", "arxiv_id": "2510.18740v1", "arxiv_authors": ["Zhenqi He", "Yuanpei Liu", "Kai Han"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a422"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1041012, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a891"}, "filepath": "data/2506.04224v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994700586094009, "type": "Poster", "name": "Seeing in the Dark: Benchmarking Egocentric 3D Vision with the Oxford Day-and-Night Dataset", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121549", "abstract": "We introduce Oxford Day-and-Night, a large-scale, egocentric dataset for novel view synthesis (NVS) and visual relocalisation under challenging lighting conditions. Existing datasets often lack crucial combinations of features such as ground-truth 3D geometry, wide-ranging lighting variation, and full 6DoF motion. Oxford Day-and-Night addresses these gaps by leveraging Meta ARIA glasses to capture egocentric video and applying multi-session SLAM to estimate camera poses, reconstruct 3D point clouds, and align sequences captured under varying lighting conditions, including both day and night. The dataset spans over 30 km of recorded trajectories and covers an area of $40{,}000\\mathrm{m}^2$, offering a rich foundation for egocentric 3D vision research. 
It supports two core benchmarks, NVS and relocalisation, providing a unique platform for evaluating models in realistic and diverse environments.", "arxiv_id": "2506.04224v1", "arxiv_authors": ["Zirui Wang", "Wenjing Bian", "Xinghui Li", "Yifu Tao", "Jianeng Wang", "Maurice Fallon", "Victor Adrian Prisacariu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a423"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1138629, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a892"}, "filepath": "data/2506.20168v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991217246840143, "type": "Poster", "name": "Seeing is Believing? Mitigating OCR Hallucinations in Multimodal Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117155", "abstract": "Recent advancements in multimodal large language models (MLLMs) have enhanced document understanding by integrating textual and visual information. However, existing models exhibit incompleteness within their paradigm in real-world scenarios, particularly under visual degradation (e.g., blur, occlusion, low contrast). In such conditions, the current response paradigm often fails to adequately perceive visual degradation and ambiguity, leading to overreliance on linguistic priors or misaligned visual-textual reasoning. This difficulty in recognizing uncertainty frequently results in the generation of hallucinatory content, especially when a precise answer is not feasible. To better demonstrate and analyze this phenomenon and problem, we propose KIE-HVQA, the first benchmark dedicated to evaluating OCR hallucination in degraded document understanding. This dataset includes test samples spanning identity cards, invoices, and prescriptions, with simulated real-world degradations and pixel-level annotations for OCR reliability. This setup allows for evaluating models' capacity, under degraded input, to distinguish reliable visual information and answer accordingly, thereby highlighting the challenge of avoiding hallucination on uncertain data. To achieve vision-faithful reasoning and thereby avoid the aforementioned issues, we further introduce a Group Relative Policy Optimization (GRPO)-based framework featuring a novel reward mechanism. By incorporating a self-awareness of visual uncertainty and an analysis method that initiates refusal to answer to increase task difficulty within our supervised fine-tuning and reinforcement learning framework, we successfully mitigated hallucinations in ambiguous regions. Experiments on Qwen2.5-VL demonstrate that our 7B-parameter model achieves a ~28% absolute improvement in hallucination-free accuracy over GPT-4o on KIE-HVQA and there is no significant performance drop in standard tasks, highlighting both effectiveness and robustness. 
This work advances the development of reliable MLLMs for real-world document analysis by addressing critical challenges in visual-linguistic alignment under degradation.", "arxiv_id": "2506.20168v2", "arxiv_authors": ["Zhentao He", "Can Zhang", "Ziheng Wu", "Zhenghao Chen", "Yufei Zhan", "Yifan Li", "Zhao Zhang", "Xian Wang", "Minghui Qiu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a424"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1042455, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a893"}, "filepath": "data/2506.03340v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993282027736294, "type": "Poster", "name": "Seeing the Arrow of Time in Large Multimodal Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118287", "abstract": "The Arrow of Time (AoT)\u2014time's irreversible flow shaping physical events\u2014is fundamental to video comprehension, yet remains a significant challenge for modern large multimodal models (LMMs). Current LMMs struggle to perceive and utilize temporal directionality in video when responding to language queries, obstructing deeper temporal understanding. We tackle this deficiency by first providing a critical analysis of existing benchmarks and models. We then introduce ArrowRL, a reinforcement learning (RL)-based training strategy with an innovative reverse reward that instills AoT awareness by encouraging divergent video interpretations between forward and reversed visual frames. For rigorous evaluation, we additionally develop AoTBench, a new multi-faceted benchmark probing temporally challenging questions. Experiments show ArrowRL greatly advances temporal perception: it not only achieves substantial improvements on our challenging AoTBench but also demonstrably boosts performance on standard video question answering (VQA) benchmarks (with peak accuracy gains reaching over 20% and 10% respectively). This validates ArrowRL's effectiveness and highlights the critical need for dedicated AoT understanding in LMMs.", "arxiv_id": "2506.03340v2", "arxiv_authors": ["Zihui Xue", "Mi Luo", "Kristen Grauman"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a425"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1112675, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a894"}, "filepath": "data/2510.00441v3.png", "tags": [], "_media_type": "image", "_rand": 0.9998179291365451, "type": "Poster", "name": "Seeing through Uncertainty: Robust Task-Oriented Optimization in Visual Navigation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117355", "abstract": "Visual navigation is essential for autonomous agents, yet data scarcity poses a fundamental bottleneck that severely impairs the generalization of learned policies to unseen scenarios. Existing visual navigation agents depend on complex predictive or reasoning modules that become counterproductive and data-hungry in such limited-data regimes. 
This paper introduces NeuRO, a novel hybrid framework that pioneers the integration of networks with downstream task-based optimization to tackle this critical problem. NeuRO addresses core difficulties in this integration: (i) it empowers the network to translate inherently unreliable visual predictions under data scarcity into calibrated convex uncertainty sets, which then directly inform and constrain the downstream optimization problem, using Partially Input Convex Neural Networks (PICNNs) via a conformal method; and (ii) it reformulates the partially observable task as a generalizable robust optimization problem to effectively leverage these uncertainty-aware representations to derive robust policies. Extensive experiments on both unordered and sequential MultiON tasks demonstrate that NeuRO establishes state-of-the-art performance, particularly in generalization to unseen environments. Our work thus presents a significant advancement for developing robust, generalizable autonomous agents.", "arxiv_id": "2510.00441v3", "arxiv_authors": ["Yiyuan Pan", "Yunzhe Xu", "Zhe Liu", "Hesheng Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a426"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1119098, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a895"}, "filepath": "data/2506.16802v1.png", "tags": [], "_media_type": "image", "_rand": 0.999744408330308, "type": "Poster", "name": "Seeing What Matters: Generalizable AI-generated Video Detection with Forensic-Oriented Augmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117010", "abstract": "Synthetic video generation is progressing very rapidly. The latest models can produce very realistic high-resolution videos that are virtually indistinguishable from real ones. Although several video forensic detectors have been recently proposed, they often exhibit poor generalization, which limits their applicability in a real-world scenario. Our key insight to overcome this issue is to guide the detector towards _seeing_ _what_ _really_ _matters_. In fact, a well-designed forensic classifier should focus on identifying intrinsic low-level artifacts introduced by a generative architecture rather than relying on high-level semantic flaws that characterize a specific model. In this work, first, we study different generative architectures, searching and identifying discriminative features that are unbiased, robust to impairments, and shared across models. Then, we introduce a novel forensic-oriented data augmentation strategy based on the wavelet decomposition and replace specific frequency-related bands to drive the model to exploit more relevant forensic cues. Our novel training paradigm improves the generalizability of AI-generated video detectors, without the need for complex algorithms and large datasets that include multiple synthetic generators. To evaluate our approach, we train the detector using data from a single generative model and test it against videos produced by a wide range of other models. Despite its simplicity, our method achieves a significant accuracy improvement over state-of-the-art detectors and obtains excellent results even on very recent generative models, such as NOVA and FLUX. 
Code and data will be made publicly available.", "arxiv_id": "2506.16802v1", "arxiv_authors": ["Riccardo Corvi", "Davide Cozzolino", "Ekta Prashnani", "Shalini De Mello", "Koki Nagano", "Luisa Verdoliva"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a427"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1064707, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a896"}, "filepath": "data/2504.05288v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993998414746579, "type": "Poster", "name": "Seeking and Updating with Live Visual Knowledge", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121414", "abstract": "The visual world around us constantly evolves, from real-time news and social media trends to global infrastructure changes visible through satellite imagery and augmented reality enhancements. However, Multimodal Large Language Models (MLLMs), which automate many tasks, struggle to stay current, limited by the cutoff dates in their fixed training datasets. To quantify this stagnation, we introduce LiveVQA, the first-of-its-kind dataset featuring 107,143 samples and 12 categories of data specifically designed to support research in both seeking and updating with live visual knowledge. Drawing from recent news articles, video platforms, and academic publications in April 2024-May 2025, LiveVQA enables evaluation of how models handle the latest visual information beyond their knowledge boundaries and how current methods help to update them. Our comprehensive benchmarking of 17 state-of-the-art MLLMs reveals significant performance gaps on content beyond the knowledge cutoff, and tool-use or agentic visual seeking frameworks gain an average improvement of 327%. Furthermore, we explore parameter-efficient fine-tuning methods to update MLLMs with new visual knowledge. We dive deeply into the critical balance between adapter capacity and model capability when updating MLLMs with new visual knowledge. All the experimental data and source code are publicly available at: https://livevqa.github.io.", "arxiv_id": "2504.05288v2", "arxiv_authors": ["Mingyang Fu", "Yuyang Peng", "Dongping Chen", "Zetong Zhou", "Benlin Liu", "Yao Wan", "Zhou Zhao", "Philip S. Yu", "Ranjay Krishna"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a428"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3985841, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a897"}, "filepath": "data/2505.20641v3.png", "tags": [], "_media_type": "image", "_rand": 0.9998855057198085, "type": "Poster", "name": "See through the Dark: Learning Illumination-affined Representations for Nighttime Occupancy Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120132", "abstract": "Occupancy prediction aims to estimate the 3D spatial distribution of occupied regions along with their corresponding semantic labels.
Existing vision-based methods perform well on daytime benchmarks but struggle in nighttime scenarios due to limited visibility and challenging lighting conditions. To address these challenges, we propose LIAR, a novel framework that learns illumination-affined representations. LIAR first introduces Selective Low-light Image Enhancement (SLLIE), which leverages the illumination priors from daytime scenes to adaptively determine whether a nighttime image is genuinely dark or sufficiently well-lit, enabling more targeted global enhancement. Building on the illumination maps generated by SLLIE, LIAR further incorporates two illumination-aware components: 2D Illumination-guided Sampling (2D-IGS) and 3D Illumination-driven Projection (3D-IDP), to respectively tackle local underexposure and overexposure. Specifically, 2D-IGS modulates feature sampling positions according to illumination maps, assigning larger offsets to darker regions and smaller ones to brighter regions, thereby alleviating feature degradation in underexposed areas. Subsequently, 3D-IDP enhances semantic understanding in overexposed regions by constructing illumination intensity fields and supplying refined residual queries to the BEV context refinement process. Extensive experiments on both real and synthetic datasets demonstrate the superior performance of LIAR under challenging nighttime scenarios. The source code and pretrained models are available.", "arxiv_id": "2505.20641v3", "arxiv_authors": ["Yuan Wu", "Zhiqiang Yan", "Yigong Zhang", "Xiang Li", "Jian Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a429"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1068032, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a898"}, "filepath": "data/2509.16087v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990962866208479, "type": "Poster", "name": "See&Trek: Training-Free Spatial Prompting for Multimodal Large Language Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120121", "abstract": "We introduce See&Trek, the first training-free prompting framework tailored to enhance the spatial understanding of Multimodal Large Language Models (MLLMs) under vision-only constraints. While prior efforts have incorporated modalities like depth or point clouds to improve spatial reasoning, purely visual-spatial understanding remains underexplored. See&Trek addresses this gap by focusing on two core principles: increasing visual diversity and motion reconstruction. For visual diversity, we conduct Maximum Semantic Richness Sampling, which employs an off-the-shelf perception model to extract semantically rich keyframes that capture scene structure. For motion reconstruction, we simulate visual trajectories and encode relative spatial positions into keyframes to preserve both spatial relations and temporal coherence. Our method is training- and GPU-free, requiring only a single forward pass, and can be seamlessly integrated into existing MLLMs.
Extensive experiments on the VSI-Bench and STI-Bench show that See&Trek consistently boosts the performance of various MLLMs across diverse spatial reasoning tasks, with improvements of up to +3.5%, offering a promising path toward stronger spatial intelligence.", "arxiv_id": "2509.16087v1", "arxiv_authors": ["Pengteng Li", "Pinhao Song", "Wuyang Li", "Weiyu Guo", "Huizai Yao", "Yijie Xu", "Dugang Liu", "Hui Xiong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a42a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1067641, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a899"}, "filepath": "data/2506.00596v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992199262519366, "type": "Poster", "name": "Seg2Any: Open-set Segmentation Mask-to-Image Generation with Precise Shape and Semantic Control", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115875", "abstract": "Despite recent advances in diffusion models, top-tier text-to-image (T2I) models still struggle to achieve precise spatial layout control, *i.e.* accurately generating entities with specified attributes and locations. Segmentation mask-to-image (S2I) generation has emerged as a promising solution by incorporating pixel-level spatial guidance and regional text prompts. However, existing S2I methods fail to simultaneously ensure semantic consistency and shape consistency. To address these challenges, we propose Seg2Any, a novel S2I framework built upon advanced multimodal diffusion transformers (*e.g.* FLUX). First, to achieve both semantic and shape consistency, we decouple segmentation mask conditions into regional semantic and high-frequency shape components. The regional semantic condition is introduced by a Semantic Alignment Attention Mask, ensuring that generated entities adhere to their assigned text prompts. The high-frequency shape condition, representing entity boundaries, is encoded as an Entity Contour Map and then introduced as an additional modality via multi-modal attention to guide image spatial structure.
Second, to prevent attribute leakage across entities in multi-entity scenarios, we introduce an Attribute Isolation Attention Mask mechanism, which constrains each entity\u2019s image tokens to attend exclusively to themselves during image self-attention. To support open-set S2I generation, we construct SACap-1M, a large-scale dataset containing 1 million images with 5.9 million segmented entities and detailed regional captions, along with a SACap-Eval benchmark for comprehensive S2I evaluation. Extensive experiments demonstrate that Seg2Any achieves state-of-the-art performance on both open-set and closed-set S2I benchmarks, particularly in fine-grained spatial and attribute control of entities.", "arxiv_id": "2506.00596v2", "arxiv_authors": ["Danfeng li", "Hui Zhang", "Sheng Wang", "Jiacheng Li", "Zuxuan Wu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a42b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3130222, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a89a"}, "filepath": "data/2503.22204v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994274325595288, "type": "Poster", "name": "Segment then Splat: Unified 3D Open-Vocabulary Segmentation via Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115127", "abstract": "Open-vocabulary querying in 3D space is crucial for enabling more intelligent perception in applications such as robotics, autonomous systems, and augmented reality. However, most existing methods rely on 2D pixel-level parsing, leading to multi-view inconsistencies and poor 3D object retrieval. Moreover, they are limited to static scenes and struggle with dynamic scenes due to the complexities of motion modeling. In this paper, we propose Segment then Splat, a 3D-aware open vocabulary segmentation approach for both static and dynamic scenes based on Gaussian Splatting. Segment then Splat reverses the long established approach of ``segmentation after reconstruction\" by dividing Gaussians into distinct object sets before reconstruction. Once the reconstruction is complete, the scene is naturally segmented into individual objects, achieving true 3D segmentation. This approach not only eliminates Gaussian-object misalignment issues in dynamic scenes but also accelerates the optimization process, as it eliminates the need for learning a separate language field. After optimization, a CLIP embedding is assigned to each object to enable open-vocabulary querying.
Extensive experiments demonstrate the effectiveness of our proposed method in both static and dynamic scenarios.", "arxiv_id": "2503.22204v2", "arxiv_authors": ["Yiren Lu", "Yunlai Zhou", "Yiran Qiao", "Chaoda Song", "Tuo Liang", "Jing Ma", "Huan Wang", "Yu Yin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a42c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070283, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a89b"}, "filepath": "data/2506.15675v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993773701990334, "type": "Poster", "name": "Sekai: A Video Dataset for World Exploration", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121736", "abstract": "World exploration is an important human activity that forms the foundation of humankind's odyssey, while video generation techniques have made remarkable progress, promising to be the foundation of interactive world exploration. However, existing video generation datasets are not well-suited for world exploration training as they suffer from some limitations: limited duration, static scenes, and lack of exploratory and world annotations. In this paper, we introduce Sekai (\u305b\u304b\u3044, meaning "world'' in Japanese), a high-quality first-person view worldwide video dataset with rich annotations for world exploration. It consists of over 6,000 hours of walking or drone view (FPV and UAV) videos from 65 countries and over 1,000 cities. We develop an efficient toolchain to pre-process videos and an effective toolbox to annotate them. For each video, we annotate location, scene type, weather, crowd density, captions, and camera trajectories. Experiments demonstrate the quality and effectiveness of the dataset. We believe Sekai will benefit the area of video generation and world exploration, and enable valuable applications.
We make an introductory video about the dataset in the supplemental material.", "arxiv_id": "2506.15675v2", "arxiv_authors": ["Zhen Li", "Chuanhao Li", "Xiaofeng Mao", "Shaoheng Lin", "Ming Li", "Shitian Zhao", "Zhaopan Xu", "Xinyue Li", "Yukang Feng", "Jianwen Sun", "Zizhen Li", "Fanrui Zhang", "Jiaxin Ai", "Zhixiang Wang", "Yuwei Wu", "Tong He", "Jiangmiao Pang", "Yu Qiao", "Yunde Jia", "Kaipeng Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a42d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.646Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 984207, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a89c"}, "filepath": "data/2506.11151v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998238352425394, "type": "Poster", "name": "Self-Calibrating BCIs: Ranking and Recovery of Mental Targets Without Labels", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117805", "abstract": "We consider the problem of recovering a mental target (e.g., an image of a face) that a participant has in mind from paired EEG (i.e., brain responses) and image (i.e., perceived faces) data collected during interactive sessions without access to labeled information. The problem has been previously explored with labeled data but not via self-calibration, where labeled data is unavailable. Here, we present the first framework and an algorithm, CURSOR, that learns to recover unknown mental targets without access to labeled data or pre-trained decoders. Our experiments on naturalistic images of faces demonstrate that CURSOR can (1) predict image similarity scores that correlate with human perceptual judgments without any label information, (2) use these scores to rank stimuli against an unknown mental target, and (3) generate new stimuli indistinguishable from the unknown mental target (validated via a user study, N=53). We release the brain response data set (N=29), associated face images used as stimuli data, and a codebase to initiate further research on this novel task.", "arxiv_id": "2506.11151v1", "arxiv_authors": ["Jonathan Grizou", "Carlos de la Torre-Ortiz", "Tuukka Ruotsalo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a42e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 952110, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a89d"}, "filepath": "data/2506.08009v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990787682521872, "type": "Poster", "name": "Self Forcing: Bridging Training and Inference in Autoregressive Video Diffusion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116208", "abstract": "We introduce Self Forcing, a novel training paradigm for autoregressive video diffusion models. It addresses the longstanding issue of exposure bias\u2014where models trained on ground-truth context must generate sequences conditioned on their own imperfect outputs during inference. 
Unlike prior methods that denoise future frames based on ground-truth context frames, Self Forcing conditions each frame\u2019s generation on previously self-generated outputs by performing autoregressive rollout with key-value (KV) caching during training. This strategy enables supervision through a holistic loss at the video level that directly evaluates the quality of the entire generated sequence, rather than relying solely on traditional frame-wise objectives. To ensure training efficiency, we employ a few-step diffusion model along with a stochastic gradient truncation strategy, effectively balancing computational cost and performance. We further introduce a rolling KV cache mechanism that enables efficient autoregressive video extrapolation. Extensive experiments demonstrate that our approach achieves real-time streaming video generation with sub-second latency on a single GPU, while matching or even surpassing the generation quality of significantly slower and non-causal diffusion models.", "arxiv_id": "2506.08009v1", "arxiv_authors": ["Xun Huang", "Zhengqi Li", "Guande He", "Mingyuan Zhou", "Eli Shechtman"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a42f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1043183, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a89e"}, "filepath": "data/2506.11777v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990197287712457, "type": "Poster", "name": "Self-supervised Learning of Echocardiographic Video Representations via Online Cluster Distillation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120161", "abstract": "Self-supervised learning (SSL) has achieved major advances in natural images and video understanding, but challenges remain in domains like echocardiography (heart ultrasound) due to subtle anatomical structures, complex temporal dynamics, and the current lack of domain-specific pre-trained models. Existing SSL approaches such as contrastive, masked modeling, and clustering-based methods struggle with high intersample similarity, sensitivity to low PSNR inputs common in ultrasound, or aggressive augmentations that distort clinically relevant features. We present DISCOVR (Distilled Image Supervision for Cross Modal Video Representation), a self-supervised dual branch framework for cardiac ultrasound video representation learning. DISCOVR combines a clustering-based video encoder that models temporal dynamics with an online image encoder that extracts fine-grained spatial semantics. These branches are connected through a semantic cluster distillation loss that transfers anatomical knowledge from the evolving image encoder to the video encoder, enabling temporally coherent representations enriched with fine-grained semantic understanding. Evaluated on six echocardiography datasets spanning fetal, pediatric, and adult populations, DISCOVR outperforms both specialized video anomaly detection methods and state-of-the-art video-SSL baselines in zero-shot and linear probing setups, and achieves superior segmentation transfer.", "arxiv_id": "2506.11777v1", "arxiv_authors": ["Divyanshu Mishra", "Mohammadreza Salehi", "Pramit Saha", "Olga Patey", "Aris T. Papageorghiou", "Yuki M. Asano", "J. 
Alison Noble"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a430"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1061622, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a89f"}, "filepath": "data/2503.19953v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996347998001675, "type": "Poster", "name": "Self-Supervised Learning of Motion Concepts by Optimizing Counterfactuals", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116852", "abstract": "Estimating motion primitives from video (e.g. optical flow and occlusion) is a critically-important computer vision problem with many downstream applications, including in controllable video generation and robotics. Current solutions are primarily supervised on synthetic data or require tuning of situation-specific heuristics, which inherently limits these models' capabilities in real-world contexts. A natural solution to transcend these limitations would be to deploy large-scale self-supervised video models, which can be scalably trained on unrestricted real-world video datasets. However, despite recent progress, motion-primitive extraction from large pretrained video models remains relatively underexplored. In this work, we describe Opt-CWM, a self-supervised flow and occlusion estimation technique from a pretrained video prediction model. Opt-CWM uses ``counterfactual probes'' to extract motion information from a base video model in a zero-shot fashion. The key problem we solve is optimal probe generation, using a combination of an efficient parameterization of the space counterfactual probes, together with a novel generic sparse-prediction principle for learning the probe-generation parameters in a self-supervised fashion. Opt-CWM achieves state-of-the-art performance for motion estimation on real-world videos while requiring no labeled data.", "arxiv_id": "2503.19953v1", "arxiv_authors": ["Stefan Stojanov", "David Wendt", "Seungwoo Kim", "Rahul Venkatesh", "Kevin Feigelis", "Jiajun Wu", "Daniel LK Yamins"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a431"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1621107, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a0"}, "filepath": "data/2510.12114v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995192253092329, "type": "Poster", "name": "Self-Supervised Selective-Guided Diffusion Model for Old-Photo Face Restoration", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115678", "abstract": "Old-photo face restoration poses significant challenges due to compounded degradations such as breakage, fading, and severe blur. Existing pre-trained diffusion-guided methods either rely on explicit degradation priors or global statistical guidance, which struggle with localized artifacts or face color. 
We propose Self-Supervised Selective-Guided Diffusion (SSDiff), a framework that leverages pseudo-reference faces generated by a pre-trained diffusion model under weak guidance. These pseudo-labels exhibit structurally aligned contours and natural colors, enabling region-specific restoration via staged supervision: structural guidance applied throughout the denoising process and color refinement in later steps, aligned with the coarse-to-fine nature of diffusion. By incorporating face parsing maps and scratch masks, our method selectively restores breakage regions while avoiding identity mismatch. We further construct VintageFace, a 300-image benchmark of real old face photos with varying degradation levels. SSDiff outperforms existing GAN-based and diffusion-based methods in perceptual quality, fidelity, and regional controllability. Code and data will be released upon acceptance.", "arxiv_id": "2510.12114v1", "arxiv_authors": ["Wenjie Li", "Xiangyi Wang", "Heng Guo", "Guangwei Gao", "Zhanyu Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a432"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1076646, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a1"}, "filepath": "data/2509.17847v2.png", "tags": [], "_media_type": "image", "_rand": 0.999812674119747, "type": "Poster", "name": "Semantic and Visual Crop-Guided Diffusion Models for Heterogeneous Tissue Synthesis in Histopathology", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115151", "abstract": "Synthetic data generation in histopathology faces unique challenges: preserving tissue heterogeneity, capturing subtle morphological features, and scaling to unannotated datasets. We present a latent diffusion model that generates realistic heterogeneous histopathology images through a novel dual-conditioning approach combining semantic segmentation maps with tissue-specific visual crops. Unlike existing methods that rely on text prompts or abstract visual embeddings, our approach preserves critical morphological details by directly incorporating raw tissue crops from corresponding semantic regions. For annotated datasets (i.e., Camelyon16, Panda), we extract patches ensuring 20-80\\% tissue heterogeneity. For unannotated data (i.e., TCGA), we introduce a self-supervised extension that clusters whole-slide images into 100 tissue types using foundation model embeddings, automatically generating pseudo-semantic maps for training. Our method synthesizes high-fidelity images with precise region-wise annotations, achieving superior performance on downstream segmentation tasks. When evaluated on annotated datasets, models trained on our synthetic data show competitive performance to those trained on real data, demonstrating the utility of controlled heterogeneous tissue generation. In quantitative evaluation, prompt\u2010guided synthesis reduces Fr\u00e9chet Distance by up to 6\u00d7 on Camelyon16 (from 430.1 to 72.0) and yields 2\u20133\u00d7 lower FD across Panda and TCGA. Downstream DeepLabv3+ models trained solely on synthetic data attain test IoU of 0.71 and 0.95 on Camelyon16 and Panda, within 1\u20132% of real\u2010data baselines (0.72 and 0.96). 
By scaling to 11,765 TCGA whole\u2010slide images without manual annotations, our framework offers a practical solution for an urgent need for generating diverse, annotated histopathology data, addressing a critical bottleneck in computational pathology.", "arxiv_id": "2509.17847v2", "arxiv_authors": ["Saghir Alfasly", "Wataru Uegami", "MD Enamul Hoq", "Ghazal Alabtah", "H. R. Tizhoosh"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a433"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1103365, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a2"}, "filepath": "data/2510.22851v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992142608016208, "type": "Poster", "name": "Semantic Surgery: Zero-Shot Concept Erasure in Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120054", "abstract": "With the growing power of text-to-image diffusion models, their potential to generate harmful or biased content has become a pressing concern, motivating the development of concept erasure techniques. Existing approaches, whether relying on retraining or not, frequently compromise the generative capabilities of the target model in achieving concept erasure, often struggling with a critical trade-off between erasure completeness and the preservation of unrelated content (locality). Here, we introduce Semantic Surgery, a novel training-free framework for zero-shot concept erasure. Semantic Surgery directly operates on text embeddings before the diffusion process, aiming to neutralize undesired concepts at their semantic origin and thereby enhance both erasure completeness and the locality of generation by modifying the global semantic input to the diffusion model. Specifically, Semantic Surgery dynamically estimates the presence and intensity of target concepts within an input prompt's global semantics, based on which it performs a calibrated, scaled vector subtraction from the entire text embedding to neutralize their influence at the source. The overall framework consists of a Co-Occurrence Encoding module for robust multi-concept erasure by considering their joint semantic signatures, and an optional visual feedback loop that refines the textual embedding to address Latent Concept Persistence, thereby reinforcing erasure throughout the subsequent denoising process. Our proposed Semantic Surgery requires no model retraining and adapts dynamically to the specific concepts and their intensity detected in each input prompt, ensuring precise and context-aware interventions. Extensive experiments are conducted on object, explicit content, artistic style, and multi-celebrity erasure tasks, demonstrating that our method significantly outperforms state-of-the-art approaches. 
That is, our proposed concept erasure framework achieves superior completeness and robustness while preserving locality and general image quality, thereby offering an effective and practical solution in text-to-image generation.", "arxiv_id": "2510.22851v1", "arxiv_authors": ["Lexiang Xiong", "Chengyu Liu", "Jingwen Ye", "Yan Liu", "Yuecong Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a434"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1157374, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a3"}, "filepath": "data/2502.06734v3.png", "tags": [], "_media_type": "image", "_rand": 0.9997639742199721, "type": "Poster", "name": "Se\u00f1orita-2M: A High-Quality Instruction-based Dataset for General Video Editing by Video Specialists", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121657", "abstract": "Video content editing has a wide range of applications. With the advancement of diffusion-based generative models, video editing techniques have made remarkable progress, yet they still remain far from practical usability. Existing inversion-based video editing methods are time-consuming and struggle to maintain consistency in unedited regions. Although instruction-based methods have high theoretical potential, they face significant challenges in constructing high-quality training datasets - current datasets suffer from issues such as editing correctness, frame consistency, and sample diversity. To bridge these gaps, we introduce the **Se\u00f1orita-2M** dataset, a large-scale, diverse, and high-quality video editing dataset. We systematically categorize editing tasks into 2 classes consisting of 18 subcategories. To build this dataset, we design four new task specialists and employ or modify 14 existing task experts to generate data samples for each subclass. In addition, we design a filtering pipeline at both the visual content and instruction levels to further enhance data quality. This approach ensures the reliability of constructed data. Finally, the Se\u00f1orita-2M dataset comprises 2 million high-fidelity samples with diverse resolutions and frame counts. We trained multiple models using different base video models, i.e., Wan2.1 and CogVideoX-5B, on Se\u00f1orita-2M, and the results demonstrate that the models exhibit superior visual quality, robust frame-to-frame consistency, and strong alignment with text instructions. 
More videos are available at: *https://anonymous-senorita-2m.github.io*.", "arxiv_id": "2502.06734v3", "arxiv_authors": ["Bojia Zi", "Penghui Ruan", "Marco Chen", "Xianbiao Qi", "Shaozhe Hao", "Shihao Zhao", "Youze Huang", "Bin Liang", "Rong Xiao", "Kam-Fai Wong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a435"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1756070, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a4"}, "filepath": "data/2505.03176v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991376224558768, "type": "Poster", "name": "seq-JEPA: Autoregressive Predictive Learning of Invariant-Equivariant World Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118974", "abstract": "Current self-supervised algorithms commonly rely on transformations such as data augmentation and masking to learn visual representations. This is achieved by enforcing invariance or equivariance with respect to these transformations after encoding two views of an image. This dominant two-view paradigm often limits the flexibility of learned representations for downstream adaptation by creating performance trade-offs between high-level invariance-demanding tasks such as image classification and more fine-grained equivariance-related tasks. In this work, we propose \emph{seq-JEPA}, a world modeling framework that introduces architectural inductive biases into joint-embedding predictive architectures to resolve this trade-off. Without relying on dual equivariance predictors or loss terms, seq-JEPA simultaneously learns two architecturally segregated representations: one equivariant to specified transformations and another invariant to them. To do so, our model processes short sequences of different views (observations) of inputs. Each encoded view is concatenated with an embedding of the relative transformation (action) that produces the next observation in the sequence. These view-action pairs are passed through a transformer encoder that outputs an aggregate representation. A predictor head then conditions this aggregate representation on the upcoming action to predict the representation of the next observation. Empirically, seq-JEPA demonstrates strong performance on both equivariant and invariant benchmarks without sacrificing one for the other. 
Furthermore, it excels at tasks that inherently require aggregating a sequence of observations, such as path integration across actions and predictive learning across eye movements.", "arxiv_id": "2505.03176v2", "arxiv_authors": ["Hafez Ghaemi", "Eilif Muller", "Shahab Bakhtiari"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a436"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 993834, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a5"}, "filepath": "data/2507.05077v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999086876767498, "type": "Poster", "name": "Sequential Attention-based Sampling for Histopathological Analysis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115302", "abstract": "Deep neural networks are increasingly applied for automated histopathology. Yet, whole-slide images (WSIs) are often acquired at gigapixel sizes, rendering it computationally infeasible to analyze them entirely at high resolution. Diagnostic labels are largely available only at the slide-level, because expert annotation of images at a finer (patch) level is both laborious and expensive. Moreover, regions with diagnostic information typically occupy only a small fraction of the WSI, making it inefficient to examine the entire slide at full resolution. Here, we propose SASHA -- ${\\it S}$equential ${\\it A}$ttention-based ${\\it S}$ampling for ${\\it H}$istopathological ${\\it A}$nalysis -- a deep reinforcement learning approach for efficient analysis of histopathological images. First, SASHA learns informative features with a lightweight hierarchical, attention-based multiple instance learning (MIL) model. Second, SASHA samples intelligently and zooms selectively into a small fraction (10-20\\%) of high-resolution patches, to achieve reliable diagnosis. We show that SASHA matches state-of-the-art methods that analyze the WSI fully at high-resolution, albeit at a fraction of their computational and memory costs. In addition, it significantly outperforms competing, sparse sampling methods. We propose SASHA as an intelligent sampling model for medical imaging challenges that involve automated diagnosis with exceptionally large images containing sparsely informative features.", "arxiv_id": "2507.05077v2", "arxiv_authors": ["Tarun G", "Naman Malpani", "Gugan Thoppe", "Sridharan Devarajan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a437"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1033229, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a6"}, "filepath": "data/2510.17603v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997981641580215, "type": "Poster", "name": "ShapeCraft: LLM Agents for Structured, Textured and Interactive 3D Modeling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115664", "abstract": "Constructing structured 3D shapes is essential for gaming, virtual reality, and embodied AI. 
However, this process typically demands significant expertise and manual effort using conventional 3D modeling software. To make 3D content creation more accessible, we present ShapeCraft, a novel system that leverages large language model (LLM) agents to autonomously generate structured 3D shapes from natural language instructions. At the core of ShapeCraft is a graph-based procedural shape (GPS) representation, whose nodes store corresponding program snippets and layout information, facilitating efficient programmatic updates and flexible structure manipulation. This representation allows for post-modeling execution as structured geometry while preserving editability for artists. In the ShapeCraft workflow, LLM agents first hierarchically parse the user's input and initialize the GPS representation, then iteratively refine the procedural modeling and texturing steps to produce structured, textured, and interactive 3D assets. Our experiments show that ShapeCraft significantly outperforms existing LLM-based agents in generating geometrically accurate and semantically rich 3D models. Moreover, it achieves higher shape fidelity compared to optimization-based text-to-3D generation methods. We further demonstrate the versatility of ShapeCraft through examples of animated and user-customized editing, highlighting its potential for broader interactive applications.", "arxiv_id": "2510.17603v1", "arxiv_authors": ["Shuyuan Zhang", "Chenhan Jiang", "Zuoou Li", "Jiankang Deng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a438"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3182670, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a7"}, "filepath": "data/2507.01009v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991301948192484, "type": "Poster", "name": "ShapeEmbed: a self-supervised learning framework for 2D contour quantification", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116558", "abstract": "The shape of objects is an important source of visual information in a wide range of applications. One of the core challenges of shape quantification is to ensure that the extracted measurements remain invariant to transformations that preserve an object\u2019s intrinsic geometry, such as changing its size, orientation, and position in the image. In this work, we introduce ShapeEmbed, a self-supervised representation learning framework designed to encode the contour of objects in 2D images, represented as a Euclidean distance matrix, into a shape descriptor that is invariant to translation, scaling, rotation, reflection, and point indexing. Our approach overcomes the limitations of traditional shape descriptors while improving upon existing state-of-the-art autoencoder-based approaches. We demonstrate that the descriptors learned by our framework outperform their competitors in shape classification tasks on natural and biological images. 
We envision our approach to be of particular relevance to biological imaging applications.", "arxiv_id": "2507.01009v1", "arxiv_authors": ["Anna Foix Romero", "Craig Russell", "Alexander Krull", "Virginie Uhlmann"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a439"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1013773, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a8"}, "filepath": "data/2506.01853v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998825040429558, "type": "Poster", "name": "ShapeLLM-4o: A Native Multimodal LLM for 3D Generation and Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116179", "abstract": "Recently, the powerful text-to-image capabilities of GPT-4o have led to growing appreciation for native multimodal large language models. However, its multimodal capabilities remain confined to images and text. Yet beyond images, the ability to understand and generate 3D content is equally crucial. To address this gap, we propose ShapeLLM-4o\u2014a native 3D large language model capable of understanding and generating 3D assets and text in any sequence. First, we train a 3D vector-quantized variational autoencoder (VQ-VAE), which maps 3D objects into a discrete latent space to achieve efficient and accurate shape representation and reconstruction. Building upon the 3D-aware discrete tokens, we innovatively construct a large-scale continuous training dataset named 3D-Alpaca, encompassing generation, comprehension, and editing, thus providing rich resources for future research and training. Finally, by performing instruction-based training of the Qwen-2.5-vl-7B-Instruct model on the 3D-Alpaca dataset. Our work provides an effective attempt at extending multimodal models with basic 3D capabilities, which contributes to future research in 3D-native AI.", "arxiv_id": "2506.01853v1", "arxiv_authors": ["Junliang Ye", "Zhengyi Wang", "Ruowen Zhao", "Shenghao Xie", "Jun Zhu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a43a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.647Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1004148, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8a9"}, "filepath": "data/2505.22651v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993359649527396, "type": "Poster", "name": "Sherlock: Self-Correcting Reasoning in Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117278", "abstract": "Reasoning Vision-Language Models (VLMs) have shown promising performance on complex multimodal tasks. However, they still face significant challenges: they are highly sensitive to reasoning errors, require large volumes of annotated data or accurate verifiers, and struggle to generalize beyond specific domains.To address these limitations, we explore self-correction as a strategy to enhance reasoning VLMs. We first conduct an in-depth analysis of reasoning VLMs\u2019 self-correction abilities and identify key gaps. 
Based on our findings, we introduce Sherlock, a self-correction and self-improvement training framework. \\emph{Sherlock} introduces a trajectory-level self-correction objective, a preference data construction method based on visual perturbation, and a dynamic $\\beta$ for preference tuning. Once the model acquires self-correction capabilities using only 20k randomly sampled annotated data, it continues to self-improve without external supervision.Built on the Llama3.2-Vision-11B model, \\emph{Sherlock Iter2} achieves remarkable results across eight benchmarks, reaching an average accuracy of 64.1 with direct generation and 65.4 after self-correction. It outperforms LLaVA-CoT (63.2), Mulberry (60.4), and LlamaV-o1 (63.4) while only using less than 20\\% of the annotated data.", "arxiv_id": "2505.22651v2", "arxiv_authors": ["Yi Ding", "Ruqi Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a43b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1195381, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8aa"}, "filepath": "data/2510.17858v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995165079458536, "type": "Poster", "name": "Shortcutting Pre-trained Flow Matching Diffusion Models is Almost Free Lunch", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115213", "abstract": "We present an ultra-efficient distillation method for shortcutting large-scale pre-trained flow matching diffusion models into efficient few-step samplers, enabled by novel velocity field self-distillation. While shortcutting in flow matching, originally introduced by shortcut models, offers flexible trajectory-skipping capabilities, it requires a specialized step-size embedding incompatible with existing models unless retraining from scratch\u2014a process nearly as costly as pretraining itself.Our key contribution is thus imparting a more aggressive shortcut mechanism to standard flow matching models (e.g., Flux), leveraging a unique distillation principle that obviates the need for step-size embedding. Working on the velocity field rather than sample space and learning rapidly from self-guided distillation in an online manner, our approach trains efficiently, e.g., producing a 3-step Flux <1 A100 day. 
This fast training immediately enables, to our knowledge, the first few-shot distillation method (e.g., 10 text-image pairs) for dozen-billion-parameter diffusion models, delivering state-of-the-art performance at almost free cost.", "arxiv_id": "2510.17858v1", "arxiv_authors": ["Xu Cai", "Yang Wu", "Qianli Chen", "Haoran Wu", "Lichuan Xiang", "Hongkai Wen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a43c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1094158, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ab"}, "filepath": "data/2506.05184v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997343233739088, "type": "Poster", "name": "Single GPU Task Adaptation of Pathology Foundation Models for Whole Slide Image Analysis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116568", "abstract": "Pathology foundation models (PFMs) have emerged as powerful tools for analyzing whole slide images (WSIs). However, adapting these pretrained PFMs for specific clinical tasks presents considerable challenges, primarily due to the availability of only weak (WSI-level) labels for gigapixel images, necessitating multiple instance learning (MIL) paradigm for effective WSI analysis. This paper proposes a novel approach for single-GPU \\textbf{T}ask \\textbf{A}daptation of \\textbf{PFM}s (TAPFM) that uses vision transformer (\\vit) attention for MIL aggregation while optimizing both for feature representations and attention weights. The proposed approach maintains separate computational graphs for MIL aggregator and the PFM to create stable training dynamics that align with downstream task objectives during end-to-end adaptation. Evaluated on mutation prediction tasks for bladder cancer and lung adenocarcinoma across institutional and TCGA cohorts, TAPFM consistently outperforms conventional approaches, with H-Optimus-0 (TAPFM) outperforming the benchmarks. TAPFM effectively handles multi-label classification of actionable mutations as well. Thus, TAPFM makes adaptation of powerful pre-trained PFMs practical on standard hardware for various clinical applications.", "arxiv_id": "2506.05184v1", "arxiv_authors": ["Neeraj Kumar", "Swaraj Nanda", "Siddharth Singi", "Jamal Benhamida", "David Kim", "Jie-Fu Chen", "Amir Momeni-Boroujeni", "Gregory M. 
Goldgof", "Gabriele Campanella", "Chad Vanderbilt"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a43d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 841398, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ac"}, "filepath": "data/2507.07995v1.png", "tags": [], "_media_type": "image", "_rand": 0.99991975459004, "type": "Poster", "name": "Single-pass Adaptive Image Tokenization for Minimum Program Search", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118875", "abstract": "According to Algorithmic Information Theory (AIT), intelligent representations compress data into the shortest possible program while remaining predictive of its content\u2014exhibiting low Kolmogorov Complexity (KC). In contrast, most visual representation learning systems assign fixed-length representations to all inputs, ignoring variations in complexity or familiarity. Recent adaptive tokenization methods address this by allocating variable-length representations but typically require test-time search over multiple hypotheses to identify the most predictive one. Inspired by KC principles, we propose a one-shot adaptive tokenizer, KARL, that predicts the appropriate number of tokens for an image in a single forward pass, halting once its approximate KC is reached. The token count serves as a proxy for the minimum description length. KARL performs comparably to recent adaptive tokenizers while operating in a one-pass manner. Additionally, we present a conceptual study showing a correlation between adaptive tokenization and core ideas from AIT. We demonstrate that adaptive tokenization not only aligns with KC but also reveals empirical signals approximating AIT concepts such as sophistication and logical depth. Finally, we analyze predicted image complexity and interestingness across axes such as structure vs. noise and in-distribution vs. out-of-distribution familiarity, highlighting alignment with human annotations.", "arxiv_id": "2507.07995v1", "arxiv_authors": ["Shivam Duggal", "Sanghyun Byun", "William T. Freeman", "Antonio Torralba", "Phillip Isola"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a43e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1103796, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ad"}, "filepath": "data/2510.22480v1.png", "tags": [], "_media_type": "image", "_rand": 0.999323035798574, "type": "Poster", "name": "Single-Teacher View Augmentation: Boosting Knowledge Distillation via Angular Diversity", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118239", "abstract": "Knowledge Distillation (KD) aims to train a lightweight student model by transferring knowledge from a large, high-capacity teacher.Recent studies have shown that leveraging diverse teacher perspectives can significantly improve distillation performance; however, achieving such diversity typically requires multiple teacher networks, leading to high computational costs. 
In this work, we propose a novel cost-efficient knowledge augmentation method for KD that generates diverse multi-views by attaching multiple branches to a single teacher. To ensure meaningful semantic variation across multi-views, we introduce two angular diversity objectives: 1) $\\textit{constrained inter-angle diversify loss}$, which maximizes angles between augmented views while preserving proximity to the original teacher output, and 2) $\\textit{intra-angle diversify loss}$, which encourages an even distribution of views around the original output. The ensembled knowledge from these angularly diverse views, along with the original teacher, is distilled into the student. We further theoretically demonstrate that our objectives increase the diversity among ensemble members and thereby reduce the upper bound of the ensemble's expected loss, leading to more effective distillation. Experimental results show that our method surpasses an existing knowledge augmentation method across diverse configurations. Moreover, the proposed method is compatible with other KD frameworks in a plug-and-play fashion, providing consistent improvements in generalization performance.", "arxiv_id": "2510.22480v1", "arxiv_authors": ["Seonghoon Yu", "Dongjun Nam", "Dina Katabi", "Jeany Son"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a43f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1104944, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ae"}, "filepath": "data/2509.21927v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996277185736123, "type": "Poster", "name": "SingRef6D: Monocular Novel Object Pose Estimation with a Single RGB Reference", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115633", "abstract": "Recent 6D pose estimation methods demonstrate notable performance but still face some practical limitations. For instance, many of them rely heavily on sensor depth, which may fail with challenging surface conditions, such as transparent or highly reflective materials. In the meantime, RGB-based solutions provide less robust matching performance in low-light and texture-less scenes due to the lack of geometry information. Motivated by these, we propose **SingRef6D**, a lightweight pipeline requiring only a **single RGB** image as a reference, eliminating the need for costly depth sensors, multi-view image acquisition, or training view synthesis models and neural fields. This enables SingRef6D to remain robust and capable even under resource-limited settings where depth or dense templates are unavailable. Our framework incorporates two key innovations. First, we propose a token-scaler-based fine-tuning mechanism with a novel optimization loss on top of Depth-Anything v2 to enhance its ability to predict accurate depth, even for challenging surfaces. Our results show a 14.41% improvement (in $\\delta_{1.05}$) on REAL275 depth prediction compared to Depth-Anything v2 (with fine-tuned head). Second, benefiting from depth availability, we introduce a depth-aware matching process that effectively integrates spatial relationships within LoFTR, enabling our system to handle matching for challenging materials and lighting conditions. 
Evaluations of pose estimation on the REAL275, ClearPose, and Toyota-Light datasets show that our approach surpasses state-of-the-art methods, achieving a 6.1% improvement in average recall.", "arxiv_id": "2509.21927v1", "arxiv_authors": ["Jiahui Wang", "Haiyue Zhu", "Haoren Guo", "Abdullah Al Mamun", "Cheng Xiang", "Tong Heng Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a440"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1068888, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8af"}, "filepath": "data/2510.11509v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995324848961608, "type": "Poster", "name": "Situat3DChange: Situated 3D Change Understanding Dataset for Multimodal Large Language Model", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121602", "abstract": "Physical environments and circumstances are fundamentally dynamic, yet current 3D datasets and evaluation benchmarks tend to concentrate on either dynamic scenarios or dynamic situations in isolation, resulting in incomplete comprehension. To overcome these constraints, we introduce Situat3DChange, an extensive dataset supporting three situation-aware change understanding tasks following the perception-action model: 121K question-answer pairs, 36K change descriptions for perception tasks, and 17K rearrangement instructions for the action task. To construct this large-scale dataset, Situat3DChange leverages 11K human observations of environmental changes to establish shared mental models and shared situational awareness for human-AI collaboration. These observations, enriched with egocentric and allocentric perspectives as well as categorical and coordinate spatial relations, are integrated using an LLM to support understanding of situated changes. To address the challenge of comparing pairs of point clouds from the same scene with minor changes, we propose SCReasoner, an efficient 3D MLLM approach that enables effective point cloud comparison with minimal parameter overhead and no additional tokens required for the language decoder. Comprehensive evaluation on Situat3DChange tasks highlights both the progress and limitations of MLLMs in dynamic scene and situation understanding. Additional experiments on data scaling and cross-domain transfer demonstrate the task-agnostic effectiveness of using Situat3DChange as a training dataset for MLLMs. 
The established dataset and source code are publicly available at: https://github.com/RuipingL/Situat3DChange.", "arxiv_id": "2510.11509v1", "arxiv_authors": ["Ruiping Liu", "Junwei Zheng", "Yufan Chen", "Zirui Wang", "Kunyu Peng", "Kailun Yang", "Jiaming Zhang", "Marc Pollefeys", "Rainer Stiefelhagen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a441"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 967359, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b0"}, "filepath": "data/2507.02705v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998621505055987, "type": "Poster", "name": "SIU3R: Simultaneous Scene Understanding and 3D Reconstruction Beyond Feature Alignment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118929", "abstract": "Simultaneous understanding and 3D reconstruction plays an important role in developing end-to-end embodied intelligent systems.To achieve this, recent approaches resort to 2D-to-3D feature alignment paradigm, which leads to limited 3D understanding capability and potential semantic information loss.In light of this, we propose SIU3R, the first alignment-free framework for generalizable simultaneous understanding and 3D reconstruction from unposed images.Specifically, SIU3R bridges reconstruction and understanding tasks via pixel-aligned 3D representation, and unifies multiple understanding tasks into a set of unified learnable queries, enabling native 3D understanding without the need of alignment with 2D models.To encourage collaboration between the two tasks with shared representation, we further conduct in-depth analyses of their mutual benefits, and propose two lightweight modules to facilitate their interaction.Extensive experiments demonstrate that our method achieves state-of-the-art performance not only on the individual tasks of 3D reconstruction and understanding, but also on the task of simultaneous understanding and 3D reconstruction, highlighting the advantages of our alignment-free framework and the effectiveness of the mutual benefit designs.", "arxiv_id": "2507.02705v2", "arxiv_authors": ["Qi Xu", "Dongxu Wei", "Lingzhe Zhao", "Wenpu Li", "Zhangchi Huang", "Shunping Ji", "Peidong Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a442"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1810571, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b1"}, "filepath": "data/2502.13143v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996492463298129, "type": "Poster", "name": "SoFar: Language-Grounded Orientation Bridges Spatial Reasoning and Object Manipulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116350", "abstract": "While spatial reasoning has made progress in object localization relationships, it often overlooks object orientation\u2014 a key factor in 6-DoF fine-grained manipulation. 
Traditional pose representations rely on pre-defined frames or templates, limiting generalization and semantic grounding. In this paper, we introduce the concept of semantic orientation, which defines object orientations using natural language in a reference-frame-free manner (e.g., the \"plug-in\" direction of a USB or the \"handle\" direction of a cup). To support this, we construct OrienText300K, a large-scale dataset of 3D objects annotated with semantic orientations, and develop PointSO, a general model for zero-shot semantic orientation prediction. By integrating semantic orientation into VLM agents, our SoFar framework enables 6-DoF spatial reasoning and generates robotic actions. Extensive experiments demonstrated the effectiveness and generalization of our SoFar, e.g., a zero-shot 48.7% success rate on OpenDOR and a 58.3% success rate on the SIMPLER WidowX setting.", "arxiv_id": "2502.13143v2", "arxiv_authors": ["Zekun Qi", "Wenyao Zhang", "Yufei Ding", "Runpei Dong", "Xinqiang Yu", "Jingwen Li", "Lingyun Xu", "Baoyu Li", "Xialin He", "Guofan Fan", "Jiazhao Zhang", "Jiawei He", "Jiayuan Gu", "Xin Jin", "Kaisheng Ma", "Zhizheng Zhang", "He Wang", "Li Yi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a443"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3817733, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b2"}, "filepath": "data/2506.02680v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991061484248491, "type": "Poster", "name": "Solving Inverse Problems with FLAIR", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115349", "abstract": "Flow-based latent generative models such as Stable Diffusion 3 are able to generate images with remarkable quality, even enabling photorealistic text-to-image generation. Their impressive performance suggests that these models should also constitute powerful priors for inverse imaging problems, but that approach has not yet led to comparable fidelity.There are several key obstacles: (i) the encoding into a lower-dimensional latent space makes the underlying (forward) mapping non-linear; (ii) the data likelihood term is usually intractable; and (iii) learned generative models struggle to recover rare, atypical data modes during inference.We present FLAIR, a novel training free variational framework that leverages flow-based generative models as a prior for inverse problems. To that end, we introduce a variational objective for flow matching that is agnostic to the type of degradation, and combine it with deterministic trajectory adjustments to recover atypical modes. To enforce exact consistency with the observed data, we decouple the optimization of the data fidelity and regularization terms. Moreover, we introduce a time-dependent calibration scheme in which the strength of the regularization is modulated according to off-line accuracy estimates. 
Results on standard imaging benchmarks demonstrate that FLAIR consistently outperforms existing diffusion- and flow-based methods in terms of reconstruction quality and sample diversity.", "arxiv_id": "2506.02680v2", "arxiv_authors": ["Julius Erbach", "Dominik Narnhofer", "Andreas Dombos", "Bernt Schiele", "Jan Eric Lenssen", "Konrad Schindler"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a444"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1016170, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b3"}, "filepath": "data/2507.01152v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997352723009075, "type": "Poster", "name": "SonoGym: High Performance Simulation for Challenging Surgical Tasks with Robotic Ultrasound", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121645", "abstract": "Ultrasound (US) is a widely used medical imaging modality due to its real-time capabilities, non-invasive nature, and cost-effectiveness. By reducing operator dependency and enhancing access to complex anatomical regions, robotic ultrasound can help improve workflow efficiency. Recent studies have demonstrated the potential of deep reinforcement learning (DRL) and imitation learning (IL) to enable more autonomous and intelligent robotic ultrasound navigation. However, the application of learning-based robotic ultrasound to computer-assisted surgical tasks, such as anatomy reconstruction and surgical guidance, remains largely unexplored. A key bottleneck for this is the lack of realistic and efficient simulation environments tailored to these tasks. In this work, we present SonoGym, a scalable simulation platform for robotic ultrasound, enabling parallel simulation across tens to hundreds of environments. Our framework supports realistic and real-time simulation of US data from CT-derived 3D models of the anatomy through both a physics-based and a Generative Adversarial Network (GAN) approach. Our framework enables the training of DRL and recent IL agents (vision transformers and diffusion policies) for relevant tasks in robotic orthopedic surgery by integrating common robotic platforms and orthopedic end effectors. We further incorporate submodular DRL---a recent method that handles history-dependent rewards---for anatomy reconstruction and safe reinforcement learning for surgery. Our results demonstrate successful policy learning across a range of scenarios, while also highlighting the limitations of current methods in clinically relevant environments. We believe our simulation can facilitate research in robot learning approaches for such challenging robotic surgery applications. 
Dataset, codes and videos are publicly available at https://sonogym.github.io/.", "arxiv_id": "2507.01152v1", "arxiv_authors": ["Yunke Ao", "Masoud Moghani", "Mayank Mittal", "Manish Prajapat", "Luohong Wu", "Frederic Giraud", "Fabio Carrillo", "Andreas Krause", "Philipp F\u00fcrnstahl"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a445"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 896415, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b4"}, "filepath": "data/2412.05095v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992819041720423, "type": "Poster", "name": "SoPo: Text-to-Motion Generation Using Semi-Online Preference Optimization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118773", "abstract": "Text-to-motion generation is essential for advancing the creative industry but often presents challenges in producing consistent, realistic motions. To address this, we focus on fine-tuning text-to-motion models to consistently favor high-quality, human-preferred motions\u2014a critical yet largely unexplored problem. In this work, we theoretically investigate the DPO under both online and offline settings, and reveal their respective limitation: overfitting in offline DPO, and biased sampling in online DPO. Building on our theoretical insights, we introduce Semi-online Preference Optimization (SoPo), a DPO-based method for training text-to-motion models using ``semi-online\u201d data pair, consisting of unpreferred motion from online distribution and preferred motion in offline datasets. This method leverages both online and offline DPO, allowing each to compensate for the other\u2019s limitations. Extensive experiments demonstrate that SoPo outperforms other preference alignment methods, with an MM-Dist of 3.25\\% (vs e.g. 0.76\\% of MoDiPO) on the MLD model, 2.91\\% (vs e.g. 0.66\\% of MoDiPO) on MDM model, respectively. Additionally, the MLD model fine-tuned by our SoPo surpasses the SoTA model in terms of R-precision and MM Dist. Visualization results also show the efficacy of our SoPo in preference alignment. 
Code will be released publicly.", "arxiv_id": "2412.05095v3", "arxiv_authors": ["Xiaofeng Tan", "Hongsong Wang", "Xin Geng", "Pan Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a446"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1064481, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b5"}, "filepath": "data/2504.07934v3.png", "tags": [], "_media_type": "image", "_rand": 0.9997753881058503, "type": "Poster", "name": "SoTA with Less: MCTS-Guided Sample Selection for Data-Efficient Visual Reasoning Self-Improvement", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118230", "abstract": "We introduce ThinkLite-VL, a family of visual reasoning models that achieve state-of-the-art (SoTA) performance using an order of magnitude fewer training samples, relying purely on reinforcement fine-tuning (RFT) self-improvement without any knowledge distillation. Our central insight is that sample difficulty critically influences RFT effectiveness: appropriately challenging examples can drive substantial reasoning improvements, even in low-data regimes. However, quantifying sample difficulty in a reliable and scalable manner remains non-trivial. To address this, we repurpose Monte Carlo Tree Search (MCTS) to measure sample difficulty via the number of reasoning iterations a vision-language model (VLM) requires to solve each instance. This MCTS-based selection procedure identifies samples that induce deeper reasoning while remaining solvable, allowing us to filter a high-quality subset from 70k open-source examples spanning math, natural image understanding, and chart comprehension. Using this approach, we select just 11k challenging samples for RFT on Qwen2.5-VL-7B-Instruct and 7.5k samples for Qwen2.5-VL-72B-Instruct. The resulting models, ThinkLite-VL-7B and ThinkLite-VL-72B, significantly outperform their respective base models across eight visual reasoning benchmarks. In particular, ThinkLite-VL-7B improves the average performance of Qwen2.5-VL-7B-Instruct by 7\\% and surpasses all existing 7B-level models, as well as much larger models such as GPT-4o, O1 and Qwen2.5-VL-72B, achieving a new SoTA score of 75.1 on MathVista. ThinkLite-VL-72B further advances the SoTA frontier, achieving an accuracy of 79.7 on MathVista and an average benchmark improvement of 4.42 over the open-source SOTA. 
These results demonstrate that MCTS-guided difficulty filtering provides a scalable and effective path toward data-efficient self-improvement in multimodal reasoning.", "arxiv_id": "2504.07934v3", "arxiv_authors": ["Xiyao Wang", "Zhengyuan Yang", "Chao Feng", "Hongjin Lu", "Linjie Li", "Chung-Ching Lin", "Kevin Lin", "Furong Huang", "Lijuan Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a447"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.648Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1221706, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b6"}, "filepath": "data/2505.14521v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991639736606099, "type": "Poster", "name": "SparC: Sparse Representation and Construction for High-Resolution 3D Shapes Modeling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115101", "abstract": "High-fidelity 3D object synthesis remains significantly more challenging than 2D image generation due to the unstructured nature of mesh data and the cubic complexity of dense volumetric grids. Existing two-stage pipelines\u2014compressing meshes with a VAE (using either 2D or 3D supervision), followed by latent diffusion sampling\u2014often suffer from severe detail loss caused by inefficient representations and modality mismatches introduced in VAE. We introduce **SparC**, a unified framework that combines a sparse deformable marching cubes representation **SparseCubes** with a novel encoder **SparConv-VAE**. SparseCubes converts raw meshes into high-resolution ($1024^3$) surfaces with arbitrary topology by scattering signed distance and deformation fields onto a sparse cube, allowing differentiable optimization. SparConv-VAE is the first modality-consistent variational autoencoder built entirely upon sparse convolutional networks, enabling efficient and near-lossless 3D reconstruction suitable for high-resolution generative modeling through latent diffusion. SparC achieves state-of-the-art reconstruction fidelity on challenging inputs, including open surfaces, disconnected components, and intricate geometry. 
It preserves fine-grained shape details, reduces training and inference cost, and integrates naturally with latent diffusion models for scalable, high-resolution 3D generation.", "arxiv_id": "2505.14521v3", "arxiv_authors": ["Zhihao Li", "Yufei Wang", "Heliang Zheng", "Yihao Luo", "Bihan Wen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a448"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3226139, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b7"}, "filepath": "data/2504.02821v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995660939803171, "type": "Poster", "name": "Sparse Autoencoders Learn Monosemantic Features in Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119210", "abstract": "Given that interpretability and steerability are crucial to AI safety, Sparse Autoencoders (SAEs) have emerged as a tool to enhance them in Large Language Models (LLMs). In this work, we extend the application of SAEs to Vision-Language Models (VLMs), such as CLIP, and introduce a comprehensive framework for evaluating monosemanticity at the neuron-level in vision representations. To ensure that our evaluation aligns with human perception, we propose a benchmark derived from a large-scale user study. Our experimental results reveal that SAEs trained on VLMs significantly enhance the monosemanticity of individual neurons, with sparsity and wide latents being the most influential factors. Notably, we demonstrate that applying SAE interventions on CLIP's vision encoder directly steers multimodal LLM outputs (e.g., LLaVA), without any modifications to the underlying model. These findings emphasize the practicality and efficacy of SAEs as an unsupervised tool for enhancing both interpretability and control of VLMs.", "arxiv_id": "2504.02821v2", "arxiv_authors": ["Mateusz Pach", "Shyamgopal Karthik", "Quentin Bouniot", "Serge Belongie", "Zeynep Akata"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a449"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1084748, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b8"}, "filepath": "data/2412.06028v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993996339471287, "type": "Poster", "name": "SparseDiT: Token Sparsification for Efficient Diffusion Transformer", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116457", "abstract": "Diffusion Transformers (DiT) are renowned for their impressive generative performance; however, they are significantly constrained by considerable computational costs due to the quadratic complexity in self-attention and the extensive sampling steps required. While advancements have been made in expediting the sampling process, the underlying architectural inefficiencies within DiT remain underexplored. 
We introduce SparseDiT, a novel framework that implements token sparsification across spatial and temporal dimensions to enhance computational efficiency while preserving generative quality. Spatially, SparseDiT employs a tri-segment architecture that allocates token density based on feature requirements at each layer: Poolingformer in the bottom layers for efficient global feature extraction, Sparse-Dense Token Modules (SDTM) in the middle layers to balance global context with local detail, and dense tokens in the top layers to refine high-frequency details. Temporally, SparseDiT dynamically modulates token density across denoising stages, progressively increasing token count as finer details emerge in later timesteps. This synergy between SparseDiT\u2019s spatially adaptive architecture and its temporal pruning strategy enables a unified framework that balances efficiency and fidelity throughout the generation process. Our experiments demonstrate SparseDiT\u2019s effectiveness, achieving a 55\\% reduction in FLOPs and a 175\\% improvement in inference speed on DiT-XL with similar FID score on 512$\\times$512 ImageNet, a 56\\% reduction in FLOPs across video generation datasets, and a 69\\% improvement in inference speed on PixArt-$\\alpha$ on text-to-image generation task with a 0.24 FID score decrease. SparseDiT provides a scalable solution for high-quality diffusion-based generation compatible with sampling optimization techniques.", "arxiv_id": "2412.06028v2", "arxiv_authors": ["Shuning Chang", "Pichao Wang", "Jiasheng Tang", "Fan Wang", "Yi Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a44a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1093267, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8b9"}, "filepath": "data/2506.07491v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998142487105303, "type": "Poster", "name": "SpatialLM: Training Large Language Models for Structured Indoor Modeling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115535", "abstract": "SpatialLM is a large language model designed to process 3D point cloud data and generate structured 3D scene understanding outputs. These outputs include architectural elements like walls, doors, windows, and oriented object boxes with their semantic categories. Unlike previous methods which exploit task-specific network designs, our model adheres to the standard multimodal LLM architecture and is fine-tuned directly from open-source LLMs.To train SpatialLM, we collect a large-scale, high-quality synthetic dataset consisting of 12,328 indoor scenes with ground-truth 3D annotations and photo-realistic RGBD scans, and conduct a careful study on various modeling and training decisions. On public benchmarks, our model gives state-of-the-art performance in layout estimation and competitive results in 3D object detection. 
With that, we show a feasible path for enhancing the spatial understanding capabilities of modern LLMs for applications in augmented reality, embodied robotics, and more.", "arxiv_id": "2506.07491v1", "arxiv_authors": ["Yongsen Mao", "Junhao Zhong", "Chuan Fang", "Jia Zheng", "Rui Tang", "Hao Zhu", "Ping Tan", "Zihan Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a44b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1042565, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ba"}, "filepath": "data/2505.23747v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995042687848595, "type": "Poster", "name": "Spatial-MLLM: Boosting MLLM Capabilities in Visual-based Spatial Intelligence", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117993", "abstract": "Recent advancements in Multimodal Large Language Models (MLLMs) have significantly enhanced performance on 2D visual tasks. However, improving their spatial intelligence remains a challenge. Existing 3D MLLMs always rely on additional 3D or 2.5D data to incorporate spatial awareness, restricting their utility in scenarios with only 2D inputs, such as images or videos. In this paper, we present \emph{Spatial-MLLM}, a novel framework for visual-based spatial reasoning from purely 2D observations. Unlike conventional video MLLMs, which rely on CLIP-based visual encoders optimized for semantic understanding, our key insight is to unleash the strong structure prior from the feed-forward visual geometry foundation model. Specifically, we propose a dual-encoder architecture: a pretrained 2D visual encoder to extract semantic features, and a spatial encoder\u2014initialized from the backbone of the visual geometry model\u2014to extract 3D structure features. A connector then integrates both features into unified visual tokens for enhanced spatial understanding. Furthermore, we propose a space-aware frame sampling strategy at inference time, which selects the spatially informative frames of a video sequence, ensuring that even under limited token length, the model focuses on frames critical for spatial reasoning. Beyond architecture improvements, we construct the Spatial-MLLM-120k dataset and train the model on it using supervised fine-tuning and GRPO. 
Extensive experiments on various real-world datasets demonstrate that our spatial-MLLM achieves state-of-the-art performance in a wide range of visual-based spatial understanding and reasoning tasks.", "arxiv_id": "2505.23747v1", "arxiv_authors": ["Diankun Wu", "Fangfu Liu", "Yi-Hsin Hung", "Yueqi Duan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a44c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2470931, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8bb"}, "filepath": "data/2504.20024v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998949851913509, "type": "Poster", "name": "SpatialReasoner: Towards Explicit and Generalizable 3D Spatial Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116669", "abstract": "Despite recent advances on multi-modal models, 3D spatial reasoning remains a challenging task for state-of-the-art open-source and proprietary models. Recent studies explore data-driven approaches and achieve enhanced spatial reasoning performance by fine-tuning models on 3D-related visual question-answering data. However, these methods typically perform spatial reasoning in an implicit manner and often fail on questions that are trivial to humans, even with long chain-of-thought reasoning. In this work, we introduce SpatialReasoner, a novel large vision-language model (LVLM) that addresses 3D spatial reasoning with explicit 3D representations shared between multiple stages--3D perception, computation, and reasoning. Explicit 3D representations provide a coherent interface that supports advanced 3D spatial reasoning and improves the generalization ability to novel question types. Furthermore, by analyzing the explicit 3D representations in multi-step reasoning traces of SpatialReasoner, we study the factual errors and identify key shortcomings of current LVLMs. Results show that our SpatialReasoner achieves improved performance on a variety of spatial reasoning benchmarks, outperforming Gemini 2.0 by 9.2% on 3DSRBench, and generalizes better when evaluating on novel 3D spatial reasoning questions. 
Our study bridges the 3D parsing capabilities of prior visual foundation models with the powerful reasoning abilities of large language models, opening new directions for 3D spatial reasoning.", "arxiv_id": "2504.20024v2", "arxiv_authors": ["Wufei Ma", "Yu-Cheng Chou", "Qihao Liu", "Xingrui Wang", "Celso de Melo", "Jianwen Xie", "Alan Yuille"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a44d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1239831, "mime_type": "image/png", "width": 4134, "height": 5847, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8bc"}, "filepath": "data/2506.03642v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996685797948726, "type": "Poster", "name": "Spatial Understanding from Videos: Structured Prompts Meet Simulation Data", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117952", "abstract": "Visual-spatial understanding, the ability to infer object relationships and layouts from visual input, is fundamental to downstream tasks such as robotic navigation and embodied interaction. However, existing methods face spatial uncertainty and data scarcity, limiting the 3D spatial reasoning capability of pre-trained vision-language models (VLMs). To address these challenges, we present a unified framework for enhancing 3D spatial reasoning in pre-trained VLMs without modifying their architecture. This framework combines SpatialMind, a structured prompting strategy that decomposes complex scenes and questions into interpretable reasoning steps, with ScanForgeQA, a scalable question-answering dataset built from diverse 3D simulation scenes through an automated construction process designed for fine-tuning. Extensive experiments across multiple benchmarks demonstrate the individual and combined effectiveness of our prompting and fine-tuning strategies, and yield insights that may inspire future research on visual-spatial understanding.", "arxiv_id": "2506.03642v2", "arxiv_authors": ["Haoyu Zhang", "Meng Liu", "Zaijing Li", "Haokun Wen", "Weili Guan", "Yaowei Wang", "Liqiang Nie"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a44e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1028263, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8bd"}, "filepath": "data/2506.21924v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993289135170915, "type": "Poster", "name": "SPAZER: Spatial-Semantic Progressive Reasoning Agent for Zero-shot 3D Visual Grounding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116117", "abstract": "3D Visual Grounding (3DVG) aims to localize target objects within a 3D scene based on natural language queries. To alleviate the reliance on costly 3D training data, recent studies have explored zero-shot 3DVG by leveraging the extensive knowledge and powerful reasoning capabilities of pre-trained LLMs and VLMs. 
However, existing paradigms tend to emphasize either spatial (3D-based) or semantic (2D-based) understanding, limiting their effectiveness in complex real-world applications. In this work, we introduce SPAZER \u2014 a VLM-driven agent that combines both modalities in a progressive reasoning framework. It first holistically analyzes the scene and produces a 3D rendering from the optimal viewpoint. Based on this, anchor-guided candidate screening is conducted to perform a coarse-level localization of potential objects. Furthermore, leveraging retrieved relevant 2D camera images, 3D-2D joint decision-making is efficiently performed to determine the best-matching object. By bridging spatial and semantic reasoning neural streams, SPAZER achieves robust zero-shot grounding without training on 3D-labeled data. Extensive experiments on ScanRefer and Nr3D benchmarks demonstrate that SPAZER significantly outperforms previous state-of-the-art zero-shot methods, achieving notable gains of $\\mathbf{9.0\\%}$ and $\\mathbf{10.9\\%}$ in accuracy.", "arxiv_id": "2506.21924v1", "arxiv_authors": ["Zhao Jin", "Rong-Cheng Tu", "Jingyi Liao", "Wenhao Sun", "Xiao Luo", "Shunyu Liu", "Dacheng Tao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a44f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1123610, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8be"}, "filepath": "data/2509.16690v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992495594837447, "type": "Poster", "name": "Spectral Compressive Imaging via Chromaticity-Intensity Decomposition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115241", "abstract": "In coded aperture snapshot spectral imaging (CASSI), the captured measurement entangles spatial and spectral information, posing a severely ill-posed inverse problem for hyperspectral image (HSI) reconstruction. Moreover, the captured radiance inherently depends on scene illumination, making it difficult to recover the intrinsic spectral reflectance that remains invariant to lighting conditions. To address these challenges, we propose a \\textbf{chromaticity-intensity decomposition framework}, which disentangles an HSI into a spatially smooth intensity map and a spectrally variant chromaticity cube. The chromaticity encodes lighting-invariant reflectance, enriched with high-frequency spatial details and local spectral sparsity. Building on this decomposition, we develop \\textbf{CIDNet}\u2014a Chromaticity-Intensity Decomposition unfolding network within a dual-camera CASSI system. CIDNet integrates a hybrid spatial-spectral Transformer tailored to reconstruct fine-grained and sparse spectral chromaticity and a degradation-aware, spatially-adaptive noise estimation module that captures anisotropic noise across iterative stages. Extensive experiments on both synthetic and real-world CASSI datasets demonstrate that our method achieves superior performance in both spectral and chromaticity fidelity. 
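As a rough picture of the chromaticity-intensity decomposition described above, the sketch below splits an HSI cube into a per-pixel intensity map and a band-wise chromaticity cube; the band-average choice of intensity is an assumption for illustration, not CIDNet's actual decomposition operator or unfolding network.

```python
import numpy as np

def decompose_hsi(hsi: np.ndarray, eps: float = 1e-8):
    """Illustrative chromaticity-intensity split of an HSI cube of shape (H, W, B).

    Uses the per-pixel band average as the (spatially smooth) intensity map and the
    band-wise ratio as the chromaticity cube, so hsi == intensity[..., None] * chromaticity.
    This is a placeholder formulation, not the paper's operator.
    """
    intensity = hsi.mean(axis=-1)                      # (H, W) brightness map
    chromaticity = hsi / (intensity[..., None] + eps)  # (H, W, B) lighting-invariant ratios
    return intensity, chromaticity

# Toy usage: a random 28-band cube
# hsi = np.random.rand(64, 64, 28)
# I, C = decompose_hsi(hsi)
```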
Code and models will be publicly available.", "arxiv_id": "2509.16690v1", "arxiv_authors": ["Xiaodong Wang", "Zijun He", "Ping Wang", "Lishun Wang", "Yanan Hu", "Xin Yuan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a450"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1342467, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8bf"}, "filepath": "data/2510.08994v1.png", "tags": [], "_media_type": "image", "_rand": 0.999458599705478, "type": "Poster", "name": "Speculative Jacobi-Denoising Decoding for Accelerating Autoregressive Text-to-image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115190", "abstract": "As a new paradigm of visual content generation, autoregressive text-to-image models suffer from slow inference due to their sequential token-by-token decoding process, often requiring thousands of model forward passes to generate a single image. To address this inefficiency, we propose Speculative Jacobi-Denoising Decoding (SJD2), a framework that incorporates the denoising process into Jacobi iterations to enable parallel token generation in autoregressive models. Our method introduces a next-clean-token prediction paradigm that enables the pre-trained autoregressive models to accept noise-perturbed token embeddings and predict the next clean tokens through low-cost fine-tuning. This denoising paradigm is beneficial to the stabilization of the Jacobi trajectories. During inference, our method initializes token sequences with Gaussian noise and performs iterative denoising in the embedding space. Concurrently with the denoising, we employ a probabilistic criterion to verify and accept multiple tokens in parallel. Experiments show that our method can accelerate generation by reducing model forward passes while maintaining the visual quality of generated images.", "arxiv_id": "2510.08994v1", "arxiv_authors": ["Yao Teng", "Fuyun Wang", "Xian Liu", "Zhekai Chen", "Han Shi", "Yu Wang", "Zhenguo Li", "Weiyang Liu", "Difan Zou", "Xihui Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a451"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1080201, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c0"}, "filepath": "data/2509.18648v4.png", "tags": [], "_media_type": "image", "_rand": 0.9997916641690384, "type": "Poster", "name": "SPiDR: A Simple Approach for Zero-Shot Safety in Sim-to-Real Transfer", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118201", "abstract": "Safety remains a major concern for deploying reinforcement learning (RL) in real-world applications. Simulators provide safe, scalable training environments, but the inevitable *sim-to-real gap* introduces additional safety concerns, as policies must satisfy constraints in real-world conditions that differ from simulation. To address this challenge, robust safe RL techniques offer principled methods, but are often incompatible with standard scalable training pipelines. 
In contrast, *domain randomization*, a simple and popular sim-to-real technique, stands out as a promising alternative, although it often results in unsafe behaviors in practice. We present SPiDR, short for Sim-to-real via Pessimistic Domain Randomization\u2014a scalable algorithm with provable guarantees for safe sim-to-real transfer. SPiDR uses domain randomization to incorporate the uncertainty about the sim-to-real gap into the safety constraints, making it flexible and highly compatible with existing training pipelines. Through extensive experiments on sim-to-sim benchmarks and two distinct real-world robotic platforms, we demonstrate that SPiDR effectively ensures safety under sim-to-real gaps while maintaining strong performance.", "arxiv_id": "2509.18648v4", "arxiv_authors": ["Yarden As", "Chengrui Qu", "Benjamin Unger", "Dongho Kang", "Max van der Hart", "Laixi Shi", "Stelian Coros", "Adam Wierman", "Andreas Krause"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a452"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1157270, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c1"}, "filepath": "data/2503.04223v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999783267041276, "type": "Poster", "name": "Spiking Meets Attention: Efficient Remote Sensing Image Super-Resolution with Attention Spiking Neural Networks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117672", "abstract": "Spiking neural networks (SNNs) are emerging as a promising alternative to traditional artificial neural networks (ANNs), offering biological plausibility and energy efficiency. Despite these merits, SNNs are frequently hampered by limited capacity and insufficient representation power, yet remain underexplored in remote sensing image (RSI) super-resolution (SR) tasks. In this paper, we first observe that spiking signals exhibit drastic intensity variations across diverse textures, highlighting an active learning state of the neurons. This observation motivates us to apply SNNs for efficient SR of RSIs. Inspired by the success of attention mechanisms in representing salient information, we devise the spiking attention block (SAB), a concise yet effective component that optimizes membrane potentials through inferred attention weights, which, in turn, regulates spiking activity for superior feature representation. Our key contributions include: 1) we bridge the independent modulation between temporal and channel dimensions, facilitating joint feature correlation learning, and 2) we access the global self-similar patterns in large-scale remote sensing imagery to infer spatial attention weights, incorporating effective priors for realistic and faithful reconstruction. Building upon SAB, we proposed SpikeSR, which achieves state-of-the-art performance across various remote sensing benchmarks such as AID, DOTA, and DIOR, while maintaining high computational efficiency. 
The code of SpikeSR will be available upon paper acceptance.", "arxiv_id": "2503.04223v3", "arxiv_authors": ["Yi Xiao", "Qiangqiang Yuan", "Kui Jiang", "Wenke Huang", "Qiang Zhang", "Tingting Zheng", "Chia-Wen Lin", "Liangpei Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a453"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1067185, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c2"}, "filepath": "data/2505.18608v5.png", "tags": [], "_media_type": "image", "_rand": 0.999326840589111, "type": "Poster", "name": "Spiking Transformers Need High-Frequency Information", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115997", "abstract": "Spiking Transformers offer an energy-efficient alternative to conventional deep learning by transmitting information solely through binary (0/1) spikes. However, there remains a substantial performance gap compared to artificial neural networks. A common belief is that their binary and sparse activation transmission leads to information loss, thus degrading feature representation and accuracy. In this work, however, we reveal for the first time that spiking neurons preferentially propagate low-frequency information. We hypothesize that the rapid dissipation of high-frequency components is the primary cause of performance degradation. For example, on Cifar-100, adopting Avg-Pooling (low-pass) for token mixing lowers performance to 76.73\\%; interestingly, replacing it with Max-Pooling (high-pass) pushes the top-1 accuracy to 79.12\\%, surpassing the well-tuned Spikformer baseline by 0.97\\%. Accordingly, we introduce Max-Former that restores high-frequency signals through two frequency-enhancing operators: extra Max-Pooling in patch embedding and Depth-Wise Convolution in place of self-attention. Notably, our Max-Former (63.99 M) hits the top-1 accuracy of 82.39\\% on ImageNet, showing a +7.58\\% improvement over Spikformer with comparable model size (74.81\\%, 66.34 M). We hope this simple yet effective solution inspires future research to explore the distinctive nature of spiking neural networks, beyond standard deep learning.", "arxiv_id": "2505.18608v5", "arxiv_authors": ["Yuetong Fang", "Deming Zhou", "Ziqing Wang", "Hongwei Ren", "ZeCui Zeng", "Lusong Li", "Shibo Zhou", "Renjing Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a454"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.649Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1110794, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c3"}, "filepath": "data/2505.22643v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994173979018862, "type": "Poster", "name": "Spiral: Semantic-Aware Progressive LiDAR Scene Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117899", "abstract": "Leveraging diffusion models, 3D LiDAR scene generation has achieved great success in both range-view and voxel-based representations. 
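To make the Max-Former observation above concrete (average pooling acts as a low-pass token mixer, max pooling as a high-pass one), here is a minimal PyTorch sketch of swapping the mixer; the residual form and kernel size are assumptions in the style of pooling-based mixers, and the spiking components of Spikformer/Max-Former are omitted.

```python
import torch
import torch.nn as nn

class PoolTokenMixer(nn.Module):
    """Pooling-based token mixing over a (B, C, H, W) feature map.

    mode="avg" gives a low-pass mixer, mode="max" a high-pass one; the residual
    subtraction follows the common convention of pooling-style mixers and is an
    assumption here, not the paper's exact block.
    """
    def __init__(self, mode: str = "max", kernel_size: int = 3):
        super().__init__()
        pad = kernel_size // 2
        pool_cls = nn.MaxPool2d if mode == "max" else nn.AvgPool2d
        self.pool = pool_cls(kernel_size, stride=1, padding=pad)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.pool(x) - x  # mix neighbouring tokens, keep spatial resolution

# x = torch.randn(2, 64, 14, 14)
# high_pass = PoolTokenMixer("max")(x)
# low_pass = PoolTokenMixer("avg")(x)
```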
While recent voxel-based approaches can generate both geometric structures and semantic labels, existing range-view methods are limited to producing unlabeled LiDAR scenes. Relying on pretrained segmentation models to predict the semantic maps often results in suboptimal cross-modal consistency. To address this limitation while preserving the advantages of range-view representations, such as computational efficiency and simplified network design, we propose Spiral, a novel range-view LiDAR diffusion model that simultaneously generates depth, reflectance images, and semantic maps. Furthermore, we introduce novel semantic-aware metrics to evaluate the quality of the generated labeled range-view data. Experiments on SemanticKITTI and nuScenes datasets demonstrate that Spiral achieves state-of-the-art performance with the smallest parameter size, outperforming two-step methods that combine the best available generative and segmentation models. Additionally, we validate that Spiral\u2019s generated range images can be effectively used for synthetic data augmentation in the downstream segmentation training, significantly reducing the labeling effort on LiDAR data.", "arxiv_id": "2505.22643v1", "arxiv_authors": ["Dekai Zhu", "Yixuan Hu", "Youquan Liu", "Dongyue Lu", "Lingdong Kong", "Slobodan Ilic"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a455"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1155954, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c4"}, "filepath": "data/2503.14905v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995854223830558, "type": "Poster", "name": "Spot the Fake: Large Multimodal Model-Based Synthetic Image Detection with Artifact Explanation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115251", "abstract": "With the rapid advancement of Artificial Intelligence Generated Content (AIGC) technologies, synthetic images have become increasingly prevalent in everyday life, posing new challenges for authenticity assessment and detection. Despite the effectiveness of existing methods in evaluating image authenticity and locating forgeries, these approaches often lack human interpretability and do not fully address the growing complexity of synthetic data. To tackle these challenges, we introduce FakeVLM, a specialized large multimodal model designed for both general synthetic image and DeepFake detection tasks. FakeVLM not only excels in distinguishing real from fake images but also provides clear, natural language explanations for image artifacts, enhancing interpretability. Additionally, we present FakeClue, a comprehensive dataset containing over 100,000 images across seven categories, annotated with fine-grained artifact clues in natural language. FakeVLM demonstrates performance comparable to expert models while eliminating the need for additional classifiers, making it a robust solution for synthetic data detection. Extensive evaluations across multiple datasets confirm the superiority of FakeVLM in both authenticity classification and artifact explanation tasks, setting a new benchmark for synthetic image detection. 
The code and model weights are available at https://anonymous.4open.science/r/nips_2025-2B62.", "arxiv_id": "2503.14905v2", "arxiv_authors": ["Siwei Wen", "Junyan Ye", "Peilin Feng", "Hengrui Kang", "Zichen Wen", "Yize Chen", "Jiang Wu", "Wenjun Wu", "Conghui He", "Weijia Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a456"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1099457, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c5"}, "filepath": "data/2506.23881v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998560376025699, "type": "Poster", "name": "Spurious-Aware Prototype Refinement for Reliable Out-of-Distribution Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115233", "abstract": "Out-of-distribution (OOD) detection is crucial for ensuring the reliability and safety of machine learning models in real-world applications, where they frequently face data distributions unseen during training.Despite progress, existing methods are often vulnerable to spurious correlations that mislead models and compromise robustness. To address this, we propose SPROD, a novel prototype-based OOD detection approach that explicitly addresses the challenge posed by unknown spurious correlations.Our post-hoc method refines class prototypes to mitigate bias from spurious features without additional data or hyperparameter tuning, and is broadly applicable across diverse backbones and OOD detection settings.We conduct a comprehensive spurious correlation OOD detection benchmarking, comparing our method against existing approaches and demonstrating its superior performance across challenging OOD datasets, such as CelebA, Waterbirds, UrbanCars, Spurious Imagenet, and the newly introduced Animals MetaCoCo. On average, SPROD improves AUROC by 4.7% and FPR@95 by 9.3% over the second best.", "arxiv_id": "2506.23881v2", "arxiv_authors": ["Reihaneh Zohrabi", "Hosein Hasani", "Mahdieh Soleymani Baghshah", "Anna Rohrbach", "Marcus Rohrbach", "Mohammad Hossein Rohban"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a457"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070700, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c6"}, "filepath": "data/2509.16588v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990157094229136, "type": "Poster", "name": "SQS: Enhancing Sparse Perception Models via Query-based Splatting in Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115925", "abstract": "Sparse Perception Models (SPMs) adopt a query-driven paradigm that forgoes explicit dense BEV or volumetric construction, enabling highly efficient computation and accelerated inference. In this paper, we introduce SQS, a novel query-based splatting pre-training specifically designed to advance SPMs in autonomous driving. 
SQS introduces a plug-in module that predicts 3D Gaussian representations from sparse queries during pre-training, leveraging self-supervised splatting to learn fine-grained contextual features through the reconstruction of multi-view images and depth maps. During fine-tuning, the pre-trained Gaussian queries are seamlessly integrated into downstream networks via query interaction mechanisms that explicitly connect pre-trained queries with task-specific queries, effectively accommodating the diverse requirements of occupancy prediction and 3D object detection. Extensive experiments on autonomous driving benchmarks demonstrate that SQS delivers considerable performance gains across multiple query-based 3D perception tasks, notably in occupancy prediction and 3D object detection, outperforming prior state-of-the-art pre-training approaches by a significant margin (i.e., +1.3 mIoU on occupancy prediction and +1.0 NDS on 3D detection).", "arxiv_id": "2509.16588v1", "arxiv_authors": ["Haiming Zhang", "Yiyao Zhu", "Wending Zhou", "Xu Yan", "Yingjie Cai", "Bingbing Liu", "Shuguang Cui", "Zhen Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a458"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1050982, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c7"}, "filepath": "data/2510.22534v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991291156476237, "type": "Poster", "name": "SRSR: Enhancing Semantic Accuracy in Real-World Image Super-Resolution with Spatially Re-Focused Text-Conditioning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115714", "abstract": "Existing Stable-Diffusion-based super-resolution approaches often exhibit semantic ambiguities due to inaccuracies and incompleteness in their text prompts, coupled with the inherent tendency for cross-attention to divert towards irrelevant pixels. To address these, we propose a novel, plug-and-play spatially re-focused super-resolution (SRSR) framework, which refines text conditioning at inference time by applying visually-grounded segmentation masks to guide cross-attention and selectively bypassing text influences on ungrounded pixels to prevent hallucinations. 
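One way to picture SRSR's selective bypass described above is to keep the text-conditioned prediction only on grounded pixels and fall back to an unconditional prediction elsewhere; the sketch below is an assumption stated at the noise-prediction level, whereas the paper operates on cross-attention inside the diffusion model.

```python
import torch

def spatially_refocused_eps(eps_text: torch.Tensor,
                            eps_uncond: torch.Tensor,
                            grounded_mask: torch.Tensor) -> torch.Tensor:
    """Illustrative blend: apply text conditioning only where the visually-grounded
    segmentation mask is 1, and use the unconditional prediction on ungrounded pixels.

    Shapes: (B, C, H, W) for the two predictions, (B, 1, H, W) for the mask.
    This is a hypothetical stand-in for SRSR's attention-level re-focusing rule.
    """
    return grounded_mask * eps_text + (1.0 - grounded_mask) * eps_uncond
```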
Extensive experiments on both synthetic and real-world datasets demonstrate that SRSR consistently outperforms seven state-of-the-art baselines in standard fidelity metrics (PSNR and SSIM) across all datasets, and in perceptual quality measures (LPIPS and DISTS) on two real-world benchmarks, underscoring its effectiveness in achieving both high semantic fidelity and perceptual quality in super-resolution.", "arxiv_id": "2510.22534v1", "arxiv_authors": ["Chen Chen", "Majid Abdolshah", "Violetta Shevchenko", "Hongdong Li", "Chang Xu", "Pulak Purkait"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a459"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1109418, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c8"}, "filepath": "data/2506.04283v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998793737172306, "type": "Poster", "name": "SSIMBaD: Sigma Scaling with SSIM-Guided Balanced Diffusion for AnimeFace Colorization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115872", "abstract": "We propose a novel diffusion-based framework for automatic colorization of Anime-style facial sketches, which preserves the structural fidelity of the input sketch while effectively transferring stylistic attributes from a reference image. Our approach builds upon recent continuous-time diffusion models, but departs from traditional methods that rely on predefined noise schedules, which often fail to maintain perceptual consistency across the generative trajectory. To address this, we introduce SSIMBaD (Sigma Scaling with SSIM-Guided Balanced Diffusion), a sigma-space transformation that ensures linear alignment of perceptual degradation, as measured by structural similarity. This perceptual scaling enforces uniform visual difficulty across timesteps, enabling more balanced and faithful reconstructions. Experiments on a large-scale Anime face dataset show that our method significantly outperforms state-of-the-art (SOTA) models in terms of both pixel-level accuracy and perceptual quality, while generalizing robustly to diverse styles and structural variations.", "arxiv_id": "2506.04283v1", "arxiv_authors": ["Junpyo Seo", "Hanbin Koo", "Jieun Yook", "Byung-Ro Moon"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a45a"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 947631, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8c9"}, "filepath": "data/2505.12448v3.png", "tags": [], "_media_type": "image", "_rand": 0.9995491562592108, "type": "Poster", "name": "SSR: Enhancing Depth Perception in Vision-Language Models via Rationale-Guided Spatial Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117942", "abstract": "Despite impressive advancements in Visual-Language Models (VLMs) for multi-modal tasks, their reliance on RGB inputs limits precise spatial understanding. 
Existing methods for integrating spatial cues, such as point clouds or depth, either require specialized sensors or fail to effectively exploit depth information for higher-order reasoning. To this end, we propose SSR, a novel Spatial Sense and Reasoning framework that transforms raw depth data into structured, interpretable textual rationales. These textual rationales serve as meaningful intermediate representations to significantly enhance spatial reasoning capabilities. Additionally, we leverage knowledge distillation to compress the generated rationales into compact latent embeddings, which facilitate resource-efficient and plug-and-play integration into existing VLMs without retraining. To enable comprehensive evaluation, we introduce a new dataset named SSR-CoT, a million-scale visual-language reasoning dataset enriched with intermediate spatial reasoning annotations, and present SSRBench, a comprehensive multi-task benchmark. Extensive experiments on multiple benchmarks demonstrate that SSR substantially improves depth utilization and enhances spatial reasoning, thereby advancing VLMs toward more human-like multi-modal understanding.", "arxiv_id": "2505.12448v3", "arxiv_authors": ["Yang Liu", "Ming Ma", "Xiaomin Yu", "Pengxiang Ding", "Han Zhao", "Mingyang Sun", "Siteng Huang", "Donglin Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a45b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060614, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ca"}, "filepath": "data/2509.26555v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992028772842474, "type": "Poster", "name": "Stable Cinemetrics : Structured Taxonomy and Evaluation for Professional Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118677", "abstract": "Recent advances in text-to-video (T2V) generation have enabled high-fidelity video synthesis from natural language prompts. However, existing models and benchmarks fail to capture the complexity and requirements of professional video generation. Towards that goal, we introduce Stable Cinemetrics (SCINE), a structured evaluation framework that formalizes filmmaking principles into four disentangled, hierarchical taxonomies: Setup, Event, Lighting, and Camera. Together, these taxonomies define 76 fine-grained control nodes grounded in industry practices. Using these taxonomies, we construct a benchmark of prompts aligned with professional use cases and develop an automated pipeline for prompt categorization and question generation, enabling independent evaluation of each control dimension. We conduct a large-scale human study spanning 10+ models and 20K videos, annotated by a pool of 80+ film professionals. Our analysis, both coarse- and fine-grained, reveals that even the strongest current models exhibit significant gaps, particularly in Event- and Camera-related controls. To enable scalable evaluation, we train an automatic evaluator, a vision-language model aligned with expert annotations that outperforms existing zero-shot baselines. 
SCINE is the first approach to formalize professional video generation within the landscape of video generative models, introducing taxonomies centered around cinematic control and supporting them with structured evaluation pipelines and detailed analyses to guide future research.", "arxiv_id": "2509.26555v1", "arxiv_authors": ["Agneet Chatterjee", "Rahim Entezari", "Maksym Zhuravinskyi", "Maksim Lapin", "Reshinth Adithyan", "Amit Raj", "Chitta Baral", "Yezhou Yang", "Varun Jampani"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a45c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1124649, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8cb"}, "filepath": "data/2509.17993v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994365827917269, "type": "Poster", "name": "StableGuard: Towards Unified Copyright Protection and Tamper Localization in Latent Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119879", "abstract": "The advancement of diffusion models has enhanced the realism of AI-generated content but also raised concerns about misuse, necessitating robust copyright protection and tampering localization. Although recent methods have made progress toward unified solutions, their reliance on post hoc processing introduces considerable application inconvenience and compromises forensic reliability. We propose StableGuard, a novel framework that seamlessly integrates a binary watermark into the diffusion generation process, ensuring copyright protection and tampering localization in Latent Diffusion Models through an end-to-end design. We develop a Multiplexing Watermark VAE (MPW-VAE) by equipping a pretrained Variational Autoencoder (VAE) with a lightweight latent residual-based adapter, enabling the generation of paired watermarked and watermark-free images. These pairs, fused via random masks, create a diverse dataset for training a tampering-agnostic forensic network. To further enhance forensic synergy, we introduce a Mixture-of-Experts Guided Forensic Network (MoE-GFN) that dynamically integrates holistic watermark patterns, local tampering traces, and frequency-domain cues for precise watermark verification and tampered region detection. The MPW-VAE and MoE-GFN are jointly optimized in a self-supervised, end-to-end manner, fostering a reciprocal training between watermark embedding and forensic accuracy. 
Extensive experiments demonstrate that StableGuard consistently outperforms state-of-the-art methods in image fidelity, watermark verification, and tampering localization.", "arxiv_id": "2509.17993v2", "arxiv_authors": ["Haoxin Yang", "Bangzhen Liu", "Xuemiao Xu", "Cheng Xu", "Yuyang Yu", "Zikai Huang", "Yi Wang", "Shengfeng He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a45d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1054467, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8cc"}, "filepath": "data/2509.10687v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997502708513365, "type": "Poster", "name": "Stable Part Diffusion: Multi-View RGB and Kinematic Parts Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118844", "abstract": "We present Stable Part Diffusion (SPD), a framework for generating paired RGB and kinematic part videos from monocular inputs. Unlike conventional part segmentation methods that rely on appearance-based semantic cues, SPD learns to produce kinematic parts --- structural components aligned with object articulation and consistent across views and time.SPD adopts a dual-branch diffusion model that jointly synthesizes RGB frames and corresponding part segmentation maps. To simplify architecture and flexibly enable different part counts, we introduce a spatial color encoding scheme that maps part masks to continuous RGB-like images. This encoding allows the segmentation branch to share the latents VAE from the RGB branch, while enabling part segmentation to be recovered via straightforward post-processing. A Bidirectional Diffusion Fusion (BiDiFuse) module enhances cross-branch consistency, supported by a contrastive part consistency loss to promote spatial and temporal alignment of part predictions.We demonstrate that the generated 2D part maps can be lifted to 3D to derive skeletal structures and harmonic skinning weights with few manual adjustments. To train and evaluate SPD, we construct KinematicParts20K, a curated dataset of over 20K rigged objects selected and processed from Objaverse XL, each paired with multi-view RGB and part video sequences. 
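The spatial color encoding mentioned above for Stable Part Diffusion (part masks mapped to continuous RGB-like images so the segmentation branch can reuse the RGB VAE, with parts recovered by straightforward post-processing) can be illustrated as follows; the palette and the nearest-color decoding are assumptions, not the paper's exact scheme.

```python
import numpy as np

def encode_parts_to_rgb(part_ids: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Map an (H, W) integer part map to an (H, W, 3) float image in [0, 1]
    by looking up each part id in a fixed color palette of shape (num_parts, 3)."""
    return palette[part_ids]

def decode_rgb_to_parts(rgb: np.ndarray, palette: np.ndarray) -> np.ndarray:
    """Recover part ids by nearest palette color (illustrative post-processing step)."""
    dists = np.linalg.norm(rgb[..., None, :] - palette[None, None, :, :], axis=-1)
    return dists.argmin(axis=-1)

# Round-trip check with a hypothetical 8-part palette:
# palette = np.random.RandomState(0).rand(8, 3)
# parts = np.random.RandomState(1).randint(0, 8, (64, 64))
# assert (decode_rgb_to_parts(encode_parts_to_rgb(parts, palette), palette) == parts).all()
```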
Experiments show that SPD generalizes strongly to diverse scenarios, including real-world videos, novel generated objects, and rare articulated poses, producing kinematic-aware outputs suitable for downstream animation and motion-related tasks.", "arxiv_id": "2509.10687v1", "arxiv_authors": ["Hao Zhang", "Chun-Han Yao", "Simon Donn\u00e9", "Narendra Ahuja", "Varun Jampani"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a45e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1724970, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8cd"}, "filepath": "data/2507.16385v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993159492349222, "type": "Poster", "name": "STAR: A Benchmark for Astronomical Star Fields Super-Resolution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121539", "abstract": "Super-resolution (SR) advances astronomical imaging by enabling cost-effective high-resolution capture, crucial for detecting faraway celestial objects and precise structural analysis. However, existing datasets for astronomical SR (ASR) exhibit three critical limitations: flux inconsistency, object-crop setting, and insufficient data diversity, significantly impeding ASR development. We propose STAR, a large-scale astronomical SR dataset containing 54,738 flux-consistent star field image pairs covering wide celestial regions. These pairs combine Hubble Space Telescope high-resolution observations with physically faithful low-resolution counterparts generated through a flux-preserving data generation pipeline, enabling systematic development of field-level ASR models. To further empower the ASR community, STAR provides a novel Flux Error (FE) metric to evaluate SR models from a physical perspective. Leveraging this benchmark, we propose a Flux-Invariant Super Resolution (FISR) model that can accurately infer flux-consistent high-resolution images from input photometry, surpassing several state-of-the-art SR methods by 24.84% on a newly designed flux consistency metric and showing the superiority of our method for astrophysics. Extensive experiments demonstrate the effectiveness of our proposed method and the value of our dataset. 
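The abstract above does not define STAR's Flux Error metric, so as a placeholder intuition only, a flux-consistency check could compare the integrated flux of a super-resolved image with its reference; the definition below is an assumption, not the benchmark's actual metric.

```python
import numpy as np

def relative_flux_error(sr: np.ndarray, hr: np.ndarray, eps: float = 1e-12) -> float:
    """Toy flux-consistency check (an assumption, not STAR's Flux Error definition):
    relative difference in total integrated flux between the super-resolved output
    and the high-resolution reference frame."""
    return float(abs(sr.sum() - hr.sum()) / (abs(hr.sum()) + eps))
```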
Code and models are available at https://github.com/GuoCheng12/STAR.", "arxiv_id": "2507.16385v2", "arxiv_authors": ["Kuo-Cheng Wu", "Guohang Zhuang", "Jinyang Huang", "Xiang Zhang", "Wanli Ouyang", "Yan Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a45f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1092827, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ce"}, "filepath": "data/2506.06276v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996534204460886, "type": "Poster", "name": "STARFlow: Scaling Latent Normalizing Flows for High-resolution Image Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120027", "abstract": "We present STARFlow, a scalable generative model based on autoregressive flows (AFs)\u2014a special class of normalizing flows\u2014that achieves strong performance on high-resolution image synthesis. We first establish the theoretical universality of AFs for modeling continuous distributions. Building on this foundation, we introduce a set of architectural and algorithmic innovations that significantly enhance the scalability of normalizing flows: (1) a deep-shallow design where a deep AF block captures most of the model\u2019s capacity, followed by a few shallow AF blocks that are computationally cheap yet contribute non-negligibly, (2) learning in the latent space of pretrained autoencoders, which proves far more effective than modeling pixels directly, and (3) a novel guidance algorithm that substantially improves sample quality. Crucially, our model remains a single, end-to-end normalizing flow, allowing exact maximum likelihood training in continuous space without discretization. \\model{} achieves competitive results in both class- and text-conditional image generation, with sample quality approaching that of state-of-the-art diffusion models. To our knowledge, this is the first successful demonstration of normalizing flows at this scale and resolution.", "arxiv_id": "2506.06276v1", "arxiv_authors": ["Jiatao Gu", "Tianrong Chen", "David Berthelot", "Huangjie Zheng", "Yuyang Wang", "Ruixiang Zhang", "Laurent Dinh", "Miguel Angel Bautista", "Josh Susskind", "Shuangfei Zhai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a460"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.650Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6501865, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8cf"}, "filepath": "data/2505.22246v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999514258971027, "type": "Poster", "name": "StateSpaceDiffuser: Bringing Long-Context Content to Diffusion World Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116774", "abstract": "World models have recently become promising tools for learning to simulate environments and produce realistic visuals based on actions in complex settings. However, because they rely on short observation sequences, they quickly lose track of context. 
As a result, visual consistency breaks down after just a few steps, and generated scenes no longer reflect information seen earlier. This problem comes from a common design choice in state-of-the-art diffusion-based models: they do not keep track of a lasting environment state. To address this problem, we introduce StateSpaceDiffuser, which enables a diffusion model to perform long-context tasks by integrating a rich representation from a state-space model (Mamba). The state-space branch maintains a compact latent summarizing the entire interaction history, while the diffusion branch conditions on this latent to render context-aware future frames. This design restores long-term memory without sacrificing the high-fidelity synthesis of diffusion models. To rigorously measure temporal consistency, we develop an evaluation protocol that probes a model\u2019s ability to remember and re-instantiate previously seen content during extended rollouts. Through comprehensive experiments, we show that the StateSpaceDiffuser significantly outperforms a strong diffusion-only baseline, maintaining coherent visual context for an order of magnitude more steps. It delivers consistent views in both a 2D maze navigation task and a complex 3D environment. These results establish that bringing state-space representations into diffusion models is highly effective, yielding world models that are both visually detailed and capable of long-term memory.", "arxiv_id": "2505.22246v2", "arxiv_authors": ["Nedko Savov", "Naser Kazemi", "Deheng Zhang", "Danda Pani Paudel", "Xi Wang", "Luc Van Gool"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a461"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1614281, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d0"}, "filepath": "data/2510.12160v1.png", "tags": [], "_media_type": "image", "_rand": 0.999223839379825, "type": "Poster", "name": "State Space Prompting via Gathering and Spreading Spatio-Temporal Information for Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117910", "abstract": "Recently, pre-trained state space models have shown great potential for video classification: they sequentially compress visual tokens in videos with linear complexity, thereby improving the processing efficiency of video data while maintaining high performance. To apply powerful pre-trained models to downstream tasks, prompt learning has been proposed to achieve efficient downstream task adaptation with only a small number of fine-tuned parameters. However, the sequentially compressed visual prompt tokens fail to capture the spatial and temporal contextual information in the video, thus limiting the effective propagation of spatial information within a video frame and of temporal information between frames in the state compression model, and hence the extraction of discriminative information. To tackle the above issue, we propose a State Space Prompting (SSP) method for video understanding, which combines intra-frame and inter-frame prompts to aggregate and propagate key spatiotemporal information in the video. Specifically, an Intra-Frame Gathering (IFG) module is designed to aggregate spatial key information within each frame. 
Besides, an Inter-Frame Spreading (IFS) module is designed to spread discriminative spatio-temporal information across different frames. By adaptively balancing and compressing key spatio-temporal information within and between frames, our SSP effectively propagates discriminative information in videos in a complementary manner. Extensive experiments on four video benchmark datasets verify that our SSP significantly outperforms existing SOTA methods by 2.76\\% on average while reducing the overhead of fine-tuning parameters.", "arxiv_id": "2510.12160v1", "arxiv_authors": ["Jiahuan Zhou", "Kai Zhu", "Zhenyu Cui", "Zichen Liu", "Xu Zou", "Gang Hua"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a462"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060335, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d1"}, "filepath": "data/2505.20781v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994871515846149, "type": "Poster", "name": "STITCH-OPE: Trajectory Stitching with Guided Diffusion for Off-Policy Evaluation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119444", "abstract": "Off-policy evaluation (OPE) estimates the performance of a target policy using offline data collected from a behavior policy, and is crucial in domains such as robotics or healthcare where direct interaction with the environment is costly or unsafe. Existing OPE methods are ineffective for high-dimensional, long-horizon problems, due to exponential blow-ups in variance from importance weighting or compounding errors from learned dynamics models. To address these challenges, we propose STITCH-OPE, a model-based generative framework that leverages denoising diffusion for long-horizon OPE in high-dimensional state and action spaces. Starting with a diffusion model pre-trained on the behavior data, STITCH-OPE generates synthetic trajectories from the target policy by guiding the denoising process using the score function of the target policy. STITCH-OPE proposes two technical innovations that make it advantageous for OPE: (1) prevents over-regularization by subtracting the score of the behavior policy during guidance, and (2) generates long-horizon trajectories by stitching partial trajectories together end-to-end. We provide a theoretical guarantee that under mild assumptions, these modifications result in an exponential reduction in variance versus long-horizon trajectory diffusion. 
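The two guidance ingredients of STITCH-OPE described above, steering the denoiser with the target policy's score while subtracting the behavior policy's score to avoid over-regularization, amount to a modified score direction; the sketch below writes that combination out with a hypothetical guidance weight w and leaves out the trajectory-stitching machinery.

```python
import torch

def guided_score(score_diffusion: torch.Tensor,
                 grad_logpi_target: torch.Tensor,
                 grad_logpi_behavior: torch.Tensor,
                 w: float = 1.0) -> torch.Tensor:
    """Illustrative guided denoising direction: the behavior-data diffusion score
    plus a weighted difference between the target- and behavior-policy scores
    evaluated on the noisy trajectory's actions. `w` is a hypothetical weight."""
    return score_diffusion + w * (grad_logpi_target - grad_logpi_behavior)
```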
Experiments on the D4RL and OpenAI Gym benchmarks show substantial improvement in mean squared error, correlation, and regret metrics compared to state-of-the-art OPE methods.", "arxiv_id": "2505.20781v1", "arxiv_authors": ["Hossein Goli", "Michael Gimelfarb", "Nathan Samuel de Lara", "Haruki Nishimura", "Masha Itkina", "Florian Shkurti"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a463"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1069776, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d2"}, "filepath": "data/2410.14238v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994113097131914, "type": "Poster", "name": "Storyboard-guided Alignment for Fine-grained Video Action Recognition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119384", "abstract": "Fine-grained video action recognition can be formulated as a video\u2013text matching problem. Previous approaches primarily rely on global video semantics to consolidate video embeddings, often leading to misaligned video\u2013text pairs due to inaccurate atomic-level action understanding. This inaccuracy arises because i) videos with distinct global semantics may share similar atomic actions or visual appearances, and ii) atomic actions can be momentary, gradual, or not directly aligned with overarching video semantics. Inspired by storyboarding, where a script is segmented into individual shots, we propose a multi-granularity framework, SFAR. SFAR generates fine-grained descriptions of common atomic actions for each global semantic using a large language model. Unlike existing works that refine global semantics with auxiliary video frames, SFAR introduces a filtering metric to ensure correspondence between the descriptions and the global semantics, eliminating the need for direct video involvement and thereby enabling more nuanced recognition of subtle actions. By leveraging both global semantics and fine-grained descriptions, our SFAR effectively identifies prominent frames within videos, thereby improving the accuracy of embedding aggregation. 
Extensive experiments on various video action recognition datasets demonstrate the competitive performance of our SFAR in supervised, few-shot, and zero-shot settings.", "arxiv_id": "2410.14238v1", "arxiv_authors": ["Enqi Liu", "Liyuan Pan", "Yan Yang", "Yiran Zhong", "Zhijing Wu", "Xinxiao Wu", "Liu Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a464"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2966441, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d3"}, "filepath": "data/2509.21056v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993948425562778, "type": "Poster", "name": "Stratify or Die: Rethinking Data Splits in Image Segmentation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116115", "abstract": "Random splitting of datasets in image segmentation often leads to unrepresentative test sets, resulting in biased evaluations and poor model generalization. While stratified sampling has proven effective for addressing label distribution imbalance in classification tasks, extending these ideas to segmentation remains challenging due to the multi-label structure and class imbalance typically present in such data. Building on existing stratification concepts, we introduce Iterative Pixel Stratification (IPS), a straightforward, label-aware sampling method tailored for segmentation tasks. Additionally, we present Wasserstein-Driven Evolutionary Stratification (WDES), a novel genetic algorithm designed to minimize the Wasserstein distance, thereby optimizing the similarity of label distributions across dataset splits. We prove that WDES is globally optimal given enough generations. Using newly proposed statistical heterogeneity metrics, we evaluate both methods against random sampling and find that WDES consistently produces more representative splits. Applying WDES across diverse segmentation tasks, including street scenes, medical imaging, and satellite imagery, leads to lower performance variance and improved model evaluation. 
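To make the Wasserstein-driven split search described above concrete, here is a deliberately minimal genetic-style search over binary split masks that scores candidates with scipy.stats.wasserstein_distance over per-class pixel frequencies; WDES's actual crossover and selection operators, and its optimality guarantee, are not reproduced, so treat this purely as a sketch.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def split_distance(mask: np.ndarray, label_hist: np.ndarray) -> float:
    """1D Wasserstein distance between the class-frequency distribution of the
    selected images (mask == True) and that of the full dataset.
    label_hist: (num_images, num_classes) per-image pixel counts per class."""
    classes = np.arange(label_hist.shape[1])
    full = label_hist.sum(axis=0) + 1e-9
    part = label_hist[mask].sum(axis=0) + 1e-9
    return wasserstein_distance(classes, classes, u_weights=full, v_weights=part)

def evolutionary_split(label_hist: np.ndarray, test_fraction: float = 0.2,
                       pop_size: int = 32, generations: int = 200, seed: int = 0) -> np.ndarray:
    """Minimal genetic-style search for a test-split mask with a label distribution
    close to the full dataset (illustrative only, not the paper's WDES)."""
    rng = np.random.default_rng(seed)
    n = label_hist.shape[0]
    k = max(1, int(round(test_fraction * n)))

    def random_mask() -> np.ndarray:
        m = np.zeros(n, dtype=bool)
        m[rng.choice(n, size=k, replace=False)] = True
        return m

    population = [random_mask() for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda m: split_distance(m, label_hist))
        parents = population[: pop_size // 2]
        children = []
        for parent in parents:
            child = parent.copy()
            # Mutation: swap one selected image with one unselected image,
            # keeping the test-set size fixed.
            i = rng.choice(np.flatnonzero(child))
            j = rng.choice(np.flatnonzero(~child))
            child[i], child[j] = False, True
            children.append(child)
        population = parents + children
    return min(population, key=lambda m: split_distance(m, label_hist))
```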
Our results also highlight the particular value of WDES in handling small, imbalanced, and low-diversity datasets, where conventional splitting strategies are most prone to bias.", "arxiv_id": "2509.21056v1", "arxiv_authors": ["Naga Venkata Sai Jitin Jami", "Thomas Altstidl", "Jonas Mueller", "Jindong Li", "Dario Zanca", "Bjoern Eskofier", "Heike Leutheuser"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a465"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1058836, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d4"}, "filepath": "data/2505.05467v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995112446335203, "type": "Poster", "name": "StreamBridge: Turning Your Offline Video Large Language Model into a Proactive Streaming Assistant", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119217", "abstract": "We present StreamBridge, a simple yet effective framework that seamlessly transforms offline Video-LLMs into streaming-capable models. It addresses two fundamental challenges in adapting existing models into online scenarios: (1) limited capability for multi-turn real-time understanding, and (2) lack of proactive response mechanisms. Specifically, StreamBridge incorporates (1) a memory buffer combined with a round-decayed compression strategy, supporting long-context multi-turn interactions, and (2) a decoupled, lightweight activation model that can be effortlessly integrated into existing Video-LLMs, enabling continuous proactive responses. To further support StreamBridge, we construct Stream-IT, a large-scale dataset tailored for streaming video understanding, featuring interleaved video-text sequences and diverse instruction formats. Extensive experiments show that StreamBridge significantly improves the streaming understanding capabilities of offline Video-LLMs across various tasks, outperforming even proprietary models such as GPT-4o and Gemini 1.5 Pro. Simultaneously, it achieves competitive or superior performance on standard video understanding benchmarks.", "arxiv_id": "2505.05467v2", "arxiv_authors": ["Haibo Wang", "Bo Feng", "Zhengfeng Lai", "Mingze Xu", "Shiyu Li", "Weifeng Ge", "Afshin Dehghan", "Meng Cao", "Ping Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a466"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1113233, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d5"}, "filepath": "data/2509.24871v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997260935040825, "type": "Poster", "name": "StreamForest: Efficient Online Video Understanding with Persistent Event Memory", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119513", "abstract": "Multimodal Large Language Models (MLLMs) have recently achieved remarkable progress in video understanding. 
However, their effectiveness in real-time streaming scenarios remains limited due to storage constraints of historical visual features and insufficient real-time spatiotemporal reasoning. To address these challenges, we propose StreamForest, a novel architecture specifically designed for streaming video understanding. Central to StreamForest is the Persistent Event Memory Forest, a memory mechanism that adaptively merges video frames into multiple event-level tree structures. This process is guided by penalty functions based on temporal distance, content similarity, and merge frequency, enabling efficient long-term memory retention under limited computational resources. To enhance real-time perception, we introduce a Fine-grained Spatiotemporal Window, which captures detailed short-term visual cues to improve current scene understanding. Additionally, we present OnlineIT, an instruction-tuning dataset tailored for streaming video tasks. OnlineIT significantly boosts MLLM performance in both real-time perception and future prediction. To evaluate generalization in practical applications, we introduce ODV-Bench, a benchmark focused on real-time streaming video understanding in autonomous driving scenarios. Experimental results demonstrate that StreamForest achieves state-of-the-art performance, with accuracies of 77.3% on StreamingBench, 60.5% on OVBench, and 55.6% on OVO-Bench. Notably, even under extreme visual token compression (limited to 1024 tokens), the model retains 96.7% of its average accuracy across eight benchmarks relative to the default setting. These results underscore the robustness, efficiency, and generalizability of StreamForest for streaming video understanding.", "arxiv_id": "2509.24871v1", "arxiv_authors": ["Xiangyu Zeng", "Kefan Qiu", "Qingyu Zhang", "Xinhao Li", "Jing Wang", "Jiaxin Li", "Ziang Yan", "Kun Tian", "Meng Tian", "Xinhai Zhao", "Yi Wang", "Limin Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a467"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1032060, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d6"}, "filepath": "data/2506.04220v2.png", "tags": [], "_media_type": "image", "_rand": 0.999820094228166, "type": "Poster", "name": "Struct2D: A Perception-Guided Framework for Spatial Reasoning in Multimodal Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115762", "abstract": "Unlocking spatial reasoning in Large Multimodal Models (LMMs) is crucial for enabling intelligent interaction with 3D environments. While prior efforts often rely on explicit 3D inputs or specialized model architectures, we ask: can LMMs reason about 3D space using only structured 2D representations derived from perception?In this work, we introduce Struct2D, a perception-guided prompting framework that combines bird\u2019s-eye-view (BEV) images with object marks and object-centric metadata, optionally incorporating egocentric keyframes when needed. 
Using Struct2D, we conduct an in-depth zero-shot analysis of closed-source LMMs (e.g., GPT-4o) and find that they exhibit surprisingly strong spatial reasoning abilities when provided with projected 2D inputs, effectively handling tasks such as relative direction estimation and route planning.Motivated by these findings, we construct a large-scale instructional tuning dataset, \\textbf{Struct2D-Set}, using an automated pipeline that generates fine-grained QA pairs grounded in 3D indoor scenes. We then fine-tune an open-source LMM (Qwen2.5VL) using Struct2D-Set, relying on noisy 3D perception rather than ground-truth annotations. Despite this, the tuned model achieves strong performance across multiple spatial reasoning benchmarks, including 3D question answering, captioning, and object grounding, spanning eight diverse reasoning categories.Our approach demonstrates that structured 2D inputs can effectively bridge perception and language reasoning in LMMs\u2014without requiring explicit 3D representations as input. We will release both our code and dataset to support future research.", "arxiv_id": "2506.04220v2", "arxiv_authors": ["Fangrui Zhu", "Hanhui Wang", "Yiming Xie", "Jing Gu", "Tianye Ding", "Jianwei Yang", "Huaizu Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a468"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1968374, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d7"}, "filepath": "data/2505.19985v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992993165611828, "type": "Poster", "name": "Structured Initialization for Vision Transformers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116806", "abstract": "Convolutional Neural Networks (CNNs) inherently encode strong inductive biases, enabling effective generalization on small-scale datasets. In this paper, we propose integrating this inductive bias into ViTs, not through an architectural intervention but solely through initialization.The motivation here is to have a ViT that can enjoy strong CNN-like performance when data assets are small, but can still scale to ViT-like performance as the data expands. Our approach is motivated by our empirical results that random impulse filters can achieve commensurate performance to learned filters within a CNN. We improve upon current ViT initialization strategies, which typically rely on empirical heuristics such as using attention weights from pretrained models or focusing on the distribution of attention weights without enforcing structures. Empirical results demonstrate that our method significantly outperforms standard ViT initialization across numerous small and medium-scale benchmarks, including Food-101, CIFAR-10, CIFAR-100, STL-10, Flowers, and Pets, while maintaining comparative performance on large-scale datasets such as ImageNet-1K. 
Moreover, our initialization strategy can be easily integrated into various transformer-based architectures such as Swin Transformer and MLP-Mixer with consistent improvements in performance.", "arxiv_id": "2505.19985v1", "arxiv_authors": ["Jianqiao Zheng", "Xueqian Li", "Hemanth Saratchandran", "Simon Lucey"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a469"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 997418, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d8"}, "filepath": "data/2506.06218v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997995749641554, "type": "Poster", "name": "STSBench: A Spatio-temporal Scenario Benchmark for Multi-modal Large Language Models in Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121664", "abstract": "We introduce STSBench, a scenario-based framework to benchmark the holistic understanding of vision-language models (VLMs) for autonomous driving. The framework automatically mines pre-defined traffic scenarios from any dataset using ground-truth annotations, provides an intuitive user interface for efficient human verification, and generates multiple-choice questions for model evaluation. Applied to the NuScenes dataset, we present STSnu, the first benchmark that evaluates the spatio-temporal reasoning capabilities of VLMs based on comprehensive 3D perception. Existing benchmarks typically target off-the-shelf or fine-tuned VLMs for images or videos from a single viewpoint and focus on semantic tasks such as object recognition, dense captioning, risk assessment, or scene understanding. In contrast, STSnu evaluates driving expert VLMs for end-to-end driving, operating on videos from multi-view cameras or LiDAR. It specifically assesses their ability to reason about both ego-vehicle actions and complex interactions among traffic participants, a crucial capability for autonomous vehicles. The benchmark features 43 diverse scenarios spanning multiple views and frames, resulting in 971 human-verified multiple-choice questions. A thorough evaluation uncovers critical shortcomings in existing models\u2019 ability to reason about fundamental traffic dynamics in complex environments. These findings highlight the urgent need for architectural advances that explicitly model spatio-temporal reasoning. 
By addressing a core gap in spatio-temporal evaluation, STSBench enables the development of more robust and explainable VLMs for autonomous driving.", "arxiv_id": "2506.06218v1", "arxiv_authors": ["Christian Fruhwirth-Reisinger", "Du\u0161an Mali\u0107", "Wei Lin", "David Schinagl", "Samuel Schulter", "Horst Possegger"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a46a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.651Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 990682, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8d9"}, "filepath": "data/2505.21060v2.png", "tags": [], "_media_type": "image", "_rand": 0.999041009823677, "type": "Poster", "name": "Styl3R: Instant 3D Stylization Reconstruction from Arbitrary Scenes and Styles", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119247", "abstract": "Stylizing 3D scenes instantly while maintaining multi-view consistency and faithfully resembling a style image remains a significant challenge. Current state-of-the-art 3D stylization methods typically involve computationally intensive test-time optimization to transfer artistic features into a pretrained 3D representation, often requiring dense posed input images. In contrast, leveraging recent advances in feed-forward reconstruction models, we demonstrate a novel approach to achieve direct 3D stylization in less than a second using unposed sparse-view scene images and an arbitrary style image. To address the inherent decoupling between reconstruction and stylization, we introduce a branched architecture that separates structure modeling and appearance shading, effectively preventing stylistic transfer from distorting the underlying 3D scene structure. Furthermore, we adapt an identity loss to facilitate pre-training our stylization model through the novel view synthesis task. This strategy also allows our model to retain its original reconstruction capabilities while being fine-tuned for stylization. Comprehensive evaluations, using both in-domain and out-of-domain datasets, demonstrate that our approach produces high-quality stylized 3D content that achieve a superior blend of style and scene appearance, while also outperforming existing methods in terms of multi-view consistency and efficiency.", "arxiv_id": "2505.21060v2", "arxiv_authors": ["Peng Wang", "Xiang Liu", "Peidong Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a46b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3552145, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8da"}, "filepath": "data/2505.18766v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993478193503733, "type": "Poster", "name": "StyleGuard: Preventing Text-to-Image-Model-based Style Mimicry Attacks by Style Perturbations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117897", "abstract": "Recently, text-to-image diffusion models have been widely used for style mimicry and personalized customization through methods such as DreamBooth and Textual Inversion. 
This has raised concerns about intellectual property protection and the generation of deceptive content.Recent studies, such as Glaze and Anti-DreamBooth, have proposed using adversarial noise to protect images from these attacks. However, recent purification-based methods, such as DiffPure and Noise Upscaling, have successfully attacked these latest defenses, showing the vulnerabilities of these methods. Moreover, present methods show limited transferability across models, making them less effective against unknown text-to-image models.To address these issues, we propose a novel anti-mimicry method, StyleGuard. We propose a novel style loss that optimizes the style-related features in the latent space to make it deviate from the original image, which improves model-agnostic transferability.Additionally, to enhance the perturbation's ability to bypass diffusion-based purification, we designed a novel upscale loss that involves ensemble purifiers and upscalers during training.Extensive experiments on the WikiArt and CelebA datasets demonstrate that StyleGuard outperforms existing methods in robustness against various transformations and purifications, effectively countering style mimicry in various models. Moreover, StyleGuard is effective on different style mimicry methods, including DreamBooth and Textual Inversion.", "arxiv_id": "2505.18766v1", "arxiv_authors": ["Yanjie Li", "Wenxuan Zhang", "Xinqi Lyu", "Yihao Liu", "Bin Xiao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a46c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1058989, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8db"}, "filepath": "data/2411.13112v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993621518338925, "type": "Poster", "name": "SURDS: Benchmarking Spatial Understanding and Reasoning in Driving Scenarios with Vision Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121802", "abstract": "Accurate spatial reasoning in outdoor environments\u2014covering geometry, object pose, and inter-object relationships\u2014is fundamental to downstream tasks such as mapping, motion forecasting, and high-level planning in autonomous driving. We introduce SURDS, a large-scale benchmark designed to systematically evaluate the spatial reasoning capabilities of vision language models (VLMs). Built on the nuScenes dataset, SURDS comprises 41,080 vision\u2013question\u2013answer training instances and 9,250 evaluation samples, spanning six spatial categories: orientation, depth estimation, pixel-level localization, pairwise distance, lateral ordering, and front\u2013behind relations. We benchmark leading general-purpose VLMs, including GPT, Gemini, and Qwen, revealing persistent limitations in fine-grained spatial understanding. To address these deficiencies, we go beyond static evaluation and explore whether alignment techniques can improve spatial reasoning performance. Specifically, we propose a reinforcement learning\u2013based alignment scheme leveraging spatially grounded reward signals\u2014capturing both perception-level accuracy (location) and reasoning consistency (logic). We further incorporate final-answer correctness and output-format rewards to guide fine-grained policy adaptation. 
Our GRPO-aligned variant achieves an overall score of 40.80 on the SURDS benchmark. Notably, it outperforms proprietary systems such as GPT-4o (13.30) and Gemini-2.0-flash (35.71). To the best of our knowledge, this is the first study to demonstrate that reinforcement learning\u2013based alignment can significantly and consistently enhance the spatial reasoning capabilities of VLMs in real-world driving contexts. We release the SURDS benchmark, evaluation toolkit, and GRPO alignment code through: https://github.com/XiandaGuo/Drive-MLLM.", "arxiv_id": "2411.13112v3", "arxiv_authors": ["Xianda Guo", "Ruijun Zhang", "Yiqun Duan", "Yuhang He", "Dujun Nie", "Wenke Huang", "Chenming Zhang", "Shuai Liu", "Hao Zhao", "Long Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a46d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2012305, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8dc"}, "filepath": "data/2507.07781v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992041082557207, "type": "Poster", "name": "Surprise3D: A Dataset for Spatial Understanding and Reasoning in Complex 3D Scenes", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121718", "abstract": "The integration of language and 3D perception is critical for embodied AI and robotic systems to perceive, understand, and interact with the physical world. Spatial reasoning, a key capability for understanding spatial relationships between objects, remains underexplored in current 3D vision-language research. Existing datasets often mix semantic cues (e.g., object name) with spatial context, leading models to rely on superficial shortcuts rather than genuinely interpreting spatial relationships. To address this gap, we introduce Surprise3D, a novel dataset designed to evaluate language-guided spatial reasoning segmentation in complex 3D scenes. Surprise3D consists of more than 200k vision language pairs across 900+ detailed indoor scenes from ScanNet++ v2, including more than 2.8k unique object classes. The dataset contains 89k+ human-annotated spatial queries deliberately crafted without object name, thereby mitigating shortcut biases in spatial understanding. These queries comprehensively cover various spatial reasoning skills, such as relative position, narrative perspective, parametric perspective, and absolute distance reasoning. Initial benchmarks demonstrate significant challenges for current state-of-the-art expert 3D visual grounding methods and 3D-LLMs, underscoring the necessity of our dataset and the accompanying 3D Spatial Reasoning Segmentation (3D-SRS) benchmark suite. 
Surprise3D and 3D-SRS aim to facilitate advancements in spatially aware AI, paving the way for effective embodied interaction and robotic planning.", "arxiv_id": "2507.07781v1", "arxiv_authors": ["Jiaxin Huang", "Ziwen Li", "Hanlve Zhang", "Runnan Chen", "Xiao He", "Yandong Guo", "Wenping Wang", "Tongliang Liu", "Mingming Gong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a46e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1073574, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8dd"}, "filepath": "data/2510.20965v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990006932140457, "type": "Poster", "name": "SutureBot: A Precision Framework & Benchmark For Autonomous End-to-End Suturing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121650", "abstract": "Robotic suturing is a prototypical long-horizon dexterous manipulation task, requiring coordinated needle grasping, precise tissue penetration, and secure knot tying. Despite numerous efforts toward end-to-end autonomy, a fully autonomous suturing pipeline has yet to be demonstrated on physical hardware. We introduce SutureBot: an autonomous suturing benchmark on the da Vinci Research Kit (dVRK), spanning needle pickup, tissue insertion, and knot tying. To ensure repeatability, we release a high-fidelity dataset comprising 1,890 suturing demonstrations. Furthermore, we propose a goal-conditioned framework that explicitly optimizes insertion-point precision, improving targeting accuracy by 80\\% over a task-only baseline. To establish this task as a benchmark for dexterous imitation learning, we evaluate state-of-the-art vision-language-action (VLA) models, including $\\pi_0$, GR00T N1, OpenVLA-OFT, and multitask ACT, each augmented with a high-level task-prediction policy. Autonomous suturing is a key milestone toward achieving robotic autonomy in surgery. These contributions support reproducible evaluation and development of precision-focused, long-horizon dexterous manipulation policies necessary for end-to-end suturing. Dataset is available at: \\href{https://huggingface.co/datasets/jchen396/suturebot}{Hugging Face}.", "arxiv_id": "2510.20965v1", "arxiv_authors": ["Jesse Haworth", "Juo-Tung Chen", "Nigel Nelson", "Ji Woong Kim", "Masoud Moghani", "Chelsea Finn", "Axel Krieger"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a46f"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1741510, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8de"}, "filepath": "data/2506.02444v3.png", "tags": [], "_media_type": "image", "_rand": 0.9994491672628261, "type": "Poster", "name": "SViMo: Synchronized Diffusion for Video and Motion Generation in Hand-object Interaction Scenarios", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116609", "abstract": "Hand-Object Interaction (HOI) generation has significant application potential. 
However, current 3D HOI motion generation approaches heavily rely on predefined 3D object models and lab-captured motion data, limiting generalization capabilities. Meanwhile, HOI video generation methods prioritize pixel-level visual fidelity, often sacrificing physical plausibility. Recognizing that visual appearance and motion patterns share fundamental physical laws in the real world, we propose a novel framework that combines visual priors and dynamic constraints within a synchronized diffusion process to generate the HOI video and motion simultaneously. To integrate the heterogeneous semantics, appearance, and motion features, our method implements tri-modal adaptive modulation for feature aligning, coupled with 3D full-attention for modeling inter- and intra-modal dependencies. Furthermore, we introduce a vision-aware 3D interaction diffusion model that generates explicit 3D interaction sequences directly from the synchronized diffusion outputs, then feeds them back to establish a closed-loop feedback cycle. This architecture eliminates dependencies on predefined object models or explicit pose guidance while significantly enhancing video-motion consistency. Experimental results demonstrate our method's superiority over state-of-the-art approaches in generating high-fidelity, dynamically plausible HOI sequences, with notable generalization capabilities in unseen real-world scenarios.", "arxiv_id": "2506.02444v3", "arxiv_authors": ["Lingwei Dang", "Ruizhi Shao", "Hongwen Zhang", "Wei Min", "Yebin Liu", "Qingyao Wu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a470"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1010902, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8df"}, "filepath": "data/2510.22943v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993057208886389, "type": "Poster", "name": "Switchable Token-Specific Codebook Quantization For Face Image Compression", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119645", "abstract": "Codebook-based image compression has achieved lower bits per pixel (bpp) while maintaining high reconstruction quality. These approaches utilize a globally shared codebook to quantize and reconstruct each token, controlling the bpp by adjusting the number of tokens or the codebook size. However, these methods perform poorly on facial images at low bpp: reducing the number of tokens or the codebook size significantly degrades the recognition performance of reconstructed faces. By analyzing facial images, we observed that pictures sharing the same attributes (e.g., gender, age, race) often exhibit many common features. At the same time, different facial regions demonstrate distinct characteristics (e.g., nose vs. ear).Based on these observations, we propose a group-routed codebook quantization and reconstruction method. By recording the codebook group to which each token belongs with a small number of bits, our method can reduce the loss incurred when decreasing the size of each codebook group. This enables a larger total number of codebooks under a lower overall bpp, thereby enhancing the expressive power and improving reconstruction performance. 
On face recognition datasets, our method outperforms state-of-the-art approaches, achieving 87.56\\% and 91.66\\% at the same bpp (0.0234), and remains competitive even when the bpp is reduced to 0.0040 (66.02\\%).", "arxiv_id": "2510.22943v1", "arxiv_authors": ["Yongbo Wang", "Haonan Wang", "Guodong Mu", "Ruixin Zhang", "Jiaqi Chen", "Jingyun Zhang", "Jun Wang", "Yuan Xie", "Zhizhong Zhang", "Shouhong Ding"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a471"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 877440, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e0"}, "filepath": "data/2510.07723v2.png", "tags": [], "_media_type": "image", "_rand": 0.999174931408007, "type": "Poster", "name": "SyncHuman: Synchronizing 2D and 3D Diffusion Models for Single-view Human Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119432", "abstract": "Photorealistic 3D full-body human reconstruction from a single image is a critical yet challenging task for applications in films and video games due to inherent ambiguities and severe self-occlusions. While recent approaches leverage SMPL estimation and SMPL-conditioned image diffusion models to hallucinate novel views, they suffer from inaccurate 3D priors estimated from SMPL meshes and have difficulty in handling difficult human poses and reconstructing fine details.In this paper, we propose SyncHuman, a novel framework that combines 2D multiview diffusion and 3D native diffusion for the first time, enabling high-quality clothed human mesh reconstruction from single-view images even under challenging human poses.Multiview diffusion excels at capturing fine 2D details but struggles with structural consistency, whereas 3D native diffusion generates coarse yet structurally consistent 3D shapes. By integrating the complementary strengths of these two approaches, we develop a more effective generation framework. Specifically, we first jointly fine-tune the multiview diffusion model and the 3D native diffusion model with proposed pixel-aligned 2D-3D synchronization attention to produce geometrically aligned 3D shapes and 2D multiview images. To further improve details, we introduce a feature injection mechanism that lifts fine details from 2D multiview images onto the aligned 3D shapes, enabling accurate and high-fidelity reconstruction.Extensive experiments demonstrate that SyncHuman achieves robust and photorealistic 3D human reconstruction, even for images with challenging poses. 
Our method outperforms baseline methods in geometric accuracy and visual fidelity, demonstrating a promising direction for future 3D generation models.", "arxiv_id": "2510.07723v2", "arxiv_authors": ["Wenyue Chen", "Peng Li", "Wangguandong Zheng", "Chengfeng Zhao", "Mengfei Li", "Yaolong Zhu", "Zhiyang Dou", "Ronggang Wang", "Yuan Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a472"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2958218, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e1"}, "filepath": "data/2411.06780v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992692405748247, "type": "Poster", "name": "SynCL: A Synergistic Training Strategy with Instance-Aware Contrastive Learning for End-to-End Multi-Camera 3D Tracking", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118522", "abstract": "While existing query-based 3D end-to-end visual trackers integrate detection and tracking via the $\\textit{tracking-by-attention}$ paradigm, these two chicken-and-egg tasks encounter optimization difficulties when sharing the same parameters. Our findings reveal that these difficulties arise due to two inherent constraints on the self-attention mechanism, i.e., over-deduplication for object queries and self-centric attention for track queries. In contrast, removing self-attention mechanism not only minimally impacts regression predictions of the tracker, but also tends to generate more latent candidate boxes. Based on these analyses, we present SynCL, a novel plug-and-play synergistic training strategy designed to co-facilitate multi-task learning for detection and tracking. Specifically, we propose a Task-specific Hybrid Matching module for a weight-shared cross-attention-based decoder that matches the targets of track queries with multiple object queries to exploit promising candidates overlooked by the self-attention mechanism. To flexibly select optimal candidates for the one-to-many matching, we also design a Dynamic Query Filtering module controlled by model training status. Moreover, we introduce Instance-aware Contrastive Learning to break through the barrier of self-centric attention for track queries, effectively bridging the gap between detection and tracking. Without additional inference costs, SynCL consistently delivers improvements in various benchmarks and achieves state-of-the-art performance with $58.9\\%$ AMOTA on the nuScenes dataset. 
Code and raw results will be publicly available.", "arxiv_id": "2411.06780v3", "arxiv_authors": ["Shubo Lin", "Yutong Kou", "Zirui Wu", "Shaoru Wang", "Bing Li", "Weiming Hu", "Jin Gao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a473"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1130299, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e2"}, "filepath": "data/2506.07555v3.png", "tags": [], "_media_type": "image", "_rand": 0.9991868102714971, "type": "Poster", "name": "Synthesize Privacy-Preserving High-Resolution Images via Private Textual Intermediaries", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116772", "abstract": "Generating high-fidelity, differentially private (DP) synthetic images offers a promising route to share and analyze sensitive visual data without compromising individual privacy. However, existing DP image synthesis methods struggle to produce high-resolution outputs that faithfully capture the structure of the original data. In this paper, we introduce a novel method, referred to as Synthesis via Private Textual Intermediaries (SPTI), that can generate high-resolution images with strong privacy guarantees and easy adoptions. The key idea is to shift the challenge of DP image synthesis from the image domain to the text domain by leveraging state-of-the-art DP text generation methods. SPTI first summarizes each private image into a concise textual description using image-to-text models, then applies a modified Private Evolution algorithm to generate DP text, and finally reconstructs images using text-to-image models. Notably, SPTI requires no model training, only inferences with off-the-shelf models. Given a private dataset, SPTI produces synthetic images of substantially higher quality than prior DP approaches. On the LSUN Bedroom dataset, SPTI attains an FID $\\le$ 26.71 under $\\epsilon=1.0$, improving over Private Evolution\u2019s FID of 40.36. Similarly, on MM-CelebA-HQ, SPTI achieves an FID $\\le$ 33.27 at $\\epsilon=1.0$, compared to 57.01 from DP fine-tuning baselines. 
Overall, our results demonstrate that Synthesis via Private Textual Intermediaries provides a resource-efficient and proprietary-model-compatible framework for generating high-resolution DP synthetic images, greatly expanding access to private visual datasets.", "arxiv_id": "2506.07555v3", "arxiv_authors": ["Haoxiang Wang", "Zinan Lin", "Da Yu", "Huishuai Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a474"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1073094, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e3"}, "filepath": "data/2505.00703v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999115962116593, "type": "Poster", "name": "T2I-R1: Reinforcing Image Generation with Collaborative Semantic-level and Token-level CoT", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118320", "abstract": "Recent advancements in large language models have demonstrated how chain-of-thought (CoT) and reinforcement learning (RL) can improve performance. However, applying such reasoning strategies to the visual generation domain remains largely unexplored. In this paper, we present **T2I-R1**, a novel reasoning-enhanced text-to-image generation model, powered by RL with a bi-level CoT reasoning process. Specifically, we identify two levels of CoT that can be utilized to enhance different stages of generation: (1) the semantic-level CoT for high-level planning of the prompt and (2) the token-level CoT for low-level pixel processing during patch-by-patch generation. To better coordinate these two levels of CoT, we introduce **BiCoT-GRPO** with an ensemble of generation rewards, which seamlessly optimizes both generated CoTs within the same training step. By applying our reasoning strategies to the baseline model, Janus-Pro, we achieve superior performance with 13% improvement on T2I-CompBench and 19% improvement on the WISE benchmark, even surpassing the state-of-the-art model FLUX.1. All the training code is in the supplementary material and will be made public.", "arxiv_id": "2505.00703v2", "arxiv_authors": ["Dongzhi Jiang", "Ziyu Guo", "Renrui Zhang", "Zhuofan Zong", "Hao Li", "Le Zhuo", "Shilin Yan", "Pheng-Ann Heng", "Hongsheng Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a475"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1030993, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e4"}, "filepath": "data/2510.22366v1.png", "tags": [], "_media_type": "image", "_rand": 0.999878556635144, "type": "Poster", "name": "T2SMark: Balancing Robustness and Diversity in Noise-as-Watermark for Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119129", "abstract": "Diffusion models have advanced rapidly in recent years, producing high-fidelity images while raising concerns about intellectual property protection and the misuse of generative AI. 
Image watermarking for diffusion models, particularly Noise-as-Watermark (NaW) methods, embeds watermarks into latent representations drawn from a standard Gaussian distribution to preserve image quality. For detection, the generation process is inverted to recover the initial noise vector containing the watermark before extraction. However, existing NaW methods struggle to balance watermark robustness with generation diversity. Some methods achieve strong robustness by heavily constraining the initial noise sampling, which degrades user experience, while others preserve diversity but prove too fragile for real-world deployment. To address this issue, we propose T2SMark, a training-free watermarking scheme based on two-stage tail-truncated sampling. Unlike prior methods that simply map bits to positive or negative values, tail-truncated sampling excludes the easily flipped regions of the latent distribution, enhancing robustness with even less redundancy. Our two-stage framework then compensates sampling diversity by incorporating a random key into both encryption pipelines\u2014first as the payload and then as the encryption key. We evaluate T2SMark on diffusion models with both U-Net and DiT backbones. Extensive experiments show that it achieves an optimal balance between robustness and diversity.", "arxiv_id": "2510.22366v1", "arxiv_authors": ["Jindong Yang", "Han Fang", "Weiming Zhang", "Nenghai Yu", "Kejiang Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a476"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.652Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1105265, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e5"}, "filepath": "data/2504.12908v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991494580842988, "type": "Poster", "name": "Taccel: Scaling Up Vision-based Tactile Robotics via High-performance GPU Simulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118179", "abstract": "Tactile sensing is crucial for achieving human-level robotic capabilities in manipulation tasks. VBTSs have emerged as a promising solution, offering high spatial resolution and cost-effectiveness by sensing contact through camera-captured deformation patterns of elastic gel pads. However, these sensors' complex physical characteristics and visual signal processing requirements present unique challenges for robotic applications. The lack of efficient and accurate simulation tools for VBTS has significantly limited the scale and scope of tactile robotics research. Here we present Taccel, a high-performance simulation platform that integrates IPC and ABD to model robots, tactile sensors, and objects with both accuracy and unprecedented speed, achieving an 18-fold acceleration over real-time across thousands of parallel environments. Unlike previous simulators that operate at sub-real-time speeds with limited parallelization, Taccel provides precise physics simulation and realistic tactile signals while supporting flexible robot-sensor configurations through user-friendly APIs. Through extensive validation in object recognition, robotic grasping, and articulated object manipulation, we demonstrate precise simulation and successful sim-to-real transfer. 
These capabilities position Taccel as a powerful tool for scaling up tactile robotics research and development. By enabling large-scale simulation and experimentation with tactile sensing, Taccel accelerates the development of more capable robotic systems, potentially transforming how robots interact with and understand their physical environment.", "arxiv_id": "2504.12908v2", "arxiv_authors": ["Yuyang Li", "Wenxin Du", "Chang Yu", "Puhao Li", "Zihang Zhao", "Tengyu Liu", "Chenfanfu Jiang", "Yixin Zhu", "Siyuan Huang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a477"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1090891, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e6"}, "filepath": "data/2507.17664v1.png", "tags": [], "_media_type": "image", "_rand": 0.999692414610713, "type": "Poster", "name": "Talk2Event: Grounded Understanding of Dynamic Scenes from Event Cameras", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116788", "abstract": "Event cameras offer microsecond-level latency and robustness to motion blur, making them ideal for understanding dynamic environments. Yet, connecting these asynchronous streams to human language remains an open challenge. We introduce Talk2Event, the first large-scale benchmark for language-driven object grounding in event-based perception. Built from real-world driving data, Talk2Event provides over 30,000 validated referring expressions, each enriched with four grounding attributes -- appearance, status, relation to viewer, and relation to other objects -- bridging spatial, temporal, and relational reasoning. To fully exploit these cues, we propose EventRefer, an attribute-aware grounding framework that dynamically fuses multi-attribute representations through a Mixture of Event-Attribute Experts (MoEE). Our method adapts to different modalities and scene dynamics, achieving consistent gains over state-of-the-art baselines in event-only, frame-only, and event-frame fusion settings. We hope our dataset and approach will establish a foundation for advancing multimodal, temporally-aware, and language-driven perception in real-world robotics and autonomy.", "arxiv_id": "2507.17664v1", "arxiv_authors": ["Lingdong Kong", "Dongyue Lu", "Ao Liang", "Rong Li", "Yuhao Dong", "Tianshuai Hu", "Lai Xing Ng", "Wei Tsang Ooi", "Benoit R. Cottereau"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a478"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2674671, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e7"}, "filepath": "data/2510.07249v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993854475299587, "type": "Poster", "name": "TalkCuts: A Large-Scale Dataset for Multi-Shot Human Speech Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121841", "abstract": "In this work, we present TalkCuts, a large-scale dataset designed to facilitate the study of multi-shot human speech video generation. 
Unlike existing datasets that focus on single-shot, static viewpoints, TalkCuts offers 164k clips totaling over 500 hours of high-quality 1080P human speech videos with diverse camera shots, including close-up, half-body, and full-body views. The dataset includes detailed textual descriptions, 2D keypoints and 3D SMPL-X motion annotations, covering over 10k identities, enabling multimodal learning and evaluation. As a first attempt to showcase the value of the dataset, we present Orator, an LLM-guided multi-modal generation framework as a simple baseline, where the language model functions as a multi-faceted director, orchestrating detailed specifications for camera transitions, speaker gesticulations, and vocal modulation. This architecture enables the synthesis of coherent long-form videos through our integrated multi-modal video generation module. Extensive experiments in both pose-guided and audio-driven settings show that training on TalkCuts significantly enhances the cinematographic coherence and visual appeal of generated multi-shot speech videos. We believe TalkCuts provides a strong foundation for future work in controllable, multi-shot speech video generation and broader multimodal learning. The dataset, tools, and evaluation protocols will be publicly released to facilitate community progress.", "arxiv_id": "2510.07249v2", "arxiv_authors": ["Jiaben Chen", "Zixin Wang", "Ailing Zeng", "Yang Fu", "Xueyang Yu", "Siyuan Cen", "Julian Tanke", "Yihang Chen", "Koichi Saito", "Yuki Mitsufuji", "Chuang Gan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a479"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1136361, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e8"}, "filepath": "data/2507.09082v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995405658406066, "type": "Poster", "name": "Taming generative world models for zero-shot optical flow extraction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116626", "abstract": "Extracting dense motion (optical flow) from videos remains a core computer-vision problem. Motivated by the recent success of large general-purpose models, we ask whether frozen self-supervised video world models trained only to predict future frames can be prompted, without fine-tuning, to output flow. Prior attempts to read out depth or illumination from video generators required fine-tuning; that strategy is ill-suited for flow, where labeled data are scarce and synthetic datasets suffer from a sim-to-real gap. We study several popular generative model architectures and find that successful zero-shot flow extraction requires three model properties: (1) distributional prediction of future frames (avoiding blurry or noisy outputs); (2) factorized latents that treat each spatio-temporal patch independently; and (3) random-access decoding that can condition on any subset of future pixels. These criteria are met by the recently introduced Local Random Access Sequence (LRAS) architecture. Building on LRAS, we propose KL-tracing: a procedure for injecting a small, local perturbation into the first frame, rolling out the model one step, and computing the Kullback\u2013Leibler divergence between perturbed and unperturbed predictive distributions. 
The KL peak traces the displacement field, yielding optical flow in a single forward pass. Our method outperforms state-of-the-art models on real-world TAP-Vid DAVIS dataset (16.6% relative improvement for endpoint error) and synthetic TAP-Vid Kubric (4.7% relative improvement), despite being trained on real-world videos. Our results indicate that prompting controllable, self-supervised world models is a scalable and effective alternative to supervised or photometric-loss approaches for high-quality optical flow.", "arxiv_id": "2507.09082v1", "arxiv_authors": ["Seungwoo Kim", "Khai Loong Aw", "Klemen Kotar", "Cristobal Eyzaguirre", "Wanhee Lee", "Yunong Liu", "Jared Watrous", "Stefan Stojanov", "Juan Carlos Niebles", "Jiajun Wu", "Daniel L. K. Yamins"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a47a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2541126, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8e9"}, "filepath": "data/2504.14717v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993444083774187, "type": "Poster", "name": "TAPIP3D: Tracking Any Point in Persistent 3D Geometry", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117634", "abstract": "We introduce TAPIP3D, a novel approach for long-term 3D point tracking in monocular RGB and RGB-D videos. TAPIP3D represents videos as camera-stabilized spatio-temporal feature clouds, leveraging depth and camera motion information to lift 2D video features into a 3D world space where camera movement is effectively canceled out. Within this stabilized 3D representation, TAPIP3D iteratively refines multi-frame motion estimates, enabling robust point tracking over long time horizons. To handle the irregular structure of 3D point distributions, we propose a 3D Neighborhood-to-Neighborhood (N2N) attention mechanism\u2014a 3D-aware contextualization strategy that builds informative, spatially coherent feature neighborhoods to support precise trajectory estimation. Our 3D-centric formulation significantly improves performance over existing 3D point tracking methods and even surpasses state-of-the-art 2D pixel trackers in accuracy when reliable depth is available. The model supports inference in both camera-centric (unstabilized) and world-centric (stabilized) coordinates, with experiments showing that compensating for camera motion leads to substantial gains in tracking robustness. By replacing the conventional 2D square correlation windows used in prior 2D and 3D trackers with a spatially grounded 3D attention mechanism, TAPIP3D achieves strong and consistent results across multiple 3D point tracking benchmarks. Our code and trained checkpoints will be public.", "arxiv_id": "2504.14717v2", "arxiv_authors": ["Bowei Zhang", "Lei Ke", "Adam W. 
Harley", "Katerina Fragkiadaki"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a47b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1009389, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ea"}, "filepath": "data/2506.00996v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998048568578276, "type": "Poster", "name": "Temporal In\u2011Context Fine\u2011Tuning for Versatile Control of Video Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117922", "abstract": "Recent advances in text-to-video diffusion models have enabled high-quality video synthesis, but controllable generation remains challenging\u2014particularly under limited data and compute. Existing fine-tuning methods often rely on external encoders or architectural modifications, which demand large datasets and are typically restricted to spatially aligned conditioning, limiting flexibility and scalability. In this work, we introduce Temporal In-Context Fine-Tuning (TIC-FT), an efficient and versatile approach for adapting pretrained video diffusion models to diverse conditional generation tasks. Our key idea is to concatenate condition and target frames along the temporal axis and insert intermediate buffer frames with progressively increasing noise levels. These buffer frames enable smooth transitions, aligning the fine-tuning process with the pretrained model\u2019s temporal dynamics. TIC-FT requires no architectural changes and achieves strong performance with as few as 10\u201330 training samples. We validate our method across a range of tasks\u2014including image-to-video and video-to-video generation\u2014using large-scale base models such as CogVideoX-5B and Wan-14B. Extensive experiments show that TIC-FT outperforms existing baselines in both condition fidelity and visual quality, while remaining highly efficient in both training and inference.", "arxiv_id": "2506.00996v1", "arxiv_authors": ["Kinam Kim", "Junha Hyung", "Jaegul Choo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a47c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 979058, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8eb"}, "filepath": "data/2502.05454v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993524621457396, "type": "Poster", "name": "Temporal Representation Alignment: Successor Features Enable Emergent Compositionality in Robot Instruction Following", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115132", "abstract": "Effective task representations should facilitate compositionality, such that after learning a variety of basic tasks, an agent can perform compound tasks consisting of multiple steps simply by composing the representations of the constituent steps together. While this is conceptually simple and appealing, it is not clear how to automatically learn representations that enable this sort of compositionality. 
We show that learning to associate the representations of current and future states with a temporal alignment loss can improve compositional generalization, even in the absence of any explicit subtask planning or reinforcement learning. We evaluate our approach across diverse robotic manipulation tasks as well as in simulation, showing substantial improvements for tasks specified with either language or goal images.", "arxiv_id": "2502.05454v2", "arxiv_authors": ["Vivek Myers", "Bill Chunyuan Zheng", "Anca Dragan", "Kuan Fang", "Sergey Levine"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a47d"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1649469, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ec"}, "filepath": "data/2507.17336v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996243734982982, "type": "Poster", "name": "Temporal Smoothness-Aware Rate-Distortion Optimized 4D Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115526", "abstract": "Dynamic 4D Gaussian Splatting (4DGS) effectively extends the high-speed rendering capabilities of 3D Gaussian Splatting (3DGS) to represent volumetric videos. However, the large number of Gaussians, substantial temporal redundancies, and especially the absence of an entropy-aware compression framework result in large storage requirements. Consequently, this poses significant challenges for practical deployment, efficient edge-device processing, and data transmission. In this paper, we introduce a novel end-to-end RD-optimized compression framework tailored for 4DGS, aiming to enable flexible, high-fidelity rendering across varied computational platforms. Leveraging Fully Explicit Dynamic Gaussian Splatting (Ex4DGS), one of the state-of-the-art 4DGS methods, as our baseline, we start from the existing 3DGS compression methods for compatibility while effectively addressing additional challenges introduced by the temporal axis. In particular, instead of storing motion trajectories independently per point, we employ a wavelet transform to reflect the real-world smoothness prior, significantly enhancing storage efficiency. This approach yields significantly improved compression ratios and provides a user-controlled balance between compression efficiency and rendering quality. Extensive experiments demonstrate the effectiveness of our method, achieving up to 91$\times$ compression compared to the original Ex4DGS model while maintaining high visual fidelity. 
These results highlight the applicability of our framework for real-time dynamic scene rendering in diverse scenarios, from resource-constrained edge devices to high-performance environments.", "arxiv_id": "2507.17336v2", "arxiv_authors": ["Hyeongmin Lee", "Kyungjune Baek"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a47e"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1084241, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ed"}, "filepath": "data/2509.18056v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990818660626906, "type": "Poster", "name": "TempSamp-R1: Effective Temporal Sampling with Reinforcement Fine-Tuning for Video LLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115934", "abstract": "This paper introduces TempSamp-R1, a new reinforcement fine-tuning framework designed to improve the effectiveness of adapting multimodal large language models (MLLMs) to video temporal grounding tasks. We reveal that existing reinforcement learning methods, such as Group Relative Policy Optimization (GRPO), rely on on-policy sampling for policy updates. However, in tasks with large temporal search spaces, this strategy becomes both inefficient and limited in performance, as it often fails to identify temporally accurate solutions. To address this limitation, TempSamp-R1 leverages ground-truth annotations as off-policy supervision to provide temporally precise guidance, effectively compensating for the sparsity and misalignment in on-policy solutions. To further stabilize training and reduce variance in reward-based updates, TempSamp-R1 provides a non-linear soft advantage computation method that dynamically reshapes the reward feedback via an asymmetric transformation. By employing a hybrid Chain-of-Thought (CoT) training paradigm, TempSamp-R1 optimizes a single unified model to support both CoT and non-CoT inference modes, enabling efficient handling of queries with varying reasoning complexity. Experimental results demonstrate that TempSamp-R1 outperforms GRPO-based baselines, establishing new state-of-the-art performance on benchmark datasets:Charades-STA (R1\\@0.7: 52.9\\%, +**2.7**\\%), ActivityNet Captions (R1\\@0.5: 56.0\\%, +**5.3**\\%), and QVHighlights (mAP: 30.0\\%, +**3.0**\\%). Moreover, TempSamp-R1 shows robust few-shot generalization capabilities under limited data. 
The code will be released publicly.", "arxiv_id": "2509.18056v2", "arxiv_authors": ["Yunheng Li", "Jing Cheng", "Shaoyong Jia", "Hangyi Kuang", "Shaohui Jiao", "Qibin Hou", "Ming-Ming Cheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a47f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1277957, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ee"}, "filepath": "data/2506.13750v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999791999097063, "type": "Poster", "name": "Test3R: Learning to Reconstruct 3D at Test Time", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117756", "abstract": "Dense matching methods like DUST3R regress pairwise pointmaps for 3D reconstruction. However, the reliance on pairwise prediction and the limited generalization capability inherently restrict the global geometric consistency. In this work, we introduce \\textbf{Test3R}, a surprisingly simple test-time learning technique that significantly boosts geometric accuracy. Using image triplets ($I_1,I_2,I_3$), Test3R generates reconstructions from pairs ($I_1,I_2$) and ($I_1,I_3$). The core idea is to optimize the network at test time via a self-supervised objective: maximizing the geometric consistency between these two reconstructions relative to the common image $I_1$. This ensures the model produces cross-pair consistent outputs, regardless of the inputs. Extensive experiments demonstrate that our technique significantly outperforms previous state-of-the-art methods on the 3D reconstruction and multi-view depth estimation tasks. Moreover, it is universally applicable and nearly cost-free, making it easily applied to other models and implemented with minimal test-time training overhead and parameter footprint.", "arxiv_id": "2506.13750v1", "arxiv_authors": ["Yuheng Yuan", "Qiuhong Shen", "Shizun Wang", "Xingyi Yang", "Xinchao Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a480"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3235251, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ef"}, "filepath": "data/2507.12508v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995909471649317, "type": "Poster", "name": "Test-Time Scaling with World Models for Spatial Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118581", "abstract": "Spatial reasoning in 3D space is central to human cognition and indispensable for embodied tasks such as navigation and manipulation. However, state-of-the-art vision\u2013language models (VLMs) struggle frequently with tasks as simple as anticipating how a scene will look after an egocentric motion: they perceive 2D images but lack an internal model of 3D dynamics. We therefore propose SpatialNavigator, a test-time scaling framework that grants a VLM with this missing capability by coupling it to a controllable world model based on video diffusion. 
The VLM iteratively sketches a concise camera trajectory, while the world model synthesizes the corresponding view at each step. The VLM then reasons over this multi-view evidence gathered during the interactive exploration. Without any fine-tuning, our SpatialNavigator achieves an average performance boost of over 8\% on the representative spatial reasoning benchmark SAT, showing that pairing VLMs with world models for test-time scaling offers a simple, plug-and-play route to robust 3D reasoning. Meanwhile, our method also improves upon test-time inference VLMs trained through reinforcement learning, demonstrating the potential of leveraging world models for test-time scaling.", "arxiv_id": "2507.12508v1", "arxiv_authors": ["Yuncong Yang", "Jiageng Liu", "Zheyuan Zhang", "Siyuan Zhou", "Reuben Tan", "Jianwei Yang", "Yilun Du", "Chuang Gan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a481"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1117723, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f0"}, "filepath": "data/2506.04641v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993781770391058, "type": "Poster", "name": "Text-Aware Real-World Image Super-Resolution via Diffusion Model with Joint Segmentation Decoders", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115913", "abstract": "The introduction of generative models has significantly advanced image super-resolution (SR) in handling real-world degradations. However, they often incur fidelity-related issues, particularly distorting textual structures. In this paper, we introduce a novel diffusion-based SR framework, namely TADiSR, which integrates text-aware attention and joint segmentation decoders to recover not only natural details but also the structural fidelity of text regions in degraded real-world images. Moreover, we propose a complete pipeline for synthesizing high-quality images with fine-grained full-image text masks, combining realistic foreground text regions with detailed background content. Extensive experiments demonstrate that our approach substantially enhances text legibility in super-resolved images, achieving state-of-the-art performance across multiple evaluation metrics and exhibiting strong generalization to real-world scenarios. 
Our code will be open-sourced.", "arxiv_id": "2506.04641v1", "arxiv_authors": ["Qiming Hu", "Linlong Fan", "Yiyan Luo", "Yuhang Yu", "Xiaojie Guo", "Qingnan Fan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a482"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3261001, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f1"}, "filepath": "data/2410.12787v1.png", "tags": [], "_media_type": "image", "_rand": 0.999470987387134, "type": "Poster", "name": "The Curse of Multi-Modalities: Evaluating Hallucinations of Large Multimodal Models across Language, Visual, and Audio", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121756", "abstract": "Recent advancements in large multimodal models (LMMs) have significantly enhanced performance across diverse tasks, with ongoing efforts to further integrate additional modalities such as video and audio. However, most existing LMMs remain vulnerable to hallucinations, the discrepancy between the factual multimodal input and the generated textual output, which has limited their applicability in various real-world scenarios. This paper presents the first systematic investigation of hallucinations in LMMs involving the three most common modalities: language, visual, and audio. Our study reveals two key contributors to hallucinations: overreliance on unimodal priors and spurious inter-modality correlations. To address these challenges, we introduce the benchmark The Curse of Multi-Modalities (CMM), which comprehensively evaluates hallucinations in LMMs, providing a detailed analysis of their underlying issues. Our findings highlight key vulnerabilities, including imbalances in modality integration and biases from training data, underscoring the need for balanced cross-modal learning and enhanced hallucination mitigation strategies. Based on our observations and findings, we suggest potential research directions that could enhance the reliability of LMMs.", "arxiv_id": "2410.12787v1", "arxiv_authors": ["Sicong Leng", "Yun Xing", "Zesen Cheng", "Yang Zhou", "Hang Zhang", "Xin Li", "Deli Zhao", "Shijian Lu", "Chunyan Miao", "Lidong Bing"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a483"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1118851, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f2"}, "filepath": "data/2409.12394v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990647986930725, "type": "Poster", "name": "The Fluorescent Veil: A Stealthy and Effective Physical Adversarial Patch Against Traffic Sign Recognition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117893", "abstract": "Recently, traffic sign recognition (TSR) systems have become a prominent target for physical adversarial attacks. These attacks typically rely on conspicuous stickers and projections, or using invisible light and acoustic signals that can be easily blocked. 
In this paper, we introduce a novel attack medium, i.e., fluorescent ink, to design a stealthy and effective physical adversarial patch, namely FIPatch, to advance the state-of-the-art. Specifically, we first model the fluorescence effect in the digital domain to identify the optimal attack settings, which guide the real-world fluorescence parameters. By applying a carefully designed fluorescence perturbation to the target sign, the attacker can later trigger a fluorescent effect using invisible ultraviolet light, causing the TSR system to misclassify the sign and potentially leading to traffic accidents. We conducted a comprehensive evaluation to investigate the effectiveness of FIPatch, which shows a success rate of 98.31% in low-light conditions. Furthermore, our attack successfully bypasses five popular defenses and achieves a success rate of 96.72%.", "arxiv_id": "2409.12394v2", "arxiv_authors": ["Shuai Yuan", "Xingshuo Han", "Hongwei Li", "Guowen Xu", "Wenbo Jiang", "Tao Ni", "Qingchuan Zhao", "Yuguang Fang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a484"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.653Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1115578, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f3"}, "filepath": "data/2412.06646v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993613775247706, "type": "Poster", "name": "The Narrow Gate: Localized Image-Text Communication in Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116175", "abstract": "Recent advances in multimodal training have significantly improved the integration of image understanding and generation within a unified model. This study investigates how vision-language models (VLMs) handle image-understanding tasks, specifically focusing on how visual information is processed and transferred to the textual domain. We compare VLMs that generate both images and text with those that output only text, highlighting key differences in information flow. We find that in models with multimodal outputs, image and text embeddings are more separated within the residual stream. Additionally, models vary in how information is exchanged from visual to textual tokens. VLMs that only output text exhibit a distributed communication pattern, where information is exchanged through multiple image tokens. In contrast, models trained for image and text generation tend to rely on a single token that acts as a narrow gate for visual information. We demonstrate that ablating this single token significantly deteriorates performance on image understanding tasks. 
Furthermore, modifying this token enables effective steering of the image semantics, showing that targeted, local interventions can reliably control the model's global behavior.", "arxiv_id": "2412.06646v3", "arxiv_authors": ["Alessandro Serra", "Francesco Ortu", "Emanuele Panizon", "Lucrezia Valeriani", "Lorenzo Basile", "Alessio Ansuini", "Diego Doimo", "Alberto Cazzaniga"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a485"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 967088, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f4"}, "filepath": "data/2508.01119v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996815346208353, "type": "Poster", "name": "The Promise of RL for Autoregressive Image Editing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117430", "abstract": "While image generation techniques are now capable of producing high quality images that respect prompts which span multiple sentences, the task of text-guided image editing remains a challenge. Even edit requests that consist of only a few words often fail to be executed correctly. We explore three strategies to enhance performance on a wide range of image editing tasks: supervised fine-tuning (SFT), reinforcement learning (RL), and Chain-of-Thought (CoT) reasoning. In order to study all these components in one consistent framework we adopt an autoregressive multimodal model that processes textual and visual tokens in a unified manner. We find RL combined with a large multi-modal LLM verifier to be the most effective of these strategies. As a result, we release EARL: **E**diting with **A**utoregression and **RL**, a strong RL-based image editing model that performs competitively on a diverse range of edits compared to strong baselines with much more training data. Thus, EARL pushes the frontier of autoregressive multimodal models on image editing.", "arxiv_id": "2508.01119v2", "arxiv_authors": ["Saba Ahmadi", "Rabiul Awal", "Ankur Sikarwar", "Amirhossein Kazemnejad", "Ge Ya Luo", "Juan A. Rodriguez", "Sai Rajeswar", "Siva Reddy", "Christopher Pal", "Benno Krojer", "Aishwarya Agrawal"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a486"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1097064, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f5"}, "filepath": "data/2509.24878v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999740664731775, "type": "Poster", "name": "ThermalGen: Style-Disentangled Flow-Based Generative Models for RGB-to-Thermal Image Translation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116089", "abstract": "Paired RGB-thermal data is crucial for visual-thermal sensor fusion and cross-modality tasks, including important applications such as multi-modal image alignment and retrieval. However, the scarcity of synchronized and calibrated RGB-thermal image pairs presents a major obstacle to progress in these areas. 
To overcome this challenge, RGB-to-Thermal (RGB-T) image translation has emerged as a promising solution, enabling the synthesis of thermal images from abundant RGB datasets for training purposes. In this study, we propose ThermalGen, an adaptive flow-based generative model for RGB-T image translation, incorporating an RGB image conditioning architecture and a style-disentangled mechanism. To support large-scale training, we curated eight public satellite-aerial, aerial, and ground RGB-T paired datasets, and introduced three new large-scale satellite-aerial RGB-T datasets\u2014DJI-day, BosonPlus-day, and BosonPlus-night\u2014captured across diverse times, sensor types, and geographic regions. Extensive evaluations across multiple RGB-T benchmarks demonstrate that ThermalGen achieves comparable or superior translation performance compared to existing GAN-based and diffusion-based methods. To our knowledge, ThermalGen is the first RGB-T image translation model capable of synthesizing thermal images that reflect significant variations in viewpoints, sensor characteristics, and environmental conditions. Code, model, and datasets will be publicly released.", "arxiv_id": "2509.24878v1", "arxiv_authors": ["Jiuhong Xiao", "Roshan Nayak", "Ning Zhang", "Daniel Tortei", "Giuseppe Loianno"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a487"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1036939, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f6"}, "filepath": "data/2510.23225v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995012875154251, "type": "Poster", "name": "Through the Lens: Benchmarking Deepfake Detectors Against Moir\u00e9-Induced Distortions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121737", "abstract": "Deepfake detection remains a pressing challenge, particularly in real-world settings where smartphone-captured media from digital screens often introduces Moir\u00e9 artifacts that can distort detection outcomes. This study systematically evaluates state-of-the-art (SOTA) deepfake detectors on Moir\u00e9-affected videos\u2014an issue that has received little attention. We collected a dataset of 12,832 videos, spanning 35.64 hours, from Celeb-DF, DFD, DFDC, UADFV, and FF++ datasets, capturing footage under diverse real-world conditions, including varying screens, smartphones, lighting setups, and camera angles. To further examine the influence of Moir\u00e9 patterns on deepfake detection, we conducted additional experiments using our DeepMoir\u00e9Fake, referred to as (DMF) dataset, and two synthetic Moir\u00e9 generation techniques. Across 15 top-performing detectors, our results show that Moir\u00e9 artifacts degrade performance by as much as 25.4\\%, while synthetically generated Moir\u00e9 patterns lead to a 21.4\\% drop in accuracy. Surprisingly, demoir\u00e9ing methods, intended as a mitigation approach, instead worsened the problem, reducing accuracy by up to 16\\%. These findings underscore the urgent need for detection models that can robustly handle Moir\u00e9 distortions alongside other real-world challenges, such as compression, sharpening, and blurring. 
By introducing the DMF dataset, we aim to drive future research toward closing the gap between controlled experiments and practical deepfake detection.", "arxiv_id": "2510.23225v1", "arxiv_authors": ["Razaib Tariq", "Minji Heo", "Simon S. Woo", "Shahroz Tariq"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a488"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1087457, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f7"}, "filepath": "data/2507.07860v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994796108685363, "type": "Poster", "name": "THUNDER: Tile-level Histopathology image UNDERstanding benchmark", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121557", "abstract": "Progress in a research field can be hard to assess, in particular when many concurrent methods are proposed in a short period of time. This is the case in digital pathology, where many foundation models have been released recently to serve as feature extractors for tile-level images, being used in a variety of downstream tasks, both for tile- and slide-level problems. Benchmarking available methods then becomes paramount to get a clearer view of the research landscape. In particular, in critical domains such as healthcare, a benchmark should not only focus on evaluating downstream performance, but also provide insights about the main differences between methods, and importantly, further consider uncertainty and robustness to ensure a reliable usage of proposed models. For these reasons, we introduce *THUNDER*, a tile-level benchmark for digital pathology foundation models, allowing for efficient comparison of many models on diverse datasets with a series of downstream tasks, studying their feature spaces and assessing the robustness and uncertainty of predictions informed by their embeddings. *THUNDER* is a fast, easy-to-use, dynamic benchmark that can already support a large variety of state-of-the-art foundation models, as well as local user-defined models for direct tile-based comparison. In this paper, we provide a comprehensive comparison of 23 foundation models on 16 different datasets covering diverse tasks, feature analysis, and robustness. 
The code for *THUNDER* is publicly available at https://github.com/MICS-Lab/thunder/tree/neurips2025_datasets_and_benchmark.", "arxiv_id": "2507.07860v2", "arxiv_authors": ["Pierre Marza", "Leo Fillioux", "Sofi\u00e8ne Boutaj", "Kunal Mahatha", "Christian Desrosiers", "Pablo Piantanida", "Jose Dolz", "Stergios Christodoulidis", "Maria Vakalopoulou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a489"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 961892, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f8"}, "filepath": "data/2510.16321v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992926753356463, "type": "Poster", "name": "Time-Embedded Algorithm Unrolling for Computational MRI", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119206", "abstract": "Algorithm unrolling methods have proven powerful for solving the regularized least squares problem in computational magnetic resonance imaging (MRI). These approaches unfold an iterative algorithm with a fixed number of iterations, typically alternating between a neural network-based proximal operator for regularization, a data fidelity operation and auxiliary updates with learnable parameters. While the connection to optimization methods dictates that the proximal operator network should be shared across unrolls, this can introduce artifacts or blurring. Heuristically, practitioners have shown that using distinct networks may be beneficial, but this significantly increases the number of learnable parameters, making it challenging to prevent overfitting. To address these shortcomings, by taking inspiration from proximal operators with varying thresholds in approximate message passing (AMP) and the success of time-embedding in diffusion models, we propose a time-embedded algorithm unrolling scheme for inverse problems. Specifically, we introduce a novel perspective on the iteration-dependent proximal operation in vector AMP (VAMP) and the subsequent Onsager correction in the context of algorithm unrolling, framing them as a time-embedded neural network. Similarly, the scalar weights in the data fidelity operation and its associated Onsager correction are cast as time-dependent learnable parameters. Our extensive experiments on the fastMRI dataset, spanning various acceleration rates and datasets, demonstrate that our method effectively reduces aliasing artifacts and mitigates noise amplification, achieving state-of-the-art performance. 
Furthermore, we show that our time-embedding strategy extends to existing algorithm unrolling approaches, enhancing reconstruction quality without increasing the computational complexity significantly.", "arxiv_id": "2510.16321v1", "arxiv_authors": ["Junno Yun", "Ya\u015far Utku Al\u00e7alar", "Mehmet Ak\u00e7akaya"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a48a"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1036920, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8f9"}, "filepath": "data/2503.13377v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992366667388044, "type": "Poster", "name": "Time-R1: Post-Training Large Vision Language Model for Temporal Video Grounding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116757", "abstract": "Temporal Video Grounding (TVG), the task of locating specific video segments based on language queries, is a core challenge in long-form video understanding. While recent Large Vision-Language Models (LVLMs) have shown early promise in tackling TVG through supervised fine-tuning (SFT), their ability to generalize remains limited. To address this, we propose a novel post-training framework that enhances the generalization capabilities of LVLMs via reinforcement learning (RL). Specifically, our contributions span three key directions: (1) Time-R1: we introduce a reasoning-guided post-training framework via RL with verifiable reward to enhance capabilities of LVLMs on the TVG task. (2) TimeRFT: we explore post-training strategies on our curated RL-friendly dataset, which trains the model to progressively comprehend more difficult samples, leading to better generalization and stable training processes. (3) TVGBench: we carefully construct a small but comprehensive and balanced benchmark suitable for LVLM evaluation, which is sourced from available public benchmarks. Extensive experiments demonstrate that Time-R1 achieves state-of-the-art performance across multiple downstream datasets using significantly less training data than prior LVLM approaches, while preserving and improving its general video understanding capabilities. 
Code: https://anonymous.4open.science/r/Time-R1/README.md.", "arxiv_id": "2503.13377v3", "arxiv_authors": ["Ye Wang", "Ziheng Wang", "Boshen Xu", "Yang Du", "Kejun Lin", "Zihan Xiao", "Zihao Yue", "Jianzhong Ju", "Liang Zhang", "Dingyi Yang", "Xiangnan Fang", "Zewen He", "Zhenbo Luo", "Wenxuan Wang", "Junqi Lin", "Jian Luan", "Qin Jin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a48b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1065546, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8fa"}, "filepath": "data/2505.13925v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990702056514906, "type": "Poster", "name": "Time Reversal Symmetry for Efficient Robotic Manipulations in Deep Reinforcement Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118280", "abstract": "Symmetry is pervasive in robotics and has been widely exploited to improve sample efficiency in deep reinforcement learning (DRL). However, existing approaches primarily focus on spatial symmetries\u2014such as reflection, rotation, and translation\u2014while largely neglecting temporal symmetries. To address this gap, we explore time reversal symmetry, a form of temporal symmetry commonly found in robotics tasks such as door opening and closing. We propose Time Reversal symmetry enhanced Deep Reinforcement Learning (TR-DRL), a framework that combines trajectory reversal augmentation and time reversal guided reward shaping to efficiently solve temporally symmetric tasks. Our method generates reversed transitions from fully reversible transitions, identified by a proposed dynamics-consistent filter, to augment the training data. For partially reversible transitions, we apply reward shaping to guide learning, according to successful trajectories from the reversed task. Extensive experiments on the Robosuite and MetaWorld benchmarks demonstrate that TR-DRL is effective in both single-task and multi-task settings, achieving higher sample efficiency and stronger final performance compared to baseline methods.", "arxiv_id": "2505.13925v2", "arxiv_authors": ["Yunpeng Jiang", "Jianshu Hu", "Paul Weng", "Yutong Ban"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a48c"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 979320, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8fb"}, "filepath": "data/2507.06543v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990853204158171, "type": "Poster", "name": "Token Bottleneck: One Token to Remember Dynamics", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115268", "abstract": "Deriving compact and temporally aware visual representations from dynamic scenes is essential for successful execution of sequential scene understanding tasks such as visual tracking and robotic manipulation. 
In this paper, we introduce Token Bottleneck (ToBo), a simple yet intuitive self-supervised learning pipeline that squeezes a scene into a bottleneck token and predicts the subsequent scene using minimal patches as hints. The ToBo pipeline facilitates the learning of sequential scene representations by conservatively encoding the reference scene into a compact bottleneck token during the squeeze step. In the expansion step, we guide the model to capture temporal dynamics by predicting the target scene using the bottleneck token along with few target patches as hints. This design encourages the vision backbone to embed temporal dependencies, thereby enabling understanding of dynamic transitions across scenes. Extensive experiments in diverse sequential tasks, including video label propagation and robot manipulation in simulated environments demonstrate the superiority of ToBo over baselines. Moreover, deploying our pre-trained model on physical robots confirms its robustness and effectiveness in real-world environments. We further validate the scalability of ToBo across different model scales.", "arxiv_id": "2507.06543v1", "arxiv_authors": ["Taekyung Kim", "Dongyoon Han", "Byeongho Heo", "Jeongeun Park", "Sangdoo Yun"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a48d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1016883, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8fc"}, "filepath": "data/2506.10036v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992164482130922, "type": "Poster", "name": "Token Perturbation Guidance for Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118293", "abstract": "Classifier-free guidance (CFG) has become an essential component of modern diffusion models to enhance both generation quality and alignment with input conditions. However, CFG requires specific training procedures and is limited to conditional generation. To address these limitations, we propose Token Perturbation Guidance (TPG), a novel method that applies perturbation matrices directly to intermediate token representations within the diffusion network. TPG employs a norm-preserving shuffling operation to provide effective and stable guidance signals that improve generation quality without architectural changes. As a result, TPG is training-free and agnostic to input conditions, making it readily applicable to both conditional and unconditional generation. We also analyze the guidance term provided by TPG and show that its effect on sampling more closely resembles CFG compared to existing training-free guidance techniques. We extensively evaluate TPG on SDXL and Stable Diffusion 2.1, demonstrating nearly a 2x improvement in FID for unconditional generation over the SDXL baseline and showing that TPG closely matches CFG in prompt alignment. 
Thus, TPG represents a general, condition-agnostic guidance method that extends CFG-like benefits to a broader class of diffusion models.", "arxiv_id": "2506.10036v1", "arxiv_authors": ["Javad Rajabi", "Soroush Mehraban", "Seyedmorteza Sadat", "Babak Taati"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a48e"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1106530, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8fd"}, "filepath": "data/2510.20162v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996472718731929, "type": "Poster", "name": "TOMCAT: Test-time Comprehensive Knowledge Accumulation for Compositional Zero-Shot Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117606", "abstract": "Compositional Zero-Shot Learning (CZSL) aims to recognize novel attribute-object compositions based on the knowledge learned from seen ones. Existing methods suffer from performance degradation caused by the distribution shift of label space at test time, which stems from the inclusion of unseen compositions recombined from attributes and objects. To overcome the challenge, we propose a novel approach that accumulates comprehensive knowledge in both textual and visual modalities from unsupervised data to update multi-modal prototypes at test time. Building on this, we further design an adaptive update weight to control the degree of prototype adjustment, enabling the model to flexibly adapt to distribution shift during testing. Moreover, a dynamic priority queue is introduced that stores high-confidence images to acquire visual knowledge from historical images for inference. Considering the semantic consistency of multimodal knowledge, we align textual and visual prototypes by multimodal collaborative representation learning. Extensive experiments indicate that our approach achieves state-of-the-art performance on four challenging benchmark datasets under both closed-world and open-world settings. We will release the source code.", "arxiv_id": "2510.20162v1", "arxiv_authors": ["Xudong Yan", "Songhe Feng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a48f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1112012, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8fe"}, "filepath": "data/2505.17771v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998234304163145, "type": "Poster", "name": "TopoPoint: Enhance Topology Reasoning via Endpoint Detection in Autonomous Driving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119325", "abstract": "Topology reasoning, which unifies perception and structured reasoning, plays a vital role in understanding intersections for autonomous driving. However, its performance heavily relies on the accuracy of lane detection, particularly at connected lane endpoints. Existing methods often suffer from lane endpoints deviation, leading to incorrect topology construction. 
To address this issue, we propose TopoPoint, a novel framework that explicitly detects lane endpoints and jointly reasons over endpoints and lanes for robust topology reasoning. During training, we independently initialize point and lane queries, and propose Point-Lane Merge Self-Attention to enhance global context sharing by incorporating geometric distances between points and lanes as an attention mask. We further design a Point-Lane Graph Convolutional Network to enable mutual feature aggregation between point and lane queries. During inference, we introduce a Point-Lane Geometry Matching algorithm that computes distances between detected points and lanes to refine lane endpoints, effectively mitigating endpoint deviation. Extensive experiments on the OpenLane-V2 benchmark demonstrate that TopoPoint achieves state-of-the-art performance in topology reasoning (48.8 on OLS). Additionally, we propose DET$_p$ to evaluate endpoint detection, under which our method significantly outperforms existing approaches (52.6 vs. 45.2 on DET$_p$). The code will be released soon.", "arxiv_id": "2505.17771v1", "arxiv_authors": ["Yanping Fu", "Xinyuan Liu", "Tianyu Li", "Yike Ma", "Yucheng Zhang", "Feng Dai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a490"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.654Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1075853, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a8ff"}, "filepath": "data/2503.16188v6.png", "tags": [], "_media_type": "image", "_rand": 0.9998019904185771, "type": "Poster", "name": "To Think or Not To Think: A Study of Thinking in Rule-Based Visual Reinforcement Fine-Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117405", "abstract": "This paper investigates the role of the explicit thinking process in rule-based reinforcement fine-tuning (RFT) for multi-modal large language models (MLLMs). We first extend \\textit{Thinking-RFT} to the image classification task, using verifiable rewards for fine-tuning~(FT). Experiments show {Thinking-RFT} significantly outperforms supervised FT and yields a cross-dataset generalization effect. We then rethink and question whether explicit thinking in RFT is always necessary and beneficial. Challenging the convention that explicit thinking is crucial for the success of RFT, we introduce \\textit{No-Thinking-RFT}, exploring RFT without thinking by introducing a simple equality accuracy reward. We evaluate No-Thinking-RFT on six diverse tasks across different model sizes and types. Experimental results reveal four key findings: \\textbf{(1).} Visual perception tasks do not require thinking during RFT, as No-Thinking-RFT consistently outperforms or matches Thinking-RFT across model sizes and types. \\textbf{(2).} Models with limited capabilities struggle to generate high-quality CoT for RFT, making Thinking-RFT less effective than No-Thinking-RFT. \\textbf{(3).} There are inconsistencies between the answers in the thinking tags and answer tags for some responses of Thinking-RFT, which show lower average accuracy than the overall accuracy. 
\\textbf{(4).} The performance gain of No-Thinking-RFT mainly stems from improved learning during no-thinking FT and the avoidance of inference overthinking, as evidenced by the partial gains from appending empty thinking tags at inference time of Thinking-RFT. We hypothesize that explicit thinking before verifiable answers may hinder reward convergence and reduce performance in certain scenarios. To test this, we propose \\textit{Think-After-Answer}, which places thinking after the answer to mitigate this effect for experimental verification. Lastly, we conduct a pilot study to explore whether MLLMs can learn when to think during RFT, introducing an \\textit{Adaptive-Thinking} method. Experiments show that the model converges to either thinking or not depending on model capability, achieving comparable or better performance than both Thinking and No-Thinking-RFT. Our findings suggest MLLMs can adaptively decide to think or not based on their capabilities and task complexity, offering insights into the thinking process in RFT.", "arxiv_id": "2503.16188v6", "arxiv_authors": ["Ming Li", "Jike Zhong", "Shitian Zhao", "Yuxiang Lai", "Haoquan Zhang", "Wang Bill Zhu", "Kaipeng Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a491"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1115482, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a900"}, "filepath": "data/2507.15062v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991831768529392, "type": "Poster", "name": "Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117587", "abstract": "Handheld grippers are increasingly used to collect human demonstrations due to their ease of deployment and versatility. However, most existing designs lack tactile sensing, despite the critical role of tactile feedback in precise manipulation. We present a portable, lightweight gripper with integrated tactile sensors that enables synchronized collection of visual and tactile data in diverse, real-world, and in-the-wild settings. Building on this hardware, we propose a cross-modal representation learning framework that integrates visual and tactile signals while preserving their distinct characteristics. The learned representations are interpretable and consistently emphasize contact regions during physical interactions. When used for downstream manipulation tasks, these representations enable more efficient and effective policy learning, supporting precise robotic manipulation based on multimodal feedback. We validate our approach on fine-grained tasks such as test tube insertion and pipette-based fluid transfer, demonstrating improved accuracy and robustness under external disturbances. 
Our project page is available at https://touchinthewild.github.io/ .", "arxiv_id": "2507.15062v1", "arxiv_authors": ["Xinyue Zhu", "Binghao Huang", "Yunzhu Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a492"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3310043, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a901"}, "filepath": "data/2509.24739v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994519110050961, "type": "Poster", "name": "Toward a Vision-Language Foundation Model for Medical Data: Multimodal Dataset and Benchmarks for Vietnamese PET/CT Report Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121676", "abstract": "Vision-Language Foundation Models (VLMs), trained on large-scale multimodal datasets, have driven significant advances in Artificial Intelligence by enabling rich cross-modal reasoning. Despite their success in general domains, applying these models to medical imaging remains challenging due to the limited availability of diverse imaging modalities and multilingual clinical data. Most existing medical VLMs are trained on a subset of imaging modalities and focus primarily on high-resource languages, thus limiting their generalizability and clinical utility. To address these limitations, we introduce a novel Vietnamese-language multimodal medical dataset comprising 1,567,062 paired CT-PET images and corresponding full-length clinical reports. This dataset is designed to fill two pressing gaps in medical AI development: (1) the lack of PET/CT imaging data in existing VLMs training corpora, which hinders the development of models capable of handling functional imaging tasks; and (2) the underrepresentation of low-resource languages, particularly the Vietnamese language, in medical vision-language research. To the best of our knowledge, this is the first dataset to provide comprehensive PET/CT-report pairs in Vietnamese. We further introduce a training framework to enhance VLMs' learning, including data augmentation and expert-validated test sets. We conduct comprehensive experiments benchmarking state-of-the-art VLMs on downstream tasks, including medical report generation and visual question answering. The experimental results show that incorporating our dataset significantly improves the performance of existing VLMs. However, despite these advancements, the models still underperform on clinically critical criteria, particularly the diagnosis of lung cancer, indicating substantial room for future improvement. 
We believe this dataset and benchmark will serve as a pivotal step in advancing the development of more robust VLMs for medical imaging, particularly in low-resource languages, and improving their clinical relevance in Vietnamese healthcare.", "arxiv_id": "2509.24739v2", "arxiv_authors": ["Huu Tien Nguyen", "Dac Thai Nguyen", "The Minh Duc Nguyen", "Trung Thanh Nguyen", "Thao Nguyen Truong", "Huy Hieu Pham", "Johan Barthelemy", "Minh Quan Tran", "Thanh Tam Nguyen", "Quoc Viet Hung Nguyen", "Quynh Anh Chau", "Hong Son Mai", "Thanh Trung Nguyen", "Phi Le Nguyen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a493"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1054724, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a902"}, "filepath": "data/2510.17686v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992030193435382, "type": "Poster", "name": "Towards 3D Objectness Learning in an Open World", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115346", "abstract": "Recent advancements in 3D object detection and novel category detection have made significant progress, yet research on learning generalized 3D objectness remains insufficient. In this paper, we delve into learning open-world 3D objectness, which focuses on detecting all objects in a 3D scene, including novel objects unseen during training. Traditional closed-set 3D detectors struggle to generalize to open-world scenarios, while directly incorporating 3D open-vocabulary models for open-world ability struggles with vocabulary expansion and semantic overlap. To achieve generalized 3D object discovery, we propose OP3Det, a class-agnostic Open-World Prompt-free 3D Detector to detect any objects within 3D scenes without relying on hand-crafted text prompts. We introduce the strong generalization and zero-shot capabilities of 2D foundation models, utilizing both 2D semantic priors and 3D geometric priors for class-agnostic proposals to broaden 3D object discovery. Then, by integrating complementary information from the point cloud and RGB image in the cross-modal mixture of experts, OP3Det dynamically routes uni-modal and multi-modal features to learn generalized 3D objectness. 
Extensive experiments demonstrate the extraordinary performance of OP3Det, which significantly surpasses existing open-world 3D detectors by up to 16.0% in AR and achieves a 13.5% improvement compared to closed-world 3D detectors.", "arxiv_id": "2510.17686v1", "arxiv_authors": ["Taichi Liu", "Zhenyu Wang", "Ruofeng Liu", "Guang Wang", "Desheng Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a494"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1160081, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a903"}, "filepath": "data/2510.21512v1.png", "tags": [], "_media_type": "image", "_rand": 0.999913399863426, "type": "Poster", "name": "Towards a Golden Classifier-Free Guidance Path via Foresight Fixed Point Iterations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115124", "abstract": "Classifier-Free Guidance (CFG) is an essential component of text-to-image diffusion models, and understanding and advancing its operational mechanisms remain a central focus of research. Existing approaches stem from divergent theoretical interpretations, thereby limiting the design space and obscuring key design choices. To address this, we propose a unified perspective that reframes conditional guidance as fixed point iterations, seeking to identify a golden path where latents produce consistent outputs under both conditional and unconditional generation. We demonstrate that CFG and its variants constitute a special case of single-step short-sighted iteration, which is theoretically proven to exhibit inefficiency. To this end, we introduce Foresight Guidance (FSG), which prioritizes solving longer-interval subproblems in early diffusion stages with increased iterations. Extensive experiments across diverse datasets and model architectures validate the superiority of FSG over state-of-the-art methods in both image quality and computational efficiency. Our work offers novel perspectives for unlocking the potential of conditional guidance and adaptive design.", "arxiv_id": "2510.21512v1", "arxiv_authors": ["Kaibo Wang", "Jianda Mao", "Tong Wu", "Yang Xiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a495"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1047436, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a904"}, "filepath": "data/2509.09254v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998957101110761, "type": "Poster", "name": "Towards Better Dental AI: A Multimodal Benchmark and Instruction Dataset for Panoramic X-ray Analysis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121469", "abstract": "Recent advances in large vision-language models (LVLMs) have demonstrated strong performance on general-purpose medical tasks. However, their effectiveness in specialized domains such as dentistry remains underexplored. 
In particular, panoramic X-rays, a widely used imaging modality in oral radiology, pose interpretative challenges due to dense anatomical structures and subtle pathological cues, which are not captured by existing medical benchmarks or instruction datasets. To this end, we introduce MMOral, the first large-scale multimodal instruction dataset and benchmark tailored for panoramic X-ray interpretation. MMOral consists of 20,563 annotated images paired with 1.3 million instruction-following instances across diverse task types, including attribute extraction, report generation, visual question answering, and image-grounded dialogue. In addition, we present MMOral-Bench, a comprehensive evaluation suite covering five key diagnostic dimensions in dentistry. We evaluate 64 LVLMs on MMOral-Bench and find that even the best-performing model, i.e., GPT-4o, only achieves 41.45% accuracy, revealing significant limitations of current models in this domain. To promote the progress of this specific domain, we provide the supervised fine-tuning (SFT) process utilizing our meticulously curated MMOral instruction dataset. Remarkably, a single epoch of SFT yields substantial performance enhancements for LVLMs, e.g., Qwen2.5-VL-7B demonstrates a 24.73% improvement. MMOral holds significant potential as a critical foundation for intelligent dentistry and enables more clinically impactful multimodal AI systems in the dental field.", "arxiv_id": "2509.09254v1", "arxiv_authors": ["Jing Hao", "Yuxuan Fan", "Yanpeng Sun", "Kaixin Guo", "Lizhuo Lin", "Jinrong Yang", "Qi Yong H. Ai", "Lun M. Wong", "Hao Tang", "Kuo Feng Hung"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a496"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1112044, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a905"}, "filepath": "data/2510.09012v2.png", "tags": [], "_media_type": "image", "_rand": 0.999914353689955, "type": "Poster", "name": "Towards Better & Faster Autoregressive Image Generation: From the Perspective of Entropy", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118537", "abstract": "In this work, we first revisit the sampling issues in current autoregressive (AR) image generation models and identify that image tokens, unlike text tokens, exhibit lower information density and non-uniform spatial distribution. Accordingly, we present an entropy-informed decoding strategy that facilitates higher autoregressive generation quality with faster synthesis speed. Specifically, the proposed method introduces two main innovations: 1) dynamic temperature control guided by spatial entropy of token distributions, enhancing the balance between content diversity, alignment accuracy, and structural coherence in both mask-based and scale-wise models, without extra computational overhead, and 2) entropy-aware acceptance rules in speculative decoding, achieving near-lossless generation at about 85% of the inference cost of conventional acceleration methods. 
Extensive experiments across multiple benchmarks using diverse AR image generation models demonstrate the effectiveness and generalizability of our approach in enhancing both generation quality and sampling speed.", "arxiv_id": "2510.09012v2", "arxiv_authors": ["Xiaoxiao Ma", "Feng Zhao", "Pengyang Ling", "Haibo Qiu", "Zhixiang Wei", "Hu Yu", "Jie Huang", "Zhixiong Zeng", "Lin Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a497"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4410927, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a906"}, "filepath": "data/2505.21955v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991527747649616, "type": "Poster", "name": "Towards Comprehensive Scene Understanding: Integrating First and Third-Person Views for LVLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116235", "abstract": "Large vision-language models (LVLMs) are increasingly deployed in interactive applications such as virtual and augmented reality, where the first-person (egocentric) view captured by head-mounted cameras serves as a key input. While this view offers fine-grained cues about user attention and hand\u2013object interactions, its narrow field of view and lack of global context often lead to failures on spatially or contextually demanding queries. To address this, we introduce a framework that augments egocentric inputs with third-person (exocentric) views, providing complementary information such as global scene layout and object visibility to LVLMs. We present E3VQA, the first benchmark for multi-view question answering with 4K high-quality question\u2013answer pairs grounded in synchronized ego\u2013exo image pairs. Additionally, we propose M3CoT, a training-free prompting technique that constructs a unified scene representation by integrating scene graphs from three complementary perspectives. M3CoT enables LVLMs to reason more effectively across views, yielding consistent performance gains (4.84\\% for GPT-4o and 5.94\\% for Gemini 2.0 Flash) over a recent CoT baseline. Our extensive evaluation reveals key strengths and limitations of LVLMs in multi-view reasoning and highlights the value of leveraging both egocentric and exocentric inputs.", "arxiv_id": "2505.21955v2", "arxiv_authors": ["Insu Lee", "Wooje Park", "Jaeyun Jang", "Minyoung Noh", "Kyuhong Shim", "Byonghyo Shim"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a498"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1069837, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a907"}, "filepath": "data/2505.17677v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999865056229734, "type": "Poster", "name": "Towards Dynamic 3D Reconstruction of Hand-Instrument Interaction in Ophthalmic Surgery", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115963", "abstract": "Accurate 3D reconstruction of hands and instruments is critical for vision-based analysis of ophthalmic microsurgery, yet progress 
has been hampered by the lack of realistic, large-scale datasets and reliable annotation tools. In this work, we introduce OphNet-3D, the first extensive RGB-D dynamic 3D reconstruction dataset for ophthalmic surgery, comprising 41 sequences from 40 surgeons and totaling 7.1 million frames, with fine-grained annotations of 12 surgical phases, 10 instrument categories, dense MANO hand meshes, and full 6-DoF instrument poses. To scalably produce high-fidelity labels, we design a multi-stage automatic annotation pipeline that integrates multi-view data observation, data-driven motion priors with cross-view geometric consistency and biomechanical constraints, along with collision-aware interaction constraints for instrument interactions. Building upon OphNet-3D, we establish two challenging benchmarks\u2014bimanual hand pose estimation and hand\u2013instrument interaction reconstruction\u2014and propose two dedicated architectures: H-Net for dual-hand mesh recovery and OH-Net for joint reconstruction of two-hand\u2013two-instrument interactions. These models leverage a novel spatial reasoning module with weak-perspective camera modeling and collision-aware center-based representation. Both architectures outperform existing methods by substantial margins, achieving improvements of over 2mm in Mean Per Joint Position Error (MPJPE) and up to 23\\% in ADD-S metrics for hand and instrument reconstruction, respectively.", "arxiv_id": "2505.17677v2", "arxiv_authors": ["Ming Hu", "Zhengdi Yu", "Feilong Tang", "Kaiwen Chen", "Yulong Li", "Imran Razzak", "Junjun He", "Tolga Birdal", "Kaijing Zhou", "Zongyuan Ge"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a499"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1027123, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a908"}, "filepath": "data/2506.23434v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992547643673172, "type": "Poster", "name": "Towards foundational LiDAR world models with efficient latent flow matching", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118252", "abstract": "LiDAR-based world models offer more structured and geometry-aware representations than their image-based counterparts. However, existing LiDAR world models are narrowly trained; each model excels only in the domain it was built for. We conduct the first systematic domain transfer study across three demanding scenarios: (i) outdoor to indoor generalization, (ii) sparse-beam \\& dense-beam adaptation, and (iii) non-semantic to semantic transfer. Given different amounts of finetuning data, our experiments show that a single pre-trained model can bring up to 11\\% absolute (83\\% rel.) improvement over from-scratch training. This transferability of dynamic learning significantly reduces the reliance on manually annotated data for semantic occupancy forecasting: our method achieves state-of-the-art forecasting performance using only 5\\% of the labeled training data required by prior models. We also observed inefficiencies of current LiDAR world models, mainly through their under-compression of LiDAR data and an inefficient training objective. 
To address this, we propose a latent flow matching (CFM)-based approach that achieves state-of-the-art reconstruction accuracy using only half the training data and a 6x compression ratio compared to prior methods. Based on this compact latent, our model achieves SOTA performance on future-trajectory-conditioned semantic occupancy forecasting while being 23x more computationally efficient (a 28x FPS speedup); and achieves SOTA performance on semantic occupancy forecasting while being 2x more computationally efficient (a 1.1x FPS speedup).", "arxiv_id": "2506.23434v2", "arxiv_authors": ["Tianran Liu", "Shengwen Zhao", "Nicholas Rhinehart"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a49a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1050342, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a909"}, "filepath": "data/2510.20819v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991237003463754, "type": "Poster", "name": "Towards General Modality Translation with Contrastive and Predictive Latent Diffusion Bridge", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116815", "abstract": "Recent advances in generative modeling have positioned diffusion models as state-of-the-art tools for sampling from complex data distributions. While these models have shown remarkable success across single-modality domains such as images and audio, extending their capabilities to *Modality Translation (MT)*\u2014translating information across different sensory modalities\u2014remains an open challenge. Existing approaches often rely on restrictive assumptions, including shared dimensionality, Gaussian source priors, and modality-specific architectures, which limit their generality and theoretical grounding. In this work, we propose a general-purpose framework for modality translation based on a latent-variable extension of Denoising Diffusion Bridge Models. By operating in a shared latent space, our method learns a bridge between arbitrary modalities without requiring aligned dimensions. We introduce a contrastive alignment loss to enforce semantic consistency between paired samples and design a domain-agnostic encoder-decoder architecture tailored for noise prediction in latent space. Additionally, we propose a predictive loss to guide training towards accurate cross-domain translation and explore several training strategies to improve stability. Our approach supports arbitrary modality pairs and demonstrates strong performance on diverse MT tasks, including multi-view to 3D shape generation, image super-resolution, and multi-view scene synthesis. 
Comprehensive experiments and ablations validate the effectiveness of our framework, establishing a new strong baseline in general modality translation.", "arxiv_id": "2510.20819v2", "arxiv_authors": ["Nimrod Berman", "Omkar Joglekar", "Eitan Kosman", "Dotan Di Castro", "Omri Azencot"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a49b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1115652, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a90a"}, "filepath": "data/2502.03639v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991964365922824, "type": "Poster", "name": "Towards Physical Understanding in Video Generation: A 3D Point Regularization Approach", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117171", "abstract": "We present a novel video generation framework that integrates 3-dimensional geometry and dynamic awareness. To achieve this, we augment 2D videos with 3D point trajectories and align them in pixel space. The resulting 3D-aware video dataset, PointVid, is then used to fine-tune a latent diffusion model, enabling it to track 2D objects with 3D Cartesian coordinates. Building on this, we regularize the shape and motion of objects in the video to eliminate undesired artifacts, e.g., non-physical deformation. Consequently, we enhance the quality of generated RGB videos and alleviate common issues like object morphing, which are prevalent in current video models due to a lack of shape awareness. With our 3D augmentation and regularization, our model is capable of handling contact-rich scenarios such as task-oriented videos, where 3D information is essential for perceiving shape and motion of interacting solids. Our method can be seamlessly integrated into existing video diffusion models to improve their visual plausibility.", "arxiv_id": "2502.03639v2", "arxiv_authors": ["Yunuo Chen", "Junli Cao", "Vidit Goel", "Sergei Korolev", "Chenfanfu Jiang", "Jian Ren", "Sergey Tulyakov", "Anil Kag"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a49c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1085084, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a90b"}, "filepath": "data/2510.21160v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996933307216345, "type": "Poster", "name": "Towards Physics-informed Visual-Spatial Intelligence with Human Priors: An Autonomous Driving Study", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115976", "abstract": "How to encode visual-spatial intelligence (VSI) into representative and informative features remains an open challenge. Instead of following traditional Visual Question Answering (VQA)-style representation, we introduce spatial intelligence grid (SIG): a structured, grid-based data schema that embeds geometrical spatial relationships among objects along with physical priors in the human world. 
We further derive a set of SIG-based optimal evaluation metrics that rigorously quantify a model\u2019s true VSI capabilities. In few-shot in-context learning experiments on state-of-the-art multimodal LLMs (e.g. GPT-4o, Gemini-2.5-Pro), SIG yields consistently larger, more stable, and more comprehensive improvements across all VSI metrics compared to VQA-style representations, demonstrating its potential as a novel data schema for learning VSI. Based on SIG, we create SIGBench, a benchmark containing 1.4K driving frames annotated with ground-truth SIG labels and human gaze attention, supporting both grid-based machine VSI tasks and human-like attention-driven VSI tasks in autonomous-driving scenarios.", "arxiv_id": "2510.21160v1", "arxiv_authors": ["Guanlin Wu", "Boyan Su", "Yang Zhao", "Pu Wang", "Yichen Lin", "Hao Frank Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a49d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.655Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1158315, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a90c"}, "filepath": "data/2505.17644v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995596106129262, "type": "Poster", "name": "Towards Prospective Medical Image Reconstruction via Knowledge-Informed Dynamic Optimal Transport", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115475", "abstract": "Medical image reconstruction from measurement data is a vital but challenging inverse problem. Deep learning approaches have achieved promising results, but often require paired measurements and high-quality images, which are typically simulated through a forward model, i.e., retrospective reconstruction. However, training on simulated pairs commonly leads to performance degradation on real prospective data due to the retrospective-to-prospective gap caused by incomplete imaging knowledge in simulation. To address this challenge, this paper introduces imaging Knowledge-Informed Dynamic Optimal Transport (KIDOT), a novel dynamic optimal transport framework with optimality in the sense of preserving consistency with imaging physics in transport, which conceptualizes reconstruction as finding a dynamic transport path. KIDOT learns from unpaired data by modeling reconstruction as a continuous evolution path from measurements to images, guided by an imaging knowledge-informed cost function and transport equation. This dynamic and knowledge-aware approach enhances robustness and better leverages unpaired data while respecting acquisition physics. Theoretically, we demonstrate that KIDOT naturally generalizes dynamic optimal transport, ensuring its mathematical rationale and solution existence. 
Extensive experiments on MRI and CT reconstruction demonstrate KIDOT's superior performance.", "arxiv_id": "2505.17644v1", "arxiv_authors": ["Taoran Zheng", "Xing Li", "Yan Yang", "Xiang Gu", "Zongben Xu", "Jian Sun"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a49e"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1058378, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a90d"}, "filepath": "data/2509.25989v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999516805730926, "type": "Poster", "name": "Towards Reliable and Holistic Visual In-Context Learning Prompt Selection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119157", "abstract": "Visual In-Context Learning (VICL) has emerged as a prominent approach for adapting visual foundation models to novel tasks, by effectively exploiting contextual information embedded in in-context examples, which can be formulated as a global ranking problem of potential candidates. Current VICL methods, such as Partial2Global and VPR, are grounded in the similarity-priority assumption that images more visually similar to a query image serve as better in-context examples. This foundational assumption, while intuitive, lacks sufficient justification for its efficacy in selecting optimal in-context examples. Furthermore, Partial2Global constructs its global ranking from a series of randomly sampled pairwise preference predictions. Such a reliance on random sampling can lead to incomplete coverage and redundant samplings of comparisons, thus further adversely impacting the final global ranking. To address these issues, this paper introduces an enhanced variant of Partial2Global designed for reliable and holistic selection of in-context examples in VICL. Our proposed method, dubbed RH-Partial2Global, leverages a jackknife conformal prediction-guided strategy to construct reliable alternative sets and a covering design-based sampling approach to ensure comprehensive and uniform coverage of pairwise preferences. 
Extensive experiments demonstrate that RH-Partial2Global achieves excellent performance and outperforms Partial2Global across diverse visual tasks.", "arxiv_id": "2509.25989v2", "arxiv_authors": ["Wenxiao Wu", "Jing-Hao Xue", "Chengming Xu", "Chen Liu", "Xinwei Sun", "Changxin Gao", "Nong Sang", "Yanwei Fu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a49f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1081985, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a90e"}, "filepath": "data/2506.05466v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994200146664477, "type": "Poster", "name": "Towards Reliable Identification of Diffusion-based Image Manipulations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117068", "abstract": "Changing facial expressions, gestures, or background details may dramatically alter the meaning conveyed by an image. Notably, recent advances in diffusion models greatly improve the quality of image manipulation while also opening the door to misuse. Identifying changes made to authentic images, thus, becomes an important task, constantly challenged by new diffusion-based editing tools. To this end, we propose a novel approach for ReliAble iDentification of inpainted AReas (RADAR). RADAR builds on existing foundation models and combines features from different image modalities. It also incorporates an auxiliary contrastive loss that helps to isolate manipulated image patches. We demonstrate that these techniques significantly improve both the accuracy of our method and its generalisation to a large number of diffusion models. To support realistic evaluation, we further introduce MIBench, a new comprehensive benchmark, with images tampered by 28 diffusion models. Our experiments show that RADAR achieves excellent results, outperforming the state-of-the-art in detecting and localising image edits made by both seen and unseen diffusion models. Our code, data and models will be publicly available.", "arxiv_id": "2506.05466v2", "arxiv_authors": ["Alex Costanzino", "Woody Bayliss", "Juil Sock", "Marc Gorriz Blanch", "Danijela Horak", "Ivan Laptev", "Philip Torr", "Fabio Pizzati"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1059371, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a90f"}, "filepath": "data/2510.08044v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994536013272669, "type": "Poster", "name": "Towards Reliable LLM-based Robots Planning via Combined Uncertainty Estimation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119810", "abstract": "Large language models (LLMs) demonstrate advanced reasoning abilities, enabling robots to understand natural language instructions and generate high-level plans with appropriate grounding. However, LLM hallucinations present a significant challenge, often leading to overconfident yet potentially misaligned or unsafe plans. 
While researchers have explored uncertainty estimation to improve the reliability of LLM-based planning, existing studies have not sufficiently differentiated between epistemic and intrinsic uncertainty, limiting the effectiveness of uncertainty estimation.In this paper, we present Combined Uncertainty estimation for Reliable Embodied planning (CURE), which decomposes the uncertainty into epistemic and intrinsic uncertainty, each estimated separately. Furthermore, epistemic uncertainty is subdivided into task clarity and task familiarity for more accurate evaluation. The overall uncertainty assessments are obtained using random network distillation and multi-layer perceptron regression heads driven by LLM features. We validated our approach in two distinct experimental settings: kitchen manipulation and tabletop rearrangement experiments. The results show that, compared to existing methods, our approach yields uncertainty estimates that are more closely aligned with the actual execution outcomes.", "arxiv_id": "2510.08044v1", "arxiv_authors": ["Shiyuan Yin", "Chenjia Bai", "Zihao Zhang", "Junwei Jin", "Xinxin Zhang", "Chi Zhang", "Xuelong Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a1"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1008913, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a910"}, "filepath": "data/2510.10487v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991083862180645, "type": "Poster", "name": "Towards Self-Refinement of Vision-Language Models with Triangular Consistency", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118260", "abstract": "Vision-Language Models (VLMs) integrate visual knowledge with the analytical capabilities of Large Language Models (LLMs) through supervised visual instruction tuning, using image-question-answer triplets. However, the potential of VLMs trained without supervised instruction remains largely unexplored. This study validates that VLMs possess inherent self-refinement capabilities, enabling them to generate high-quality supervised data without external inputs and thereby learn autonomously. Specifically, to stimulate the self-refinement ability of VLMs, we propose a self-refinement framework based on a Triangular Consistency principle: within the image-query-answer triangle, any masked elements should be consistently and accurately reconstructed. The framework involves three steps: (1) We enable the instruction generation ability of VLMs by adding multi-task instruction tuning like image$\\rightarrow$question-answer or image-answer$\\rightarrow$question. (2) We generate image-query-answer triplets from unlabeled images and use the Triangular Consistency principle for filtering. (3) The model is further updated using the filtered synthetic data. To investigate the underlying mechanisms behind this self-refinement capability, we conduct a theoretical analysis from a causal perspective. 
Using the widely recognized LLaVA-1.5 as our baseline, our experiments reveal that the model can autonomously achieve consistent, though deliberately modest, improvements across multiple benchmarks without any external supervision, such as human annotations or environmental feedback.We expect that the insights of this study on the self-refinement ability of VLMs can inspire future research on the learning mechanism of VLMs.", "arxiv_id": "2510.10487v1", "arxiv_authors": ["Yunlong Deng", "Guangyi Chen", "Tianpei Gu", "Lingjing Kong", "Yan Li", "Zeyu Tang", "Kun Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1035949, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a911"}, "filepath": "data/2510.19487v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999994780224991, "type": "Poster", "name": "Towards Single-Source Domain Generalized Object Detection via Causal Visual Prompts", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119321", "abstract": "Single-Domain Generalized Object Detection (SDGOD), as a cutting-edge research topic in computer vision, aims to enhance model generalization capability in unseen target domains through single-source domain training. Current mainstream approaches attempt to mitigate domain discrepancies via data augmentation techniques. However, due to domain shift and limited domain\u2011specific knowledge, models tend to fall into the pitfall of spurious correlations. This manifests as the model's over-reliance on simplistic classification features (e.g., color) rather than essential domain-invariant representations like object contours. To address this critical challenge, we propose the Cauvis (Causal Visual Prompts) method. First, we introduce a Cross-Attention Prompts module that mitigates bias from spurious features by integrating visual prompts with cross-attention. To address the inadequate domain knowledge coverage and spurious feature entanglement in visual prompts for single-domain generalization, we propose a dual-branch adapter that disentangles causal-spurious features while achieving domain adaptation via high-frequency feature extraction. 
Cauvis achieves state-of-the-art performance with 15.9\u201331.4\\% gains over existing domain generalization methods on SDGOD datasets, while exhibiting significant robustness advantages in complex interference environments.", "arxiv_id": "2510.19487v1", "arxiv_authors": ["Chen Li", "Huiying Xu", "Changxin Gao", "Zeyu Wang", "Yun Liu", "Xinzhong Zhu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1075180, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a912"}, "filepath": "data/2504.15376v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997309644571323, "type": "Poster", "name": "Towards Understanding Camera Motions in Any Video", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121600", "abstract": "We introduce CameraBench, a large-scale dataset and benchmark designed to assess and improve camera motion understanding. CameraBench consists of ~3,000 diverse internet videos, annotated by experts through a rigorous multi-stage quality control process. One of our core contributions is a taxonomy or \"language\" of camera motion primitives, designed in collaboration with cinematographers. We find, for example, that some motions like \"follow\" (or tracking) require understanding scene content like moving subjects. We conduct a large-scale human study to quantify human performance, revealing that domain expertise and tutorial-based training can significantly enhance accuracy. For example, a novice may confuse zoom-in (a change of intrinsics) with translating forward (a change of extrinsics), but can be trained to differentiate the two. Using CameraBench, we evaluate Structure-from-Motion (SfM) and Video-Language Models (VLMs), finding that SfM models struggle to capture semantic primitives that depend on scene content, while generative VLMs struggle to capture geometric primitives that require precise estimation of trajectories. We then fine-tune a generative VLM on CameraBench to achieve the best of both worlds and showcase its applications, including motion-augmented captioning, video question answering, and video-text retrieval. 
We hope our taxonomy, benchmark, and tutorials will drive future efforts towards the ultimate goal of understanding camera motions in any video.", "arxiv_id": "2504.15376v2", "arxiv_authors": ["Zhiqiu Lin", "Siyuan Cen", "Daniel Jiang", "Jay Karhade", "Hewei Wang", "Chancharik Mitra", "Tiffany Ling", "Yuhan Huang", "Sifan Liu", "Mingyu Chen", "Rushikesh Zawar", "Xue Bai", "Yilun Du", "Chuang Gan", "Deva Ramanan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1110105, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a913"}, "filepath": "data/2505.19210v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996641818962596, "type": "Poster", "name": "Towards Understanding the Mechanisms of Classifier-Free Guidance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117182", "abstract": "Classifier-free guidance (CFG) is a core technique powering state-of-the-art image generation systems, yet its underlying mechanisms remain poorly understood. In this work, we first analyze CFG in a simplified linear diffusion model, where we show its behavior closely resembles that observed in the nonlinear case. Our analysis reveals that linear CFG improves generation quality via three distinct components: (i) a mean-shift term that approximately steers samples in the direction of class means, (ii) a positive Contrastive Principal Components (CPC) term that amplifies class-specific features, and (iii) a negative CPC term that suppresses generic features prevalent in unconditional data. We then verify these insights in real-world, nonlinear diffusion models: over a broad range of noise levels, linear CFG resembles the behavior of its nonlinear counterpart. Although the two eventually diverge at low noise levels, we discuss how the insights from the linear analysis still shed light on CFG's mechanism within the nonlinear regime.", "arxiv_id": "2505.19210v1", "arxiv_authors": ["Xiang Li", "Rongrong Wang", "Qing Qu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 847001, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a914"}, "filepath": "data/2407.07221v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994513410266392, "type": "Poster", "name": "Tracing Back the Malicious Clients in Poisoning Attacks to Federated Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116956", "abstract": "Poisoning attacks compromise the training phase of federated learning (FL) such that the learned global model misclassifies attacker-chosen inputs called target inputs. Existing defenses mainly focus on protecting the training phase of FL such that the learned global model is poison-free. However, these defenses often achieve limited effectiveness when the clients' local training data is highly non-iid or the number of malicious clients is large, as confirmed in our experiments. 
In this work, we propose FLForensics, the first poison-forensics method for FL. FLForensics complements existing training-phase defenses. In particular, when training-phase defenses fail and a poisoned global model is deployed, FLForensics aims to trace back the malicious clients that performed the poisoning attack after a misclassified target input is identified. We theoretically show that FLForensics can accurately distinguish between benign and malicious clients under a formal definition of poisoning attack. Moreover, we empirically show the effectiveness of FLForensics at tracing back both existing and adaptive poisoning attacks on five benchmark datasets.", "arxiv_id": "2407.07221v2", "arxiv_authors": ["Yuqi Jia", "Minghong Fang", "Hongbin Liu", "Jinghuai Zhang", "Neil Zhenqiang Gong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1012799, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a915"}, "filepath": "data/2411.07449v2.png", "tags": [], "_media_type": "image", "_rand": 0.999944341138811, "type": "Poster", "name": "Tracing the Roots: Leveraging Temporal Dynamics in Diffusion Trajectories for Origin Attribution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116226", "abstract": "Diffusion models have transformed image synthesis through iterative denoising processes that define trajectories from noise to coherent data. While their generative capabilities are widely celebrated, a critical challenge remains unaddressed: ensuring responsible use by verifying whether an image originates from a model's training set, its novel generations, or external sources. We propose a framework that analyzes diffusion trajectories to trace data provenance. Unlike prior methods, we demonstrate that temporal dynamics across the entire trajectory encode discriminative signals for robust classification. This challenges the long-standing \"Goldilocks zone\" conjecture, which posits that membership inference is effective only within narrow denoising stages. More fundamentally, we expose critical flaws in current membership inference practices, showing how existing methods fail under distribution shifts or when model-generated data is present. For model attribution, we present the first approach applicable to diffusion that avoids foundation models and their potential data leakage. Ultimately, we unify membership inference and model attribution into a single, cohesive framework tailored to modern generative systems, making our assumptions explicit and establishing principled benchmarking standards. Our work prioritizes transparency and accountability in an era of increasingly opaque AI. 
Code and data are given in the Supplementary Material.", "arxiv_id": "2411.07449v2", "arxiv_authors": ["Andreas Floros", "Seyed-Mohsen Moosavi-Dezfooli", "Pier Luigi Dragotti"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1604227, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a916"}, "filepath": "data/2510.23605v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997898992134466, "type": "Poster", "name": "Track, Inpaint, Resplat: Subject-driven 3D and 4D Generation with Progressive Texture Infilling", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116369", "abstract": "Current 3D/4D generation methods are usually optimized for photorealism, efficiency, and aesthetics. However, they often fail to preserve the semantic identity of the subject across different viewpoints. Adapting generation methods with one or a few images of a specific subject (also known as Personalization or Subject-driven generation) allows generating visual content that aligns with the identity of the subject. However, personalized 3D/4D generation is still largely underexplored. In this work, we introduce TIRE (Track, Inpaint, REsplat), a novel method for subject-driven 3D/4D generation. It takes an initial 3D asset produced by an existing 3D generative model as input and uses video tracking to identify the regions that need to be modified. Then, we adopt a subject-driven 2D inpainting model for progressively infilling the identified regions. Finally, we resplat the modified 2D multi-view observations back to 3D while still maintaining consistency. Extensive experiments demonstrate that our approach significantly improves identity preservation in 3D/4D generation compared to state-of-the-art methods.", "arxiv_id": "2510.23605v1", "arxiv_authors": ["Shuhong Zheng", "Ashkan Mirzaei", "Igor Gilitschenski"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1146514, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a917"}, "filepath": "data/2509.16429v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995677303644659, "type": "Poster", "name": "TractoTransformer: Diffusion MRI Streamline Tractography using CNN and Transformer Networks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116741", "abstract": "White matter tractography is an advanced neuroimaging technique that reconstructs the 3D white matter pathways of the brain from diffusion MRI data. 
It can be framed as a pathfinding problem aiming to infer neural fiber trajectories from noisy and ambiguous measurements, facing challenges such as crossing, merging, and fanning white-matter configurations.In this paper, we propose a novel tractography method that leverages Transformers to model the sequential nature of white matter streamlines, enabling the prediction of fiber directions by integrating both the trajectory context and current diffusion MRI measurements. To incorporate spatial information, we utilize CNNs that extract microstructural features from local neighborhoods around each voxel. By combining these complementary sources of information, our approach improves the precision and completeness of neural pathway mapping compared to traditional tractography models. We evaluate our method with the Tractometer toolkit, achieving competitive performance against state-of-the-art approaches, and present qualitative results on the TractoInferno dataset, demonstrating strong generalization to real-world data. The code attached to this submission will be made publicly available upon acceptance.", "arxiv_id": "2509.16429v1", "arxiv_authors": ["Itzik Waizman", "Yakov Gusakov", "Itay Benou", "Tammy Riklin Raviv"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4a9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.656Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 983931, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a918"}, "filepath": "data/2505.16864v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992550105542212, "type": "Poster", "name": "Training-Free Efficient Video Generation via Dynamic Token Carving", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119278", "abstract": "Despite the remarkable generation quality of video Diffusion Transformer (DiT) models, their practical deployment is severely hindered by extensive computational requirements. This inefficiency stems from two key challenges: the quadratic complexity of self-attention with respect to token length and the multi-step nature of diffusion models. To address these limitations, we present Jenga, a novel inference pipeline that combines dynamic attention carving with progressive resolution generation. Our approach leverages two key insights: (1) early denoising steps do not require high-resolution latents, and (2) later steps do not require dense attention. Jenga introduces a block-wise attention mechanism that dynamically selects relevant token interactions using 3D space-filling curves, alongside a progressive resolution strategy that gradually increases latent resolution during generation. Experimental results demonstrate that Jenga achieves substantial speedups across multiple state-of-the-art video diffusion models while maintaining comparable generation quality (8.83$\\times$ speedup with 0.01\\% performance drop on VBench). 
As a plug-and-play solution, Jenga enables practical, high-quality video generation on modern hardware by reducing inference time from minutes to seconds---without requiring model retraining.", "arxiv_id": "2505.16864v1", "arxiv_authors": ["Yuechen Zhang", "Jinbo Xing", "Bin Xia", "Shaoteng Liu", "Bohao Peng", "Xin Tao", "Pengfei Wan", "Eric Lo", "Jiaya Jia"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4aa"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3352910, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a919"}, "filepath": "data/2510.16989v1.png", "tags": [], "_media_type": "image", "_rand": 0.999174628156991, "type": "Poster", "name": "Training-free Online Video Step Grounding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116701", "abstract": "Given a task and a set of steps composing it, Video Step Grounding (VSG) aims to detect which steps are performed in a video. Standard approaches for this task require a labeled training set (e.g., with step-level annotations or narrations), which may be costly to collect. Moreover, they process the full video offline, limiting their applications for scenarios requiring online decisions. Thus, in this work, we explore how to perform VSG online and without training. We achieve this by exploiting the zero-shot capabilities of recent Large Multimodal Models (LMMs). In particular, we use LMMs to predict the step associated with a restricted set of frames, without access to the whole video. We show that this online strategy without task-specific tuning outperforms offline and training-based models. Motivated by this finding, we develop Bayesian Grounding with Large Multimodal Models (BAGLM), further injecting knowledge of past frames into the LMM-based predictions. BAGLM exploits Bayesian filtering principles, modeling step transitions via (i) a dependency matrix extracted through large language models and (ii) an estimation of step progress. Experiments on three datasets show superior performance of BAGLM over state-of-the-art training-based offline methods.", "arxiv_id": "2510.16989v1", "arxiv_authors": ["Luca Zanella", "Massimiliano Mancini", "Yiming Wang", "Alessio Tonioni", "Elisa Ricci"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ab"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1124353, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a91a"}, "filepath": "data/2412.03054v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992521676354801, "type": "Poster", "name": "TREND: Unsupervised 3D Representation Learning via Temporal Forecasting for LiDAR Perception", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119470", "abstract": "Labeling LiDAR point clouds is notoriously time-and-energy-consuming, which spurs recent unsupervised 3D representation learning methods to alleviate the labeling burden in LiDAR perception via pretrained weights. 
Existing work focuses on either masked autoencoding or contrastive learning on LiDAR point clouds, which neglects the temporal LiDAR sequence that naturally accounts for object motion (and their semantics). Instead, we propose TREND, short for Temporal REndering with Neural fielD, to learn 3D representations by forecasting future observations in an unsupervised manner. TREND integrates forecasting for 3D pre-training through a Recurrent Embedding scheme to generate 3D embeddings across time and a Temporal LiDAR Neural Field specifically designed for the LiDAR modality to represent the 3D scene, with which we compute the loss using differentiable rendering. We evaluate TREND on 3D object detection and LiDAR semantic segmentation tasks on popular datasets, including Once, Waymo, NuScenes, and SemanticKITTI. Experimental results show that TREND brings up to 400% more improvement compared to previous SOTA unsupervised 3D pre-training methods and generally improves performance on different downstream tasks across datasets, demonstrating the effectiveness of TREND. Codes and models will be released.", "arxiv_id": "2412.03054v1", "arxiv_authors": ["Runjian Chen", "Hyoungseob Park", "Bo Zhang", "Wenqi Shao", "Ping Luo", "Alex Wong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ac"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2091442, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a91b"}, "filepath": "data/2506.02860v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999147637725262, "type": "Poster", "name": "Tru-POMDP: Task Planning Under Uncertainty via Tree of Hypotheses and Open-Ended POMDPs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120243", "abstract": "Task planning under uncertainty is essential for home-service robots operating in the real world. Tasks involve ambiguous human instructions, hidden or unknown object locations, and open-vocabulary object types, leading to significant open-ended uncertainty and a boundlessly large planning space. To address these challenges, we propose Tru-POMDP, a planner that combines structured belief generation using Large Language Models (LLMs) with principled POMDP planning. Tru-POMDP introduces a hierarchical Tree of Hypotheses (TOH), which systematically queries an LLM to construct high-quality particle beliefs over possible world states and human goals. We further formulate an open-ended POMDP model that enables rigorous Bayesian belief tracking and efficient belief-space planning over these LLM-generated hypotheses. 
Experiments on complex object rearrangement tasks across diverse kitchen environments show that Tru-POMDP significantly outperforms state-of-the-art LLM-based and LLM-tree-search hybrid planners, achieving higher success rates with significantly better plans, stronger robustness to ambiguity and occlusion, and greater planning efficiency.", "arxiv_id": "2506.02860v1", "arxiv_authors": ["Wenjing Tang", "Xinyu He", "Yongxi Huang", "Yunxiao Xiao", "Cewu Lu", "Panpan Cai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ad"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1062466, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a91c"}, "filepath": "data/2509.22813v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996286076751913, "type": "Poster", "name": "TRUST: Test-Time Refinement using Uncertainty-Guided SSM Traverses", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117665", "abstract": "State Space Models (SSMs) have emerged as efficient alternatives to Vision Transformers (ViTs), with VMamba standing out as a pioneering architecture designed for vision tasks. However, their generalization performance degrades significantly under distribution shifts. To address this limitation, we propose TRUST (Test-Time Refinement using Uncertainty-Guided SSM Traverses), a novel test-time adaptation (TTA) method that leverages diverse traversal permutations to generate multiple causal perspectives of the input image. Model predictions serve as pseudo-labels to guide updates of the Mamba-specific parameters, and the adapted weights are averaged to integrate the learned information across traversal scans. Altogether, TRUST is the first approach that explicitly leverages the unique architectural properties of SSMs for adaptation. Experiments on seven benchmarks show that TRUST consistently improves robustness and outperforms existing TTA methods.", "arxiv_id": "2509.22813v1", "arxiv_authors": ["Sahar Dastani", "Ali Bahri", "Gustavo Adolfo Vargas Hakim", "Moslem Yazdanpanah", "Mehrdad Noori", "David Osowiechi", "Samuel Barbeau", "Ismail Ben Ayed", "Herve Lombaert", "Christian Desrosiers"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ae"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1058384, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a91d"}, "filepath": "data/2507.18537v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993781151893523, "type": "Poster", "name": "TTS-VAR: A Test-Time Scaling Framework for Visual Auto-Regressive Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115886", "abstract": "Scaling visual generation models is essential for real-world content creation, yet requires substantial training and computational expenses. Alternatively, test-time scaling has garnered growing attention due to resource efficiency and promising performance. 
In this work, we present the first general test-time scaling framework for visual auto-regressive (VAR) models, TTS-VAR, modeling the generation process as a path searching problem. Inspired by VAR's hierarchical coarse-to-fine multi-scale generation, our framework integrates two key components: (i) At coarse scales, we observe that generated tokens are hard for evaluation, possibly leading to erroneous acceptance of inferior samples or rejection of superior samples. Noticing that the coarse scales contain sufficient structural information, we propose clustering-based diversity search. It preserves structural variety through semantic feature clustering, enabling later selection on samples with higher potential. (ii) In fine scales, resampling-based potential selection prioritizes promising candidates using potential scores, which are defined as reward functions incorporating multi-scale generation history. To dynamically balance computational efficiency with exploration capacity, we further introduce an adaptive descending batch size schedule throughout the causal generation process. Experiments on the powerful VAR model Infinity show a notable 8.7% GenEval score improvement (0.69\u21920.75). Key insights reveal that early-stage structural features effectively influence final quality, and resampling efficacy varies across generation scales.", "arxiv_id": "2507.18537v2", "arxiv_authors": ["Zhekai Chen", "Ruihang Chu", "Yukang Chen", "Shiwei Zhang", "Yujie Wei", "Yingya Zhang", "Xihui Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4af"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1039382, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a91e"}, "filepath": "data/2505.19853v2.png", "tags": [], "_media_type": "image", "_rand": 0.9994611208658607, "type": "Poster", "name": "Two Causally Related Needles in a Video Haystack", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121468", "abstract": "Evaluating the video understanding capabilities of Video-Language Models (VLMs) remains a significant challenge. We propose a long-context video understanding benchmark, Causal2Needles, that assesses two crucial abilities insufficiently evaluated by existing benchmarks: (1) the ability to extract information from two separate locations in a long video and understand them jointly, and (2) the ability to model the world in terms of cause and effect in human behaviors. Specifically, Causal2Needles introduces 2-needle questions, which require extracting information from both the cause and effect human-behavior events in a long video and the associated narration text. To prevent textual bias, these questions comprise two complementary formats: one asking to identify the video clip containing the answer, and one asking for the textual description of an unrelated visual detail from that video clip. Our experiments reveal that models excelling in pre-existing benchmarks struggle with 2-needle visual grounding, and the model performance is negatively correlated with the distance between the two needles. 
These findings highlight critical limitations in current VLMs.", "arxiv_id": "2505.19853v2", "arxiv_authors": ["Miaoyu Li", "Qin Chao", "Boyang Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:09.815Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1089828, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a91f"}, "filepath": "data/2510.21991v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992783545785622, "type": "Poster", "name": "Two-Steps Diffusion Policy for Robotic Manipulation via Genetic Denoising", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117411", "abstract": "Diffusion models, such as diffusion policy, have achieved state-of-the-art results in robotic manipulation by imitating expert demonstrations. While diffusion models were originally developed for vision tasks like image and video generation, many of their inference strategies have been directly transferred to control domains without adaptation. In this work, we show that by tailoring the denoising process to the specific characteristics of embodied AI tasks\u2014particularly the structured, low-dimensional nature of action distributions\u2014diffusion policies can operate effectively with as few as 5 neural function evaluations. Building on this insight, we propose a population-based sampling strategy, genetic denoising, which enhances both performance and stability by selecting denoising trajectories with low out-of-distribution risk. Our method solves challenging tasks with only two neural function evaluations while improving or matching performance. We evaluate our approach across 14 robotic manipulation tasks from D4RL and Robomimic, spanning multiple action horizons and inference budgets. In over 2 million evaluations, our method consistently outperforms standard diffusion-based policies, achieving up to 20\\% performance gains with significantly fewer inference steps.", "arxiv_id": "2510.21991v1", "arxiv_authors": ["Mateo Clemente", "Leo Brunswic", "Rui Heng Yang", "Xuan Zhao", "Yasser Khalil", "Haoyu Lei", "Amir Rasouli", "Yinchuan Li"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b1"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 856599, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a920"}, "filepath": "data/2505.15725v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999661011132664, "type": "Poster", "name": "UAV-Flow Colosseo: A Real-World Benchmark for Flying-on-a-Word UAV Imitation Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121527", "abstract": "Unmanned Aerial Vehicles (UAVs) are evolving into language-interactive platforms, enabling more intuitive forms of human-drone interaction. While prior works have primarily focused on high-level planning and long-horizon navigation, we shift attention to language-guided fine-grained trajectory control, where UAVs execute short-range, reactive flight behaviors in response to language instructions. 
We formalize this problem as the Flying-on-a-Word (Flow) task and introduce UAV imitation learning as an effective approach. In this framework, UAVs learn fine-grained control policies by mimicking expert pilot trajectories paired with atomic language instructions. To support this paradigm, we present UAV-Flow, the first real-world benchmark for language-conditioned, fine-grained UAV control. It includes a task formulation, a large-scale dataset collected in diverse environments, a deployable control framework, and a simulation suite for systematic evaluation. Our design enables UAVs to closely imitate the precise, expert-level flight trajectories of human pilots and supports direct deployment without a sim-to-real gap. We conduct extensive experiments on UAV-Flow, benchmarking VLN and VLA paradigms. Results show that VLA models are superior to VLN baselines and highlight the critical role of spatial grounding in the fine-grained Flow setting.", "arxiv_id": "2505.15725v2", "arxiv_authors": ["Xiangyu Wang", "Donglin Yang", "Yue Liao", "Wenhao Zheng", "wenjun wu", "Bin Dai", "Hongsheng Li", "Si Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b2"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5140111, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a921"}, "filepath": "data/2506.09278v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994213239376964, "type": "Poster", "name": "UFM: A Simple Path towards Unified Dense Correspondence with Flow", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117892", "abstract": "Dense image correspondence is central to many applications, such as visual odometry, 3D reconstruction, object association, and re-identification. Historically, dense correspondence has been tackled separately for wide-baseline scenarios and optical flow estimation, despite the common goal of matching content between two images. In this paper, we develop a Unified Flow \\& Matching model (UFM), which is trained on unified data for pixels that are co-visible in both source and target images. UFM uses a simple, generic transformer architecture that directly regresses the $(u,v)$ flow. It is easier to train and more accurate for large flows compared to the typical coarse-to-fine cost volumes in prior work. UFM is 28\\% more accurate than state-of-the-art flow methods (Unimatch), while also having 62\\% less error and running 6.7x faster than dense wide-baseline matchers (RoMa). UFM is the first to demonstrate that unified training can outperform specialized approaches across both domains. 
This result enables fast, general-purpose correspondence and opens new directions for multi-modal, long-range, and real-time correspondence tasks.", "arxiv_id": "2506.09278v1", "arxiv_authors": ["Yuchen Zhang", "Nikhil Keetha", "Chenwei Lyu", "Bhuvan Jhamb", "Yutian Chen", "Yuheng Qiu", "Jay Karhade", "Shreyas Jha", "Yaoyu Hu", "Deva Ramanan", "Sebastian Scherer", "Wenshan Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5316085, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a922"}, "filepath": "data/2503.01342v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993005224014245, "type": "Poster", "name": "UFO: A Unified Approach to Fine-grained Visual Perception via Open-ended Language Interface", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119594", "abstract": "Generalist models have achieved remarkable success in both language and vision-language tasks, showcasing the potential of unified modeling. However, effectively integrating fine-grained perception tasks like detection and segmentation into these models remains a significant challenge. This is primarily because these tasks often rely heavily on task-specific designs and architectures that can complicate the modeling process. To address this challenge, we present UFO, a framework that unifies fine-grained visual perception tasks through an open-ended language interface. By transforming all perception targets into the language space, UFO unifies object-level detection, pixel-level segmentation, and image-level vision-language tasks into a single model. Additionally, we introduce a novel embedding retrieval approach that relies solely on the language interface to support segmentation tasks. Our framework bridges the gap between fine-grained perception and vision-language tasks, significantly simplifying architectural design and training strategies while achieving comparable or superior performance to methods with intricate task-specific designs. After multi-task training on five standard visual perception datasets, UFO outperforms the previous state-of-the-art generalist models by 12.3 mAP on COCO instance segmentation and 3.3 mIoU on ADE20K semantic segmentation. Furthermore, our method seamlessly integrates with existing MLLMs, effectively combining fine-grained perception capabilities with their advanced language abilities, thereby achieving superior performance on the challenging reasoning segmentation. 
Code and models will be publicly available.", "arxiv_id": "2503.01342v3", "arxiv_authors": ["Hao Tang", "Chenwei Xie", "Haiyang Wang", "Xiaoyi Bao", "Tingyu Weng", "Pandeng Li", "Yun Zheng", "Liwei Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.657Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1081492, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a923"}, "filepath": "data/2507.04638v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996804265414229, "type": "Poster", "name": "UGG-ReID: Uncertainty-Guided Graph Model for Multi-Modal Object Re-Identification", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117244", "abstract": "Multi-modal object Re-IDentification (ReID) has gained considerable attention with the goal of retrieving specific targets across cameras using heterogeneous visual data sources. At present, multi-modal object ReID faces two core challenges: (1) learning robust features under fine-grained local noise caused by occlusion, frame loss, and other disruptions; and (2) effectively integrating heterogeneous modalities to enhance multi-modal representation. To address the above challenges, we propose a robust approach named Uncertainty-Guided Graph model for multi-modal object ReID (UGG-ReID). UGG-ReID is designed to mitigate noise interference and facilitate effective multi-modal fusion by estimating both local and sample-level epistemic uncertainty and explicitly modeling their dependencies. Specifically, we first propose the Gaussian patch-graph representation model that leverages uncertainty to quantify fine-grained local cues and capture their structural relationships. This process boosts the expressiveness of modal-specific information, ensuring that the generated embeddings are both more informative and robust. Subsequently, we design an uncertainty-guided mixture of experts strategy that dynamically routes samples to experts exhibiting low uncertainty. This strategy effectively suppresses noise-induced instability, leading to enhanced robustness. Meanwhile, we design an uncertainty-guided routing to strengthen the multi-modal interaction, improving the performance. UGG-ReID is comprehensively evaluated on five representative multi-modal object ReID datasets, encompassing diverse spectral modalities. Experimental results show that the proposed method achieves excellent performance on all datasets and is significantly better than current methods in terms of noise immunity. 
Our code will be made public upon acceptance.", "arxiv_id": "2507.04638v2", "arxiv_authors": ["Xixi Wan", "Aihua Zheng", "Bo Jiang", "Beibei Wang", "Chenglong Li", "Jin Tang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1042357, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a924"}, "filepath": "data/2505.11720v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999472548521382, "type": "Poster", "name": "UGoDIT: Unsupervised Group Deep Image Prior Via Transferable Weights", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116478", "abstract": "Recent advances in data-centric deep generative models have led to significant progress in solving inverse imaging problems. However, these models (e.g., diffusion models (DMs)) typically require large amounts of fully sampled (clean) training data, which is often impractical in medical and scientific settings such as dynamic imaging. On the other hand, training-data-free approaches like the Deep Image Prior (DIP) do not require clean ground-truth images but suffer from noise overfitting and can be computationally expensive as the network parameters need to be optimized for each measurement set independently. Moreover, DIP-based methods often overlook the potential of learning a prior using a small number of sub-sampled measurements (or degraded images) available during training. In this paper, we propose **UGoDIT**\u2014an **U**nsupervised **G**r**o**up **DI**P with **T**ransferable weights\u2014designed for the low-data regime where only a very small number, $M$, of sub-sampled measurement vectors are available during training. Our method learns a set of transferable weights by optimizing a shared encoder and $M$ disentangled decoders. At test time, we reconstruct the unseen degraded image using a DIP network, where part of the parameters are fixed to the learned weights, while the remaining are optimized to enforce measurement consistency. We evaluate UGoDIT on both medical (multi-coil MRI) and natural (super resolution and non-linear deblurring) image recovery tasks under various settings. Compared to recent standalone DIP methods, UGoDIT provides accelerated convergence and notable improvement in reconstruction quality. Furthermore, our method achieves performance competitive with SOTA DM-based and supervised approaches, despite not requiring large amounts of clean training data.", "arxiv_id": "2505.11720v1", "arxiv_authors": ["Shijun Liang", "Ismail R. 
Alkhouri", "Siddhant Gautam", "Qing Qu", "Saiprasad Ravishankar"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1171269, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a925"}, "filepath": "data/2510.20661v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995515297261002, "type": "Poster", "name": "UltraHR-100K: Enhancing UHR Image Synthesis with A Large-Scale High-Quality Dataset", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118549", "abstract": "Ultra-high-resolution (UHR) text-to-image (T2I) generation has seen notable progress. However, two key challenges remain: (1) the absence of a large-scale high-quality UHR T2I dataset, and (2) the neglect of tailored training strategies for fine-grained detail synthesis in UHR scenarios. To tackle the first challenge, we introduce \textbf{UltraHR-100K}, a high-quality dataset of 100K UHR images with rich captions, offering diverse content and strong visual fidelity. Each image exceeds 3K resolution and is rigorously curated based on detail richness, content complexity, and aesthetic quality. To tackle the second challenge, we propose a frequency-aware post-training method that enhances fine-detail generation in T2I diffusion models. Specifically, we design (i) \textit{Detail-Oriented Timestep Sampling (DOTS)} to focus learning on detail-critical denoising steps, and (ii) \textit{Soft-Weighting Frequency Regularization (SWFR)}, which leverages Discrete Fourier Transform (DFT) to softly constrain frequency components, encouraging high-frequency detail preservation. Extensive experiments on our proposed UltraHR-eval4K benchmarks demonstrate that our approach significantly improves the fine-grained detail quality and overall fidelity of UHR image generation.", "arxiv_id": "2510.20661v1", "arxiv_authors": ["Chen Zhao", "En Ci", "Yunzhe Xu", "Tiehan Fan", "Shanyan Guan", "Yanhao Ge", "Jian Yang", "Ying Tai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5273018, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a926"}, "filepath": "data/2506.13691v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996457396560692, "type": "Poster", "name": "UltraVideo: High-Quality UHD Video Dataset with Comprehensive Captions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121373", "abstract": "The quality of the video dataset (image quality, resolution, and fine-grained caption) greatly influences the performance of the video generation model. The growing demand for video applications sets higher requirements for high-quality video generation models. For example, the generation of movie-level Ultra-High Definition (UHD) videos and the creation of 4K short video content. However, the existing public datasets cannot support related research and applications. 
In this paper, we first propose a high-quality open-sourced UHD-4K (22.4\% of which are 8K) text-to-video dataset named UltraVideo, which contains a wide range of topics (more than 100 kinds), and each video has 9 structured captions with one summarized caption (average of 824 words). Specifically, we carefully design a highly automated curation process with four stages to obtain the final high-quality dataset: \textit{i)} collection of diverse and high-quality video clips. \textit{ii)} statistical data filtering. \textit{iii)} model-based data purification. \textit{iv)} generation of comprehensive, structured captions. In addition, we expand WAN to UltraWAN-1K/-4K, which can natively generate high-quality 1K/4K videos with more consistent text controllability, demonstrating the effectiveness of our data curation. We believe that this work can make a significant contribution to future research on UHD video generation. UltraVideo dataset and UltraWAN models are available at \href{https://zhangzjn.github.io/projects/UltraVideo}{project page}.", "arxiv_id": "2506.13691v1", "arxiv_authors": ["Zhucun Xue", "Jiangning Zhang", "Teng Hu", "Haoyang He", "Yinan Chen", "Yuxuan Cai", "Yabiao Wang", "Chengjie Wang", "Yong Liu", "Xiangtai Li", "Dacheng Tao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6541231, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a927"}, "filepath": "data/2505.24517v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995377611992449, "type": "Poster", "name": "un$^2$CLIP: Improving CLIP's Visual Detail Capturing Ability via Inverting unCLIP", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116340", "abstract": "Contrastive Language-Image Pre-training (CLIP) has become a foundation model and has been applied to various vision and multimodal tasks. However, recent works indicate that CLIP falls short in distinguishing detailed differences in images and shows suboptimal performance on dense-prediction and vision-centric multimodal tasks. Therefore, this work focuses on improving existing CLIP models, aiming to capture as many visual details in images as possible. We find that a specific type of generative models, unCLIP, provides a suitable framework for achieving our goal. Specifically, unCLIP trains an image generator conditioned on the CLIP image embedding. In other words, it inverts the CLIP image encoder. Compared to discriminative models like CLIP, generative models are better at capturing image details because they are trained to learn the data distribution of images. Additionally, the conditional input space of unCLIP aligns with CLIP's original image-text embedding space. Therefore, we propose to invert unCLIP (dubbed un$^2$CLIP) to improve the CLIP model. In this way, the improved image encoder can gain unCLIP's visual detail capturing ability while preserving its alignment with the original text encoder simultaneously. We evaluate our improved CLIP across various tasks to which CLIP has been applied, including the challenging MMVP-VLM benchmark, the dense-prediction open-vocabulary segmentation task, and multimodal large language model tasks. 
Experiments show that un$^2$CLIP significantly improves the original CLIP and previous CLIP improvement methods.", "arxiv_id": "2505.24517v1", "arxiv_authors": ["Yinqi Li", "Jiahe Zhao", "Hong Chang", "Ruibing Hou", "Shiguang Shan", "Xilin Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4b9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1108179, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a928"}, "filepath": "data/2508.06317v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992692797368212, "type": "Poster", "name": "Uncertainty-quantified Rollout Policy Adaptation for Unlabelled Cross-domain Temporal Grounding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118007", "abstract": "Video Temporal Grounding (TG) aims to temporally locate video segments matching a natural language description (a query) in a long video. While Vision-Language Models (VLMs) are effective at holistic semantic matching, they often struggle with fine-grained temporal localisation. Recently, Group Relative Policy Optimisation (GRPO) reformulates the inference process as a reinforcement learning task, enabling fine-grained grounding and achieving strong in-domain performance. However, GRPO relies on labelled data, making it unsuitable in unlabelled domains. Moreover, because videos are large and expensive to store and process, performing full-scale adaptation introduces prohibitive latency and computational overhead, making it impractical for real-time deployment. To overcome both problems, we introduce a Data-Efficient Unlabelled Cross-domain Temporal Grounding method, in which a model is first trained on a labelled source domain, then adapted to a target domain using only a small number of {\em unlabelled videos from the target domain}. This approach eliminates the need for target annotation and keeps both computational and storage overhead low enough to run in real time. Specifically, we introduce \textbf{U}ncertainty-quantified \textbf{R}ollout \textbf{P}olicy \textbf{A}daptation (\textbf{URPA}) for cross-domain knowledge transfer in learning video temporal grounding without target labels. URPA generates multiple candidate predictions using GRPO rollouts, averages them to form a pseudo label, and estimates confidence from the variance across these rollouts. This confidence then weights the training rewards, guiding the model to focus on reliable supervision. Experiments on three datasets across six cross-domain settings show that URPA generalises well using only a few unlabelled target videos. 
Codes are given in supplemental materials.", "arxiv_id": "2508.06317v1", "arxiv_authors": ["Jian Hu", "Zixu Cheng", "Shaogang Gong", "Isabel Guan", "Jianye Hao", "Jun Wang", "Kun Shao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ba"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1089234, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a929"}, "filepath": "data/2509.15185v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991046007978425, "type": "Poster", "name": "Understand Before You Generate: Self-Guided Training for Autoregressive Image Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117379", "abstract": "Recent studies have demonstrated the importance of high-quality visual representations in image generation and have highlighted the limitations of generative models in image understanding. As a generative paradigm originally designed for natural language, autoregressive models face similar challenges. In this work, we present the first systematic investigation into the mechanisms of applying the next-token prediction paradigm to the visual domain. We identify three key properties that hinder the learning of high-level visual semantics: local and conditional dependence, inter-step semantic inconsistency, and spatial invariance deficiency. We show that these issues can be effectively addressed by introducing self-supervised objectives during training, leading to a novel training framework, Self-guided Training for AutoRegressive models (ST-AR). Without relying on pre-trained representation models, ST-AR significantly enhances the image understanding ability of autoregressive models and leads to improved generation quality. Specifically, ST-AR brings approximately 42% FID improvement for LlamaGen-L and 49% FID improvement for LlamaGen-XL, while maintaining the same sampling strategy.", "arxiv_id": "2509.15185v1", "arxiv_authors": ["Xiaoyu Yue", "Zidong Wang", "Yuqing Wang", "Wenlong Zhang", "Xihui Liu", "Wanli Ouyang", "Lei Bai", "Luping Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4bb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1053185, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a92a"}, "filepath": "data/2502.13095v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998961135510502, "type": "Poster", "name": "Understanding and Rectifying Safety Perception Distortion in VLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118667", "abstract": "Recent studies reveal that vision-language models (VLMs) become more susceptible to harmful requests and jailbreak attacks after integrating the vision modality, exhibiting greater vulnerability than their text-only LLM backbones. 
To uncover the root cause of this phenomenon, we conduct an in-depth analysis and identify a key issue: multimodal inputs introduce a modality-induced activation shift toward a \u201csafer\u201d direction compared to their text-only counterparts, leading VLMs to systematically overestimate the safety of harmful inputs. We refer to this issue as safety perception distortion. To mitigate such distortion, we propose Activation Shift Disentanglement and Calibration (ShiftDC), a training-free method that decomposes and calibrates the modality-induced activation shift to reduce its impact on safety. By isolating and removing the safety-relevant component, ShiftDC restores the inherent safety alignment of the LLM backbone while preserving the vision-language capabilities of VLMs. Experiments demonstrate that ShiftDC significantly enhances safety alignment without impairing model utility.", "arxiv_id": "2502.13095v1", "arxiv_authors": ["Xiaohan Zou", "Jian Kang", "George Kesidis", "Lu Lin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4bc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1717453, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a92b"}, "filepath": "data/2506.00225v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994214146895269, "type": "Poster", "name": "Understanding while Exploring: Semantics-driven Active Mapping", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118000", "abstract": "Effective robotic autonomy in unknown environments demands proactive exploration and precise understanding of both geometry and semantics. In this paper, we propose ActiveSGM, an active semantic mapping framework designed to predict the informativeness of potential observations before execution. Built upon a 3D Gaussian Splatting (3DGS) mapping backbone, our approach employs semantic and geometric uncertainty quantification, coupled with a sparse semantic representation, to guide exploration. By enabling robots to strategically select the most beneficial viewpoints, ActiveSGM efficiently enhances mapping completeness, accuracy, and robustness to noisy semantic data, ultimately supporting more adaptive scene exploration. 
Our experiments on the Replica and Matterport3D datasets highlight the effectiveness of ActiveSGM in active semantic mapping tasks.", "arxiv_id": "2506.00225v1", "arxiv_authors": ["Liyan Chen", "Huangying Zhan", "Hairong Yin", "Yi Xu", "Philippos Mordohai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4bd"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1017565, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a92c"}, "filepath": "data/2505.14671v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990308146596321, "type": "Poster", "name": "UniCTokens: Boosting Personalized Understanding and Generation via Unified Concept Tokens", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116748", "abstract": "Personalized models have demonstrated remarkable success in understanding and generating concepts provided by users. However, existing methods use separate concept tokens for understanding and generation, treating these tasks in isolation. This may result in limitations for generating images with complex prompts. For example, given the concept $\\langle bo\\rangle$, generating \"$\\langle bo\\rangle$ wearing its hat\" without additional textual descriptions of its hat. We call this kind of generation personalized knowledge-driven generation. To address the limitation, we present UniCTokens, a novel framework that effectively integrates personalized information into a unified vision language model (VLM) for understanding and generation. UniCTokens trains a set of unified concept tokens to leverage complementary semantics, boosting two personalized tasks. Moreover, we propose a progressive training strategy with three stages: understanding warm-up, bootstrapping generation from understanding, and deepening understanding from generation to enhance mutual benefits between both tasks. To quantitatively evaluate the unified VLM personalization, we present UnifyBench, the first benchmark for assessing concept understanding, concept generation, and knowledge-driven generation. Experimental results on UnifyBench indicate that UniCTokens shows competitive performance compared to leading methods in concept understanding, concept generation, and achieving state-of-the-art results in personalized knowledge-driven generation. Our research demonstrates that enhanced understanding improves generation, and the generation process can yield valuable insights into understanding. 
Our code and dataset will be released.", "arxiv_id": "2505.14671v3", "arxiv_authors": ["Ruichuan An", "Sihan Yang", "Renrui Zhang", "Zijun Shen", "Ming Lu", "Gaole Dai", "Hao Liang", "Ziyu Guo", "Shilin Yan", "Yulin Luo", "Bocheng Zou", "Chaoqun Yang", "Wentao Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4be"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3832810, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a92d"}, "filepath": "data/2507.21545v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992826453404269, "type": "Poster", "name": "UniDomain: Pretraining a Unified PDDL Domain from Real-World Demonstrations for Generalizable Robot Task Planning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116644", "abstract": "Robotic task planning in real-world environments requires reasoning over implicit constraints from language and vision. While LLMs and VLMs offer strong priors, they struggle with long-horizon structure and symbolic grounding. Existing methods that combine LLMs with symbolic planning often rely on handcrafted or narrow domains, limiting generalization. We propose UniDomain, a framework that pre-trains a PDDL domain from robot manipulation demonstrations and applies it for online robotic task planning. It extracts atomic domains from 12,393 manipulation videos to form a unified domain with 3137 operators, 2875 predicates, and 16481 causal edges. Given a target class of tasks, it retrieves relevant atomics from the unified domain and systematically fuses them into high-quality meta-domains for zero-shot planning. Experiments on diverse real-world tasks show that UniDomain solves complex, unseen tasks in a zero-shot manner, achieving up to 58% higher task success and 160% improvement in plan optimality over state-of-the-art LLM and LLM-PDDL baselines.", "arxiv_id": "2507.21545v2", "arxiv_authors": ["Haoming Ye", "Yunxiao Xiao", "Cewu Lu", "Panpan Cai"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4bf"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1084545, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a92e"}, "filepath": "data/2505.03318v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998357156234193, "type": "Poster", "name": "Unified Multimodal Chain-of-Thought Reward Model through Reinforcement Fine-Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117965", "abstract": "Recent advances in multimodal Reward Models (RMs) have shown significant promise in delivering reward signals to align vision models with human preferences. However, current RMs are generally restricted to providing direct responses or engaging in shallow reasoning processes with limited depth, often leading to inaccurate reward signals. We posit that incorporating explicit long chains of thought (CoT) into the reward reasoning process can significantly strengthen their reliability and robustness. 
Furthermore, we believe that once RMs internalize CoT reasoning, their direct response accuracy can also be improved through implicit reasoning capabilities. To this end, this paper proposes UnifiedReward-Think, the first unified multimodal CoT-based reward model, capable of multi-dimensional, step-by-step long-chain reasoning for both visual understanding and generation reward tasks. Specifically, we adopt an exploration-driven reinforcement fine-tuning approach to elicit and incentivize the model's latent complex reasoning ability: (1) We first use a small amount of image generation preference data to distill the reasoning process of GPT-4o, which is then used for the model's cold start to learn the format and structure of CoT reasoning. (2) Subsequently, by leveraging the model's prior knowledge and generalization capabilities, we prepare large-scale unified multimodal preference data to elicit the model's reasoning process across various vision tasks. During this phase, correct reasoning outputs are retained for rejection sampling to refine the model, (3) while incorrectly predicted samples are finally used for Group Relative Policy Optimization (GRPO)-based reinforcement fine-tuning, enabling the model to explore diverse reasoning paths and optimize for correct and robust solutions. Extensive experiments confirm that incorporating long CoT reasoning significantly enhances the accuracy of reward signals. Notably, after mastering CoT reasoning, the model exhibits implicit reasoning capabilities, allowing it to surpass existing baselines even without explicit reasoning traces.", "arxiv_id": "2505.03318v2", "arxiv_authors": ["Yibin Wang", "Zhimin Li", "Yuhang Zang", "Chunyu Wang", "Qinglin Lu", "Cheng Jin", "Jiaqi Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1607508, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a92f"}, "filepath": "data/2510.19307v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996861734066967, "type": "Poster", "name": "Unified Reinforcement and Imitation Learning for Vision-Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119665", "abstract": "Vision-Language Models (VLMs) have achieved remarkable progress, yet their large scale often renders them impractical for resource-constrained environments. This paper introduces Unified Reinforcement and Imitation Learning (RIL), a novel and efficient training algorithm designed to create powerful, lightweight VLMs. RIL distinctively combines the strengths of reinforcement learning with adversarial imitation learning. This enables smaller student VLMs not only to mimic the sophisticated text generation of large teacher models but also to systematically improve their generative capabilities through reinforcement signals. Key to our imitation framework is an LLM-based discriminator that adeptly distinguishes between student and teacher outputs, complemented by guidance from multiple large teacher VLMs to ensure diverse learning. This unified learning strategy, leveraging both reinforcement and imitation, empowers student models to achieve significant performance gains, making them competitive with leading closed-source VLMs. 
Extensive experiments on diverse vision-language benchmarks demonstrate that RIL significantly narrows the performance gap with state-of-the-art open- and closed-source VLMs and, in several instances, surpasses them.", "arxiv_id": "2510.19307v1", "arxiv_authors": ["Byung-Kwan Lee", "Ryo Hachiuma", "Yong Man Ro", "Yu-Chiang Frank Wang", "Yueh-Hua Wu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.658Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 972222, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a930"}, "filepath": "data/2504.01792v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993335417153342, "type": "Poster", "name": "Unified Vision Transformer with Native Resolution", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118217", "abstract": "Conventional Vision Transformer streamlines visual modeling by employing a uniform input resolution, which underestimates the inherent variability of natural visual data and incurs a cost in spatial-contextual fidelity. While preliminary explorations have superficially investigated native resolution modeling, existing works still lack systematic training recipe from the visual representation perspective. To bridge this gap, we introduce Unified Vision Transformer with Native Resolution, i.e. UniViTAR, a family of homogeneous vision foundation models tailored for unified visual modality and native resolution scenario in the era of multimodal. Our framework first conducts architectural upgrades to the vanilla paradigm by integrating multiple advanced components. Building upon these improvements, a progressive training paradigm is introduced, which strategically combines two core mechanisms: (1) resolution curriculum learning, transitioning from fixed-resolution pretraining to native resolution tuning, thereby leveraging ViT\u2019s inherent adaptability to variable-length sequences, and (2) visual modality adaptation via inter-batch image-video switching, which balances computational efficiency with enhanced temporal reasoning. In parallel, a hybrid training framework further synergizes sigmoid-based contrastive loss with feature distillation from a frozen teacher model, thereby accelerating early-stage convergence. Finally, trained exclusively on public accessible image-caption data, our UniViTAR family across multiple model scales from 0.3B to 1B achieves state-of-the-art performance on a wide variety of visual-related tasks. 
The code and models will be available soon.", "arxiv_id": "2504.01792v2", "arxiv_authors": ["Limeng Qiao", "Yiyang Gan", "Bairui Wang", "Jie Qin", "Shuang Xu", "Siqi Yang", "Lin Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2074707, "mime_type": "image/png", "width": 4134, "height": 5847, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a931"}, "filepath": "data/2510.18825v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995038032822798, "type": "Poster", "name": "Unifying and Enhancing Graph Transformers via a Hierarchical Mask Framework", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117123", "abstract": "Graph Transformers (GTs) have emerged as a powerful paradigm for graph representation learning due to their ability to model diverse node interactions. However, existing GTs often rely on intricate architectural designs tailored to specific interactions, limiting their flexibility. To address this, we propose a unified hierarchical mask framework that reveals an underlying equivalence between model architecture and attention mask construction. This framework enables a consistent modeling paradigm by capturing diverse interactions through carefully designed attention masks. Theoretical analysis under this framework demonstrates that the probability of correct classification positively correlates with the receptive field size and label consistency, leading to a fundamental design principle: An effective attention mask should ensure both a sufficiently large receptive field and a high level of label consistency. While no single existing mask satisfies this principle across all scenarios, our analysis reveals that hierarchical masks offer complementary strengths\u2014motivating their effective integration. Then, we introduce M$^3$Dphormer, a Mixture-of-Experts based Graph Transformer with Multi-Level Masking and Dual Attention Computation. M$^3$Dphormer incorporates three theoretically grounded hierarchical masks and employs a bi-level expert routing mechanism to adaptively integrate multi-level interaction information. To ensure scalability, we further introduce a dual attention computation scheme that dynamically switches between dense and sparse modes based on local mask sparsity. Extensive experiments across multiple benchmarks demonstrate that M$^3$Dphormer achieves state-of-the-art performance, validating the effectiveness of our unified framework and model design.", "arxiv_id": "2510.18825v1", "arxiv_authors": ["Yujie Xing", "Xiao Wang", "Bin Wu", "Hai Huang", "Chuan Shi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1078845, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a932"}, "filepath": "data/2506.05280v3.png", "tags": [], "_media_type": "image", "_rand": 0.9993787570932032, "type": "Poster", "name": "Unifying Appearance Codes and Bilateral Grids for Driving Scene Gaussian Splatting", "virtualsite_url": 
"https://neurips.cc/virtual/2025/poster/115990", "abstract": "Neural rendering techniques, including NeRF and Gaussian Splatting (GS), rely on photometric consistency to produce high-quality reconstructions. However, in real-world scenarios, it is challenging to guarantee perfect photometric consistency in acquired images. Appearance codes have been widely used to address this issue, but their modeling capability is limited, as a single code is applied to the entire image. Recently, the bilateral grid was introduced to perform pixel-wise color mapping, but it is difficult to optimize and constrain effectively. In this paper, we propose a novel multi-scale bilateral grid that unifies appearance codes and bilateral grids. We demonstrate that this approach significantly improves geometric accuracy in dynamic, decoupled autonomous driving scene reconstruction, outperforming both appearance codes and bilateral grids. This is crucial for autonomous driving, where accurate geometry is important for obstacle avoidance and control. Our method shows strong results across four datasets: Waymo, NuScenes, Argoverse, and PandaSet. We further demonstrate that the improvement in geometry is driven by the multi-scale bilateral grid, which effectively reduces floaters caused by photometric inconsistency.", "arxiv_id": "2506.05280v3", "arxiv_authors": ["Nan Wang", "Yuantao Chen", "Lixing Xiao", "Weiqing Xiao", "Bohan Li", "Zhaoxi Chen", "Chongjie Ye", "Shaocong Xu", "Saining Zhang", "Ziyang Yan", "Pierre Merriaux", "Lei Lei", "Tianfan Xue", "Hao Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1965231, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a933"}, "filepath": "data/2505.14682v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993760264165369, "type": "Poster", "name": "UniGen: Enhanced Training & Test-Time Strategies for Unified Multimodal Understanding and Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116520", "abstract": "We introduce UniGen, a unified multimodal large language model (MLLM) capable of image understanding and generation. We study the full training pipeline of UniGen from a data-centric perspective, including multi-stage pre-training, supervised fine-tuning, and direct preference optimization. More importantly, we propose a new Chain-of-Thought Verification (CoT-V) strategy for test-time scaling, which significantly boosts UniGen's image generation quality using a simple Best-of-N test-time strategy. Specifically, CoT-V enables UniGen to act as both image generator and verifier at test time, assessing the semantic alignment between a text prompt and its generated image in a step-by-step CoT manner. Trained entirely on open-source datasets across all stages, UniGen achieves state-of-the-art performance on a range of image understanding and generation benchmarks, with a final score of 0.78 on GenEval and 85.19 on DPG-Bench. 
Through extensive ablation studies, our work provides actionable insights and addresses key challenges in the full life cycle of building unified MLLMs, contributing meaningful directions for future research.", "arxiv_id": "2505.14682v1", "arxiv_authors": ["Rui Tian", "Mingfei Gao", "Mingze Xu", "Jiaming Hu", "Jiasen Lu", "Zuxuan Wu", "Yinfei Yang", "Afshin Dehghan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1039471, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a934"}, "filepath": "data/2509.16170v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995714449379056, "type": "Poster", "name": "UniMRSeg: Unified Modality-Relax Segmentation via Hierarchical Self-Supervised Compensation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119265", "abstract": "Multi-modal image segmentation faces real-world deployment challenges from incomplete/corrupted modalities degrading performance. While existing methods address training-inference modality gaps via specialized per-combination models, they introduce high deployment costs by requiring exhaustive model subsets and model-modality matching. In this work, we propose a unified modality-relax segmentation network (UniMRSeg) through hierarchical self-supervised compensation (HSSC). Our approach hierarchically bridges representation gaps between complete and incomplete modalities across input, feature and output levels. First, we adopt modality reconstruction with the hybrid shuffled-masking augmentation, encouraging the model to learn the intrinsic modality characteristics and generate meaningful representations for missing modalities through cross-modal fusion. Next, modality-invariant contrastive learning implicitly compensates the feature space distance among incomplete-complete modality pairs. Furthermore, the proposed lightweight reverse attention adapter explicitly compensates for the weak perceptual semantics in the frozen encoder. Last, UniMRSeg is fine-tuned under the hybrid consistency constraint to ensure stable prediction under all modality combinations without large performance fluctuations. Without bells and whistles, UniMRSeg significantly outperforms the state-of-the-art methods under diverse missing modality scenarios on MRI-based brain tumor segmentation, RGB-D semantic segmentation, and RGB-D/T salient object segmentation. 
The code will be released to facilitate further research.", "arxiv_id": "2509.16170v1", "arxiv_authors": ["Xiaoqi Zhao", "Youwei Pang", "Chenyang Yu", "Lihe Zhang", "Huchuan Lu", "Shijian Lu", "Georges El Fakhri", "Xiaofeng Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1076993, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a935"}, "filepath": "data/2505.23566v4.png", "tags": [], "_media_type": "image", "_rand": 0.9998623818035205, "type": "Poster", "name": "Uni-MuMER: Unified Multi-Task Fine-Tuning of Vision-Language Model for Handwritten Mathematical Expression Recognition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116052", "abstract": "Handwritten Mathematical Expression Recognition (HMER) remains a persistent challenge in Optical Character Recognition(OCR) due to the inherent freedom of symbol layout and variability in handwriting styles. Prior methods have faced performance bottlenecks, proposing isolated architectural modifications that are difficult to integrate coherently into a unified framework. Meanwhile, recent advances in pretrained vision-language models (VLMs) have demonstrated strong cross-task generalization, offering a promising foundation for developing unified solutions. In this paper, we introduce Uni-MuMER, which fully fine-tunes the Qwen2.5-VL-3B model for the HMER task without modifying its architecture, effectively injecting domain-specific knowledge into a generalist framework. Our method integrates three data-driven tasks: Tree-Aware Chain-of-Thought (Tree-CoT) for structured spatial reasoning, Error-Driven Learning (EDL) for reducing confusion among visually similar characters, and Symbol Counting (SC) for improving recognition consistency in long expressions. Experiments on the CROHME and HME100K datasets show that Uni-MuMER achieves new state-of-the-art performance, surpassing the best lightweight specialized model SSAN by 16.31\\% and the top-performing VLM Gemini2.5-flash by 24.42\\% in the zero-shot setting.", "arxiv_id": "2505.23566v4", "arxiv_authors": ["Yu Li", "Jin Jiang", "Jianhua Zhu", "Shuai Peng", "Baole Wei", "Yuxuan Zhou", "Liangcai Gao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1177990, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a936"}, "filepath": "data/2509.18094v3.png", "tags": [], "_media_type": "image", "_rand": 0.9997443745863324, "type": "Poster", "name": "UniPixel: Unified Object Referring and Segmentation for Pixel-Level Visual Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120066", "abstract": "Recent advances in Large Multi-modal Models (LMMs) have demonstrated their remarkable success as general-purpose multi-modal assistants, with particular focuses on holistic image- and video-language understanding. 
Conversely, less attention has been given to scaling fine-grained pixel-level understanding capabilities, where the models are expected to realize pixel-level alignment between visual signals and language semantics. Some previous studies have applied LMMs to related tasks such as region-level captioning and referring expression segmentation, however, these models are limited to performing either referring or segmentation tasks independently and fail to integrate these fine-grained perception capabilities into visual reasoning. To bridge this gap, we propose UniPixel, a large multi-modal model capable of flexibly comprehending visual prompt inputs and generating mask-grounded responses. Our model distinguishes itself by seamlessly integrating pixel-level perception with general visual understanding capabilities. Specifically, UniPixel processes visual prompts and generates relevant masks on demand, and performs subsequent reasoning conditioning on these intermediate pointers during inference, thereby enabling fine-grained pixel-level reasoning. The effectiveness of the proposed method has been verified on 10 benchmarks across a diverse set of tasks, including pixel-level referring/segmentation and object-centric understanding in images/videos. A novel PixelQA task is also introduced to verify the significance of our method. Code and models will be publicly available.", "arxiv_id": "2509.18094v3", "arxiv_authors": ["Ye Liu", "Zongyang Ma", "Junfu Pu", "Zhongang Qi", "Yang Wu", "Ying Shan", "Chang Wen Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4504866, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a937"}, "filepath": "data/2506.15673v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996548112144994, "type": "Poster", "name": "UniRelight: Learning Joint Decomposition and Synthesis for Video Relighting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120215", "abstract": "We address the challenge of relighting a single image or video, a task that demands precise scene intrinsic understanding and high-quality light transport synthesis. Existing end-to-end relighting models are often limited by the scarcity of paired multi-illumination data, restricting their ability to generalize across diverse scenes. Conversely, two-stage pipelines that combine inverse and forward rendering can mitigate data requirements but are susceptible to error accumulation and often fail to produce realistic outputs under complex lighting conditions or with sophisticated materials. In this work, we introduce a general-purpose approach that jointly estimates albedo and synthesizes relit outputs in a single pass, harnessing the generative capabilities of video diffusion models. This joint formulation enhances implicit scene comprehension and facilitates the creation of realistic lighting effects and intricate material interactions, such as shadows, reflections, and transparency. 
Trained on synthetic multi-illumination data and extensive automatically labeled real-world videos, our model demonstrates strong generalization across diverse domains and surpasses previous methods in both visual fidelity and temporal consistency.", "arxiv_id": "2506.15673v1", "arxiv_authors": ["Kai He", "Ruofan Liang", "Jacob Munkberg", "Jon Hasselgren", "Nandita Vijaykumar", "Alexander Keller", "Sanja Fidler", "Igor Gilitschenski", "Zan Gojcic", "Zian Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4c9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 957002, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a938"}, "filepath": "data/2502.20321v3.png", "tags": [], "_media_type": "image", "_rand": 0.9995118393673383, "type": "Poster", "name": "UniTok: a Unified Tokenizer for Visual Generation and Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116864", "abstract": "Visual generative and understanding models typically rely on distinct tokenizers to process images, presenting a key challenge for unifying them within a single framework. Recent studies attempt to address this by connecting the training of VQVAE (for autoregressive generation) and CLIP (for understanding) to build a unified tokenizer. However, directly combining these training objectives has been observed to cause severe loss conflicts. In this paper, we show that reconstruction and semantic supervision do not inherently conflict. Instead, the underlying bottleneck stems from limited representational capacity of discrete token space. Building on these insights, we introduce UniTok, a unified tokenizer featuring a novel multi-codebook quantization mechanism that effectively scales up the vocabulary size and bottleneck dimension. In terms of final performance, UniTok sets a new record of 0.38 rFID and 78.6\\% zero-shot accuracy on ImageNet. Besides, UniTok can be seamlessly integrated into MLLMs to unlock native visual generation capability, without compromising the understanding performance. Additionally, we show that UniTok favors cfg-free generation, reducing gFID from 14.6 to 2.5 on ImageNet 256$\\times$256 benchmark. 
All codes and models have been made publicly available.", "arxiv_id": "2502.20321v3", "arxiv_authors": ["Chuofan Ma", "Yi Jiang", "Junfeng Wu", "Jihan Yang", "Xin Yu", "Zehuan Yuan", "Bingyue Peng", "Xiaojuan Qi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ca"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1106240, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a939"}, "filepath": "data/2509.21086v1.png", "tags": [], "_media_type": "image", "_rand": 0.999469447760688, "type": "Poster", "name": "UniTransfer: Video Concept Transfer via Progressive Spatio-Temporal Decomposition", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115547", "abstract": "Recent advancements in video generation models have enabled the creation of diverse and realistic videos, with promising applications in advertising and film production. However, as one of the essential tasks of video generation models, video concept transfer remains significantly challenging.Existing methods generally model video as an entirety, leading to limited flexibility and precision when solely editing specific regions or concepts. To mitigate this dilemma, we propose a novel architecture UniTransfer, which introduces both spatial and diffusion timestep decomposition in a progressive paradigm, achieving precise and controllable video concept transfer. Specifically, in terms of spatial decomposition, we decouple videos into three key components: the foreground subject, the background, and the motion flow. Building upon this decomposed formulation, we further introduce a dual-to-single-stream DiT-based architecture for supporting fine-grained control over different components in the videos. We also introduce a self-supervised pretraining strategy based on random masking to enhance the decomposed representation learning from large-scale unlabeled video data. Inspired by the Chain-of-Thought reasoning paradigm, we further revisit the denoising diffusion process and propose a Chain-of-Prompt (CoP) mechanism to achieve the timestep decomposition. We decompose the denoising process into three stages of different granularity and leverage large language models (LLMs) for stage-specific instructions to guide the generation progressively. We also curate an animal-centric video dataset called OpenAnimal to facilitate the advancement and benchmarking of research in video concept transfer. 
Extensive experiments demonstrate that our method achieves high-quality and controllable video concept transfer across diverse reference images and scenes, surpassing existing baselines in both visual fidelity and editability.", "arxiv_id": "2509.21086v1", "arxiv_authors": ["Guojun Lei", "Rong Zhang", "Chi Wang", "Tianhang Liu", "Hong Li", "Zhiyuan Ma", "Weiwei Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4cb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3930113, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a93a"}, "filepath": "data/2509.07530v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994976291550534, "type": "Poster", "name": "Universal Few-shot Spatial Control for Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116889", "abstract": "Spatial conditioning in pretrained text-to-image diffusion models has significantly improved fine-grained control over the structure of generated images. However, existing control adapters exhibit limited adaptability and incur high training costs when encountering novel spatial control conditions that differ substantially from the training tasks.To address this limitation, we propose Universal Few-Shot Control (UFC), a versatile few-shot control adapter capable of generalizing to novel spatial conditions.Given a few image-condition pairs of an unseen task and a query condition, UFC leverage the analogy between query and support conditions to construct task-specific control feature, instantiated by a matching mechanism and an update on a small set of task-specific parameters. Experiments on six spatial control tasks show that UFC, finetuned with only 30 annotated examples, achieves fine-grained control consistent with the spatial conditions.Notably, when finetuned with 0.1% of the full training data, UFC can even be on par with the fully supervised baseline, i.e., Uni-ControlNet, on the Normal, Depth, and Canny tasks.These results highlight UFC's effectiveness in controlling diffusion models when faced with novel spatial conditions.", "arxiv_id": "2509.07530v1", "arxiv_authors": ["Kiet T. Nguyen", "Chanhuyk Lee", "Donggyun Kim", "Dong Hoon Lee", "Seunghoon Hong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4cc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5259006, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a93b"}, "filepath": "data/2506.18883v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998453101749449, "type": "Poster", "name": "Universal Video Temporal Grounding with Generative Multi-modal Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119042", "abstract": "This paper presents a computational model for universal video temporal grounding, which accurately localizes temporal moments in videos based on natural language queries (e.g., questions or descriptions). 
Unlike existing methods that are often limited to specific video domains or durations, we propose **UniTime**, a robust and universal video grounding model leveraging the strong vision-language understanding capabilities of generative Multi-modal Large Language Models (MLLMs).Our model effectively handles videos of diverse views, genres, and lengths while comprehending complex language queries.The key contributions include:(i) We consider steering strong MLLMs for temporal grounding in videos. To enable precise timestamp outputs, we incorporate temporal information by interleaving timestamp tokens with video tokens.(ii) By training the model to handle videos with different input granularities through adaptive frame scaling, our approach achieves robust temporal grounding for both short and long videos.(iii) Comprehensive experiments show that UniTime outperforms state-of-the-art approaches in both zero-shot and dataset-specific finetuned settings across five public temporal grounding benchmarks.(iv) When employed as a preliminary moment retriever for long-form video question-answering (VideoQA), UniTime significantly improves VideoQA accuracy, highlighting its value for complex video understanding tasks.", "arxiv_id": "2506.18883v1", "arxiv_authors": ["Zeqian Li", "Shangzhe Di", "Zhonghua Zhai", "Weilin Huang", "Yanfeng Wang", "Weidi Xie"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4cd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1024775, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a93c"}, "filepath": "data/2505.22566v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992878473444626, "type": "Poster", "name": "Universal Visuo-Tactile Video Understanding for Embodied Interaction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117271", "abstract": "Tactile perception is essential for embodied agents to understand the physical attributes of objects that cannot be determined through visual inspection alone. While existing methods have made progress in visual and language modalities for physical understanding, they fail to effectively incorporate tactile information that provides crucial haptic feedback for real-world interaction. In this paper, we present VTV-LLM, the first multi-modal large language model that enables universal Visuo-Tactile Video (VTV) understanding, bridging the gap between tactile perception and natural language. To address the challenges of cross-sensor and cross-modal integration, we contribute VTV150K, a comprehensive dataset comprising 150,000 video frames from 100 diverse objects captured across three different tactile sensors (GelSight Mini, DIGIT, and Tac3D), annotated with four fundamental tactile attributes (hardness, protrusion, elasticity, and friction). We develop a novel three-stage training paradigm that includes VTV enhancement for robust visuo-tactile representation, VTV-text alignment for cross-modal correspondence, and text prompt finetuning for natural language generation. Our framework enables sophisticated tactile reasoning capabilities including feature assessment, comparative analysis, and scenario-based decision-making. 
Extensive experimental evaluations demonstrate that VTV-LLM achieves superior performance in tactile reasoning tasks, establishing a foundation for more intuitive human-machine interaction in tactile domains.", "arxiv_id": "2505.22566v1", "arxiv_authors": ["Yifan Xie", "Mingyang Li", "Shoujie Li", "Xingting Li", "Guangyu Chen", "Fei Ma", "Fei Richard Yu", "Wenbo Ding"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ce"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 952549, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a93d"}, "filepath": "data/2506.03195v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994465505881368, "type": "Poster", "name": "Unlabeled Data Improves Fine-Grained Image Zero-shot Classification with Multimodal LLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117692", "abstract": "Despite Multimodal Large Language Models (MLLMs) showing promising results on general zero-shot image classification tasks, fine-grained image classification remains challenging. It demands precise attention to subtle visual details to distinguish between visually similar subcategories\u2014details that MLLMs may easily overlook without explicit guidance. To address this, we introduce AutoSEP, an iterative self-supervised prompt learning framework designed to enhance MLLM fine-grained classification capabilities in a fully unsupervised manner. Our core idea is to leverage unlabeled data to learn a description prompt that guides MLLMs in identifying crucial discriminative features within an image, and boost classification accuracy. We developed an automatic self-enhancing prompt learning framework called AutoSEP to iteratively improve the description prompt using unlabeled data, based on instance-level classification scoring function. AutoSEP only requires black-box access to MLLMs, eliminating the need for any training or fine-tuning. We evaluate our approach on multiple fine-grained classification datasets. It consistently outperforms other unsupervised baselines, demonstrating the effectiveness of our self-supervised optimization framework. Notably, AutoSEP in average improves 13\\% over standard zero-shot classification and 5\\% over the best-performing baselines.", "arxiv_id": "2506.03195v1", "arxiv_authors": ["Yunqi Hong", "Sohyun An", "Andrew Bai", "Neil Y. C. Lin", "Cho-Jui Hsieh"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4cf"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.659Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 944077, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a93e"}, "filepath": "data/2505.18584v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994710137418008, "type": "Poster", "name": "Unleashing Diffusion Transformers for Visual Correspondence by Modulating Massive Activations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115725", "abstract": "Pre-trained stable diffusion models (SD) have shown great advances in visual correspondence. 
In this paper, we investigate the capabilities of Diffusion Transformers (DiTs) for accurate dense correspondence. Distinct from SD, DiTs exhibit a critical phenomenon in which very few feature activations exhibit significantly larger values than others, known as massive activations, leading to uninformative representations and significant performance degradation for DiTs.The massive activations consistently concentrate at very few fixed dimensions across all image patch tokens, holding little local information. We trace these dimension-concentrated massive activations and find that such concentration can be effectively localized by the zero-initialized Adaptive Layer Norm (AdaLN-zero).Building on these findings, we propose Diffusion Transformer Feature (DiTF), a training-free framework designed to extract semantic-discriminative features from DiTs. Specifically, DiTF employs AdaLN to adaptively localize and normalize massive activations with channel-wise modulation. In addition, we develop a channel discard strategy to further eliminate the negative impacts from massive activations. Experimental results demonstrate that our DiTF outperforms both DINO and SD-based models and establishes a new state-of-the-art performance for DiTs in different visual correspondence tasks (e.g., with +9.4% on Spair-71k and +4.4% on AP-10K-C.S.).", "arxiv_id": "2505.18584v1", "arxiv_authors": ["Chaofan Gan", "Yuanpeng Tu", "Xi Chen", "Tieyuan Chen", "Yuxi Li", "Mehrtash Harandi", "Weiyao Lin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1217927, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a93f"}, "filepath": "data/2506.05332v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997947691132447, "type": "Poster", "name": "Unleashing Hour-Scale Video Training for Long Video-Language Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120098", "abstract": "Recent long-form video-language understanding benchmarks have driven progress in video large multimodal models (Video-LMMs).However, the scarcity of well-annotated long videos has left the training of hour-long Video-LLMs underexplored. To close this gap, we present VideoMarathon, a large-scale hour-long video instruction-following dataset. This dataset includes around 9,700 hours of long videos sourced from diverse domains, ranging from 3 to 60 minutes per video. Specifically, it contains 3.3M high-quality QA pairs, spanning six fundamental topics: temporality, spatiality, object, action, scene, and event. Compared to existing video instruction datasets, VideoMarathon significantly extends training video durations up to 1 hour, and supports 22 diverse tasks requiring both short- and long-term video comprehension. Building on VideoMarathon, we further propose Hour-LLaVA, a powerful and efficient Video-LMM for hour-scale video-language modeling. Hour-LLaVA enables more effective hour-long video training and inference at 1-FPS sampling by leveraging a memory augmentation (MemAug) module, which adaptively integrates user question-relevant and spatiotemporal-informative semantics from a cached full video context. 
Empirically, Hour-LLaVA achieves the best performance on multiple long video-language benchmarks, demonstrating the superiority of both the VideoMarathon dataset and the Hour-LLaVA framework.", "arxiv_id": "2506.05332v1", "arxiv_authors": ["Jingyang Lin", "Jialian Wu", "Ximeng Sun", "Ze Wang", "Jiang Liu", "Yusheng Su", "Xiaodong Yu", "Hao Chen", "Jiebo Luo", "Zicheng Liu", "Emad Barsoum"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1058092, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a940"}, "filepath": "data/2509.15178v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995049552808207, "type": "Poster", "name": "Unleashing the Potential of Multimodal LLMs for Zero-Shot Spatio-Temporal Video Grounding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119064", "abstract": "Spatio-temporal video grounding (STVG) aims at localizing the spatio-temporal tube of a video, as specified by the input text query.In this paper, we utilize multimodal large language models (MLLMs) to explore a zero-shot solution in STVG.We reveal two key insights about MLLMs: (1) MLLMs tend to dynamically assign special tokens, referred to as \\textit{grounding tokens}, for grounding the text query; and(2) MLLMs often suffer from suboptimal grounding due to the inability to fully integrate the cues in the text query (\\textit{e.g.}, attributes, actions) for inference. Based on these insights, we propose a MLLM-based zero-shot framework for STVG, which includes novel decomposed spatio-temporal highlighting (DSTH) and temporal-augmented assembling (TAS) strategies to unleash the reasoning ability of MLLMs.The DSTH strategy first decouples the original query into attribute and action sub-queries for inquiring the existence of the target both spatially and temporally.It then uses a novel logit-guided re-attention (LRA) module to learn latent variables as spatial and temporal prompts, by regularizing token predictions for each sub-query.These prompts highlight attribute and action cues, respectively, directing the model's attention to reliable spatial and temporal related visual regions.In addition, as the spatial grounding by the attribute sub-query should be temporally consistent,we introduce the TAS strategy to assemble the predictions using the original video frames and the temporal-augmented frames as inputs to help improve temporal consistency.We evaluate our method on various MLLMs, and show that it outperforms SOTA methods on three common STVG benchmarks.", "arxiv_id": "2509.15178v1", "arxiv_authors": ["Zaiquan Yang", "Yuhao Liu", "Gerhard Hancke", "Rynson W. H. 
Lau"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1087495, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a941"}, "filepath": "data/2410.04224v3.png", "tags": [], "_media_type": "image", "_rand": 0.9990191833094454, "type": "Poster", "name": "Unleashing the Power of One-Step Diffusion based Image Super-Resolution via a Large-Scale Diffusion Discriminator", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120327", "abstract": "Diffusion models have demonstrated excellent performance for real-world image super-resolution (Real-ISR), albeit at high computational costs. Most existing methods are trying to derive one-step diffusion models from multi-step counterparts through knowledge distillation (KD) or variational score distillation (VSD). However, these methods are limited by the capabilities of the teacher model, especially if the teacher model itself is not sufficiently strong. To tackle these issues, we propose a new One-Step \\textbf{D}iffusion model with a larger-scale \\textbf{D}iffusion \\textbf{D}iscriminator for SR, called D$^3$SR. Our discriminator is able to distill noisy features from any time step of diffusion models in the latent space. In this way, our diffusion discriminator breaks through the potential limitations imposed by the presence of a teacher model. Additionally, we improve the perceptual loss with edge-aware DISTS (EA-DISTS) to enhance the model's ability to generate fine details. Our experiments demonstrate that, compared with previous diffusion-based methods requiring dozens or even hundreds of steps, our D$^3$SR attains comparable or even superior results in both quantitative metrics and qualitative evaluations. Moreover, compared with other methods, D$^3$SR achieves at least $3\\times$ faster inference speed and reduces parameters by at least 30\\%.", "arxiv_id": "2410.04224v3", "arxiv_authors": ["Jianze Li", "Jiezhang Cao", "Zichen Zou", "Xiongfei Su", "Xin Yuan", "Yulun Zhang", "Yong Guo", "Xiaokang Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1767160, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a942"}, "filepath": "data/2403.03881v4.png", "tags": [], "_media_type": "image", "_rand": 0.9999723559465833, "type": "Poster", "name": "Unlocking Dataset Distillation with Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117126", "abstract": "Dataset distillation seeks to condense datasets into smaller but highly representative synthetic samples. While diffusion models now lead all generative benchmarks, current distillation methods avoid them and rely instead on GANs or autoencoders, or, at best, sampling from a fixed diffusion prior. This trend arises because naive backpropagation through the long denoising chain leads to vanishing gradients, which prevents effective synthetic sample optimization. 
To address this limitation, we introduce LD3M, the first method to learn gradient-based distilled latents and class embeddings end-to-end through a pre-trained latent diffusion model. A linearly decaying skip connection, injected from the initial noisy state into every reverse step, preserves the gradient signal across dozens of timesteps without requiring diffusion weight fine-tuning. Across multiple ImageNet subsets at $128\\times128$ and $256\\times256$, LD3M improves downstream accuracy by up to 4.8 percentage points (1 IPC) and 4.2 points (10 IPC) over the prior state-of-the-art. The code for LD3M is provided at https://github.com/DOUBLE_BLIND/ld3m.", "arxiv_id": "2403.03881v4", "arxiv_authors": ["Brian B. Moser", "Federico Raue", "Sebastian Palacio", "Stanislav Frolov", "Andreas Dengel"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 973310, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a943"}, "filepath": "data/2510.03548v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997532114150951, "type": "Poster", "name": "Unmasking Puppeteers: Leveraging Biometric Leakage to Expose Impersonation in AI-Based Videoconferencing", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117596", "abstract": "AI-based talking-head videoconferencing systems reduce bandwidth by transmitting a latent representation of a speaker\u2019s pose and expression, which is used to synthesize frames on the receiver's end. However, these systems are vulnerable to \u201cpuppeteering\u201d attacks, where an adversary controls the identity of another person in real-time. Traditional deepfake detectors fail here, as all video content is synthetic. We propose a novel biometric defense that detects identity leakage in the transmitted latent representation. Our metric-learning approach disentangles identity cues from pose and expression, enabling detection of unauthorized swaps. Experiments across multiple talking-head models show that our method consistently outperforms prior defenses, operates in real time on consumer GPUs, and generalizes well to out-of-distribution data. 
By targeting the latent features shared during normal operation, our method offers a practical and robust safeguard against puppeteering.", "arxiv_id": "2510.03548v2", "arxiv_authors": ["Danial Samadi Vahdati", "Tai Duc Nguyen", "Ekta Prashnani", "Koki Nagano", "David Luebke", "Orazio Gallo", "Matthew Stamm"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1049908, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a944"}, "filepath": "data/2509.19003v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994036927230655, "type": "Poster", "name": "Unveiling Chain of Step Reasoning for Vision-Language Models with Fine-grained Rewards", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119237", "abstract": "Chain of thought reasoning has demonstrated remarkable success in large language models, yet its adaptation to vision-language reasoning remains an open challenge with unclear best practices. Existing attempts typically employ reasoning chains at a coarse-grained level, which struggles to perform fine-grained structured reasoning and, more importantly, are difficult to evaluate the reward and quality of intermediate reasoning. In this work, we delve into chain of step reasoning for vision-language models, enabling assessing reasoning step quality accurately and leading to effective reinforcement learning and inference-time scaling with fine-grained rewards. We present a simple, effective, and fully transparent framework, including the step-level reasoning data, process reward model (PRM), and reinforcement learning training. With the proposed approaches, our models set strong baselines with consistent improvements on challenging vision-language benchmarks. More importantly, we conduct a thorough empirical analysis and ablation study, unveiling the impact of each component and several intriguing properties of inference-time scaling. We believe this paper serves as a baseline for vision-language models and offers insights into more complex multimodal reasoning. Our dataset, PRM, and code will be made publicly available.", "arxiv_id": "2509.19003v1", "arxiv_authors": ["Honghao Chen", "Xingzhou Lou", "Xiaokun Feng", "Kaiqi Huang", "Xinlong Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1080132, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a945"}, "filepath": "data/2412.02542v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991393978082842, "type": "Poster", "name": "Unveiling Concept Attribution in Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117005", "abstract": "Diffusion models have shown remarkable abilities in generating realistic and high-quality images from text prompts. However, a trained model remains largely black-box; little do we know about the roles of its components in exhibiting a concept such as objects or styles. 
Recent works employ causal tracing to localize knowledge-storing layers in generative models without showing how other layers contribute to the target concept. In this work, we approach diffusion models' interpretability problem from a more general perspective and pose a question: \\textit{``How do model components work jointly to demonstrate knowledge?''}. To answer this question, we decompose diffusion models using component attribution, systematically unveiling the importance of each component (specifically the model parameter) in generating a concept. The proposed framework, called \\textbf{C}omponent \\textbf{A}ttribution for \\textbf{D}iffusion Model (CAD), discovers the localization of concept-inducing (positive) components, while interestingly uncovers another type of components that contribute negatively to generating a concept, which is missing in the previous knowledge localization work. Based on this holistic understanding of diffusion models, we present and empirically evaluate one utility of component attribution in controlling the generation process. Specifically, we introduce two fast, inference-time model editing algorithms, CAD-Erase and CAD-Amplify; in particular, CAD-Erase enables erasure and CAD-Amplify allows amplification of a generated concept by ablating the positive and negative components, respectively, while retaining knowledge of other concepts. Extensive experimental results validate the significance of both positive and negative components pinpointed by our framework, demonstrating the potential of providing a complete view of interpreting generative models.", "arxiv_id": "2412.02542v2", "arxiv_authors": ["Quang H. Nguyen", "Hoang Phan", "Khoa D. Doan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1760502, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a946"}, "filepath": "data/2510.23478v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998595816323227, "type": "Poster", "name": "UrbanIng-V2X: A Large-Scale Multi-Vehicle, Multi-Infrastructure Dataset Across Multiple Intersections for Cooperative Perception", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121507", "abstract": "Recent cooperative perception datasets have played a crucial role in advancing smart mobility applications by enabling information exchange between intelligent agents, helping to overcome challenges such as occlusions and improving overall scene understanding. While some existing real-world datasets incorporate both vehicle-to-vehicle and vehicle-to-infrastructure interactions, they are typically limited to a single intersection or a single vehicle. A comprehensive perception dataset featuring multiple connected vehicles and infrastructure sensors across several intersections remains unavailable, limiting the benchmarking of algorithms in diverse traffic environments. Consequently, overfitting can occur, and models may demonstrate misleadingly high performance due to similar intersection layouts and traffic participant behavior. 
To address this gap, we introduce UrbanIng-V2X, the first large-scale, multi-modal dataset supporting cooperative perception involving vehicles and infrastructure sensors deployed across three urban intersections in Ingolstadt, Germany. UrbanIng-V2X consists of 34 temporally aligned and spatially calibrated sensor sequences, each lasting 20 seconds. All sequences contain recordings from one of three intersections, involving two vehicles and up to three infrastructure-mounted sensor poles operating in coordinated scenarios. In total, UrbanIng-V2X provides data from 12 vehicle-mounted RGB cameras, 2 vehicle LiDARs, 17 infrastructure thermal cameras, and 12 infrastructure LiDARs. All sequences are annotated at a frequency of 10 Hz with 3D bounding boxes spanning 13 object classes, resulting in approximately 712k annotated instances across the dataset. We provide comprehensive evaluations using state-of-the-art cooperative perception methods and publicly release the codebase, dataset, HD map, and a digital twin of the complete data collection environment.", "arxiv_id": "2510.23478v1", "arxiv_authors": ["Karthikeyan Chandra Sekaran", "Markus Geisler", "Dominik R\u00f6\u00dfle", "Adithya Mohan", "Daniel Cremers", "Wolfgang Utschick", "Michael Botsch", "Werner Huber", "Torsten Sch\u00f6n"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 964519, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a947"}, "filepath": "data/2503.18414v1.png", "tags": [], "_media_type": "image", "_rand": 0.999200228621049, "type": "Poster", "name": "U-REPA: Aligning Diffusion U-Nets to ViTs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116530", "abstract": "Representation Alignment (REPA) that aligns Diffusion Transformer (DiT) hidden-states with ViT visual encoders has proven highly effective in DiT training, demonstrating superior convergence properties, but it has not been validated on the canonical diffusion U-Net architecture that shows faster convergence compared to DiTs. However, adapting REPA to U-Net architectures presents unique challenges: (1) different block functionalities necessitate revised alignment strategies; (2) spatial-dimension inconsistencies emerge from U-Net's spatial downsampling operations; (3) space gaps between U-Net and ViT hinder the effectiveness of tokenwise alignment. To encounter these challenges, we propose U-REPA, a representation alignment paradigm that bridges U-Net hidden states and ViT features as follows: Firstly, we propose via observation that due to skip connection, the middle stage of U-Net is the best alignment option. Secondly, we propose upsampling of U-Net features after passing them through MLPs. Thirdly, we observe difficulty when performing tokenwise similarity alignment, and further introduces a manifold loss that regularizes the relative similarity between samples. Experiments indicate that the resulting U-REPA could achieve excellent generation quality and greatly accelerates the convergence speed. 
With CFG guidance interval, U-REPA could reach FID<1.5 in 200 epochs or 1M iterations on ImageNet 256 $\\times$ 256, and needs only half the total epochs to perform better than REPA under \\textit{sd-vae-ft-ema}.", "arxiv_id": "2503.18414v1", "arxiv_authors": ["Yuchuan Tian", "Hanting Chen", "Mengyu Zheng", "Yuchen Liang", "Chao Xu", "Yunhe Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4d9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1137871, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a948"}, "filepath": "data/2503.09949v3.png", "tags": [], "_media_type": "image", "_rand": 0.9999266209379485, "type": "Poster", "name": "UVE: Are MLLMs Unified Evaluators for AI-Generated Videos?", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121498", "abstract": "With the rapid growth of video generative models (VGMs), it is essential to develop reliable and comprehensive automatic metrics for AI-generated videos (AIGVs). Existing methods either use off-the-shelf models optimized for other tasks or rely on human assessment data to train specialized evaluators. These approaches are constrained to specific evaluation aspects and are difficult to scale with the increasing demands for finer-grained and more comprehensive evaluations. To address this issue, this work investigates the feasibility of using multimodal large language models (MLLMs) as a unified evaluator for AIGVs, leveraging their strong visual perception and language understanding capabilities. To evaluate the performance of automatic metrics in unified AIGV evaluation, we introduce a benchmark called UVE-Bench. UVE-Bench collects videos generated by state-of-the-art VGMs and provides pairwise human preference annotations across 15 evaluation aspects. Using UVE-Bench, we extensively evaluate 16 MLLMs. Our empirical results suggest that while advanced MLLMs (e.g., Qwen2VL-72B and InternVL2.5-78B) still lag behind human evaluators, they demonstrate promising ability in unified AIGV evaluation, significantly surpassing existing specialized evaluation methods. 
Additionally, we conduct an in-depth analysis of key design choices that impact the performance of MLLM-driven evaluators, offering valuable insights for future research on AIGV evaluation.", "arxiv_id": "2503.09949v3", "arxiv_authors": ["Yuanxin Liu", "Rui Zhu", "Shuhuai Ren", "Jiacong Wang", "Haoyuan Guo", "Xu Sun", "Lu Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4da"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1110713, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a949"}, "filepath": "data/2505.16797v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998180455662226, "type": "Poster", "name": "V2V: Scaling Event-Based Vision through Efficient Video-to-Voxel Simulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118917", "abstract": "Event-based cameras offer unique advantages such as high temporal resolution, high dynamic range, and low power consumption. However, the massive storage requirements and I/O burdens of existing synthetic data generation pipelines and the scarcity of real data prevent event-based training datasets from scaling up, limiting the development and generalization capabilities of event vision models. To address this challenge, we introduce Video-to-Voxel (V2V), an approach that directly converts conventional video frames into event-based voxel grid representations, bypassing the storage-intensive event stream generation entirely. V2V enables a 150\u00d7 reduction in storage requirements while supporting on-the-fly parameter randomization for enhanced model robustness. Leveraging this efficiency, we train several video reconstruction and optical flow estimation model architectures on 10,000 diverse videos totaling 52 hours\u2014an order of magnitude larger than existing event datasets, yielding substantial improvements.", "arxiv_id": "2505.16797v1", "arxiv_authors": ["Hanyue Lou", "Jinxiu Liang", "Minggui Teng", "Yi Wang", "Boxin Shi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4db"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1056384, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a94a"}, "filepath": "data/2411.10962v3.png", "tags": [], "_media_type": "image", "_rand": 0.9997791271860507, "type": "Poster", "name": "V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121426", "abstract": "Modern autonomous vehicle perception systems often struggle with occlusions and limited perception range. Previous studies have demonstrated the effectiveness of cooperative perception in extending the perception range and overcoming occlusions, thereby enhancing the safety of autonomous driving. 
In recent years, a series of cooperative perception datasets have emerged; however, these datasets primarily focus on cameras and LiDAR, neglecting 4D Radar\u2014a sensor used in single-vehicle autonomous driving to provide robust perception in adverse weather conditions. In this paper, to bridge the gap created by the absence of 4D Radar datasets in cooperative perception, we present V2X-Radar, the first large-scale, real-world multi-modal dataset featuring 4D Radar. V2X-Radar dataset is collected using a connected vehicle platform and an intelligent roadside unit equipped with 4D Radar, LiDAR, and multi-view cameras. The collected data encompasses sunny and rainy weather conditions, spanning daytime, dusk, and nighttime, as well as various typical challenging scenarios. The dataset consists of 20K LiDAR frames, 40K camera images, and 20K 4D Radar data, including 350K annotated boxes across five categories. To support various research domains, we have established V2X-Radar-C for cooperative perception, V2X-Radar-I for roadside perception, and V2X-Radar-V for single-vehicle perception. Furthermore, we provide comprehensive benchmarks across these three sub-datasets.", "arxiv_id": "2411.10962v3", "arxiv_authors": ["Lei Yang", "Xinyu Zhang", "Jun Li", "Chen Wang", "Jiaqi Ma", "Zhiying Song", "Tong Zhao", "Ziying Song", "Li Wang", "Mo Zhou", "Yang Shen", "Kai Wu", "Chen Lv"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4dc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.660Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1085252, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a94b"}, "filepath": "data/2505.19877v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997606227274475, "type": "Poster", "name": "Vad-R1: Towards Video Anomaly Reasoning via Perception-to-Cognition Chain-of-Thought", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119777", "abstract": "Recent advancements in reasoning capability of Multimodal Large Language Models (MLLMs) demonstrate its effectiveness in tackling complex visual tasks. However, existing MLLM-based Video Anomaly Detection (VAD) methods remain limited to shallow anomaly descriptions without deep reasoning. In this paper, we propose a new task named Video Anomaly Reasoning (VAR), which aims to enable deep analysis and understanding of anomalies in the video by requiring MLLMs to think explicitly before answering. To this end, we propose Vad-R1, an end-to-end MLLM-based framework for VAR. Specifically, we design a Perception-to-Cognition Chain-of-Thought (P2C-CoT) that simulates the human process of recognizing anomalies, guiding the MLLM to reason anomaly step-by-step. Based on the structured P2C-CoT, we construct Vad-Reasoning, a dedicated dataset for VAR. Furthermore, we propose an improved reinforcement learning algorithm AVA-GRPO, which explicitly incentivizes the anomaly reasoning capability of MLLMs through a self-verification mechanism with limited annotations. 
Experimental results demonstrate that Vad-R1 achieves superior performance, outperforming both open-source and proprietary models on VAD and VAR tasks.", "arxiv_id": "2505.19877v1", "arxiv_authors": ["Chao Huang", "Benfeng Wang", "Jie Wen", "Chengliang Liu", "Wei Wang", "Li Shen", "Xiaochun Cao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4dd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1131719, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a94c"}, "filepath": "data/2510.22693v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999184982979856, "type": "Poster", "name": "VADTree: Explainable Training-Free Video Anomaly Detection via Hierarchical Granularity-Aware Tree", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116838", "abstract": "Video anomaly detection (VAD) focuses on identifying anomalies in videos. Supervised methods demand substantial in-domain training data and fail to deliver clear explanation for anomalies. In contrast, training-free methods leverage the knowledge reserve and language interactivity of large pre-trained models to detect anomalies. However, the current fixed-length temporal window sampling approaches struggle to accurately capture anomalies with varying temporal spans.Therefore, we propose \\textbf{\\methodshort}~that utilizes a Hierarchical Granularity-aware Tree (HGTree) structure for adaptive sampling VAD. \\methodshort~ leverages the knowledge embedded in a pre-trained Generic Event Boundary Detection (GEBD) model to characterize potential anomaly event boundaries.Specifically, \\methodshort~first decomposes the video into an HGTree based on generic event nodes using boundary confidence, and performs adaptive coarse-fine stratification and redundancy removal. Then, the multi-dimensional priors are injected into the vision-language models (VLMs) to enhance the abnormal perception of the node-wise video description, and robust anomaly reasoning is achieved for generic event nodes based on the large language models (LLMs).Finally, an intra-cluster correlation method is used to integrate the multi-granular anomaly scores. Extensive experiments on UCF-Crime and XD-Violence datasets demonstrate that \\methodshort~achieves state-of-the-art performance in training-free settings while drastically reducing the amount of video samples. 
Our code is publicly available at\\url{https://anonymous.4open.science/r/\\methodshort-6E11/}.", "arxiv_id": "2510.22693v1", "arxiv_authors": ["Wenlong Li", "Yifei Xu", "Yuan Rao", "Zhenhua Wang", "Shuiguang Deng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4de"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1101512, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a94d"}, "filepath": "data/2510.11473v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998939907851889, "type": "Poster", "name": "VA-GS: Enhancing the Geometric Representation of Gaussian Splatting via View Alignment", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117322", "abstract": "3D Gaussian Splatting has recently emerged as an efficient solution for high-quality and real-time novel view synthesis. However, its capability for accurate surface reconstruction remains underexplored. Due to the discrete and unstructured nature of Gaussians, supervision based solely on image rendering loss often leads to inaccurate geometry and inconsistent multi-view alignment. In this work, we propose a novel method that enhances the geometric representation of 3D Gaussians through view alignment (VA). Specifically, we incorporate edge-aware image cues into the rendering loss to improve surface boundary delineation. To enforce geometric consistency across views, we introduce a visibility-aware photometric alignment loss that models occlusions and encourages accurate spatial relationships among Gaussians. To further mitigate ambiguities caused by lighting variations, we incorporate normal-based constraints to refine the spatial orientation of Gaussians and improve local surface estimation. Additionally, we leverage deep image feature embeddings to enforce cross-view consistency, enhancing the robustness of the learned geometry under varying viewpoints and illumination. Extensive experiments on standard benchmarks demonstrate that our method achieves state-of-the-art performance in both surface reconstruction and novel view synthesis. We will release our code to support future research.", "arxiv_id": "2510.11473v1", "arxiv_authors": ["Qing Li", "Huifang Feng", "Xun Gong", "Yu-Shen Liu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4df"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1085760, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a94e"}, "filepath": "data/2509.16567v1.png", "tags": [], "_media_type": "image", "_rand": 0.999607095384692, "type": "Poster", "name": "V-CECE: Visual Counterfactual Explanations via Conceptual Edits", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118380", "abstract": "Recent black-box counterfactual generation frameworks fail to take into account the semantic content of the proposed edits, while relying heavily on training to guide the generation process. 
We propose a novel, plug-and-play black-box counterfactual generation framework, which suggests step-by-step edits based on theoretical guarantees of optimal edits to produce human-level counterfactual explanations with zero training. Our framework utilizes a pre-trained image editing diffusion model, and operates without access to the internals of the classifier, leading to an explainable counterfactual generation process. Throughout our experimentation, we showcase the explanatory gap between human reasoning and neural model behavior by utilizing both Convolutional Neural Network (CNN), Vision Transformer (ViT) and Large Vision Language Model (LVLM) classifiers, substantiated through a comprehensive human evaluation.", "arxiv_id": "2509.16567v1", "arxiv_authors": ["Nikolaos Spanos", "Maria Lymperaiou", "Giorgos Filandrianos", "Konstantinos Thomas", "Athanasios Voulodimos", "Giorgos Stamou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1099630, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a94f"}, "filepath": "data/2505.12053v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998150699345423, "type": "Poster", "name": "VFRTok: Variable Frame Rates Video Tokenizer with Duration-Proportional Information Assumption", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115303", "abstract": "Modern video generation frameworks based on Latent Diffusion Models suffer from inefficiencies in tokenization due to the Frame-Proportional Information Assumption.Existing tokenizers provide fixed temporal compression rates, causing the computational cost of the diffusion model to scale linearly with the frame rate.The paper proposes the Duration-Proportional Information Assumption: the upper bound on the information capacity of a video is proportional to the duration rather than the number of frames.Based on this insight, the paper introduces VFRTok, a Transformer-based video tokenizer, that enables variable frame rate encoding and decoding through asymmetric frame rate training between the encoder and decoder.Furthermore, the paper proposes Partial Rotary Position Embeddings (RoPE) to decouple position and content modeling, which groups correlated patches into unified tokens.The Partial RoPE effectively improves content-awareness, enhancing the video generation capability.Benefiting from the compact and continuous spatio-temporal representation, VFRTok achieves competitive reconstruction quality and state-of-the-art generation fidelity while using only $1/8$ tokens compared to existing tokenizers.", "arxiv_id": "2505.12053v2", "arxiv_authors": ["Tianxiong Zhong", "Xingye Tian", "Boyuan Jiang", "Xuebo Wang", "Xin Tao", "Pengfei Wan", "Zhiwei Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1013410, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a950"}, "filepath": 
"data/2510.14032v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994415928478597, "type": "Poster", "name": "Vgent: Graph-based Retrieval-Reasoning-Augmented Generation For Long Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119823", "abstract": "Understanding and reasoning over long videos pose significant challenges for large video language models (LVLMs) due to the difficulty in processing intensive video tokens beyond context window and retaining long-term sequential information. Retrieval-Augmented Generation (RAG) has demonstrated effectiveness in processing long context for Large Language Models (LLMs); however, applying RAG to long video faces challenges such as disrupted temporal dependencies and inclusion of irrelevant information that can hinder accurate reasoning. To address these limitations, we propose Vgent, a novel \\textbf{graph-based retrieval-reasoning-augmented generation framework} to enhance LVLMs for long video understanding. Our approach introduces two key innovations: (i) It represents videos by structured graphs with semantic relationships across video clips preserved to improve retrieval effectiveness. (ii) It introduces an intermediate reasoning step to mitigate the reasoning limitation of LVLMs, which leverages structured verification to reduce retrieval noise and facilitate the explicit aggregation of relevant information across clips, resulting in more accurate and context-aware responses.We comprehensively evaluate our framework with various open-source LVLMs on three long-video understanding benchmarks. Our approach yielded an overall performance improvement of $3.0\\%\\sim 5.4\\%$ over base models on MLVU, and outperformed state-of-the-art video RAG methods by $8.6\\%$. Our code will be made publicly available\\footnote{Please refer to the \\href{https://anonymous.4open.science/r/Vgent-83E7}{ anonymous GitHub link} for access to the code.}.", "arxiv_id": "2510.14032v1", "arxiv_authors": ["Xiaoqian Shen", "Wenxuan Zhang", "Jun Chen", "Mohamed Elhoseiny"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1113688, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a951"}, "filepath": "data/2505.12549v2.png", "tags": [], "_media_type": "image", "_rand": 0.9995273204205422, "type": "Poster", "name": "VGGT-SLAM: Dense RGB SLAM Optimized on the SL(4) Manifold", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119817", "abstract": "We present VGGT-SLAM, a dense RGB SLAM system constructed by incrementally and globally aligning submaps created from the feed-forward scene reconstruction approach VGGT using only uncalibrated monocular cameras. While related works align submaps using similarity transforms (i.e., translation, rotation, and scale), we show that such approaches are inadequate in the case of uncalibrated cameras. In particular, we revisit the idea of reconstruction ambiguity, where given a set of uncalibrated cameras with no assumption on the camera motion or scene structure, the scene can only be reconstructed up to a 15-degrees-of-freedom projective transformation of the true geometry. 
This inspires us to recover a consistent scene reconstruction across submaps by optimizing over the SL(4) manifold, thus estimating 15-degrees-of-freedom homography transforms between sequential submaps while accounting for potential loop closure constraints. As verified by extensive experiments, we demonstrate that VGGT-SLAM achieves improved map quality using long video sequences that are infeasible for VGGT due to its high GPU requirements.", "arxiv_id": "2505.12549v2", "arxiv_authors": ["Dominic Maggio", "Hyungtae Lim", "Luca Carlone"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1034998, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a952"}, "filepath": "data/2506.10128v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994601937561923, "type": "Poster", "name": "ViCrit: A Verifiable Reinforcement Learning Proxy Task for Visual Perception in VLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116500", "abstract": "Reinforcement learning (RL) has shown great effectiveness for fine-tuning large language models (LLMs) using tasks that are challenging yet easily verifiable, such as math reasoning or code generation. However, extending this success to visual perception in vision\u2013language models (VLMs) has been impeded by the scarcity of vision-centric tasks that are simultaneously challenging and unambiguously verifiable. To this end, we introduce \textbf{ViCrit} (\textit{Visual Caption Hallucination Critic}), an RL proxy task that trains VLMs to localize a subtle, synthetic visual hallucination injected into paragraphs of human-written image captions. Starting from a 200-word caption, we inject a single, subtle visual description error\u2014altering a few words on objects, attributes, counts, or spatial relations\u2014and task the model to pinpoint the corrupted span given the image and the modified caption. This formulation preserves the full perceptual difficulty while providing a binary, exact-match reward that is easy to compute and unambiguous. Models trained with the \textbf{ViCrit Task} exhibit substantial gains across a variety of VL benchmarks. Crucially, the improvements transfer beyond natural-image training data to abstract image reasoning and visual math, showing promise of learning to perceive rather than merely memorizing seen objects. To facilitate evaluation, we further introduce \textbf{ViCrit-Bench}, a category-balanced diagnostic benchmark that systematically probes perception errors across diverse image domains and error types. 
Together, our results demonstrate that fine-grained hallucination criticism is an effective and generalizable objective for enhancing visual perception in VLMs.", "arxiv_id": "2506.10128v1", "arxiv_authors": ["Xiyao Wang", "Zhengyuan Yang", "Chao Feng", "Yongyuan Liang", "Yuhang Zhou", "Xiaoyu Liu", "Ziyi Zang", "Ming Li", "Chung-Ching Lin", "Kevin Lin", "Linjie Li", "Furong Huang", "Lijuan Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1204441, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a953"}, "filepath": "data/2506.18792v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992493752042251, "type": "Poster", "name": "ViDAR: Video Diffusion-Aware 4D Reconstruction From Monocular Inputs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116216", "abstract": "Dynamic Novel View Synthesis aims to generate photorealistic views of moving subjects from arbitrary viewpoints. This task is particularly challenging when relying on monocular video, where disentangling structure from motion is ill-posed and supervision is scarce. We introduce Video Diffusion-Aware Reconstruction (ViDAR), a novel 4D reconstruction framework that leverages personalised diffusion models to synthesise a pseudo multi-view supervision signal for training a Gaussian splatting representation. By conditioning on scene-specific features, ViDAR recovers fine-grained appearance details while mitigating artefacts introduced by monocular ambiguity. To address the spatio-temporal inconsistency of diffusion-based supervision, we propose a diffusion-aware loss function and a camera pose optimisation strategy that aligns synthetic views with the underlying scene geometry. Experiments on DyCheck, a challenging benchmark with extreme viewpoint variation, show that ViDAR outperforms all state-of-the-art baselines in visual quality and geometric consistency. We further highlight ViDAR\u2019s strong improvement over baselines on dynamic regions and provide a new benchmark to compare performance in reconstructing motion-rich parts of the scene.", "arxiv_id": "2506.18792v1", "arxiv_authors": ["Michal Nazarczuk", "Sibi Catley-Chandar", "Thomas Tanay", "Zhensong Zhang", "Gregory Slabaugh", "Eduardo P\u00e9rez-Pellitero"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4689142, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a954"}, "filepath": "data/2505.24838v1.png", "tags": [], "_media_type": "image", "_rand": 0.9992237003049249, "type": "Poster", "name": "VideoCAD: A Large-Scale Video Dataset for Learning UI Interactions and 3D Reasoning from CAD Software", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121820", "abstract": "Computer-Aided Design (CAD) is a time-consuming and complex process, requiring precise, long-horizon user interactions with intricate 3D interfaces. 
While recent advances in AI-driven user interface (UI) agents show promise, most existing datasets and methods focus on short, low-complexity tasks in mobile or web applications, failing to capture the demands of professional engineering tools. In this work, we introduce VideoCAD, the first attempt at engineering UI interaction learning for precision tasks. Specifically, VideoCAD is a large-scale synthetic dataset consisting of over 41K annotated video recordings of CAD operations, generated using an automated framework for collecting high-fidelity UI action data from human-made CAD designs. Compared to existing datasets, VideoCAD offers an order of magnitude higher complexity in UI interaction learning for real-world engineering tasks, having up to a $20\\times$ longer time horizon than other datasets. We show two important downstream applications of VideoCAD: learning UI interactions from professional precision 3D CAD tools and a visual question-answering (VQA) benchmark designed to evaluate multimodal large language models' (LLM) spatial reasoning and video understanding abilities. To learn the UI interactions, we propose VideoCADFormer - a state-of-the-art model in learning CAD interactions directly from video, which outperforms multiple behavior cloning baselines. Both VideoCADFormer and the VQA benchmark derived from VideoCAD reveal key challenges in the current state of video-based UI understanding, including the need for precise action grounding, multi-modal and spatial reasoning, and long-horizon dependencies. Dataset and code available at: https://github.com/BrandonMan123/VideoCAD.", "arxiv_id": "2505.24838v1", "arxiv_authors": ["Brandon Man", "Ghadi Nehme", "Md Ferdous Alam", "Faez Ahmed"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1106550, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a955"}, "filepath": "data/2505.15952v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996441309927052, "type": "Poster", "name": "VideoGameQA-Bench: Evaluating Vision-Language Models for Video Game Quality Assurance", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121740", "abstract": "With video games leading in entertainment revenues, optimizing game development workflows is critical to the industry\u2019s long-term success. Recent advances in vision-language models (VLMs) hold significant potential to automate and enhance various aspects of game development\u2014particularly video game quality assurance (QA), which remains one of the most labor-intensive processes with limited automation. To effectively measure VLM performance in video game QA tasks and evaluate their ability to handle real-world scenarios, there is a clear need for standardized benchmarks, as current ones fall short in addressing this domain. 
To bridge this gap, we introduce VideoGameQA-Bench - a comprehensive benchmark designed to encompass a wide range of game QA activities, including visual unit testing, visual regression testing, needle-in-a-haystack, glitch detection, and bug report generation for both images and videos.", "arxiv_id": "2505.15952v1", "arxiv_authors": ["Mohammad Reza Taesiri", "Abhijay Ghildyal", "Saman Zadtootaghaj", "Nabajeet Barman", "Cor-Paul Bezemer"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1028783, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a956"}, "filepath": "data/2505.01481v4.png", "tags": [], "_media_type": "image", "_rand": 0.9995357342967831, "type": "Poster", "name": "VideoHallu: Evaluating and Mitigating Multi-modal Hallucinations on Synthetic Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118334", "abstract": "Synthetic video generation has gained significant attention for its realism and broad applications, but remains prone to violations of common sense and physical laws. This highlights the need for reliable abnormality detectors that understand such principles and are robust to hallucinations. To address this, we introduce VideoHallu, a benchmark of over 3,000 video QA pairs built from synthetic videos generated by models like Veo2, Sora, and Kling, paired with expert-crafted counterintuitive QA to evaluate the critical thinking abilities of Multi-modal Large Language Models (MLLMs) on abnormalities that are perceptually obvious to humans but often hallucinated due to language priors. VideoHallu evaluates MLLMs' abnormality detection abilities with examples across alignment, consistency, commonsense, and physics. We benchmark SOTA MLLMs, including GPT-4o, Gemini-2.5-Pro, Qwen-2.5-VL, Video-R1, and VideoChat-R1.We observe that these models perform well on many real-world benchmarks like MVBench and MovieChat, but still struggle with basic physics-based and commonsense reasoning in synthetic videos. 
We further show that post-training with Group Relative Policy Optimization (GRPO), using curriculum learning on datasets combining video QA with counterintuitive commonsense and physics reasoning over real and synthetic videos, improves MLLMs\u2019 abnormality detection and critical thinking, demonstrating the value of targeted training for improving their understanding of commonsense and physical laws.", "arxiv_id": "2505.01481v4", "arxiv_authors": ["Zongxia Li", "Xiyang Wu", "Guangyao Shi", "Yubin Qin", "Hongyang Du", "Fuxiao Liu", "Tianyi Zhou", "Dinesh Manocha", "Jordan Lee Boyd-Graber"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.661Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1021963, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a957"}, "filepath": "data/2510.12422v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993485874444554, "type": "Poster", "name": "VideoLucy: Deep Memory Backtracking for Long Video Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117816", "abstract": "Recent studies have shown that agent-based systems leveraging large language models (LLMs) for key information retrieval and integration have emerged as a promising approach for long video understanding. However, these systems face two major challenges. First, they typically perform modeling and reasoning on individual frames, struggling to capture the temporal context of consecutive frames. Second, to reduce the cost of dense frame-level captioning, they adopt sparse frame sampling, which risks discarding crucial information. To overcome these limitations, we propose VideoLucy, a deep memory backtracking framework for long video understanding. Inspired by the human recollection process from coarse to fine, VideoLucy employs a hierarchical memory structure with progressive granularity. This structure explicitly defines the detail level and temporal scope of memory at different hierarchical depths. Through an agent-based iterative backtracking mechanism, VideoLucy systematically mines video-wide, question-relevant deep memories until sufficient information is gathered to provide a confident answer. This design enables effective temporal understanding of consecutive frames while preserving critical details. In addition, we introduce EgoMem, a new benchmark for long video understanding. EgoMem is designed to comprehensively evaluate a model's ability to understand complex events that unfold over time and capture fine-grained details in extremely long videos. Extensive experiments demonstrate the superiority of VideoLucy. Built on open-source models, VideoLucy significantly outperforms state-of-the-art methods on multiple long video understanding benchmarks, achieving performance even surpassing the latest proprietary models such as GPT-4o. 
Our code and dataset will be made publicly available.", "arxiv_id": "2510.12422v1", "arxiv_authors": ["Jialong Zuo", "Yongtai Deng", "Lingdong Kong", "Jingkang Yang", "Rui Jin", "Yiwei Zhang", "Nong Sang", "Liang Pan", "Ziwei Liu", "Changxin Gao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4e9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1068671, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a958"}, "filepath": "data/2506.14168v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991065405484689, "type": "Poster", "name": "VideoMAR: Autoregressive Video Generation with Continuous Tokens", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116570", "abstract": "Masked-based autoregressive models have demonstrated promising image generation capability in continuous space. However, their potential for video generation remains under-explored. In this paper, we propose \textbf{VideoMAR}, a concise and efficient decoder-only autoregressive image-to-video model with continuous tokens, composing temporal frame-by-frame and spatial masked generation.We first identify temporal causality and spatial bi-directionality as the first principle of video AR models, and propose the next-frame diffusion loss for the integration of mask and video generation. Besides, the huge cost and difficulty of long sequence autoregressive modeling is a basic but crucial issue. To this end, we propose the temporal short-to-long curriculum learning and spatial progressive resolution training, and employ progressive temperature strategy at inference time to mitigate the accumulation error. Furthermore, VideoMAR replicates several unique capacities of language models to video generation. It inherently bears high efficiency due to simultaneous temporal-wise KV cache and spatial-wise parallel generation, and presents the capacity of spatial and temporal extrapolation via 3D rotary embeddings. 
On the VBench-I2V benchmark, VideoMAR surpasses the previous state-of-the-art (Cosmos I2V) while requiring significantly fewer parameters ($9.3\\%$), training data ($0.5\\%$), and GPU resources ($0.2\\%$).", "arxiv_id": "2506.14168v2", "arxiv_authors": ["Hu Yu", "Biao Gong", "Hangjie Yuan", "DanDan Zheng", "Weilong Chai", "Jingdong Chen", "Kecheng Zheng", "Feng Zhao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ea"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5226909, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a959"}, "filepath": "data/2506.20601v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997940043557113, "type": "Poster", "name": "Video Perception Model for 3D Scene Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119253", "abstract": "3D scene synthesis has traditionally required expert knowledge and considerable manual effort; automating this process holds the potential to greatly advance applications in architectural design, robotics simulation, virtual reality, and gaming.Most recent 3D scene synthesis models tap into the commonsense knowledge encoded in Large Language Models (LLMs) or leverage strong appearance priors of modern image generation models.However, current LLMs exhibit limited 3D spatial reasoning ability,which hinders their ability to generate realistic and coherent 3D scenes.Meanwhile, image generation-based methods often suffer from multi-view inconsistencies.In this work, we introduce ${\\textbf{Vi}}$deo ${\\textbf{P}}$erception Models for 3D $\\textbf{Scene}$ synthesis ($\\textbf{VIPScene}$), a novel framework that uses video generation models to leverage the encoded commonsense knowledge of the 3D physical world. VIPScene accepts both text and image prompts and seamlessly integrates video generation, feedforward 3D reconstruction, and open-vocabulary perception models to semantically and geometrically analyze each object in a scene. This enables flexible scene synthesis with high realism and structural consistency.We further introduce $\\textbf{F}$irst-$\\textbf{P}$erson $\\textbf{V}$iew $\\textbf{Score}$ $\\textbf{(FPVScore)}$ for consistency and reality evaluation, utilizing continuous first-person perspective to capitalize on the reasoning ability of Multimodal Large Language Models.Extensive experiments show that VIPScene outperforms existing methods on indoor scene generation, and generalizes well to diverse room configurations. 
The code will be released.", "arxiv_id": "2506.20601v1", "arxiv_authors": ["Rui Huang", "Guangyao Zhai", "Zuria Bauer", "Marc Pollefeys", "Federico Tombari", "Leonidas Guibas", "Gao Huang", "Francis Engelmann"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4eb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1094793, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a95a"}, "filepath": "data/2503.21776v4.png", "tags": [], "_media_type": "image", "_rand": 0.9995564470488458, "type": "Poster", "name": "Video-R1: Reinforcing Video Reasoning in MLLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117305", "abstract": "Inspired by DeepSeek-R1's success in eliciting reasoning abilities through rule-based reinforcement learning (RL), we introduce Video-R1 as the first attempt to systematically explore the R1 paradigm for incentivizing video reasoning within multimodal large language models (MLLMs). However, directly applying RL training with the GRPO algorithm to video reasoning presents two primary challenges: (i) a lack of temporal modeling for video reasoning, and (ii) the scarcity of high-quality video-reasoning data. To address these issues, we first propose the T-GRPO algorithm, which encourages models to utilize temporal information in videos for reasoning. Additionally, instead of relying solely on video data, we incorporate high-quality image-reasoning data into the training process. We have constructed two datasets: Video-R1-CoT-165k for SFT cold start and Video-R1-260k for RL training, both comprising image and video data. Experimental results demonstrate that Video-R1 achieves significant improvements on video reasoning benchmarks such as VideoMMMU and VSI-Bench, as well as on general video benchmarks including MVBench and TempCompass, etc. Notably, Video-R1-7B attains a 37.1\\% accuracy on video spatial reasoning benchmark VSI-bench, surpassing the commercial proprietary model GPT-4o. All code, models, and data will be released.", "arxiv_id": "2503.21776v4", "arxiv_authors": ["Kaituo Feng", "Kaixiong Gong", "Bohao Li", "Zonghao Guo", "Yibing Wang", "Tianshuo Peng", "Junfei Wu", "Xiaoying Zhang", "Benyou Wang", "Xiangyu Yue"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ec"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1143692, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a95b"}, "filepath": "data/2411.13093v3.png", "tags": [], "_media_type": "image", "_rand": 0.9992144030282917, "type": "Poster", "name": "Video-RAG: Visually-aligned Retrieval-Augmented Long Video Comprehension", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118120", "abstract": "Existing large video-language models (LVLMs) struggle to comprehend long videos correctly due to limited context. To address this problem, fine-tuning long-context LVLMs and employing GPT-based agents have emerged as promising solutions. 
However, fine-tuning LVLMs would require extensive high-quality data and substantial GPU resources, while GPT-based agents would rely on proprietary models (e.g., GPT-4o). In this paper, we propose Video Retrieval-Augmented Generation (Video-RAG), a training-free and cost-effective pipeline that employs visually-aligned auxiliary texts to help facilitate cross-modality alignment while providing additional information beyond the visual content. Specifically, we leverage open-source external tools to extract visually-aligned information from pure video data (e.g., audio, optical character, and object detection), and incorporate the extracted information into an existing LVLM as auxiliary texts, alongside video frames and queries, in a plug-and-play manner. Our Video-RAG offers several key advantages: (i) lightweight with low computing overhead due to single-turn retrieval; (ii) easy implementation and compatibility with any LVLM; and (iii) significant, consistent performance gains across long video understanding benchmarks, including Video-MME, MLVU, and LongVideoBench. Notably, our model demonstrates superior performance over proprietary models like Gemini-1.5-Pro and GPT-4o when utilized with a 72B model.", "arxiv_id": "2411.13093v3", "arxiv_authors": ["Yongdong Luo", "Xiawu Zheng", "Xiao Yang", "Guilin Li", "Haojia Lin", "Jinfa Huang", "Jiayi Ji", "Fei Chao", "Jiebo Luo", "Rongrong Ji"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ed"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1709259, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a95c"}, "filepath": "data/2505.23656v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993904406089901, "type": "Poster", "name": "VideoREPA: Learning Physics for Video Generation through Relational Alignment with Foundation Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116051", "abstract": "Recent advancements in text-to-video (T2V) diffusion models have enabled high-fidelity and realistic video synthesis. However, current T2V models often struggle to generate physically plausible content due to their limited inherent ability to accurately understand physics. We found that while the representations within T2V models possess some capacity for physics understanding, they lag significantly behind those from recent video self-supervised learning methods. To this end, we propose a novel framework called {VideoREPA}, which distills physics understanding capability from video understanding foundation models into T2V models by aligning token-level relations. This closes the physics understanding gap and enable more physics-plausible generation. Specifically, we introduce the {Token Relation Distillation (TRD) loss}, leveraging spatio-temporal alignment to provide soft guidance suitable for finetuning powerful pre-trained T2V models\u2014a critical departure from prior representation alignment (REPA) methods. To our knowledge, VideoREPA is the first REPA method designed for finetuning T2V models and specifically for injecting physical knowledge. 
Empirical evaluations show that VideoREPA substantially enhances the physics commonsense of the baseline method, CogVideoX, achieving significant improvement on relevant benchmarks and demonstrating a strong capacity for generating videos consistent with intuitive physics. More video results are available at this anonymous link: https://anonymous.4open.science/r/VideoREPA-Video-Generation-26EB.", "arxiv_id": "2505.23656v1", "arxiv_authors": ["Xiangdong Zhang", "Jiaqi Liao", "Shaofeng Zhang", "Fanqing Meng", "Xiangpeng Wan", "Junchi Yan", "Yu Cheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ee"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3501922, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a95d"}, "filepath": "data/2505.12434v4.png", "tags": [], "_media_type": "image", "_rand": 0.9996370710262801, "type": "Poster", "name": "VideoRFT: Incentivizing Video Reasoning Capability in MLLMs via Reinforced Fine-Tuning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119996", "abstract": "Reinforcement fine-tuning (RFT) has shown great promise in achieving human-level reasoning capabilities of Large Language Models (LLMs), and has recently been extended to MLLMs. Nevertheless, reasoning about videos, which is a fundamental aspect of human intelligence, remains a persistent challenge due to the complex logic, temporal and causal structures inherent in video data. To fill this gap, we propose VideoRFT, a novel approach that extends the RFT paradigm to cultivate human-like video reasoning capabilities in MLLMs. VideoRFT follows the standard two-stage scheme in RFT: supervised fine-tuning (SFT) with chain-of-thought (CoT) annotations, followed by online reinforcement learning (RL) to improve generalization. A central challenge to achieve this in the video domain lies in the scarcity of large-scale, high-quality video CoT datasets. We address this by building a fully automatic CoT curation pipeline. First, we devise a cognition-inspired prompting strategy to elicit a reasoning LLM to generate preliminary CoTs based solely on rich, structured, and literal representations of video content. Subsequently, these CoTs are revised by a visual-language model conditioned on the actual video, ensuring visual consistency and reducing visual hallucinations. This pipeline results in two new datasets -- VideoRFT-CoT-102K for SFT and VideoRFT-RL-310K for RL. To further strengthen the RL phase, we introduce a novel semantic-consistency reward that explicitly promotes the alignment between textual reasoning and visual evidence. This reward encourages the model to produce coherent, context-aware reasoning outputs grounded in visual input. Extensive experiments show that VideoRFT achieves state-of-the-art performance on six video reasoning benchmarks. 
Code, model, and data will be released.", "arxiv_id": "2505.12434v4", "arxiv_authors": ["Qi Wang", "Yanrui Yu", "Ye Yuan", "Rui Mao", "Tianfei Zhou"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ef"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1074436, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a95e"}, "filepath": "data/2505.11842v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999622042827773, "type": "Poster", "name": "Video-SafetyBench: A Benchmark for Safety Evaluation of Video LVLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121473", "abstract": "The increasing deployment of Large Vision-Language Models (LVLMs) raises safety concerns under potential malicious inputs. However, existing multimodal safety evaluations primarily focus on model vulnerabilities exposed by static image inputs, ignoring the temporal dynamics of video that may induce distinct safety risks. To bridge this gap, we introduce Video-SafetyBench, the first comprehensive benchmark designed to evaluate the safety of LVLMs under video-text attacks. It comprises 2,264 video-text pairs spanning 48 fine-grained unsafe categories, each pairing a synthesized video with either a harmful query, which contains explicit malice, or a benign query, which appears harmless but triggers harmful behavior when interpreted alongside the video. To generate semantically accurate videos for safety evaluation, we design a controllable pipeline that decomposes video semantics into subject images (what is shown) and motion text (how it moves), which jointly guide the synthesis of query-relevant videos. To effectively evaluate uncertain or borderline harmful outputs, we propose RJScore, a novel LLM-based metric that incorporates the confidence of judge models and human-aligned decision threshold calibration. Extensive experiments show that benign-query video composition achieves average attack success rates of 67.2%, revealing consistent vulnerabilities to video-induced attacks. We believe Video-SafetyBench will catalyze future research into video-based safety evaluation and defense strategies.", "arxiv_id": "2505.11842v2", "arxiv_authors": ["Xuannan Liu", "Zekun Li", "Zheqi He", "Peipei Li", "Shuhan Xia", "Xing Cui", "Huaibo Huang", "Xi Yang", "Ran He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f0"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1513069, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a95f"}, "filepath": "data/2503.01739v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993905021441257, "type": "Poster", "name": "VideoUFO: A Million-Scale User-Focused Dataset for Text-to-Video Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121391", "abstract": "Text-to-video generative models convert textual prompts into dynamic visual content, offering wide-ranging applications in film production, gaming, and education. 
However, their real-world performance often falls short of user expectations. One key reason is that these models have not been trained on videos related to some topics users want to create. In this paper, we propose VideoUFO, the first Video dataset specifically curated to align with Users' FOcus in real-world scenarios. Beyond this, our VideoUFO also features: (1) minimal (0.29\\%) overlap with existing video datasets, and (2) videos searched exclusively via YouTube's official API under the Creative Commons license. These two attributes provide future researchers with greater freedom to broaden their training sources. The VideoUFO comprises over 1.09 million video clips, each paired with both a brief and a detailed caption (description). Specifically, through clustering, we first identify 1,291 user-focused topics from the million-scale real text-to-video prompt dataset, VidProM. Then, we use these topics to retrieve videos from YouTube, split the retrieved videos into clips, and generate both brief and detailed captions for each clip. After verifying the clips with specified topics, we are left with about 1.09 million video clips. Our experiments reveal that (1) current 16 text-to-video models do not achieve consistent performance across all user-focused topics; and (2) a simple model trained on VideoUFO outperforms others on worst-performing topics. The dataset and code are publicly available at https://huggingface.co/datasets/WenhaoWang/VideoUFO and https://github.com/WangWenhao0716/BenchUFO under the CC BY 4.0 License.", "arxiv_id": "2503.01739v2", "arxiv_authors": ["Wenhao Wang", "Yi Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f1"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5418864, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a960"}, "filepath": "data/2506.05284v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996831809904139, "type": "Poster", "name": "Video World Models with Long-term Spatial Memory", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118886", "abstract": "Emerging world models autoregressively generate video frames in response to actions, such as camera movements and text prompts, among other control signals. Due to limited temporal context window sizes, these models often struggle to maintain scene consistency during revisits, leading to severe forgetting of previously generated environments. Inspired by the mechanisms of human memory, we introduce a novel framework to enhancing long-term consistency of video world models through a geometry-grounded long-term spatial memory. Our framework includes mechanisms to store and retrieve information from the long-term spatial memory and we curate custom datasets to train and evaluate world models with explicitly stored 3D memory mechanisms. 
Our evaluations show improved quality, consistency, and context length compared to relevant baselines, paving the way towards long-term consistent world generation.", "arxiv_id": "2506.05284v1", "arxiv_authors": ["Tong Wu", "Shuai Yang", "Ryan Po", "Yinghao Xu", "Ziwei Liu", "Dahua Lin", "Gordon Wetzstein"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f2"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4458630, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a961"}, "filepath": "data/2505.19492v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998862610746723, "type": "Poster", "name": "ViewCraft3D: High-fidelity and View-Consistent 3D Vector Graphics Synthesis", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119843", "abstract": "3D vector graphics play a crucial role in various applications including 3D shape retrieval, conceptual design, and virtual reality interactions due to their ability to capture essential structural information with minimal representation.While recent approaches have shown promise in generating 3D vector graphics, they often suffer from lengthy processing times and struggle to maintain view consistency.To address these limitations, we propose VC3D (**V**iew**C**raft**3D**), an efficient method that leverages 3D priors to generate 3D vector graphics.Specifically, our approach begins with 3D object analysis, employs a geometric extraction algorithm to fit 3D vector graphics to the underlying structure, and applies view-consistent refinement process to enhance visual quality.Our comprehensive experiments demonstrate that VC3D outperforms previous methods in both qualitative and quantitative evaluations, while significantly reducing computational overhead. The resulting 3D sketches maintain view consistency and effectively capture the essential characteristics of the original objects.", "arxiv_id": "2505.19492v1", "arxiv_authors": ["Chuang Wang", "Haitao Zhou", "Ling Luo", "Qian Yu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f3"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1297156, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a962"}, "filepath": "data/2506.23513v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999834069058001, "type": "Poster", "name": "ViewPoint: Panoramic Video Generation with Pretrained Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119145", "abstract": "Panoramic video generation aims to synthesize 360-degree immersive videos, holding significant importance in the fields of VR, world models, and spatial intelligence. Existing works fail to synthesize high-quality panoramic videos due to the inherent modality gap between panoramic data and perspective data, which constitutes the majority of the training data for modern diffusion models. In this paper, we propose a novel framework utilizing pretrained perspective video models for generating panoramic videos. 
Specifically, we design a novel panorama representation named ViewPoint map, which possesses global spatial continuity and fine-grained visual details simultaneously. With our proposed Pano-Perspective attention mechanism, the model benefits from pretrained perspective priors and captures the panoramic spatial correlations of the ViewPoint map effectively. Extensive experiments demonstrate that our method can synthesize highly dynamic and spatially consistent panoramic videos, achieving state-of-the-art performance and surpassing previous methods.", "arxiv_id": "2506.23513v1", "arxiv_authors": ["Zixun Fang", "Kai Zhu", "Zhiheng Liu", "Yu Liu", "Wei Zhai", "Yang Cao", "Zheng-Jun Zha"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f4"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3796031, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a963"}, "filepath": "data/2508.12081v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992727946195045, "type": "Poster", "name": "VimoRAG: Video-based Retrieval-augmented 3D Motion Generation for Motion Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120298", "abstract": "This paper introduces **VimoRAG**, a novel video-based retrieval-augmented motion generation framework for motion large language models (LLMs). As motion LLMs face severe out-of-domain/out-of-vocabulary issues due to limited annotated data, **VimoRAG** leverages large-scale in-the-wild video databases to enhance 3D motion generation by retrieving relevant 2D human motion signals. While video-based motion RAG is nontrivial, we address two key bottlenecks: (1) developing an effective motion-centered video retrieval model that distinguishes human poses and actions, and (2) mitigating the issue of error propagation caused by suboptimal retrieval results.We design the Gemini Motion Video Retriever mechanism and the Motion-centric Dual-alignment DPO Trainer, enabling effective retrieval and generation processes. Experimental results show that **VimoRAG** significantly boosts the performance of motion LLMs constrained to text-only input.", "arxiv_id": "2508.12081v2", "arxiv_authors": ["Haidong Xu", "Guangwei Xu", "Zhedong Zheng", "Xiatian Zhu", "Wei Ji", "Xiangtai Li", "Ruijie Guo", "Meishan Zhang", "Min zhang", "Hao Fei"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f5"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.662Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1133611, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a964"}, "filepath": "data/2510.16446v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990318245484079, "type": "Poster", "name": "VIPAMIN: Visual Prompt Initialization via Embedding Selection and Subspace Expansion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116091", "abstract": "In the era of large-scale foundation models, fully fine-tuning pretrained networks for each downstream task is often prohibitively resource-intensive. 
Prompt tuning offers a lightweight alternative by introducing tunable prompts while keeping the backbone frozen. However, existing visual prompt tuning methods often fail to specialize the prompts or enrich the representation space--especially when applied to self-supervised backbones. We show that these limitations become especially pronounced in challenging tasks and data-scarce settings, where effective adaptation is most critical. In this work, we introduce VIPAMIN, a visual prompt initialization strategy that enhances adaptation of self-supervised models by (1) aligning prompts with semantically informative regions in the embedding space, and (2) injecting novel representational directions beyond the pretrained subspace. Despite its simplicity--requiring only a single forward pass and lightweight operations--VIPAMIN consistently improves performance across diverse tasks and dataset sizes, setting a new state of the art in visual prompt tuning.", "arxiv_id": "2510.16446v1", "arxiv_authors": ["Jaekyun Park", "Hye Won Chung"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f6"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1015221, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a965"}, "filepath": "data/2509.04450v1.png", "tags": [], "_media_type": "image", "_rand": 0.999478937280781, "type": "Poster", "name": "Virtual Fitting Room: Generating Arbitrarily Long Videos of Virtual Try-On from a Single Image", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116202", "abstract": "This paper proposes Virtual Fitting Room (VFR), a novel video generative model that produces arbitrarily long virtual try-on videos. Our VFR models long video generation tasks as an auto-regressive, segment-by-segment generation process, eliminating the need for resource-intensive generation and lengthy video data, while providing the flexibility to generate videos of arbitrary length. The key challenges of this task are twofold: ensuring local smoothness between adjacent segments and maintaining global temporal consistency across different segments. To address these challenges, we propose our VFR framework, which ensures smoothness through a prefix video condition and enforces consistency with the anchor video \u2014 a 360\u00b0-view video that comprehensively captures the human's whole-body appearance. 
Our VFR generates minute-scale virtual try-on videos with both local smoothness and global temporal consistency under various motions, making it a pioneering work in long virtual try-on video generation.", "arxiv_id": "2509.04450v1", "arxiv_authors": ["Jun-Kun Chen", "Aayush Bansal", "Minh Phuoc Vo", "Yu-Xiong Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f7"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1116432, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a966"}, "filepath": "data/2506.18898v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995590923621458, "type": "Poster", "name": "Vision as a Dialect: Unifying Visual Understanding and Generation via Text-Aligned Representations", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118811", "abstract": "This paper presents a multimodal framework that attempts to unify visual understanding and generation within a shared discrete semantic representation. At its core is the Text-Aligned Tokenizer (TA-Tok), which converts images into discrete tokens using a text-aligned codebook projected from a large language model's (LLM) vocabulary. By integrating vision and text into a unified space with an expanded vocabulary, our multimodal LLM, **Tar**, enables cross-modal input and output through a shared interface, without the need for modality-specific designs. Additionally, we propose scale-adaptive encoding and decoding to balance efficiency and visual detail, along with a generative de-tokenizer to produce high-fidelity visual outputs. To address diverse decoding needs, we utilize two complementary de-tokenizers: a fast autoregressive model and a diffusion-based model. To enhance modality fusion, we investigate advanced pre-training tasks, demonstrating improvements in both visual understanding and generation. Experiments across benchmarks show that **Tar** matches or surpasses existing multimodal LLM methods, achieving faster convergence and greater training efficiency. All code, models, and data will be made publicly available.", "arxiv_id": "2506.18898v1", "arxiv_authors": ["Jiaming Han", "Hao Chen", "Yang Zhao", "Hanyu Wang", "Qi Zhao", "Ziyan Yang", "Hao He", "Xiangyu Yue", "Lu Jiang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f8"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1032150, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a967"}, "filepath": "data/2507.08441v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991551829073341, "type": "Poster", "name": "Vision Foundation Models as Effective Visual Tokenizers for Autoregressive Generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118231", "abstract": "Leveraging the powerful representations of pre-trained vision foundation models-- traditionally used for visual comprehension---we explore a novel direction: building an image tokenizer directly atop such models, a largely underexplored area. 
Specifically, we employ a frozen vision foundation model as the encoder of our tokenizer. To enhance its effectiveness, we introduce two key components: (1) a region-adaptive quantization framework that reduces redundancy in the pre-trained features on regular 2D grids, and (2) a semantic reconstruction objective that aligns the tokenizer\u2019s outputs with the foundation model\u2019s representations to preserve semantic fidelity. Based on these designs, our proposed image tokenizer, \\textbf{VFMTok}, achieves substantial improvements in image reconstruction and generation quality, while also enhancing token efficiency. It further boosts autoregressive (AR) generation---achieving a gFID of \\textbf{2.07} on ImageNet benchmarks, while accelerating model convergence by \\textbf{three times}, and enabling high-fidelity class-conditional synthesis without the need for classifier-free guidance (CFG). The code will be released publicly to benefit the community.", "arxiv_id": "2507.08441v2", "arxiv_authors": ["Anlin Zheng", "Xin Wen", "Xuanyang Zhang", "Chuofan Ma", "Tiancai Wang", "Gang Yu", "Xiangyu Zhang", "Xiaojuan Qi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4f9"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1152518, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a968"}, "filepath": "data/2509.24791v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990265909971522, "type": "Poster", "name": "Vision Function Layer in Multimodal LLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116134", "abstract": "This study identifies that visual-related functional decoding is distributed across different decoder layers in Multimodal Large Language Models (MLLMs). Typically, each function, such as counting, grounding, or OCR recognition, narrows down to two or three layers, which we define as Vision Function Layers (VFL). Additionally, the depth and its order of different VFLs exhibits a consistent pattern across different MLLMs, which is well-aligned with human behaviors (e.g., recognition occurs first, followed by counting, and then grounding). These findings are derived from Visual Token Swapping, our novel analytical framework that modifies targeted KV cache entries to precisely elucidate layer-specific functions during decoding. Furthermore, these insights offer substantial utility in tailoring MLLMs for real-world downstream applications. For instance, when LoRA training is selectively applied to VFLs whose functions align with the training data, VFL-LoRA not only outperform full-LoRA but also prevent out-of-domain function forgetting. Moreover, by analyzing the performance differential on training data when particular VFLs are ablated, VFL-select automatically classifies data by function, enabling highly efficient data selection to directly bolster corresponding capabilities. Consequently, VFL-select surpasses human experts in data selection, and achieves 98% of full-data performance with only 20% of the original dataset. This study delivers deeper comprehension of MLLM visual processing, fostering the creation of more efficient, interpretable, and robust models. 
Code will be released.", "arxiv_id": "2509.24791v1", "arxiv_authors": ["Cheng Shi", "Yizhou Yu", "Sibei Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4fa"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1097415, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a969"}, "filepath": "data/2507.07104v2.png", "tags": [], "_media_type": "image", "_rand": 0.999933302829378, "type": "Poster", "name": "Vision\u2011Language\u2011Vision Auto\u2011Encoder: Scalable Knowledge Distillation from Diffusion Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116899", "abstract": "Building state-of-the-art Vision-Language Models (VLMs) with strong captioning capabilities typically necessitates training on billions of high-quality image-text pairs, requiring millions of GPU hours. This paper introduces the Vision-Language-Vision **(VLV)** auto-encoder framework, which strategically leverages key pretrained components: a vision encoder, the decoder of a Text-to-Image (T2I) diffusion model, and subsequently, a Large Language Model (LLM). Specifically, we establish an information bottleneck by regularizing the language representation space, achieved through freezing the pretrained T2I diffusion decoder. Our VLV pipeline effectively distills knowledge from the text-conditioned diffusion model using continuous embeddings, demonstrating comprehensive semantic understanding via high-quality reconstructions. Furthermore, by fine-tuning a pretrained LLM to decode the intermediate language representations into detailed descriptions, we construct a state-of-the-art (SoTA) captioner comparable to leading models like GPT-4o and Gemini 2.0 Flash. Our method demonstrates exceptional cost-efficiency and significantly reduces data requirements; by primarily utilizing single-modal images for training and maximizing the utility of existing pretrained models (image encoder, T2I diffusion model, and LLM), it circumvents the need for massive paired image-text datasets, keeping the total training expenditure under $1,000 USD.", "arxiv_id": "2507.07104v2", "arxiv_authors": ["Tiezheng Zhang", "Yitong Li", "Yu-cheng Chou", "Jieneng Chen", "Alan Yuille", "Chen Wei", "Junfei Xiao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4fb"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2169104, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a96a"}, "filepath": "data/2507.13348v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994087423368284, "type": "Poster", "name": "VisionThink: Smart and Efficient Vision Language Model via Reinforcement Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118060", "abstract": "Recent advancements in vision-language models (VLMs) have improved performance by increasing the number of visual tokens, which are often significantly longer than text tokens.However, we observe that most real-world scenarios do not require such an extensive number of visual tokens. 
While the performance drops significantly in a small subset of OCR-related tasks, models still perform accurately in most other general VQA tasks with only 1/4 resolution. Therefore, we propose to dynamically process distinct samples with different resolutions, and present a new paradigm for visual token compression, namely, VisionThink. It starts with a downsampled image and smartly decides whether it is sufficient for problem solving. Otherwise, the model could output a special token to request the higher-resolution image. Compared to existing Efficient VLM methods that compress tokens using fixed pruning ratios or thresholds, VisionThink autonomously decides whether to compress tokens case by case. As a result, it demonstrates strong fine-grained visual understanding capability on OCR-related tasks, and meanwhile saves substantial visual tokens on simpler tasks. We adopt reinforcement learning and propose the LLM-as-Judge strategy to successfully apply RL to general VQA tasks. Moreover, we carefully design a reward function and penalty mechanism to achieve a stable and reasonable image resize call ratio. Extensive experiments demonstrate the superiority, efficiency, and effectiveness of our method. All our code and data will be open-sourced.", "arxiv_id": "2507.13348v1", "arxiv_authors": ["Senqiao Yang", "Junyi Li", "Xin Lai", "Bei Yu", "Hengshuang Zhao", "Jiaya Jia"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4fc"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1100256, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a96b"}, "filepath": "data/2506.08010v5.png", "tags": [], "_media_type": "image", "_rand": 0.9993911604530175, "type": "Poster", "name": "Vision Transformers Don't Need Trained Registers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117199", "abstract": "We investigate the mechanism underlying a previously identified phenomenon in Vision Transformers: the emergence of high-norm tokens that lead to noisy attention maps (Darcet et al., 2024). We observe that in multiple models (e.g., CLIP, DINOv2), a sparse set of neurons is responsible for concentrating high-norm activations on outlier tokens, leading to anomalous attention patterns and degrading downstream visual processing. While the existing solution for removing these outliers involves retraining models from scratch with additional learned register tokens, we use our findings to create a training-free approach to mitigate these artifacts. By shifting the high norm activations from our discovered $\\textit{register neurons}$ into an additional untrained token, we can mimic the effect of register tokens on a model already trained without registers. Our method produces cleaner attention and feature maps, enhances performance over base models across multiple downstream tasks, and achieves results comparable to models explicitly trained with register tokens. 
This suggests that our approach effectively takes on the role of register tokens at test-time, offering a training-free solution for any pre-trained off-the-shelf model released without them.", "arxiv_id": "2506.08010v5", "arxiv_authors": ["Nick Jiang", "Amil Dravid", "Alexei Efros", "Yossi Gandelsman"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4fd"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1074065, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a96c"}, "filepath": "data/2505.21501v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998071982877784, "type": "Poster", "name": "Vision Transformers with Self-Distilled Registers", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117648", "abstract": "Vision Transformers (ViTs) have emerged as the dominant architecture for visual processing tasks, demonstrating excellent scalability with increased training data and model size. However, recent work has identified the emergence of artifact tokens in ViTs that are incongruous with the local semantics. These anomalous tokens degrade ViT performance in tasks that require fine-grained localization or structural coherence. An effective mitigation of this issue is the addition of register tokens to ViTs, which implicitly \"absorb\" the artifact term during training. Given the availability of various large-scale pre-trained ViTs, in this paper we aim at equipping them with such register tokens without the need of re-training them from scratch, which is infeasible considering their size. Specifically, we propose Post Hoc Registers (**PH-Reg**), an efficient self-distillation method that integrates registers into an existing ViT without requiring additional labeled data and full retraining. PH-Reg initializes both teacher and student networks from the same pre-trained ViT. The teacher remains frozen and unmodified, while the student is augmented with randomly initialized register tokens. By applying test-time augmentation to the teacher\u2019s inputs, we generate denoised dense embeddings free of artifacts, which are then used to optimize only a small subset of unlocked student weights. We show that our approach can effectively reduce the number of artifact tokens, improving the segmentation and depth prediction of the student ViT under zero-shot and linear probing.", "arxiv_id": "2505.21501v1", "arxiv_authors": ["Yinjie Chen", "Zipeng Yan", "Chong Zhou", "Bo Dai", "Andrew F. 
Luo"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4fe"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1051359, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a96d"}, "filepath": "data/2509.15235v5.png", "tags": [], "_media_type": "image", "_rand": 0.9994649775010881, "type": "Poster", "name": "ViSpec: Accelerating Vision-Language Models with Vision-Aware Speculative Decoding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115277", "abstract": "Speculative decoding is a widely adopted technique for accelerating inference in large language models (LLMs), yet its application to vision-language models (VLMs) remains underexplored, with existing methods achieving only modest speedups ($<1.5\\times$). This gap is increasingly significant as multimodal capabilities become central to large-scale models. We hypothesize that large VLMs can effectively filter redundant image information layer by layer without compromising textual comprehension, whereas smaller draft models struggle to do so. To address this, we introduce Vision-Aware Speculative Decoding (ViSpec), a novel framework tailored for VLMs. ViSpec employs a lightweight vision adaptor module to compress image tokens into a compact representation, which is seamlessly integrated into the draft model's attention mechanism while preserving original image positional information. Additionally, we extract a global feature vector for each input image and augment all subsequent text tokens with this feature to enhance multimodal coherence. To overcome the scarcity of multimodal datasets with long assistant responses, we curate a specialized training dataset by repurposing existing datasets and generating extended outputs using the target VLM with modified prompts. Our training strategy mitigates the risk of the draft model exploiting direct access to the target model's hidden states, which could otherwise lead to shortcut learning when training solely on target model outputs. Extensive experiments validate ViSpec, achieving, to our knowledge, the first substantial speedup in VLM speculative decoding.", "arxiv_id": "2509.15235v5", "arxiv_authors": ["Jialiang Kang", "Han Shu", "Wenshuo Li", "Yingjie Zhai", "Xinghao Chen"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a4ff"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1076581, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a96e"}, "filepath": "data/2507.00493v2.png", "tags": [], "_media_type": "image", "_rand": 0.999189930687445, "type": "Poster", "name": "Visual Anagrams Reveal Hidden Differences in Holistic Shape Processing Across Vision Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119428", "abstract": "Humans are able to recognize objects based on both local texture cues and the configuration of object parts, yet contemporary vision models primarily harvest local texture cues, yielding brittle, non-compositional features. 
Work on shape-vs-texture bias has pitted shape and texture representations in opposition, measuring shape relative to texture, ignoring the possibility that models (and humans) can simultaneously rely on both types of cues, and obscuring the absolute quality of both types of representation. We therefore recast shape evaluation as a matter of absolute configural competence, operationalized by the Configural Shape Score (CSS), which (i) measures the ability to recognize both images in Object-Anagram pairs that preserve local texture while permuting global part arrangement to depict different object categories. Across 86 convolutional, transformer, and hybrid models, CSS (ii) uncovers a broad spectrum of configural sensitivity with fully self-supervised and language-aligned transformers -- exemplified by DINOv2, SigLIP2 and EVA-CLIP -- occupying the top end of the CSS spectrum. Mechanistic probes reveal that (iii) high-CSS networks depend on long-range interactions: radius-controlled attention masks abolish performance showing a distinctive U-shaped integration profile, and representational-similarity analyses expose a mid-depth transition from local to global coding. A BagNet control, whose receptive fields straddle patch seams, remains at chance (iv), ruling out any ``border-hacking'' strategies. Finally, (v) we show that configural shape score also predicts other shape-dependent evals (e.g., foreground bias, spectral and noise robustness). Overall, we propose that the path toward truly robust, generalizable, and human-like vision systems may not lie in forcing an artificial choice between shape and texture, but rather in architectural and learning frameworks that seamlessly integrate both local-texture and global configural shape.", "arxiv_id": "2507.00493v2", "arxiv_authors": ["Fenil R. Doshi", "Thomas Fel", "Talia Konkle", "George Alvarez"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a500"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 999686, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a96f"}, "filepath": "data/2503.21770v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998913241185408, "type": "Poster", "name": "Visual Jenga: Discovering Object Dependencies via Counterfactual Inpainting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115152", "abstract": "This paper proposes a novel scene understanding task called Visual Jenga. Drawing inspiration from the game Jenga, the proposed task involves progressively removing objects from a single image until only the background remains. Just as Jenga players must understand structural dependencies to maintain tower stability, our task reveals the intrinsic relationships between scene elements by systematically exploring which objects can be removed while preserving scene coherence in both physical and geometric sense. As a starting point for tackling the Visual Jenga task, we propose a simple, data-driven, training-free approach that is surprisingly effective on a range of real-world images. 
The principle behind our approach is to utilize the asymmetry in the pairwise relationships between objects within a scene and employ a large inpainting model to generate a set of counterfactuals to quantify the asymmetry.", "arxiv_id": "2503.21770v1", "arxiv_authors": ["Anand Bhattad", "Konpat Preechakul", "Alexei A. Efros"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a501"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1667877, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a970"}, "filepath": "data/2411.16034v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997291663428182, "type": "Poster", "name": "VisualLens: Personalization through Task-Agnostic Visual History", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119005", "abstract": "Existing recommendation systems either rely on user interaction logs, such as online shopping history for shopping recommendations, or focus on text signals. However, item-based histories are not always accessible and generalizable for multimodal recommendation.We hypothesize that a user's visual history --- comprising images from daily life --- can offer rich, task-agnostic insights into their interests and preferences, and thus be leveraged for effective personalization.To this end, we propose VisualLens, a novel framework that leverages multimodal large language models (MLLMs) to enable personalization using task-agnostic visual history.VisualLens extracts, filters, and refines a spectrum user profile from the visual history to support personalized recommendation.We created two new benchmarks, Google-Review-V and Yelp-V, with task-agnostic visual histories, and show that VisualLens improves over state-of-the-art item-based multimodal recommendations by 5-10\\% on Hit@3, and outperforms GPT-4o by 2-5\\%.Further analysis shows that VisualLens is robust across varying history lengths and excels at adapting to both longer histories and unseen content categories.", "arxiv_id": "2411.16034v2", "arxiv_authors": ["Wang Bill Zhu", "Deqing Fu", "Kai Sun", "Yi Lu", "Zhaojiang Lin", "Seungwhan Moon", "Kanika Narang", "Mustafa Canim", "Yue Liu", "Anuj Kumar", "Xin Luna Dong"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a502"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.663Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1268818, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a971"}, "filepath": "data/2505.14460v2.png", "tags": [], "_media_type": "image", "_rand": 0.999137147471933, "type": "Poster", "name": "VisualQuality-R1: Reasoning-Induced Image Quality Assessment via Reinforcement Learning to Rank", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115506", "abstract": "DeepSeek-R1 has demonstrated remarkable effectiveness in incentivizing reasoning and generalization capabilities of large language models (LLMs) through reinforcement learning. 
Nevertheless, the potential of reasoning-induced computational modeling has not been thoroughly explored in the context of image quality assessment (IQA), a task critically dependent on visual reasoning. In this paper, we introduce VisualQuality-R1, a reasoning-induced no-reference IQA (NR-IQA) model, and we train it with reinforcement learning to rank, a learning algorithm tailored to the intrinsically relative nature of visual quality. Specifically, for a pair of images, we employ group relative policy optimization to generate multiple quality scores for each image. These estimates are then used to compute comparative probabilities of one image having higher quality than the other under the Thurstone model. Rewards for each quality estimate are defined using continuous fidelity measures rather than discretized binary labels. Extensive experiments show that the proposed VisualQuality-R1 consistently outperforms discriminative deep learning-based NR-IQA models as well as a recent reasoning-induced quality regression method. Moreover, VisualQuality-R1 is capable of generating contextually rich, human-aligned quality descriptions, and supports multi-dataset training without requiring perceptual scale realignment. These features make VisualQuality-R1 especially well-suited for reliably measuring progress in a wide range of image processing tasks like super-resolution and image generation. Code will be made publicly available.", "arxiv_id": "2505.14460v2", "arxiv_authors": ["Tianhe Wu", "Jian Zou", "Jie Liang", "Lei Zhang", "Kede Ma"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a503"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1532247, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a972"}, "filepath": "data/2506.22146v3.png", "tags": [], "_media_type": "image", "_rand": 0.9995099414929605, "type": "Poster", "name": "Visual Structures Helps Visual Reasoning: Addressing the Binding Problem in VLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117879", "abstract": "Despite progress in Vision-Language Models (VLMs), their capacity for visual reasoning is often limited by the \\textit{binding problem}: the failure to reliably associate perceptual features with their correct visual referents. This limitation underlies persistent errors in tasks such as counting, visual search, scene description, and spatial relationship understanding. A key factor is that current VLMs process visual features largely in parallel, lacking mechanisms for spatially grounded, serial attention. This paper introduces a simple yet effective intervention: augmenting visual inputs with low-level spatial structures (e.g., horizontal lines) and pairing this with a textual prompt that encourages sequential, spatially-aware parsing. We empirically demonstrate substantial performance improvements across core visual reasoning tasks. 
Specifically, our method improves GPT-4o visual search accuracy by 25.00\\%, increases counting accuracy by 26.83\\%, reduces edit distance error in scene description by 0.32, and enhances performance on spatial relationship tasks by 9.50\\% on a 2D synthetic dataset. Furthermore, we find that the visual modification is essential for these gains; purely textual strategies, including Chain-of-Thought prompting, are insufficient and can even degrade performance. Our method enhances binding with only a single-query inference, underscoring the importance of visual input design over purely linguistically-based approaches. These findings suggest that low-level visual structuring is a powerful and underexplored direction for improving compositional visual reasoning and could serve as a general strategy for enhancing VLM performance on spatially grounded tasks.", "arxiv_id": "2506.22146v3", "arxiv_authors": ["Amirmohammad Izadi", "Mohammad Ali Banayeeanzade", "Fatemeh Askari", "Ali Rahimiakbar", "Mohammad Mahdi Vahedi", "Hosein Hasani", "Mahdieh Soleymani Baghshah"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a504"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1042677, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a973"}, "filepath": "data/2505.15510v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990519856121239, "type": "Poster", "name": "Visual Thoughts: A Unified Perspective of Understanding Multimodal Chain-of-Thought", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115243", "abstract": "Large Vision-Language Models (LVLMs) have achieved significant success in multimodal tasks, with multimodal chain-of-thought (MCoT) further enhancing performance and interpretability. Recent MCoT methods fall into two categories: (i) Textual-MCoT (T-MCoT), which takes multimodal input and produces textual output; and (ii) Interleaved-MCoT (I-MCoT), which generates interleaved image-text outputs. Despite advances in both approaches, the mechanisms driving these improvements are not fully understood. To fill this gap, we first reveal that MCoT boosts LVLMs by incorporating $\\textit{visual thoughts}$, which convey image information to the reasoning process regardless of the MCoT format, depending only on clarity and conciseness of expression. Furthermore, to explore visual thoughts systematically, we define four distinct forms of visual thought expressions and analyze them comprehensively. Our findings demonstrate that these forms differ in clarity and conciseness, yielding varying levels of MCoT improvement. Additionally, we explore the internal nature of visual thoughts, finding that visual thoughts serve as intermediaries between the input image and reasoning to deeper transformer layers, enabling more advanced visual information transmission. 
We hope that the visual thoughts can inspire further breakthroughs for future MCoT research.", "arxiv_id": "2505.15510v2", "arxiv_authors": ["Zihui Cheng", "Qiguang Chen", "Xiao Xu", "Jiaqi Wang", "Weiyun Wang", "Hao Fei", "Yidong Wang", "Alex Jinpeng Wang", "Zhi Chen", "Wanxiang Che", "Libo Qin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a505"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1149409, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a974"}, "filepath": "data/2501.01957v4.png", "tags": [], "_media_type": "image", "_rand": 0.99951958620268, "type": "Poster", "name": "VITA-1.5: Towards GPT-4o Level Real-Time Vision and Speech Interaction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119619", "abstract": "Recent Multimodal Large Language Models (MLLMs) have typically focused on integrating visual and textual modalities, with less emphasis placed on the role of speech in enhancing interaction. However, speech plays a crucial role in multimodal dialogue systems, and achieving high performance in both vision and speech tasks remains a significant challenge due to the fundamental modality differences. In this paper, we propose a carefully designed multi-stage training methodology that progressively trains the LLM to understand both visual and speech information, ultimately enabling fluent vision and speech interaction. Our approach not only preserves strong vision-language capacity, but also enables efficient speech-to-speech dialogue capabilities without separate ASR and TTS modules, significantly accelerating multimodal end-to-end response speed. By comparing our method against state-of-the-art counterparts across benchmarks for image, video, and speech tasks, we demonstrate that our model is equipped with both strong visual and speech capabilities, enabling near real-time vision and speech interaction.", "arxiv_id": "2501.01957v4", "arxiv_authors": ["Chaoyou Fu", "Haojia Lin", "Xiong Wang", "Yi-Fan Zhang", "Yunhang Shen", "Xiaoyu Liu", "Haoyu Cao", "Zuwei Long", "Heting Gao", "Ke Li", "Long Ma", "Xiawu Zheng", "Rongrong Ji", "Xing Sun", "Caifeng Shan", "Ran He"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a506"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 978111, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a975"}, "filepath": "data/2502.02175v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998457705483333, "type": "Poster", "name": "VLA-Cache: Efficient Vision-Language-Action Manipulation via Adaptive Token Caching", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118121", "abstract": "Vision-Language-Action (VLA) models have demonstrated strong multi-modal reasoning capabilities, enabling direct action generation from visual perception and language instructions in an end-to-end manner. However, their substantial computational cost poses a challenge for real-time robotic control, where rapid decision-making is essential. 
This paper introduces VLA-Cache, a training-free inference acceleration method that reduces computational overhead by adaptively caching and reusing static visual tokens across frames. Exploiting the temporal continuity in robotic manipulation, VLA-Cache identifies minimally changed tokens between adjacent frames and reuses their cached key-value representations, thereby circumventing redundant computations. Additionally, to maintain action precision, VLA-Cache selectively re-computes task-relevant tokens that are environmentally sensitive, ensuring the fidelity of critical visual information. To further optimize efficiency, we introduce a layer adaptive token reusing strategy that dynamically adjusts the reuse ratio based on attention concentration across decoder layers, prioritizing critical tokens for recomputation. Extensive experiments on two simulation platforms (LIBERO and SIMPLER) and a real-world robotic system demonstrate that VLA-Cache achieves up to 1.7\u00d7 speedup in CUDA latency and a 15% increase in control frequency, with negligible loss on task success rate. The manipulation videos are available at the following anonymous link: https://anonymous-5408-neurips-2025.glitch.me/.", "arxiv_id": "2502.02175v2", "arxiv_authors": ["Siyu Xu", "Yunke Wang", "Chenghao Xia", "Dihao Zhu", "Tao Huang", "Chang Xu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a507"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1090867, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a976"}, "filepath": "data/2506.17561v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994189992353275, "type": "Poster", "name": "VLA-OS: Structuring and Dissecting Planning Representations and Paradigms in Vision-Language-Action Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118219", "abstract": "Recent studies on Vision-Language-Action (VLA) models have shifted from the end-to-end action-generation paradigm toward a pipeline involving task planning followed by action generation, demonstrating improved performance on various complex, long-horizon manipulation tasks. However, existing approaches vary significantly in terms of network architectures, planning paradigms, representations, and training data sources, making it challenging for researchers to identify the precise sources of performance gains and determine which component is more difficult to learn. To systematically investigate the impacts of different planning paradigms and representations isolating from network architectures and training data, in this paper, we introduce \\name, a unified VLA architecture suite capable of various task planning paradigms, and design a comprehensive suite of controlled experiments across diverse object categories (rigid and deformable), visual modalities (2D and 3D), environments (simulation and real-world), and end-effectors (grippers and dexterous hands). 
Our results demonstrate that: 1) visually grounded planning representations are generally better than language planning representations; 2) the Hierarchical-VLA paradigm generally achieves superior performance than other paradigms, albeit at the cost of slower training and inference speeds.", "arxiv_id": "2506.17561v1", "arxiv_authors": ["Chongkai Gao", "Zixuan Liu", "Zhenghao Chi", "Junshan Huang", "Xin Fei", "Yiwen Hou", "Yuxuan Zhang", "Yudi Lin", "Zhirui Fang", "Zeyu Jiang", "Lin Shao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a508"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 957696, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a977"}, "filepath": "data/2503.06142v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995137499206525, "type": "Poster", "name": "VLForgery Face Triad: Detection, Localization and Attribution via Multimodal Large Language Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119856", "abstract": "Faces synthesized by diffusion models (DMs) with high-quality and controllable attributes pose a significant challenge for Deepfake detection. Most state-of-the-art detectors only yield a binary decision, incapable of forgery localization, attribution of forgery methods, and providing analysis on the cause of forgeries. In this work, we integrate Multimodal Large Language Models (MLLMs) within DM-based face forensics, and propose a fine-grained analysis triad framework called VLForgery,that can 1) predict falsified facial images;2) locate the falsified face regions subjected to partial synthesis; and 3) attribute the synthesis with specific generators. To achieve the above goals, we introduce VLF (Visual Language Forensics), a novel and diverse synthesis face dataset designed to facilitate rich interactions between `Visual' and `Language' modalities in MLLMs.Additionally, we propose an extrinsic knowledge-guided description method, termed EkCot, which leverages knowledge from the image generation pipeline to enable MLLMs to quickly capture image content. Furthermore, we introduce a low-level vision comparison pipeline designed to identify differential features between real and fake that MLLMs can inherently understand. 
These features are then incorporated into EkCot, enhancing its ability to analyze forgeries in a structured manner, following the sequence of detection, localization, and attribution.Extensive experiments demonstrate that VLForgery outperforms other state-of-the-art forensic approaches in detection accuracy, with additional potential for falsified region localization and attribution analysis.", "arxiv_id": "2503.06142v1", "arxiv_authors": ["Xinan He", "Yue Zhou", "Bing Fan", "Bin Li", "Guopu Zhu", "Feng Ding"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a509"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1597778, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a978"}, "filepath": "data/2505.16192v2.png", "tags": [], "_media_type": "image", "_rand": 0.9993414806042304, "type": "Poster", "name": "VLM-R\u00b3: Region Recognition, Reasoning, and Refinement for Enhanced Multimodal Chain-of-Thought", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119135", "abstract": "Recently, reasoning-based MLLMs have achieved a degree of success in generating long-form textual reasoning chains. However, they still struggle with complex tasks that necessitate dynamic and iterative focusing on and revisiting of visual regions to achieve precise grounding of textual reasoning in visual evidence. We introduce VLM-R\u00b3 (Visual Language Model with Region Recognition, Reasoning, and Refinement ), a framework that equips an MLLM with the ability to (i) decide when additional visual evidence is needed, (ii) determine where to ground within the image, and (iii) seamlessly weave the relevant sub-image content back into an interleaved chain-of-thought. The core of our method is \\textbf{Region-Conditioned Reinforcement Policy Optimization (R-GRPO)}, a training paradigm that rewards the model for selecting informative regions, formulating appropriate transformations (e.g. crop, zoom), and integrating the resulting visual context into subsequent reasoning steps. To bootstrap this policy, we compile a modest but carefully curated Visuo-Lingual Interleaved Rationale (VLIR) corpus that provides step-level supervision on region selection and textual justification. 
Extensive experiments on MathVista, ScienceQA, and other benchmarks show that VLM-R$^3$ sets a new state of the art in zero-shot and few-shot settings, with the largest gains appearing on questions demanding subtle spatial reasoning or fine-grained visual cue extraction.", "arxiv_id": "2505.16192v2", "arxiv_authors": ["Chaoya Jiang", "Yongrui Heng", "Wei Ye", "Han Yang", "Haiyang Xu", "Ming Yan", "Ji Zhang", "Fei Huang", "Shikun Zhang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a50a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1012077, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a979"}, "filepath": "data/2506.03614v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994820774347102, "type": "Poster", "name": "VLMs can Aggregate Scattered Training Patches", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120040", "abstract": "One way to mitigate risks in vision-language models (VLMs) is to censor dangerous samples in their training data. However, data moderation can be easily bypassed when harmful images are split into small, benign-looking patches, scattered across many training samples. VLMs may then learn to piece these fragments together and generate harmful responses at inference, either from full images or text references. For instance, if trained on image patches from a bloody scene paired with the description \"safe,\" VLMs may later describe the full image or a text reference to the scene as \"safe.\" We define the core ability of VLMs enabling this attack as $\\textit{visual stitching}$\u2014the ability to integrate visual information spread across multiple training samples that share the same textual descriptions. In our work, we first demonstrate visual stitching abilities in most open-source VLMs on three datasets where each image is labeled with a unique synthetic ID. 
We split each $(\\texttt{image}, \\texttt{ID})$ pair into $\\{(\\texttt{patch}, \\texttt{ID})\\}$ pairs at different granularity for finetuning, and we find that models can verbalize the correct IDs from full images or text references. Building on this, we simulate the adversarial data poisoning scenario mentioned above by replacing IDs with text descriptions like \"safe\" or \"unsafe\" and using patches from dangerous images, demonstrating how harmful content can evade moderation in patches and later be reconstructed through visual stitching, posing serious VLM safety risks.", "arxiv_id": "2506.03614v1", "arxiv_authors": ["Zhanhui Zhou", "Lingjie Chen", "Chao Yang", "Chaochao Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a50b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1115795, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a97a"}, "filepath": "data/2507.13361v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996919199596218, "type": "Poster", "name": "VLMs have Tunnel Vision: Evaluating Nonlocal Visual Reasoning in Leading VLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117181", "abstract": "Visual Language Models (VLMs) excel at complex visual tasks such as VQA and chart understanding, yet recent work suggests they struggle with simple perceptual tests. We present an evaluation that tests vision-language models\u2019 capacity for \\emph{nonlocal visual reasoning}: reasoning that requires chaining evidence collected from multiple, possibly distant, regions of an image. We isolate three distinct forms of non\u2011local vision: \\emph{comparative perception}, which demands holding two images in working memory and comparing them; \\emph{saccadic search}, which requires making discrete, evidence\u2011driven jumps to locate successive targets; and \\emph{smooth visual search}, which involves searching smoothly along a continuous contour. Flagship models (e.g., Gemini 2.5 Pro, Claude Vision 3.7, GPT\u2011o4-mini), even those that perform well on prior primitive\u2011vision benchmarks, fail these tests and barely exceed random accuracy on two variants of our tasks that are trivial for humans. Our structured evaluation suite allows us to test whether VLMs can perform visual algorithms similar to those humans use. 
Our findings show that despite gains in raw visual acuity, current models lack core visual reasoning capabilities.", "arxiv_id": "2507.13361v1", "arxiv_authors": ["Shmuel Berman", "Jia Deng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a50c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1110668, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a97b"}, "filepath": "data/2510.21323v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994836771195853, "type": "Poster", "name": "VL-SAE: Interpreting and Enhancing Vision-Language Alignment with a Unified Concept Set", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/120232", "abstract": "The alignment of vision-language representations endows current Vision-Language Models (VLMs) with strong multi-modal reasoning capabilities. However, the interpretability of the alignment component remains uninvestigated due to the difficulty in mapping the semantics of multi-modal representations into a unified concept set. To address this problem, we propose VL-SAE, a sparse autoencoder that encodes vision-language representations into its hidden activations. Each neuron in the hidden layer correlates to a concept represented by semantically similar images and texts, thereby interpreting these representations with a unified concept set. To establish the neuron-concept correlation, we encourage semantically similar representations to exhibit consistent neuron activations during self-supervised training. First, to measure the semantic similarity of multi-modal representations, we perform their alignment in an explicit form based on cosine similarity. Second, we construct the VL-SAE with a distance-based encoder and two modality-specific decoders to ensure the activation consistency of semantically similar representations. Experiments across multiple VLMs (e.g., CLIP, LLaVA) demonstrate the superior capability of VL-SAE in interpreting and enhancing the vision-language alignment. For interpretation, the alignment between vision and language representations can be understood by comparing their semantics with concepts. For enhancement, the alignment can be strengthened by aligning vision-language representations at the concept level, contributing to performance improvements in downstream tasks, including zero-shot image classification and hallucination elimination. 
Codes are provided in the supplementary and will be released to GitHub.", "arxiv_id": "2510.21323v1", "arxiv_authors": ["Shufan Shen", "Junshu Sun", "Qingming Huang", "Shuhui Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a50d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.664Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1080302, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a97c"}, "filepath": "data/2505.18986v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994372090518744, "type": "Poster", "name": "VL-SAM-V2: Open-World Object Detection with General and Specific Query Fusion", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118607", "abstract": "Current perception models have achieved remarkable success by leveraging large-scale labeled datasets, but still face challenges in open-world environments with novel objects. To address this limitation, researchers introduce open-set perception models to detect or segment arbitrary test-time user-input categories. However, open-set models rely on human involvement to provide predefined object categories as input during inference. More recently, researchers have framed a more realistic and challenging task known as open-ended perception that aims to discover unseen objects without requiring any category-level input from humans at inference time. Nevertheless, open-ended models suffer from low performance compared to open-set models. In this paper, we present VL-SAM-V2, an open-world object detection framework that is capable of discovering unseen objects while achieving favorable performance. To achieve this, we combine queries from open-set and open-ended models and propose a general and specific query fusion module to allow different queries to interact. By adjusting queries from open-set models, we enable VL-SAM-V2 to be evaluated in the open-set or open-ended mode. In addition, to learn more diverse queries, we introduce ranked learnable queries to match queries with proposals from open-ended models by sorting. Moreover, we design a denoising point training strategy to facilitate the training process. Experimental results on LVIS show that our method surpasses the previous open-set and open-ended methods, especially on rare objects.", "arxiv_id": "2505.18986v1", "arxiv_authors": ["Zhiwei Lin", "Yongtao Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a50e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1047300, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a97d"}, "filepath": "data/2502.01932v5.png", "tags": [], "_media_type": "image", "_rand": 0.9992166431313465, "type": "Poster", "name": "VolleyBots: A Testbed for Multi-Drone Volleyball Game Combining Motion Control and Strategic Play", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121760", "abstract": "Robot sports, characterized by well-defined objectives, explicit rules, and dynamic interactions, present ideal scenarios for demonstrating embodied intelligence. 
In this paper, we present VolleyBots, a novel robot sports testbed where multiple drones cooperate and compete in the sport of volleyball under physical dynamics. VolleyBots integrates three features within a unified platform: competitive and cooperative gameplay, turn-based interaction structure, and agile 3D maneuvering. Competitive and cooperative gameplay challenges each drone to coordinate with its teammates while anticipating and countering opposing teams\u2019 tactics. Turn-based interaction demands precise timing, accurate state prediction, and management of long-horizon temporal dependencies. Agile 3D maneuvering requires rapid accelerations, sharp turns, and precise 3D positioning despite the quadrotor\u2019s underactuated dynamics. These intertwined features yield a complex problem combining motion control and strategic play, with no available expert demonstrations. We provide a comprehensive suite of tasks ranging from single-drone drills to multi-drone cooperative and competitive tasks, accompanied by baseline evaluations of representative multi-agent reinforcement learning (MARL) and game-theoretic algorithms. Simulation results show that on-policy reinforcement learning (RL) methods outperform off-policy methods in single-agent tasks, but both approaches struggle in complex tasks that combine motion control and strategic play. We additionally design a hierarchical policy which achieves 69.5% win rate against the strongest baseline in the 3 vs 3 task, underscoring its potential as an effective solution for tackling the complex interplay between low-level control and high-level strategy. The project page is at https://sites.google.com/view/volleybots.", "arxiv_id": "2502.01932v5", "arxiv_authors": ["Zelai Xu", "Ruize Zhang", "Chao Yu", "Huining Yuan", "Xiangmin Yi", "Shilong Ji", "Chuqi Wang", "Wenhao Tang", "Feng Gao", "Wenbo Ding", "Xinlei Chen", "Yu Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a50f"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1109317, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a97e"}, "filepath": "data/2505.18809v2.png", "tags": [], "_media_type": "image", "_rand": 0.9991393138131255, "type": "Poster", "name": "VORTA: Efficient Video Diffusion via Routing Sparse Attention", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116733", "abstract": "Video Diffusion Transformers (VDiTs) have achieved remarkable progress in high-quality video generation, but remain computationally expensive due to the quadratic complexity of attention over high-dimensional video sequences. Recent attention acceleration methods leverage the sparsity of attention patterns to improve efficiency; however, they often overlook inefficiencies of redundant long-range interactions. To address this, we propose **VORTA**, an efficient VDiT framework with two novel components: (1) a sparse attention mechanism that efficiently captures long-range dependencies, and (2) a routing strategy that adaptively replaces full 3D attention with specialized sparse attention experts throughout the sampling process. 
It achieves a $\\mathbf{1.76\\times}$ end-to-end speedup without quality loss on VBench.Furthermore, VORTA can seamlessly integrate with various other acceleration methods, such as caching and step distillation, reaching up to $\\mathbf{14.41\\times}$ speedup with negligible performance degradation.VORTA demonstrates its efficiency and enhances the practicality of VDiTs in real-world settings.", "arxiv_id": "2505.18809v2", "arxiv_authors": ["Wenhao Sun", "Rong-Cheng Tu", "Yifu Ding", "Zhao Jin", "Jingyi Liao", "Shunyu Liu", "Dacheng Tao"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a510"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3559885, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a97f"}, "filepath": "data/2506.04623v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999911337965506, "type": "Poster", "name": "VoxDet: Rethinking 3D Semantic Scene Completion as Dense Object Detection", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116288", "abstract": "Camera-based Semantic Scene Completion (SSC) aims to reconstruct the 3D geometry and semantics of the surrounding environment. With dense voxel labels, prior works typically formulate SSC as a *dense segmentation task*, independently classifying each voxel. However, this paradigm neglects critical instance-centric discriminability, leading to instance-level incompleteness and adjacent ambiguities. To address this, we highlight a \"free lunch\" of SSC labels: the voxel-level class label has implicitly told the instance-level insight, which is ever-overlooked by the community. Motivated by this observation, we first introduce a training-free **Voxel-to-Instance (VoxNT) trick**: a simple yet effective method that freely converts voxel-level class labels into instance-level offset labels. Building on this, we further propose **VoxDet**, an instance-centric framework that reformulates the voxel-level SSC as *dense object detection* by decoupling it into two sub-tasks: offset regression and semantic prediction. Specifically, based on the lifted 3D volume, VoxDet first uses (a) Spatially-decoupled Voxel Encoder to generate disentangled feature volumes for the two sub-tasks, which learn task-specific spatial deformation in the densely projected tri-perceptive space. Then, we deploy (b) Task-decoupled Dense Predictor to address SSC via dense detection. Here, we first regress a 4D offset field to estimate distances (6 directions) between voxels and the corresponding object boundaries in the voxel space. The regressed offsets are then used to guide the instance-level aggregation in the classification branch, achieving instance-aware scene completion. Compared with the state-of-the-art method, VoxDet achieves 11.0% and 6.7% relative mIoU gains on the test set of Semantic KITTI and SSCBench-KITTI-360, respectively, while reducing 57.9% model parameters with around 1.3$\\times$ speed-up. 
The code will be released to propel the SSC and broader occupancy community.", "arxiv_id": "2506.04623v1", "arxiv_authors": ["Wuyang Li", "Zhu Yu", "Alexandre Alahi"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a511"}, "_cls": "Classification", "tags": [], "label": "cs.GR"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1099858, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a980"}, "filepath": "data/2510.23205v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996681835495074, "type": "Poster", "name": "VR-Drive: Viewpoint-Robust End-to-End Driving with Feed-Forward 3D Gaussian Splatting", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118352", "abstract": "End-to-end autonomous driving (E2E-AD) has emerged as a promising paradigm that unifies perception, prediction, and planning into a holistic, data-driven framework. However, achieving robustness to varying camera viewpoints, a common real-world challenge due to diverse vehicle configurations, remains an open problem. In this work, we propose VR-Drive, a novel E2E-AD framework that addresses viewpoint generalization by jointly learning 3D scene reconstruction as an auxiliary task to enable planning-aware view synthesis. Unlike prior scene-specific synthesis approaches, VR-Drive adopts a feed-forward inference strategy that supports online training-time augmentation from sparse views without additional annotations. To further improve viewpoint consistency, we introduce a viewpoint-mixed memory bank that facilitates temporal interaction across multiple viewpoints and a viewpoint-consistent distillation strategy that transfers knowledge from original to synthesized views. Trained in a fully end-to-end manner, VR-Drive effectively mitigates synthesis-induced noise and improves planning under viewpoint shifts. In addition, we release a new benchmark dataset to evaluate E2E-AD performance under novel camera viewpoints, enabling comprehensive analysis. Our results demonstrate that VR-Drive is a scalable and robust solution for the real-world deployment of end-to-end autonomous driving systems. Our code and datasets will be made publicly available.", "arxiv_id": "2510.23205v1", "arxiv_authors": ["Hoonhee Cho", "Jae-Young Kang", "Giwon Lee", "Hyemin Yang", "Heejun Park", "Seokwoo Jung", "Kuk-Jin Yoon"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a512"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1017932, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a981"}, "filepath": "data/2509.25033v3.png", "tags": [], "_media_type": "image", "_rand": 0.9996287063548861, "type": "Poster", "name": "VT-FSL: Bridging Vision and Text with LLMs for Few-Shot Learning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117710", "abstract": "Few-shot learning (FSL) aims to recognize novel concepts from only a few labeled support samples. 
Recent studies enhance support features by incorporating additional semantic information (e.g., class descriptions) or designing complex semantic fusion modules. However, these methods still suffer from hallucinating semantics that contradict the visual evidence due to the lack of grounding in actual instances, resulting in noisy guidance and costly corrections. To address these issues, we propose a novel framework, bridging Vision and Text with LLMs for Few-Shot Learning (VT-FSL), which constructs precise cross-modal prompts conditioned on Large Language Models (LLMs) and support images, seamlessly integrating them through a geometry-aware alignment mechanism. It mainly consists of Cross-modal Iterative Prompting (CIP) and Cross-modal Geometric Alignment (CGA). Specifically, the CIP conditions an LLM on both class names and support images to generate precise class descriptions iteratively in a single structured inference pass. These descriptions not only enrich the semantic understanding of novel classes, but also enable the zero-shot synthesis of semantically consistent support images. The generated descriptions and synthetic images act as complementary textual and visual prompts, providing high-level class semantics and low-level intra-class diversity to compensate for limited support data. Furthermore, the CGA jointly aligns the fused textual, support, and synthetic visual representations by minimizing the kernelized volume of the 3-dimensional parallelotope they span. It captures global and nonlinear relationships among all representations, enabling structured and consistent multimodal integration. The proposed VT-FSL method establishes new state-of-the-art performance across ten diverse benchmarks, including standard, cross-domain, and fine-grained few-shot learning scenarios. Code is available at https://anonymous.4open.science/r/VT-FSL-27B4.", "arxiv_id": "2509.25033v3", "arxiv_authors": ["Wenhao Li", "Qiangchang Wang", "Xianjing Meng", "Zhibin Wu", "Yilong Yin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a513"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1040017, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a982"}, "filepath": "data/2509.21100v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998843496383221, "type": "Poster", "name": "VTTS: Visual Test-Time Scaling to Reinforce Multimodal Reasoning by Iterative Perception", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116032", "abstract": "Inducing reasoning in multimodal large language models (MLLMs) is critical for achieving human-level perception and understanding. Existing methods mainly leverage LLM reasoning to analyze parsed visuals, often limited by static perception stages. This paper introduces Visual Test-Time Scaling (VTTS), a novel approach to enhance MLLMs' reasoning via iterative perception during inference. VTTS mimics humans' hierarchical attention by progressively refining focus on high-confidence spatio-temporal regions, guided by updated textual predictions. Specifically, VTTS employs an Iterative Perception (ITP) mechanism, incorporating reinforcement learning with spatio-temporal supervision to optimize reasoning. 
To support this paradigm, we also present VTTS-80K, a dataset tailored for iterative perception. These designs allow an MLLM to enhance its performance by increasing its perceptual compute. Extensive experiments validate VTTS's effectiveness and generalization across diverse tasks and benchmarks. It exhibits substantial improvements (by more than 5\\% on average) over strong baselines (Qwen2.5VL-3B and -7B) on more than 20 benchmarks covering video conversation, image reasoning, and spatio-temporal perception.", "arxiv_id": "2509.21100v1", "arxiv_authors": ["Ziang Yan", "Xinhao Li", "Yinan He", "Zhengrong Yue", "Xiangyu Zeng", "Yali Wang", "Yu Qiao", "Limin Wang", "Yi Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a514"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1009503, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a983"}, "filepath": "data/2505.12161v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999662835158396, "type": "Poster", "name": "WaLRUS: Wavelets for Long range Representation Using State Space Methods", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119922", "abstract": "State-Space Models (SSMs) have proven to be powerful tools for online function approximation and for modeling long-range dependencies in sequential data. While recent methods such as HiPPO have demonstrated strong performance using a few polynomial bases, they remain limited by their reliance on closed-form solutions for specific, well-behaved bases. The SaFARi framework generalizes this approach, enabling the construction of SSMs from arbitrary frames, including non-orthogonal and redundant ones, thus allowing an infinite diversity of possible \"species'' within the SSM family. In this paper, we introduce WaLRUS (Wavelets for Long-range Representation Using SSMs), a new species of SaFARi built from Daubechies wavelet frames. We instantiate two variants, scaled-Walrus and translated-Walrus, and show that their multiresolution and localized nature offers significant advantages in representing non-smooth and transient signals. We compare Walrus to HiPPO-based models and demonstrate improved accuracy, better numerical properties, and more efficient implementations for online function approximation tasks.", "arxiv_id": "2505.12161v1", "arxiv_authors": ["Hossein Babaei", "Mel White", "Sina Alemohammad", "Richard G.
Baraniuk"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a515"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 796650, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a984"}, "filepath": "data/2508.09560v2.png", "tags": [], "_media_type": "image", "_rand": 0.9999764273074131, "type": "Poster", "name": "WeatherPrompt: Multi-modality Representation Learning for All-Weather Drone Visual Geo-Localization", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118002", "abstract": "Visual geo-localization for drones faces critical degradation under weather perturbations, \\eg, rain and fog, where existing methods struggle with two inherent limitations: 1) Heavy reliance on limited weather categories that constrain generalization, and 2) Suboptimal disentanglement of entangled scene-weather features through pseudo weather categories.We present WeatherPrompt, a multi-modality learning paradigm that establishes weather-invariant representations through fusing the image embedding with the text context. Our framework introduces two key contributions: First, a Training-free Weather Reasoning mechanism that employs off-the-shelf large multi-modality models to synthesize multi-weather textual descriptions through human-like reasoning. It improves the scalability to unseen or complex weather, and could reflect different weather strength. Second, to better disentangle the scene and weather feature, we propose a multi-modality framework with the dynamic gating mechanism driven by the text embedding to adaptively reweight and fuse visual features across modalities. The framework is further optimized by the cross-modal objectives, including image-text contrastive learning and image-text matching, which maps the same scene with different weather conditions closer in the respresentation space. Extensive experiments validate that, under diverse weather conditions, our method achieves competitive recall rates compared to state-of-the-art drone geo-localization methods. Notably, it improves Recall@1 by +13.37\\% under night conditions and by 18.69\\% under fog and snow conditions.", "arxiv_id": "2508.09560v2", "arxiv_authors": ["Jiahao Wen", "Hang Yu", "Zhedong Zheng"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a516"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 950599, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a985"}, "filepath": "data/2510.17218v1.png", "tags": [], "_media_type": "image", "_rand": 0.9993924323064833, "type": "Poster", "name": "When One Moment Isn't Enough: Multi-Moment Retrieval with Cross-Moment Interactions", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118798", "abstract": "Existing Moment retrieval (MR) methods focus on Single-Moment Retrieval (SMR). However, one query can correspond to multiple relevant moments in real-world applications. This makes the existing methods insufficient for video temporal grounding. 
By revisiting the gap between current MR tasks and real-world applications, we introduce a high-quality dataset called QVHighlights Multi-Moment Dataset (QV-M$^2$), along with new evaluation metrics tailored for multi-moment retrieval (MMR). QV-M$^2$ consists of 2,212 annotations covering 5,522 video segments. Building on existing efforts in MMR, we propose a framework called FlashMMR. Specifically, we propose a Multi-moment Post-verification module to refine the moment boundaries. We introduce constrained temporal adjustment and subsequently leverage a verification module to re-evaluate the candidate segments. Through this sophisticated filtering pipeline, low-confidence proposals are pruned, and robust multi-moment alignment is achieved. We retrain and evaluate 6 existing MR methods on QV-M$^2$ and QVHighlights under both SMR and MMR settings. Results show that QV-M$^2$ serves as an effective benchmark for training and evaluating MMR models, while FlashMMR provides a strong baseline. Specifically, on QV-M$^2$, it achieves improvements over the prior SOTA method by 3.00\\% on G-mAP, 2.70\\% on mAP@3+tgt, and 2.56\\% on mR@3. The proposed benchmark and method establish a foundation for advancing research in more realistic and challenging video temporal grounding scenarios.", "arxiv_id": "2510.17218v1", "arxiv_authors": ["Zhuo Cao", "Heming Du", "Bingqing Zhang", "Xin Yu", "Xue Li", "Sen Wang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a517"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1076728, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a986"}, "filepath": "data/2506.05551v2.png", "tags": [], "_media_type": "image", "_rand": 0.9998048578029223, "type": "Poster", "name": "When Semantics Mislead Vision: Mitigating Large Multimodal Models Hallucinations in Scene Text Spotting and Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119366", "abstract": "Large Multimodal Models (LMMs) have achieved impressive progress in visual perception and reasoning. However, when confronted with visually ambiguous or non-semantic scene text, they often struggle to accurately spot and understand the content, frequently generating semantically plausible yet visually incorrect answers, which we refer to as semantic hallucination. In this work, we investigate the underlying causes of semantic hallucination and identify a key finding: Transformer layers in the LLM with stronger attention focus on scene text regions are less prone to producing semantic hallucinations. Thus, we propose a training-free semantic hallucination mitigation framework comprising two key components: (1) ZoomText, a coarse-to-fine strategy that identifies potential text regions without external detectors; and (2) Grounded Layer Correction, which adaptively leverages the internal representations from layers less prone to hallucination to guide decoding, correcting hallucinated outputs for non-semantic samples while preserving the semantics of meaningful ones.
To enable rigorous evaluation, we introduce TextHalu-Bench, a benchmark of over 1,730 samples spanning both semantic and non-semantic cases, with manually curated question\u2013answer pairs designed to probe model hallucinations. Extensive experiments demonstrate that our method not only effectively mitigates semantic hallucination but also achieves strong performance on public benchmarks for scene text spotting and understanding.", "arxiv_id": "2506.05551v2", "arxiv_authors": ["Yan Shu", "Hangui Lin", "Yexin Liu", "Yan Zhang", "Gangyan Zeng", "Yan Li", "Yu Zhou", "Ser-Nam Lim", "Harry Yang", "Nicu Sebe"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a518"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 977667, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a987"}, "filepath": "data/2510.06077v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994445324798902, "type": "Poster", "name": "When Thinking Drifts: Evidential Grounding for Robust Video Reasoning", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115892", "abstract": "Video reasoning, the task of enabling machines to infer from dynamic visual content through multi-step logic, is crucial for advanced AI. While the Chain-of-Thought (CoT) mechanism has enhanced reasoning in text-based tasks, its application to video understanding remains underexplored. This paper presents a systematic analysis revealing that CoT often degrades performance in video reasoning, generating verbose but misleading internal monologues, and leading to hallucinated visual details and overridden correct intuitions\u2014a phenomenon we term \"visual thinking drift.\" We explain this drift through a Bayesian lens, positing that CoT traces often diverge from actual visual evidence, instead amplifying internal biases or language priors, causing models to storytell rather than engage in grounded reasoning. To counteract this, we introduce Visual Evidence Reward (VER), a novel reinforcement learning framework that explicitly rewards the generation of reasoning traces that are verifiably grounded in visual evidence. Comprehensive evaluation across 10 diverse video understanding benchmarks demonstrates that our Video-VER model consistently achieves top performance.
Our work sheds light on the distinct challenges of video-centric reasoning and encourages the development of AI that robustly grounds its inferences in visual evidence---for large multimodal models that not only \"think before answering\", but also \"see while thinking\".", "arxiv_id": "2510.06077v1", "arxiv_authors": ["Mi Luo", "Zihui Xue", "Alex Dimakis", "Kristen Grauman"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a519"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1060706, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a988"}, "filepath": "data/2505.10311v3.png", "tags": [], "_media_type": "image", "_rand": 0.9995889103447952, "type": "Poster", "name": "Whitened Score Diffusion: A Structured Prior for Imaging Inverse Problems", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119040", "abstract": "Conventional score-based diffusion models (DMs) may struggle with anisotropic Gaussian diffusion processes due to the required inversion of covariance matrices in the denoising score matching training objective \\cite{vincent_connection_2011}. We propose Whitened Score (WS) diffusion models, a novel SDE-based framework that learns the Whitened Score function instead of the standard score. This approach circumvents covariance inversion, extending score-based DMs by enabling stable training of DMs on arbitrary Gaussian forward noising processes. WS DMs establish equivalence with FM for arbitrary Gaussian noise, allow for tailored spectral inductive biases, and provide strong Bayesian priors for imaging inverse problems with structured noise. We experiment with a variety of computational imaging tasks using the CIFAR and CelebA ($64\\times64$) datasets and demonstrate that WS diffusion priors trained on anisotropic Gaussian noising processes consistently outperform conventional diffusion priors based on isotropic Gaussian noise.", "arxiv_id": "2505.10311v3", "arxiv_authors": ["Jeffrey Alido", "Tongyu Li", "Yu Sun", "Lei Tian"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a51a"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.665Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1136507, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a989"}, "filepath": "data/2506.21552v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998886154699017, "type": "Poster", "name": "Whole-Body-Conditioned Ego-Centric Video Prediction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117535", "abstract": "We train models to predict ego-centric video from human actions (PEVA), given the past video and an action represented by the relative 3D body pose. By conditioning on kinematic pose trajectories, structured by the joint hierarchy of the body, our model learns to simulate how physical human actions shape the environment from a first-person point of view. We train an auto-regressive conditional diffusion transformer on Nymeria, a large-scale dataset of real-world egocentric video and body pose capture. 
We further design a hierarchical evaluation protocol with increasingly challenging tasks, enabling a comprehensive analysis of the model\u2019s embodied prediction and control abilities. Our work represents an initial attempt to tackle the challenges of modeling complex real-world environments and embodied agent behaviors with video prediction from the perspective of a human.", "arxiv_id": "2506.21552v1", "arxiv_authors": ["Yutong Bai", "Danny Tran", "Amir Bar", "Yann LeCun", "Trevor Darrell", "Jitendra Malik"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a51b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1011468, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a98a"}, "filepath": "data/2505.10118v1.png", "tags": [], "_media_type": "image", "_rand": 0.999573071372379, "type": "Poster", "name": "Why 1 + 1 < 1 in Visual Token Pruning: Beyond Naive Integration via Multi-Objective Balanced Covering", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119476", "abstract": "Existing visual token pruning methods target prompt alignment and visual preservation with static strategies, overlooking the varying relative importance of these objectives across tasks, which leads to inconsistent performance. To address this, we derive the first closed-form error bound for visual token pruning based on the Hausdorff distance, uniformly characterizing the contributions of both objectives. Moreover, leveraging $\\epsilon$-covering theory, we reveal an intrinsic trade-off between these objectives and quantify their optimal attainment levels under a fixed budget. To practically handle this trade-off, we propose Multi-Objective Balanced Covering (MoB), which reformulates visual token pruning as a bi-objective covering problem. In this framework, the attainment trade-off reduces to budget allocation via greedy radius trading. MoB offers a provable performance bound and linear scalability with respect to the number of input visual tokens, enabling adaptation to challenging pruning scenarios. Extensive experiments show that MoB preserves 96.4\\% of performance for LLaVA-1.5-7B using only 11.1\\% of the original visual tokens and accelerates LLaVA-Next-7B by 1.3-1.5$\\times$ with negligible performance loss. Additionally, evaluations on Qwen2-VL and Video-LLaVA confirm that MoB integrates seamlessly into advanced MLLMs and diverse vision-language tasks. 
The code will be made available soon.", "arxiv_id": "2505.10118v1", "arxiv_authors": ["Yangfu Li", "Hongjian Zhan", "Tianyi Chen", "Qi Liu", "Yue Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a51c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1144761, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a98b"}, "filepath": "data/2506.13030v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998723545630276, "type": "Poster", "name": "WildCAT: Appearance-Aware Multi-View Diffusion in the Wild", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118317", "abstract": "Despite recent advances in sparse novel view synthesis (NVS) applied to object-centric scenes, scene-level NVS remains a challenge. A central issue is the lack of available clean multi-view training data, beyond manually curated datasets with limited diversity, camera variation, or licensing issues. On the other hand, an abundance of diverse and permissively-licensed data exists in the wild, consisting of scenes with varying appearances (illuminations, transient occlusions, etc.) from sources such as tourist photos. To this end, we present WildCAT, a framework for generating novel views of scenes learned from diverse 2D scene image data captured in-the-wild. We unlock training on these data sources by explicitly modeling global appearance conditions in images, extending the state-of-the-art multi-view diffusion paradigm to learn from scene views of varying appearances. Our trained model generalizes to new scenes at inference time, enabling the generation of multiple consistent novel views. WildCAT provides state-of-the-art results on single-view NVS in object- and scene-level settings, while training on strictly less data sources than prior methods. Additionally, it enables novel applications by providing global appearance control during generation.", "arxiv_id": "2506.13030v1", "arxiv_authors": ["Morris Alper", "David Novotny", "Filippos Kokkinos", "Hadar Averbuch-Elor", "Tom Monnier"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a51d"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 5018295, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a98c"}, "filepath": "data/2503.08153v1.png", "tags": [], "_media_type": "image", "_rand": 0.9999172153405604, "type": "Poster", "name": "WISA: World simulator assistant for physics-aware text-to-video generation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119925", "abstract": "Recent advances in text-to-video (T2V) generation, exemplified by models such as Sora and Kling, have demonstrated strong potential for constructing world simulators. However, existing T2V models still struggle to understand abstract physical principles and to generate videos that faithfully obey physical laws. 
This limitation stems primarily from the lack of explicit physical guidance, caused by a significant gap between high-level physical concepts and the generative capabilities of current models. To address this challenge, we propose the **W**orld **S**imulator **A**ssistant (**WISA**), a novel framework designed to systematically decompose and integrate physical principles into T2V models. Specifically, WISA decomposes physical knowledge into three hierarchical levels: textual physical descriptions, qualitative physical categories, and quantitative physical properties. It then incorporates several carefully designed modules\u2014such as Mixture-of-Physical-Experts Attention (MoPA) and a Physical Classifier\u2014to effectively encode these attributes and enhance the model\u2019s adherence to physical laws during generation. In addition, most existing video datasets feature only weak or implicit representations of physical phenomena, limiting their utility for learning explicit physical principles. To bridge this gap, we present **WISA-80K**, a new dataset comprising 80,000 human-curated videos that depict 17 fundamental physical laws across three core domains of physics: dynamics, thermodynamics, and optics. Experimental results show that WISA substantially improves the alignment of T2V models (such as CogVideoX and Wan2.1) with real-world physical laws, achieving notable gains on the VideoPhy benchmark. Our data, code, and models will be open source.", "arxiv_id": "2503.08153v1", "arxiv_authors": ["Jing Wang", "Ao Ma", "Ke Cao", "Jun Zheng", "Zhanjie Zhang", "Jiasong Feng", "Shanyuan Liu", "Yuhang Ma", "Bo Cheng", "Dawei Leng", "Yuhui Yin", "Xiaodan Liang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a51e"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 3084122, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a98d"}, "filepath": "data/2506.16895v2.png", "tags": [], "_media_type": "image", "_rand": 0.9997314156270002, "type": "Poster", "name": "With Limited Data for Multimodal Alignment, Let the STRUCTURE Guide You", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118769", "abstract": "Multimodal models have demonstrated powerful capabilities in complex tasks requiring multimodal alignment including zero-shot classification and cross-modal retrieval. However, existing models typically rely on millions of paired multimodal samples, which are prohibitively expensive or infeasible to obtain in many domains. In this work, we explore the feasibility of building multimodal models with a limited amount of paired data by aligning pretrained unimodal foundation models. We show that high-quality alignment is possible with as few as tens of thousands of paired samples—less than $1\\%$ of the data typically used in the field. To achieve this, we introduce STRUCTURE, an effective regularization technique that preserves the neighborhood geometry of the latent space of unimodal encoders. Additionally, we show that aligning last layers is often suboptimal and demonstrate the benefits of aligning the layers with the highest representational similarity across modalities.
These two components can be readily incorporated into existing alignment methods, yielding consistent gains across 24 zero-shot classification and retrieval benchmarks, with average relative improvement of $51.6\\%$ in classification and $91.8\\%$ in retrieval tasks. Our results highlight the effectiveness and broad applicability of our framework for limited-sample multimodal learning and offer a promising path forward for resource-constrained domains.", "arxiv_id": "2506.16895v2", "arxiv_authors": ["Fabian Gr\u00f6ger", "Shuo Wen", "Huyen Le", "Maria Brbi\u0107"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a51f"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1070213, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a98e"}, "filepath": "data/2504.12369v1.png", "tags": [], "_media_type": "image", "_rand": 0.9991016255742563, "type": "Poster", "name": "WorldMem: Long-term Consistent World Simulation with Memory", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117127", "abstract": "World simulation has gained increasing popularity due to its ability to model virtual environments and predict the consequences of actions. However, the limited temporal context window often leads to failures in maintaining long-term consistency, particularly in preserving 3D spatial consistency. In this work, we present WorldMem, a framework that enhances scene generation with a memory bank consisting of memory units that store memory frames and states (e.g., poses and timestamps). By employing state-aware memory attention that effectively extracts relevant information from these memory frames based on their states, our method is capable of accurately reconstructing previously observed scenes, even under significant viewpoint or temporal gaps. Furthermore, by incorporating timestamps into the states, our framework not only models a static world but also captures its dynamic evolution over time, enabling both perception and interaction within the simulated world. Extensive experiments in both virtual and real scenarios validate the effectiveness of our approach.", "arxiv_id": "2504.12369v1", "arxiv_authors": ["Zeqi Xiao", "Yushi Lan", "Yifan Zhou", "Wenqi Ouyang", "Shuai Yang", "Yanhong Zeng", "Xingang Pan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a520"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 4186419, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a98f"}, "filepath": "data/2502.20694v1.png", "tags": [], "_media_type": "image", "_rand": 0.9995066954689782, "type": "Poster", "name": "WorldModelBench: Judging Video Generation Models As World Models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121570", "abstract": "Video generation models have rapidly progressed, positioning themselves as video world models capable of supporting decision-making applications like robotics and autonomous driving. 
However, current benchmarks fail to rigorously evaluate these claims, focusing only on general video quality and ignoring factors important to world models, such as physics adherence. To bridge this gap, we propose WorldModelBench, a benchmark designed to evaluate the world modeling capabilities of video generation models in application-driven domains. WorldModelBench offers two key advantages: (1) Sensitivity to nuanced world modeling violations: By incorporating instruction-following and physics-adherence dimensions, WorldModelBench detects subtle violations, such as irregular changes in object size that breach the mass conservation law\u2014issues overlooked by prior benchmarks. (2) Aligned with large-scale human preferences: We crowd-source 67K human labels to accurately measure 14 frontier models. Using our high-quality human labels, we further fine-tune an accurate judger to automate the evaluation procedure, achieving 9.9% lower error in predicting world modeling violations than GPT-4o with 2B parameters. In addition, we demonstrate that training to align with human annotations by maximizing the rewards from the judger noticeably improves the world modeling capability.", "arxiv_id": "2502.20694v1", "arxiv_authors": ["Dacheng Li", "Yunhao Fang", "Yukang Chen", "Shuo Yang", "Shiyi Cao", "Justin Wong", "Michael Luo", "Xiaolong Wang", "Hongxu Yin", "Joseph E. Gonzalez", "Ion Stoica", "Song Han", "Yao Lu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a521"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2593149, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a990"}, "filepath": "data/2508.15720v1.png", "tags": [], "_media_type": "image", "_rand": 0.9997656785659756, "type": "Poster", "name": "WorldWeaver: Generating Long-Horizon Video Worlds via Rich Perception", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115883", "abstract": "Generative video modeling has made significant strides, yet ensuring structural and temporal consistency over long sequences remains a challenge. Current methods predominantly rely on RGB signals, leading to accumulated errors in object structure and motion over extended durations. To address these issues, we introduce WorldWeaver, a robust framework for long video generation that jointly models RGB frames and perceptual conditions within a unified long-horizon modeling scheme. Our training framework offers three key advantages. First, by jointly predicting perceptual conditions and color information from a unified representation, it significantly enhances temporal consistency and motion dynamics. Second, by leveraging depth cues, which we observe to be more resistant to drift than RGB, we construct a memory bank that preserves clearer contextual information, improving quality in long-horizon video generation. Third, we employ segmented noise scheduling for training prediction groups, which further mitigates drift and reduces computational cost.
Extensive experiments on both diffusion and rectified flow-based models demonstrate the effectiveness of WorldWeaver in reducing temporal drift and improving the fidelity of generated videos.", "arxiv_id": "2508.15720v1", "arxiv_authors": ["Zhiheng Liu", "Xueqing Deng", "Shoufa Chen", "Angtian Wang", "Qiushan Guo", "Mingfei Han", "Zeyue Xue", "Mengzhao Chen", "Ping Luo", "Linjie Yang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a522"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1018455, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a991"}, "filepath": "data/2503.08596v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994718259824699, "type": "Poster", "name": "X-Field: A Physically Grounded Representation for 3D X-ray Reconstruction", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/117962", "abstract": "X-ray imaging is indispensable in medical diagnostics, yet its use is tightly regulated due to radiation exposure. Recent research borrows representations from the 3D reconstruction area to complete two tasks with reduced radiation dose: X-ray Novel View Synthesis (NVS) and Computed Tomography (CT) reconstruction. However, these representations fail to fully capture the penetration and attenuation properties of X-ray imaging as they originate from visible light imaging. In this paper, we introduce X-Field, a 3D representation grounded in the physics of X-ray imaging. First, we employ homogeneous 3D ellipsoids with distinct attenuation coefficients to accurately model diverse materials within internal structures. Second, we introduce an efficient path partitioning algorithm that resolves the intricate intersection of ellipsoids to compute cumulative attenuation along an X-ray path. 
We further propose a hybrid progressive initialization to refine the geometric accuracy of X-Field and incorporate material-based optimization to enhance model fitting along material boundaries. Experiments show that X-Field achieves superior visual fidelity on both real-world human organ and synthetic object datasets, outperforming state-of-the-art methods in X-ray NVS and CT Reconstruction.", "arxiv_id": "2503.08596v1", "arxiv_authors": ["Feiran Wang", "Jiachen Tao", "Junyi Wu", "Haoxuan Wang", "Bin Duan", "Kai Wang", "Zongxin Yang", "Yan Yan"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a523"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 2600108, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a992"}, "filepath": "data/2506.13558v1.png", "tags": [], "_media_type": "image", "_rand": 0.999947565943803, "type": "Poster", "name": "X-Scene: Large-Scale Driving Scene Generation with High Fidelity and Flexible Controllability", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118117", "abstract": "Diffusion models are advancing autonomous driving by enabling realistic data synthesis, predictive end-to-end planning, and closed-loop simulation, with a primary focus on temporally consistent generation. However, the generation of large-scale 3D scenes that require spatial coherence remains underexplored. In this paper, we propose X-Scene, a novel framework for large-scale driving scene generation that achieves both geometric intricacy and appearance fidelity, while offering flexible controllability. Specifically, X-Scene supports multi-granular control, including low-level conditions such as user-provided or text-driven layout for detailed scene composition and high-level semantic guidance such as user-intent and LLM-enriched text prompts for efficient customization. To enhance geometrical and visual fidelity, we introduce a unified pipeline that sequentially generates 3D semantic occupancy and the corresponding multiview images, while ensuring alignment between modalities. Additionally, we extend the generated local region into a large-scale scene through consistency-aware scene outpainting, which extrapolates new occupancy and images conditioned on the previously generated area, enhancing spatial continuity and preserving visual coherence. The resulting scenes are lifted into high-quality 3DGS representations, supporting diverse applications such as scene exploration.
Comprehensive experiments demonstrate that X-Scene significantly advances controllability and fidelity for large-scale driving scene generation, empowering data generation and simulation for autonomous driving.", "arxiv_id": "2506.13558v1", "arxiv_authors": ["Yu Yang", "Alan Liang", "Jianbiao Mei", "Yukai Ma", "Yong Liu", "Gim Hee Lee"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a524"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1167008, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a993"}, "filepath": "data/2506.01608v2.png", "tags": [], "_media_type": "image", "_rand": 0.9992099445556595, "type": "Poster", "name": "X-Smart-Kitchen-30: Densely annotated cooking dataset with 3D kinematics to challenge video and language models", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/121758", "abstract": "Understanding behavior requires datasets that capture humans while carrying out complex tasks. The kitchen is an excellent environment for assessing human motor and cognitive function, as many complex actions are naturally exhibited in kitchens from chopping to cleaning. Here, we introduce the X-Smart-Kitchen-30 dataset (Note: Name \"X\" anonymized), collected in a noninvasive motion capture platform inside a kitchen environment. Nine static RGB-D cameras, inertial measurement units (IMUs) and one head-mounted HoloLens~2 headset were used to capture 3D hand, body, and eye movements. The X-Smart-Kitchen-30 dataset is a multi-view action dataset with synchronized exocentric, egocentric, depth, IMUs, eye gaze, body and hand kinematics spanning 29.7 hours of 16 subjects cooking four different recipes. Action sequences were densely annotated with 33.78 action segments per minute. Leveraging this multi-modal dataset, we propose four benchmarks to advance behavior understanding and modeling through 1) a vision-language benchmark, 2) a semantic text-to-motion generation benchmark, 3) a multi-modal action recognition benchmark, 4) a pose-based action segmentation benchmark. 
We expect the X-Smart-Kitchen-30 dataset to pave the way for better methods as well as insights to understand the nature of ethologically-valid human behavior.", "arxiv_id": "2506.01608v2", "arxiv_authors": ["Andy Bonnetto", "Haozhe Qi", "Franklin Leong", "Matea Tashkovska", "Mahdi Rad", "Solaiman Shokur", "Friedhelm Hummel", "Silvestro Micera", "Marc Pollefeys", "Alexander Mathis"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a525"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1669768, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a994"}, "filepath": "data/2506.21416v1.png", "tags": [], "_media_type": "image", "_rand": 0.9994370100902955, "type": "Poster", "name": "XVerse: Consistent Multi-Subject Control of Identity and Semantic Attributes via DiT Modulation", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119968", "abstract": "Achieving fine-grained control over subject identity and semantic attributes (pose, style, lighting) in text-to-image generation, particularly for multiple subjects, often undermines the editability and coherence of Diffusion Transformers (DiTs). Many approaches introduce artifacts or suffer from attribute entanglement. To overcome these challenges, we propose a novel multi-subject controlled generation model XVerse. By transforming reference images into offsets for token-specific text-stream modulation, XVerse allows for precise and independent control of specific subjects without disrupting image latents or features. Consequently, XVerse offers high-fidelity, editable multi-subject image synthesis with robust control over individual subject characteristics and semantic attributes. This advancement significantly improves personalized and complex scene generation capabilities.", "arxiv_id": "2506.21416v1", "arxiv_authors": ["Bowen Chen", "Mengyi Zhao", "Haomiao Sun", "Li Chen", "Xu Wang", "Kang Du", "Xinglong Wu"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a526"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.666Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 6359325, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a995"}, "filepath": "data/2502.12524v1.png", "tags": [], "_media_type": "image", "_rand": 0.9996014731948171, "type": "Poster", "name": "YOLOv12: Attention-Centric Real-Time Object Detectors", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116765", "abstract": "Enhancing the network architecture of the YOLO framework has been crucial for a long time. Still, it has focused on CNN-based improvements despite the proven superiority of attention mechanisms in modeling capabilities. This is because attention-based models cannot match the speed of CNN-based models. This paper proposes an attention-centric YOLO framework, namely YOLOv12, that matches the speed of previous CNN-based ones while harnessing the performance benefits of attention mechanisms. YOLOv12 surpasses popular real-time object detectors in accuracy with competitive speed.
For example, YOLOv12-N achieves 40.6% mAP with an inference latency of 1.63 ms on a T4 GPU, outperforming advanced YOLOv10-N / YOLO11-N by 2.1%/1.2% mAP with a comparable speed. This advantage extends to other model scales. YOLOv12 also surpasses end-to-end real-time detectors that improve DETR, such as RT-DETRv2 / RT-DETRv3: YOLOv12-X beats RT-DETRv2-R101 / RT-DETRv3-R101 while running faster with fewer calculations and parameters. See more comparisons in Figure 1. The code and models will be open-sourced.", "arxiv_id": "2502.12524v1", "arxiv_authors": ["Yunjie Tian", "Qixiang Ye", "David Doermann"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a527"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.667Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1390591, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a996"}, "filepath": "data/2506.12693v1.png", "tags": [], "_media_type": "image", "_rand": 0.9990508807435473, "type": "Poster", "name": "Zero-shot Denoising via Neural Compression: Theoretical and algorithmic framework", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/119178", "abstract": "Zero-shot denoising aims to denoise observations without access to training samples or clean reference images. This setting is particularly relevant in practical imaging scenarios involving specialized domains such as medical imaging or biology. In this work, we propose the Zero-Shot Neural Compression Denoiser (ZS-NCD), a novel denoising framework based on neural compression. ZS-NCD treats a neural compression network as an untrained model, optimized directly on patches extracted from a single noisy image. The final reconstruction is then obtained by aggregating the outputs of the trained model over overlapping patches. Thanks to the built-in entropy constraints of compression architectures, our method naturally avoids overfitting and does not require manual regularization or early stopping. Through extensive experiments, we show that ZS-NCD achieves state-of-the-art performance among zero-shot denoisers for both Gaussian and Poisson noise, and generalizes well to both natural and non-natural images. Additionally, we provide new finite-sample theoretical results that characterize upper bounds on the achievable reconstruction error of general maximum-likelihood compression-based denoisers. 
These results further establish the theoretical foundations of compression-based denoising.", "arxiv_id": "2506.12693v1", "arxiv_authors": ["Ali Zafari", "Xi Chen", "Shirin Jalali"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a528"}, "_cls": "Classification", "tags": [], "label": "eess.IV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.667Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 828890, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a997"}, "filepath": "data/2501.13457v2.png", "tags": [], "_media_type": "image", "_rand": 0.9996623299209667, "type": "Poster", "name": "Zero-Shot Trajectory Planning for Signal Temporal Logic Tasks", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/118254", "abstract": "Signal Temporal Logic (STL) is a powerful specification language for describing complex temporal behaviors of continuous signals, making it well-suited for high-level robotic task descriptions. However, generating executable plans for STL tasks is challenging, as it requires consideration of the coupling between the task specification and the system dynamics. Existing approaches either follow a model-based setting that explicitly requires knowledge of the system dynamics or adopt a task-oriented data-driven approach to learn plans for specific tasks. In this work, we address the problem of generating executable STL plans for systems with unknown dynamics. We propose a hierarchical planning framework that enables zero-shot generalization to new STL tasks by leveraging only task-agnostic trajectory data during offline training. The framework consists of three key components: (i) decomposing the STL specification into several progresses and time constraints, (ii) searching for timed waypoints that satisfy all progresses under time constraints, and (iii) generating trajectory segments using a pre-trained diffusion model and stitching them into complete trajectories. We formally prove that our method guarantees STL satisfaction, and simulation results demonstrate its effectiveness in generating dynamically feasible trajectories across diverse long-horizon STL tasks.", "arxiv_id": "2501.13457v2", "arxiv_authors": ["Ruijia Liu", "Ancheng Hou", "Xiao Yu", "Xiang Yin"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a529"}, "_cls": "Classification", "tags": [], "label": "cs.RO"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.667Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 991523, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a998"}, "filepath": "data/2505.21381v6.png", "tags": [], "_media_type": "image", "_rand": 0.9998137930950939, "type": "Poster", "name": "ZigzagPointMamba: Spatial-Semantic Mamba for Point Cloud Understanding", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116930", "abstract": "State Space Models (SSMs) like PointMamba provide efficient feature extraction for point cloud self-supervised learning with linear complexity, surpassing Transformers in computational efficiency. However, existing PointMamba-based methods rely on complex token ordering and random masking, disrupting spatial continuity and local semantic correlations. 
We propose ZigzagPointMamba to address these challenges. The key to our approach is a simple zigzag ordering strategy that globally sequences point cloud tokens, enhancing spatial continuity by preserving the proximity of spatially adjacent point tokens. Yet, random masking impairs local semantic modeling in self-supervised learning. To overcome this, we introduce a Semantic-Siamese Masking Strategy (SMS), which masks semantically similar tokens to facilitate reconstruction by integrating local features of original and similar tokens, thus overcoming dependence on isolated local features and enabling robust global semantic modeling. Our pre-training ZigzagPointMamba weights significantly boost downstream tasks, achieving a 1.59% mIoU gain on ShapeNetPart for part segmentation, a 0.4% higher accuracy on ModelNet40 for classification, and 0.19%, 1.22%, and 0.72% higher accuracies respectively for the classification tasks on the OBJ-BG, OBJ-ONLY, and PB-T50-RS subsets of ScanObjectNN. Code is available at https://anonymous.4open.science/r/ZigzagPointMamba-1800/.", "arxiv_id": "2505.21381v6", "arxiv_authors": ["Linshuang Diao", "Sensen Song", "Yurong Qian", "Dayong Ren"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a52a"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.667Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1048841, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a999"}, "filepath": "data/2505.22396v1.png", "tags": [], "_media_type": "image", "_rand": 0.9998255550833645, "type": "Poster", "name": "Zooming from Context to Cue: Hierarchical Preference Optimization for Multi-Image MLLMs", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/116709", "abstract": "Multi-modal Large Language Models (MLLMs) excel at single-image tasks but struggle with multi-image understanding due to cross-modal misalignment, leading to hallucinations (context omission, conflation, and misinterpretation). Existing methods using Direct Preference Optimization (DPO) constrain optimization to a solitary image reference within the input sequence, neglecting holistic context modeling. We propose \\textbf{C}ontext-to-\\textbf{C}ue \\textbf{D}irect \\textbf{P}reference \\textbf{O}ptimization~\\textbf{(CcDPO)}, a multi-level preference optimization framework that enhances per-image perception in multi-image settings by zooming into visual clues\u2014from sequential context to local details. It features: (i) \\textit{Context-Level Optimization} : Re-evaluates cognitive biases underlying MLLMs' multi-image context comprehension and integrates a spectrum of low-cost global sequence preferences for bias mitigation. (ii) \\textit{Needle-Level Optimization} : Directs attention to fine-grained visual details through region-targeted visual prompts and multimodal preference supervision. To support scalable optimization, we also construct \\textbf{MultiScope-42k}, an automatically generated dataset with high-quality multi-level preference pairs. 
Experiments show that CcDPO significantly reduces hallucinations and yields consistent performance gains across general multi-image vision-language tasks.", "arxiv_id": "2505.22396v1", "arxiv_authors": ["Xudong Li", "Mengdan Zhang", "Peixian Chen", "Xiawu Zheng", "Yan Zhang", "Jingyuan Zheng", "Yunhang Shen", "Ke Li", "Chaoyou Fu", "Xing Sun", "Rongrong Ji"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a52b"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.667Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1172095, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}},{"_id": {"$oid": "69092ad56ff4d31845e5a99a"}, "filepath": "data/2505.23734v2.png", "tags": [], "_media_type": "image", "_rand": 0.9990518776118221, "type": "Poster", "name": "ZPressor: Bottleneck-Aware Compression for Scalable Feed-Forward 3DGS", "virtualsite_url": "https://neurips.cc/virtual/2025/poster/115036", "abstract": "Feed-forward 3D Gaussian Splatting (3DGS) models have recently emerged as a promising solution for novel view synthesis, enabling one-pass inference without the need for per-scene 3DGS optimization. However, their scalability is fundamentally constrained by the limited capacity of their encoders, leading to degraded performance or excessive memory consumption as the number of input views increases. In this work, we analyze feed-forward 3DGS frameworks through the lens of the Information Bottleneck principle and introduce ZPressor, a lightweight architecture-agnostic module that enables efficient compression of multi-view inputs into a compact latent state $Z$ that retains essential scene information while discarding redundancy. Concretely, ZPressor enables existing feed-forward 3DGS models to scale to over 100 input views at 480P resolution on an 80GB GPU, by partitioning the views into anchor and support sets and using cross attention to compress the information from the support views into anchor views, forming the compressed latent state $Z$. We show that integrating ZPressor into several state-of-the-art feed-forward 3DGS models consistently improves performance under moderate input views and enhances robustness under dense view settings on two large-scale benchmarks DL3DV-10K and RealEstate10K.", "arxiv_id": "2505.23734v2", "arxiv_authors": ["Weijie Wang", "Donny Y. Chen", "Zeyu Zhang", "Duochao Shi", "Akide Liu", "Bohan Zhuang"], "arxiv_category": {"_id": {"$oid": "69092ad56ff4d31845e5a52c"}, "_cls": "Classification", "tags": [], "label": "cs.CV"}, "_dataset_id": {"$oid": "69092ad56ff4d31845e5a0be"}, "created_at": {"$date": "2025-11-03T22:21:09.667Z"}, "last_modified_at": {"$date": "2025-11-03T22:21:10.398Z"}, "metadata": {"_cls": "ImageMetadata", "size_bytes": 1110384, "mime_type": "image/png", "width": 4250, "height": 5500, "num_channels": 3}}]}