| type | name | virtualsite_url | speakers/authors | abstract |
|---|---|---|---|---|
Poster
|
Amortized Variational Transdimensional Inference
|
https://neurips.cc//virtual/2025/poster/118604
|
Laurence Davies, Daniel MacKinlay, Rafael Oliveira, Scott Sisson
|
The expressiveness of flow-based models combined with stochastic variational inference (SVI) has, in recent years, expanded the application of optimization-based Bayesian inference to include problems with complex data relationships. However, until now, SVI using flow-based models has been limited to problems of fixed dimension. We introduce CoSMIC normalizing flows (COntextually-Specified Masking for Identity-mapped Components), an extension to neural autoregressive conditional normalizing flow architectures that enables using a single amortized variational density for inference over a transdimensional target distribution. We propose a combined stochastic variational transdimensional inference approach to training CoSMIC flows using techniques from Bayesian optimization and Monte Carlo gradient estimation. Numerical examples are provided to demonstrate the proposed methodology on challenging problems that scale to high-cardinality model spaces.
|
Poster
|
Amplifying Prominent Representations in Multimodal Learning via Variational Dirichlet Process
|
https://neurips.cc//virtual/2025/poster/117022
|
Tsai Hor Chan, Feng Wu, Yihang Chen, Guosheng Yin, Lequan Yu
|
Developing effective multimodal fusion approaches has become increasingly essential in many real-world scenarios, such as health care and finance. The key challenge is how to preserve the feature expressiveness in each modality while learning cross-modal interactions. Previous approaches primarily focus on the cross-modal alignment, while over-emphasis on the alignment of marginal distributions of modalities may impose excess regularization and obstruct meaningful representations within each modality. The Dirichlet process (DP) mixture model is a powerful Bayesian non-parametric method that can amplify the most prominent features by its richer-gets-richer property, which allocates increasing weights to them. Inspired by this unique characteristic of DP, we propose a new DP-driven multimodal learning framework that automatically achieves an optimal balance between prominent intra-modal representation learning and cross-modal alignment. Specifically, we assume that each modality follows a mixture of multivariate Gaussian distributions and further adopt DP to calculate the mixture weights for all the components. This paradigm allows DP to dynamically allocate the contributions of features and select the most prominent ones, leveraging its richer-gets-richer property, thus facilitating multimodal feature fusion. Extensive experiments on several multimodal datasets demonstrate the superior performance of our model over other competitors. Ablation analysis further validates the effectiveness of DP in aligning modality distributions and its robustness to changes in key hyperparameters. Code is anonymously available at https://anonymous.4open.science/r/DPMM-F15D
|
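As a side note on the mechanism the abstract above relies on: the richer-gets-richer weight allocation of a Dirichlet process can be pictured with a standard truncated stick-breaking construction. The sketch below is generic NumPy, not the authors' code; the concentration `alpha` and the truncation level are arbitrary illustrative choices.

```python
import numpy as np

def stick_breaking_weights(alpha: float, n_components: int, seed: int = 0) -> np.ndarray:
    """Sample truncated Dirichlet-process mixture weights via stick breaking.

    Smaller alpha concentrates mass on a few components, illustrating the
    richer-gets-richer allocation used to emphasize prominent features.
    """
    rng = np.random.default_rng(seed)
    betas = rng.beta(1.0, alpha, size=n_components)          # stick-break fractions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas[:-1])))
    return betas * remaining                                  # component weights

print(stick_breaking_weights(alpha=0.5, n_components=8).round(3))
```

With small `alpha`, most of the mass lands on the first few components, which is the qualitative behaviour the abstract exploits for feature selection.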
Poster
|
A Multimodal Benchmark for Framing of Oil & Gas Advertising and Potential Greenwashing Detection
|
https://neurips.cc//virtual/2025/poster/121505
|
Gaku Morio, Harri Rowlands, Dominik Stammbach, Christopher D Manning, Peter Henderson
|
Companies spend large amounts of money on public relations campaigns to project a positive brand image. However, sometimes there is a mismatch between what they say and what they do. Oil & gas companies, for example, are accused of "greenwashing" with imagery of climate-friendly initiatives. Understanding the framing, and changes in framing, at scale can help better understand the goals and nature of public relations campaigns. To address this, we introduce a benchmark dataset of expert-annotated video ads obtained from Facebook and YouTube. The dataset provides annotations for 13 framing types for more than 50 companies or advocacy groups across 21 countries. Our dataset is specifically designed for the evaluation of vision-language models (VLMs), distinguishing it from past text-only framing datasets. Baseline experiments show some promising results, while leaving room for improvement for future work: GPT-4.1 can detect environmental messages with 79% F1 score, while our best model only achieves 46% F1 score on identifying framing around green innovation. We also identify challenges that VLMs must address, such as implicit framing, handling videos of various lengths, or implicit cultural backgrounds. Our dataset contributes to research in multimodal analysis of strategic communication in the energy sector.
|
Poster
|
A Multimodal BiMamba Network with Test-Time Adaptation for Emotion Recognition Based on Physiological Signals
|
https://neurips.cc//virtual/2025/poster/119989
|
Ziyu Jia, Tingyu Du, Zhengyu Tian, Hongkai Li, Yong Zhang, Chenyu Liu
|
Emotion recognition based on physiological signals is of considerable significance in fields including psychological health and human-computer interaction, particularly in light of the substantial advances in multimodal emotion recognition techniques. However, two key challenges remain unresolved: 1) how to effectively model the intra-modal long-range dependencies and inter-modal correlations in multimodal physiological emotion signals and 2) how to address the performance limitations resulting from missing multimodal data. In this paper, we propose a multimodal bidirectional Mamba (BiMamba) network with test-time adaptation (TTA) for emotion recognition named BiM-TTA. Specifically, BiM-TTA consists of a multimodal BiMamba network and a multimodal TTA. The former includes intra-modal and inter-modal BiMamba modules, which model long-range dependencies along the time dimension and capture cross-modal correlations along the channel dimension, respectively. The latter (TTA) effectively mitigates the negative impact of the distribution shifts amplified by missing multimodal data through two-level entropy-based sample filtering and mutual information sharing across modalities. Experiments on two multimodal emotion datasets demonstrate that BiM-TTA achieves state-of-the-art performance.
|
Poster
|
A multiscale analysis of mean-field transformers in the moderate interaction regime
|
https://neurips.cc//virtual/2025/poster/117615
|
Giuseppe Bruno, Federico Pasqualotto, Andrea Agazzi
|
In this paper, we study the evolution of tokens through the depth of encoder-only transformer models at inference time by modeling them as a system of particles interacting in a mean-field way and studying the corresponding dynamics. More specifically, we consider this problem in the moderate interaction regime, where the number $N$ of tokens is large and the inverse temperature parameter $\beta$ of the model scales together with $N$. In this regime, the dynamics of the system displays a multiscale behavior: a fast phase, where the token empirical measure collapses on a low-dimensional space, an intermediate phase, where the measure further collapses into clusters, and a slow one, where such clusters sequentially merge into a single one. We provide a rigorous characterization of the limiting dynamics in each of these phases and prove convergence in the above-mentioned limit, exemplifying our results with some simulations.
|
Poster
|
A Multi-Task Benchmark for Abusive Language Detection in Low-Resource Settings
|
https://neurips.cc//virtual/2025/poster/121801
|
Fitsum Gaim, Hoyun Song, Huije Lee, Changgeon Ko, Euijun Hwang, Jong Park
|
Content moderation research has recently made significant advances, but still fails to serve the majority of the world's languages due to the lack of resources, leaving millions of vulnerable users to online hostility. This work presents a large-scale human-annotated multi-task benchmark dataset for abusive language detection in Tigrinya social media with joint annotations for three tasks: abusiveness, sentiment, and topic classification. The dataset comprises 13,717 YouTube comments annotated by nine native speakers, collected from 7,373 videos with a total of over 1.2 billion views across 51 channels. We developed an iterative term clustering approach for effective data selection. Recognizing that around 64% of Tigrinya social media content uses Romanized transliterations rather than native Ge'ez script, our dataset accommodates both writing systems to reflect actual language use. We establish strong baselines across the tasks in the benchmark, while leaving significant challenges for future contributions. Our experiments reveal that small, specialized multi-task models outperform the current frontier models in the low-resource setting, achieving up to 86% accuracy (+7 points) in abusiveness detection. We make the resources publicly available to promote research on online safety.
|
Poster
|
An Adaptive Algorithm for Bilevel Optimization on Riemannian Manifolds
|
https://neurips.cc//virtual/2025/poster/119506
|
Xu Shi, Rufeng Xiao, Rujun Jiang
|
Existing methods for solving Riemannian bilevel optimization (RBO) problems require prior knowledge of the problem's first- and second-order information and the curvature parameter of the Riemannian manifold to determine step sizes, which poses practical limitations when these parameters are unknown or computationally infeasible to obtain. In this paper, we introduce the Adaptive Riemannian Hypergradient Descent (AdaRHD) algorithm for solving RBO problems. To our knowledge, AdaRHD is the first method to incorporate a fully adaptive step size strategy that eliminates the need for problem-specific parameters when solving RBO problems. We prove that AdaRHD achieves an $\mathcal{O}(1/\epsilon)$ iteration complexity for finding an $\epsilon$-stationary point, thus matching the complexity of existing non-adaptive methods. Furthermore, we demonstrate that substituting exponential mappings with retraction mappings maintains the same complexity bound. Experiments demonstrate that AdaRHD achieves comparable performance to existing non-adaptive approaches while exhibiting greater robustness.
|
Poster
|
An Adaptive Quantum Circuit of Dempster's Rule of Combination for Uncertain Pattern Classification
|
https://neurips.cc//virtual/2025/poster/117080
|
Fuyuan Xiao, Yu Zhou, Witold Pedrycz
|
In pattern classification, efficient uncertainty reasoning plays a critical role, particularly in real-time applications involving noisy data, ambiguous class boundaries, or overlapping categories. Leveraging the advanced computational power of quantum computing, an Adaptive Quantum Circuit for Dempster’s Rule of Combination (AQC-DRC) is proposed to address efficient classification under uncertain environments. The AQC-DRC is developed within the framework of quantum evidence theory (QET) and facilitates decision-making based on quantum basic probability and plausibility levels, which is a generalized Bayesian inference method. The AQC-DRC provides a deterministic computation of DRC, ensuring that quantum fusion outcomes in uncertain pattern classification are exactly aligned with those of the classical method, while simultaneously achieving exponential reductions in the computational complexity of evidence combination and significantly improving fusion efficiency. It is found that the quantum basic probability amplitude function in QET, as a generalized quantum probability amplitude, can be naturally utilized to express the quantum amplitude encoding. In addition, the quantum basic probability in QET, as a generalized quantum probability, naturally forms a quantum basic probability distribution and can be used to represent quantum measurement outcomes for quantum basic probability level decision-making. Furthermore, the quantum plausibility function in QET can also be naturally used to express the quantum measurement outcomes for quantum plausibility level decision-making. These findings enrich the physical understanding of quantum amplitude encoding and quantum measurement outcomes, offering broad application prospects for representing and processing uncertain knowledge in pattern classification.
|
Poster
|
Analog Foundation Models
|
https://neurips.cc//virtual/2025/poster/115016
|
Julian Büchel, Iason Chalas, Giovanni Acampa, An Chen, Omobayode Fagbohungbe, Hsinyu Tsai, Kaoutar El Maghraoui, Manuel Le Gallo, Abbas Rahimi, Abu Sebastian
|
Analog in-memory computing (AIMC) is a promising compute paradigm to improve speed and power efficiency of neural network inference beyond the limits of conventional von Neumann-based architectures. However, AIMC introduces fundamental challenges such as noisy computations and strict constraints on input and output quantization. Because of these constraints and imprecisions, off-the-shelf LLMs are not able to achieve 4-bit-level performance when deployed on AIMC-based hardware. While researchers previously investigated recovering this accuracy gap on small, mostly vision-based models, a generic method applicable to LLMs pre-trained on trillions of tokens does not yet exist. In this work, we introduce a general and scalable method to robustly adapt LLMs for execution on noisy, low-precision analog hardware. Our approach enables state-of-the-art models — including Phi-3-mini-4k-instruct and Llama-3.2-1B-Instruct — to retain performance comparable to 4-bit weight, 8-bit activation baselines, despite the presence of analog noise and quantization constraints. Additionally, we show that as a byproduct of our training methodology, analog foundation models can be quantized for inference on low-precision digital hardware. Finally, we show that our models also benefit from test-time compute scaling, showing better scaling behavior than models trained with 4-bit weight and 8-bit static input quantization. Our work bridges the gap between high-capacity LLMs and efficient analog hardware, offering a path toward energy-efficient foundation models. Code is available at [anonymous.4open.science/r/analog-foundation-models-BB03](https://anonymous.4open.science/r/analog-foundation-models-BB03).
|
Poster
|
Analog In-memory Training on General Non-ideal Resistive Elements: The Impact of Response Functions
|
https://neurips.cc//virtual/2025/poster/117575
|
Zhaoxian Wu, Quan Xiao, Tayfun Gokmen, Omobayode Fagbohungbe, Tianyi Chen
|
As the economic and environmental costs of training and deploying large vision or language models increase dramatically, analog in-memory computing (AIMC) emerges as a promising energy-efficient solution. However, the training perspective, especially the training dynamics, is underexplored. In AIMC hardware, the trainable weights are represented by the conductance of resistive elements and updated using consecutive electrical pulses. While the conductance would ideally change by a constant amount in response to each pulse, in reality the change is scaled by asymmetric and non-linear response functions, leading to non-ideal training dynamics. This paper provides a theoretical foundation for gradient-based training on AIMC hardware with non-ideal response functions. We demonstrate that asymmetric response functions negatively impact Analog SGD by imposing an implicit penalty on the objective. To overcome the issue, we propose a residual learning algorithm, which provably converges exactly to a critical point by solving a bilevel optimization problem. We show that the proposed method can be extended to deal with other hardware imperfections like limited response granularity. To our knowledge, this is the first paper to investigate the impact of a class of generic non-ideal response functions. The conclusion is supported by simulations validating our theoretical insights.
|
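The non-ideal pulse-based update described above can be illustrated with a toy response model. This is a minimal sketch under an assumed saturating, asymmetric response, not the paper's device model; the constants and branch shapes are illustrative only.

```python
def pulse_update(w: float, n_pulses: int, dw: float = 0.01,
                 w_max: float = 1.0, w_min: float = -1.0) -> float:
    """Accumulate signed conductance pulses with a saturating, asymmetric response.

    Ideally each pulse would change w by exactly +/- dw; here the effective step
    shrinks as w approaches the device limits, and the up/down branches differ.
    """
    for _ in range(abs(n_pulses)):
        if n_pulses > 0:                      # potentiation pulse
            w += dw * (1.0 - w / w_max)       # step shrinks near w_max
        else:                                 # depression pulse
            w -= dw * (1.0 - w / w_min)       # different (asymmetric) branch
    return w

ideal = 50 * 0.01                             # ideal response: 50 equal steps
nonideal = pulse_update(0.0, 50)              # saturating response falls short
print(f"ideal: {ideal:.3f}, non-ideal: {nonideal:.3f}")
```

The gap between the two printed values is the kind of systematic update error that, per the abstract, acts like an implicit penalty on the training objective.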
Poster
|
Analogy-based Multi-Turn Jailbreak against Large Language Models
|
https://neurips.cc//virtual/2025/poster/117981
|
Mengjie Wu, Zhenjun Lin, Yihao Huang, Kangjie Chen, Yuyang Zhang, Yuhan Huang, Run Wang, Lina Wang
|
Large language models (LLMs) are inherently designed to support multi-turn interactions, which opens up new possibilities for jailbreak attacks that unfold gradually and potentially bypass safety mechanisms more effectively than single-turn attacks. However, current multi-turn jailbreak methods are still in their early stages and suffer from two key limitations. First, they all inherently require inserting sensitive phrases into the context, which makes the dialogue appear suspicious and increases the likelihood of rejection, undermining the effectiveness of the attack. Second, even when harmful content is generated, the response often fails to align with the malicious prompt due to semantic drift, where the conversation slowly moves away from its intended goal. To address these challenges, we propose an analogy-based black-box multi-turn jailbreak framework that constructs fully benign contexts to improve attack success rate while ensuring semantic alignment with the malicious intent. The method first guides the model through safe tasks that mirror the response structure of the malicious prompt, enabling it to internalize the format without exposure to sensitive content. A controlled semantic shift is then introduced in the final turn, substituting benign elements with malicious ones while preserving structural coherence. Experiments on six commercial and open-source LLMs and two benchmark datasets show that our method significantly improves attack performance, achieving an average attack success rate of 93.3% and outperforming five competitive baselines. Our code is released at https://anonymous.4open.science/r/AMA-E1C4
|
Poster
|
Analytical Contrastive Projection for Accurate Continual Learning
|
https://neurips.cc//virtual/2025/poster/115870
|
Saleh Momeni, Changnan Xiao, Bing Liu
|
This paper studies the class-incremental learning (CIL) setting of continual learning. CIL aims to learn a sequence of tasks, where each task consists of a set of classes. Traditional CIL methods do not use a pre-trained model (PTM) and suffer from catastrophic forgetting (CF) due to their need to incrementally learn both feature representations and the classifier. The incorporation of PTMs into CIL has led to the development of computationally efficient methods that treat the PTM as a feature extractor paired with analytical classifiers. These methods often achieve state-of-the-art performance in CIL. However, they still face a major limitation: the inability to continually adapt or update feature representations incrementally to best suit the specific CIL tasks, leading to suboptimal performance. To overcome this, we propose ACP (Analytical Contrastive Projection), a novel method that retains the computational efficiency and stability of analytical classifiers while enabling incremental feature adaptation without gradient-based training. Our experiments demonstrate that ACP not only outperforms strong baselines but also matches the accuracy of joint training, which is regarded as the upper bound of CIL.
|
Poster
|
Analytic Energy-Guided Policy Optimization for Offline Reinforcement Learning
|
https://neurips.cc//virtual/2025/poster/116269
|
Jifeng Hu, Sili Huang, Zhejian Yang, Shengchao Hu, Li Shen, Hechang Chen, Lichao Sun, Yi Chang, Dacheng Tao
|
Conditional decision generation with diffusion models has shown powerful competitiveness in reinforcement learning (RL). Recent studies reveal the relation between energy-function-guidance diffusion models and constrained RL problems. The main challenge lies in estimating the intermediate energy, which is intractable due to the log-expectation formulation during the generation process. To address this issue, we propose the Analytic Energy-guided Policy Optimization (AEPO). Specifically, we first provide a theoretical analysis and the closed-form solution of the intermediate guidance when the diffusion model obeys the conditional Gaussian transformation. Then, we analyze the posterior Gaussian distribution in the log-expectation formulation and obtain the target estimation of the log-expectation under mild assumptions. Finally, we train an intermediate energy neural network to approach the target estimation of the log-expectation formulation. We apply our method to more than 30 offline RL tasks to demonstrate its effectiveness. Extensive experiments illustrate that our method surpasses numerous representative baselines in D4RL offline reinforcement learning benchmarks.
|
Poster
|
Analyzing Fine-Grained Alignment and Enhancing Vision Understanding in Multimodal Language Models
|
https://neurips.cc//virtual/2025/poster/118225
|
Jiachen Jiang, Jinxin Zhou, Bo Peng, Xia Ning, Zhihui Zhu
|
Achieving better alignment between vision embeddings and Large Language Models (LLMs) is crucial for enhancing the abilities of Multimodal LLMs (MLLMs), particularly for recent models that rely on powerful pretrained vision encoders and LLMs. A common approach to connect the pretrained vision encoder and LLM is through a projector applied after the vision encoder. However, the projector is often trained to enable the LLM to generate captions, and hence the mechanism by which LLMs understand each vision token remains unclear. In this work, we first investigate the role of the projector in compressing vision embeddings and aligning them with word embeddings. We show that the projector significantly compresses visual information, removing redundant details while preserving essential elements necessary for the LLM to understand visual content. We then examine patch-level alignment---the alignment between each vision patch and its corresponding semantic words---and propose a $\textit{multi-semantic alignment hypothesis}$. Our analysis indicates that the projector trained by caption loss improves patch-level alignment but only to a limited extent, resulting in weak and coarse alignment. To address this issue, we propose $\textit{patch-aligned training}$ to efficiently enhance patch-level alignment. Our experiments show that patch-aligned training (1) achieves stronger compression capability and improved patch-level alignment, enabling the MLLM to generate higher-quality captions, (2) improves the MLLM's performance by 16% on referring expression grounding tasks, 4% on question-answering tasks, and 3% on modern instruction-following benchmarks when using the same supervised fine-tuning (SFT) setting. The proposed method can be easily extended to other multimodal models.
|
Poster
|
Analyzing Similarity Metrics for Data Selection for Language Model Pretraining
|
https://neurips.cc//virtual/2025/poster/118786
|
Dylan Sam, Ayan Chakrabarti, Afshin Rostamizadeh, Srikumar Ramalingam, Gui Citovsky, Sanjiv Kumar
|
Measuring similarity between training examples is critical for curating high-quality and diverse pretraining datasets for language models. However, similarity is typically computed with a generic off-the-shelf embedding model that has been trained for tasks such as retrieval. Whether these embedding-based similarity metrics are well-suited for pretraining data selection remains largely unexplored. In this paper, we propose a new framework to assess the suitability of a similarity metric specifically for data curation in language model pretraining applications. Our framework's first evaluation criterion captures how well distances reflect generalization in pretraining loss between different training examples. Next, we use each embedding model to guide a standard diversity-based data curation algorithm and measure its utility by pretraining a language model on the selected data and evaluating downstream task performance. Finally, we evaluate the capabilities of embeddings to distinguish between examples from different data sources. With these evaluations, we demonstrate that standard off-the-shelf embedding models are not well-suited for the pretraining data curation setting, underperforming even remarkably simple embeddings that are extracted from models trained on the same pretraining corpus. Our experiments are performed on the Pile, for pretraining a 1.7B parameter language model on 200B tokens. We believe our analysis and evaluation framework serves as a foundation for the future design of embeddings that specifically reason about similarity in pretraining datasets.
|
Poster
|
Analyzing the Power of Chain of Thought through Memorization Capabilities
|
https://neurips.cc//virtual/2025/poster/115641
|
Lijia Yu, Xiao-Shan Gao, Lijun Zhang
|
It has been shown that the chain of thought (CoT) can enhance the power of LLMs to simulate a Turing machine or an algorithm, and in particular their mathematical reasoning ability. The memorization capability of LLMs is an important aspect of their expressive ability, which offers valuable insight into designing models with enhanced generalization potential. Currently, the optimal memorization capacities of transformers have been established for both the general dataset and the dataset that satisfies a specific separability condition. However, the question of whether the CoT can improve the memorization capability of LLMs remains unexamined. To fill this gap, we establish the memorization capability for fixed-precision autoregressive transformers with or without CoT. Precisely, we first give the necessary and sufficient conditions for transformers to memorize a finite language and then provide the upper and lower bounds for the number of parameters of the memorization transformers. Our result indicates that the classes of languages that can be memorized by transformers with or without CoT do not contain each other, and the same number of parameters is needed for transformers with or without CoT to memorize, implying that CoT does not enhance a transformer’s memorization power significantly. We further show that CoT cannot help transformers memorize certain infinite languages.
|
Poster
|
An Analysis of Causal Effect Estimation using Outcome Invariant Data Augmentation
|
https://neurips.cc//virtual/2025/poster/119327
|
Uzair Akbar, Niki Kilbertus, Hao Shen, Krikamol Muandet, Bo Dai
|
The technique of data augmentation (DA) is often used in machine learning for regularization purposes to better generalize under i.i.d. settings. In this work, we make a case for the use of DA beyond just the i.i.d. setting, for generalization across interventions as well, by presenting a unifying framework with topics in causal inference. Specifically, we argue that when the outcome generating mechanism is invariant to our choice of DA, then such augmentations can effectively be thought of as interventions on the treatment generating mechanism itself. This can potentially help to reduce the amount of bias in our estimation of causal effects arising from hidden confounders. In the presence of such unobserved confounding we typically make use of instrumental variables (IVs) -- sources of treatment randomization that are conditionally independent of the outcome. However, IVs may not be as readily available as DA for many applications, which is the main motivation behind this work. By appropriately regularizing IV-based estimators, we introduce the concept of IV-like (IVL) regression for settings where treatment randomization sources may carry no information about the outcome, and explore its use for improving predictive performance across treatment interventions and reducing confounding bias. Finally, we cast parameterized DA as an IVL regression problem and show that, when used in composition, it can simulate a worst-case application of such DA, further improving performance on causal estimation and generalization tasks beyond what simple DA may offer. This is shown both theoretically for the population case and via simulation experiments for the finite sample case using a simple linear example. We also present real data experiments to support our case.
|
Poster
|
An Analysis of Concept Bottleneck Models: Measuring, Understanding, and Mitigating the Impact of Noisy Annotations
|
https://neurips.cc//virtual/2025/poster/115724
|
Seonghwan Park, Jueun Mun, Donghyun Oh, Namhoon Lee
|
Concept bottleneck models (CBMs) ensure interpretability by decomposing predictions into human interpretable concepts. Yet the annotations used for training CBMs that enable this transparency are often noisy, and the impact of such corruption is not well understood. In this study, we present the first systematic study of noise in CBMs and show that even moderate corruption simultaneously impairs prediction performance, interpretability, and the intervention effectiveness. Our analysis identifies a susceptible subset of concepts whose accuracy declines far more than the average gap between noisy and clean supervision and whose corruption accounts for most performance loss. To mitigate this vulnerability we propose a two-stage framework. During training, sharpness-aware minimization stabilizes the learning of noise-sensitive concepts. During inference, where clean labels are unavailable, we rank concepts by predictive entropy and correct only the most uncertain ones, using uncertainty as a proxy for susceptibility. Theoretical analysis and extensive ablations elucidate why sharpness-aware training confers robustness and why uncertainty reliably identifies susceptible concepts, providing a principled basis that preserves both interpretability and resilience in the presence of noise.
|
Poster
|
An Analytical Theory of Spectral Bias in the Learning Dynamics of Diffusion Models
|
https://neurips.cc//virtual/2025/poster/117950
|
Binxu Wang, Cengiz Pehlevan
|
We develop an analytical framework for understanding how the learned distribution evolves during diffusion model training. Leveraging the Gaussian equivalence principle, we derive exact solutions for the gradient-flow dynamics of weights in one- or two-layer linear or linear convolutional denoiser settings with arbitrary data, where linear networks converge along principal components and convolutional networks converge along Fourier modes. Remarkably, these solutions allow us to derive the generated distribution in closed form and its KL-divergence through training. These analytical results expose a pronounced \emph{spectral bias}, i.e., for both weights and generated distributions, the convergence time of a mode follows an inverse power law of its variance. Empirical experiments on both Gaussian and natural image datasets demonstrate that the power-law spectral bias remains robust even when using deeper or convolutional architectures. Our results underscore the importance of the data covariance in dictating the order and rate at which diffusion models learn different modes of the data, providing potential explanations of why early stopping could lead to incorrect details in image generative models.
|
Poster
|
Anatomically inspired digital twins capture hierarchical object representations in visual cortex
|
https://neurips.cc//virtual/2025/poster/119563
|
Emanuele Luconi, Dario Liscai, Carlo Baldassi, Alessandro Marin Vargas, Alessandro Sanzeni
|
Invariant object recognition, the ability to identify objects despite changes in appearance, is a hallmark of visual processing in the brain, yet its understanding remains a central challenge in systems neuroscience. Artificial neural networks trained to predict neural responses to visual stimuli (“digital twins”) could provide a powerful framework for studying such complex computations in silico. However, while current models accurately capture single-neuron responses within individual visual areas, their ability to reproduce how populations of neurons represent object identity, and how these representations transform across the cortical hierarchy, remains largely unexplored. Here we examine key functional signatures observed experimentally and find that current models account for hierarchical changes in basic single-neuron properties, such as receptive field size, but fail to capture more complex population-level phenomena, particularly invariant object representations. To address this gap, we introduce a biologically inspired hierarchical readout scheme that mirrors cortical anatomy, modeling each visual area as a projection from a distinct depth within a shared core network. This approach significantly improves the prediction of population-level representational transformations, outperforming standard models that use only the final layer, as well as alternatives with modified architecture, regularization, and loss function. Our results suggest that incorporating anatomical information provides a strong inductive bias in digital twin models, enabling them to better capture general principles of brain function.
|
Poster
|
An Attempt to Use Synthetic Pretraining Playgrounds for Language Model Architecture Design
|
https://neurips.cc//virtual/2025/poster/116327
|
Zeyuan Allen-Zhu
|
Understanding architectural differences in language models is challenging, particularly at academic-scale pretraining (e.g., 1.3B params, 100B tokens), where results are often dominated by noise. We propose synthetic pretraining tasks to isolate and evaluate key model capabilities. Using this framework, we identify \emph{Canon layers}: lightweight components—named after the musical term—that enhance horizontal information flow across neighboring tokens. Canon layers compute a weighted sum of nearby token representations and integrate seamlessly into Transformers, linear attention, state-space models, or any sequence architecture. We present 12 results, including how Canon enhances reasoning depth ($2\times$), reasoning breadth, knowledge manipulation, etc. Canon transforms weak architectures like NoPE to match RoPE and linear attention to rival state-space models (e.g., Mamba2), validated through synthetic tasks and real-world academic-scale pretraining. This synthetic framework isolates core capabilities often obscured at academic scales, offering an \emph{economical, principled path} to guide architecture design as pretraining pipelines improve, aiming to unlock deeper reasoning.
|
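The "weighted sum of nearby token representations" described above can be pictured as a causal local mixing step. The sketch below is a generic PyTorch illustration, not the paper's Canon layer; the window size, uniform fixed weights, and residual connection are assumptions made for clarity.

```python
import torch

def canon_like_mix(x: torch.Tensor, window: int = 3) -> torch.Tensor:
    """Causally mix each token with its recent neighbors via a weighted sum.

    x: (batch, seq_len, dim). The weights here are fixed and uniform purely
    for illustration; a real layer would learn them.
    """
    batch, seq_len, dim = x.shape
    out = torch.zeros_like(x)
    weights = torch.full((window,), 1.0 / window)     # illustrative weights
    for t in range(seq_len):
        lo = max(0, t - window + 1)
        chunk = x[:, lo:t + 1, :]                     # current + previous tokens
        w = weights[-chunk.shape[1]:].view(1, -1, 1)
        out[:, t, :] = (w * chunk).sum(dim=1)         # local weighted sum
    return x + out                                    # residual connection

h = torch.randn(2, 8, 16)
print(canon_like_mix(h).shape)  # torch.Size([2, 8, 16])
```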
Poster
|
Anchor-based Maximum Discrepancy for Relative Similarity Testing
|
https://neurips.cc//virtual/2025/poster/118191
|
Zhijian Zhou, Liuhua Peng, Xunye Tian, Feng Liu
|
The relative similarity testing aims to determine which of the distributions, $P$ or $Q$, is closer to an anchor distribution $U$. Existing kernel-based approaches often test the relative similarity with a fixed kernel in a manually specified alternative hypothesis, e.g., $Q$ is closer to $U$ than $P$. Although kernel selection is known to be important to kernel-based testing methods, the manually specified hypothesis poses a significant challenge for kernel selection in relative similarity testing: Once the hypothesis is specified first, we can always find a kernel such that the hypothesis is rejected. This challenge makes relative similarity testing ill-defined when we want to select a good kernel after the hypothesis is specified. In this paper, we cope with this challenge via learning a proper hypothesis and a kernel simultaneously, instead of learning a kernel after manually specifying the hypothesis. We propose an anchor-based maximum discrepancy (AMD), which defines the relative similarity as the maximum discrepancy between the distances of $(U, P)$ and $(U, Q)$ in a space of deep kernels. Based on AMD, our testing incorporates two phases. In Phase I, we estimate the AMD over the deep kernel space and infer the potential hypothesis. In Phase II, we assess the statistical significance of the potential hypothesis, where we propose a unified testing framework to derive thresholds for tests over different possible hypotheses from Phase I. Lastly, we validate our method theoretically and demonstrate its effectiveness via extensive experiments on benchmark datasets.
|
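For a single fixed Gaussian kernel (rather than the learned deep kernel the abstract above optimizes over), the discrepancy between the (U, P) and (U, Q) kernel distances can be sketched as a difference of MMD estimates. Function names and the bandwidth below are illustrative assumptions, not the paper's AMD estimator.

```python
import torch

def mmd2(x: torch.Tensor, y: torch.Tensor, bandwidth: float = 1.0) -> torch.Tensor:
    """Biased estimate of squared MMD between samples x and y, Gaussian kernel."""
    def k(a, b):
        d2 = torch.cdist(a, b).pow(2)
        return torch.exp(-d2 / (2 * bandwidth ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

def amd_like_statistic(u, p, q, bandwidth=1.0):
    """Discrepancy between the (U, P) and (U, Q) kernel distances.

    Positive values suggest Q is closer to the anchor U than P is.
    """
    return mmd2(u, p, bandwidth) - mmd2(u, q, bandwidth)

u = torch.randn(200, 2)
p = torch.randn(200, 2) + 1.0   # shifted farther from the anchor
q = torch.randn(200, 2) + 0.2   # shifted slightly from the anchor
print(amd_like_statistic(u, p, q).item())
```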
Poster
|
Anchored Diffusion Language Model
|
https://neurips.cc//virtual/2025/poster/119160
|
Litu Rout, Constantine Caramanis, Sanjay Shakkottai
|
Diffusion Language Models (DLMs) promise parallel generation and bidirectional context, yet they underperform autoregressive (AR) models in both *likelihood modeling* and *generated text quality*. We identify that this performance gap arises when important tokens (e.g., key words or low-frequency words that anchor a sentence) are masked early in the forward process, limiting contextual information for accurate reconstruction. To address this, we introduce the *Anchored Diffusion Language Model (ADLM)*, a novel two-stage framework that first predicts distributions over important tokens via an anchor network, and then predicts the likelihoods of missing tokens conditioned on the anchored predictions. ADLM significantly improves test perplexity on LM1B and OpenWebText, achieving up to 25.4% gains over prior DLMs, and narrows the gap with strong AR baselines. It also achieves state-of-the-art zero-shot generalization across seven benchmarks and surpasses AR models in MAUVE score, which marks the first time a DLM generates more human-like text than an AR model. Theoretically, we derive an Anchored Negative Evidence Lower Bound (ANELBO) objective and show that anchoring improves sample complexity and likelihood modeling. Beyond diffusion, anchoring boosts performance in AR models and enhances reasoning in math and logic tasks, outperforming existing chain-of-thought approaches.
|
Poster
|
A Near-Optimal Algorithm for Decentralized Convex-Concave Finite-Sum Minimax Optimization
|
https://neurips.cc//virtual/2025/poster/115765
|
Hongxu Chen, Ke Wei, Haishan Ye, Luo Luo
|
In this paper, we study the distributed convex-concave finite-sum minimax optimization over the network, and a decentralized variance-reduced optimistic gradient method with stochastic mini-batch sizes (DIVERSE) is proposed. For the strongly-convex-strongly-concave objective, it is shown that DIVERSE can achieve a linear convergence rate that depends on the global smoothness parameters, yielding sharper computation and communication complexity bounds than existing results. Furthermore, we also establish the lower complexity bounds, which show that our upper bounds are optimal up to a logarithmic factor in terms of the local incremental first-order oracle calls, the computation rounds, and the communication rounds. Numerical experiments demonstrate that our algorithm outperforms existing methods in practice.
|
Poster
|
A Near-optimal, Scalable and Parallelizable Framework for Stochastic Bandits Robust to Adversarial Corruptions and Beyond
|
https://neurips.cc//virtual/2025/poster/116325
|
Zicheng Hu, Cheng Chen
|
We investigate various stochastic bandit problems in the presence of adversarial corruption. A seminal work in this area is the BARBAR algorithm, which is both robust and efficient. However, it suffers from a regret of $O(KC)$, which does not match the lower bound $\Omega(C)$. In this paper, we first improve the BARBAR algorithm by proposing a novel framework called BARBAT, which eliminates the factor of $K$ to achieve an optimal regret bound up to a logarithmic factor. We also extend BARBAT to various settings, including multi-agent bandits, graph bandits, combinatorial semi-bandits and batched bandits. Compared to the Follow-The-Regularized-Leader (FTRL) framework, our methods offer the advantages of being parallelizable (making them suitable for multi-agent bandits and batched bandits) and having lower computational costs (especially in semi-bandits). Numerical experiments verify the efficiency of the proposed methods.
|
Poster
|
An Effective Levelling Paradigm for Unlabeled Scenarios
|
https://neurips.cc//virtual/2025/poster/116112
|
Fangming Cui, Di Yang, Yuqiang Ren, Zhou Yu, Liang Xiao, Xinmei Tian
|
Advancements in direct-integration parameter optimization have underscored their potential to enhance performance on labeled scenarios and tasks. One inherent flaw of these methods is that the optimized parameters usually exhibit weak performance on unlabeled tasks or scenarios. This may be attributed to the uncoordinated learning of the direct-integration framework. To mitigate this issue of uncoordinated learning, we propose a novel method called Levelling Paradigm (LePa) to improve performance for unlabeled tasks or scenarios. The proposed LePa dynamically constrains and coordinates multiple objective functions, thereby improving the robustness of coordinated fine-tuning. Comprehensive experiments demonstrate that LePa outperforms existing methods.
|
Poster
|
An Efficient Local Search Approach for Polarized Community Discovery in Signed Networks
|
https://neurips.cc//virtual/2025/poster/120341
|
Linus Aronsson, Morteza Haghir Chehreghani
|
Signed networks, where edges are labeled as positive or negative to represent friendly or antagonistic interactions, offer a natural framework for analyzing polarization, trust, and conflict in social systems. Detecting meaningful group structures in such networks is crucial for understanding online discourse, political divisions, and trust dynamics. A key challenge is to identify communities that are internally cohesive and externally antagonistic, while allowing for neutral or unaligned vertices. In this paper, we propose a method for identifying $k$ polarized communities that addresses a major limitation of prior methods: their tendency to produce highly size-imbalanced solutions. We introduce a novel optimization objective that avoids such imbalance. In addition, it is well known that approximation algorithms based on *local search* are highly effective for clustering signed networks when neutral vertices are not allowed. We build on this idea and design the first local search algorithm that extends to the setting with neutral vertices while scaling to large networks. By connecting our approach to block-coordinate Frank-Wolfe optimization, we prove a linear convergence rate, enabled by the structure of our objective. Experiments on real-world and synthetic datasets demonstrate that our method consistently outperforms state-of-the-art baselines in solution quality, while remaining competitive in computational efficiency.
|
Poster
|
An Efficient Orlicz-Sobolev Approach for Transporting Unbalanced Measures on a Graph
|
https://neurips.cc//virtual/2025/poster/117636
|
Tam Le, Truyen Nguyen, Hideitsu Hino, Kenji Fukumizu
|
We investigate optimal transport (OT) for measures on graph metric spaces with different total masses. To mitigate the limitations of traditional $L^p$ geometry, Orlicz-Wasserstein (OW) and generalized Sobolev transport (GST) employ \emph{Orlicz geometric structure}, leveraging convex functions to capture nuanced geometric relationships and remarkably contribute to advancing certain machine learning approaches. However, both OW and GST are restricted to measures with equal total mass, limiting their applicability to real-world scenarios where mass variation is common, and input measures may have noisy supports, or outliers. To address unbalanced measures, OW can either incorporate mass constraints or marginal discrepancy penalization, but this leads to a more complex two-level optimization problem. Additionally, GST provides a scalable yet rigid framework, which poses significant challenges for extending GST to accommodate nonnegative measures. To tackle these challenges, in this work we revisit the entropy partial transport (EPT) problem. By exploiting Caffarelli & McCann's insights, we develop a novel variant of EPT endowed with Orlicz geometric structure, called \emph{Orlicz-EPT}. We establish the theoretical background to solve Orlicz-EPT using a binary search algorithmic approach. In particular, by leveraging the dual EPT and the underlying graph structure, we formulate a novel regularization approach that leads to the proposed \emph{Orlicz-Sobolev transport} (OST). Notably, we demonstrate that OST can be efficiently computed by simply solving a univariate optimization problem, in stark contrast to the intensive computation needed for Orlicz-EPT. Building on this, we derive geometric structures for OST and draw its connections to other transport distances. We empirically illustrate that OST is several orders of magnitude faster than Orlicz-EPT. Furthermore, we show preliminary evidence on the advantages of OST for measures on a graph in document classification and topological data analysis.
|
Poster
|
An Ellipsoid Algorithm for Online Convex Optimization
|
https://neurips.cc//virtual/2025/poster/118562
|
Zakaria Mhammedi
|
We study the problem of Online Convex Optimization (OCO) over a convex set $\mathcal{K} \subset \mathbb{R}^d$, accessed via a separation oracle. While classical projection-based algorithms such as projected Online Gradient Descent (OGD) achieve the optimal $O(\sqrt{T})$ regret, they require computing Euclidean projections onto $\mathcal{K}$ whenever an iterate falls outside the feasible set. These projections can be computationally expensive, especially for complex or high-dimensional sets. Projection-free algorithms address this by replacing projections with alternative oracle-based procedures, such as separation or linear optimization oracles. However, the regret bounds of existing separation-based methods scale poorly with the set's \emph{asphericity} $\kappa$, defined as the ratio between the radii of the smallest enclosing ball and the largest inscribed ball in $\mathcal{K}$; for ill-conditioned sets, $\kappa$ can be arbitrarily large. We introduce a new separation-based algorithm for OCO that achieves a regret bound of $\tilde{O}(\sqrt{dT} + d^2)$, with only logarithmic dependence on $\kappa$. This removes a key limitation of prior work and eliminates the need for costly geometric pre-processing, such as transforming $\mathcal{K}$ into isotropic position. Our algorithm is based on a novel reduction to online optimization over a sequence of dynamically updated ellipsoids, inspired by the classical ellipsoid method for convex optimization. It requires only $\tilde{O}(1)$ separation oracle calls per round, on par with existing separation-based approaches. These advances make our method particularly well suited for online optimization over geometrically complex feasible sets.
|
Poster
|
AneuG-Flow: A Large-Scale Synthetic Dataset of Diverse Intracranial Aneurysm Geometries and Hemodynamics
|
https://neurips.cc//virtual/2025/poster/121403
|
Wenhao Ding, Yiying Sheng, Choon Yap, Hwa Leo, Simão de Castro
|
Hemodynamics has a substantial influence on normal cardiovascular growth and disease formation, but requires time-consuming simulations to obtain. Deep learning algorithms to rapidly predict hemodynamics parameters can be very useful, but their development is hindered by the lack of large datasets of anatomic geometries and associated fluid dynamics. This paper presents a new large-scale dataset of intracranial aneurysm (IA) geometries and hemodynamics to support the development of neural operators to solve geometry-dependent flow-governing partial differential equations. The dataset includes 14,000 steady-flow cases and 200 pulsatile-flow cases simulated with computational fluid dynamics. All cases are computed using a laminar flow setup with more than 3 million cells. Boundary conditions are defined as a parabolic velocity profile with a realistic waveform over time at the inlet, and geometry-dependent mass flow split ratios at the two downstream outlets. The geometries are generated by a deep generative model trained on a cohort of 109 real IAs located at the middle cerebral artery bifurcation, capturing a wide range of geometric variations in both aneurysm sacs and parent vessels. Simulation results show a substantial influence of geometry on fluid forces and flow patterns. In addition to surface mesh files, the dataset provides volume data of velocity, pressure, and wall shear stresses (WSS). For transient cases, spatial and temporal gradients of velocity and pressure are also included. The dataset is tested with PointNet and graph U-Nets for WSS prediction, which showed a relative L2 loss of 4.67% for the normalized WSS pattern.
|
Poster
|
An Evidence-Based Post-Hoc Adjustment Framework for Anomaly Detection Under Data Contamination
|
https://neurips.cc//virtual/2025/poster/118342
|
Sukanya Patra, Souhaib Ben Taieb
|
Unsupervised anomaly detection (AD) methods typically assume clean training data, yet real-world datasets often contain undetected or mislabeled anomalies, leading to significant performance degradation. Existing solutions require access to the training pipelines, data, or prior knowledge of the proportions of anomalies in the data, limiting their real-world applicability. To address this challenge, we propose EPHAD, a simple yet effective inference-time adaptation framework that updates the outputs of AD models trained on contaminated datasets using evidence gathered at inference. Our approach formulates test-time adaptation as a Bayesian inference problem, integrating the prior knowledge captured by the AD model trained on contaminated datasets with auxiliary evidence derived from foundation models like CLIP, classical methods like the Latent Outlier Factor or domain-specific knowledge. We illustrate the intuition behind EPHAD using a synthetic toy example and validate its effectiveness through comprehensive experiments across eight image-based AD datasets, twenty-seven tabular datasets, and a real-world industrial dataset. Additionally, we conduct an ablation study to analyse hyperparameter influence and robustness to varying contamination levels, demonstrating the versatility and robustness of EPHAD across diverse AD models and evidence pairs. To ensure reproducibility, our code is publicly available at https://anonymous.4open.science/r/EPAF-2025/.
|
Poster
|
An Exact Analysis of PCA
|
https://neurips.cc//virtual/2025/poster/118663
|
Ayoub El Hanchi, Murat Erdogdu, Chris Maddison
|
What property of the data distribution determines the excess risk of principal component analysis? In this paper, we provide a precise answer to this question. We establish a central limit theorem for the error of the principal subspace estimated by PCA, and derive the asymptotic distribution of its excess risk under the reconstruction loss. We obtain a non-asymptotic upper bound on the excess risk of PCA that recovers, in the large sample limit, our asymptotic characterization. Underlying our contributions is the following result: we prove that the negative block Rayleigh quotient, defined on the Grassmannian, is generalized self-concordant along geodesics emanating from its minimizer of maximum rotation less than $\pi/4$.
|
Poster
|
AngleRoCL: Angle-Robust Concept Learning for Physically View-Invariant Adversarial Patches
|
https://neurips.cc//virtual/2025/poster/117272
|
Wenjun Ji, Yuxiang Fu, Luyang Ying, Deng-Ping Fan, Yuyi Wang, Ming-Ming Cheng, Ivor Tsang, Qing Guo
|
Cutting-edge works have demonstrated that text-to-image (T2I) diffusion models can generate adversarial patches that mislead state-of-the-art object detectors in the physical world, revealing detectors' vulnerabilities and risks. However, these methods neglect the adversarial patches' attack effectiveness when observed from different views in the physical world (i.e., angle robustness of the adversarial patches). In this paper, for the first time, we study the angle robustness of generated patches comprehensively, revealing the angle-robust issues of existing works and demonstrating that input texts affect the angle robustness of generated patches significantly. Motivated by the studies, we introduce Angle-Robust Concept Learning (AngleRoCL), a novel approach that learns a generalizable concept (i.e., specialized text embeddings in implementation) representing the capability of generating angle-robust patches. The learned concept can be incorporated into text prompts and guides T2I models to generate patches with their attack effectiveness inherently resistant to viewpoint variations. Through extensive simulation and physical-world experiments across multiple observation views, we demonstrate that AngleRoCL significantly enhances the angle robustness of generated patches compared to baseline methods. Our patches maintain high attack success rates even under challenging viewing conditions, with an average improvement of xxx in attack effectiveness across multiple angles. This research advances the understanding of physically angle-robust patches and provides insights into the relationship between textual concepts and physical properties in T2I-generated content.
|
Poster
|
Angles Don’t Lie: Unlocking Training‑Efficient RL Through the Model’s Own Signals
|
https://neurips.cc//virtual/2025/poster/118660
|
Qinsi Wang, Jinghan Ke, Hancheng Ye, Yueqian Lin, Yuzhe Fu, Jianyi Zhang, Kurt Keutzer, Chenfeng Xu, Yiran Chen
|
Current Reinforcement Fine-tuning (RFT) paradigms for Large Language Models (LLMs) suffer from sample inefficiency due to the redundant exposure of identical queries under uniform data sampling. While previous work has explored curriculum learning via heuristic difficulty metrics, these strategies exhibit limitations by neglecting the intrinsic learning signals generated by the model itself, thus leading to suboptimal training regimes. In this paper, we identify a model-inherent signal termed *angle concentration* that effectively reflects an LLM's capacity to learn from specific data. We theoretically and empirically demonstrate a correlation between the angular distribution of token hidden state vectors and the resulting gradient, revealing a learning preference for data exhibiting higher angle concentration. Inspired by this finding, we propose GAIN-RL, a Gradient-driven Angle-Informed Navigated RL framework. By leveraging the model's intrinsic angle concentration signal, GAIN-RL dynamically selects training data in each epoch, ensuring consistently impactful gradient updates and thus significantly enhancing overall training efficiency. Empirical evaluations show that GAIN-RL (GRPO) achieves over a 2.5$\times$ acceleration in training efficiency across diverse mathematical and coding tasks and varying model scales. Furthermore, GAIN-RL (GRPO)'s efficient sampling yields data-efficient training, achieving better performance with half the original data compared to vanilla GRPO with full training data.
|
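One simple way to picture an "angle concentration" signal over token hidden states is the mean pairwise cosine similarity within an example. The sketch below is a hypothetical illustration; the paper's exact definition of the signal and its use inside GAIN-RL may differ.

```python
import torch

def angle_concentration(hidden_states: torch.Tensor) -> torch.Tensor:
    """Mean pairwise cosine similarity of token hidden states.

    hidden_states: (seq_len, dim) for one training example. Higher values mean
    the token directions are more concentrated in angle.
    """
    h = torch.nn.functional.normalize(hidden_states, dim=-1)
    cos = h @ h.t()                               # (seq_len, seq_len)
    seq_len = cos.shape[0]
    off_diag = cos.sum() - cos.diagonal().sum()   # drop self-similarities
    return off_diag / (seq_len * (seq_len - 1))

# Rank examples by the signal, e.g. to prioritize them during training
scores = [angle_concentration(torch.randn(32, 64)) for _ in range(4)]
order = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
print(order)
```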
Poster
|
Angular Constraint Embedding via SpherePair Loss for Constrained Clustering
|
https://neurips.cc//virtual/2025/poster/118615
|
Shaojie Zhang, Ke Chen
|
Constrained clustering integrates domain knowledge through pairwise constraints. However, existing deep constrained clustering (DCC) methods are either limited by anchors inherent in end-to-end modeling or struggle with learning discriminative Euclidean embedding, restricting their scalability and real-world applicability. To avoid their respective pitfalls, we propose a novel angular constraint embedding approach for DCC, termed SpherePair. Using the SpherePair loss with a geometric formulation, our method faithfully encodes pairwise constraints and leads to embeddings that are clustering-friendly in angular space, effectively separating representation learning from clustering. SpherePair preserves pairwise relations without conflicts, requires no exact cluster number for constraint embedding, generalizes to unseen data, and is supported by rigorous theoretical guarantees. Comparative evaluations with state-of-the-art DCC methods on diverse benchmarks, along with empirical validation of theoretical insights, confirm its superior performance, scalability, and overall real-world effectiveness.
|
Poster
|
Angular Steering: Behavior Control via Rotation in Activation Space
|
https://neurips.cc//virtual/2025/poster/117017
|
Hieu Vu, Tan Nguyen
|
Controlling specific behaviors in large language models while preserving their general capabilities is a central challenge for safe and reliable artificial intelligence (AI) deployment. Current steering methods, such as vector addition and directional ablation, are constrained within a two-dimensional subspace defined by the activation and feature direction, making them sensitive to chosen parameters and potentially affecting unrelated features due to unintended interactions in activation space. We introduce Angular Steering, a novel and flexible method for behavior modulation that operates by rotating activations within a fixed two-dimensional subspace. By formulating steering as a geometric rotation toward or away from a target behavior direction, Angular Steering provides continuous, fine-grained control over behaviors such as refusal and compliance. We demonstrate this method using refusal steering as a use case. Additionally, we propose Adaptive Angular Steering, a selective variant that rotates only activations aligned with the target feature, further enhancing stability and coherence. Angular Steering generalizes existing addition and orthogonalization techniques under a unified geometric rotation framework, simplifying parameter selection and maintaining model stability across a broader range of adjustments. Experiments across multiple model families and sizes show that Angular Steering achieves robust behavioral control without degrading general language modeling performance, underscoring its flexibility, generalization, and robustness compared to prior approaches.
|
Poster
|
AnimateQR: Bridging Aesthetics and Functionality in Dynamic QR Code Generation
|
https://neurips.cc//virtual/2025/poster/116622
|
Guangyang Wu, Huayu Zheng, Siqi Luo, Guangtao Zhai, Xiaohong Liu
|
Animated QR codes present an exciting frontier for dynamic content delivery and digital interaction. However, despite their potential, there has been no prior work focusing on the generation of animated QR codes that are both visually appealing and universally scannable. In this paper, we introduce AnimateQR, **the first generative framework** for creating **animated QR codes** that balance aesthetic flexibility with scannability. Unlike previous methods that focus on static QR codes, AnimateQR leverages **hierarchical luminance guidance** and **progressive spatiotemporal control** to produce high-quality dynamic QR codes. Our first innovation is a multi-scale hierarchical control signal that adjusts luminance across different spatial scales, ensuring that the QR code remains decodable while allowing for artistic expression. The second innovation is a progressive control mechanism that dynamically adjusts spatiotemporal guidance throughout the diffusion denoising steps, enabling fine-grained balance between visual quality and scannability. Extensive experimental results demonstrate that AnimateQR achieves state-of-the-art performance in both decoding success rates (96\% vs. 56\% baseline) and visual quality (user preference: 7.2 vs. 2.3 on a 10-point scale). Codes will be made public upon acceptance.
|
Poster
|
An Improved Algorithm for Adversarial Linear Contextual Bandits via Reduction
|
https://neurips.cc//virtual/2025/poster/118445
|
Tim van Erven, Jack Mayo, Julia Olkhovskaya, Chen-Yu Wei
|
We present an efficient algorithm for linear contextual bandits with adversarial losses and stochastic action sets. Our approach reduces this setting to misspecification-robust adversarial linear bandits with fixed action sets. Without knowledge of the context distribution or access to a context simulator, the algorithm achieves $\tilde O(d^2\sqrt{T})$ regret and runs in poly$(d,C,T)$ time, where $d$ is the feature dimension, $C$ is the number of linear constraints defining the action set in each round, and $T$ is the number of rounds. This resolves the open question of Liu et al. (2023) on whether one can obtain poly$(d)\sqrt{T}$ regret in polynomial time independent of the number of actions. For the important class of combinatorial bandits with adversarial losses and stochastic action sets, our algorithm is the first to achieve poly$(d)\sqrt{T}$ regret in polynomial time, while no prior algorithm achieves even $o(T)$ regret in polynomial time to our knowledge. When a simulator is available, the regret bound can be improved to $\tilde O(d\sqrt{L^\star})$, where $L^\star$ is the cumulative loss of the best policy.
|
Poster
|
An Information-theoretical Framework for Understanding Out-of-distribution Detection with Pretrained Vision-Language Models
|
https://neurips.cc//virtual/2025/poster/116939
|
Bo Peng, Jie Lu, Guangquan Zhang, Zhen Fang
|
Out-of-distribution (OOD) detection, recognized for its ability to identify samples of unknown classes, provides solid advantages in ensuring the reliability of machine learning models. Among existing OOD detection methods, pre-trained vision-language models have emerged as powerful post-hoc OOD detectors by leveraging textual and visual information. Despite this empirical success, a formal understanding of their effectiveness is still lacking. This paper bridges the gap by theoretically demonstrating that existing CLIP-based post-hoc methods effectively perform a stochastic estimation of the point-wise mutual information (PMI) between the input image and each in-distribution label. This estimation is then utilized to construct energy functions for modeling the in-distribution data. Different from prior methods that inherently consider PMI estimation as a whole task, we, motivated by the divide-and-conquer philosophy, decompose PMI estimation into multiple easier sub-tasks by applying the chain rule of PMI, which not only reduces the estimation complexity but also provably increases the estimation upper bound to reduce the underestimation bias. Extensive evaluations across mainstream benchmarks empirically manifest that our method establishes a new state-of-the-art in a variety of OOD detection setups.
|
Poster
|
An Investigation of Memorization Risk in Healthcare Foundation Models
|
https://neurips.cc//virtual/2025/poster/118370
|
Sana Tonekaboni, Lena Stempfle, Adibvafa Fallahpour, Walter Gerych, Marzyeh Ghassemi
|
Foundation models trained on large-scale de-identified electronic health records (EHRs) hold promise for clinical applications. However, their capacity to memorize patient information raises important privacy concerns. In this work, we introduce a suite of black-box evaluation tests to assess memorization risks in foundation models trained on structured EHR data. Our framework includes methods for probing memorization at both the embedding and generative levels, and distinguishes between generalization and harmful memorization in clinically relevant settings. We contextualize memorization in terms of its potential to compromise patient privacy, particularly for vulnerable subgroups. We validate our approach on a publicly available EHR foundation model and release an open-source toolkit to facilitate reproducible and collaborative privacy assessments in healthcare AI.
|
Poster
|
An Iterative Algorithm for Differentially Private $k$-PCA with Adaptive Noise
|
https://neurips.cc//virtual/2025/poster/117931
|
Johanna Düngler, Amartya Sanyal
|
Given $n$ i.i.d. random matrices $A_i \in \mathbb{R}^{d \times d}$ that share a common expectation $\Sigma$, the objective of Differentially Private Stochastic PCA is to identify a subspace of dimension $k$ that captures the largest variance directions of $\Sigma$, while preserving differential privacy (DP) of each individual $A_i$. Existing methods either (i) require the sample size $n$ to scale super-linearly with dimension $d$, even under Gaussian assumptions on the $A_i$, or (ii) introduce excessive noise for DP even when the intrinsic randomness within $A_i$ is small. Liu et al. (2022) addressed these issues for sub-Gaussian data but only for estimating the top eigenvector ($k=1$) using their algorithm DP-PCA. We propose the first algorithm capable of estimating the top $k$ eigenvectors for arbitrary $k \leq d$, whilst overcoming both limitations above. For $k=1$, our algorithm matches the utility guarantees of DP-PCA, achieving near-optimal statistical error even when $n = \tilde{O}(d)$. We further provide a lower bound for general $k > 1$, matching our upper bound up to a factor of $k$, and experimentally demonstrate the advantages of our algorithm over comparable baselines.
|
Poster
|
AnomalyCoT: A Multi-Scenario Chain-of-Thought Dataset for Multimodal Large Language Models
|
https://neurips.cc//virtual/2025/poster/121641
|
Jiaxi Cheng, Yuliang Xu, Shoupeng Wang, Tao Ma, Yuchen He, Jinghe Zhang, Sihang Cai, Jiawei Zhen, Jingyi Jia, Yao Wan, Yan Xia, Zhou Zhao
|
Industrial Anomaly Detection (IAD) is an indispensable quality control technology in modern production processes. Recently, owing to the outstanding visual comprehension and cross-domain knowledge transfer capabilities of multimodal large language models (MLLMs), existing studies have explored the application of MLLMs in the IAD domain and established some multimodal IAD datasets. However, although the latest datasets contain various fundamental IAD tasks, they formulate tasks in a general question-and-answer format lacking a rigorous reasoning process, and they are relatively limited in the diversity of scenarios, which restricts their reliability in practical applications. In this paper, we propose AnomalyCoT, a multimodal Chain-of-Thought (CoT) dataset for multi-scenario IAD tasks. It consists of 37,565 IAD samples with CoT data and is defined by challenging composite IAD tasks. Meanwhile, the CoT data for each sample provides precise coordinates of anomaly regions, thereby improving visual comprehension of defects across different types. AnomalyCoT is constructed through a systematic pipeline and involves multiple manual operations. Based on AnomalyCoT, we conducted a comprehensive evaluation of various mainstream MLLMs and fine-tuned representative models in different ways. The final results show that Gemini-2.0-flash achieves the best performance in the direct evaluation with an accuracy rate of 59.6\%, while Llama 3.2-Vision achieves the best performance after LoRA fine-tuning with an accuracy rate of 94.0\%. Among all the fine-tuned models, the average accuracy improvement reaches 36.5\%, demonstrating the potential of integrating CoT datasets in future applications within the IAD field. The code and data are available at \url{https://github.com/Zhaolutuan/AnomalyCoT}.
|
Poster
|
Anomaly Detection by an Ensemble of Pairs of Random Hyperspheres
|
https://neurips.cc//virtual/2025/poster/115418
|
Walid Durani, Collin Leiber, Khalid Durani, Claudia Plant, Christian Böhm
|
Anomaly detection is a crucial task in data mining, focusing on identifying data points that deviate significantly from the main patterns in the data. This paper introduces Anomaly Detection by an Ensemble of Random Pairs of Hyperspheres (ADERH), a new isolation-based technique leveraging two key observations: (i) anomalies are comparatively rare, and (ii) they typically deviate more strongly from general patterns than normal data points. Drawing on a delta-separation argument, ADERH constructs an ensemble of hyperspheres built upon randomly paired data points to identify anomalies. To address inevitable overlaps between anomalous and normal regions in the feature space, ADERH integrates two complementary concepts: Pitch, which highlights points near hypersphere boundaries, and NDensity, which down-weights hyperspheres centered on sparse (and often anomalous) regions. By averaging these local, density-adjusted ``isolation'' indicators across many random subsets, ADERH yields robust anomaly scores that clearly separate normal from abnormal samples. Extensive experiments on diverse real-world datasets show that ADERH consistently outperforms state-of-the-art methods while maintaining linear runtime scalability and stable performance across varying hyperparameter settings.
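Pitch and NDensity are defined precisely in the paper; the heavily simplified sketch below only conveys the ensemble-of-random-hyperspheres idea, scoring points by how often they fall outside spheres anchored on random pairs.

```python
import numpy as np

def random_pair_sphere_scores(X, n_pairs=200, seed=0):
    """Toy isolation-style score inspired by ADERH (omits Pitch and NDensity).

    Each ensemble member picks a random pair (a, b) and builds two hyperspheres
    centred at a and b with radius ||a - b|| / 2; points that rarely fall inside
    any sphere accumulate higher anomaly scores.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    votes = np.zeros(n)
    for _ in range(n_pairs):
        i, j = rng.choice(n, size=2, replace=False)
        radius = 0.5 * np.linalg.norm(X[i] - X[j])
        for center in (X[i], X[j]):
            dist = np.linalg.norm(X - center, axis=1)
            votes += (dist > radius).astype(float)
    return votes / (2 * n_pairs)   # in [0, 1]; higher means more anomalous
```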
|
Poster
|
An Optimized Franz-Parisi Criterion and its Equivalence with SQ Lower Bounds
|
https://neurips.cc//virtual/2025/poster/117789
|
Siyu Chen, Theodor Misiakiewicz, Ilias Zadik, Peiyuan Zhang
|
Bandeira et al. (2022) introduced the Franz-Parisi (FP) criterion for characterizing the computational hard phases in statistical detection problems. The FP criterion, based on an annealed version of the celebrated Franz-Parisi potential from statistical physics, was shown to be equivalent to low-degree polynomial (LDP) lower bounds for Gaussian additive models, thereby connecting two distinct approaches to understanding the computational hardness in statistical inference. In this paper, we propose a refined FP criterion that aims to better capture the geometric ``overlap'' structure of statistical models. Our main result establishes that this optimized FP criterion is equivalent to Statistical Query (SQ) lower bounds---another foundational framework in computational complexity of statistical inference. Crucially, this equivalence holds under a mild, verifiable assumption satisfied by a broad class of statistical models, including Gaussian additive models, planted sparse models, non-Gaussian component analysis, single-index models, and convex truncation detection. On top of the above, our equivalence not only unifies and simplifies the derivation of several known SQ lower bounds, but also yields new SQ lower bounds of independent interest.
|
Poster
|
A Novel General Framework for Sharp Lower Bounds in Succinct Stochastic Bandits
|
https://neurips.cc//virtual/2025/poster/115196
|
Guo Zeng, Jean Honorio
|
Many online learning applications adopt the stochastic bandit problem with a linear reward model, where the unknown bandit parameter exhibits a succinct structure. We study minimax regret lower bounds, which allow us to know whether more efficient algorithms can be proposed. We introduce a general definition of succinctness and propose a novel framework for constructing minimax regret lower bounds based on an information-regret trade-off. When applied to entry-sparse vectors, our framework sharpens a recent lower bound of Hao et al. (NeurIPS 2020). We further apply our framework to derive novel results. To the best of our knowledge, we provide the first lower bounds for the group-sparse and low-rank matrix settings.
|
Poster
|
Anti-Aliased 2D Gaussian Splatting
|
https://neurips.cc//virtual/2025/poster/119938
|
Mohamed Younes, Adnane Boukhayma
|
2D Gaussian Splatting (2DGS) has recently emerged as a promising method for novel view synthesis and surface reconstruction, offering better view-consistency and geometric accuracy than volumetric 3DGS. However, 2DGS suffers from severe aliasing artifacts when rendering at different sampling rates than those used during training, limiting its practical applications in scenarios requiring camera zoom or varying fields of view. We identify that these artifacts stem from two key limitations: the lack of frequency constraints in the representation and an ineffective screen-space clamping approach. To address these issues, we present AA-2DGS, an antialiased formulation of 2D Gaussian Splatting that maintains its geometric benefits while significantly enhancing rendering quality across different scales. Our method introduces a world space flat smoothing kernel that constrains the frequency content of 2D Gaussian primitives based on the maximal sampling frequency from training views, effectively eliminating high-frequency artifacts when zooming in. Additionally, we derive a novel object space Mip filter by leveraging an affine approximation of the ray-splat intersection mapping, which allows us to efficiently apply proper anti-aliasing directly in the local space of each splat.
|
Poster
|
Antidistillation Sampling
|
https://neurips.cc//virtual/2025/poster/117654
|
Yash Savani, Asher Trockman, Zhili Feng, Yixuan Xu, Avi Schwarzschild, Alexander Robey, Marc Finzi, J. Zico Kolter
|
Frontier models that generate extended reasoning traces inadvertently produce token sequences that can facilitate model distillation. Recognizing this vulnerability, model owners may seek sampling strategies that limit the effectiveness of distillation without compromising model performance. *Antidistillation sampling* provides exactly this capability. By strategically modifying a model's next-token probability distribution, antidistillation sampling poisons reasoning traces, rendering them significantly less effective for distillation while preserving the model's utility.
|
Poster
|
Any Large Language Model Can Be a Reliable Judge: Debiasing with a Reasoning-based Bias Detector
|
https://neurips.cc//virtual/2025/poster/115702
|
Haoyan Yang, Runxue Bao, Cao (Danica) Xiao, Jun Ma, Parminder Bhatia, Shangqian Gao, Taha Kass-Hout
|
LLM-as-a-Judge has emerged as a promising tool for automatically evaluating generated outputs, but its reliability is often undermined by potential biases in judgment. Existing efforts to mitigate these biases face key limitations: in-context learning-based methods fail to address rooted biases due to the evaluator’s limited capacity for self-reflection, whereas fine-tuning is not applicable to all evaluator types, especially closed-source models. To address this challenge, we introduce the **R**easoning-based **B**ias **D**etector (RBD), which is a plug-in module that identifies biased evaluations and generates structured reasoning to guide evaluator self-correction. Rather than modifying the evaluator itself, RBD operates externally and engages in an iterative process of bias detection and feedback-driven revision. To support its development, we design a complete pipeline consisting of biased dataset construction, supervision collection, distilled reasoning-based fine-tuning of RBD, and integration with LLM evaluators. We fine-tune four sizes of RBD models, ranging from 1.5B to 14B, and observe consistent performance improvements across all scales. Experimental results on 4 bias types—verbosity, position, bandwagon, and sentiment—evaluated using 8 LLM evaluators demonstrate RBD’s strong effectiveness. For example, the RBD-8B model improves evaluation accuracy by an average of 18.5% and consistency by 10.9%, and surpasses prompting-based baselines and fine-tuned judges by 12.8% and 17.2%, respectively. These results highlight RBD’s effectiveness and scalability. Additional experiments further demonstrate its strong generalization across biases and domains, as well as its efficiency.
|
Poster
|
Any-stepsize Gradient Descent for Separable Data under Fenchel–Young Losses
|
https://neurips.cc//virtual/2025/poster/119241
|
Han Bao, Shinsaku Sakaue, Yuki Takezawa
|
Gradient descent (GD) is one of the most common optimizers in machine learning. In particular, the loss landscape of a neural network is typically sharpened during the initial phase of training, making the training dynamics hover on the edge of stability. This is beyond our standard understanding of GD convergence in the stable regime, where the stepsize is chosen sufficiently smaller than the edge of stability. Recently, Wu et al. (COLT 2024) showed that GD converges with arbitrary stepsize under linearly separable logistic regression. Although their analysis hinges on the self-bounding property of the logistic loss, which seems to be a cornerstone for establishing a modified descent lemma, our pilot study shows that other loss functions without the self-bounding property can make GD converge with arbitrary stepsize. To further understand what property of a loss function matters in GD, we aim to show arbitrary-stepsize GD convergence for a general loss function based on the framework of \emph{Fenchel--Young losses}. We essentially leverage the classical perceptron argument to derive the convergence rate for achieving $\epsilon$-optimal loss, which is possible for a majority of Fenchel--Young losses. Among typical loss functions, the Tsallis entropy achieves the GD convergence rate $T=\Omega(\epsilon^{-1/2})$, and the R{\'e}nyi entropy achieves a far better rate $T=\Omega(\epsilon^{-1/3})$. We argue that these better rates are possible because of the \emph{separation margin} of loss functions, instead of the self-bounding property.
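A toy numerical check of the phenomenon generalized here (using the logistic loss, i.e. the Wu et al. setting rather than a general Fenchel--Young loss, with a hand-picked stepsize) might look as follows:

```python
import numpy as np
from scipy.special import expit

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = np.sign(X @ np.array([1.0, -2.0]))          # linearly separable labels
w = np.zeros(2)
eta = 10.0                                      # well above the classical stable regime
for _ in range(5000):
    margins = y * (X @ w)
    grad = -(X * (y * expit(-margins))[:, None]).mean(axis=0)
    w -= eta * grad
print(np.logaddexp(0.0, -y * (X @ w)).mean())   # logistic loss; decays toward 0
```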
|
Poster
|
Anytime-valid, Bayes-assisted, Prediction-Powered Inference
|
https://neurips.cc//virtual/2025/poster/118778
|
Valentin Kilian, Stefano Cortinovis, Francois Caron
|
Given a large pool of unlabelled data and a smaller amount of labels, prediction-powered inference (PPI) leverages machine learning predictions to increase the statistical efficiency of standard confidence interval procedures based solely on labelled data, while preserving their fixed-time validity. In this paper, we extend the PPI framework to the sequential setting, where labelled and unlabelled datasets grow over time. Exploiting Ville's inequality and the method of mixtures, we propose prediction-powered confidence sequence procedures that are valid uniformly over time and naturally accommodate prior knowledge on the quality of the predictions to further boost efficiency. We carefully illustrate the design choices behind our method and demonstrate its effectiveness in real and synthetic examples.
|
Poster
|
AOR: Anatomical Ontology-Guided Reasoning for Medical Large Multimodal Model in Chest X-Ray Interpretation
|
https://neurips.cc//virtual/2025/poster/118045
|
Qingqiu Li, Zihang Cui, Seongsu Bae, Jilan Xu, Runtian Yuan, Yuejie Zhang, Rui Feng, Quanli Shen, Xiaobo Zhang, Shang Gao, Junjun He, Shujun Wang
|
Chest X-rays (CXRs) are the most frequently performed imaging examinations in clinical settings. Recent advancements in Medical Large Multimodal Models (MLMMs) have enabled automated CXR interpretation, improving diagnostic accuracy and efficiency. However, despite their strong visual understanding, current MLMMs still face two major challenges: (1) Insufficient region-level understanding and interaction, and (2) Limited accuracy and interpretability due to single-step prediction. In this paper, we address these challenges by empowering MLMMs with anatomy-centric reasoning capabilities to enhance their interactivity and explainability. Specifically, we propose an Anatomical Ontology-Guided Reasoning (AOR) framework that accommodates both textual and optional visual prompts, centered on region-level information to enable multimodal multi-step reasoning. We also develop AOR-Instruction, a large instruction dataset for MLMMs training, under the guidance of expert physicians. Our experiments demonstrate AOR's superior performance in both Visual Question Answering (VQA) and report generation tasks. Code and data are available at: https://anonymous.4open.science/r/AOR-48C7/.
|
Poster
|
A Partition Cover Approach to Tokenization
|
https://neurips.cc//virtual/2025/poster/115918
|
Jia Peng Lim, Shawn Tan, XianJun, Davin Choo, Hady Lauw
|
Tokenization is the process of encoding strings into tokens of a fixed vocabulary size, and is widely utilized in Natural Language Processing applications. The leading tokenization algorithm today is Byte Pair Encoding (BPE), which formulates the tokenization problem as a compression problem and tackles it by performing sequences of merges. In this work, we formulate tokenization as an optimization objective, show that it is NP-hard via a simple reduction from vertex cover, and propose a polynomial-time greedy algorithm GreedTok. Our formulation naturally relaxes to the well-studied weighted maximum coverage problem, which has a simple $(1 - 1/e)$-approximation algorithm GreedWMC. Through empirical evaluations on real-world corpora, we show that GreedTok outperforms BPE and Unigram on compression and achieves a covering score comparable to GreedWMC. Finally, our extensive pre-training of two transformer-based language models with 1 billion parameters, comparing the choices of BPE and GreedTok as the tokenizer, shows that GreedTok achieves lower bits per byte even when we control for either the total dataset proportion or total training tokens.
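The precise objective and greedy algorithm are given in the paper; the sketch below is a loose, hypothetical illustration of greedy token selection by a coverage-style gain, and it does not recompute marginal gains against already chosen tokens as the real GreedTok must.

```python
from collections import Counter

def greedy_vocab(word_counts, vocab_size, max_token_len=8):
    """Pick candidate substrings by a static 'symbols saved' score.

    word_counts: dict mapping word -> corpus frequency.
    Score = occurrences * (len - 1), a crude proxy for coverage gain.
    """
    gains = Counter()
    for word, freq in word_counts.items():
        for i in range(len(word)):
            for j in range(i + 2, min(len(word), i + max_token_len) + 1):
                gains[word[i:j]] += freq
    ranked = sorted(gains.items(), key=lambda kv: kv[1] * (len(kv[0]) - 1), reverse=True)
    return [tok for tok, _ in ranked[:vocab_size]]

print(greedy_vocab({"lowering": 3, "lower": 5, "newest": 2}, vocab_size=5))
```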
|
Poster
|
A Physics-preserved Transfer Learning Method for Differential Equations
|
https://neurips.cc//virtual/2025/poster/120058
|
Haoran Yang, Chuan-Xian Ren
|
While data-driven methods such as neural operators have achieved great success in solving differential equations (DEs), they suffer from domain shift problems caused by different learning environments (with data bias or equation changes), which can be alleviated by transfer learning (TL). However, existing TL methods adopted in DE problems lack either generalizability to general DE problems or physics preservation during training. In this work, we focus on a general transfer learning method that adaptively corrects the domain shift and preserves the physical relations within the equation. Mathematically, we characterize the data domain as a product distribution and the essential problems as distribution bias and operator bias. A Physics-preserved Optimal Tensor Transport (POTT) method that simultaneously admits generalizability to common DEs and physics preservation for specific problems is proposed to adapt the data-driven model to the target domain, utilizing the pushforward distribution induced by the POTT map. Extensive experiments on simulated and real-world datasets demonstrate the superior performance, generalizability and physics preservation of the proposed POTT method.
|
Poster
|
APIGen-MT: Agentic Pipeline for Multi-Turn Data Generation via Simulated Agent-Human Interplay
|
https://neurips.cc//virtual/2025/poster/121441
|
Akshara Prabhakar, Zuxin Liu, Ming Zhu, Jianguo Zhang, Tulika Manoj Awalgaonkar, Shiyu Wang, Zhiwei Liu, Haolin Chen, Thai Hoang, Juan Carlos Niebles, Shelby Heinecke, Weiran Yao, Huan Wang, Silvio Savarese, Caiming Xiong
|
Training effective AI agents for multi-turn interactions requires high-quality data that captures realistic human-agent dynamics, yet such data is scarce and expensive to collect manually. We introduce APIGen-MT, a two-phase framework that generates verifiable and diverse multi-turn agent data. In the first phase, our agentic pipeline produces detailed task blueprints with ground-truth actions, leveraging a committee of LLM reviewers and iterative feedback loops. These blueprints are then transformed into complete interaction trajectories through simulated human-agent interplay. We train a family of models---the xLAM-2-fc-r series with sizes ranging from 1B to 70B parameters. Our models outperform frontier models such as GPT-4o and Claude 3.5 on $\tau$-bench and BFCL benchmarks, with the smaller models surpassing their larger counterparts, particularly in multi-turn settings, while maintaining superior consistency across multiple trials. Comprehensive experiments demonstrate that our verified blueprint-to-details approach yields high-quality training data, enabling the development of more reliable, efficient, and capable agents. We open-source both the synthetic data collected and the trained xLAM-2-fc-r models to advance research in AI agents.Dataset: https://huggingface.co/datasets/Salesforce/APIGen-MT-5k & Models: https://huggingface.co/collections/Salesforce/xlam-2-67ef5be12949d8dcdae354c4
|
Poster
|
A Plug-and-Play Query Synthesis Active Learning Framework for Neural PDE Solvers
|
https://neurips.cc//virtual/2025/poster/115451
|
Zhiyuan Wang, Jinwoo Go, Byung-Jun Yoon, Nathan Urban, Xiaoning Qian
|
In recent developments in scientific machine learning (SciML), neural surrogate solvers for partial differential equations (PDEs) have become powerful tools for accelerating scientific computation for various science and engineering applications. However, training neural PDE solvers often demands a large amount of high-fidelity PDE simulation data, which are expensive to generate. Active learning (AL) offers a promising solution by adaptively selecting training data from the PDE settings--including parameters, initial and boundary conditions--that are expected to be most informative to help reduce this data burden. In this work, we introduce PaPQS, a Plug-and-Play Query Synthesis AL framework that synthesizes informative PDE settings directly in the continuous design space. PaPQS optimizes the Expected Information Gain (EIG) while encouraging batch diversity, enabling model-aware exploration of the design space via backpropagation through the neural PDE solution trajectories. The framework is applicable to general PDE systems and surrogate architectures, and can be seamlessly integrated with existing AL strategies. Extensive experiments across different PDE systems demonstrate that our AL framework, PaPQS, consistently improves sample efficiency over existing AL baselines.
|
Poster
|
APML: Adaptive Probabilistic Matching Loss for Robust 3D Point Cloud Reconstruction
|
https://neurips.cc//virtual/2025/poster/118183
|
Sasan Sharifipour, Constantino Casado, Mohammad Sabokrou, Miguel Bordallo Lopez
|
Training deep learning models for point cloud prediction tasks such as shape completion and generation depends critically on loss functions that measure discrepancies between predicted and ground-truth point sets. Commonly used functions such as Chamfer Distance (CD), HyperCD, and InfoCD rely on nearest-neighbor assignments, which often induce many-to-one correspondences, leading to point congestion in dense regions and poor coverage in sparse regions. These losses also involve non-differentiable operations due to index selection, which may affect gradient-based optimization. Earth Mover Distance (EMD) enforces one-to-one correspondences and captures structural similarity more effectively, but its cubic computational complexity limits its practical use. We propose the Adaptive Probabilistic Matching Loss (APML), a fully differentiable approximation of one-to-one matching that leverages Sinkhorn iterations on a temperature-scaled similarity matrix derived from pairwise distances. We analytically compute the temperature to guarantee a minimum assignment probability, eliminating manual tuning. APML achieves near-quadratic runtime, comparable to Chamfer-based losses, and avoids non-differentiable operations. When integrated into state-of-the-art architectures (PoinTr, PCNNet) on ShapeNet benchmarks and on a spatio-temporal Transformer (CSI2PC) that \textit{generates} 3-D human point clouds from WiFi-CSI measurements, APML yields faster convergence, superior spatial distribution, especially in low-density regions, and improved or on-par quantitative performance without additional hyperparameter search. The code is available at: https://github.com/apm-loss/apml.
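As a minimal sketch of a Sinkhorn-based soft matching loss (the paper derives the temperature analytically from a minimum assignment probability, whereas `eps` is a hand-set assumption here):

```python
import torch

def soft_matching_loss(pred, gt, eps=0.05, n_iters=30):
    """Differentiable soft one-to-one matching loss between two point sets.

    pred: (N, 3) predicted points, gt: (M, 3) ground-truth points.
    """
    cost = torch.cdist(pred, gt)                       # pairwise Euclidean distances
    K = torch.exp(-cost / eps)                         # temperature-scaled similarity
    a = torch.full((pred.shape[0],), 1.0 / pred.shape[0], device=pred.device)
    b = torch.full((gt.shape[0],), 1.0 / gt.shape[0], device=gt.device)
    u, v = torch.ones_like(a), torch.ones_like(b)
    for _ in range(n_iters):                           # Sinkhorn normalization
        u = a / (K @ v + 1e-9)
        v = b / (K.T @ u + 1e-9)
    plan = u[:, None] * K * v[None, :]                 # soft assignment matrix
    return (plan * cost).sum()
```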
|
Poster
|
APOLLO: Automated LLM and Lean Collaboration for Advanced Formal Reasoning
|
https://neurips.cc//virtual/2025/poster/116789
|
Azim Ospanov, Farzan Farnia, Roozbeh Yousefzadeh
|
Formal reasoning and automated theorem proving constitute a challenging subfield of machine learning, in which machines are tasked with proving mathematical theorems using formal languages like Lean. A formal verification system can check whether a formal proof is correct or not almost instantaneously, but generating a completely correct formal proof with large language models (LLMs) remains a formidable task. The usual approach in the literature is to prompt the LLM many times (up to several thousands) until one of the generated proofs passes the verification system. In this work, we present APOLLO (**A**utomated **P**r**O**of repair via **L**LM and **L**ean c**O**llaboration), a modular, model‑agnostic pipeline that combines the strengths of the Lean compiler with an LLM’s reasoning abilities to achieve better proof‐generation results at a low sampling budget. _Apollo_ directs a fully automated process in which the LLM generates proofs for theorems, a set of agents analyze the proofs, fix the syntax errors, identify the mistakes in the proofs using Lean, isolate failing sub‑lemmas, utilize automated solvers, and invoke an LLM on each remaining goal with a low top‑$K$ budget. The repaired sub‑proofs are recombined and reverified, iterating up to a user‑controlled maximum number of attempts. On the miniF2F benchmark, we establish a new state‑of‑the‑art accuracy of 75.0\% among 7B‑parameter models while keeping the sampling budget below one thousand. Moreover, _Apollo_ raises the state‑of‑the‑art accuracy for Goedel‑Prover‑SFT to 65.6\% while cutting sample complexity from 25,600 to a few hundred. General‑purpose models (o3‑mini, o4‑mini) jump from 3–7\% to over 40\% accuracy. Our results demonstrate that targeted, compiler‑guided repair of LLM outputs yields dramatic gains in both efficiency and correctness, suggesting a general paradigm for scalable automated theorem proving.
|
Poster
|
Approximate Domain Unlearning for Vision-Language Models
|
https://neurips.cc//virtual/2025/poster/116248
|
Kodai Kawamura, Yuta Goto, Rintaro Yanagi, Hirokatsu Kataoka, Go Irie
|
Pre-trained Vision-Language Models (VLMs) exhibit strong generalization capabilities, enabling them to recognize a wide range of objects across diverse domains without additional training. However, they often retain irrelevant information beyond the requirements of specific target downstream tasks, raising concerns about computational efficiency and potential information leakage. This has motivated growing interest in approximate unlearning, which aims to selectively remove unnecessary knowledge while preserving overall model performance. Existing approaches to approximate unlearning have primarily focused on {\em class unlearning}, where a VLM is retrained to fail to recognize specified object classes while maintaining accuracy for others. However, merely forgetting object classes is often insufficient in practical applications. For instance, an autonomous driving system should accurately recognize {\em real} cars, while avoiding misrecognition of {\em illustrated} cars depicted in roadside advertisements as {\em real} cars, which could be hazardous. In this paper, we introduce {\em Approximate Domain Unlearning (ADU)}, a novel problem setting that requires reducing recognition accuracy for images from specified domains (e.g., {\em illustration}) while preserving accuracy for other domains (e.g., {\em real}). ADU presents new technical challenges: due to the strong domain generalization capability of pre-trained VLMs, domain distributions are highly entangled in the feature space, making naive approaches based on penalizing target domains ineffective. To tackle this limitation, we propose a novel approach that explicitly disentangles domain distributions and adaptively captures instance-specific domain information. Extensive experiments on three multi-domain benchmark datasets demonstrate that our approach significantly outperforms strong baselines built upon state-of-the-art VLM tuning techniques, paving the way for practical and fine-grained unlearning in VLMs. Codes will be published upon acceptance.
|
Poster
|
Approximate Gradient Coding for Distributed Learning with Heterogeneous Stragglers
|
https://neurips.cc//virtual/2025/poster/118102
|
Heekang Song, Wan Choi
|
In this paper, we propose an optimally structured gradient coding scheme to mitigate the straggler problem in distributed learning. Conventional gradient coding methods often assume homogeneous straggler models or rely on excessive data replication, limiting performance in real-world heterogeneous systems. To address these limitations, we formulate an optimization problem minimizing residual error while ensuring unbiased gradient estimation by explicitly considering individual straggler probabilities. We derive closed-form solutions for optimal encoding and decoding coefficients via Lagrangian duality and convex optimization, and propose data allocation strategies that reduce both redundancy and computational load. We also analyze convergence behavior for $\lambda$-strongly convex and $\mu$-smooth loss functions. Numerical results show that our approach significantly reduces the impact of stragglers and accelerates convergence compared to existing methods.
|
Poster
|
Approximately Aligned Decoding
|
https://neurips.cc//virtual/2025/poster/120288
|
Daniel Melcer, Sujan Kumar Gonugondla, Pramuditha Perera, Haifeng Qian, Wen-Hao Chiang, Yanjun Wang, Nihal Jain, Pranav Garg, Xiaofei Ma, Anoop Deoras
|
It is common to reject undesired outputs of Large Language Models (LLMs); however, current methods to do so require an excessive amount of computation to re-sample after a rejection, or distort the distribution of outputs by constraining the output to highly improbable tokens. We present a method, Approximately Aligned Decoding (AprAD), to balance the distortion of the output distribution with computational efficiency, inspired by algorithms from the speculative decoding literature. AprAD allows for the generation of long sequences of text with difficult-to-satisfy constraints, while amplifying low probability outputs much less compared to existing methods. We show through a series of experiments that the task-specific performance of AprAD is comparable to methods that do not distort the output distribution, while being much more computationally efficient.
|
Poster
|
Approximating Shapley Explanations in Reinforcement Learning
|
https://neurips.cc//virtual/2025/poster/116290
|
Daniel Beechey, Özgür Şimşek
|
Reinforcement learning has achieved remarkable success in complex decision-making environments, yet its lack of transparency limits deployment in critical settings. Shapley values provide a principled framework for explaining reinforcement learning, but their exponential computational cost makes them impractical for real-world problems. We address this challenge by introducing FastSVERL, a scalable method for approximating Shapley values designed to handle the unique challenges of reinforcement learning, including temporal dependencies across multi-step trajectories, learning from off-policy data, and adapting to evolving agent behaviours. These contributions position FastSVERL as a practical solution for real-time, Shapley-based interpretability in reinforcement learning.
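FastSVERL learns to predict these quantities cheaply; for reference, the exact target it approximates can be estimated with plain (and expensive) permutation sampling, sketched below with a hypothetical `value_fn` and a `baseline` defining what "removing" a feature means:

```python
import numpy as np

def mc_shapley(value_fn, state, baseline, n_samples=256, seed=0):
    """Monte Carlo Shapley attribution of state features under a value function."""
    rng = np.random.default_rng(seed)
    d = len(state)
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        masked = np.array(baseline, dtype=float)
        prev = value_fn(masked)
        for i in perm:
            masked[i] = state[i]          # reveal feature i
            cur = value_fn(masked)
            phi[i] += cur - prev          # marginal contribution of feature i
            prev = cur
    return phi / n_samples
```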
|
Poster
|
Approximation and Generalization Abilities of Score-based Neural Network Generative Models for Sub-Gaussian Distributions
|
https://neurips.cc//virtual/2025/poster/115885
|
Guoji Fu, Wee Sun Lee
|
This paper studies the approximation and generalization abilities of score-based neural network generative models (SGMs) in estimating an unknown distribution $P_0$ from $n$ i.i.d. observations in $d$ dimensions. Assuming merely that $P_0$ is $\alpha$-sub-Gaussian, we prove that for any time step $t \in [t_0, n^{\mathcal{O}(1)}]$, where $t_0 \geq \mathcal{O}(\alpha^2n^{-2/d}\log n)$, there exists a deep ReLU neural network with width $\leq \mathcal{O}(\log^3n)$ and depth $\leq \mathcal{O}(n^{3/d}\log_2n)$ that can approximate the scores with $\tilde{\mathcal{O}}(n^{-1})$ mean square error and achieve a nearly optimal rate of $\tilde{\mathcal{O}}(n^{-1}t_0^{-d/2})$ for score estimation, as measured by the score matching loss. Our framework is universal and can be used to establish convergence rates for SGMs under milder assumptions than previous work. For example, assuming further that the target density function $p_0$ lies in Sobolev or Besov classes, with an appropriately early stopping strategy, we demonstrate that neural network-based SGMs can attain nearly minimax convergence rates up to logarithmic factors. Our analysis removes several crucial assumptions, such as Lipschitz continuity of the score function or a strictly positive lower bound on the target density.
|
Poster
|
Approximation theory for 1-Lipschitz ResNets
|
https://neurips.cc//virtual/2025/poster/115741
|
Davide Murari, Takashi Furuya, Carola-Bibiane Schönlieb
|
$1$-Lipschitz neural networks are fundamental for generative modelling, inverse problems, and robust classifiers. In this paper, we focus on $1$-Lipschitz residual networks (ResNets) based on explicit Euler steps of negative gradient flows and study their approximation capabilities. Leveraging the Restricted Stone–Weierstrass Theorem, we first show that these $1$-Lipschitz ResNets are dense in the set of scalar $1$-Lipschitz functions on any compact domain when width and depth are allowed to grow. We also show that these networks can exactly represent scalar piecewise affine $1$-Lipschitz functions. We then prove a stronger statement: by inserting norm-constrained linear maps between the residual blocks, the same density holds when the hidden width is fixed. Because every layer obeys simple norm constraints, the resulting models can be trained with off-the-shelf optimisers. This paper provides the first universal approximation guarantees for $1$-Lipschitz ResNets, laying a rigorous foundation for their practical use.
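A minimal sketch of the residual block family analyzed here, an explicit Euler step of a negative gradient flow whose step size is scaled so the block stays 1-Lipschitz (the exact norm constraints and the inter-block linear maps from the paper are omitted):

```python
import torch
import torch.nn as nn

class GradientFlowBlock(nn.Module):
    """x -> x - (h / ||W||_2^2) * W^T relu(W x + b), 1-Lipschitz for h <= 2."""

    def __init__(self, dim, h=1.0):
        super().__init__()
        self.W = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
        self.b = nn.Parameter(torch.zeros(dim))
        self.h = h

    def forward(self, x):
        spec2 = torch.linalg.matrix_norm(self.W, ord=2).pow(2).clamp_min(1e-6)
        return x - (self.h / spec2) * torch.relu(x @ self.W.T + self.b) @ self.W
```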
|
Poster
|
A Practical Guide for Incorporating Symmetry in Diffusion Policy
|
https://neurips.cc//virtual/2025/poster/116965
|
Dian Wang, Boce Hu, Shuran Song, Robin Walters, Robert Platt
|
Recently, equivariant neural networks for policy learning have shown promising improvements in sample efficiency and generalization, however, their wide adoption faces substantial barriers due to implementation complexity. Equivariant architectures typically require specialized mathematical formulations and custom network design, posing significant challenges when integrating with modern policy frameworks like diffusion-based models. In this paper, we explore a number of straightforward and practical approaches to incorporate symmetry benefits into diffusion policies without the overhead of full equivariant designs. Specifically, we investigate (i) invariant representations via relative trajectory actions and eye-in-hand perception, (ii) integrating equivariant vision encoders, and (iii) symmetric feature extraction with pretrained encoders using Frame Averaging. We first prove that combining eye-in-hand perception with relative or delta action parameterization yields inherent SE(3)-invariance, thus improving policy generalization. We then perform a systematic experimental study on those design choices for integrating symmetry in diffusion policies, and conclude that an invariant representation with equivariant feature extraction significantly improves the policy performance. Our method achieves performance on par with or exceeding fully equivariant architectures while greatly simplifying implementation.
|
Poster
|
A Pre-training Framework for Relational Data with Information-theoretic Principles
|
https://neurips.cc//virtual/2025/poster/115246
|
Quang Truong, Zhikai Chen, Mingxuan Ju, Tong Zhao, Neil Shah, Jiliang Tang
|
Relational databases underpin critical infrastructure across a wide range of domains, yet the design of generalizable pre-training strategies for learning from relational databases remains an open challenge due to task heterogeneity. Specifically, there exist infinitely many possible downstream tasks, as tasks are defined based on relational schema graphs, temporal dependencies, and SQL-defined label logics. An effective pre-training framework is desired to take these factors into account in order to obtain task-aware representations. By incorporating knowledge of the underlying distribution that drives label generation, downstream tasks can benefit from relevant side-channel information. To bridge this gap, we introduce Task Vector Estimation (TVE), a novel pre-training framework that constructs predictive supervisory signals via set-based aggregation over schema traversal graphs, explicitly modeling next-window relational dynamics. We formalize our approach through an information-theoretic lens, demonstrating that task-informed representations retain more relevant signals than those obtained without task priors. Extensive experiments on the RelBench benchmark show that TVE consistently outperforms traditional pre-training baselines. Our findings advocate for pre-training objectives that encode task heterogeneity and temporal structure as design principles for predictive modeling on relational databases.
|
Poster
|
A Principled Approach to Randomized Selection under Uncertainty
|
https://neurips.cc//virtual/2025/poster/120283
|
Alexander Goldberg, Giulia Fanti, Nihar Shah
|
Many decision-making processes involve evaluating and then selecting items; examples include scientific peer review, job hiring, school admissions, and financial investment. Selection typically involves applying rules to evaluations and then deterministically choosing the best candidates. These domains often feature error-prone evaluations and uncertainty about future outcomes, which undermine the reliability of deterministic selection. As a result, selection mechanisms that incorporate uncertainty by involving explicit randomization are beginning to gain traction. However, current randomization methods are ad hoc. In this paper, we propose a principled framework for randomized decision-making based on interval estimates of the quality of each item. We introduce MERIT (Maximin Efficient Randomized Interval Top-k), an optimization-based method that maximizes the worst-case expected number of top candidates selected under "Knightian" uncertainty represented by overlapping intervals. We develop a polynomial-time algorithm to solve the optimization problem and demonstrate empirically that the method scales to over 10,000 items. Further, we prove that our approach can satisfy desirable axiomatic properties not guaranteed by existing approaches to randomization.
|
Poster
|
A Principled Path to Fitted Distributional Evaluation
|
https://neurips.cc//virtual/2025/poster/118928
|
Sungee Hong, Jiayi Wang, Zhengling Qi, Raymond K. W. Wong
|
In reinforcement learning, distributional off-policy evaluation (OPE) focuses on estimating the return distribution of a target policy using offline data collected under a different policy. This work focuses on extending the widely used fitted-Q evaluation---developed for expectation-based reinforcement learning---to the distributional OPE setting. We refer to this extension as fitted distributional evaluation (FDE). While only a few related approaches exist, there remains no unified framework for designing FDE methods. To fill this gap, we present a set of guiding principles for constructing theoretically sound FDE methods. Building on these principles, we develop several new FDE methods with convergence analysis and provide theoretical justification for existing methods, even in non-tabular environments with infinitely large state-action spaces. Extensive experiments, including simulations on linear quadratic regulators and Atari games, demonstrate the superior performance of the FDE methods.
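The FDE framework covers many distributional parameterizations; one concrete, hedged instance in its spirit is a quantile-regression fitted update, where `quantile_net`, `target_net`, and `target_policy` below are hypothetical placeholders:

```python
import torch

def fde_quantile_step(quantile_net, target_net, target_policy, batch, gamma=0.99, n_q=32):
    """One fitted iteration: regress return quantiles of (s, a) onto
    r + gamma * quantiles of (s', pi(s')) with the quantile-Huber loss."""
    s, a, r, s_next, done = batch
    taus = (torch.arange(n_q, dtype=torch.float32) + 0.5) / n_q
    with torch.no_grad():
        target = r[:, None] + gamma * (1 - done[:, None]) * target_net(s_next, target_policy(s_next))
    pred = quantile_net(s, a)                                  # (B, n_q)
    diff = target[:, None, :] - pred[:, :, None]               # (B, n_q, n_q)
    huber = torch.where(diff.abs() <= 1.0, 0.5 * diff ** 2, diff.abs() - 0.5)
    weight = torch.abs(taus[None, :, None] - (diff.detach() < 0).float())
    return (weight * huber).mean()
```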
|
Poster
|
A Principle of Pre-Strategy Intervention for Multi-Agent Reinforcement Learning
|
https://neurips.cc//virtual/2025/poster/117666
|
Anjie Liu, Jianhong Wang, Samuel Kaski, Jun Wang, Mengyue Yang
|
Guiding Cooperative Multi-Agent Reinforcement Learning (MARL) systems towards desirable outcomes is challenging, particularly when universal guidance on the desirable outcomes over the whole team is impractical. Furthermore, designing mechanisms to coordinate agents currently relies on empirical studies, lacking a unified perspective. To mitigate these issues, we introduce Multi-Agent Influence Diagrams (MAIDs) as a graphical tool to visualize existing coordination mechanisms. Based on MAIDs, we design a new coordination mechanism, referred to as targeted intervention, which is applied to only a single agent. In practice, we introduce a technique for causal inference, called pre-strategy intervention, to implement the targeted intervention. Since MAIDs can be regarded as causal diagrams, the causal effect on desirable system outcomes is maximized, implying that the desirable outcomes are achieved. More importantly, the bundled relevance graph analysis from MAIDs is able to predict the solvability of coordination mechanisms via various MARL paradigms. In experiments, we demonstrate the effectiveness of our proposed targeted intervention, and verify the result of the relevance graph analysis.
|
Poster
|
A Private Approximation of the 2nd-Moment Matrix of Any Subsamplable Input
|
https://neurips.cc//virtual/2025/poster/119097
|
Bar Mahpud, Or Sheffet
|
We study the problem of differentially private second moment estimation and present a new algorithm that achieves strong privacy-utility trade-offs even for worst-case inputs under subsamplability assumptions on the data. We call an input $(m,\alpha,\beta)$-subsamplable if a random subsample of size $m$ (or larger) preserves, with probability $\geq 1-\beta$, the spectral structure of the original second moment matrix up to a multiplicative factor of $1\pm \alpha$. Building upon subsamplability, we give a recursive algorithmic framework similar to Kamath et al. (2019) that abides by zero-Concentrated Differential Privacy (zCDP) while preserving, with high probability, the accuracy of the second moment estimation up to an arbitrary factor of $(1\pm\gamma)$. We then show how to apply our algorithm to approximate the second moment matrix of a distribution $\mathcal{D}$, even when a noticeable fraction of the input are outliers.
|
Poster
|
A Probabilistic Inference Approach to Inference-Time Scaling of LLMs using Particle-Based Monte Carlo Methods
|
https://neurips.cc//virtual/2025/poster/115871
|
Isha Puri, Shivchander Sudalairaj, Guangxuan Xu, Abhishek Bhandwaldar, Kai Xu, Akash Srivastava
|
Large language models (LLMs) have achieved significant performance gains via scaling up model sizes and/or data. However, recent evidence suggests diminishing returns from such approaches, motivating a pivot to scaling test-time compute. Existing deterministic inference-time scaling methods, usually with reward models, cast the task as a search problem, but suffer from a key limitation: early pruning. Due to inherently imperfect reward models, promising trajectories may be discarded prematurely, leading to suboptimal performance. We propose a novel inference-time scaling approach by adapting particle-based Monte Carlo methods. Our method maintains a diverse set of candidates and robustly balances exploration and exploitation. Our empirical evaluation demonstrates that our particle filtering methods have a 4--16x better scaling rate over deterministic search counterparts on both challenging mathematical and more general reasoning tasks. Using our approach, we show that Qwen2.5-Math-1.5B-Instruct surpasses GPT-4o accuracy in only 4 rollouts, while Qwen2.5-Math-7B-Instruct scales to o1 level accuracy in only 32 rollouts. Our work not only presents an effective method for inference-time scaling, but also connects the rich literature in probabilistic inference with inference-time scaling of LLMs to develop more robust algorithms in future work.
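A bare-bones sketch of the particle-filtering idea (the step generator `llm_step` and scorer `reward_model` are hypothetical callables; the paper's weighting and resampling schedule differ in detail):

```python
import numpy as np

def particle_filter_decode(llm_step, reward_model, prompt, n_particles=8, max_steps=16, seed=0):
    """Maintain a population of partial solutions instead of hard pruning."""
    rng = np.random.default_rng(seed)
    particles = [[] for _ in range(n_particles)]
    for _ in range(max_steps):
        particles = [p + [llm_step(prompt, p)] for p in particles]       # propagate
        scores = np.array([reward_model(prompt, p) for p in particles])  # imperfect rewards
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        idx = rng.choice(n_particles, size=n_particles, p=weights)       # resample
        particles = [list(particles[i]) for i in idx]
    return max(particles, key=lambda p: reward_model(prompt, p))
```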
|
Poster
|
A Provable Approach for End-to-End Safe Reinforcement Learning
|
https://neurips.cc//virtual/2025/poster/119993
|
Akifumi Wachi, Kohei Miyaguchi, Takumi Tanabe, Rei Sato, Youhei Akimoto
|
A longstanding goal in safe reinforcement learning (RL) is a method to ensure the safety of a policy throughout the entire process, from learning to operation. However, existing safe RL paradigms inherently struggle to achieve this objective. We propose a method, called Provably Lifetime Safe RL (PLS), that integrates offline safe RL with safe policy deployment to address this challenge. Our proposed method learns a policy offline using return-conditioned supervised learning and then deploys the resulting policy while cautiously optimizing a limited set of parameters, known as target returns, using Gaussian processes (GPs). Theoretically, we justify the use of GPs by analyzing the mathematical relationship between target and actual returns. We then prove that PLS finds near-optimal target returns while guaranteeing safety with high probability. Empirically, we demonstrate that PLS outperforms baselines both in safety and reward performance, thereby achieving the longstanding goal to obtain high rewards while ensuring the safety of a policy throughout the lifetime from learning to operation.
|
Poster
|
A Provable Pitfall of Generalization in SSMs
|
https://neurips.cc//virtual/2025/poster/120031
|
Yonatan Slutzky, Yotam Alexander, Noam Razin, Nadav Cohen
|
Neural networks are powered by an implicit bias: a tendency of gradient descent to fit training data in a way that generalizes to unseen data. A recent class of neural network models gaining increasing popularity is structured state space models (SSMs). Prior work argued that the implicit bias of SSMs leads to generalization in a setting where data is generated by a low dimensional teacher. In this paper, we revisit the latter setting, and formally establish a phenomenon entirely undetected by prior work on the implicit bias of SSMs. Namely, we prove that while implicit bias leads to generalization under many choices of training data, there exist special examples whose inclusion in training completely distorts the implicit bias, to a point where generalization fails. This failure occurs despite the special training examples being labeled by the teacher, i.e., having clean labels! We empirically demonstrate the phenomenon, with SSMs trained independently and as part of non-linear neural networks. In the area of adversarial machine learning, disrupting generalization with cleanly labeled training examples is known as clean-label poisoning. Given the proliferation of SSMs, we believe significant efforts should be invested in delineating their susceptibility to clean-label poisoning, and in developing methods for overcoming this susceptibility.
|
Poster
|
ArchCAD-400K: A Large-Scale CAD drawings Dataset and New Baseline for Panoptic Symbol Spotting
|
https://neurips.cc//virtual/2025/poster/115808
|
Ruifeng Luo, Zhengjie Liu, Tianxiao Cheng, Jie Wang, Tongjie Wang, Fei Cheng, Fu Chai, Yanpeng Li, Xingguang Wei, Haomin Wang, Shenglong Ye, Wenhai Wang, Zhang, Yu Qiao, Hongjie Zhang, Xianzhong Zhao
|
Recognizing symbols in architectural CAD drawings is critical for various advanced engineering applications. In this paper, we propose a novel CAD data annotation engine that leverages intrinsic attributes from systematically archived CAD drawings to automatically generate high-quality annotations, thus significantly reducing manual labeling efforts. Utilizing this engine, we construct ArchCAD-400K, a large-scale CAD dataset consisting of 413,062 chunks from 5538 highly standardized drawings, making it over 26 times larger than the largest existing CAD dataset. ArchCAD-400K boasts an extended drawing diversity and broader categories, offering line-grained annotations. Furthermore, we present a new baseline model for panoptic symbol spotting, termed Dual-Pathway Symbol Spotter (DPSS). It incorporates an adaptive fusion module to enhance primitive features with complementary image features, achieving state-of-the-art performance and enhanced robustness. Extensive experiments validate the effectiveness of DPSS, demonstrating the value of ArchCAD-400K and its potential to drive innovation in architectural design and construction.
|
Poster
|
Architectural and Inferential Inductive Biases for Exchangeable Sequence Modeling
|
https://neurips.cc//virtual/2025/poster/115035
|
Daksh Mittal, Ang Li, Tzu-Ching Yen, C. Guetta, Hongseok Namkoong
|
Autoregressive models have emerged as a powerful framework for modeling exchangeable sequences---i.i.d. observations when conditioned on some latent factor---enabling direct modeling of uncertainty from missing data (rather than a latent). Motivated by the critical role posterior inference plays as a subroutine in decision-making (e.g., active learning, bandits), we study the inferential and architectural inductive biases that are most effective for exchangeable sequence modeling. For the inference stage, we highlight a fundamental limitation of the prevalent single-step generation approach: its inability to distinguish between epistemic and aleatoric uncertainty. Instead, a long line of works in Bayesian statistics advocates for multi-step autoregressive generation; we demonstrate this "correct approach" enables superior uncertainty quantification that translates into better performance on downstream decision-making tasks. This naturally leads to the next question: which architectures are best suited for multi-step inference? We identify a subtle yet important gap between recently proposed Transformer architectures for exchangeable sequences (Müller et al., 2022; Nguyen & Grover, 2022; Ye & Namkoong, 2024), and prove that they in fact cannot guarantee exchangeability despite introducing significant computational overhead. Through empirical evaluation, we find that these custom architectures can significantly underperform compared to standard causal masking, highlighting the need for new architectural innovations in Transformer-based modeling of exchangeable sequences.
|
Poster
|
ArchPower: Dataset for Architecture-Level Power Modeling of Modern CPU Design
|
https://neurips.cc//virtual/2025/poster/121420
|
Qijun Zhang, Yao Lu, Mengming Li, Shang Liu, Zhiyao Xie
|
Power is the primary design objective of large-scale integrated circuits (ICs), especially for complex modern processors (i.e., CPUs). Accurate CPU power evaluation requires designers to go through the whole time-consuming IC implementation process, easily taking months. At the early design stage (e.g., architecture-level), classical power models are notoriously inaccurate. Recently, ML-based architecture-level power models have been proposed to boost accuracy, but the data availability is a severe challenge. Currently, there is no open-source dataset for this important ML application. A typical dataset generation process involves correct CPU design implementation and repetitive execution of power simulation flows, requiring significant design expertise, engineering effort, and execution time. Even private in-house datasets often fail to reflect realistic CPU design scenarios. In this work, we propose ArchPower, the first open-source dataset for architecture-level processor power modeling. We go through complex and realistic design flows to collect the CPU architectural information as features and the ground-truth simulated power as labels. Our dataset includes 200 CPU data samples, collected from 25 different CPU configurations when executing 8 different workloads. There are more than 100 architectural features in each data sample, including both hardware and event parameters. The label of each sample provides fine-grained power information, including the total design power and the power for each of the 11 components. Each power value is further decomposed into four fine-grained power groups: combinational logic power, sequential logic power, memory power, and clock power. ArchPower is available at https://github.com/hkust-zhiyao/ArchPower.
|
Poster
|
AReaL: Asynchronous Reinforcement Learning for Efficient and Scalable Language Reasoning
|
https://neurips.cc//virtual/2025/poster/117538
|
Wei Fu, Jiaxuan Gao, Xujie Shen, Chen Zhu, Zhiyu Mei, Chuyi He, Shusheng Xu, Guo Wei, Jun Mei, Jiashu Wang, Tongkai Yang, Binhang Yuan, YI WU
|
Reinforcement learning (RL) has become a key technique for fine-tuning large language reasoning models (LRMs), yet scaling RL training to support long sequences and massive models introduces significant system and algorithmic challenges. Existing frameworks tightly couple generation and training on the same hardware, limiting scalability and causing inefficiencies. In this paper, we present AReaL, a scalable asynchronous RL framework that decouples text generation and training across disjoint GPU sets. To address challenges unique to asynchronous pipelines—such as data staleness and rollout interruption—we propose a principled algorithm-system co-design. Our method introduces (1) Staleness Control to bound policy lag, (2) a Decoupled PPO Objective for stable learning under mild off-policy conditions, and (3) Interruptible Generation to reduce GPU idle time via chunked rollouts. Experiments on models up to 14B parameters across math and coding tasks demonstrate that AReaL outperforms or matches strong baselines in performance, while significantly improving training throughput and scaling efficiently across context lengths (up to 32K) and GPU clusters (up to 1024 GPUs). Our results establish a foundation for robust, high-throughput RL fine-tuning of next-generation LRMs.
|
Poster
|
ARECHO: Autoregressive Evaluation via Chain-Based Hypothesis Optimization for Speech Multi-Metric Estimation
|
https://neurips.cc//virtual/2025/poster/118248
|
Jiatong Shi, Yifan Cheng, Bo-Hao Su, Hye-jin Shim, Jinchuan Tian, Samuele Cornell, Yiwen Zhao, Siddhant Arora, Shinji Watanabe
|
Speech signal analysis poses significant challenges, particularly in tasks such as speech quality evaluation and profiling, where the goal is to predict multiple perceptual and objective metrics. For instance, metrics like PESQ (Perceptual Evaluation of Speech Quality), STOI (Short-Time Objective Intelligibility), and MOS (Mean Opinion Score) each capture different aspects of speech quality. However, these metrics often have different scales, assumptions, and dependencies, making joint estimation non-trivial. To address these issues, we introduce ARECHO (Autoregressive Evaluation via Chain-based Hypothesis Optimization), a chain-based, versatile evaluation system for speech assessment grounded in autoregressive dependency modeling. ARECHO is distinguished by three key innovations: (1) a comprehensive speech information tokenization pipeline; (2) a dynamic classifier chain that explicitly captures inter-metric dependencies; and (3) a two-step confidence-oriented decoding algorithm that enhances inference reliability. Experiments demonstrate that ARECHO significantly outperforms the baseline framework across diverse evaluation scenarios, including enhanced speech analysis, speech generation evaluation, and noisy speech evaluation. Furthermore, its dynamic dependency modeling improves interpretability by capturing inter-metric relationships.
|
Poster
|
A Regularized Newton Method for Nonconvex Optimization with Global and Local Complexity Guarantees
|
https://neurips.cc//virtual/2025/poster/116374
|
Yuhao Zhou, Jintao Xu, Bingrui Li, Chenglong Bao, Chao Ding, Jun Zhu
|
Finding an $\epsilon$-stationary point of a nonconvex function with a Lipschitz continuous Hessian is a central problem in optimization. Regularized Newton methods are a classical tool and have been studied extensively, yet they still face a trade-off between global and local convergence. Whether a parameter-free algorithm of this type can simultaneously achieve optimal global complexity and quadratic local convergence remains an open question. To bridge this long-standing gap, we propose a new class of regularizers constructed from the current and previous gradients, and leverage the conjugate gradient approach with a negative curvature monitor to solve the regularized Newton equation. The proposed algorithm is adaptive, requiring no prior knowledge of the Hessian Lipschitz constant, and achieves a global complexity of $O(\epsilon^{-\frac{3}{2}})$ second-order oracle calls and $\tilde O(\epsilon^{-\frac{7}{4}})$ Hessian-vector products. When the iterates converge to a point where the Hessian is positive definite, the method exhibits quadratic local convergence. Preliminary numerical results, including training physics-informed neural networks, illustrate the competitiveness of our algorithm.
|
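As a rough illustration of the kind of update described above (the paper's exact regularizer and constants may differ; the form of $\sigma_k$ below is our assumption), a gradient-based regularized Newton step can be sketched as follows.

```latex
% Illustrative only: the damping is built from current and previous gradients,
% so no Hessian Lipschitz constant is needed a priori.
\[
  \sigma_k = c \,\sqrt{\frac{\lVert \nabla f(x_k) - \nabla f(x_{k-1}) \rVert}{\lVert x_k - x_{k-1} \rVert}},
  \qquad
  \bigl(\nabla^2 f(x_k) + \sigma_k I\bigr)\, d_k = -\nabla f(x_k),
  \qquad
  x_{k+1} = x_k + d_k,
\]
% with the linear system solved inexactly by conjugate gradients equipped with
% a negative-curvature monitor, as the abstract describes.
```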
Poster
|
A Reinforcement Learning-based Bidding Strategy for Data Consumers in Auction-based Federated Learning
|
https://neurips.cc//virtual/2025/poster/115067
|
Xiaoli Tang, Han Yu, Xiaoxiao Li
|
Auction-based Federated Learning (AFL) fosters collaboration among self-interested data consumers (DCs) and data owners (DOs). A major challenge in AFL pertains to how DCs select and bid for DOs. Existing methods are generally static, making them ill-suited for dynamic AFL markets. To address this issue, we propose the Reinforcement Learning-based Bidding Strategy for DCs in Auction-based Federated Learning (RLB-AFL). We incorporate historical states into a Deep Q-Network to capture sequential information critical for bidding decisions. To mitigate state space sparsity, where specific states rarely reoccur for each DC during auctions, we incorporate the Gaussian Mixture Model into RLB-AFL. This facilitates soft clustering on sequential states, reducing the state space dimensionality and easing exploration and action-value function approximation. In addition, we enhance the $\epsilon$-greedy policy to help the RLB-AFL agent balance exploitation and exploration, enabling it to be more adaptable in the AFL decision-making process. Extensive experiments on 6 widely used benchmark datasets demonstrate that RLB-AFL achieves superior performance compared to 8 state-of-the-art approaches, outperforming the best baseline by 10.56% and 3.15% in terms of average total utility.
|
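A minimal sketch of the mechanism described above (our own illustration: a random linear map stands in for the learned Deep Q-Network, and all shapes are hypothetical): a Gaussian Mixture Model soft-clusters high-dimensional sequential bidding states, and the low-dimensional cluster posteriors drive an epsilon-greedy bid choice.

```python
# Sketch of GMM-compressed states feeding a Q-function with epsilon-greedy bidding.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
states = rng.normal(size=(500, 12))          # hypothetical historical DC states
K, n_actions = 4, 5                          # clusters and discrete bid levels

gmm = GaussianMixture(n_components=K, random_state=0).fit(states)
W = rng.normal(scale=0.1, size=(K, n_actions))   # stand-in for a learned Q-network

def act(state, eps=0.1):
    z = gmm.predict_proba(state.reshape(1, -1))  # (1, K) soft cluster posteriors
    q = z @ W                                    # Q-values over bid levels
    if rng.random() < eps:                       # epsilon-greedy exploration
        return int(rng.integers(n_actions))
    return int(q.argmax())

print("chosen bid level:", act(states[0]))
```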
Poster
|
Are Large Language Models Sensitive to the Motives Behind Communication?
|
https://neurips.cc//virtual/2025/poster/115960
|
Addison J. Wu, Ryan Liu, Kerem Oktar, Ted Sumers, Tom Griffiths
|
Human communication is $\textit{motivated}$: people speak, write, and create content with a particular communicative intent in mind. As a result, the information large language models (LLMs) and associated AI agents receive is inherently biased by humans' intentions and incentives. People are remarkably attuned to navigating such biased information---we easily identify benevolent or self-serving motives in order to know what information to trust. For LLMs to be effective in the real world, they too must critically evaluate content by accounting for the motivations of the source: for example, discounting the claims made in a sales pitch. In this paper, we undertake a comprehensive study of whether LLMs have this capacity for $\textit{motivational vigilance}$. We first use controlled experiments from cognitive science to identify that LLMs follow rational models of learning from motivated testimony, successfully discounting information from biased sources in a human-like manner. We then extend our evaluation to online recommendations, a more naturalistic reflection of LLM agents' information ecosystems. In these settings, we find that LLMs' inferences do not track the rational models' predictions nearly as closely---in part due to the presence of additional information that distracts LLMs from vigilance-relevant considerations. Accordingly, a simple steering intervention that boosts the salience of intentions and incentives substantially increases the correspondence between LLMs and the rational model. These results suggest that LLMs possess a basic sensitivity to the motivations of others, but generalizing to novel real-world settings will require further improvements to these models.
|
Poster
|
Are Large Reasoning Models Good Translation Evaluators? Analysis and Performance Boost
|
https://neurips.cc//virtual/2025/poster/117120
|
Runzhe Zhan, Zhihong Huang, Xinyi Yang, Lidia Chao, Min Yang, Derek Wong
|
Recent advancements in large reasoning models (LRMs) have introduced the "slow thinking" paradigm, which leverages their inherent strengths to enhance reasoning capabilities for complex downstream tasks. However, the potential of LRMs as evaluators for machine translation (MT) quality remains underexplored. We provide the first systematic analysis of LRM-as-a-judge in MT evaluation. We identify key challenges, revealing that LRMs require tailored evaluation materials, tend to ``overthink'' simpler instances, and have issues with scoring mechanisms that lead to overestimation. To address these, we propose to calibrate LRM thinking by training them on synthetic, human-like thinking trajectories. Our experiments on the WMT24 Metrics benchmarks demonstrate that this approach largely reduces thinking budgets by ~35x while concurrently improving evaluation performance across different LRM scales from 7B to 32B (e.g., R1-Distill-Qwen-7B achieves a +8.7 correlation point improvement). These findings highlight the substantial potential of efficiently calibrated LRMs to advance human-centric MT evaluation.
|
Poster
|
A Reliable Cryptographic Framework for Empirical Machine Unlearning Evaluation
|
https://neurips.cc//virtual/2025/poster/117843
|
Yiwen Tu, Pingbang Hu, Jiaqi Ma
|
Machine unlearning updates machine learning models to remove information from specific training data samples, complying with data protection regulations that allow individuals to request the removal of their personal data. Despite the recent development of numerous unlearning algorithms, reliable evaluation of these algorithms remains an open research question. In this work, we focus on membership inference attack (MIA) based evaluation, one of the most common approaches for evaluating unlearning algorithms, and address various pitfalls of existing evaluation metrics that lack theoretical understanding and reliability. Specifically, by modeling the proposed evaluation process as a cryptographic game between unlearning algorithms and MIA adversaries, the naturally-induced evaluation metric measures the data removal efficacy of unlearning algorithms and enjoys provable guarantees that existing evaluation metrics fail to satisfy. Furthermore, we propose a practical and efficient approximation of the induced evaluation metric and demonstrate its effectiveness through both theoretical analysis and empirical experiments. Overall, this work presents a novel and reliable approach to empirically evaluating unlearning algorithms, paving the way for the development of more effective unlearning techniques.
|
Poster
|
Are Pixel-Wise Metrics Reliable for Computerized Tomography Reconstruction?
|
https://neurips.cc//virtual/2025/poster/118574
|
Tianyu Lin, Xinran Li, Chuntung Zhuang, Qi Chen, Yuanhao Cai, Kai Ding, Alan Yuille, Zongwei Zhou
|
Widely adopted evaluation metrics for sparse-view CT reconstruction---such as Structural Similarity Index Measure and Peak Signal-to-Noise Ratio---prioritize pixel-wise fidelity but often fail to capture the completeness of critical anatomical structures, particularly small or thin regions that are easily missed. To address this limitation, we propose a suite of novel anatomy-aware evaluation metrics designed to assess structural completeness across anatomical structures, including large organs, small organs, intestines, and vessels. Building on these metrics, we introduce CARE, a Completeness-Aware Reconstruction Enhancement framework that incorporates structural penalties during training to encourage anatomical preservation of significant regions. CARE is model-agnostic and can be seamlessly integrated into both analytical reconstruction methods and modern learning-based methods, such as Neural Radiance Fields and Gaussian Splatting. When applied to these methods, CARE substantially improves structural completeness in reconstructed CT scans, yielding performance gains of up to +32\% for large organs, +22\% for small organs, +40\% for intestines, and +36\% for vessels. Code has been attached as supplementary material for peer review and will be made publicly available.
|
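The following sketch illustrates what an anatomy-aware completeness score and the associated structural training penalty could look like (the per-structure Dice form, thresholds, and mask names are our assumptions, not the released CARE code).

```python
# Sketch: per-structure completeness scores and a structural penalty term.
import numpy as np

def dice(pred, gt, eps=1e-6):
    """Overlap between binary masks; 1.0 means the structure is fully preserved."""
    inter = np.logical_and(pred, gt).sum()
    return (2.0 * inter + eps) / (pred.sum() + gt.sum() + eps)

def completeness_scores(pred_ct, gt_ct, structure_masks, thresh=0.5):
    """Anatomy-aware completeness: score each structure's region separately."""
    return {name: dice(pred_ct[m] > thresh, gt_ct[m] > thresh)
            for name, m in structure_masks.items()}

def structural_penalty(pred_ct, gt_ct, structure_masks, weight=1.0):
    """Training penalty that encourages preservation of critical structures."""
    scores = completeness_scores(pred_ct, gt_ct, structure_masks)
    return weight * float(np.mean([1.0 - s for s in scores.values()]))

# Toy usage with random volumes and two hypothetical structure masks.
rng = np.random.default_rng(0)
gt = rng.random((16, 16, 16))
pred = gt + 0.05 * rng.normal(size=gt.shape)
masks = {"organ": rng.random(gt.shape) > 0.7, "vessel": rng.random(gt.shape) > 0.95}
print(completeness_scores(pred, gt, masks), structural_penalty(pred, gt, masks))
```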
Poster
|
ARGenSeg: Image Segmentation with Autoregressive Image Generation Model
|
https://neurips.cc//virtual/2025/poster/115738
|
Xiaolong Wang, Lixiang Ru, Ziyuan Huang, Kaixiang Ji, DanDan Zheng, Jingdong Chen, Jun Zhou
|
We propose a novel AutoRegressive Generation-based paradigm for image Segmentation (ARGenSeg), achieving multimodal understanding and pixel-level perception within a unified framework. Prior works integrating image segmentation into multimodal large language models (MLLMs) typically employ either boundary-point representations or dedicated segmentation heads. These methods rely on discrete representations or semantic prompts fed into task-specific decoders, which limits the ability of the MLLM to capture fine-grained visual details. To address these challenges, we introduce a segmentation framework for MLLMs based on image generation, which naturally produces dense masks for target objects. We leverage the MLLM to output visual tokens and detokenize them into images using a universal VQ-VAE, making the segmentation fully dependent on the pixel-level understanding of the MLLM. To reduce inference latency, we employ a next-scale-prediction strategy to generate the required visual tokens in parallel. Extensive experiments demonstrate that our method surpasses prior state-of-the-art approaches on multiple segmentation datasets with a remarkable boost in inference speed, while maintaining strong understanding capabilities.
|
Poster
|
ARIA: Training Language Agents with Intention-driven Reward Aggregation
|
https://neurips.cc//virtual/2025/poster/116879
|
睿涵 杨, yikai zhang, Chen, Xintao Wang, Jiangjie Chen, Siyu Yuan, Deqing Yang, Yanghua Xiao
|
Large language models (LLMs) have enabled agents to perform complex reasoning and decision-making through free-form language interactions. However, in open-ended language action environments (e.g., negotiation or question-asking games), the action space can be formulated as a joint distribution over tokens, resulting in an extremely large and combinatorial action space. Sampling actions in such a space can lead to extreme reward sparsity, which brings large reward variance, hindering effective reinforcement learning (RL). To address this, we propose **ARIA**, a method that **A**ggregates **R**ewards in **I**ntention space to enable efficient and effective language **A**gent training. ARIA aims to project natural language actions from the high-dimensional joint token distribution space into a low-dimensional intention space, where semantically similar actions are clustered and assigned shared rewards. This intention-aware reward aggregation reduces reward variance by densifying reward signals, fostering efficient and effective policy optimization. Extensive experiments demonstrate that ARIA not only significantly reduces gradient variance, but also delivers substantial performance gains averaging 9.95% across four downstream tasks (e.g., negotiation and text-based games), consistently outperforming strong offline and online RL baselines.
|
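A minimal sketch of intention-driven reward aggregation as we read it (the embeddings and cluster count below are hypothetical stand-ins): cluster semantically similar free-form actions in a low-dimensional space and replace each action's sparse reward with its cluster's mean reward, which densifies the signal and lowers reward variance.

```python
# Sketch: cluster action embeddings and share rewards within each cluster.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
action_embeddings = rng.normal(size=(64, 32))             # stand-in for text embeddings
raw_rewards = rng.binomial(1, 0.1, size=64).astype(float)  # sparse per-action rewards

k = 8
labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(action_embeddings)
aggregated = np.array([raw_rewards[labels == labels[i]].mean() for i in range(64)])

print("reward variance before:", raw_rewards.var(), "after:", aggregated.var())
```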
Poster
|
ARM: Adaptive Reasoning Model
|
https://neurips.cc//virtual/2025/poster/115075
|
Siye Wu, Jian Xie, yikai zhang, Chen, Kai Zhang, Yu Su, Yanghua Xiao
|
While large reasoning models demonstrate strong performance on complex tasks, they lack the ability to adjust reasoning token usage based on task difficulty. This often leads to the "overthinking" problem—excessive and unnecessary reasoning—which, although potentially mitigated by human intervention to control the token budget, still fundamentally contradicts the goal of achieving fully autonomous AI. In this work, we propose Adaptive Reasoning Model (ARM), a reasoning model capable of adaptively selecting appropriate reasoning formats based on the task at hand. These formats include three efficient ones—Direct Answer, Short CoT, and Code—as well as a more elaborate format, Long CoT. To train ARM, we introduce Ada-GRPO, an adaptation of Group Relative Policy Optimization (GRPO), which addresses the format collapse issue in traditional GRPO. Ada-GRPO enables ARM to achieve high token efficiency, reducing tokens by an average of $\sim$30%, and up to $\sim$70%, while maintaining performance comparable to the model that relies solely on Long CoT. Furthermore, not only does it improve inference efficiency through reduced token generation, but it also brings a $\sim$2$\times$ speedup in training. In addition to the default Adaptive Mode, ARM supports two additional reasoning modes: 1) Instruction-Guided Mode, which allows users to explicitly specify the reasoning format via special tokens—ideal when the appropriate format is known for a batch of tasks. 2) Consensus-Guided Mode, which aggregates the outputs of the three efficient formats and resorts to Long CoT in case of disagreement, prioritizing performance with higher token usage. All the resources will be released.
|
Poster
|
ARMesh: Autoregressive Mesh Generation via Next-Level-of-Detail Prediction
|
https://neurips.cc//virtual/2025/poster/115211
|
Jiabao Lei, Kewei Shi, Zhihao Liang, Kui Jia
|
Directly generating 3D meshes, the default representation for 3D shapes in the graphics industry, using auto-regressive (AR) models has become popular these days, thanks to their sharpness, compactness in the generated results, and ability to represent various types of surfaces. However, AR mesh generative models typically construct meshes face by face in lexicographic order, which does not effectively capture the underlying geometry in a manner consistent with human perception. Inspired by 2D models that progressively refine images, such as the prevailing next-scale prediction AR models, we propose generating meshes auto-regressively in a progressive coarse-to-fine manner. Specifically, we view mesh simplification algorithms, which gradually merge mesh faces to build simpler meshes, as a natural fine-to-coarse process. Therefore, we develop a transformer-based AR model to approximate the reverse process of a generalized mesh simplification algorithm in the order of level-of-detail, constructing meshes initially from a single point and gradually adding geometric details through local remeshing, where the topology is not predefined and is alterable. Our ablation studies and experiments show that this novel progressive mesh generation approach not only leads to improved mesh quality but also enables applications such as mesh refinement and editing.
|
Poster
|
AR-RAG: Autoregressive Retrieval Augmentation for Image Generation
|
https://neurips.cc//virtual/2025/poster/116365
|
Jingyuan Qi, Zhiyang Xu, Qifan Wang, Lifu Huangg
|
We introduce Autoregressive Retrieval Augmentation (AR-RAG), a novel paradigm that enhances image generation by autoregressively incorporating k-nearest neighbor retrievals at the patch level. Unlike prior methods that perform a single, static retrieval before generation and condition the entire generation on fixed reference images, AR-RAG performs context-aware retrievals at each generation step, using prior-generated patches as queries to retrieve and incorporate the most relevant patch-level visual references, enabling the model to respond to evolving generation needs while avoiding limitations (e.g., over-copying, stylistic bias, etc.) prevalent in existing methods. To realize AR-RAG, we propose two parallel frameworks: (1) Distribution-Augmentation in Decoding (DAiD), a training-free plug-and-use decoding strategy that directly merges the distribution of model-predicted patches with the distribution of retrieved patches, and (2) Feature-Augmentation in Decoding (FAiD), a parameter-efficient fine-tuning method that progressively smooths the features of retrieved patches via multi-scale convolution operations and leverages them to augment the image generation process. We validate the effectiveness of AR-RAG on widely adopted benchmarks, including Midjourney-30K, GenEval and DPG-Bench, demonstrating significant performance gains over state-of-the-art image generation models.
|
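A minimal sketch of the training-free DAiD idea as we understand it (the mixing rule, weight, and codebook size are our assumptions): at each patch step, merge the model's next-patch-token distribution with an empirical distribution over the tokens of the k-nearest retrieved patches, then sample from the mixture.

```python
# Sketch: distribution augmentation at decoding time for patch-level retrieval.
import numpy as np

rng = np.random.default_rng(0)
vocab = 1024                                        # hypothetical visual codebook size

def merge_and_sample(model_logits, retrieved_token_ids, alpha=0.3):
    p_model = np.exp(model_logits - model_logits.max())
    p_model /= p_model.sum()
    p_retr = np.bincount(retrieved_token_ids, minlength=vocab).astype(float)
    p_retr /= p_retr.sum()
    p = (1 - alpha) * p_model + alpha * p_retr       # merged next-patch distribution
    return int(rng.choice(vocab, p=p))

logits = rng.normal(size=vocab)
neighbors = rng.integers(0, vocab, size=16)          # tokens of k-NN retrieved patches
print("sampled patch token:", merge_and_sample(logits, neighbors))
```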
Poster
|
Artificial Hivemind: The Open-Ended Homogeneity of Language Models (and Beyond)
|
https://neurips.cc//virtual/2025/poster/121421
|
Liwei Jiang, Chai Yuanjun, Margaret Li, Mickel Liu, Raymond Fok, Maarten Sap, Yulia Tsvetkov, Nouha Dziri, Yejin Choi
|
Language models (LMs) often struggle to generate diverse, human-like creative content, raising concerns about the long-term homogenization of human thought through repeated exposure to similar outputs. Yet, scalable methods for evaluating LM output diversity remain limited—especially beyond narrow tasks like random number generation or stylized prompts. To address this gap, we introduce InfiniteChats, a large-scale dataset of 26,000 diverse, real-world open-ended user queries, along with the first comprehensive taxonomy for characterizing the full spectrum of open-ended prompts posed to LMs, comprising six top-level categories and 17 subcategories. These queries admit a wide range of plausible answers with no single ground truth. Using InfiniteChats, we present a large-scale analysis of mode collapse in LMs, manifested as redundant outputs even for inherently open-ended queries. Our study reveals a pronounced "Artificial Hivemind" effect in open-ended generation, characterized by (1) intra-model repetition, where a single model consistently generates similar responses, and (2) inter-model homogeneity, where different models produce strikingly similar outputs. InfiniteChats also includes 31,250 human annotations, across absolute ratings and pairwise preferences, with 25 independent human annotations per example. This enables fine-grained analysis of distributional preferences across annotators. Our findings show that state-of-the-art LMs, reward models, and LM judges align less with human ratings when annotators disagree or when responses are of similar quality. Overall, InfiniteChats offers the first large-scale resource for systematically studying open-endedness in LM queries, revealing critical insights to guide future research and mitigate long-term AI safety risks posed by the Artificial Hivemind.
|
Poster
|
A Scalable, Causal, and Energy Efficient Framework for Neural Decoding with Spiking Neural Networks
|
https://neurips.cc//virtual/2025/poster/116071
|
Georgios Mentzelopoulos, Ioannis Asmanis, Konrad Kording, Eva Dyer, Kostas Daniilidis, Flavia Vitale
|
Brain-computer interfaces (BCIs) promise to enable vital functions, such as speech and prosthetic control, for individuals with neuromotor impairments. Central to their success are neural decoders, models that map neural activity to intended behavior. Current learning-based decoding approaches fall into two classes: simple, causal models that lack generalization, or complex, non-causal models that generalize and scale offline but struggle in real-time settings. Both face a common challenge, their reliance on power-hungry artificial neural network backbones, which makes integration into real-world, resource-limited systems difficult. Spiking neural networks (SNNs) offer a promising alternative. Because they operate causally (i.e. only on present and past inputs) these models are suitable for real-time use, and their low energy demands make them ideal for battery-constrained environments. To this end, we introduce **Spikachu: a scalable, causal, and energy-efficient neural decoding framework based on SNNs**. Our approach processes binned spikes directly by projecting them into a shared latent space, where spiking modules, adapted to the timing of the input, extract relevant features; these latent representations are then integrated and decoded to generate behavioral predictions. We evaluate our approach on 113 recording sessions from 6 non-human primates, totaling 43 hours of recordings. Our method outperforms causal baselines when trained on single sessions using between 2.26× and 418.81× less energy. Furthermore, we demonstrate that scaling up training to multiple sessions and subjects improves performance and enables few-shot transfer to unseen sessions, subjects, and tasks. Overall, Spikachu introduces a scalable, online-compatible neural decoding framework based on SNNs, whose performance is competitive relative to state-of-the-art models while consuming orders of magnitude less energy.
|
Poster
|
Ascent Fails to Forget
|
https://neurips.cc//virtual/2025/poster/118666
|
Ioannis Mavrothalassitis, Pol Puigdemont, Noam Levi, Volkan Cevher
|
Contrary to common belief, we show that gradient ascent-based unconstrained optimization methods frequently fail to perform machine unlearning, a phenomenon we attribute to the inherent statistical dependence between the forget and retain data sets. This dependence, which can manifest itself even as simple correlations, undermines the misconception that these sets can be independently manipulated during unlearning. We provide empirical and theoretical evidence showing these methods often fail precisely due to this overlooked relationship. For random forget sets, this dependence means that degrading forget set metrics (which, for a retrained model, should mirror test set metrics) inevitably harms overall test performance. Going beyond random sets, we consider logistic regression as an instructive example where a critical failure mode emerges: inter-set dependence causes gradient descent-ascent iterations to progressively diverge from the ideal retrained model. Strikingly, these methods can converge to solutions that are not only far from the retrained ideal but are potentially even further from it than the original model itself, rendering the unlearning process actively detrimental. A toy example further illustrates how this dependence can trap models in inferior local minima, inescapable via finetuning. Our findings highlight that the presence of such statistical dependencies, even when manifest only as correlations, can be sufficient for ascent-based unlearning to fail. Our theoretical insights are corroborated by experiments on complex neural networks, demonstrating that these methods do not perform as expected in practice due to this unaddressed statistical interplay.
|
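The following toy loop (not the authors' code; the data here are synthetic) shows the gradient descent-ascent unlearning update the paper analyzes: descend on the retain-set loss while ascending on the forget-set loss. When the two sets are statistically dependent, the two objectives interfere rather than decouple, which is the failure mode described above.

```python
# Toy gradient descent-ascent "unlearning" on logistic regression.
import numpy as np

rng = np.random.default_rng(0)
d = 10
w = rng.normal(size=d)
X_retain, y_retain = rng.normal(size=(200, d)), rng.integers(0, 2, 200)
X_forget, y_forget = rng.normal(size=(50, d)), rng.integers(0, 2, 50)

def grad_logloss(w, X, y):
    """Gradient of the mean logistic loss with labels in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return X.T @ (p - y) / len(y)

lr = 0.1
for _ in range(100):
    w -= lr * grad_logloss(w, X_retain, y_retain)   # descent on the retain set
    w += lr * grad_logloss(w, X_forget, y_forget)   # ascent on the forget set
print("final parameter norm:", round(float(np.linalg.norm(w)), 3))
```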
Poster
|
A Semantic Parsing Framework for End-to-End Time Normalization
|
https://neurips.cc//virtual/2025/poster/117253
|
Xin Su, Sungduk Yu, Phillip Howard, Steven Bethard
|
Time normalization is the task of converting natural language temporal expressions into machine-readable representations. It underpins many downstream applications in information retrieval, question answering, and clinical decision-making. Traditional systems based on the ISO-TimeML schema limit expressivity and struggle with complex constructs such as compositional, event-relative, and multi-span time expressions. In this work, we introduce a novel formulation of time normalization as a code generation task grounded in the SCATE framework, which defines temporal semantics through symbolic and compositional operators. We implement a fully executable SCATE Python library and demonstrate that large language models (LLMs) can generate executable SCATE code. Leveraging this capability, we develop an automatic data augmentation pipeline using LLMs to synthesize large-scale annotated data with code-level validation. Our experiments show that small, locally deployable models trained on this augmented data can achieve strong performance, outperforming even their LLM parents and enabling practical, accurate, and interpretable time normalization.
|
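As a hypothetical sketch of what compositional, executable time-normalization operators can look like in Python (the class names and their semantics below are our own illustration, not the paper's released SCATE library), a relative expression such as "three days after the next Friday" can be built from small symbolic pieces and then resolved against a document time.

```python
# Sketch: compositional, executable temporal operators in the spirit of SCATE.
from dataclasses import dataclass
import datetime as dt

@dataclass
class NextDayOfWeek:
    anchor: dt.date
    weekday: int                      # Monday=0 ... Sunday=6
    def resolve(self) -> dt.date:
        """First occurrence of `weekday` strictly after the anchor."""
        days = (self.weekday - self.anchor.weekday() - 1) % 7 + 1
        return self.anchor + dt.timedelta(days=days)

@dataclass
class Plus:
    anchor: dt.date
    days: int
    def resolve(self) -> dt.date:
        return self.anchor + dt.timedelta(days=self.days)

# "three days after the next Friday" relative to a document creation time.
doc_time = dt.date(2025, 5, 1)
expr = Plus(anchor=NextDayOfWeek(anchor=doc_time, weekday=4).resolve(), days=3)
print(expr.resolve())   # executable, compositional, and machine-checkable
```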
Poster
|
A Set of Generalized Components to Achieve Effective Poison-only Clean-label Backdoor Attacks with Collaborative Sample Selection and Triggers
|
https://neurips.cc//virtual/2025/poster/115814
|
Zhixiao Wu, Yao Lu, Jie Wen, Hao Sun, Qi Zhou, Guangming Lu
|
Poison-only Clean-label Backdoor Attacks (PCBAs) aim to covertly inject attacker-desired behavior into DNNs by merely poisoning the dataset without changing the labels. To effectively implant a backdoor, multiple triggers have been proposed for various attack requirements on Attack Success Rate (ASR) and stealthiness. Additionally, sample selection enhances the ASR of clean-label backdoor attacks by meticulously selecting "hard" samples to poison instead of random samples. Current methods, however, 1) usually handle sample selection and triggers in isolation, leading to severely limited improvements in both ASR and stealthiness. Consequently, attacks exhibit unsatisfactory performance on evaluation metrics when converted to PCBAs via a mere stacking of methods. Therefore, we seek to explore the bi-directional collaborative relations between sample selection and triggers to address the above dilemma. 2) Due to the strong specificity of triggers, a simple combination of sample selection and triggers fails to substantially enhance both evaluation metrics while preserving generalization across various attacks. Therefore, we propose a set of components that significantly improve both stealthiness and ASR based on the commonalities of attacks. Specifically, Component A ascertains two critical selection factors and combines them appropriately based on the trigger scale to select more reasonable "hard" samples for improving ASR. Component B selects samples similar to relevant trigger-implanted samples to promote stealthiness. Component C reassigns trigger poisoning intensity across RGB channels based on the distinct sensitivity of the human visual system to each channel for higher ASR, with stealthiness ensured by sample selection including Component B. Furthermore, all components can be strategically integrated into diverse PCBAs, enabling tailored solutions that balance ASR and stealthiness enhancement for specific attack requirements. Extensive experiments demonstrate the superiority of our components in stealthiness, ASR, and generalization. Our code will be released as soon as possible.
|
Poster
|
ASGO: Adaptive Structured Gradient Optimization
|
https://neurips.cc//virtual/2025/poster/116796
|
Kang An, Yuxing Liu, Rui Pan, Yi Ren, Shiqian Ma, Donald Goldfarb, Tong Zhang
|
Training deep neural networks (DNNs) is a structured optimization problem, because the parameters are naturally represented by matrices and tensors rather than simple vectors. Under this structural representation, it has been widely observed that gradients are low-rank and Hessians are approximately block-wise diagonal. These structured properties are crucial for designing efficient optimization algorithms but may not be utilized by current popular optimizers like Adam. In this paper, we present a novel optimization algorithm ASGO that capitalizes on these properties by employing a preconditioner that is adaptively updated using structured gradients. By fine-grained theoretical analysis, ASGO is proven to achieve superior convergence rates compared to existing structured gradient methods. Based on the convergence theory, we further demonstrate that ASGO can benefit from the low-rank and block-wise diagonal properties. We also discuss practical modifications of ASGO and empirically verify the effectiveness of the algorithm on language model tasks.
|
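A rough sketch of a structured, matrix-form adaptive update in the spirit of the abstract above; the one-sided full-matrix accumulator below is our illustrative stand-in, and the exact preconditioner in the paper may differ.

```python
# Sketch: precondition a weight-matrix update with an accumulated gradient outer product.
import numpy as np

rng = np.random.default_rng(0)
m, n, lr, eps = 64, 32, 1e-2, 1e-8
W = rng.normal(scale=0.01, size=(m, n))   # a weight *matrix*, not a flat vector
V = np.zeros((m, m))                      # row-space preconditioner accumulator

def step(G):
    """One preconditioned update using the structured (matrix) gradient G."""
    global W, V
    V += G @ G.T                                              # exploit low-rank gradients
    d, U = np.linalg.eigh(V + eps * np.eye(m))
    P = U @ np.diag(1.0 / np.sqrt(np.maximum(d, eps))) @ U.T  # V^{-1/2}
    W -= lr * P @ G                                           # preconditioned matrix update

step(rng.normal(size=(m, n)))
print("updated weight norm:", round(float(np.linalg.norm(W)), 4))
```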
Poster
|
A Signed Graph Approach to Understanding and Mitigating Oversmoothing in GNNs
|
https://neurips.cc//virtual/2025/poster/120061
|
Jiaqi Wang, Xinyi Wu, James Cheng, Yifei Wang
|
Deep graph neural networks (GNNs) often suffer from oversmoothing, where node representations become overly homogeneous with increasing depth. While techniques like normalization, residual connections, and edge dropout have been proposed to mitigate oversmoothing, they are typically developed independently, with limited theoretical understanding of their underlying mechanisms. In this work, we present a unified theoretical perspective based on the framework of signed graphs, showing that many existing strategies implicitly introduce negative edges that alter message-passing to resist oversmoothing. However, we show that merely adding negative edges in an unstructured manner is insufficient—the asymptotic behavior of signed propagation depends critically on the strength and organization of positive and negative edges. To address this limitation, we leverage the theory of structural balance, which promotes stable, cluster-preserving dynamics by connecting similar nodes with positive edges and dissimilar ones with negative edges. We propose Structural Balanced Propagation (SBP), a plug-and-play method that assigns signed edges based on either labels or feature similarity to explicitly enhance structural balance in the constructed signed graphs. Experiments on nine benchmarks across both homophilic and heterophilic settings demonstrate that SBP consistently improves classification accuracy and mitigates oversmoothing, even at depths of up to 300 layers. Our results provide a principled explanation for prior oversmoothing remedies and introduce a new direction for signed message-passing design in deep GNNs.
|
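A minimal numerical illustration of label-based signed propagation in the spirit of SBP (our own simplification on a toy complete graph): same-class pairs get +1 edges, different-class pairs get -1 edges, and even very deep propagation keeps the two classes separated instead of collapsing everything to one vector.

```python
# Sketch: structurally balanced signed propagation resists oversmoothing.
import numpy as np

n, d = 6, 4
labels = np.array([0, 0, 0, 1, 1, 1])
A = np.ones((n, n)) - np.eye(n)                        # toy graph (complete)
sign = np.where(labels[:, None] == labels[None, :], 1.0, -1.0)
A_signed = A * sign                                    # structurally balanced signs
P = A_signed / np.abs(A_signed).sum(1, keepdims=True)  # signed random-walk operator

X = np.random.default_rng(0).normal(size=(n, d))
for _ in range(300):                                   # very deep propagation
    X = P @ X
# Same-class rows converge to a common embedding; the two classes end up as
# sign-flipped copies of each other rather than a single identical vector.
print(np.round(X, 3))
```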
Poster
|
A Simple Linear Patch Revives Layer-Pruned Large Language Models
|
https://neurips.cc//virtual/2025/poster/119421
|
Xinrui Chen, Haoli Bai, Tao Yuan, ruikang liu, Kang Zhao, Xianzhi Yu, Lu Hou, Tian Guan, Yonghong He, Chun Yuan
|
Layer pruning has become a popular technique for compressing large language models (LLMs) due to its simplicity. However, existing layer pruning methods often suffer from significant performance drops. We identify that \textit{this degradation stems from the mismatch of activation magnitudes across layers and tokens at the pruning interface}. To address this, we propose \textsc{LinearPatch}, a simple plug-and-play technique to revive the layer-pruned LLMs. The proposed method adopts Hadamard transformation to suppress massive outliers in particular tokens, and channel-wise scaling to align the activation magnitudes. These operations can be fused into a single matrix, which functions as a patch to bridge the pruning interface with negligible inference overhead. \textsc{LinearPatch} retains up to \textbf{94.15\%} performance of the original model when pruning 5 layers of LLaMA-3-8B on the question answering benchmark, surpassing existing state-of-the-art methods by \textbf{4\%}. In addition, the patch matrix can be further optimized with memory efficient offline knowledge distillation. With only 5K samples, the retained performance of \textsc{LinearPatch} can be further boosted to \textbf{95.16\%} within 30 minutes on a single computing card.
|
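The following sketch shows one way the fused patch matrix described above could be assembled (the composition order, the scaling rule, and the toy dimensions are our assumptions, not the LinearPatch implementation): a Hadamard rotation, a channel-wise scale that re-aligns activation magnitudes across the pruning interface, and the inverse rotation collapse into one matrix applied with a single extra matmul.

```python
# Sketch: fuse Hadamard transform + channel-wise scaling into one patch matrix.
import numpy as np
from scipy.linalg import hadamard

d = 8                                             # hidden size (power of two here)
H = hadamard(d) / np.sqrt(d)                      # orthonormal Hadamard transform
rng = np.random.default_rng(0)
act_before = np.abs(rng.normal(size=d)) + 1.0     # activation magnitudes pre-pruning
act_after = 0.5 * act_before                      # mismatched magnitudes post-pruning
scale = act_before / act_after                    # channel-wise re-alignment factors

patch = H.T @ np.diag(scale) @ H                  # single fused matrix at the interface
x = rng.normal(size=d)
print(patch @ x)                                  # applied with one extra matmul
```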
Poster
|
A Single-Loop First-Order Algorithm for Linearly Constrained Bilevel Optimization
|
https://neurips.cc//virtual/2025/poster/118807
|
Wei Shen, Jiawei Zhang, Minhui Huang, Cong Shen
|
We study bilevel optimization problems where the lower-level problems are strongly convex and have coupled linear constraints. To overcome the potential non-smoothness of the hyper-objective and the computational challenges associated with the Hessian matrix, we utilize penalty and augmented Lagrangian methods to reformulate the original problem as a single-level one. In particular, we establish a strong theoretical connection between the reformulated function and the original hyper-objective by characterizing the closeness of their values and derivatives. Based on this reformulation, we propose a single-loop, first-order algorithm for linearly constrained bilevel optimization (SFLCB). We provide rigorous analyses of its non-asymptotic convergence rates, showing an improvement over prior double-loop algorithms, from $O(\epsilon^{-3}\log(\epsilon^{-1}))$ to $O(\epsilon^{-3})$. The experiments corroborate our theoretical findings and demonstrate the practical efficiency of the proposed SFLCB algorithm.
|
Poster
|
A Single-Loop Gradient Algorithm for Pessimistic Bilevel Optimization via Smooth Approximation
|
https://neurips.cc//virtual/2025/poster/115423
|
Qichao Cao, Shangzhi Zeng, Jin Zhang
|
Bilevel optimization has garnered significant attention in the machine learning community recently, particularly regarding the development of efficient numerical methods. While substantial progress has been made in developing efficient algorithms for optimistic bilevel optimization, the study of methods for solving Pessimistic Bilevel Optimization (PBO) remains relatively less explored, especially the design of fully first-order, single-loop gradient-based algorithms. This paper aims to bridge this research gap. We first propose a novel smooth approximation to the PBO problem, using penalization and regularization techniques. Building upon this approximation, we then propose SiPBA (Single-loop Pessimistic Bilevel Algorithm), a new gradient-based method specifically designed for PBO which avoids second-order derivative information or inner-loop iterations for subproblem solving. We provide theoretical validation for the proposed smooth approximation scheme and establish theoretical convergence for the algorithm SiPBA. Numerical experiments on synthetic examples and practical applications demonstrate the effectiveness and efficiency of SiPBA.
|
Poster
|
A Single-Swap Local Search Algorithm for k-Means of Lines
|
https://neurips.cc//virtual/2025/poster/116006
|
Ting Liang, Xiaoliang Wu, Junyu Huang, Jianxin Wang, Qilong Feng
|
Clustering is a fundamental problem that has been extensively studied over the past few decades, with most research focusing on point-based clustering such as k-means, k-median, and k-center. However, numerous real-world applications, such as motion analysis, traffic monitoring, and trajectory modeling, require clustering over structured data, including lines, time series, and affine subspaces (flats), where traditional point-based clustering algorithms often fall short. In this paper, we study the k-means of lines problem, where the input is a set L of lines in R^d, and the goal is to find k centers C in R^d such that the sum of squared distances from each line in L to its nearest center in C is minimized. The local search algorithm is a well-established strategy for point-based k-means clustering, known for its efficiency and provable approximation guarantees. However, extending local search to the k-means of lines problem is nontrivial, as the capture relation used in point-based clustering does not generalize to the line setting. This is because the point-to-line distance function lacks the triangle inequality property that supports geometric analysis in point-based clustering. Moreover, since lines extend infinitely in space, it is difficult to identify effective swap points that can significantly reduce the clustering cost. To overcome these obstacles, we introduce a proportional capture relation that links optimal and current centers based on the assignment proportions of lines, enabling a refined analysis that bypasses the triangle inequality barrier. We also introduce a CrossLine structure, which provides a principled discretization of the geometric space around line pairs and ensures coverage of high-quality swap points essential for local search, thereby enabling effective execution of the local search process. Consequently, based on the proposed components, we give the first single-swap local search algorithm for the k-means of lines problem, achieving a $(500+\varepsilon)$-approximation in polynomial time.
|
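The sketch below illustrates the objective and the single-swap idea only (it uses a naive random candidate pool rather than the paper's CrossLine construction, so it carries no approximation guarantee): a swap of one center for a candidate point is accepted whenever it strictly lowers the k-means-of-lines cost.

```python
# Sketch: k-means-of-lines cost and a naive single-swap local search.
import numpy as np

rng = np.random.default_rng(0)

def sq_dist_point_line(c, p, u):
    """Squared distance from center c to the line {p + t*u}, u a unit vector."""
    v = c - p
    return float(v @ v - (v @ u) ** 2)

def cost(centers, lines):
    return sum(min(sq_dist_point_line(c, p, u) for c in centers) for p, u in lines)

def unit(w):
    return w / np.linalg.norm(w)

lines = [(rng.normal(size=3), unit(rng.normal(size=3))) for _ in range(30)]
centers = [rng.normal(size=3) for _ in range(3)]
candidates = [rng.normal(size=3) for _ in range(50)]   # stand-in swap candidates

improved = True
while improved:                                        # each accepted swap lowers cost
    improved = False
    for i in range(len(centers)):
        for q in candidates:
            trial = centers[:i] + [q] + centers[i + 1:]
            if cost(trial, lines) < cost(centers, lines):
                centers, improved = trial, True
print("final cost:", round(cost(centers, lines), 3))
```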
Poster
|
Ask a Strong LLM Judge when Your Reward Model is Uncertain
|
https://neurips.cc//virtual/2025/poster/117907
|
Zhenghao Xu, Qin Lu, Qingru Zhang, Liang Qiu, Ilgee Hong, Changlong Yu, Wenlin Yao, Yao Liu, Haoming Jiang, Lihong Li, Hyokun Yun, Tuo Zhao
|
Reward models (RMs) play a pivotal role in reinforcement learning from human feedback (RLHF) for aligning large language models (LLMs). However, classical RMs trained on human preferences are vulnerable to reward hacking and generalize poorly to out-of-distribution (OOD) inputs. By contrast, strong LLM judges equipped with reasoning capabilities demonstrate superior generalization, even without additional training, but incur significantly higher inference costs, limiting their applicability in online RLHF. In this work, we propose an uncertainty-based routing framework that efficiently complements a fast RM with a strong but costly LLM judge. Our approach formulates advantage estimation in policy gradient (PG) methods as pairwise preference classification, enabling principled uncertainty quantification to guide routing. Uncertain pairs are forwarded to the LLM judge, while confident ones are evaluated by the RM. Experiments on RM benchmarks demonstrate that our uncertainty-based routing strategy significantly outperforms random judge calling at the same cost, and downstream alignment results showcase its effectiveness in improving online RLHF.
|
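A minimal sketch of uncertainty-based routing (the Bradley-Terry conversion and the threshold are our choices, not necessarily the paper's): treat the reward gap of a response pair as a preference probability and escalate to the strong LLM judge only when that probability is close to 0.5.

```python
# Sketch: route uncertain preference pairs from a fast RM to an LLM judge.
import math

def prefer_prob(reward_a: float, reward_b: float) -> float:
    """Bradley-Terry-style P(a preferred over b) from reward-model scores."""
    return 1.0 / (1.0 + math.exp(-(reward_a - reward_b)))

def route(reward_a: float, reward_b: float, tau: float = 0.1) -> str:
    p = prefer_prob(reward_a, reward_b)
    if abs(p - 0.5) < tau:            # uncertain pair -> escalate to costly judge
        return "llm_judge"
    return "a" if p > 0.5 else "b"    # confident pair -> keep the RM's verdict

print(route(1.2, 1.1))   # near-tie: escalated
print(route(2.0, 0.2))   # clear winner: RM decides
```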