arxiv:2402.03485

Attention Meets Post-hoc Interpretability: A Mathematical Perspective

Published on Feb 5, 2024
Authors:

Abstract

AI-generated summary

A mathematical analysis of attention-based architectures reveals differences between post-hoc and attention-based explanations, finding that post-hoc methods can capture more useful insights.

Attention-based architectures, in particular transformers, are at the heart of a technological revolution. Interestingly, in addition to helping obtain state-of-the-art results on a wide range of applications, the attention mechanism intrinsically provides meaningful insights into the internal behavior of the model. Can these insights be used as explanations? Debate rages on. In this paper, we mathematically study a simple attention-based architecture and pinpoint the differences between post-hoc and attention-based explanations. We show that they provide quite different results, and that, despite their limitations, post-hoc methods are capable of capturing more useful insights than merely examining the attention weights.
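As a rough illustration of the comparison the abstract describes (not the paper's actual architecture or post-hoc method), the sketch below builds a hypothetical toy single-head attention layer, reads off its attention weights as one candidate explanation, and computes a simple occlusion-based post-hoc attribution on the same input. All names, dimensions, and the model itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy single-head attention "model" (not the architecture studied in the paper).
d, T = 8, 5                        # embedding dimension, sequence length
W_q = rng.normal(size=(d, d))
W_k = rng.normal(size=(d, d))
W_v = rng.normal(size=(d, d))
w_out = rng.normal(size=d)         # linear read-out on the mean of the attended values

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def model(X):
    """Scalar output of a one-layer attention block on a (T, d) input."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    A = softmax(Q @ K.T / np.sqrt(d))   # (T, T) attention weights
    H = A @ V                           # attended values
    return float(H.mean(axis=0) @ w_out), A

X = rng.normal(size=(T, d))
y, A = model(X)

# Explanation 1: attention-based -- average attention each token receives across queries.
attention_expl = A.mean(axis=0)

# Explanation 2: post-hoc occlusion -- zero out each token and measure the output change.
occlusion_expl = np.empty(T)
for t in range(T):
    X_masked = X.copy()
    X_masked[t] = 0.0
    y_masked, _ = model(X_masked)
    occlusion_expl[t] = y - y_masked

print("attention-based   :", np.round(attention_expl, 3))
print("occlusion (post-hoc):", np.round(occlusion_expl, 3))
```

On toy inputs like this, the two token rankings often disagree, which is the kind of divergence between attention-based and post-hoc explanations the paper analyzes mathematically.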

