The elephant in the interpretability room: Why use attention as explanation when we have saliency methods?
Abstract
There has been a recent surge of papers that focus on attention as an explanation of model predictions, with mixed evidence on whether attention can be used as such. This has led some to try to 'improve' attention so as to make it more interpretable. We argue that we should pay attention no heed.
While attention conveniently gives us one weight per input token and is easily extracted, it is often unclear towards what goal it is used as explanation. We argue that often that goal, whether explicitly stated or not, is to find out what input tokens are the most relevant to a prediction. When that is the case, input saliency methods better suit our needs, and there are no compelling reasons to use attention, despite the coincidence that it provides a weight for each input. With this position paper, we hope to shift some of the recent focus on attention to saliency methods, and for authors to clearly state the goal for their explanations.