Post-hoc Unsupervised Concept-based Explanation for Language: A Comparison Through User-LLMs
Abstract
Concept-based explanations enhance interpretability by mapping complex model computations to human-understandable concepts. Evaluating their interpretability is complex, hinging not only on the quality of the concept space but also on how effectively these concepts are communicated to users. Existing evaluation metrics often focus solely on the concept space, neglecting the impact of communication, and typically assess either faithfulness or plausibility in isolation.
To address these challenges, we introduce a simulatability framework that assesses interpretability by measuring a user's ability to predict the model's outputs based solely on the provided explanations. This approach accounts for both the concept space and its communication, encompassing the full spectrum of interpretability. Recognizing the impracticality of extensive human studies, we propose using large language models (user-LLMs) as proxies for human users in simulatability experiments. This novel method allows for scalable and consistent evaluation across various models and datasets. Our comprehensive experiments demonstrate that user-LLMs effectively simulate human interpretability assessments, providing consistent rankings of explanation methods. Our work advances the scalable evaluation of interpretability in Explainable AI, promoting the development of AI systems that are both accurate and transparent.
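A minimal sketch of the simulatability loop described above, assuming a hypothetical `query_user_llm` callable that sends a prompt to the user-LLM and returns its answer; all names, prompt wording, and the accuracy criterion here are illustrative rather than the paper's exact protocol:

```python
from typing import Callable, Sequence


def simulatability_accuracy(
    explanations: Sequence[str],
    model_predictions: Sequence[str],
    label_set: Sequence[str],
    query_user_llm: Callable[[str], str],
) -> float:
    """Fraction of examples on which the user-LLM, shown only the
    concept-based explanation, predicts the same label as the explained model."""
    hits = 0
    for explanation, predicted_label in zip(explanations, model_predictions):
        # The user-LLM sees the explanation but not the original input or model.
        prompt = (
            "You are shown an explanation of how a text classifier reached its decision.\n"
            f"Explanation: {explanation}\n"
            f"Possible labels: {', '.join(label_set)}\n"
            "Based only on this explanation, which label did the classifier assign? "
            "Answer with the label only."
        )
        user_llm_guess = query_user_llm(prompt).strip()
        hits += int(user_llm_guess == predicted_label)
    return hits / len(explanations)
```

Higher agreement between the user-LLM's guesses and the explained model's actual outputs indicates an explanation method that communicates its concept space more effectively.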