Explaining the hard to explain: An overview of Explainable AI (XAI) for UX

IxDA-Pittsburgh (2022)

Abstract

There is a growing problem facing designers and engineers of ML-driven products. Products and services increasingly generate and rely on larger, more complex datasets. As datasets grow in breadth and volume, the ML models built on them grow more complex, and as complexity grows, the models become increasingly opaque. Without a means of understanding how an ML model makes its decisions, end users are less likely to trust and adopt the technology. Furthermore, the audiences for ML model explanations come from varied backgrounds, have different levels of experience with mathematics and statistics, and will rely on these technologies in a variety of contexts. To show the “whys” behind complex machine learning decision making, technologists will need to employ “Explainable AI.” In this talk, I'll sketch out the basics of Explainable AI (XAI) and describe, at a high level, its popular methods and techniques. Then I'll describe the current challenges facing the field and how UX can advocate for better experiences in ML-driven products.
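
For readers who want a concrete anchor for the kind of method the talk surveys, below is a minimal sketch of one popular post-hoc XAI technique, SHAP feature attribution. It is illustrative only: the dataset and model are stand-ins (not anything from the talk), and the sketch assumes the shap and scikit-learn Python packages are installed.

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    # Stand-in data and model; any fitted tree ensemble would work here.
    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley-value attributions efficiently for trees.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X)

    # Each row of shap_values estimates how much each feature pushed that one
    # prediction above or below the model's average output -- the per-decision
    # "why" that an interface could surface to an end user.
    shap.summary_plot(shap_values, X)

Other widely used techniques, such as LIME, saliency maps, and counterfactual explanations, fill the same role through different mechanisms, which is part of why translating them for varied audiences is a UX problem as much as a technical one.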