Meg Kurdziolek
Meg is the Lead UXR for Intrinsic.ai, where she focuses on making it easier to create automation solutions with industrial robotics. She is a "Xoogler" and previously worked on Explainable AI services for Google Cloud. Meg has had a varied career working for start-ups and large corporations alike, and she has published on topics such as information visualization, educational-technology design, voice user interface (VUI) design, explainable AI (XAI), and human-robot interaction (HRI). Meg is a proud alumna of Virginia Tech, where she received her Ph.D. in Human-Computer Interaction.
Authored Publications
Explaining the Unexplainable: Explainable AI (XAI) for UX
UXPA Magazine, 22.3 (2022)
Machine learning (ML) product designers face a growing problem. As datasets grow larger and more complex, the ML models built on them become more complex and increasingly opaque. Without clear explanations of model decision making, end users are less likely to trust and adopt the technology. Furthermore, the audiences for ML model explanations vary considerably in background, in experience with mathematical reasoning, and in the contexts in which they apply these technologies. UX professionals can use explainable artificial intelligence (XAI) methods and techniques to explain the reasoning behind ML products.
Explaining the hard to explain: An overview of Explainable AI (XAI) for UX
IxDA-Pittsburgh (2022)
There is a growing problem facing designers and engineers of ML-driven products. Products and services increasingly generate and rely on larger, more complex datasets. As datasets grow in breadth and volume, the ML models built on them increase in complexity, and as model complexity grows, the models become increasingly opaque. Without a means of understanding ML model decision making, end users are less likely to trust and adopt the technology. Furthermore, the audiences for ML model explanations come from varied backgrounds, have different levels of experience with mathematics and statistics, and will rely on these technologies in a variety of contexts. To show the "whys" behind complex machine learning decision making, technologists will need to employ Explainable AI. In this talk, I'll sketch out the basics of Explainable AI (XAI) and describe, at a high level, popular methods and techniques. Then I'll describe the current challenges facing the field and how UX can advocate for better experiences in ML-driven products.