- Michael Munn
- David Pitman
Explainable AI (XAI) refers to methods and techniques in artificial intelligence that allow a model's results to be explained in terms understandable to human experts. Explainability is a key component of what is now called Responsible AI, alongside ML fairness, security, and privacy. A successful XAI system aims to increase trust in and transparency of complex ML models in a way that benefits model developers, stakeholders, and users.
This book is a collection of some of the most effective and commonly used techniques for explaining why an ML model makes the predictions it does. We discuss many aspects of Explainable AI, including its challenges, metrics for success, and case studies that illustrate best practices. Ultimately, the goal of this book is to bridge the gap between the vast body of work that has been done in Explainable AI and the practitioners who aim to incorporate XAI into their ML development workflow, serving as a quick reference along the way.