
Been Kim
Been is a research scientist at Google Brain. Her research focuses on improving interpretability in machine learning, both by building interpretability methods for already-trained models and by building inherently interpretable models. She holds MS and PhD degrees from MIT.
Been has given tutorials on interpretability at ICML 2017, at the Deep Learning Summer School at the University of Toronto / Vector Institute in 2018, and at CVPR 2018.
Been is one of the executive board members of Women in Machine Learning (WiML), and helps with various ML conferences as a workshop chair, area chair, steering committee member, and program chair.
Authored Publications
Beyond Rewards: a Hierarchical Perspective on Offline Multiagent Behavioral Analysis
Shayegan Omidshafiei, Yannick Assogba
Advances in Neural Information Processing Systems (NeurIPS), 2022 (to appear)
DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
Chun-Liang Li, Brian Eoff, Rosalind Picard
International Conference on Learning Representations (ICLR), 2022
Best of both worlds: local and global explanations with human-understandable concepts
Jessica Schrouff, Sebastien Baur, Shaobo Hou, Eric Loreaux, Diana Mincu, Ralph Blanes
2021
Concept Bottleneck Models
Pang Wei Koh, Thao Nguyen, Yew Siang Tang, Stephen Mussmann, Emma Pierson, Percy Liang
International Conference on Machine Learning (ICML), 2020 (to appear)
On Completeness-aware Concept-Based Explanations in Deep Neural Networks
Chih-kuan Yeh, Chun-Liang Li, Pradeep Ravikumar
Advances in Neural Information Processing Systems (NeurIPS), 2020 (to appear)
Human-Centered Tools for Coping with Imperfect Algorithms during Medical Decision-Making
Jason Hipp, Daniel Smilkov, Martin Stumpe
Conference on Human Factors in Computing Systems (CHI), 2019
Sanity Checks for Saliency Maps
Julius Adebayo, Justin Gilmer, Michael Christoph Muelly, Ian Goodfellow, Moritz Hardt
Advances in Neural Information Processing Systems (NeurIPS), 2018 (Spotlight)