Asma Ghandeharioun

Asma Ghandeharioun, Ph.D. is a research scientist on the People + AI Research team in Google Research. She works on systems that better interpret humans and are better interpreted by humans. Her previous work spans machine learning interpretability, conversational AI, affective computing, digital health, and, more broadly, human-centered AI. She holds doctoral and master's degrees from MIT and a bachelor's degree from Sharif University of Technology. She was trained as a computer scientist/engineer and has research experience at MIT, Google Research, Microsoft Research, and EPFL, as well as in collaboration with medical professionals from Harvard, renowned hospitals in the Boston area, and abroad.

Some of her favorite past projects include generating disentangled interpretations via concept traversals, approximating interactive human evaluation using self-play for open-domain dialog models, interpretability benefits of characterizing sources of uncertainty, estimating depressive symptom severity based on sensor data, and an emotion-aware wellbeing chatbot.

Her work has been published in premier peer-reviewed machine learning and digital health venues such as ICLR, NeurIPS, EMNLP, AAAI, ACII, AISTATS, Frontiers in Psychiatry, and Psychology of Well-Being, and has been featured in Wired, The Wall Street Journal, and New Scientist.
Authored Publications
    DISSECT: Disentangled Simultaneous Explanations via Concept Traversals
    Chun-Liang Li
    Brian Eoff
    Rosalind Picard
    International Conference on Learning Representations (2022)
    Abstract: Explaining deep learning model inferences is a promising avenue for scientific understanding, improving safety, uncovering hidden biases, evaluating fairness, and beyond, as argued by many scholars. One of the principal benefits of counterfactual explanations is allowing users to explore "what-if" scenarios through what does not and cannot exist in the data, a quality that many other forms of explanation, such as heatmaps and influence functions, are inherently incapable of offering. However, most previous work on generative explainability cannot disentangle important concepts effectively, produces unrealistic examples, or fails to retain relevant information. We propose a novel approach, DISSECT, that jointly trains a generator, a discriminator, and a concept disentangler to overcome such challenges using little supervision. DISSECT generates Concept Traversals (CTs), defined as a sequence of generated examples with increasing degrees of concepts that influence a classifier's decision. By training a generative model from a classifier's signal, DISSECT offers a way to discover a classifier's inherent "notion" of distinct concepts automatically rather than relying on user-predefined concepts. We show that DISSECT produces CTs that (1) disentangle several concepts, (2) are influential to a classifier's decision and are coupled to its reasoning due to joint training, (3) are realistic, (4) preserve relevant information, and (5) are stable across similar inputs. We validate DISSECT on several challenging synthetic and realistic datasets where previous methods fall short of satisfying desirable criteria for interpretability, and show that it performs consistently well. Finally, we present experiments showing applications of DISSECT for detecting potential biases of a classifier and identifying spurious artifacts that impact predictions.
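
As a rough illustration of the concept-traversal idea described above, the sketch below sweeps a concept-intensity value through a conditional generator and records how a frozen classifier's prediction shifts. The ConceptConditionalGenerator class, the two-class setup, and all shapes are hypothetical toy stand-ins for the jointly trained DISSECT components, not the paper's released implementation.

```python
import torch
import torch.nn as nn

class ConceptConditionalGenerator(nn.Module):
    """Hypothetical toy: maps (flattened input, concept intensity) -> perturbed input."""
    def __init__(self, input_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim + 1, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Tanh(),
        )

    def forward(self, x: torch.Tensor, alpha: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([x, alpha], dim=-1))

def concept_traversal(generator, classifier, x, num_steps: int = 8):
    """Sweep concept intensity from 0 to 1 and record the classifier's
    probability for the class of interest at each step."""
    generator.eval()
    classifier.eval()
    traversal, scores = [], []
    with torch.no_grad():
        for alpha in torch.linspace(0.0, 1.0, num_steps):
            alpha_col = torch.full((x.shape[0], 1), float(alpha))
            x_alpha = generator(x, alpha_col)            # example with "more" of the concept
            prob = torch.softmax(classifier(x_alpha), dim=-1)[:, 1]
            traversal.append(x_alpha)
            scores.append(prob)
    return traversal, scores

if __name__ == "__main__":
    input_dim = 64 * 64
    generator = ConceptConditionalGenerator(input_dim)
    classifier = nn.Linear(input_dim, 2)   # frozen toy classifier
    x = torch.randn(4, input_dim)          # batch of flattened inputs
    _, probs = concept_traversal(generator, classifier, x)
    print([round(p.mean().item(), 3) for p in probs])
```

In the paper, the generator is trained jointly against the classifier's signal (plus a discriminator and disentangler), so that increasing the intensity of one concept changes the decision without entangling other concepts; this sketch only shows the inference-time traversal.
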
    Human-centric dialog training via offline reinforcement learning
    Judy Hanwen Shen
    Craig Ferguson
    Agata Lapedriza
    Noah Jones
    Shixiang Gu
    Rosalind Picard
    Empirical Methods in Natural Language Processing (EMNLP) (2020)
    Abstract: How can we train a dialog model to produce better conversations by learning from human feedback, without the risk of humans teaching it harmful chat behaviors? We start by hosting models online and gathering human feedback from real-time, open-ended conversations, which we then use to train and improve the models using offline reinforcement learning (RL). We identify implicit conversational cues, including language similarity, elicitation of laughter, sentiment, and more, which indicate positive human feedback, and embed these in multiple reward functions. A well-known challenge is that learning an RL policy in an offline setting usually fails due to the lack of ability to explore and the tendency to make over-optimistic estimates of future reward. These problems become even harder when using RL for language models, which can easily have a 20,000-action vocabulary and many possible reward functions. We solve the challenge by developing a novel class of offline RL algorithms. These algorithms use KL-control to penalize divergence from a pre-trained prior language model, and use a new strategy to make the algorithm pessimistic, instead of optimistic, in the face of uncertainty. We test the resulting dialog model with ratings from 80 users in an open-domain setting and find it achieves significant improvements over existing deep offline RL approaches. The novel offline RL method is viable for improving any existing generative dialog model using a static dataset of human feedback.
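
A minimal sketch of the KL-control idea, under assumed toy shapes: the loss below weights the log-likelihood of logged responses by their implicit-feedback reward and adds a KL penalty that keeps the fine-tuned policy close to a frozen prior language model. The function name, tensor shapes, and reward values are illustrative assumptions rather than the paper's implementation, and the pessimism-under-uncertainty component is omitted.

```python
import torch
import torch.nn.functional as F

def kl_controlled_loss(policy_logits: torch.Tensor,
                       prior_logits: torch.Tensor,
                       chosen_tokens: torch.Tensor,
                       reward: torch.Tensor,
                       kl_weight: float = 0.1) -> torch.Tensor:
    """policy_logits, prior_logits: [batch, seq_len, vocab]
    chosen_tokens: [batch, seq_len] tokens from the logged conversations
    reward: [batch] scalar implicit feedback (e.g., sentiment- or laughter-based)"""
    # Reward-weighted log-likelihood of the logged (offline) responses.
    log_probs = F.log_softmax(policy_logits, dim=-1)
    token_ll = log_probs.gather(-1, chosen_tokens.unsqueeze(-1)).squeeze(-1)
    weighted_ll = (reward.unsqueeze(-1) * token_ll).mean()

    # Per-token KL(policy || prior) keeps the policy close to fluent language.
    prior_log_probs = F.log_softmax(prior_logits, dim=-1)
    kl = (log_probs.exp() * (log_probs - prior_log_probs)).sum(-1).mean()

    return -weighted_ll + kl_weight * kl

if __name__ == "__main__":
    batch, seq_len, vocab = 2, 5, 100
    policy_logits = torch.randn(batch, seq_len, vocab, requires_grad=True)
    prior_logits = torch.randn(batch, seq_len, vocab)   # frozen pre-trained prior
    tokens = torch.randint(0, vocab, (batch, seq_len))
    reward = torch.tensor([1.0, -0.5])
    loss = kl_controlled_loss(policy_logits, prior_logits, tokens, reward)
    loss.backward()
    print(float(loss))
```

The KL term is what prevents the offline-trained policy from drifting into degenerate but high-reward language; in the paper the reward itself comes from implicit cues such as sentiment and elicited laughter rather than explicit ratings.
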
    Characterizing Sources of Uncertainty to Proxy Calibration and Disambiguate Annotator and Data Bias
    Brian Eoff
    Rosalind Picard
    ICCV Workshop on Interpreting and Explaining Visual Artificial Intelligence Models (2019)
    Abstract: Supporting model interpretability for complex phenomena where annotators can legitimately disagree, such as emotion recognition, is a challenging machine learning task. In this work, we show that explicitly quantifying the uncertainty in such settings has interpretability benefits. We use a simple modification of classical network inference, Monte Carlo dropout, to obtain measures of epistemic and aleatoric uncertainty. We identify a significant correlation between aleatoric uncertainty and human annotator disagreement (r ≈ .3). Additionally, we demonstrate how difficult and subjective training samples can be identified using aleatoric uncertainty and how epistemic uncertainty can reveal data bias that could result in unfair predictions. We identify the total uncertainty as a suitable surrogate for model calibration, i.e., the degree to which we can trust the model's predicted confidence. In addition to explainability benefits, we observe modest performance boosts from incorporating model uncertainty.
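
The sketch below illustrates the Monte Carlo dropout recipe the abstract refers to, using the standard entropy decomposition: total predictive entropy splits into an aleatoric term (mean per-sample entropy) and an epistemic term (the mutual information). The small dropout classifier and its dimensions are hypothetical toys, not the paper's code.

```python
import torch
import torch.nn as nn

class DropoutClassifier(nn.Module):
    def __init__(self, in_dim: int, num_classes: int, p: float = 0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(), nn.Dropout(p),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_uncertainty(model, x, num_samples: int = 30):
    """Return (epistemic, aleatoric, total) uncertainty per input."""
    model.train()  # keep dropout active at inference time
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(num_samples)]
        )                                        # [samples, batch, classes]
    mean_probs = probs.mean(0)
    eps = 1e-12
    total = -(mean_probs * (mean_probs + eps).log()).sum(-1)    # predictive entropy
    aleatoric = -(probs * (probs + eps).log()).sum(-1).mean(0)  # expected per-sample entropy
    epistemic = total - aleatoric                               # mutual information
    return epistemic, aleatoric, total

if __name__ == "__main__":
    model = DropoutClassifier(in_dim=16, num_classes=3)
    x = torch.randn(8, 16)
    epi, ale, tot = mc_dropout_uncertainty(model, x)
    print(epi.mean().item(), ale.mean().item(), tot.mean().item())
```

Under this decomposition, the aleatoric term captures inherent ambiguity in the data (e.g., legitimate annotator disagreement), while the epistemic term reflects what the model has not learned and tends to shrink with more training data, which is why the two are useful for flagging subjective samples and data bias respectively.
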