Ariel Liu


Authored Publications
    A Mixed-Methods Approach to Understanding User Trust after Voice Assistant Failures
    Allison Mercurio
    Amanda Elizabeth Baughan
    Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (2023)
    Abstract: Despite large gains in natural language understanding from large language models in recent years, voice assistants still often fail to meet user expectations. In this study, we conducted a mixed-methods analysis of how voice assistant failures affect users' trust in their voice assistants. To illustrate how users experience these failures, we contribute a crowdsourced dataset of 199 voice assistant failures, categorized across 12 failure sources. Drawing on interview and survey data, we find that certain failures, such as those caused by overcapturing users' input, erode user trust more than others. We also examine how failures affect users' willingness to rely on voice assistants for future tasks. After a failure, users often stop using their voice assistants for the affected task for a short period before resuming similar usage. We demonstrate the importance of low-stakes tasks, such as playing music, in rebuilding trust after failures.
    Challenges in Supporting Exploratory Search through Voice Assistants
    Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems (Workshop), Association for Computing Machinery, New York, NY, USA
    Abstract: Voice assistants have been successfully adopted for simple, routine tasks, such as asking for the weather or setting an alarm. However, as people get more familiar with voice assistants, they may raise their expectations for more complex tasks, such as exploratory search, e.g., “What should I do when I visit Paris with kids? Oh, and ideally not too expensive.” Compared to simple search tasks such as “How tall is the Eiffel Tower?”, which can be answered with a single-shot answer, the response to exploratory search is more nuanced, especially through voice-based assistants. In this paper, we outline four challenges in designing voice assistants that can better support exploratory search: addressing situationally induced impairments; working with mixed-modal interactions; designing for diverse populations; and meeting users’ expectations and gaining their trust. Addressing these challenges is important for developing more “intelligent” voice-based personal assistants.
    Building Empathy: Scaling User Research for Organizational Impact
    Victoria Schwanda Sosik
    Khadine Singh
    CHI '18 Extended Abstracts on Human Factors in Computing Systems (2018) (to appear)
    Preview abstract Building user empathy in a tech organization is crucial to ensure that products are designed with an eye toward user needs and experiences. The Pokerface program is a Google internal user empathy campaign with 26 researchers that helped more than 1500 employees—including engineers, product managers, designers, analysts, and program managers across more than 15 sites—have first-hand experiences with their users. Here, we discuss the goals of the Pokerface program, some challenges that we have faced during execution, and the impact we have measured thus far. View details