Michal Lahav

Michal Lahav is a Staff User Experience Researcher for AIUX in Google Research. Her research areas include human-centered approaches to generative AI, AI memory, collaborative AI, and assistive speech technologies. Her work incorporates global perspectives and community-based research practices, and aims to make AI more equitable for underrepresented communities.
Authored Publications
    A Systematic Review and Thematic Analysis of Community-Collaborative Approaches to Computing Research
    Ned Cooper
    Tiffanie Horne
    Gillian Hayes
    Jess Scon Holbrook
    Lauren Wilcox
    ACM Conference on Human Factors in Computing Systems (ACM CHI) 2022 (2022)
    HCI researchers have been gradually shifting attention from individual users to communities when engaging in research, design, and system development. However, our field has yet to establish a cohesive, systematic understanding of the challenges, benefits, and commitments of community-collaborative approaches to research. We conducted a systematic review and thematic analysis of 47 computing research papers discussing participatory research with communities for the development of technological artifacts and systems, published over the last two decades. From this review, we identified seven themes associated with the evolution of a project: from establishing community partnerships to sustaining results. Our findings suggest several tensions characterize these projects, many of which relate to the power and position of researchers, and the computing research environment, relative to community partners. We discuss the implications of our findings and offer methodological proposals to guide HCI, and computing research more broadly, towards practices that center a community.
    LaMPost: Evaluation of an AI-assisted Writing Email Editor Prototype for Adults with Dyslexia
    Steven Goodman
    Erin Buehler
    Patrick Clary
    Andy Coenen
    Aaron Michael Donsbach
    Tiffanie Horne
    Bob MacDonald
    Rain Breaw Michaels
    Ajit Narayanan
    Joel Christopher Riley
    Alex Santana
    Rachel Sweeney
    Phil Weaver
    Ann Yuan
    Proceedings of ASSETS 2022, ACM (2022) (to appear)
    Prior work has explored the writing challenges experienced by people with dyslexia, and the potential for new spelling, grammar, and word retrieval technologies to address these challenges. However, the capabilities for natural language generation demonstrated by the latest class of large language models (LLMs) highlight an opportunity to explore new forms of human-AI writing support tools. In this paper, we introduce LaMPost, a prototype email-writing interface that explores the potential for LLMs to power writing support tools that address the varied needs of people with dyslexia. LaMPost draws from our understanding of these needs and introduces novel AI-powered features for email-writing, including outlining main ideas, generating a subject line, suggesting changes, and rewriting a selection. We evaluated LaMPost with 19 adults with dyslexia, identifying many promising routes for further exploration (including the popularity of the “rewrite” and “subject line” features), but also finding that the current generation of LLMs may not surpass the accuracy and quality thresholds required to meet the needs of writers with dyslexia. Surprisingly, we found that participants’ awareness of the AI had no effect on their perception of the system, nor on their feelings of autonomy, expression, and self-efficacy when writing emails. Our findings yield further insight into the benefits and drawbacks of using LLMs as writing support for adults with dyslexia and provide a foundation to build upon in future research.
    Automated speech recognition (ASR) converts speech into text and is used across a variety of applications to assist us in everyday life, from powering virtual assistants and natural language conversations to enabling dictation services. While recent work suggests that there are racial disparities in the performance of ASR systems for speakers of African American Vernacular English, little is known about the psychological and experiential effects of these failures. This paper provides a detailed examination of the behavioral and psychological consequences of ASR voice errors and the difficulty African American users have with getting their intents recognized. The results demonstrate that ASR failures have a negative, detrimental impact on African American users. Specifically, African Americans feel othered when using technology powered by ASR—errors surface thoughts about identity, namely about race and geographic location—leaving them feeling that the technology was not made for them. As a result, African Americans accommodate their speech to have better success with the technology. We incorporate the insights and lessons learned from sociolinguistics in our suggestions for linguistically responsive ways to build more inclusive voice systems that consider African American users’ needs, attitudes, and speech patterns. Our findings suggest that the use of a diary study can enable researchers to best understand the experiences and needs of communities who are often misunderstood by ASR. We argue this methodological framework could enable researchers who are concerned with fairness in AI to better capture the needs of all speakers who are traditionally misheard by voice-activated, artificially intelligent (voice-AI) digital systems.
    Three Directions for the Design of Human-Centered Machine Translation
    Samantha Robertson
    Wesley Deng
    Timnit Gebru
    Margaret Mitchell
    Samy Bengio
    Niloufar Salehi
    (2021)
    As people all over the world adopt machine translation (MT) to communicate across languages, there is increased need for affordances that aid users in understanding when to rely on automated translations. Identifying the information and interactions that will most help users meet their translation needs is an open area of research at the intersection of Human-Computer Interaction (HCI) and Natural Language Processing (NLP). This paper advances work in this area by drawing on a survey of users' strategies in assessing translations. We identify three directions for the design of translation systems that support more reliable and effective use of machine translation: helping users craft good inputs, helping users understand translations, and expanding interactivity and adaptivity. We describe how these can be introduced in current MT systems and highlight open questions for HCI and NLP research.
    Unmet Needs and Opportunities for Mobile Translation AI
    Abigail Evans
    Aaron Michael Donsbach
    Boris Smus
    Jess Scon Holbrook
    Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20), ACM, Honolulu, Hawaii, USA
    Translation apps and devices are often presented in the context of providing assistance while traveling abroad. However, the spectrum of needs for cross-language communication is much wider. To investigate these needs, we conducted three studies with populations spanning socioeconomic status and geographic regions: (1) United States-based travelers, (2) migrant workers in India, and (3) immigrant populations in the United States. We compare frequent travelers' perception and actual translation needs with those of the two migrant communities. The latter two, with low language proficiency, have the greatest translation needs to navigate their daily lives. However, current mobile translation apps do not meet these needs. Our findings provide new insights on the usage practices and limitations of mobile translation tools. Finally, we propose design implications to help apps better serve these unmet needs.