Meredith Ringel Morris
Meredith Ringel Morris is Director and Principal Scientist for Human-AI Interaction at Google DeepMind (formerly in Google Brain), conducting foundational research on human-AI interaction and human-centered AI. Previously, she was Director of People + AI Research in Google Research's Responsible AI organization. She is also an Affiliate Professor at the University of Washington in the Paul G. Allen School of Computer Science & Engineering and in the Information School. Prior to joining Google Research, Dr. Morris was Research Area Manager for Interaction, Accessibility, and Mixed Reality at Microsoft Research, where she founded Microsoft's Ability research group. Dr. Morris is an ACM Fellow and a member of the ACM SIGCHI Academy. She earned her Sc.B. in Computer Science from Brown University and her M.S. and Ph.D. in Computer Science from Stanford University.
Authored Publications
From Provenance to Aberrations: Image Creator and Screen Reader User Perspectives on Alt Text for AI-Generated Images
Maitraye Das
Alexander J. Fiannaca
CHI Conference on Human Factors in Computing Systems (2024)
AI-generated images are proliferating as a new visual medium. However, state-of-the-art image generation models do not output alternative (alt) text with their images, rendering them largely inaccessible to screen reader users (SRUs). Moreover, little is known about what information would be most desirable to SRUs in this new medium. To address this, we invited AI image creators and SRUs to evaluate alt text prepared from various sources and to write their own alt text for AI images. Our mixed-methods analysis makes three contributions. First, we highlight creators’ perspectives on alt text, as creators are well-positioned to write descriptions of their images. Second, we illustrate SRUs’ alt text needs particular to the emerging medium of AI images. Finally, we discuss the promises and pitfalls of using the text prompts written as input for AI models in alt text generation, and areas where broader digital accessibility guidelines could expand to account for AI images.
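The prompt-to-alt-text idea the abstract weighs can be made concrete with a toy sketch. Everything below is illustrative rather than from the paper: the function name, the style-term list, and the 125-character cap are assumptions, and the core caveat the study surfaces still applies, since prompts describe the creator's intent rather than what the model actually rendered.

```python
# Illustrative only: deriving a *candidate* alt text from a text-to-image prompt.
# Prompts describe intent, not the rendered output, so any draft produced this
# way still needs human review (e.g., for "aberrations" in the generated image).

def draft_alt_text_from_prompt(prompt: str, max_chars: int = 125) -> str:
    """Turn a generation prompt into a draft alt text for later human editing."""
    # Strip common style directives that describe rendering, not image content.
    style_terms = ("4k", "8k", "hdr", "octane render", "trending on artstation")
    cleaned = prompt.lower()
    for term in style_terms:
        cleaned = cleaned.replace(term, "")
    draft = "AI-generated image of " + " ".join(cleaned.split()).strip(" ,.")
    # Concise alt text is generally preferred; truncate as a crude fallback.
    return draft if len(draft) <= max_chars else draft[: max_chars - 1] + "…"

print(draft_alt_text_from_prompt("a corgi astronaut on the moon, 4k, HDR"))
# -> "AI-generated image of a corgi astronaut on the moon"
```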
Help and The Social Construction of Access: A Case-Study from India
Vaishnav Kameswaran
Jerry Young Robinson
Nithya Sambasivan
Gaurav Aggarwal
Proceedings of ASSETS 2024, ACM (2024)
A goal of assistive technology (AT) design is often to increase independence, i.e., to enable people with disabilities to accomplish tasks on their own without help. Recent work uses "interdependence" to challenge this view, a framing that recognizes mutual dependencies as critical to addressing the access needs of people with disabilities. However, empirical evidence examining interdependence is limited to the Global North; we address this gap, using interdependence as an analytical frame to understand how people with visual impairments (PVI) in India navigate indoor environments. Drawing on interviews with PVI and their companions and a video-diary study, we find that help is central to how PVI circumvent social and structural inaccess, and that obtaining this help itself necessitates work. We uncover three kinds of interdependencies: 1) self-initiated, 2) serendipitous, and 3) obligatory, and discuss the implications these interdependencies have for AT design in the Global South.
Advances in deep learning systems have allowed large models to match or surpass human accuracy on a number of skills, such as image classification, basic programming, and standardized test taking. As the performance of the most capable models begins to saturate on tasks where humans already achieve high accuracy, it becomes necessary to benchmark models on increasingly complex abilities. One such task is forecasting the future outcome of events. In this work we describe experiments using a novel dataset of real-world events and associated human predictions, an evaluation metric to measure forecasting ability, and the accuracy of a number of different LLM-based forecasting designs on the provided dataset. Additionally, we analyze the performance of the LLM forecasters against human predictions and find that models still struggle to make accurate predictions about the future. Our follow-up experiments indicate this is likely due to models' tendency to guess that most events are unlikely to occur (which tends to be true for many prediction datasets, but does not reflect actual forecasting ability). We reflect on next steps for developing a systematic and reliable approach to studying LLM forecasting.
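The abstract does not name its evaluation metric, so as a hedged illustration, here is the Brier score, a standard metric for probabilistic forecasts (not necessarily the paper's exact metric). The example numbers are hypothetical and show why "guess that everything is unlikely" can look deceptively strong on datasets where most events do not occur.

```python
# Brier score: mean squared error between predicted probabilities and binary
# outcomes. Lower is better; always guessing the base rate is a useful baseline
# to compare an LLM forecaster against.

def brier_score(probs: list[float], outcomes: list[int]) -> float:
    """Mean squared error between predicted probabilities and 0/1 outcomes."""
    assert len(probs) == len(outcomes)
    return sum((p - y) ** 2 for p, y in zip(probs, outcomes)) / len(probs)

# Hypothetical example: a forecaster that calls every event unlikely scores
# well when most events indeed do not occur, without real forecasting skill.
model_probs = [0.1, 0.1, 0.1, 0.1]   # "everything is unlikely"
outcomes    = [0,   0,   0,   1]     # one event actually occurred
print(brier_score(model_probs, outcomes))  # 0.21
```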
Generative Ghosts: Anticipating Benefits and Risks of AI Afterlives
As AI systems quickly improve in both breadth and depth of performance, they lend themselves to creating increasingly powerful and realistic agents, including the possibility of agents modeled on specific people. We anticipate that within our lifetimes it may become common practice for people to create a custom AI agent to interact with loved ones and/or the broader world after death. We call these generative ghosts, since such agents will be capable of generating novel content rather than merely parroting content produced by their creator while living. In this paper, we first discuss the design space of potential implementations of generative ghosts. We then discuss the practical and ethical implications of generative ghosts, including potential positive and negative impacts on individuals and society. Based on these considerations, we lay out a research agenda for the AI and HCI research communities to empower people to create and interact with AI afterlives in a safe and beneficial manner.
AI for Accessibility: An Agenda for the Global South
Vaishnav Kameswaran
Jerry Young
Nithya Sambasivan
Gaurav Aggarwal
ASSETS 2023 A11yFutures Workshop, ACM (2023)
AI technologies have the potential to improve the quality of life of marginalized populations, including people with disabilities. However, the majority of these AI solutions are designed for people in the Global North and have so far marginalized the needs of people with disabilities in the Global South. Yet the increasing proliferation of AI across the world suggests that this trend will change. This prompts the question: what are the key considerations for the design of AI solutions that center the needs of people with disabilities in the Global South, contexts often marked by poverty, limited resource availability, a lack of accessible support structures, and indifferent societal attitudes towards people with disabilities? In this position paper, we begin to answer this question. To do so, we draw upon a case study of designing a novel AI solution to support the indoor navigation practices of people with visual impairments. We provide guidance to HCI, AI, and accessibility researchers and practitioners to aid their efforts to design more inclusive AI technologies.
Towards Semantically-Aware UI Design Tools: Design, Implementation, and Evaluation of Semantic Grouping Guidelines
Peitong Duan
Bjoern Hartmann
Karina Nguyen
Marti Hearst
ICML 2023 Workshop on Artificial Intelligence and Human-Computer Interaction (2023)
A coherent semantic structure, where semantically-related elements are appropriately grouped, is critical for proper understanding of a UI. Ideally, UI design tools should help designers establish coherent semantic grouping. To work towards this, we contribute five semantic grouping guidelines that capture how human designers think about semantic grouping and are amenable to implementation in design tools. They were obtained from empirical observations of existing UIs, a literature review, and iterative refinement with UI experts’ feedback. We validated our guidelines through an expert review and heuristic evaluation; results indicate these guidelines capture valuable information about semantic structure. We demonstrate the guidelines’ use for building systems by implementing a set of computational metrics. These metrics detected many of the same severe issues that human design experts marked in a comparative study. Running our metrics on a larger UI dataset suggests many real UIs exhibit grouping violations.
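The paper's metrics are not detailed in this abstract; what follows is a hypothetical sketch of one proximity-style check in the same spirit (elements in a semantic group should sit closer to one another than to elements outside the group), not the authors' implementation. All names and the distance heuristic are assumptions.

```python
from dataclasses import dataclass

@dataclass
class Element:
    x: float
    y: float
    w: float
    h: float
    group: str  # the semantic group a designer assigned this element to

def center_dist(a: Element, b: Element) -> float:
    """Euclidean distance between element centers."""
    ax, ay = a.x + a.w / 2, a.y + a.h / 2
    bx, by = b.x + b.w / 2, b.y + b.h / 2
    return ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5

def grouping_violations(elems: list[Element]) -> list[Element]:
    """Flag elements farther from a group-mate than from some outside element."""
    flagged = []
    for a in elems:
        within = [center_dist(a, b) for b in elems
                  if b is not a and b.group == a.group]
        outside = [center_dist(a, b) for b in elems if b.group != a.group]
        if within and outside and max(within) > min(outside):
            flagged.append(a)
    return flagged
```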
Characterizing Image Accessibility on Wikipedia across Languages
Elisa Kreiss
Krishna Srinivasan
Tiziano Piccardi
Jesus Adolfo Hermosillo
Michael S. Bernstein
Christopher Potts
Wiki Workshop 2023 (to appear)
We make a first attempt to characterize image accessibility on Wikipedia across languages, present new experimental results that can inform efforts to assess description quality, and offer some strategies to improve Wikipedia's image accessibility.
Generative Agents: Interactive Simulacra of Human Behavior
Joon Sung Park
Joseph C. O'Brien
Percy Liang
Michael Bernstein
Proceedings of UIST 2023, ACM (2023)
Believable proxies of human behavior can empower interactive applications ranging from immersive environments to rehearsal spaces for interpersonal communication to prototyping tools. In this paper, we introduce generative agents: computational software agents that simulate believable human behavior. Generative agents wake up, cook breakfast, and head to work; artists paint, while authors write; they form opinions, notice each other, and initiate conversations; they remember and reflect on days past as they plan the next day. To enable generative agents, we describe an architecture that extends a large language model to store a complete record of the agent's experiences using natural language, synthesize those memories over time into higher-level reflections, and retrieve them dynamically to plan behavior. We instantiate generative agents to populate an interactive sandbox environment inspired by The Sims, where end users can interact with a small town of twenty-five agents using natural language. In an evaluation, these generative agents produce believable individual and emergent social behaviors: for example, starting with only a single user-specified notion that one agent wants to throw a Valentine's Day party, the agents autonomously spread invitations to the party over the next two days, make new acquaintances, ask each other out on dates to the party, and coordinate to show up for the party together at the right time. We demonstrate through ablation that the components of our agent architecture (observation, planning, and reflection) each contribute critically to the believability of agent behavior. By fusing large language models with computational, interactive agents, this work introduces architectural and interaction patterns for enabling believable simulations of human behavior.
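As a rough sketch of the retrieval component described above: memories are scored by a mix of recency, importance, and relevance. The decay constant, the equal weighting, and the helper names below are illustrative assumptions; the paper additionally normalizes the component scores and uses the language model itself to rate importance.

```python
import math
import time
from dataclasses import dataclass, field

@dataclass
class Memory:
    text: str
    importance: float        # e.g., 1-10; rated by an LLM in the paper
    embedding: list[float]   # from any sentence-embedding model
    last_access: float = field(default_factory=time.time)

def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(memories: list[Memory], query_emb: list[float],
             k: int = 3, decay: float = 0.995) -> list[Memory]:
    """Rank memories by recency + importance + relevance; return the top k."""
    now = time.time()
    def score(m: Memory) -> float:
        hours_since = (now - m.last_access) / 3600
        recency = decay ** hours_since              # exponential decay over time
        relevance = cosine(m.embedding, query_emb)
        return recency + m.importance / 10 + relevance  # equal weights (a sketch)
    return sorted(memories, key=score, reverse=True)[:k]
```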
Scientists' Perspectives on the Potential for Generative AI in their Fields
Generative AI models, including large language models and multimodal models that include text and other media, are on the cusp of transforming many aspects of modern life, including entertainment, education, civic life, the arts, and a range of professions. There is potential for Generative AI to have a substantive impact on the methods and pace of discovery for a range of scientific disciplines. We interviewed twenty scientists from a range of fields (including the physical, life, and social sciences) to gain insight into whether or how Generative AI technologies might add value to the practice of their respective disciplines, including not only ways in which AI might accelerate scientific discovery (i.e., research), but also other aspects of their profession, including the education of future scholars and the communication of scientific findings. In addition to identifying opportunities for Generative AI to augment scientists’ current practices, we also asked participants to reflect on concerns about AI. These findings can help guide the responsible development of models and interfaces for scientific education, inquiry, and communication.
Practical Challenges for Investigating Abbreviation Strategies
Elisa Kreiss
CHI 2023 Workshop on Assistive Writing, ACM (2023) (to appear)
Saying more while typing less is the ideal we strive towards when designing assistive writing technology that can minimize effort. Complementary to efforts on predictive completions is the idea of using a drastically abbreviated version of an intended message, which can then be reconstructed using language models. This paper highlights the challenges that arise when investigating what makes an abbreviation scheme promising for a potential application. We hope that this can provide a guide for designing studies that consequently allow for fundamental insights into efficient and goal-driven abbreviation strategies.
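As one concrete example of the kind of abbreviation scheme under study (a toy scheme chosen for illustration, not one proposed in the paper): keep each word's first letter, drop its remaining vowels, and ask a language model to reconstruct the original message.

```python
# Toy abbreviation scheme: keep each word's first letter, drop later vowels.
VOWELS = set("aeiou")

def abbreviate(message: str) -> str:
    """Compress a message by keeping first letters and dropping later vowels."""
    out = []
    for word in message.lower().split():
        head, rest = word[0], word[1:]
        out.append(head + "".join(c for c in rest if c not in VOWELS))
    return " ".join(out)

original = "could you please pick up milk on the way home"
abbreviated = abbreviate(original)
print(abbreviated)  # "cld y pls pck up mlk on th wy hm"
# A language model would then be prompted to expand `abbreviated` back into
# the intended message, e.g., "Expand this abbreviated sentence: '...'."
```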