Cynthia Bennett
Authored Publications

Toward Community-Led Evaluations of Text-to-Image AI Representations of Disability, Health, and Accessibility
Equity and Access in Algorithms, Mechanisms, and Optimization (EAAMO) (2025)
Responsible AI advocates for user evaluations, particularly when concerning people with disabilities, health conditions, and accessibility needs (DHA) – wide-ranging but umbrellaed sociodemographics. However, community-centered text-to-image AI (T2I) evaluations are often researcher-led, situating evaluators as consumers. We instead recruited 21 people with diverse DHA to evaluate T2I by writing and editing their own T2I prompts with their preferred language and topics, in a method mirroring everyday use. We contribute user-generated terminology categories which inform future research and data collections, necessary for developing authentic scaled evaluations. We additionally surface yet-undiscussed DHA AI harms intersecting race and class, and participants shared harm impacts they experienced as image-creator evaluators. To this end, we demonstrate that prompt engineering – proposed as a misrepresentation mitigation – was largely ineffective at improving DHA representations. We discuss the importance of evaluator agency to increase ecological validity in community-centered evaluations, and opportunities to research iterative prompting as an evaluation technique.
        
          
            
Amplifying Trans and Nonbinary Voices: A Community-Centred Harm Taxonomy for LLMs
Eddie Ungless, Beka Gulotta
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (2025)
We explore large language model (LLM) responses that may negatively impact the transgender and nonbinary (TGNB) community and introduce the Transing Transformers Toolkit, T3, which provides resources for identifying such harmful response behaviors. The heart of T3 is a community-centred taxonomy of harms, developed in collaboration with the TGNB community, which we complement with, amongst other guidance, suggested heuristics for evaluation. To develop the taxonomy, we adopted a multi-method approach that included surveys and focus groups with community experts. The contribution highlights the importance of community-centred approaches in mitigating harm, and outlines pathways for LLM developers to improve how their models handle TGNB-related topics.
    
        
          
            
    
        
          
            
              "Accessibility people, you go work on that thing of yours over there": Addressing Disability Inclusion in AI Product Organizations
            
          
        
        
          
            
              
                
                  
                    
    
    
    
    
    
                      
                        Sanika Moharana
                      
                    
                
              
            
              
                
                  
                    
                    
                  
              
            
              
                
                  
                    
                    
                      
                        Erin Buehler
                      
                    
                  
              
            
              
                
                  
                    
                    
                      
                        Michael Madaio
                      
                    
                  
              
            
              
                
                  
                    
                    
                      
                        Vinita Tibdewal
                      
                    
                  
              
            
              
                
                  
                    
                    
                  
              
            
          
          
          
          
            Proceedings of AIES 2025 (2025) (to appear)
          
          
        
        
        
          
              Preview abstract
          
          
The rapid emergence of generative AI models and AI-powered systems has surfaced a variety of concerns around responsibility, safety, and inclusion. Some of these concerns address specific vulnerable communities, including people with disabilities. At the same time, these systems may introduce harms upon disabled users that do not fit neatly into existing accessibility classifications, and may not be addressed by current accessibility practices. In this paper, we investigate how stakeholders across a variety of job types are encountering and addressing potentially negative impacts of AI on users with disabilities. Through interviews with 25 practitioners, we identify emerging challenges related to AI’s impact on disabled users, systemic obstacles that contribute to problems, and effective strategies for impacting change. Based on these findings, we offer suggestions for improving existing processes for creating AI-powered systems and supporting practitioners in developing skills to address these emerging challenges.
    
        
          
            
From Provenance to Aberrations: Image Creator and Screen Reader User Perspectives on Alt Text for AI-Generated Images
Maitraye Das, Alexander J. Fiannaca
CHI Conference on Human Factors in Computing Systems (2024)
AI-generated images are proliferating as a new visual medium. However, state-of-the-art image generation models do not output alternative (alt) text with their images, rendering them largely inaccessible to screen reader users (SRUs). Moreover, less is known about what information would be most desirable to SRUs in this new medium. To address this, we invited AI image creators and SRUs to evaluate alt text prepared from various sources and write their own alt text for AI images. Our mixed-methods analysis makes three contributions. First, we highlight creators’ perspectives on alt text, as creators are well-positioned to write descriptions of their images. Second, we illustrate SRUs’ alt text needs particular to the emerging medium of AI images. Finally, we discuss the promises and pitfalls of utilizing text prompts written as input for AI models in alt text generation, and areas where broader digital accessibility guidelines could expand to account for AI images.
    
        
          
            
“They only care to show us the wheelchair”: disability representation in text-to-image AI models
Avery Mack, Rida Qadri
CHI Conference on Human Factors in Computing Systems (2024)
This paper reports on disability representation in images output from text-to-image (T2I) generative AI systems. Through eight focus groups with 25 people with disabilities, we found that models repeatedly presented reductive archetypes for different disabilities. Often these representations reflected broader societal stereotypes and biases, which our participants were concerned to see reproduced through T2I. Our participants discussed further challenges with using these models including the current reliance on prompt engineering to reach satisfactorily diverse results. Finally, they offered suggestions for how to improve disability representation with solutions like showing multiple, heterogeneous images for a single prompt and including the prompt with images generated. Our discussion reflects on tensions and tradeoffs we found among the diverse perspectives shared to inform future research on representation-oriented generative AI system evaluation metrics and development processes.
    
        
        
          
Generative AI (GAI) is proliferating, and among its many applications are to support creative work (e.g., generating text, images, music) and to enhance accessibility (e.g., captions of images and audio). As GAI evolves, creatives must consider how (or how not) to incorporate these tools into their practices. In this paper, we present interviews at the intersection of these applications. We learned from 10 creatives with disabilities who intentionally use and do not use GAI in and around their creative work. Their mediums ranged from audio engineering to leatherwork, and they collectively experienced a variety of disabilities, from sensory to motor to invisible disabilities. We share cross-cutting themes of their access hacks, how creative practice and access work become entangled, and their perspectives on how GAI should and should not fit into their workflows. In turn, we offer qualities of accessible creativity with responsible AI that can inform future research.
    
        
          
            
Understanding Use Cases for AI-Powered Visual Interpretation Services
Ricardo Gonzalez, Jazmin Collins, Shiri Azenkot
CHI Conference on Human Factors in Computing Systems (2024)
              "Scene description" applications that describe visual content in a photo are useful daily tools for blind and low vision (BLV) people. Researchers have
studied their use, but they have only explored those that leverage remote sighted assistants; little is known about applications that use AI to generate
their descriptions. Thus, to investigate their use cases, we conducted a two-week diary study where 16 BLV participants used an AI-powered scene description
application we designed. Through their diary entries and follow-up interviews, users shared their information goals and assessments of the visual descriptions
they received. We analyzed the entries and found frequent use cases, such as identifying visual features of known objects, and surprising ones, such as avoiding contact with dangerous objects. We also found users scored the descriptions relatively low on average,
2.76 out of 5 (SD=1.49) for satisfaction and 2.43 out of 4 (SD=1.16) for trust, showing that descriptions still need signifcant improvements to deliver
satisfying and trustworthy experiences. We discuss future opportunities for AI as it becomes a more powerful accessibility tool for BLV users.
              
  
View details
          
        
      
    
        
          
            
Characterizing Image Accessibility on Wikipedia across Languages
Elisa Kreiss, Krishna Srinivasan, Tiziano Piccardi, Jesus Adolfo Hermosillo, Michael S. Bernstein, Christopher Potts
Wiki Workshop 2023 (to appear)
We make a first attempt to characterize image accessibility on Wikipedia across languages, present new experimental results that can inform efforts to assess description quality, and offer some strategies to improve Wikipedia's image accessibility.
    
        
          
            
AI’s Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia
Rida Qadri
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, 506–517
This paper presents a community-centered study of cultural limitations of text-to-image (T2I) models in the South Asian context. We theorize these failures using scholarship on dominant media regimes of representations and locate them within participants’ reporting of their existing social marginalizations. We thus show how generative AI can reproduce an outsider’s gaze for viewing South Asian cultures, shaped by global and regional power inequities. By centering communities as experts and soliciting their perspectives on T2I limitations, our study adds rich nuance into existing evaluative frameworks and deepens our understanding of the culturally-specific ways AI technologies can fail in non-Western and Global South settings. We distill lessons for responsible development of T2I models, recommending concrete pathways forward that can allow for recognition of structural inequalities.