Renee Shelby
Authored Publications
Generative AI in Creative Practice: ML-Artist Folk Theories of T2I Use, Harm, and Harm-Reduction
Shalaleh Rismani
Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), Association for Computing Machinery (2024), pp. 1–17 (to appear)
Abstract
Understanding how communities experience algorithms is necessary to mitigate potential harmful impacts. This paper presents folk theories of text-to-image (T2I) models to enrich understanding of how artist communities experience creative machine learning (ML) systems. This research draws on data collected from a workshop with 15 artists from 10 countries who incorporate T2I models in their creative practice. Through reflexive thematic analysis of workshop data, we highlight theorization of T2I use, harm, and harm-reduction. Folk theories of use envision T2I models both as an artistic medium and as a mundane tool, and locate true creativity as rising above model affordances. Theories of harm articulate T2I models as harmed by engineering efforts to eliminate glitches and by product policy efforts to limit functionality. Theories of harm-reduction orient toward protecting T2I models for creative practice through transparency and distributed governance. We examine how these theories relate, and conclude by discussing how folk theorization informs responsible AI efforts.
Creative ML Assemblages: The Interactive Politics of People, Processes, and Products
Ramya Malur Srinivasan
Katharina Burgdorf
Jennifer Lena
ACM Conference on Computer Supported Cooperative Work and Social Computing (2024) (to appear)
Abstract
Creative ML tools are collaborative systems that afford artistic creativity through their myriad interactive relationships. We propose using "assemblage thinking" to support analyses of creative ML by approaching it as a system in which the elements of people, organizations, culture, practices, and technology constantly influence each other. We model these interactions as "coordinating elements" that give rise to the social and political characteristics of a particular creative ML context, and call attention to three dynamic elements of creative ML whose interactions provide unique context for the social impact of a particular system: people, creative processes, and products. As creative assemblages are highly contextual, we present these as analytical concepts that computing researchers can adapt to better understand the functioning of a particular system or phenomenon and identify intervention points to foster desired change. This paper contributes to theorizing interactions with AI in the context of art, and to understanding how these interactions shape the production of algorithmic art.
Abstract
Generative AI (GAI) is proliferating, and among its many applications are supporting creative work (e.g., generating text, images, music) and enhancing accessibility (e.g., captioning images and audio). As GAI evolves, creatives must consider how (or how not) to incorporate these tools into their practices. In this paper, we present interviews at the intersection of these applications. We learned from 10 creatives with disabilities who intentionally use, and do not use, GAI in and around their creative work. Their mediums ranged from audio engineering to leatherwork, and they collectively experienced a variety of disabilities, from sensory to motor to invisible disabilities. We share cross-cutting themes of their access hacks, how creative practice and access work become entangled, and their perspectives on how GAI should and should not fit into their workflows. In turn, we offer qualities of accessible creativity with responsible AI that can inform future research.
Abstract
Inappropriate design and deployment of machine learning (ML) systems can lead to negative downstream social and ethical impacts, described here as social and ethical risks, for users, society, and the environment. Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent. We interviewed 30 industry practitioners about their current social and ethical risk management practices and collected their first reactions to adapting two safety engineering frameworks into their practice: System Theoretic Process Analysis (STPA) and Failure Mode and Effects Analysis (FMEA). Our findings suggest STPA/FMEA can provide an appropriate structure for social and ethical risk assessment and mitigation processes. However, we also find nontrivial challenges in integrating such frameworks into the fast-paced culture of the ML industry. We call on the CHI community to strengthen existing frameworks and assess their efficacy, ensuring that ML systems are safer for all people.
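To make the FMEA half of that pairing concrete, the sketch below shows a minimal FMEA-style worksheet adapted to an ML setting, triaging failure modes by the classic Risk Priority Number (severity * occurrence * detection). The failure modes, rating scales, and numbers here are illustrative assumptions, not material from the study.

```python
# Hedged sketch: a minimal FMEA-style worksheet for an ML system.
# The failure modes and 1-10 rating scales are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1 (negligible) .. 10 (severe social/ethical harm)
    occurrence: int  # 1 (rare) .. 10 (near-certain)
    detection: int   # 1 (easily caught pre-deployment) .. 10 (hard to detect)

    @property
    def rpn(self) -> int:
        # Classic FMEA Risk Priority Number: severity * occurrence * detection.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Toxic text generated for benign prompts", 8, 4, 5),
    FailureMode("Demographic performance gap in classifier", 7, 6, 7),
    FailureMode("Training data leaks personal information", 9, 2, 8),
]

# Triage: address the highest Risk Priority Numbers first.
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN={m.rpn:4d}  {m.description}")
```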
Identifying Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Shalaleh Rismani
Kathryn Henne
AJung Moon
Paul Nicholas
N'Mah Yilla-Akbari
Jess Gallegos
Emilio Garcia
Gurleen Virk
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery, pp. 723–741
Abstract
Understanding the broader landscape of potential harms from algorithmic systems enables practitioners to better anticipate the consequences of the systems they build. It also supports the prospect of incorporating controls to help minimize harms that emerge from the interplay of technologies and social and cultural dynamics. A growing body of scholarship has identified a wide range of harms across different algorithmic and machine learning (ML) technologies. However, computing researchers and practitioners lack a high-level, synthesized overview of harms from algorithmic systems arising at the micro-, meso-, and macro-levels of society. We present an applied taxonomy of sociotechnical harms to support more systematic surfacing of potential harms in algorithmic systems. Based on a scoping review of prior research on harms from AI systems (n=172), we identified five major themes related to sociotechnical harms: allocative, quality-of-service, representational, social system, and interpersonal harms. We describe these categories of harm and present case studies that illustrate the usefulness of the taxonomy. We conclude with a discussion of challenges and under-explored areas of harm in the literature, which present opportunities for future research.
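For readers who want to operationalize the taxonomy, a minimal sketch follows that encodes the five harm categories named in the abstract as a review checklist. The category names come from the abstract; the per-category review questions are illustrative assumptions, not the paper's definitions.

```python
# Hedged sketch: the taxonomy's five top-level harm categories as a simple
# review checklist. The review questions are illustrative assumptions.
from enum import Enum

class SociotechnicalHarm(Enum):
    ALLOCATIVE = "allocative"
    QUALITY_OF_SERVICE = "quality-of-service"
    REPRESENTATIONAL = "representational"
    SOCIAL_SYSTEM = "social system"
    INTERPERSONAL = "interpersonal"

REVIEW_QUESTIONS = {
    SociotechnicalHarm.ALLOCATIVE: "Does the system gate access to resources or opportunities?",
    SociotechnicalHarm.QUALITY_OF_SERVICE: "Does performance degrade for particular groups?",
    SociotechnicalHarm.REPRESENTATIONAL: "Can outputs stereotype, demean, or erase groups?",
    SociotechnicalHarm.SOCIAL_SYSTEM: "Could aggregate use destabilize social or information systems?",
    SociotechnicalHarm.INTERPERSONAL: "Can the system be used by one person to harm another?",
}

for harm, question in REVIEW_QUESTIONS.items():
    print(f"[{harm.value}] {question}")
```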
AI’s Regimes of Representation: A Community-centered Study of Text-to-Image Models in South Asia
Rida Qadri
Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency, Association for Computing Machinery, 506–517
Abstract
This paper presents a community-centered study of the cultural limitations of text-to-image (T2I) models in the South Asian context. We theorize these failures using scholarship on dominant media regimes of representation and locate them within participants' reporting of their existing social marginalizations. We thus show how generative AI can reproduce an outsider's gaze for viewing South Asian cultures, shaped by global and regional power inequities. By centering communities as experts and soliciting their perspectives on T2I limitations, our study adds rich nuance to existing evaluative frameworks and deepens our understanding of the culturally specific ways AI technologies can fail in non-Western and Global South settings. We distill lessons for responsible development of T2I models, recommending concrete pathways forward that can allow for recognition of structural inequalities.
Safety and Fairness for Content Moderation in Generative Models
Susan Hao
Piyush Kumar
Sarah Laszlo
Bhaktipriya Radharapu
CVPR Workshop on Ethical Considerations in Creative Applications of Computer Vision (2023)
Abstract
With significant advances in generative AI, new technologies are rapidly being deployed with generative components. Generative models are typically trained on large datasets, resulting in model behaviors that can mimic the worst of the content in the training data. Responsible deployment of generative technologies requires content moderation strategies, such as safety input and output filters. Here, we provide a theoretical framework for conceptualizing responsible content moderation of text-to-image generative technologies, including a demonstration of how to empirically measure the constructs we enumerate. We define and distinguish the concepts of safety, fairness, and metric equity, and enumerate example harms that can arise in each domain. We then demonstrate how the defined harms can be quantified. We conclude with a summary of how this style of harm quantification enables data-driven content moderation decisions.
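The input/output filtering pattern the abstract describes can be sketched in a few lines. The following is a hypothetical illustration rather than the authors' implementation: the classifiers, the threshold, and the per-group block-rate check (a rough stand-in for the paper's notion of metric equity) are all assumptions.

```python
# Hedged sketch of two-stage content moderation for a text-to-image system,
# plus a simple per-group block-rate comparison. All functions, thresholds,
# and data here are hypothetical stand-ins, not the paper's implementation.
from typing import Callable

def moderate(prompt: str,
             generate: Callable[[str], bytes],
             prompt_risk: Callable[[str], float],
             image_risk: Callable[[bytes], float],
             threshold: float = 0.5) -> bytes | None:
    """Filter the input prompt, then the generated output."""
    if prompt_risk(prompt) >= threshold:
        return None  # refuse unsafe prompts before generation
    image = generate(prompt)
    if image_risk(image) >= threshold:
        return None  # suppress unsafe generations
    return image

def block_rates_by_group(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Quantify how often requests are blocked, per identity group."""
    totals: dict[str, list[int]] = {}
    for group, blocked in decisions:
        totals.setdefault(group, []).append(int(blocked))
    return {g: sum(v) / len(v) for g, v in totals.items()}

# Usage with stub classifiers; a large gap in block rates between groups
# on comparable benign prompts would flag a potential fairness issue.
result = moderate("a watercolor of a forest",
                  generate=lambda p: b"<image bytes>",
                  prompt_risk=lambda p: 0.1,
                  image_risk=lambda img: 0.2)
print("generated" if result else "blocked")
print(block_rates_by_group([("group_a", True), ("group_a", False),
                            ("group_b", False), ("group_b", False)]))
```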
Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development
Shalaleh Rismani
Renelito Delos Santos
AJung Moon
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society (AIES 2023)
Towards Globally Responsible Generative AI Benchmarks
Rida Qadri
ICLR Workshop on Practical ML for Developing Countries (2023)
Abstract
As generative AI globalizes, there is an opportunity to reorient our nascent development frameworks and evaluative practices towards a global context. This paper uses lessons from a community-centered study of the failure modes of text-to-image models in the South Asian context to suggest how the AI/ML community can develop culturally and contextually situated benchmarks. We present three forms of mitigation for culturally situated evaluations: 1) diversifying our diversity measures, 2) participatory prompt dataset curation, and 3) multi-tiered evaluation structures for community engagement. Through these mitigations, we present concrete methods to make our evaluation processes more holistic and human-centered while also engaging with the demands of deployment at global scale.
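One way to picture the three mitigations working together is as a tiered evaluation plan. The sketch below is a hypothetical arrangement: the tier names, fields, and example prompts are invented for illustration and do not come from the paper.

```python
# Hedged sketch: the abstract's three mitigations arranged as evaluation
# tiers. Tier names, fields, and example prompts are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class EvaluationTier:
    name: str
    approach: str  # what this tier contributes to the evaluation
    prompts: list[str] = field(default_factory=list)

plan = [
    EvaluationTier("automated", "diversity measures beyond a single axis",
                   ["a wedding ceremony", "a street market at dusk"]),
    EvaluationTier("participatory-curation", "communities contribute prompts",
                   ["a street vendor in Karachi", "a Diwali celebration at home"]),
    EvaluationTier("community-review", "communities judge cultural fidelity of outputs"),
]

for tier in plan:
    print(f"{tier.name}: {tier.approach} ({len(tier.prompts)} prompts)")
```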
Infrastructuring Care: How Trans and Non-Binary People Meet Health and Well-Being Needs through Technology
Lauren Wilcox
Rajesh Veeraraghavan
Oliver Haimson
Gabi Erickson
Michael Turken
Beka Gulotta
ACM Conference on Human Factors in Computing Systems (CHI 2023), Association for Computing Machinery (2023)
Abstract
We present a cross-cultural diary study with 64 transgender (trans) and non-binary (TGNB) adults in Mexico, the U.S., and India to understand their experiences keeping track of and managing aspects of personal health and well-being. Based on a reflexive thematic analysis of diary data, we highlight sociotechnical interactions that shape how transgender and non-binary people track and manage aspects of their health and well-being. Specifically, we surface the ways in which transgender and non-binary people infrastructure forms of care by assembling elements of informal social ecologies, formalized knowledge sources, and self-reflective media. We then examine the forms of precarity that interact with care infrastructure and shape the management of health and well-being, including the management of gender identity transitions. We discuss the ways in which our findings extend knowledge at the intersection of technology and marginalized health needs, and conclude by arguing for the importance of a research agenda to move toward TGNB-inclusive design.