
Renee Shelby
Authored Publications
Understanding Help-Seeking and Help-Giving on Social Media for Image-Based Sexual Abuse
Tara Matthews
Miranda Wei
Patrick Gage Kelley
Sarah Meiklejohn
(2024)
Image-based sexual abuse (IBSA), like other forms of technology-facilitated abuse, is a growing threat to people's digital safety. Attacks include unwanted solicitations for sexually explicit images, extortion under threat of leaking images, and purposeful leaking of images to enact revenge or exert control. In this paper, we explore how people experiencing IBSA seek and receive help on social media. Specifically, we identify over 100,000 Reddit posts that engage relationship and advice communities for help related to IBSA. We draw on a stratified sample of these posts to qualitatively examine how various types of IBSA unfold, the support needs of victim-survivors experiencing IBSA, and how communities help victim-survivors navigate their abuse through technical, emotional, and relationship advice. In the process, we highlight how gender, relationship dynamics, and the threat landscape shape the design space of sociotechnical solutions. We also highlight gaps that remain in connecting victim-survivors with important care, regardless of whom they turn to for help.
Generative AI (GAI) is proliferating, and among its many applications are supporting creative work (e.g., generating text, images, and music) and enhancing accessibility (e.g., captioning images and audio). As GAI evolves, creatives must consider how (or how not) to incorporate these tools into their practices. In this paper, we present interviews at the intersection of these applications. We learned from 10 creatives with disabilities who intentionally use, and do not use, GAI in and around their creative work. Their mediums ranged from audio engineering to leatherwork, and they collectively experienced a variety of disabilities, from sensory to motor to invisible disabilities. We share cross-cutting themes of their access hacks, how creative practice and access work become entangled, and their perspectives on how GAI should and should not fit into their workflows. In turn, we offer qualities of accessible creativity with responsible AI that can inform future research.
Generative AI in Creative Practice: ML-Artist Folk Theories of T2I Use, Harm, and Harm-Reduction
Shalaleh Rismani
Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI '24), Association for Computing Machinery (2024), pp. 1-17 (to appear)
Understanding how communities experience algorithms is necessary to mitigate potential harmful impacts. This paper presents folk theories of text-to-image (T2I) models to enrich understanding of how artist communities experience creative machine learning (ML) systems. This research draws on data collected from a workshop with 15 artists from 10 countries who incorporate T2I models in their creative practice. Through reflexive thematic analysis of workshop data, we highlight theorization of T2I use, harm, and harm-reduction. Folk theories of use envision T2I models as an artistic medium or a mundane tool, and locate true creativity in rising above model affordances. Theories of harm articulate T2I models as harmed by engineering efforts to eliminate glitches and by product policy efforts to limit functionality. Theories of harm-reduction orient toward protecting T2I models for creative practice through transparency and distributed governance. We examine how these theories relate, and conclude by discussing how folk theorization informs responsible AI efforts.
Creative ML Assemblages: The Interactive Politics of People, Processes, and Products
Ramya Malur Srinivasan
Katharina Burgdorf
Jennifer Lena
ACM Conference on Computer Supported Cooperative Work and Social Computing (2024) (to appear)
Creative ML tools are collaborative systems that afford artistic creativity through their myriad interactive relationships. We propose using "assemblage thinking" to support analyses of creative ML by approaching it as a system in which the elements of people, organizations, culture, practices, and technology constantly influence each other. We model these interactions as "coordinating elements" that give rise to the social and political characteristics of a particular creative ML context, and call attention to three dynamic elements of creative ML whose interactions provide unique context for the social impact of a particular system: people, creative processes, and products. As creative assemblages are highly contextual, we present these as analytical concepts that computing researchers can adapt to better understand the functioning of a particular system or phenomenon and identify intervention points to foster desired change. This paper contributes to theorizing interactions with AI in the context of art, and how these interactions shape the production of algorithmic art.
Safety and Fairness for Content Moderation in Generative Models
Susan Hao
Piyush Kumar
Sarah Laszlo
Bhaktipriya Radharapu
CVPR Workshop on Ethical Considerations in Creative Applications of Computer Vision (2023)
With significant advances in generative AI, new technologies are rapidly being deployed with generative components. Generative models are typically trained on large datasets, resulting in model behaviors that can mimic the worst of the content in the training data. Responsible deployment of generative technologies requires content moderation strategies, such as safety input and output filters. Here, we provide a theoretical framework for conceptualizing responsible content moderation of text-to-image generative technologies, including a demonstration of how to empirically measure the constructs we enumerate. We define and distinguish the concepts of safety, fairness, and metric equity, and enumerate example harms that can arise in each domain. We then demonstrate how the defined harms can be quantified, and conclude with a summary of how this style of harms quantification enables data-driven content moderation decisions.
Towards Globally Responsible Generative AI Benchmarks
Rida Qadri
ICLR Workshop on Practical ML for Developing Countries (2023)
As generative AI globalizes, there is an opportunity to reorient our nascent development frameworks and evaluative practices toward a global context. This paper uses lessons from a community-centered study on the failure modes of text-to-image models in the South Asian context to offer suggestions on how the AI/ML community can develop culturally and contextually situated benchmarks. We present three forms of mitigation for culturally situated evaluations: 1) diversifying our diversity measures, 2) participatory prompt dataset curation, and 3) multi-tiered evaluation structures for community engagement. Through these mitigations, we present concrete methods to make our evaluation processes more holistic and human-centered while also engaging with the demands of deployment at global scale.
Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development
Shalaleh Rismani
Renelito Delos Santos
AJung Moon
AIES (2023)
Infrastructuring Care: How Trans and Non-Binary People Meet Health and Well-Being Needs through Technology
Lauren Wilcox
Rajesh Veeraraghavan
Oliver Haimson
Gabi Erickson
Michael Turken
Beka Gulotta
ACM Conference on Human Factors in Computing Systems (CHI 2023), Association for Computing Machinery (2023)
We present a cross-cultural diary study with 64 transgender (trans) and non-binary (TGNB) adults in Mexico, the U.S., and India to understand their experiences of keeping track of and managing aspects of personal health and well-being. Based on a reflexive thematic analysis of diary data, we highlight sociotechnical interactions that shape how transgender and non-binary people track and manage aspects of their health and well-being. Specifically, we surface the ways in which transgender and non-binary people infrastructure forms of care by assembling together elements of informal social ecologies, formalized knowledge sources, and self-reflective media. We then examine the forms of precarity that interact with care infrastructure and shape the management of health and well-being, including the management of gender identity transitions. We discuss the ways in which our findings extend knowledge at the intersection of technology and marginalized health needs, and conclude by arguing for the importance of a research agenda to move toward TGNB-inclusive design.
Power and information asymmetries between people and developers are further legitimized through contractual agreements that fail to provide meaningful consent and contestability. In particular, the Terms-of-Service (ToS) agreement is a contract of adhesion in which companies effectively set the terms and conditions. Whereas ToS reinforce existing structural inequalities, we seek to enable an intersectional accountability mechanism grounded in the practice of algorithmic reparation. Building on existing critiques of ToS in the context of algorithmic systems, we return to the roots of contract theory by recentering notions of agency and mutual assent. We develop a multipronged intervention we frame as the Terms-we-Serve-with (TwSw) social, computational, and legal framework. The TwSw is a new social imaginary centered on: (1) co-constitution of user agreements through participatory mechanisms; (2) addressing friction, leveraging the fields of design justice and critical design in the production and resolution of conflict; (3) enabling refusal mechanisms, reflecting the need for a sufficient level of human oversight and agency, including opting out; (4) complaint, through a feminist studies lens and open-sourced computational tools; and (5) disclosure-centered mediation to disclose, acknowledge, and take responsibility for harm, drawing on the field of medical law. We further inform our analysis through an exploratory design workshop with a South African gender-based violence reporting AI startup. We derive practical strategies for communities, technologists, and policy-makers to leverage a relational approach to algorithmic reparation, and propose that a radical restructuring of the “take-it-or-leave-it” ToS agreement is needed.
Inappropriate design and deployment of machine learning (ML) systems can lead to negative downstream social and ethical impacts, described here as social and ethical risks, for users, society, and the environment. Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent. We interviewed 30 industry practitioners about their current social and ethical risk management practices and collected their first reactions to adapting safety engineering frameworks into their practice, namely System Theoretic Process Analysis (STPA) and Failure Mode and Effects Analysis (FMEA). Our findings suggest that STPA/FMEA can provide an appropriate structure for social and ethical risk assessment and mitigation processes. However, we also find nontrivial challenges in integrating such frameworks into the fast-paced culture of the ML industry. We call on the CHI community to strengthen existing frameworks and assess their efficacy, ensuring that ML systems are safer for all people.