Andrew Smart
I'm a researcher in the Responsible AI Impact Lab working on machine learning fairness and the governance of AI. My background is in anthropology, philosophy, cognitive science, and brain imaging. I'm interested in the relationships between social ontology and causality, and in how to estimate the risks and impacts of using machine learning in high-stakes domains.
Authored Publications
Abstract
What is it to explain the outputs of an opaque machine learning model? Popular strategies in the literature are to develop explainable machine learning techniques. These techniques approximate how the model works by providing local or global information about its inner workings. In this paper, we argue that, in some cases, explaining machine learning outputs requires appealing to a third kind of explanation that we call socio-structural explanation. The importance of socio-structural explanations is motivated by the observation that machine learning models are not autonomous mathematico-computational entities; their very existence is intrinsically tied to the social context in which they operate. Sometimes social structures are mirrored in the design and training of machine learning models, and appealing to socio-structural explanations is then what explains why a given output is obtained. By thoroughly examining a well-known case of racially biased algorithmic resource allocation in healthcare, we highlight the significance of socio-structural explanations. One ramification of our proposal is that understanding how machine learning models perpetuate unjust social harms requires more than interpreting them with model interpretability methods; providing socio-structural explanations adds explanatory adequacy as to how and why machine learning outputs are obtained.
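As a concrete point of contrast, the sketch below shows the kind of local, perturbation-based feature attribution that the abstract groups under explainable machine learning. The toy model, feature names, and baseline values are illustrative assumptions, loosely modeled on the healthcare resource-allocation case the abstract cites (where prior cost served as a proxy for health need); none of it is code from the paper.

```python
# A minimal sketch of a *local* explanation: perturbation-based feature attribution
# for a single prediction of an opaque model. The model and feature names are toy
# stand-ins (prior cost as a proxy for health need), not code from the paper.

def local_attribution(predict, x, baseline):
    """Score each feature by how much replacing it with its baseline value
    changes the model's output for this one input."""
    base_score = predict(x)
    scores = {}
    for name in x:
        perturbed = dict(x)
        perturbed[name] = baseline[name]
        scores[name] = base_score - predict(perturbed)
    return scores

# Toy "opaque" model that leans heavily on prior healthcare cost.
predict = lambda patient: 0.7 * patient["prior_cost"] + 0.3 * patient["num_conditions"]

patient = {"prior_cost": 0.2, "num_conditions": 0.9}
baseline = {"prior_cost": 0.0, "num_conditions": 0.0}
print(local_attribution(predict, patient, baseline))

# The attribution says which inputs drove the score, but not why prior cost is
# low for some patients; that is the socio-structural part of the explanation.
```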
Abstract
Machine learning has a pseudoscience problem. An abundance of ethical issues arising from the use of machine learning (ML)-based technologies—by now, well documented—is inextricably entwined with the systematic epistemic misuse of these tools. We take a recent resurgence of deep learning-assisted physiognomic research as a case study in the relationship between ML-based pseudoscience and attendant social harms—the standard purview of “AI ethics.” In practice, the epistemic and ethical dimensions of ML misuse often arise from shared underlying reasons and are resolvable by the same pathways. Recent use of ML toward the ends of predicting protected attributes from photographs highlights the need for philosophical, historical, and domain-specific perspectives of particular sciences in the prevention and remediation of misused ML.
Abstract
Inappropriate design and deployment of machine learning (ML) systems lead to negative downstream social and ethical impacts -- described here as social and ethical risks -- for users, society, and the environment. Despite the growing need to regulate ML systems, current processes for assessing and mitigating risks are disjointed and inconsistent. We interviewed 30 industry practitioners on their current social and ethical risk management practices and collected their first reactions to adapting safety engineering frameworks into their practice -- namely, System Theoretic Process Analysis (STPA) and Failure Mode and Effects Analysis (FMEA). Our findings suggest that STPA/FMEA can provide an appropriate structure for social and ethical risk assessment and mitigation processes. However, we also find nontrivial challenges in integrating such frameworks into the fast-paced culture of the ML industry. We call on the CHI community to strengthen existing frameworks and assess their efficacy, ensuring that ML systems are safer for all people.
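FMEA itself is a well-established technique; the paper studies how practitioners react to adapting it rather than prescribing code. As a rough illustration of what an FMEA-style pass over ML failure modes could look like, here is a conventional Risk Priority Number calculation with made-up failure modes and ratings.

```python
# A minimal sketch of a conventional FMEA-style scoring pass, applied to
# hypothetical ML failure modes. The failure modes and 1-10 ratings are
# illustrative assumptions, not values from the study.

from dataclasses import dataclass

@dataclass
class FailureMode:
    description: str
    severity: int    # 1-10: how harmful the downstream effect is
    occurrence: int  # 1-10: how likely the failure is
    detection: int   # 1-10: how hard it is to detect before harm occurs

    @property
    def rpn(self) -> int:
        # Risk Priority Number, the standard FMEA prioritization score.
        return self.severity * self.occurrence * self.detection

modes = [
    FailureMode("Toxicity classifier over-flags dialectal speech", 7, 6, 5),
    FailureMode("Recommender amplifies self-harm content", 9, 3, 7),
]
for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
    print(f"RPN={m.rpn:3d}  {m.description}")
```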
Abstract
Large technology firms face the problem of moderating content on their platforms for compliance with laws and policies. To accomplish this at the scale of billions of pieces of content per day, a combination of human and machine review is necessary to label content. However, human error and subjective methods of measurement are inherent in many audit procedures. This paper introduces statistical analysis methods and mathematical techniques to determine, quantify, and minimize these sources of risk. Through these methodologies, we show that reviewer bias can be reduced.
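The abstract does not spell out which statistical techniques are used; as one standard way to quantify disagreement between a reviewer and a set of reference labels, the sketch below computes Cohen's kappa on made-up moderation labels.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
# The labels below are invented for illustration, not audit data from the paper.

from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Agreement between two raters, corrected for agreement expected by chance."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

reviewer  = ["violates", "ok", "ok", "violates", "ok", "ok"]
reference = ["violates", "ok", "violates", "violates", "ok", "ok"]
print(round(cohens_kappa(reviewer, reference), 3))  # kappa ~ 0.667 for this toy data
```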
Identifying Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction
Shalaleh Rismani
Kathryn Henne
AJung Moon
Paul Nicholas
N'Mah Yilla-Akbari
Jess Gallegos
Emilio Garcia
Gurleen Virk
Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, Association for Computing Machinery, 723–741
Abstract
Understanding the broader landscape of potential harms from algorithmic systems enables practitioners to better anticipate the consequences of the systems they build. It also supports the prospect of incorporating controls to help minimize harms that emerge from the interplay of technologies and social and cultural dynamics. A growing body of scholarship has identified a wide range of harms across different algorithmic and machine learning (ML) technologies. However, computing researchers and practitioners lack a high-level, synthesized overview of harms from algorithmic systems arising at the micro-, meso-, and macro-levels of society. We present an applied taxonomy of sociotechnical harms to support more systematic surfacing of potential harms in algorithmic systems. Based on a scoping review of prior research on harms from AI systems (n=172), we identified five major themes of sociotechnical harm: allocative, quality-of-service, representational, social system, and interpersonal harms. We describe these categories of harm and present case studies that illustrate the usefulness of the taxonomy. We conclude with a discussion of challenges and under-explored areas of harm in the literature, which present opportunities for future research.
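For readers who want to put the taxonomy to work, the snippet below writes the five harm themes out as a small enum that a harms-review checklist might use to tag findings; the category glosses and the tagging example are hypothetical illustrations, not tooling from the paper.

```python
# The five harm themes from the taxonomy as a tagging enum.
# The "e.g." glosses and the example finding are assumptions for illustration.

from enum import Enum

class SociotechnicalHarm(Enum):
    ALLOCATIVE = "allocative"                  # e.g., withheld opportunities or resources
    QUALITY_OF_SERVICE = "quality_of_service"  # e.g., degraded performance for some groups
    REPRESENTATIONAL = "representational"      # e.g., stereotyping or erasure
    SOCIAL_SYSTEM = "social_system"            # e.g., macro-level effects on institutions
    INTERPERSONAL = "interpersonal"            # e.g., harms between people mediated by the system

finding = {"issue": "speech recognizer underperforms for some dialects",
           "harm": SociotechnicalHarm.QUALITY_OF_SERVICE}
print(finding["harm"].value)
```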
Beyond the ML Model: Applying Safety Engineering Frameworks to Text-to-Image Development
Shalaleh Rismani
Renelito Delos Santos
AJung Moon
AIES 2023 (2023)
Healthsheet: development of a transparency artifact for health datasets
Diana Mincu
Lauren Wilcox
Razvan Adrian Amironesei
Nyalleng Moorosi
ACM FAccT Conference 2022, ACM (2022)
Abstract
Machine learning (ML) approaches have demonstrated promising results in a wide range of healthcare applications. Data plays a crucial role in developing ML-based healthcare systems that directly affect people's lives. Many of the ethical issues surrounding the use of ML in healthcare stem from structural inequalities underlying the way we collect, use, and handle data. Developing guidelines to improve documentation practices regarding the creation, use, and maintenance of ML healthcare datasets is therefore of critical importance. In this work, we introduce Healthsheet, a contextualized adaptation of the original datasheet questionnaire for health-specific applications. Through a series of semi-structured interviews, we adapt the datasheets for healthcare data documentation. As part of the Healthsheet development process, and to understand the obstacles researchers face in creating datasheets, we worked with three publicly available healthcare datasets as our case studies, each with a different type of structured data: electronic health records (EHR), clinical trial study data, and smartphone-based performance outcome measures. Our findings from the interview study and case studies show 1) that datasheets should be contextualized for healthcare, 2) that despite incentives to adopt accountability practices such as datasheets, there is a lack of consistency in their broader use, 3) how the ML-for-health community views datasheets, and Healthsheets in particular, as a diagnostic tool to surface the limitations and strengths of datasets, and 4) the relative importance of different fields in the datasheet to healthcare concerns.
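The full Healthsheet questionnaire is in the paper; as a rough sense of how such an artifact might be kept alongside a dataset, the sketch below pairs a few illustrative question areas with empty answer slots. The section names and wording are paraphrased assumptions, not the published instrument.

```python
# Illustrative paraphrase of a Healthsheet-style record kept next to a dataset.
# Section names and question wording are assumptions, not the published questionnaire.

healthsheet_questions = [
    ("motivation", "For what purpose was the dataset created, and by whom?"),
    ("composition", "What data does it contain (EHR, clinical trial data, device measures)?"),
    ("collection", "How, when, and at which sites or devices were the data collected?"),
    ("demographics", "Which patient populations are represented, and which are missing?"),
    ("consent_and_ethics", "What consent process and ethics review governed collection?"),
    ("versioning", "How are versions, updates, and corrections tracked?"),
    ("limitations", "What known biases or gaps should downstream users account for?"),
]

# A filled sheet is just each question paired with a free-text answer.
filled_sheet = {section: "" for section, _ in healthsheet_questions}
for section, question in healthsheet_questions:
    print(f"[{section}] {question}")
```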
Abstract
In response to growing concerns of bias, discrimination, and unfairness perpetuated by algorithmic systems, the datasets used to train and evaluate machine learning models have come under increased scrutiny. Many of these examinations have focused on the contents of machine learning datasets, finding glaring underrepresentation of minoritized groups. In contrast, relatively little work has been done to examine the norms, values, and assumptions embedded in these datasets. In this work, we conceptualize machine learning datasets as a type of informational infrastructure, and motivate a genealogy as method in examining the histories and modes of constitution at play in their creation. We present a critical history of ImageNet as an exemplar, utilizing critical discourse analysis of major texts around ImageNet’s creation and impact. We find that assumptions around ImageNet and other large computer vision datasets more generally rely on three themes: the aggregation and accumulation of more data, the computational construction of meaning, and making certain types of data labor invisible. By tracing the discourses that surround this influential benchmark, we contribute to the ongoing development of the standards and norms around data development in machine learning and artificial intelligence research.
Towards Accountability for Machine Learning Datasets
Alex Hanna
Christina Greer
Margaret Mitchell
Proceedings of FAccT 2021 (2021) (to appear)
Abstract
Rising concern for the societal implications of artificial intelligence systems has inspired demands for greater transparency and accountability. However, the datasets which empower machine learning are often used, shared, and re-used with little visibility into the processes of deliberation that led to their creation. Which stakeholder groups had their perspectives included when the dataset was conceived? Which domain experts were consulted regarding how to model subgroups and other phenomena? How were questions of representational biases measured and addressed? Who labeled the data? In this paper, we introduce a rigorous framework for dataset development transparency that supports decision-making and accountability. The framework uses the cyclical, infrastructural, and engineering nature of dataset development to draw on best practices from the software development lifecycle. Each stage of the data development lifecycle yields a set of documents that facilitate improved communication and decision-making, as well as drawing attention to the value and necessity of careful data work. The proposed framework is intended to contribute to closing the accountability gap in artificial intelligence systems by making visible the often overlooked work that goes into dataset creation.
The Use and Misuse of Counterfactuals in Ethical Machine Learning
Atoosa Kasirzadeh
FAccT '21: Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (2021)
Abstract
The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and the social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and social explainability can require an incoherent theory of what social categories are. Our findings suggest that, most often, social categories may not admit counterfactual manipulation, and hence may not appropriately satisfy the demands for evaluating the truth or falsity of counterfactuals. This is important because the widespread use of counterfactuals in machine learning can lead to misleading results when applied in high-stakes domains. Accordingly, we argue that even though counterfactuals play an essential part in some causal inferences, their use for questions of algorithmic fairness and social explanations can create more problems than they resolve. Our positive result is a set of tenets about using counterfactuals for fairness and explanations in machine learning.
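To make the object of critique concrete, the sketch below shows the naive attribute-flipping style of counterfactual check that the paper cautions against when the flipped variable is a social category such as race. The toy model and features are illustrative assumptions, not an example from the paper.

```python
# Naive "flip the protected attribute" counterfactual probe -- the style of check
# the paper argues can rest on an incoherent view of social categories.
# The model and features are made up for illustration.

def predict_score(applicant):
    # Stand-in for an opaque trained model.
    return 0.6 * applicant["income"] + 0.4 * applicant["zip_risk"]

applicant = {"income": 0.5, "zip_risk": 0.8, "race": "A"}

# Counterfactual probe: flip the social category, hold everything else fixed.
flipped = dict(applicant, race="B")
delta = predict_score(flipped) - predict_score(applicant)
print(delta)  # 0.0 here, yet this says little about fairness.

# The paper's worry: treating "race" as a switch that can be toggled while income,
# zip_risk, etc. stay fixed presupposes that the social category sits alongside the
# other features as an independent input, rather than shaping them.
```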