Donald Martin, Jr.

Donald Martin, Jr. is Head of Societal Context Understanding Tools & Solutions, Responsible AI & Human-Centered Technology at Google Research. He focuses on driving equitable innovation in the spaces where Google's products and services interact with society and understanding the intersections between Trust and Safety, Machine Learning (ML) Fairness and Ethical Artificial Intelligence (AI). He holds a Bachelor of Science degree in Electrical Engineering from the University of Colorado at Denver and founded its National Society of Black Engineers (NSBE) chapter. Donald has over 30 years of technology leadership experience in the telecommunications and information technology industries. He has held CIO, CTO, COO, and product manager positions at global software development companies and telecommunications service providers. Donald holds a US utility patent for "problem modeling in resource optimization." His most recent publication is the Harvard Business Review article "AI Engineers Need to Think Beyond Engineering."
Authored Publications
    Health equity assessment of machine learning performance (HEAL): a framework and dermatology AI model case study
    Terry Spitz
    Malcolm Chelliah
    Heather Cole-Lewis
    Stephanie Farquhar
    Qinghan Xue
    Jenna Lester
    Cían Hughes
    Patricia Strachan
    Fraser Tan
    Peggy Bui
    Craig Mermel
    Lily Peng
    Sunny Virmani
    Ivor Horn
    Cameron Chen
    The Lancet eClinicalMedicine (2024)
    Background Artificial intelligence (AI) has repeatedly been shown to encode historical inequities in healthcare. We aimed to develop a framework to quantitatively assess the performance equity of health AI technologies and to illustrate its utility via a case study. Methods Here, we propose a methodology to assess whether health AI technologies prioritise performance for patient populations experiencing worse outcomes, that is complementary to existing fairness metrics. We developed the Health Equity Assessment of machine Learning performance (HEAL) framework designed to quantitatively assess the performance equity of health AI technologies via a four-step interdisciplinary process to understand and quantify domain-specific criteria, and the resulting HEAL metric. As an illustrative case study (analysis conducted between October 2022 and January 2023), we applied the HEAL framework to a dermatology AI model. A set of 5420 teledermatology cases (store-and-forward cases from patients of 20 years or older, submitted from primary care providers in the USA and skin cancer clinics in Australia), enriched for diversity in age, sex and race/ethnicity, was used to retrospectively evaluate the AI model's HEAL metric, defined as the likelihood that the AI model performs better for subpopulations with worse average health outcomes as compared to others. The likelihood that AI performance was anticorrelated to pre-existing health outcomes was estimated using bootstrap methods as the probability that the negated Spearman's rank correlation coefficient (i.e., “R”) was greater than zero. Positive values of R suggest that subpopulations with poorer health outcomes have better AI model performance. Thus, the HEAL metric, defined as p(R > 0), measures how likely the AI technology is to prioritise performance for subpopulations with worse average health outcomes as compared to others (presented as a percentage below). Health outcomes were quantified as disability-adjusted life years (DALYs) when grouping by sex and age, and years of life lost (YLLs) when grouping by race/ethnicity. AI performance was measured as top-3 agreement with the reference diagnosis from a panel of 3 dermatologists per case. Findings Across all dermatologic conditions, the HEAL metric was 80.5% for prioritising AI performance of racial/ethnic subpopulations based on YLLs, and 92.1% and 0.0% respectively for prioritising AI performance of sex and age subpopulations based on DALYs. Certain dermatologic conditions were significantly associated with greater AI model performance compared to a reference category of less common conditions. For skin cancer conditions, the HEAL metric was 73.8% for prioritising AI performance of age subpopulations based on DALYs. Interpretation Analysis using the proposed HEAL framework showed that the dermatology AI model prioritised performance for race/ethnicity, sex (all conditions) and age (cancer conditions) subpopulations with respect to pre-existing health disparities. More work is needed to investigate ways of promoting equitable AI performance across age for non-cancer conditions and to better understand how AI models can contribute towards improving equity in health outcomes.
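    The HEAL metric described in the abstract reduces to a bootstrap estimate of p(R > 0), where R is the negated Spearman rank correlation between per-subpopulation model performance and pre-existing health outcomes. The sketch below is an illustrative reconstruction from the abstract, not the authors' released code; all function names are hypothetical, and the `outcome` array is assumed to be oriented so that larger values mean better pre-existing health (so R > 0 means the model performs better where health is worse):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no tie correction; fine for a sketch)."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))

def heal_metric(group, correct, outcome, n_boot=500, seed=0):
    """Estimate HEAL = p(R > 0) with R = -Spearman(per-group performance, outcome).

    group   : per-case subpopulation label (int array)
    correct : per-case 0/1 top-k agreement with the reference diagnosis
    outcome : per-group health outcome, one value per label in sorted label
              order, oriented so larger = better pre-existing health
    """
    rng = np.random.default_rng(seed)
    labels = np.unique(group)
    members = [np.flatnonzero(group == l) for l in labels]
    hits = 0
    for _ in range(n_boot):
        # Stratified bootstrap: resample cases within each subpopulation,
        # then recompute each subpopulation's accuracy.
        perf = np.array([correct[rng.choice(m, size=m.size)].mean()
                         for m in members])
        r = -spearman(perf, outcome)  # negated rank correlation
        hits += r > 0
    return hits / n_boot
```

    In the paper's case study, `correct` would be top-3 agreement with the dermatologist panel and `outcome` would come from DALYs or YLLs; here the resampling unit (cases, stratified by group) is an assumption the abstract does not fully specify.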
    Understanding the long-term impact of algorithmic interventions on society is vital to achieving responsible AI. Traditional evaluation strategies often fall short due to the dynamic nature of society, positioning reinforcement learning (RL) as an effective tool for simulating long-term dynamics. In RL, the difficulty of environment design remains a barrier to building robust agents that perform well in practical settings. To address this issue we tap into the field of system dynamics (SD), given the shared foundation in simulation modeling and a mature practice of participatory approaches. We introduce SDGym, a low-code library built on the OpenAI Gym framework which enables the generation of custom RL environments based on SD simulation models. Through a feasibility study we validate that well specified, rich RL environments can be built solely with SD models and a few lines of configuration. We demonstrate the capabilities of the SDGym environment using an SD model exploring the adoption of electric vehicles. We compare two SD simulators, PySD and BPTK-Py, for parity, and train a D4PG agent using the Acme framework to showcase learning and environment interaction. Our preliminary findings underscore the potential of SD to contribute to comprehensive RL environments, and the potential of RL to discover effective dynamic policies within SD models, an improvement in both directions. By open-sourcing SDGym, the intent is to galvanize further research and promote adoption across the SD and RL communities, thereby catalyzing collaboration in this emerging interdisciplinary space.
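    The general pattern the abstract describes, wrapping an SD stock-and-flow simulation behind the Gym `reset`/`step` interface, can be sketched as follows. This is not SDGym's actual API: the class name, parameters, and the toy Bass-diffusion model of EV adoption are all illustrative stand-ins, and the Gym protocol is mimicked directly so the sketch runs without the `gym` dependency (a real SDGym environment would instead delegate the flow computation to a PySD or BPTK-Py simulator):

```python
class EVAdoptionEnv:
    """Gym-style environment around a toy Bass-diffusion SD model of EV adoption.

    Observation: (adopters, potential_adopters); action: subsidy level in [0, 1].
    """

    def __init__(self, population=1_000_000.0, p=0.001, q=0.3,
                 dt=0.25, horizon=120):
        # p: innovation coefficient, q: imitation coefficient,
        # dt: SD integration step, horizon: episode length in steps.
        self.N, self.p, self.q, self.dt, self.horizon = population, p, q, dt, horizon

    def reset(self):
        self.adopters, self.t = 1_000.0, 0
        return self._obs()

    def _obs(self):
        return (self.adopters, self.N - self.adopters)

    def step(self, action):
        # SD flow equation: innovation (p, boosted by the subsidy action)
        # plus imitation (q, driven by the installed base of adopters).
        potential = self.N - self.adopters
        rate = (self.p * (1.0 + action)
                + self.q * self.adopters / self.N) * potential
        self.adopters += rate * self.dt  # Euler-integrate the stock
        self.t += 1
        reward = rate * self.dt          # new adopters this step
        done = self.t >= self.horizon
        return self._obs(), reward, done, {}
```

    An RL agent (such as the D4PG agent mentioned in the abstract) would interact with this loop exactly as with any Gym environment: call `reset()`, then repeatedly pass an action to `step()` until `done` is true.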
    We consider how causal theories and tools from the dynamical systems literature can be deployed to guide the representation of data when training neural networks for complex classification tasks. Specifically, we use simulated data to show that training a neural network with a data representation that makes explicit the invariant structural causal features of the data generating process of an epidemic system improves out-of-distribution (OOD) generalization performance on a classification task as compared to a more naive approach to data representation. We take these results to demonstrate that using a priori causal theory knowledge to reduce the epistemic uncertainty of ML developers can lead to more well-specified ML pipelines. This, in turn, points to the utility of a dynamical systems approach to the broader effort aimed at improving the robustness and safety of machine learning systems via improved ML system development practices.
    Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics
    Jill Kuhlberg
    William Samuel Isaac
    Machine Learning in Real Life (ML-IRL) ICLR 2020 Workshop (2020), pp. 6
    Recent research on algorithmic fairness has highlighted that the problem formulation phase of ML system development can be a key source of bias that has significant downstream impacts on ML system fairness outcomes. However, very little attention has been paid to methods for improving the fairness efficacy of this critical phase of ML system development. Current practice neither accounts for the dynamic complexity of high-stakes domains nor incorporates the perspectives of vulnerable stakeholders. In this paper we introduce community based system dynamics (CBSD) as an approach to enable the participation of typically excluded stakeholders in the problem formulation phase of the ML system development process and facilitate the deep problem understanding required to mitigate bias during this crucial stage.
    Advancing Community Engaged Approaches to Identifying Structural Drivers of Racial Bias in Health Diagnostic Algorithms
    Jill Kuhlberg
    Irene Headen
    Ellis Ballard
    International Conference of the System Dynamics Society (2020)
    Much attention and concern has been raised recently about bias and the use of machine learning algorithms in healthcare, especially as it relates to perpetuating racial discrimination and health disparities. Following an initial SD workshop at the Data for Black Lives II conference hosted at MIT in January of 2019, a group of conference participants interested in building capabilities to use SD to understand complex social issues convened monthly to explore issues related to racial bias in AI and implications for health disparities through qualitative and simulation modeling. Insights from the modeling process highlight the importance of centering the discussion of data and healthcare on people and their experiences with healthcare and science, and recognizing the social context where the algorithm is operating. Collective memory of community trauma, through deaths attributed to poor medical care, and negative experiences with healthcare are endogenous drivers of seeking treatment and experiencing effective care, which impact the availability and quality of data for algorithms. These drivers have drastically disparate initial conditions for different racial groups and point to the limited impact of focusing solely on improving diagnostic algorithms on achieving better health outcomes for some groups.
    Machine learning (ML) fairness research tends to focus primarily on mathematically-based interventions on often opaque algorithms or models and/or their immediate inputs and outputs. Recent research has pointed out the limitations of fairness approaches that rely on oversimplified mathematical models that abstract away the underlying societal context where models are ultimately deployed and from which model inputs and complex socially constructed concepts such as fairness originate. In this paper, we outline three new tools to improve the comprehension, identification and representation of societal context. First, we propose a complex adaptive systems (CAS) based model and definition of societal context that may help researchers and product developers expand the abstraction boundary of ML fairness work to include societal context. Second, we introduce collaborative causal theory formation (CCTF) as a key capability for establishing a socio-technical frame that incorporates diverse mental models and associated causal theories in modeling the problem and solution space for ML-based products. Finally, we identify system dynamics (SD) as an established, transparent and rigorous framework for practicing CCTF during all phases of the ML product development process. We conclude with a discussion of how these systems-based approaches to understanding the societal context within which socio-technical systems are embedded can improve the development of fair and inclusive ML-based products.