Donald Martin, Jr.

Donald Martin, Jr. is Head of Societal Context Understanding Tools & Solutions, Responsible AI & Human-Centered Technology at Google Research. He focuses on driving equitable innovation in the spaces where Google's products and services interact with society and on understanding the intersections between Trust and Safety, Machine Learning (ML) Fairness and Ethical Artificial Intelligence (AI). He holds a Bachelor of Science degree in Electrical Engineering from the University of Colorado at Denver and founded its National Society of Black Engineers (NSBE) chapter. Donald has over 30 years of technology leadership experience in the telecommunications and information technology industries. He has held CIO, CTO, COO, and product manager positions at global software development companies and telecommunications service providers. Donald holds a US utility patent for "problem modeling in resource optimization." His most recent publication is the Harvard Business Review article "AI Engineers Need to Think Beyond Engineering."
Authored Publications
Google Publications
    Understanding the long-term impact of algorithmic interventions on society is vital to achieving responsible AI. Traditional evaluation strategies often fall short due to the dynamic nature of society, positioning reinforcement learning (RL) as an effective tool for simulating long-term dynamics. In RL, the difficulty of environment design remains a barrier to building robust agents that perform well in practical settings. To address this issue, we tap into the field of system dynamics (SD), given the shared foundation in simulation modeling and a mature practice of participatory approaches. We introduce SDGym, a low-code library built on the OpenAI Gym framework that enables the generation of custom RL environments based on SD simulation models. Through a feasibility study, we validate that well-specified, rich RL environments can be built solely with SD models and a few lines of configuration. We demonstrate the capabilities of an SDGym environment using an SD model exploring the adoption of electric vehicles. We compare two SD simulators, PySD and BPTK-Py, for parity, and train a D4PG agent using the Acme framework to showcase learning and environment interaction. Our preliminary findings underscore the potential of SD to contribute to comprehensive RL environments, and of RL to discover effective dynamic policies within SD models, an improvement in both directions. By open-sourcing SDGym, we intend to galvanize further research and promote adoption across the SD and RL communities, thereby catalyzing collaboration in this emerging interdisciplinary space.
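    To make the idea concrete, the following is a minimal sketch of the Gym-style interface an environment generated from an SD model would expose. This is not SDGym's actual API: the class, parameter names, and the toy Bass-diffusion model of EV adoption are all hypothetical illustrations of wrapping a stock-and-flow simulation in the standard `reset`/`step` loop an RL agent interacts with.

    ```python
    class ToySDEnv:
        """A toy stock-and-flow (Bass diffusion) model of EV adoption,
        wrapped in a Gym-like reset/step interface. Hypothetical sketch;
        not SDGym's real API."""

        def __init__(self, population=1000.0, p=0.03, q=0.38, dt=0.25, horizon=40):
            self.population = population  # total market size (stock capacity)
            self.p, self.q = p, q         # innovation / imitation coefficients
            self.dt = dt                  # Euler integration time step
            self.horizon = horizon        # episode length in steps
            self.reset()

        def reset(self):
            self.adopters = 1.0  # stock: current EV adopters
            self.t = 0
            return self._obs()

        def _obs(self):
            # Observation: (adopters, remaining potential adopters)
            return (self.adopters, self.population - self.adopters)

        def step(self, action):
            # action: subsidy level in [0, 1] boosting the innovation coefficient
            p_eff = self.p * (1.0 + action)
            remaining = self.population - self.adopters
            # Bass diffusion flow: adoption from innovation + imitation
            flow = (p_eff + self.q * self.adopters / self.population) * remaining
            self.adopters = min(self.population, self.adopters + flow * self.dt)
            self.t += 1
            reward = flow * self.dt - 10.0 * action  # adoption gained minus subsidy cost
            done = self.t >= self.horizon
            return self._obs(), reward, done, {}

    # A fixed policy interacting with the environment, as an RL agent would.
    env = ToySDEnv()
    obs = env.reset()
    total, done = 0.0, False
    while not done:
        obs, reward, done, info = env.step(0.5)  # constant-subsidy policy
        total += reward
    ```

    An agent such as D4PG would replace the constant-subsidy policy above, learning a subsidy schedule that trades off adoption gains against subsidy cost over the episode.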
    Leveraging CBSD to Advance Community Engaged Approaches to Identifying Structural Drivers of Racial Bias in Health Diagnostic Algorithms
    Jill Kuhlberg
    Irene Headen
    Ellis Ballard
    International Conference of the System Dynamics Society (2020) (to appear)
    Much attention and concern have been raised recently about bias and the use of machine learning algorithms in healthcare, especially as they relate to perpetuating racial discrimination and health disparities. Following an initial SD workshop at the Data for Black Lives II conference hosted at MIT in January 2019, a group of conference participants interested in building capabilities to use SD to understand complex social issues convened monthly to explore issues related to racial bias in AI and the implications for health disparities through qualitative and simulation modeling. Insights from the modeling process highlight the importance of centering the discussion of data and healthcare on people and their experiences with healthcare and science, and of recognizing the social context where the algorithm is operating. Collective memory of community trauma, through deaths attributed to poor medical care, and negative experiences with healthcare are endogenous drivers of seeking treatment and experiencing effective care, which impact the availability and quality of data for algorithms. These drivers have drastically disparate initial conditions for different racial groups and point to the limited impact that focusing solely on improving diagnostic algorithms has on achieving better health outcomes for some groups.
    Machine learning (ML) fairness research tends to focus primarily on mathematically based interventions on often-opaque algorithms or models and/or their immediate inputs and outputs. Recent research has pointed out the limitations of fairness approaches that rely on oversimplified mathematical models that abstract away the underlying societal context where models are ultimately deployed and from which model inputs and complex socially constructed concepts such as fairness originate. In this paper, we outline three new tools to improve the comprehension, identification and representation of societal context. First, we propose a complex adaptive systems (CAS) based model and definition of societal context that may help researchers and product developers expand the abstraction boundary of ML fairness work to include societal context. Second, we introduce collaborative causal theory formation (CCTF) as a key capability for establishing a socio-technical frame that incorporates diverse mental models and associated causal theories in modeling the problem and solution space for ML-based products. Finally, we identify system dynamics (SD) as an established, transparent and rigorous framework for practicing CCTF during all phases of the ML product development process. We conclude with a discussion of how these systems-based approaches to understanding the societal context within which socio-technical systems are embedded can improve the development of fair and inclusive ML-based products.
    Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics
    Jill Kuhlberg
    William Samuel Isaac
    Machine Learning in Real Life (ML-IRL) ICLR 2020 Workshop (2020), pp. 6
    Recent research on algorithmic fairness has highlighted that the problem formulation phase of ML system development can be a key source of bias that has significant downstream impacts on ML system fairness outcomes. However, very little attention has been paid to methods for improving the fairness efficacy of this critical phase of ML system development. Current practice neither accounts for the dynamic complexity of high-stakes domains nor incorporates the perspectives of vulnerable stakeholders. In this paper we introduce community based system dynamics (CBSD) as an approach to enable the participation of typically excluded stakeholders in the problem formulation phase of the ML system development process and facilitate the deep problem understanding required to mitigate bias during this crucial stage.