
Emmanuel Klu
Emmanuel Klu is a research engineer at Google Research, where he focuses on society-centered AI. His research topics include fairness, causality, robustness, systems thinking, reinforcement learning, scalable evaluations, knowledge graphs, and AI for impact. His prior work includes data engineering and reliability engineering for large-scale systems. Emmanuel holds a bachelor's degree in Computer Science with a minor in Psychology from the Illinois Institute of Technology in Chicago.
Authored Publications
This paper presents SYMBIOSIS, an AI-powered framework that makes Systems Thinking accessible for addressing societal challenges and unlocks paths for leveraging Systems Thinking to improve AI systems. The platform establishes a centralized, open-source repository of systems thinking and system dynamics models, categorized by Sustainable Development Goals (SDGs) and societal topics using topic modeling and classification techniques. Systems Thinking resources, though critical for articulating causal theories in complex problem spaces, are often locked behind specialized tools and intricate notations, creating high barriers to entry. To address this, we developed a generative co-pilot that translates complex systems representations, such as causal loop and stock-and-flow diagrams, into natural language (and vice versa), allowing users to explore and build models without extensive technical training.
Rooted in community-based system dynamics (CBSD) and informed by community-driven insights on societal context, we aim to bridge the problem-understanding chasm. This gap, driven by epistemic uncertainty, often limits ML developers who lack the community-specific knowledge essential for problem understanding and formulation, leading to misaligned causal theories and reduced intervention effectiveness. Recent research identifies causal and abductive reasoning as crucial frontiers for AI, and Systems Thinking provides a naturally compatible framework for both. By making Systems Thinking frameworks more accessible and user-friendly, we aim to take a foundational step toward future research into responsible, society-centered AI that better integrates societal context by leveraging Systems Thinking frameworks and models. Our work underscores the need for ongoing research into AI's capacity to model essential system dynamics, such as feedback processes and time delays, paving the way for more socially attuned, effective AI systems.
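To make the co-pilot's translation task concrete, here is a minimal sketch of how a causal loop diagram could be encoded as data and rendered into natural language. This is not the SYMBIOSIS implementation; the `CausalLink`, `loop_polarity`, and `describe_loop` names are hypothetical illustrations of the diagram-to-text direction.

```python
from dataclasses import dataclass

@dataclass
class CausalLink:
    source: str
    target: str
    polarity: str  # "+" for same-direction influence, "-" for opposite

def loop_polarity(links):
    # A loop with an even number of negative links is reinforcing;
    # an odd number makes it balancing.
    negatives = sum(1 for link in links if link.polarity == "-")
    return "reinforcing" if negatives % 2 == 0 else "balancing"

def describe_loop(links):
    # Render each causal link as plain English, then classify the loop.
    phrases = [
        f"{link.source} {'raises' if link.polarity == '+' else 'lowers'} {link.target}"
        for link in links
    ]
    return f"This is a {loop_polarity(links)} loop: " + "; ".join(phrases) + "."

# Example: the classic word-of-mouth adoption loop.
adoption_loop = [
    CausalLink("adopters", "word of mouth", "+"),
    CausalLink("word of mouth", "adoption rate", "+"),
    CausalLink("adoption rate", "adopters", "+"),
]
print(describe_loop(adoption_loop))
# -> This is a reinforcing loop: adopters raises word of mouth; ...
```

The reverse direction (natural language to diagram) is where the generative co-pilot does the heavier lifting, but even this toy encoding shows how a structured loop representation supports mechanical verbalization.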
Understanding the long-term impact of algorithmic interventions on society is vital to achieving responsible AI. Traditional evaluation strategies often fall short due to the dynamic nature of society, positioning reinforcement learning (RL) as an effective tool for simulating long-term dynamics. In RL, the difficulty of environment design remains a barrier to building robust agents that perform well in practical settings. To address this issue, we tap into the field of system dynamics (SD), given the shared foundation in simulation modeling and a mature practice of participatory approaches. We introduce SDGym, a low-code library built on the OpenAI Gym framework that enables the generation of custom RL environments from SD simulation models. Through a feasibility study, we validate that well-specified, rich RL environments can be built solely with SD models and a few lines of configuration. We demonstrate the capabilities of an SDGym environment using an SD model exploring the adoption of electric vehicles. We compare two SD simulators, PySD and BPTK-Py, for parity, and train a D4PG agent using the Acme framework to showcase learning and environment interaction. Our preliminary findings underscore the potential of SD to contribute to comprehensive RL environments and the potential of RL to discover effective dynamic policies within SD models, an improvement in both directions. By open-sourcing SDGym, we intend to galvanize further research and promote adoption across the SD and RL communities, thereby catalyzing collaboration in this emerging interdisciplinary space.
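The core idea of wrapping an SD simulation in a Gym interface can be sketched directly. The following is an illustrative assumption, not the SDGym API: a toy Bass-diffusion-style stock-and-flow model of EV adoption, Euler-integrated inside a `gym.Env` subclass using the classic Gym step signature; the `ToyEVAdoptionEnv` name and all model constants are hypothetical.

```python
import gym
import numpy as np
from gym import spaces

class ToyEVAdoptionEnv(gym.Env):
    """A toy stock-and-flow model of EV adoption exposed through the Gym interface."""

    def __init__(self, population=1e6, dt=1.0, horizon=120):
        self.population, self.dt, self.horizon = population, dt, horizon
        # Action: a subsidy level in [0, 1] that amplifies the adoption flow.
        self.action_space = spaces.Box(low=0.0, high=1.0, shape=(1,), dtype=np.float32)
        # Observation: the current stock of adopters.
        self.observation_space = spaces.Box(low=0.0, high=population, shape=(1,), dtype=np.float32)
        self.reset()

    def reset(self):
        self.adopters, self.t = 1000.0, 0
        return np.array([self.adopters], dtype=np.float32)

    def step(self, action):
        subsidy = min(max(float(np.asarray(action).flat[0]), 0.0), 1.0)
        potential = self.population - self.adopters
        # Flow: innovation plus word-of-mouth imitation, boosted by the subsidy.
        adoption_rate = (0.001 + 0.5 * self.adopters / self.population) * potential * (1.0 + subsidy)
        # Euler-integrate the stock, capped at the total population.
        self.adopters = min(self.adopters + adoption_rate * self.dt, self.population)
        self.t += 1
        reward = adoption_rate - 1e4 * subsidy  # new adopters minus a notional subsidy cost
        done = self.t >= self.horizon
        return np.array([self.adopters], dtype=np.float32), reward, done, {}
```

Once an SD model is exposed this way, any Gym-compatible agent (for example, a D4PG agent from Acme, as in the study) can interact with it through the standard reset/step loop.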
Machine learning models often learn unintended biases which can lead to unfair outcomes for minority groups when deployed into society. This is especially concerning in text datasets where sensitive attributes such as race, gender, and sexual orientation may not be available. In this paper, we present a dataset coupled with an approach to improve text fairness in classifiers and language models. We create a new, more comprehensive identity lexicon, TIDAL, which includes 15,135 identity terms and associated pragmatic context across three demographic categories. We leverage TIDAL to develop an identity annotation and augmentation tool that can be used to improve the availability of identity context and the effectiveness of ML fairness techniques. We evaluate our approaches using human contributors, and additionally run experiments focused on dataset and model debiasing. Results show our assistive annotation technique improves the reliability and velocity of human-in-the-loop processes. Our dataset and methods uncover more disparities during evaluation, and also produce fairer models during remediation. These approaches provide a practical path forward for scaling classifier and generative model fairness in real-world settings.
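The annotation-plus-augmentation pattern can be illustrated with a minimal sketch. This is not the TIDAL tooling: the four-term `LEXICON`, the `SWAPS` table, and the `annotate`/`augment` helpers are hypothetical stand-ins for the 15,135-term lexicon and the tool described above.

```python
import re

# Toy stand-in for the lexicon: identity term -> demographic category.
LEXICON = {
    "women": "gender", "men": "gender",
    "black": "race", "white": "race",
}
# Counterfactual substitutions within each category.
SWAPS = {"women": ["men"], "men": ["women"], "black": ["white"], "white": ["black"]}

def annotate(text):
    # Tag every lexicon term found in the text with its span and category.
    spans = []
    for term, category in LEXICON.items():
        for match in re.finditer(rf"\b{re.escape(term)}\b", text, re.IGNORECASE):
            spans.append({"term": term, "category": category,
                          "start": match.start(), "end": match.end()})
    return spans

def augment(text):
    # Produce counterfactual variants by swapping one identity term at a time.
    variants = []
    for span in annotate(text):
        for replacement in SWAPS[span["term"]]:
            variants.append(text[:span["start"]] + replacement + text[span["end"]:])
    return variants

sentence = "The clinic surveyed black women about wait times."
print(annotate(sentence))   # spans for "black" (race) and "women" (gender)
print(augment(sentence))    # "... white women ..." and "... black men ..."
```

In a debiasing pipeline, the annotations surface where identity context appears in a corpus, while the counterfactual variants can be used to audit a model for disparities or to augment training data during remediation.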