Emmanuel Klu
Emmanuel Klu is a research engineer at Google Research, where he focuses on society-centered AI. His research topics include fairness, causality, robustness, systems thinking, reinforcement learning, scalable evaluations, knowledge graphs, and AI for impact. His prior work includes data engineering and reliability engineering for large-scale systems. Emmanuel holds a bachelor’s degree in Computer Science with a minor in Psychology from the Illinois Institute of Technology in Chicago.
Authored Publications
Understanding the long-term impact of algorithmic interventions on society is vital to achieving responsible AI. Traditional evaluation strategies often fall short due to the dynamic nature of society, positioning reinforcement learning (RL) as an effective tool for simulating long-term dynamics. In RL, the difficulty of environment design remains a barrier to building robust agents that perform well in practical settings. To address this issue, we tap into the field of system dynamics (SD), given its shared foundation in simulation modeling and its mature practice of participatory approaches. We introduce SDGym, a low-code library built on the OpenAI Gym framework that enables the generation of custom RL environments from SD simulation models. Through a feasibility study, we validate that well-specified, rich RL environments can be built solely from SD models and a few lines of configuration. We demonstrate the capabilities of the SDGym environment using an SD model exploring the adoption of electric vehicles. We compare two SD simulators, PySD and BPTK-Py, for parity, and train a D4PG agent using the Acme framework to showcase learning and environment interaction. Our preliminary findings underscore the potential of SD to contribute to comprehensive RL environments, and of RL to discover effective dynamic policies within SD models, an improvement in both directions. By open-sourcing SDGym, we intend to galvanize further research and promote adoption across the SD and RL communities, thereby catalyzing collaboration in this emerging interdisciplinary space.
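To make the SD-to-RL idea concrete, here is a minimal sketch of an SD stock-and-flow model wrapped as a Gym-style environment. It is not SDGym's actual API: the toy Bass-diffusion EV-adoption dynamics, the class name, the subsidy action, and the reward are all illustrative assumptions, and the Gym interface is imitated rather than imported so the example stays self-contained.

```python
# Illustrative sketch: a system-dynamics (SD) stock-and-flow model wrapped
# as a Gym-style RL environment. The toy Bass-diffusion EV-adoption model
# and all parameters below are hypothetical, not the paper's SD model.

class EVAdoptionEnv:
    """Gym-style environment: the agent sets an EV subsidy each year."""

    def __init__(self, population=1_000_000, horizon=30):
        self.population = population
        self.horizon = horizon
        self.reset()

    def reset(self):
        self.adopters = 1_000.0  # stock: current EV owners
        self.t = 0
        return self._obs()

    def _obs(self):
        # observation: adoption fraction and normalized time
        return (self.adopters / self.population, self.t / self.horizon)

    def step(self, action):
        """action: subsidy level in [0, 1], scaling the adoption flow."""
        subsidy = max(0.0, min(1.0, action))
        p = 0.01 * (1 + subsidy)  # innovation coefficient (subsidy-boosted)
        q = 0.3                   # imitation coefficient
        remaining = self.population - self.adopters
        # flow: Bass diffusion, integrated with one Euler step per year
        new_adopters = (p + q * self.adopters / self.population) * remaining
        self.adopters += new_adopters
        self.t += 1
        # reward: new adoption net of a (made-up) subsidy cost
        reward = new_adopters / self.population - 0.05 * subsidy
        done = self.t >= self.horizon
        return self._obs(), reward, done, {}


env = EVAdoptionEnv()
obs = env.reset()
done = False
while not done:
    obs, reward, done, _ = env.step(0.5)  # constant-subsidy baseline policy
```

An RL agent such as D4PG would replace the constant-subsidy policy here, searching for a dynamic subsidy schedule; the SD model supplies the environment dynamics, which is the division of labor the abstract describes.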
Machine learning models often learn unintended biases, which can lead to unfair outcomes for minority groups when deployed into society. This is especially concerning in text datasets where sensitive attributes such as race, gender, and sexual orientation may not be available. In this paper, we present a dataset coupled with an approach to improve text fairness in classifiers and language models. We create a new, more comprehensive identity lexicon, TIDAL, which includes 15,135 identity terms and associated pragmatic context across three demographic categories. We leverage TIDAL to develop an identity annotation and augmentation tool that can be used to improve the availability of identity context and the effectiveness of ML fairness. We evaluate our approaches using human contributors, and additionally run experiments focused on dataset and model debiasing. Results show our assistive annotation technique improves the reliability and velocity of human-in-the-loop processes. Our dataset and methods uncover more disparities during evaluation, and also produce fairer models during remediation. These approaches provide a practical path forward for scaling classifier and generative model fairness in real-world settings.
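The general shape of lexicon-driven annotation and counterfactual augmentation can be sketched as follows. This is not the TIDAL tool itself: the lexicon entries, categories, and function names below are made-up stand-ins, and TIDAL's actual schema (15,135 terms with pragmatic context) is far richer.

```python
# Illustrative sketch of lexicon-based identity annotation and counterfactual
# augmentation, in the spirit of the paper's TIDAL-based tool. The tiny
# lexicon and its categories are hypothetical examples only.
import re

# toy lexicon: identity term -> demographic category
LEXICON = {
    "woman": "gender",
    "man": "gender",
    "muslim": "religion",
}

def annotate(text):
    """Return (term, category, start, end) spans for lexicon matches."""
    spans = []
    for term, category in LEXICON.items():
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", text.lower()):
            spans.append((term, category, m.start(), m.end()))
    return sorted(spans, key=lambda s: s[2])

def augment(text, swaps):
    """Counterfactual augmentation: swap identity terms while preserving
    the rest of the sentence, e.g. to probe a classifier for disparities."""
    out = text
    for old, new in swaps.items():
        out = re.sub(r"\b" + re.escape(old) + r"\b", new, out,
                     flags=re.IGNORECASE)
    return out

spans = annotate("A woman spoke with a Muslim colleague.")
swapped = augment("A woman spoke.", {"woman": "man"})
```

Annotated spans make identity context available to downstream fairness evaluation even when sensitive attributes are not labeled, and the swapped counterfactuals support the dataset- and model-debiasing experiments the abstract mentions.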