SDGym: Low-Code Reinforcement Learning Environments using System Dynamics Models

DJ Passey
Emmanuel Klu
arXiv Preprint (2023)

Abstract

Understanding the long-term impact of algorithmic interventions on society is vital to achieving responsible AI. Traditional evaluation strategies often fall short due to the dynamic nature of society; this positions reinforcement learning (RL) as an effective tool for simulating long-term dynamics. In RL, however, the difficulty of environment design remains a barrier to building robust agents that perform well in practical settings. To address this issue, we tap into the field of system dynamics (SD), given its shared foundation in simulation modeling and its mature practice of participatory approaches. We introduce SDGym, a low-code library built on the OpenAI Gym framework that enables the generation of custom RL environments from SD simulation models. Through a feasibility study, we validate that well-specified, rich RL environments can be built solely from SD models and a few lines of configuration. We demonstrate the capabilities of an SDGym environment using an SD model of electric vehicle adoption. We compare two SD simulators, PySD and BPTK-Py, for parity, and train a D4PG agent using the Acme framework to showcase learning and environment interaction. Our preliminary findings underscore the potential of SD to contribute to comprehensive RL environments, and of RL to discover effective dynamic policies within SD models, an improvement in both directions. By open-sourcing SDGym, we intend to galvanize further research and promote adoption across the SD and RL communities, catalyzing collaboration in this emerging interdisciplinary space.
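
To make the idea concrete, the sketch below shows the general pattern SDGym automates: wrapping a system dynamics simulation as an OpenAI Gym environment whose actions set model levers and whose observations read model variables. It is a minimal illustration, not SDGym's actual API; the model file, the variable names "incentive_level" and "ev_adoption_share", and the reward are hypothetical stand-ins inspired by the electric vehicle adoption model described above.

```python
# Illustrative only: a hand-written Gym wrapper around a PySD model.
# SDGym aims to replace this boilerplate with a few lines of configuration.
import gym
import numpy as np
import pysd


class EVAdoptionEnv(gym.Env):
    """Hypothetical environment: one SD lever as the action, one SD variable as the observation."""

    def __init__(self, model_path="ev_model.mdl", horizon=100):
        self.model = pysd.read_vensim(model_path)  # hypothetical Vensim model file
        self.horizon = horizon
        self.t = 0
        # Action: the policy lever (e.g., an EV incentive level) set each step.
        self.action_space = gym.spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)
        # Observation: the adoption share reported by the simulation.
        self.observation_space = gym.spaces.Box(0.0, 1.0, shape=(1,), dtype=np.float32)

    def reset(self):
        self.model.reload()  # restore the SD model's initial conditions
        self.t = 0
        return np.array([0.0], dtype=np.float32)

    def step(self, action):
        self.t += 1
        # Advance the SD simulation one step with the chosen lever value.
        result = self.model.run(
            params={"incentive_level": float(action[0])},
            return_columns=["ev_adoption_share"],
            return_timestamps=[self.t],
            initial_condition="current",
        )
        obs = np.array([result["ev_adoption_share"].iloc[-1]], dtype=np.float32)
        reward = float(obs[0])  # e.g., reward higher adoption
        done = self.t >= self.horizon
        return obs, reward, done, {}
```

An agent (for example, a D4PG learner built with Acme) would then interact with such an environment through the standard reset/step loop.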

Research Areas