Liviu Panait
Liviu Panait received a Ph.D. degree in Computer Science from George Mason University in 2007, and is currently working on organizing the world's information and making it universally accessible and useful. His research interests include machine learning, multiagent systems, computer games, artificial life, data mining, and information retrieval.
Liviu Panait co-chaired the AAMAS 2007 Workshop on Adaptive and Learning Agents and the AAMAS 2006 Workshop on Adaptation and Learning in Autonomous Agents and Multiagent Systems, co-organized the AAAI 2005 Fall Symposium on Coevolutionary and Coadaptive Systems, served as a program committee member or invited reviewer for multiple international conferences and journals, and is a member of the IEEE Task Force on Coevolution. He is a co-author of the ECJ evolutionary computation library and the MASON multi-agent simulation toolkit. For more information, please visit his home page.
Authored Publications
Learning to Generate Image Embeddings with User-level Differential Privacy
Maxwell D. Collins
Yuxiao Wang
Sewoong Oh
IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2023) (to appear)
Abstract
We consider training feature extractors with user-level differential privacy to map images to embeddings from large-scale supervised data. To achieve user-level differential privacy, federated learning algorithms are extended and applied to aggregate user-partitioned data, together with sensitivity control and noise addition. We demonstrate that a variant of the federated learning algorithm with partial aggregation and private reconstruction can achieve strong privacy-utility trade-offs. When a large-scale dataset is available, it is possible to train feature extractors with both strong utility and privacy guarantees by combining techniques such as public pretraining, virtual clients, and partial aggregation.
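The core of user-level sensitivity control described in the abstract above can be illustrated with a small sketch: clip each user's aggregated update to a fixed L2 norm (bounding any single user's influence), average, and add Gaussian noise. This is a minimal illustration of the general clip-and-noise pattern, not the paper's actual algorithm; the function name and parameters are hypothetical.

```python
import numpy as np

def dp_aggregate(user_updates, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Aggregate per-user model updates with user-level sensitivity control:
    clip each user's update to an L2 bound, average the clipped updates,
    then add Gaussian noise calibrated to the clip norm (hypothetical sketch)."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for u in user_updates:
        norm = np.linalg.norm(u)
        scale = min(1.0, clip_norm / (norm + 1e-12))  # bound one user's influence
        clipped.append(u * scale)
    mean = np.mean(clipped, axis=0)
    # Gaussian noise; effective noise shrinks as more users contribute
    sigma = noise_multiplier * clip_norm / len(user_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)
```

Because the sensitivity of the clipped average is bounded regardless of what any single user contributes, the added Gaussian noise yields a user-level (rather than example-level) privacy guarantee.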
Theoretical Convergence Guarantees for Cooperative Coevolutionary Algorithms
Evolutionary Computation Journal (2010)
Abstract
Cooperative coevolutionary algorithms have the potential to significantly speed up the search process by dividing the space into parts that can each be conquered separately. Unfortunately, recent research has presented theoretical and empirical arguments that these algorithms might not be fit for optimization tasks, as they may drift toward suboptimal solutions in the search space. This paper details an extended formal model for cooperative coevolutionary algorithms and uses it to demonstrate that these algorithms will converge to the globally optimal solution if properly configured and given enough resources. We also present an intuitive graphical visualization of the basins of attraction to optimal and suboptimal solutions in the search space.
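The divide-and-conquer idea in the abstract above can be sketched as a minimal two-population cooperative coevolutionary algorithm: each population evolves one coordinate of a function, and individuals are evaluated by pairing them with the best-known collaborator from the other population. This is an illustrative toy, not the paper's formal model; the function and parameter names are hypothetical.

```python
import random

def ccea_maximize(f, pop_size=20, generations=50, seed=0):
    """Minimal two-population CCEA sketch: each population evolves one
    coordinate of f(x, y); individuals are scored with the best-known
    collaborator from the other population (an optimistic collaboration
    scheme, which the convergence analysis in this line of work favors)."""
    rng = random.Random(seed)
    pop_x = [rng.uniform(-5, 5) for _ in range(pop_size)]
    pop_y = [rng.uniform(-5, 5) for _ in range(pop_size)]
    best_y = max(pop_y, key=lambda y: f(0.0, y))
    best_x = max(pop_x, key=lambda x: f(x, best_y))
    for _ in range(generations):
        # evaluate each x against the best collaborator y, truncate, mutate
        pop_x.sort(key=lambda x: f(x, best_y), reverse=True)
        pop_x = pop_x[:pop_size // 2]
        pop_x += [x + rng.gauss(0, 0.3) for x in pop_x]
        best_x = max(pop_x, key=lambda x: f(x, best_y))
        # then do the same for the y population against the best x
        pop_y.sort(key=lambda y: f(best_x, y), reverse=True)
        pop_y = pop_y[:pop_size // 2]
        pop_y += [y + rng.gauss(0, 0.3) for y in pop_y]
        best_y = max(pop_y, key=lambda y: f(best_x, y))
    return best_x, best_y
```

For a separable objective such as f(x, y) = -(x - 1)^2 - (y + 2)^2, each population can conquer its coordinate independently and the search converges near the optimum (1, -2).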
Abstract
In this paper, we discuss a curious relationship between Cooperative Coevolutionary Algorithms (CCEAs) and Univariate EDAs. Inspired by the theory of CCEAs, we also present a new EDA with theoretical convergence guarantees, together with preliminary experimental results comparing it against existing Univariate EDAs.
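To make the CCEA/Univariate-EDA connection concrete, here is a minimal sketch of a classic univariate EDA (UMDA) on OneMax: like a CCEA that assigns one "population" per variable, it models each bit independently, sampling, selecting, and re-estimating per-bit marginals. This is a standard textbook algorithm used for illustration, not the new EDA proposed in the paper.

```python
import random

def umda_onemax(n_bits=20, pop_size=50, elite_frac=0.5, generations=40, seed=0):
    """Univariate Marginal Distribution Algorithm (UMDA) on OneMax:
    sample a population from independent per-bit probabilities, keep
    the fittest fraction, and re-estimate each marginal separately."""
    rng = random.Random(seed)
    p = [0.5] * n_bits  # independent per-bit probabilities
    for _ in range(generations):
        pop = [[1 if rng.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        pop.sort(key=sum, reverse=True)  # OneMax fitness = number of ones
        elite = pop[:int(pop_size * elite_frac)]
        # re-estimate marginals from the selected individuals
        p = [sum(ind[i] for ind in elite) / len(elite) for i in range(n_bits)]
        # clamp to avoid premature fixation of any marginal
        p = [min(0.95, max(0.05, q)) for q in p]
    return p
```

The per-variable independence of the marginals is exactly what parallels the separate populations of a CCEA, which is the intuition behind the relationship the paper explores.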
Abstract
This paper presents the dynamics of multiple learning agents from an evolutionary game theoretic perspective. We provide replicator dynamics models for cooperative coevolutionary algorithms and for traditional multiagent Q-learning, and we extend these differential equations to account for lenient learners: agents that forgive possible mismatched teammate actions that resulted in low rewards. We use these extended formal models to study the convergence guarantees for these algorithms, and also to visualize the basins of attraction to optimal and suboptimal solutions in two benchmark coordination problems. We demonstrate that lenience provides learners with more accurate information about the benefits of performing their actions, resulting in a higher likelihood of convergence to the globally optimal solution. In addition, our analysis indicates that the choice of learning algorithm has an insignificant impact on the overall performance of multiagent learning algorithms; rather, the performance of these algorithms depends primarily on the level of lenience that the agents exhibit to one another. Finally, our research supports the strength and generality of evolutionary game theory as a backbone for multiagent learning.
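A common way to model lenience in this evolutionary game theoretic setting is to give each action the expected MAXIMUM of kappa payoff draws against the teammate's current mixed strategy, then run replicator-style dynamics on those lenient payoffs. The sketch below follows that formulation on the climbing game, a standard benchmark coordination problem; the exponentiated update is my own choice to handle negative payoffs, and the names are hypothetical, not taken from the paper.

```python
import numpy as np

# Climbing game: joint action (0, 0) is globally optimal (payoff 11),
# but miscoordination around it is heavily penalized (-30).
CLIMB = np.array([[11., -30.,  0.],
                  [-30.,  7.,  6.],
                  [ 0.,   0.,  5.]])

def lenient_payoffs(A, q, kappa):
    """Expected payoff of each row action when the agent keeps only the
    maximum of kappa i.i.d. payoff draws against a teammate playing
    mixture q (the 'lenience' model: poor rewards caused by teammate
    exploration are forgiven)."""
    u = np.zeros(A.shape[0])
    for i in range(A.shape[0]):
        order = np.argsort(A[i])          # payoffs in increasing order
        cdf = np.cumsum(q[order])         # P(draw <= current payoff)
        prev = 0.0
        for k, j in enumerate(order):
            cur = cdf[k] ** kappa         # P(max of kappa draws <= payoff)
            u[i] += A[i][j] * (cur - prev)
            prev = cur
    return u

def replicator(A, kappa=1, steps=500, lr=0.05):
    """Discrete-time, exponentiated replicator-style dynamics for two
    lenient learners in a common-payoff game (illustrative sketch)."""
    p = np.ones(A.shape[0]) / A.shape[0]
    q = np.ones(A.shape[1]) / A.shape[1]
    for _ in range(steps):
        up = lenient_payoffs(A, q, kappa)
        uq = lenient_payoffs(A.T, p, kappa)
        p = p * np.exp(lr * up); p /= p.sum()
        q = q * np.exp(lr * uq); q /= q.sum()
    return p, q
```

With kappa = 1 the lenient payoff reduces to the plain expected payoff, and the dynamics are attracted to suboptimal but safe joint actions; with larger kappa (e.g. 5), the miscoordination penalties around the optimum are forgiven and both players converge on the globally optimal action 0, which is the basin-of-attraction effect the abstract describes.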
Theoretical Advantages of Lenient Learners in Multiagent Systems
Karl Tuyls
Proceedings of the Sixth International Conference on Autonomous Agents and Multiagent Systems (AAMAS-07), ACM (2007)
Abstract
This paper presents the dynamics of multiple reinforcement learning agents from an Evolutionary Game Theoretic perspective. We provide a Replicator Dynamics model for traditional multiagent Q-learning, and we then extend these differential equations to account for lenient learners: agents that forgive possible mistakes of their teammates that resulted in lower rewards. We use this extended formal model to visualize the basins of attraction of both traditional and lenient multiagent Q-learners in two benchmark coordination problems. The results indicate that lenience provides learners with more accurate estimates for the utility of their actions, resulting in higher likelihood of convergence to the globally optimal solution. In addition, our research supports the strength of EGT as a backbone for multiagent reinforcement learning.
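The algorithmic counterpart of this analysis is lenient Q-learning: each agent buffers several rewards for an action and updates its Q-value toward only the best of them, so low rewards caused by a teammate's exploration are forgiven. Below is a minimal sketch for two independent learners in a common-payoff matrix game; it is an illustration of the general idea, not the exact algorithm analyzed in the paper, and all names and parameters are hypothetical.

```python
import random

def lenient_q(payoff, kappa=5, rounds=5000, alpha=0.1, eps=0.1, seed=0):
    """Two independent lenient Q-learners on a common-payoff matrix game.
    Each agent collects kappa rewards per action and updates its Q-value
    toward only the MAXIMUM of them, forgiving teammate mistakes."""
    rng = random.Random(seed)
    n = len(payoff)
    Q = [[0.0] * n, [0.0] * n]
    buf = [[[] for _ in range(n)], [[] for _ in range(n)]]

    def act(a):
        # epsilon-greedy action selection
        if rng.random() < eps:
            return rng.randrange(n)
        return max(range(n), key=lambda i: Q[a][i])

    for _ in range(rounds):
        i, j = act(0), act(1)
        r = payoff[i][j]  # both agents receive the same reward
        for a, ai in ((0, i), (1, j)):
            buf[a][ai].append(r)
            if len(buf[a][ai]) >= kappa:
                target = max(buf[a][ai])  # lenience: keep only the best reward
                Q[a][ai] += alpha * (target - Q[a][ai])
                buf[a][ai].clear()
    return Q
```

On a coordination game with miscoordination penalties, such as [[10, -10], [-10, 5]], the lenient update lets both agents estimate the optimal joint action's true value (10) despite the -10 rewards suffered while the teammate explores, so both learners settle on action 0.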