
Olivier Bachem
Olivier is a research scientist on the Google Brain team interested in fundamental problems in machine learning and artificial intelligence. He received his PhD from ETH Zurich, where he was supervised by Andreas Krause in the Learning & Adaptive Systems group. In his dissertation, he investigated coresets (small summaries of large data sets with theoretical guarantees) and other sampling methods for large-scale clustering. During his PhD, he held a Google PhD Fellowship in Machine Learning and was an Associated Fellow at the Max Planck ETH Center for Learning Systems. Before that, he obtained a bachelor’s degree in economics (University of St. Gallen), a master’s degree in quantitative finance (ETH Zurich & University of Zurich), and a master’s degree in statistics (ETH Zurich), for which he was awarded an ETH medal for his master’s thesis.
Authored Publications
Factually Consistent Summarization via Reinforcement Learning with Textual Entailment Feedback
Paul Roit, Johan Ferret, Geoffrey Cideron, Matthieu Geist, Sertan Girgin, Léonard Hussenot, Nikola Momchev, Piotr Stanczyk, Nino Vieillard, Olivier Pietquin
Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023), pp. 6252–6272
Offline Reinforcement Learning as Anti-Exploration
Shideh Rezaeifar, Nino Vieillard, Léonard Hussenot, Olivier Pietquin, Matthieu Geist
AAAI (2022)
A general class of surrogate functions for stable and efficient reinforcement learning
Sharan Vaswani, Simone Totaro, Robert Müller, Shivam Garg, Matthieu Geist, Marlos C. Machado, Nicolas Le Roux
AISTATS (2022)
Concave Utility Reinforcement Learning: the Mean-field Game viewpoint
Matthieu Geist, Julien Perolat, Mathieu Laurière, Romuald Elie, Sarah Perrin, Remi Munos, Olivier Pietquin
AAMAS (2022)
Decoding A Neural Retriever's Latent Space for Query Suggestion
Leonard Adolphs, Michelle Chen Huebscher, Sertan Girgin, Thomas Hofmann
EMNLP (2022)
What Matters for Adversarial Imitation Learning?
Manu Orsini, Léonard Hussenot, Damien Vincent, Sertan Girgin, Matthieu Geist, Olivier Pietquin, Marcin Andrychowicz
NeurIPS (2021)
What Matters for On-Policy Deep Actor-Critic Methods? A Large-Scale Study
Marcin Andrychowicz, Piotr Michal Stanczyk, Manu Orsini, Sertan Girgin, Léonard Hussenot, Matthieu Geist, Olivier Pietquin, Marcin Michalski, Sylvain Gelly
ICLR (2021)
Hyperparameter Selection for Imitation Learning
Léonard Hussenot, Marcin Andrychowicz, Damien Vincent, Lukasz Piotr Stafiniak, Sertan Girgin, Nikola M Momchev, Manu Orsini, Matthieu Geist, Olivier Pietquin
ICML (2021)
A Commentary on the Unsupervised Learning of Disentangled Representations
Francesco Locatello, Stefan Bauer, Gunnar Rätsch, Sylvain Gelly, Bernhard Schölkopf
AAAI Conference on Artificial Intelligence (2020)
Evaluating Generative Models using Divergence Frontiers
Josip Djolonga, Marco Cuturi, Sylvain Gelly
International Conference on Artificial Intelligence and Statistics (2020)