We work to make robots useful in the real world through machine learning.
About the team
Our goal is to improve robotics via machine learning, and improve machine learning via robotics. We foster close collaborations between machine learning researchers and roboticists to enable learning at scale on real and simulated robotic systems.
We're exploring how to teach robots transferable skills by learning in parallel across many manipulation arms in our one-of-a-kind lab, purpose-built for machine learning research.
We're enabling robots to predict the outcomes of their actions on objects, so they can learn about the world around them and make better, safer decisions without supervision. We are sharing our training data publicly to help advance the state of the art in this field. We're also bringing advances in deep learning to robot motion planning, navigation, and the demanding world of self-driving cars, to improve their safety and reliability.
Highlighted projects
Introducing PaLM-SayCan, a robotics algorithm that combines the understanding of language models with the real-world capabilities of a helper robot.
Few large libraries of high-quality 3D object models exist. We describe our efforts to address this gap by creating the Scanned Objects dataset, a curated collection of over 1,000 3D-scanned common household items.
In Safe Reinforcement Learning for Legged Locomotion, we introduce a safe RL framework for learning legged locomotion while satisfying safety constraints during training.
In Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning, presented at ICLR 2022, we address the task of learning suitable state and action abstractions for long-horizon problems.
In Jump-Start Reinforcement Learning (JSRL), we introduce a meta-algorithm that can use a pre-existing policy of any form to initialize any type of RL algorithm.
In XIRL: Cross-Embodiment Inverse RL, presented as an oral paper at CoRL 2021, we introduce a self-supervised method for Cross-embodiment Inverse Reinforcement Learning (XIRL).
In BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning, published at CoRL 2021, we present new research that studies how robots can generalize to new tasks that they were not trained to do.
Our system enables us to study problems that arise in robotic learning in a challenging, multi-player, dynamic, and interactive setting. Two projects illustrate the problems we have been investigating so far: i-S2R enables a robot to hold rallies of over 300 hits with a human player, while GoalsEye learns goal-conditioned policies that match the precision of amateur human players.
Featured publications
Conference on Robot Learning (2022)
International Conference on Learning Representations (ICLR) 2021 (oral presentation)
Conference on Robot Learning (2021)
Robotics: Science and Systems (RSS) (2019)