We work to make robots useful in the real world through machine learning.
About the team
Our goal is to improve robotics via machine learning, and improve machine learning via robotics. We foster close collaborations between machine learning researchers and roboticists to enable learning at scale on real and simulated robotic systems.
We're exploring how to teach robots transferable skills, by learning in parallel across many manipulation arms in our one-of-a-kind lab purpose-built for machine learning research.
We're enabling robots to predict what happens when they move objects around, to learn about the world around them and make better, safer decisions without supervision. We are sharing our training data publicly to help advance the state-of-the-art in this field. We're also bringing advances in deep learning to robot motion planning, navigation, and the exciting and demanding world of self-driving cars to improve their safety and reliability.
Team focus summaries
Human-Robot Interaction
Our HRI team works to enable safe and useful human-centered deployment of interactive robots. We focus on understanding, designing, and evaluating robotic systems for use by or with humans. Active projects include social-navigation benchmark design, voice interaction design, and user studies and UX research.
Our Robot Mobility team works to enable safe, autonomous, and agile mobile robots in human-centered environments, focusing on the algorithms and infrastructure needed to support them. Projects include navigation, locomotion, supporting infrastructure, and robot safety.
Our Robot Vision team works to leverage intermediate visual representations for generalizable robotics. Projects include unsupervised object segmentation, sim2real transfer, trajectory tracking, and object and human tracking and prediction.
The Robot Manipulation team works to leverage machine learning to teach robots to act by interacting with the world.
The Robot Control team works on machine learning, perception, and control to advance robotics and Alphabet moonshots.
Our Reasoning research focuses on helping robots to perform more complex tasks by breaking down long-horizon tasks into actions that robots can safely complete.
Introducing PaLM-SayCan, a robotics algorithm that combines the understanding of language models with the real-world capabilities of a helper robot.
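To illustrate the core idea, here is a minimal sketch of combining a language model's task-relevance score with a learned affordance estimate to pick a skill. The skill names and all numeric scores are hypothetical stand-ins, not actual PaLM-SayCan outputs.

```python
# Sketch of the SayCan idea: score each candidate skill by combining a
# language-model "usefulness" probability with a learned affordance value
# (how likely the skill is to succeed from the current state).
# All skill names and scores below are hypothetical illustrations.

def saycan_rank(skills, llm_score, affordance):
    """Pick the skill maximizing p_LLM(skill | instruction) * p_success(skill | state)."""
    return max(skills, key=lambda s: llm_score[s] * affordance[s])

# Hypothetical scores for the instruction "bring me a drink":
llm_score = {"find a can": 0.6, "go to the table": 0.3, "wipe the counter": 0.1}
affordance = {"find a can": 0.9, "go to the table": 0.8, "wipe the counter": 0.7}

best = saycan_rank(llm_score.keys(), llm_score, affordance)
# "find a can" wins: 0.6 * 0.9 = 0.54 is the highest combined score
```

The key design point is that neither signal alone suffices: the language model knows what is useful but not what is currently feasible, while the affordance model knows what is feasible but not what the user asked for.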
There are few large libraries with high-quality models of 3D objects. We describe our efforts to address this need by creating the Scanned Objects dataset, a curated collection of over 1000 3D-scanned common household items.
In Safe Reinforcement Learning for Legged Locomotion, we introduce a safe RL framework for learning legged locomotion while satisfying safety constraints during training.
In Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning, presented at ICLR 2022, we address the task of learning suitable state and action abstractions for long-range problems.
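The skill-centric abstraction can be sketched as follows: a state is represented by the vector of each skill's value estimate, so high-level reasoning operates on "what can I accomplish from here" rather than on raw observations. The 1-D state and the two skill value functions below are hypothetical illustrations, not the paper's actual setup.

```python
# Sketch of a skill-centric state abstraction: map a raw observation to the
# vector of per-skill value estimates. Skills whose values are high are
# currently achievable, which is exactly the information a long-horizon
# planner needs. The skills and state space here are hypothetical.

def abstract_state(raw_state, skill_value_fns):
    """Represent a raw state as a tuple of per-skill value estimates."""
    return tuple(v(raw_state) for v in skill_value_fns)

# Hypothetical skill value functions over a 1-D state:
v_reach = lambda s: 1.0 if s < 5 else 0.0   # "reach" is feasible early on
v_grasp = lambda s: 1.0 if s >= 5 else 0.0  # "grasp" becomes feasible later

z_early = abstract_state(3, [v_reach, v_grasp])
# → (1.0, 0.0): from this state, "reach" looks achievable but "grasp" does not
```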
In Jump-Start Reinforcement Learning (JSRL), we introduce a meta-algorithm that can use a pre-existing policy of any form to initialize any type of RL algorithm.
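The jump-start mechanism can be sketched as a rollout scheme: a fixed guide policy controls the first h steps of each episode, the learning policy takes over afterward, and h shrinks as training progresses. The toy environment and both policies below are hypothetical stand-ins, not the paper's implementation.

```python
# Sketch of a jump-start rollout: the guide policy acts for the first h
# steps, dropping the learner into states the guide can already reach;
# a curriculum then reduces h toward 0. Environment and policies are
# hypothetical stand-ins.

def jump_start_rollout(env, guide_policy, learn_policy, horizon, h):
    """Collect one episode where the guide controls the first h steps."""
    state = env.reset()
    trajectory = []
    for t in range(horizon):
        policy = guide_policy if t < h else learn_policy
        action = policy(state)
        state, reward = env.step(action)
        trajectory.append((state, action, reward))
    return trajectory

class ToyEnv:
    """1-D corridor: reward equals the current position."""
    def reset(self):
        self.pos = 0
        return self.pos
    def step(self, action):
        self.pos += action
        return self.pos, float(self.pos)

guide = lambda s: 1    # a competent guide: always moves right
learner = lambda s: 0  # an untrained learner: stays put

traj = jump_start_rollout(ToyEnv(), guide, learner, horizon=5, h=3)
# The first 3 actions come from the guide, the remaining 2 from the learner.
```

Because the guide hands over control partway through, the learner practices from progressively easier starting states as h is annealed, rather than from scratch.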
In BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning, published at CoRL 2021, we present new research that studies how robots can generalize to new tasks that they were not trained to do.