We work to make robots useful in the real world through machine learning.
About the team
Our goal is to improve robotics via machine learning, and improve machine learning via robotics. We foster close collaborations between machine learning researchers and roboticists to enable learning at scale on real and simulated robotic systems.
We're exploring how to teach robots transferable skills by learning in parallel across many manipulation arms in our one-of-a-kind lab, purpose-built for machine learning research.
We're enabling robots to predict what happens when they move objects around, to learn about the world around them and make better, safer decisions without supervision. We are sharing our training data publicly to help advance the state-of-the-art in this field. We're also bringing advances in deep learning to robot motion planning, navigation, and the exciting and demanding world of self-driving cars to improve their safety and reliability.
Team focus summaries
Human-Robot Interaction
Our Human-Robot Interaction (HRI) team works to enable safe and useful interactive robots. We focus on understanding, designing, and evaluating robotic systems for use by or with humans. Example projects include: social navigation, intuitive interfaces for natural human-robot communication, enabling robots to learn from humans as teachers and from human feedback, and human-robot collaboration.
Robot Mobility
Our Robot Mobility team works to enable safe, autonomous, and agile mobile robots in human-centered environments. Projects include: navigation, locomotion, supporting infrastructure, and robot safety.
Robot Vision
Our Robot Vision team works to leverage intermediate visual representations for generalizable robotics. Projects include: 3D representations for robotics, sim2real, object trajectory tracking, human perception, and scene understanding.
Robot Manipulation
The Robot Manipulation team works to leverage machine learning to teach robots to act by interacting with the world.
Robot Control
The Robot Control team works on machine learning, perception, and control for advancing robotics and Alphabet moonshots.
Reasoning
Our Reasoning research focuses on helping robots perform more complex tasks by breaking down long-horizon tasks into actions that robots can safely complete.
Agility and Precision
Our Agility and Precision research focuses on using machine learning to make robots move in a dynamic fashion while executing precise, dexterous movements. Projects include robotics table tennis, catching, agile locomotion, and bi-manual assembly.
Robotics Infrastructure
The Robotics Infrastructure team works to build world-class shared robot, data, and ML platforms to support and scale the ambitious needs of all Robotics at Google researchers, engineers, and operators.
Introducing PaLM-SayCan, a robotics algorithm that combines the understanding of language models with the real-world capabilities of a helper robot.
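As a rough illustration of the idea (not the actual PaLM-SayCan implementation), a SayCan-style planner can be sketched as scoring each candidate skill by combining a language model's relevance score with a learned affordance (value-function) estimate of whether the robot can actually execute that skill from its current state. All skill names and scores below are illustrative:

```python
# Hedged sketch of SayCan-style skill selection. The dictionaries stand in
# for a language model and a learned value function; real systems would
# query models instead.

# Toy language-model scores: how relevant each skill is to the instruction
# "help me clean up the spill".
llm_relevance = {
    "pick up the sponge": 0.7,
    "go to the sink": 0.2,
    "pick up the apple": 0.1,
}

# Toy affordance values: how likely each skill is to succeed from the
# robot's current state (e.g. the apple is out of reach).
affordance_value = {
    "pick up the sponge": 0.9,
    "go to the sink": 0.8,
    "pick up the apple": 0.05,
}

def saycan_score(skill):
    """Combined score: language relevance times execution feasibility."""
    return llm_relevance[skill] * affordance_value[skill]

# The robot executes the skill that is both useful and feasible.
best_skill = max(llm_relevance, key=saycan_score)
```

The key design point is the product of the two scores: a skill the language model loves but the robot cannot perform (low affordance) is suppressed, grounding the language model in the robot's real-world capabilities.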
There are few large libraries with high-quality models of 3D objects. We describe our efforts to address this need by creating the Scanned Objects dataset, a curated collection of over 1,000 3D-scanned common household items.
In Safe Reinforcement Learning for Legged Locomotion, we introduce a safe RL framework for learning legged locomotion while satisfying safety constraints during training.
In Value Function Spaces: Skill-Centric State Abstractions for Long-Horizon Reasoning, presented at ICLR 2022, we address the task of learning suitable state and action abstractions for long-range problems.
In Jump-Start Reinforcement Learning (JSRL), we introduce a meta-algorithm that can use a pre-existing policy of any form to initialize any type of RL algorithm.
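The jump-start idea can be sketched roughly as follows (an illustrative toy, not the paper's implementation): a pre-existing "guide" policy controls the first part of each episode, and the learning "exploration" policy takes over afterward; over training, the guide's horizon is shrunk so the learner gradually handles the whole task. All function names and the toy environment here are assumptions for illustration:

```python
import random

def run_episode(env_step, init_state, guide_policy, explore_policy,
                guide_steps, total_steps):
    """Roll out one episode: the guide policy acts for the first
    `guide_steps` steps, then the exploration policy takes over."""
    state = init_state
    trajectory = []
    for t in range(total_steps):
        policy = guide_policy if t < guide_steps else explore_policy
        action = policy(state)
        state = env_step(state, action)
        trajectory.append((t, action))
    return trajectory

# Toy usage: a 1-D chain environment. The guide always moves right
# (a competent pre-existing policy); the explorer acts randomly.
guide = lambda s: +1
explorer = lambda s: random.choice([-1, +1])
step = lambda s, a: s + a

traj = run_episode(step, 0, guide, explorer, guide_steps=3, total_steps=6)
guide_actions = [a for t, a in traj if t < 3]
```

In a full JSRL-style training loop, `guide_steps` would be annealed toward zero as the exploration policy improves, so the learner starts from states the guide has already reached rather than exploring from scratch.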
In BC-Z: Zero-Shot Task Generalization with Robotic Imitation Learning, published at CoRL 2021, we present new research that studies how robots can generalize to new tasks that they were not trained to do.
Our system enables us to study problems that arise from robotic learning in a challenging, multi-player, dynamic, and interactive setting. Two projects illustrate the problems we have been investigating so far: i-S2R enables a robot to hold rallies of over 300 hits with a human player, while GoalsEye enables learning goal-conditioned policies that match the precision of amateur humans.