Sean Kirmani
Sean Kirmani is a Research Scientist at Google DeepMind. His research interests span computer vision, natural language processing, and robotics. Sean has also spent time at X, the Moonshot Factory, as part of The Everyday Robot Project.
Authored Publications
Scalable Multi-Sensor Robot Imitation Learning via Task-Level Domain Consistency
Armando Fuentes
Daniel Ho
Eric Victor Jang
Matt Bennice
Mohi Khansari
Nicolas Sievers
Yuqing Du
ICRA (2023) (to appear)
Abstract
Recent work in visual end-to-end learning for robotics has shown the promise of imitation learning across a variety of tasks. However, such approaches are often expensive and require vast amounts of real-world training demonstrations. Additionally, they rely on a time-consuming evaluation process for identifying the best model to deploy in the real world. These challenges can be mitigated by simulation: supplementing real-world data with simulated demonstrations and using simulated evaluations to identify strong policies. However, this introduces the well-known "reality gap" problem, where simulator inaccuracies decorrelate performance in simulation from performance in reality. In this paper, we build on prior work in GAN-based domain adaptation and introduce the notion of a Task Consistency Loss (TCL), a self-supervised contrastive loss that encourages sim and real alignment at both the feature and action-prediction level. We demonstrate the effectiveness of our approach on the challenging task of latched-door opening with a 9 Degree-of-Freedom (DoF) mobile manipulator from raw RGB and depth images. While most prior work in vision-based manipulation operates from a fixed, third-person view, mobile manipulation couples the challenges of locomotion and manipulation with greater visual diversity and action-space complexity. We find that we are able to achieve 77% success on seen and unseen scenes, a +30% increase over the baseline, using only ~16 hours of teleoperation demonstrations in sim and real.
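The abstract describes a contrastive loss that pulls paired sim and real features together while pushing unpaired ones apart. A minimal InfoNCE-style sketch of that idea is below; the function name, batch pairing scheme, and temperature are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def task_consistency_loss(sim_feats, real_feats, temperature=0.1):
    """Contrastive (InfoNCE-style) alignment of paired sim/real features.

    Each sim feature's positive is the real feature from the same
    scene/timestep (the diagonal); all other real features in the batch
    act as negatives. Illustrative sketch, not the authors' exact TCL.
    """
    # L2-normalize so dot products become cosine similarities.
    sim = sim_feats / np.linalg.norm(sim_feats, axis=1, keepdims=True)
    real = real_feats / np.linalg.norm(real_feats, axis=1, keepdims=True)
    logits = sim @ real.T / temperature          # (B, B) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Cross-entropy against the diagonal (matched sim/real pairs).
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

Correctly matched pairs should yield a lower loss than mismatched ones, which is what drives the encoder toward sim/real alignment.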
Deep RL at Scale: Sorting Waste in Office Buildings with a Fleet of Mobile Manipulators
Jarek Rettinghouse
Daniel Ho
Julian Ibarz
Sangeetha Ramesh
Matt Bennice
Alexander Herzog
Chuyuan Kelly Fu
Adrian Li
Kim Kleiven
Jeff Bingham
Yevgen Chebotar
David Rendleman
Wenlong Lu
Mohi Khansari
Mrinal Kalakrishnan
Ying Xu
Noah Brown
Khem Holden
Justin Vincent
Peter Pastor Sampedro
Jessica Lin
David Dovo
Daniel Kappler
Mengyuan Yan
Sergey Levine
Jessica Lam
Jonathan Weisz
Paul Wohlhart
Karol Hausman
Cameron Lee
Bob Wei
Yao Lu
Abstract
We describe a system for deep reinforcement learning of robotic manipulation skills applied to a large-scale real-world task: sorting recyclables and trash in office buildings. Real-world deployment of deep RL policies requires not only effective training algorithms but also the ability to bootstrap real-world training and enable broad generalization. To this end, our system combines scalable deep RL from real-world data with bootstrapping from training in simulation, and incorporates auxiliary inputs from existing computer vision systems as a way to boost generalization to novel objects, while retaining the benefits of end-to-end training. We analyze the tradeoffs of different design decisions in our system, and present a large-scale empirical validation that includes training on real-world data gathered over the course of 24 months of experimentation, across a fleet of 23 robots in three office buildings, with a total training set of 9527 hours of robotic experience. Our final validation also consists of 4800 evaluation trials across 240 waste station configurations, in order to evaluate in detail the impact of the design decisions in our system, the scaling effects of including more real-world data, and the performance of the method on novel objects.
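One architectural idea in the abstract is feeding auxiliary outputs from an existing perception system into the policy alongside end-to-end image features. The toy sketch below shows that fusion with a linear Q-head standing in for the deep network; all names, shapes, and the linear head are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

def select_action(image_feats, aux_scores, weights, bias):
    """Greedy action selection from a toy linear Q-function over fused inputs.

    `image_feats` stands in for learned end-to-end features and
    `aux_scores` for per-class outputs of a pre-existing vision system;
    concatenating them is the fusion described in the abstract.
    """
    x = np.concatenate([image_feats, aux_scores])  # feature fusion
    q = weights @ x + bias                         # Q-value per discrete action
    return int(np.argmax(q))                       # greedy action index
```

In the real system the Q-function is a deep network trained on fleet data, but the interface idea is the same: the policy consumes both learned features and auxiliary perception outputs.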