Juergen Sturm
Jürgen Sturm is a tech lead manager at Google, where he and his team work on perception problems in computer vision and robotics such as visual SLAM, 3D reconstruction, and semantic scene understanding. Before joining Google in 2015, he led an engineering team at Metaio working on augmented reality. From 2011 to 2014, he was a post-doc in the computer vision group of Daniel Cremers at the Technical University of Munich. During this time, he founded FabliTec, a startup focused on 3D person scanning and printing. He obtained his PhD in robotics under the supervision of Wolfram Burgard at the University of Freiburg in 2011. His PhD thesis received the ECCAI Best Dissertation Award 2011 and was short-listed for the euRobotics Georges Giralt Award 2012. His lecture "Visual Navigation for Flying Robots" received the TUM best lecture award in 2012 and 2013. His online course on "Autonomous Navigation" on edX has attracted more than 50,000 students worldwide since 2014.
Research Areas
Authored Publications
UltraFast 3D Sensing, Reconstruction and Understanding of People, Objects, and Environments
Anastasia Tkach
Christine Kaeser-Chen
Christoph Rhemann
Jonathan Taylor
Julien Valentin
Kaiwen Guo
Mingsong Dou
Sameh Khamis
Shahram Izadi
Sofien Bouaziz
Thomas Funkhouser
Yinda Zhang
Abstract
This is a set of slide decks presenting a full tutorial on 3D capture and reconstruction, with high-level applications in VR and AR. The slides are available on the tutorial website:
https://augmentedperception.github.io/cvpr18/
ScanComplete: Large-Scale Scene Completion and Semantic Segmentation for 3D Scans
Angela Dai
Daniel Ritchie
Scott Reed
Matthias Nießner
Proc. Computer Vision and Pattern Recognition (CVPR), IEEE (2018)
Abstract
We introduce ScanComplete, a novel data-driven approach for taking an incomplete 3D scan of a scene as input and predicting a complete 3D model along with per-voxel semantic labels. The key contribution of our method is its ability to handle large scenes with varying spatial extent, managing the cubic growth in data size as scene size increases. To this end, we devise a fully-convolutional generative 3D CNN model whose filter kernels are invariant to the overall scene size. The model can be trained on scene subvolumes but deployed on arbitrarily large scenes at test time. In addition, we propose a coarse-to-fine inference strategy in order to produce high-resolution output while also leveraging large input context sizes. In an extensive series of experiments, we carefully evaluate different model design choices, considering both deterministic and probabilistic models for completion and semantic inference. Our results show that we outperform other methods not only in the size of the environments handled and processing efficiency, but also with regard to completion quality and semantic segmentation performance by a significant margin.
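The size invariance described above is the key architectural property: because a fully-convolutional network applies the same small filter kernels everywhere, a model trained on fixed-size scene subvolumes can be run on arbitrarily large scans at test time. A minimal sketch (not the authors' implementation, and using a single hand-written mean filter in place of learned kernels) illustrates the idea:

```python
# Sketch: a "valid" 3D convolution in pure Python. The same 3x3x3 kernel
# slides over volumes of any size, so nothing about the operation depends
# on the overall scene extent -- only the output size changes.

def conv3d(volume, kernel):
    """Valid 3D convolution of a nested-list volume with a cubic kernel."""
    k = len(kernel)
    dz, dy, dx = len(volume), len(volume[0]), len(volume[0][0])
    out = []
    for z in range(dz - k + 1):
        plane = []
        for y in range(dy - k + 1):
            row = []
            for x in range(dx - k + 1):
                s = 0.0
                for i in range(k):
                    for j in range(k):
                        for l in range(k):
                            s += volume[z + i][y + j][x + l] * kernel[i][j][l]
                row.append(s)
            plane.append(row)
        out.append(plane)
    return out

def ones(d):
    """A d x d x d volume filled with 1.0 (stand-in for voxel data)."""
    return [[[1.0] * d for _ in range(d)] for _ in range(d)]

# 3x3x3 mean filter (stand-in for a learned convolutional kernel).
kernel = [[[1.0 / 27] * 3 for _ in range(3)] for _ in range(3)]

# Same kernel applied to a small "training subvolume" and a larger "scene":
# each axis shrinks by (k - 1), but no retraining or resizing is needed.
small = conv3d(ones(4), kernel)   # 4^3 subvolume -> 2^3 output
large = conv3d(ones(8), kernel)   # 8^3 scene     -> 6^3 output
print(len(small), len(large))     # 2 6
```

The same reasoning carries over to a stack of such layers: as long as every layer is convolutional, the network as a whole is agnostic to the input extent, which is what lets ScanComplete train on subvolumes and deploy on whole scenes.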
Abstract
Augmented Reality aims at seamlessly blending virtual content into the real world. In this talk, I will showcase our recent work on 3D scene understanding. In particular, I will cover semantic segmentation and scan completion.
Abstract
In my talk, I present the basic computer vision algorithms behind augmented reality, covering the following topics:
- Motivation / Example use-cases for Augmented Reality
- ARCore features
- Algorithms
- Visual-inertial odometry
- Visual SLAM
- 3D reconstruction
- Semantic segmentation
- Scan completion
I will also show a live demo of ARCore using the AR Stickers app.
Abstract
Robots need the ability to determine and track their position, as well as the ability to perceive the geometry of the world around them (e.g., for obstacle avoidance or navigation). Tango provides these capabilities, but it was previously difficult to use Tango in robotics research. The main reason is that Tango only supports Android, while robotics research typically builds on ROS (the Robot Operating System). We teamed up with two external partners (Ekumen and Intermodalics) to develop Android apps that facilitate the integration of Tango into ROS-based robots.