Self-supervised Learning with Geometric Constraints in Monocular Video: Connecting Flow, Depth, and Camera

Yuhua Chen
ICCV (2019)

Abstract

We present GLNet, a self-supervised framework for learning depth, optical flow, camera pose and intrinsic parameters from monocular video, addressing the difficulty of acquiring realistic ground truth for these quantities across the variety of conditions in which we would like them to operate. We make three contributions to self-supervised systems: 1) we design new loss functions that capture multiple geometric constraints (e.g., epipolar geometry), as well as adaptive photometric costs that support multiple moving objects, rigid and non-rigid; 2) we extend the model to also predict camera intrinsics, making it applicable to uncalibrated images or video; and 3) we propose several online finetuning strategies that rely on the symmetry of our self-supervised loss in training and testing, in particular optimizing the model parameters and/or the outputs of the different tasks and leveraging their mutual interactions. Jointly optimizing the system's outputs under all geometric and photometric constraints can be viewed as a dense generalization of classical bundle adjustment. We demonstrate the effectiveness of our method on KITTI and Cityscapes, where we outperform previous self-supervised approaches, and we also show good generalization in transfer-learning settings.
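The abstract packs several geometric ingredients into one paragraph. As a rough illustration of the two primitives such a pipeline rests on, the NumPy sketch below shows (a) depth-and-pose-based reprojection of pixels between frames, which underlies the photometric loss, and (b) an algebraic epipolar residual that couples flow correspondences with camera motion. This is a hedged toy, not the authors' GLNet implementation: the function names, the frame convention (the pose maps coordinates of frame A into frame B), and all numerical values are illustrative assumptions.

```python
import numpy as np

def reproject(depth, K, R, t):
    """Map pixel coordinates of frame A into frame B using per-pixel depth
    and the relative camera motion (R, t). Returns an (h, w, 2) array of
    where each pixel of A lands in B."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    # Back-project to 3D camera coordinates, transform into frame B, project.
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)
    pts = R @ pts + t.reshape(3, 1)
    proj = K @ pts
    proj = proj[:2] / np.clip(proj[2:], 1e-6, None)
    return proj.T.reshape(h, w, 2)

def epipolar_residual(p_a, p_b, K, R, t):
    """Algebraic epipolar error x_b^T F x_a for matched pixel pairs (e.g.
    flow correspondences); zero for matches consistent with the motion."""
    tx = np.array([[0., -t[2], t[1]],
                   [t[2], 0., -t[0]],
                   [-t[1], t[0], 0.]])       # skew-symmetric [t]_x
    F = np.linalg.inv(K).T @ tx @ R @ np.linalg.inv(K)  # fundamental matrix
    xa = np.concatenate([p_a, np.ones((len(p_a), 1))], axis=1)
    xb = np.concatenate([p_b, np.ones((len(p_b), 1))], axis=1)
    return np.einsum('ni,ij,nj->n', xb, F, xa)

# Example: a fronto-parallel plane at 10 m, camera translating along x
# (K, depth, R, t are made-up values for illustration only).
K = np.array([[500., 0., 32.], [0., 500., 24.], [0., 0., 1.]])
depth = np.full((48, 64), 10.0)
R, t = np.eye(3), np.array([0.5, 0.0, 0.0])
coords = reproject(depth, K, R, t)  # sampling grid for a photometric loss
```

In an actual self-supervised system, the reprojected coordinates would drive a differentiable bilinear sampler, so that photometric and epipolar residuals can be backpropagated into the depth, flow, pose, and intrinsics networks; the same residuals can also be minimized at test time, in the spirit of the online finetuning the abstract describes.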
