Consistency Guided Scene Flow Estimation

Yuhua Chen
Luc Van Gool
(to appear)

Abstract

We present a self-supervised framework, Consistency Guided Scene Flow Estimation (CGSF), to jointly estimate 3D scene structure and motion from stereo videos. The model takes two temporal stereo pairs as input and predicts disparity and scene flow, the latter expressed as optical flow plus disparity change. The model self-adapts at test time by iteratively refining its predictions. The refinement is guided by a consistency loss, which combines stereo and temporal photo-consistency with a new geometric term that couples the disparity and the 3D motion. To handle noise in the consistency loss, we further propose a learned output refinement network, which takes the initial predictions, the loss, and the gradient as input, and efficiently predicts a correlated output update. We perform extensive experimental validation on benchmark datasets and on everyday scenes captured with a stereo camera, and demonstrate that the proposed model can reliably predict disparity and scene flow in many challenging scenarios and generalizes better than the state of the art.
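To make the test-time refinement loop concrete, below is a minimal, hypothetical PyTorch sketch. It is not the paper's implementation: the stand-in networks, the channel layout, and the use of a single stereo photo-consistency term (omitting the temporal and geometric coupling terms) are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def stereo_photo_residual(disp, left, right):
    """One term of the consistency loss: warp the right image to the left
    view with the predicted disparity and measure the photometric error.
    The temporal and geometric coupling terms from the paper are omitted."""
    b, _, h, w = left.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h, device=left.device),
                            torch.linspace(-1, 1, w, device=left.device),
                            indexing="ij")
    # Shift normalized x-coordinates left by the disparity (given in pixels).
    x_warp = xs.expand(b, h, w) - 2.0 * disp.squeeze(1) / (w - 1)
    grid = torch.stack([x_warp, ys.expand(b, h, w)], dim=-1)
    warped = F.grid_sample(right, grid, align_corners=True)
    return (left - warped).abs().mean(dim=1, keepdim=True)  # (b, 1, h, w)

def refine_at_test_time(model, refiner, left, right, steps=5):
    """Iterative test-time refinement: each step feeds the current
    predictions, the per-pixel consistency residual, and its gradient to a
    learned refiner that outputs a correlated update."""
    preds = model(torch.cat([left, right], dim=1))  # disparity + scene flow
    for _ in range(steps):
        preds = preds.detach().requires_grad_(True)
        residual = stereo_photo_residual(preds[:, :1], left, right)
        (grad,) = torch.autograd.grad(residual.mean(), preds)
        preds = preds + refiner(torch.cat([preds, residual, grad], dim=1))
    return preds.detach()

# Toy stand-in networks, just to make the sketch executable.
model = nn.Conv2d(6, 4, 3, padding=1)            # 4 output channels assumed
refiner = nn.Conv2d(4 + 1 + 4, 4, 3, padding=1)  # (preds, residual, grad) in
left, right = torch.randn(1, 3, 64, 64), torch.randn(1, 3, 64, 64)
refined = refine_at_test_time(model, refiner, left, right)
```

The design point mirrored here is that the refinement network consumes the predictions, the residual map, and its gradient jointly, so a single forward pass can produce a correlated output update rather than a raw gradient step.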

Research Areas