Google Research

Naive-Student: Leveraging semi-supervised learning in video sequences for urban scene segmentation

European Conference on Computer Vision (ECCV) (2020)


Supervised learning in large discriminative models is a mainstay for modern computer vision. Such an approach necessitates investing in large-scale, human-annotated datasets to achieve state-of-the-art results. In turn, the efficacy of supervised learning may be limited by the size of the human-annotated dataset. This limitation is particularly notable for image segmentation tasks, where the expense of human annotation may be especially large, yet large amounts of unlabeled data may exist. In this work, we ask whether we may leverage unlabeled video sequences to improve performance on urban scene segmentation using semi-supervised learning. The goal of this work is to avoid the construction of sophisticated, learned architectures specific to label propagation (e.g., patch matching and optical flow). Instead, we simply predict pseudo-labels for the unlabeled data and train subsequent models with a mix of human-annotated and pseudo-labeled data. The procedure is iterated several times. As a result, our model, trained with such simple yet effective iterative semi-supervised learning, attains state-of-the-art results on all three Cityscapes benchmarks, reaching 67.6% PQ, 42.4% AP, and 85.1% mIOU on the test set. We view this work as a notable step toward building a simple procedure to harness unlabeled video sequences to surpass state-of-the-art performance on core computer vision tasks.
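The iterative teacher–student loop described in the abstract can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: a toy nearest-centroid classifier on 1-D features stands in for the segmentation network, and the names `train`, `predict`, and `naive_student` are hypothetical.

```python
def train(labeled):
    # Toy "model": per-class mean of 1-D features (stand-in for a deep net).
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(model, x):
    # Assign x to the class with the nearest centroid.
    return min(model, key=lambda y: abs(model[y] - x))

def naive_student(labeled, unlabeled, iterations=3):
    # 1. Train a teacher on the human-annotated data.
    model = train(labeled)
    for _ in range(iterations):
        # 2. Predict pseudo-labels for the unlabeled (video) samples.
        pseudo = [(x, predict(model, x)) for x in unlabeled]
        # 3. Train a new student on the mix of real and pseudo labels; iterate.
        model = train(labeled + pseudo)
    return model

labeled = [(0.0, "road"), (1.0, "car")]     # hypothetical toy data
unlabeled = [0.1, 0.2, 0.9]
model = naive_student(labeled, unlabeled)
```

The key design point the paper emphasizes is that no label-propagation machinery (patch matching, optical flow) appears anywhere in this loop: the pseudo-labels come directly from the teacher's predictions.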
