Paul Voigtlaender


I'm a research scientist at Google in Zurich, Switzerland. I currently work on combining language and vision for videos. Before that, I did my PhD on "Video Object Segmentation and Tracking" at RWTH Aachen University, Germany.


Authored Publications
    Connecting Vision and Language with Video Localized Narratives
    Vittorio Ferrari
    IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR) 2023 (to appear)
    Abstract: We propose Video Localized Narratives, a new form of multimodal video annotations connecting vision and language. In the original Localized Narratives, annotators speak and move their mouse simultaneously on an image, thus grounding each word with a mouse trace segment. However, this is challenging on a video. Our new protocol empowers annotators to tell the story of a video with Localized Narratives, capturing even complex events involving multiple actors interacting with each other and with several passive objects. We annotated 20k videos of the OVIS, UVO, and Oops datasets, totalling 1.7M words. Based on this data, we also construct new benchmarks for the video narrative grounding and video question-answering tasks, and provide reference results from strong baseline models. Our annotations are available at https://google.github.io/video-localized-narratives/.
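As a rough illustration of the annotation format the abstract describes (each spoken word grounded with a mouse-trace segment), a hypothetical entry might look like the following. All field names and values here are invented for illustration; they are not the dataset's actual schema, which is documented at the project page linked above.

```python
# Hypothetical representation of one Video Localized Narrative entry.
# Field names and values are illustrative only, NOT the real JSON schema.
narrative = {
    "video_id": "oops_0001",          # invented identifier
    "actor": "man",
    "caption": "The man throws the ball to the dog.",
    "word_groundings": [
        # Each spoken word is grounded with a mouse-trace segment:
        # (word, start_time_s, end_time_s, [(x, y, t), ...])
        ("man", 0.4, 0.9, [(0.31, 0.52, 0.4), (0.33, 0.50, 0.9)]),
        ("ball", 1.6, 2.0, [(0.58, 0.41, 1.6), (0.60, 0.44, 2.0)]),
    ],
}

def grounded_words(entry):
    """Return the words of an entry that carry a mouse-trace segment."""
    return [word for (word, _, _, _) in entry["word_groundings"]]
```

The key point is the per-word pairing: the trace segment recorded while a word was spoken localizes that word in the video frame.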
    FEELVOS: Fast End-to-End Embedding Learning for Video Object Segmentation
    Yuning Chai
    Bastian Leibe
    Liang-chieh Chen
    International Conference on Computer Vision and Pattern Recognition (CVPR) (2019) (to appear)
    Abstract: Recently, there has been a lot of progress for video object segmentation (VOS). However, many of the most successful methods are overly complicated, heavily rely on fine-tuning on the first frame, and/or are slow, and are hence of limited practical use. In this work, we propose FEELVOS as a simple and fast method which does not rely on fine-tuning. In order to segment a video frame, FEELVOS uses a semantic pixel-wise embedding together with a global and a local matching mechanism to transfer information from the first frame and from the previous frame of the video to the current frame. In contrast to previous work, our embedding is only used as an internal guidance of a convolutional network. Our novel dynamic segmentation head allows us to train the network including the embedding end-to-end for the multiple object segmentation task. We achieve a new state of the art in video object segmentation without fine-tuning on the DAVIS 2017 validation set with a J&F measure of 69.0%.
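The global matching mechanism the abstract mentions can be sketched as follows: each pixel embedding of the current frame is compared against all reference-frame (first-frame) pixels belonging to the object, and the minimum distance forms a matching map that guides segmentation. This is a minimal sketch assuming squared Euclidean distances; the function name and tensor shapes are illustrative, and the paper's actual formulation normalizes the distances and adds a local matching window:

```python
import numpy as np

def global_matching(cur_emb, ref_emb, ref_mask):
    """Sketch of global matching between pixel embeddings.

    cur_emb:  (H, W, D) embeddings of the current frame.
    ref_emb:  (H, W, D) embeddings of the reference (first) frame.
    ref_mask: (H, W) boolean mask of the object in the reference frame.

    Returns an (H, W) map giving, for each current-frame pixel, the
    squared distance to the nearest reference pixel of the object
    (lower = stronger match).
    """
    h, w, d = cur_emb.shape
    cur = cur_emb.reshape(-1, d)        # (H*W, D)
    ref = ref_emb[ref_mask]             # (N, D): object pixels only
    # Pairwise squared Euclidean distances, then min over reference pixels.
    d2 = ((cur[:, None, :] - ref[None, :, :]) ** 2).sum(-1)  # (H*W, N)
    return d2.min(axis=1).reshape(h, w)
```

Local matching follows the same idea but restricts the comparison to a spatial neighborhood around each pixel in the previous frame, which keeps the computation cheap across consecutive frames.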