Mohammad Taghi Saffar
Authored Publications
Phenaki: Variable length video generation from open domain textual descriptions
Mohammad Babaeizadeh
Han Zhang
Santiago Castro
Julius Kunze
ICLR (2023)
We present Phenaki, a model capable of realistic video synthesis given a sequence of textual prompts. Generating videos from text is particularly challenging due to the computational cost, the limited quantity of high-quality text-video data, and the variable length of videos. To address these issues, we introduce a new causal model for learning video representations, which compresses the video to a small representation of discrete tokens. This tokenizer is auto-regressive in time, which allows it to work with variable-length videos. To generate video tokens from text, we use a bidirectional masked transformer conditioned on pre-computed text tokens. The generated video tokens are subsequently de-tokenized to create the actual video. To address data issues, we demonstrate how joint training on a large corpus of image-text pairs as well as a smaller number of video-text examples can result in generalization beyond what is available in the video datasets. Compared to previous video generation methods, Phenaki can generate arbitrarily long videos conditioned on a sequence of prompts (i.e., time-variable text, or a story, in open domain). To the best of our knowledge, this is the first time a paper studies generating videos from time-variable prompts.
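The bidirectional masked transformer described above fills in video tokens iteratively rather than left-to-right. A minimal numpy sketch of that style of iterative masked decoding (MaskGIT-style) is shown below; `predict_fn` is a hypothetical stand-in for the text-conditioned transformer, and the linear unmasking schedule is an illustrative simplification, not the paper's exact schedule.

```python
import numpy as np

MASK = -1  # sentinel value for a masked (not yet generated) video token

def masked_decode(predict_fn, seq_len, vocab_size, steps=8):
    """Sketch of iterative masked-transformer decoding.

    `predict_fn(tokens) -> logits [seq_len, vocab_size]` stands in for a
    text-conditioned bidirectional transformer. Start fully masked; at
    each step predict every position in parallel, commit the most
    confident predictions, and leave the rest masked for later steps.
    """
    tokens = np.full(seq_len, MASK)
    for step in range(steps):
        logits = predict_fn(tokens)
        assert logits.shape == (seq_len, vocab_size)
        # Softmax over the vocabulary to get per-position confidences.
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        guess = probs.argmax(-1)
        conf = probs.max(-1)
        conf[tokens != MASK] = np.inf  # already-committed tokens stay fixed
        # Illustrative linear schedule: commit more tokens each step.
        n_keep = int(np.ceil(seq_len * (step + 1) / steps))
        keep = np.argsort(-conf)[:n_keep]
        tokens = tokens.copy()
        tokens[keep] = np.where(tokens[keep] == MASK, guess[keep], tokens[keep])
    return tokens
```

By the final step every position has been committed, so the result is a complete token sequence ready to be de-tokenized into video frames.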
Answer-Me: Multi-Task Open-Vocabulary Learning for Visual Question-Answering
Wei Li
Fred Bertsch
CVPR Workshop (2022)
We present Answer-Me, a task-aware multi-task framework that unifies multiple question-answering tasks, such as visual question answering, visual entailment, and visual reasoning. In contrast to previous works that use contrastive or generative captioning training, we propose a novel and simple recipe to pretrain a vision-language joint model, which is also multi-task, and uses the entire architecture end-to-end. Our results, obtained in the challenging open-vocabulary generative setting, show state-of-the-art performance, zero-shot generalization, and robustness to forgetting.
FindIt: Generalized Localization with Natural Language Queries
Fred Bertsch
Wei Li
European Conference on Computer Vision (ECCV) (2022)
We propose FindIt, a simple and versatile framework that unifies a variety of visual grounding and localization tasks, including referring expression comprehension, text-based localization, and object detection. Key to our architecture is an efficient multi-scale fusion module that unifies the disparate localization requirements across the tasks. In addition, we discover that a standard object detector is surprisingly effective in unifying these tasks without a need for task-specific design, losses, or pre-computed detections. Our end-to-end trainable framework responds flexibly and accurately to a wide range of referring expression, localization, or detection queries for zero, one, or multiple objects. Jointly trained on these tasks, FindIt outperforms the state of the art on both referring expression and text-based localization, and shows competitive performance on object detection. Finally, FindIt generalizes better to out-of-distribution data and novel categories compared to strong single-task baselines. All of these are accomplished by a single, unified, and efficient model.
Efficient Content-Based Sparse Attention with Routing Transformers
Ashish Teku Vaswani
David Grangier
Transactions of the Association for Computational Linguistics (2021)
Self-attention has recently been adopted for a wide range of sequence modeling problems. Despite its effectiveness, self-attention suffers from quadratic compute and memory requirements with respect to sequence length. Successful approaches to reduce this complexity focused on attending to local sliding windows or a small set of locations independent of content. Our work proposes to learn dynamic sparse attention patterns that avoid allocating computation and memory to attend to content unrelated to the query of interest. This work builds upon two lines of research: it combines the modeling flexibility of prior work on content-based sparse attention with the efficiency gains from approaches based on local, temporal sparse attention. Our model, the Routing Transformer, endows self-attention with a sparse routing module based on online k-means while reducing the overall complexity of attention to O(n^1.5 d) from O(n^2 d) for sequence length n and hidden dimension d. We show that our model outperforms comparable sparse attention models on language modeling on Wikitext-103 (15.8 vs. 18.3 perplexity), as well as on image generation on ImageNet-64 (3.43 vs. 3.44 bits/dim), while using fewer self-attention layers. Additionally, we set a new state of the art on the newly released PG-19 dataset, obtaining a test perplexity of 33.2 with a 22-layer Routing Transformer model trained on sequences of length 8192.
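The complexity reduction comes from routing queries and keys to clusters and restricting each query to attend only within its cluster: with roughly sqrt(n) balanced clusters of size sqrt(n), cost falls from O(n^2 d) to O(n^1.5 d). The numpy sketch below illustrates the idea with a few batch Lloyd iterations in place of the paper's online k-means; it is a toy illustration of the routing pattern, not the trained attention layer.

```python
import numpy as np

def routing_attention(q, k, v, n_clusters, seed=0):
    """Sketch of content-based sparse attention via k-means routing.

    Queries and keys are assigned to the nearest of `n_clusters`
    centroids; each query attends only to the keys routed to its own
    cluster, so attention is sparse and content-dependent.
    """
    n, d = q.shape
    rng = np.random.default_rng(seed)
    # Toy centroid fit: a few Lloyd iterations over the keys
    # (the paper trains centroids with online k-means instead).
    centroids = k[rng.choice(n, n_clusters, replace=False)]
    for _ in range(5):
        assign_k = np.argmin(((k[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for c in range(n_clusters):
            members = k[assign_k == c]
            if len(members):
                centroids[c] = members.mean(0)
    assign_q = np.argmin(((q[:, None] - centroids[None]) ** 2).sum(-1), axis=1)

    out = np.zeros_like(v)
    for c in range(n_clusters):
        qi = np.where(assign_q == c)[0]
        ki = np.where(assign_k == c)[0]
        if len(qi) == 0 or len(ki) == 0:
            continue  # a query with no co-clustered keys gets a zero output here
        # Standard scaled dot-product attention, restricted to the cluster.
        scores = q[qi] @ k[ki].T / np.sqrt(d)
        weights = np.exp(scores - scores.max(-1, keepdims=True))
        weights /= weights.sum(-1, keepdims=True)
        out[qi] = weights @ v[ki]
    return out
```

Each of the ~sqrt(n) clusters does an attention computation of size ~sqrt(n) by sqrt(n), which is where the O(n^1.5 d) total comes from when clusters are balanced.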