VideoPoet: A Large Language Model for Zero-Shot Video Generation

Dan Kondratyuk
Lijun Yu
Xiuye Gu
Rachel Hornung
Hassan Akbari
Ming-Chang Chiu
Josh Dillon
Agrim Gupta
Meera Hahn
Anja Hauth
David Hendon
Alonso Martinez
Grant Schindler
Huisheng Wang
Jimmy Yan
Xuan Yang
Lu Jiang
arXiv preprint (2023)

Abstract

We present VideoPoet, a language model capable of synthesizing high-quality video, with matching audio, from a large variety of conditioning signals. VideoPoet employs a decoder-only transformer architecture that processes multimodal inputs, including images, videos, text, and audio. The training protocol follows that of Large Language Models (LLMs), consisting of two stages: pretraining and task-specific adaptation. During pretraining, VideoPoet incorporates a mixture of multimodal generative objectives within an autoregressive transformer framework. The pretrained LLM serves as a foundation that can be adapted for a range of video generation tasks. We present empirical results demonstrating the model's state-of-the-art capabilities in zero-shot video generation, specifically highlighting VideoPoet's ability to generate high-fidelity motions. Project page: http://sites.research.google/videopoet/
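To make the decoder-only, next-token setup described above concrete, here is a minimal sketch in PyTorch. It shows the general pattern only: discrete tokens from several modalities share one vocabulary, are concatenated into a single sequence, and are trained with a causal language-modeling objective. All names, vocabulary sizes, and dimensions below are illustrative assumptions and do not reflect the actual VideoPoet implementation or its tokenizers.

```python
# Hypothetical sketch of a decoder-only LM over a shared multimodal token
# space. Vocab sizes and model dimensions are made-up placeholders.
import torch
import torch.nn as nn

TEXT_VOCAB, VISUAL_VOCAB, AUDIO_VOCAB = 32_000, 8_192, 4_096
VOCAB = TEXT_VOCAB + VISUAL_VOCAB + AUDIO_VOCAB  # one shared token space

class TinyDecoderLM(nn.Module):
    def __init__(self, vocab=VOCAB, d_model=256, n_heads=4, n_layers=2,
                 max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        # An encoder stack with a causal mask behaves as a decoder-only
        # transformer (no cross-attention), which is all we need here.
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab)

    def forward(self, ids):
        b, t = ids.shape
        x = self.tok(ids) + self.pos(torch.arange(t, device=ids.device))
        mask = nn.Transformer.generate_square_subsequent_mask(t).to(ids.device)
        h = self.blocks(x, mask=mask, is_causal=True)
        return self.head(h)

# One training step: predict each token from its prefix. In a multimodal
# mixture, `seq` would interleave text, visual, and audio token ids.
model = TinyDecoderLM()
seq = torch.randint(0, VOCAB, (2, 128))  # stand-in mixed-modality tokens
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
loss.backward()
print(loss.item())
```

The single shared vocabulary is the key design point this sketch illustrates: once every modality is expressed as discrete tokens, one autoregressive model can mix conditioning and generation tasks freely during pretraining and be adapted downstream without architectural changes.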

Research Areas