Google Research

AssembleNet++: Assembling Modality Representations via Attention Connectivity

European Conference on Computer Vision (ECCV) (2020)

Abstract

We create a family of powerful video models that are able to: (i) learn interactions between semantic object information and raw appearance and motion features, and (ii) deploy attention in order to better learn the importance of features at each convolutional block of the network. A new network component named peer-attention is introduced, which dynamically learns the attention weights using another block or modality. Even without any pre-training, our models outperform previous work on standard public activity recognition datasets with continuous videos, establishing a new state of the art. We also confirm that our findings, namely the benefit of neural connectivity from the object modality and the use of peer-attention, are generally applicable to different existing architectures, improving their performance.
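The sketch below illustrates the peer-attention idea described in the abstract: channel-wise attention weights for one block are computed from a different (peer) block or modality rather than from the block's own features. The layer choices, pooling, and tensor shapes here are illustrative assumptions in the spirit of squeeze-and-excitation style gating, not the exact AssembleNet++ implementation.

```python
import torch
import torch.nn as nn


class PeerAttention(nn.Module):
    """Minimal sketch of peer-attention (assumed structure, not the paper's exact code).

    Attention weights applied to a target block's channels are predicted
    from a *peer* block's features, so one modality can modulate another.
    """

    def __init__(self, peer_channels: int, target_channels: int):
        super().__init__()
        # Map pooled peer features to one gating weight per target channel.
        self.fc = nn.Linear(peer_channels, target_channels)

    def forward(self, target: torch.Tensor, peer: torch.Tensor) -> torch.Tensor:
        # target, peer: (batch, channels, time, height, width) video features.
        pooled = peer.mean(dim=(2, 3, 4))            # global average pool over space-time
        weights = torch.sigmoid(self.fc(pooled))      # per-channel weights in [0, 1]
        return target * weights[:, :, None, None, None]  # reweight the target block


# Hypothetical usage: an object-segmentation block gates an appearance block.
attn = PeerAttention(peer_channels=64, target_channels=128)
appearance = torch.randn(2, 128, 8, 14, 14)
objects = torch.randn(2, 64, 8, 14, 14)
gated = attn(target=appearance, peer=objects)
```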
