Recognizing Multimodal Entailment (tutorial at ACL 2021)

Afsaneh Hajiamin Shirazi
Blaž Bratanič
Christina Liu
Gabriel Fedrigo Barcik
Georg Fritz Osang
Jared Frank
Lucas Smaira
Ricardo Abasolo Marino
Roma Patel
Vaiva Imbrasaite
(2021) (to appear)

Abstract

How information is created, shared and consumed has changed rapidly in recent decades, in part thanks to new social platforms and technologies on the web. With ever-larger amounts of unstructured data and limited labels, organizing and reconciling information from different sources and modalities is a central challenge in machine learning. This cutting-edge tutorial aims to introduce the multimodal entailment task, which can be useful for detecting semantic alignments when a single modality alone does not suffice for full content understanding. Starting with a brief overview of natural language processing, computer vision, structured data and neural graph learning, we lay the foundations for the multimodal sections to follow. We then discuss recent multimodal learning literature covering visual, audio and language streams, and explore case studies focusing on tasks that require fine-grained understanding of visual and linguistic semantics: question answering, veracity classification and hate speech classification. Finally, we introduce a new dataset for recognizing multimodal entailment, exploring it in a hands-on collaborative section. Overall, this tutorial gives an overview of multimodal learning, introduces a multimodal entailment dataset, and encourages future research on the topic.