EmoNets: Multimodal deep learning approaches for emotion recognition in video

Samira Ebrahimi Kahou
Xavier Bouthillier
Caglar Gulcehre
Vincent Michalski
Kishore Konda
Sébastien Jean
Pierre Froumenty
Yann Dauphin
Nicolas Boulanger-Lewandowski
Raul Chandias Ferrari
Mehdi Mirza
David Warde-Farley
Aaron Courville
Pascal Vincent
Roland Memisevic
Christopher Pal
Yoshua Bengio
Journal on Multimodal User Interfaces, 10 (2016), 99–111

Abstract

The task of the Emotion Recognition in the Wild (EmotiW) Challenge is to assign one of seven emotions to short video clips extracted from Hollywood-style movies. The videos depict acted-out emotions under realistic conditions with a large degree of variation in attributes such as pose and illumination, making it worthwhile to explore approaches that combine features from multiple modalities for label assignment. In this paper we present our approach of learning several specialist models using deep learning techniques, each focusing on one modality. Among these are a convolutional neural network, which captures visual information from detected faces; a deep belief net, which represents the audio stream; a K-Means-based “bag-of-mouths” model, which extracts visual features around the mouth region; and a relational autoencoder, which addresses spatio-temporal aspects of the video.
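To make the multimodal setup described above concrete, the following is a minimal, hypothetical sketch of late fusion over per-modality class probabilities, written in Python/NumPy. The `late_fusion` helper, the fusion weights, and the dummy per-model scores are illustrative assumptions for exposition; they are not the paper's actual models or aggregation scheme.

```python
import numpy as np

# The seven EmotiW emotion classes.
EMOTIONS = ["Angry", "Disgust", "Fear", "Happy", "Sad", "Surprise", "Neutral"]

def late_fusion(per_model_probs, weights=None):
    """Combine per-modality probability vectors by (weighted) averaging.

    per_model_probs: array of shape (n_models, n_classes)
    weights: optional array of shape (n_models,); uniform if omitted.
    """
    probs = np.asarray(per_model_probs, dtype=float)
    if weights is None:
        weights = np.ones(probs.shape[0])
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    fused = weights @ probs          # weighted average over the specialist models
    return fused / fused.sum()       # renormalise to a probability distribution

if __name__ == "__main__":
    # Dummy scores standing in for four specialist models (e.g. face CNN,
    # audio network, bag-of-mouths, spatio-temporal features) -- illustrative only.
    scores = np.random.dirichlet(np.ones(len(EMOTIONS)), size=4)
    fused = late_fusion(scores, weights=[0.4, 0.3, 0.15, 0.15])
    print("Predicted emotion:", EMOTIONS[int(np.argmax(fused))])
```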
