Large-scale representation learning from visually grounded untranscribed speech

Gabriel Ilharco Magalhaes
Proceedings of the Conference on Computational Natural Language Learning (CoNLL 2019)
Abstract

Systems that learn from associating images with their spoken audio captions are an important step towards visually grounded language acquisition. We describe a scalable method for automatically generating diverse audio data from image caption datasets. This supports pre-training deep networks for encoding both audio and images by training a dual encoder that learns to align latent representations of the two modalities. We fine-tune these models on the Flickr8k Audio Captions Corpus and obtain state-of-the-art retrieval results, improving top-10 retrieval from 29.6% to 49.5%. We additionally obtain human ratings on model outputs to better assess the impact of incidentally matching image-caption pairs that were not associated in the data, and find that strict corpus-based evaluation substantially underestimates the quality of the retrieved results.
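
To make the dual-encoder idea concrete, the sketch below shows one common way to align two modalities in a shared embedding space with an in-batch softmax objective. The encoder architectures, feature dimensions, and temperature here are illustrative assumptions for exposition, not the paper's exact configuration.

```python
# Minimal dual-encoder alignment sketch (assumed setup, not the paper's exact model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualEncoder(nn.Module):
    def __init__(self, image_dim=2048, audio_dim=1024, embed_dim=512):
        super().__init__()
        # Project pre-extracted image and audio features into a shared space.
        self.image_proj = nn.Linear(image_dim, embed_dim)
        self.audio_proj = nn.Linear(audio_dim, embed_dim)

    def forward(self, image_feats, audio_feats):
        # L2-normalize so the dot product acts as a cosine similarity.
        img = F.normalize(self.image_proj(image_feats), dim=-1)
        aud = F.normalize(self.audio_proj(audio_feats), dim=-1)
        return img, aud

def in_batch_alignment_loss(img, aud, temperature=0.07):
    # Similarity matrix between every image and every audio caption in the batch;
    # true (image, caption) pairs lie on the diagonal.
    logits = img @ aud.t() / temperature
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric cross-entropy: retrieve audio given image and image given audio.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```

At retrieval time, the same embeddings are reused: captions are ranked for a query image by cosine similarity, which is how a top-10 retrieval metric like the one reported above would be computed.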