Transformer-based Localization from Embodied Dialog with Large-scale Pre-training

James M. Rehg
The Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics, Association for Computational Linguistics (2022)

Abstract

We address the challenging task of Localization via Embodied Dialog (LED). Given a dialog between two agents, an Observer navigating an unknown environment and a Locator attempting to identify the Observer's location, the goal is to predict the Observer's final location on a map. We develop a novel LED-Bert architecture and present an effective pretraining strategy. We show that a graph-based scene representation is more effective than the top-down 2D maps used in prior work, and our approach outperforms previous baselines.
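As a rough illustration of the graph-based formulation described above, the sketch below shows one hypothetical way a transformer could score navigation-graph viewpoints against a dialog encoding and return a distribution over the Observer's possible locations. This is not the paper's LED-Bert implementation; all class names, dimensions, and the scoring head are illustrative assumptions.

```python
# Minimal, hypothetical sketch: score scene-graph nodes against a dialog
# encoding with a transformer. Not the actual LED-Bert model.
import torch
import torch.nn as nn

class ToyGraphLocalizer(nn.Module):
    def __init__(self, dim=64, n_heads=4, n_layers=2):
        super().__init__()
        # One transformer over the concatenated [dialog tokens; graph nodes]
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.score = nn.Linear(dim, 1)  # per-node localization score

    def forward(self, dialog_emb, node_emb):
        # dialog_emb: (B, T, dim) token embeddings of the Observer/Locator dialog
        # node_emb:   (B, N, dim) embeddings of navigation-graph viewpoints
        x = torch.cat([dialog_emb, node_emb], dim=1)
        x = self.encoder(x)
        node_states = x[:, dialog_emb.size(1):, :]     # keep only the node positions
        logits = self.score(node_states).squeeze(-1)   # (B, N)
        return logits.softmax(dim=-1)                  # distribution over viewpoints

# Example: a 12-token dialog and a scene graph with 8 viewpoints
model = ToyGraphLocalizer()
probs = model(torch.randn(1, 12, 64), torch.randn(1, 8, 64))
print(probs.argmax(dim=-1))  # index of the predicted Observer viewpoint
```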