Diverse Top-K Decoding for Non-Autoregressive Semantic Parsing via Intent Conditioning
Abstract
A semantic parser is a core component of modern virtual assistants such as Google Assistant and Amazon Alexa. While sequence-to-sequence autoregressive (AR) approaches are common for conversational semantic parsing, recent studies (Babu et al. 2021; Shrivastava et al. 2021) employ non-autoregressive (NAR) decoders to reduce inference latency while maintaining competitive parsing quality. However, a major drawback of NAR decoders is the difficulty of generating top-k outputs with approaches such as beam search. Due to the inherent ambiguity of natural language, generating diverse top-k outputs is essential for conversational semantic parsers. To address this challenge, we propose a novel NAR semantic parser that introduces intent conditioning on the decoder. Inspired by traditional intent and slot tagging parsers, we decouple the prediction of the first intent from the rest of the parse. Intent conditioning allows the model to better control beam search and improves the quality and diversity of top-k outputs. Since top-k labels are unavailable during training, we introduce a hybrid teacher-forcing approach to avoid a mismatch between training and inference. We evaluate our proposed approach on the conversational semantic parsing datasets TOP and TOPv2. Like existing NAR models, we maintain O(1) decoding time complexity while generating more diverse outputs and improving top-3 exact match (EM) by 2.4 points. Compared with AR models, our approach speeds up beam-search inference by 6.7 times on CPU with competitive top-k EM.