Shachi Paul
Authored Publications
A semantic parser is a core component of modern virtual assistants like Google Assistant and Amazon Alexa. While sequence-to-sequence based auto-regressive (AR) approaches are common for conversational semantic parsing, recent studies (Babu et al. 2021; Shrivastava et al. 2021) employ non-autoregressive (NAR) decoders to reduce inference latency while maintaining competitive parsing quality. However, a major drawback of NAR decoders is the difficulty of generating top-k outputs with approaches such as beam search. Due to the inherent ambiguity of natural language, generating diverse top-k outputs is essential for conversational semantic parsers. To address this challenge, we propose a novel NAR semantic parser that introduces intent conditioning on the decoder. Inspired by traditional intent and slot tagging parsers, we decouple the first intent prediction from the rest of the parse. The intent conditioning allows the model to better control beam search and improves the quality and diversity of top-k outputs. Since we do not have top-k labels during training, we introduce a hybrid teacher-forcing approach to avoid a mismatch between training and inference. We evaluate our proposed approach on the conversational semantic parsing datasets TOP and TOPv2. Like existing NAR models, we maintain O(1) decoding time complexity while generating more diverse outputs and improving top-3 exact match (EM) by 2.4 points. Compared with AR models, our approach speeds up beam-search inference by 6.7 times on CPU with competitive top-k EM.
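One rough way to picture the decoding scheme described above: the model first scores candidate top-level intents, then, conditioned on each candidate intent, fills in the rest of the parse non-autoregressively in a single parallel step. The sketch below only illustrates that control flow and is not the paper's implementation; the encoder, intent head, NAR decoder, vocabulary, and fixed output length are all placeholder assumptions.

```python
# Minimal sketch (not the authors' code) of intent-conditioned NAR decoding.
# `encode`, `intent_logits`, and `nar_decode` are hypothetical stand-ins.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["[IN:GET_WEATHER", "[IN:CREATE_ALARM", "[SL:DATE_TIME", "tomorrow", "]", "<pad>"]
NUM_INTENTS = 2          # assume the first two vocab entries are top-level intents
MAX_LEN = 6              # fixed output length -> constant number of decoding passes

def encode(utterance):
    """Stand-in encoder: returns a fixed-size 'hidden state' for the utterance."""
    return rng.standard_normal(16)

def intent_logits(hidden):
    """Stand-in intent head: scores each top-level intent."""
    return rng.standard_normal(NUM_INTENTS)

def nar_decode(hidden, intent_id):
    """Stand-in NAR decoder: emits logits for all positions in parallel,
    conditioned on the chosen top-level intent."""
    return rng.standard_normal((MAX_LEN, len(VOCAB))) + 0.1 * intent_id

def top_k_parses(utterance, k=3):
    hidden = encode(utterance)
    # 1) Decouple the first (intent) prediction from the rest of the parse.
    intents = np.argsort(-intent_logits(hidden))[:k]
    parses = []
    for intent in intents:
        # 2) Condition the NAR decoder on the intent; all remaining tokens are
        #    predicted in one parallel step, so each candidate costs one pass.
        logits = nar_decode(hidden, intent)
        tokens = [VOCAB[i] for i in logits.argmax(axis=-1)]
        parses.append([VOCAB[intent]] + tokens)
    return parses

for parse in top_k_parses("wake me up tomorrow", k=2):
    print(" ".join(parse))
```

Because each candidate parse needs only one intent score and one parallel decoding pass, the number of decoder invocations does not grow with output length, which is the O(1) decoding property the abstract refers to.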
Understanding tables is an important aspect of natural language understanding. Existing models for table understanding require some level of linearization of table contents, in which row or column order is encoded as an unwanted bias. Such spurious biases make the model vulnerable to row and column order perturbations. Prior work also did not explicitly and thoroughly model structural biases, hindering table-text modeling ability. In this work, we propose TableFormer, a robust table-text encoding architecture in which tabular structural biases are incorporated entirely through learnable attention biases. TableFormer is invariant to row and column order and understands tables better thanks to its tabular inductive biases. Experiments show that TableFormer outperforms strong baselines in all settings on the SQA, WTQ, and TabFact table reasoning datasets, and achieves state-of-the-art performance on SQA, especially under answer-invariant row and column perturbations (a 6% improvement over the best baseline): previous SOTA models drop by 4% to 6% under such perturbations, while TableFormer is unaffected.
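One way to read the "learnable attention biases" idea: instead of encoding absolute row and column indices, each attention head adds a learned scalar bias that depends only on the structural relation between two tokens (same row, same column, cell versus header, and so on), so permuting rows or columns cannot change the attention pattern. The PyTorch sketch below illustrates that mechanism under assumed names (StructuralBiasAttention, RELATIONS, rel_ids); it is not the released TableFormer code, and the relation taxonomy is simplified.

```python
# Minimal sketch (not the TableFormer release) of attention with learnable
# structural biases in place of row/column position embeddings.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative relation taxonomy; the paper's set of relations differs.
RELATIONS = ["other", "same_row", "same_column", "cell_to_header", "header_to_cell"]

class StructuralBiasAttention(nn.Module):
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, 3 * dim)
        self.out = nn.Linear(dim, dim)
        # One learnable scalar bias per (head, relation type). Because the bias
        # depends only on the relation between two tokens, shuffling rows or
        # columns leaves the attention scores unchanged.
        self.rel_bias = nn.Parameter(torch.zeros(num_heads, len(RELATIONS)))

    def forward(self, x, rel_ids):
        # x: (batch, seq, dim); rel_ids: (batch, seq, seq) integer relation types
        b, n, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        bias = self.rel_bias[:, rel_ids]            # (heads, batch, seq, seq)
        scores = scores + bias.permute(1, 0, 2, 3)  # (batch, heads, seq, seq)
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.out(out)

# Toy usage with a single relation type everywhere.
x = torch.randn(1, 5, 32)
rel_ids = torch.zeros(1, 5, 5, dtype=torch.long)
layer = StructuralBiasAttention(dim=32)
print(layer(x, rel_ids).shape)  # torch.Size([1, 5, 32])
```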