Efficient Encoders for Incremental Sequence Tagging

Aditya Gupta
Ayush Kaushal
Manaal Faruqui
ARR (2023)

Abstract

A baseline method for running bidirectional models like BERT in a streaming NLU setting is to re-run the model for each new (sub)token received: no previously computed features are reused, and the encoder restarts from scratch at every timestep on the newly extended prefix. This leads to computational inefficiency, measured as FLOP count (lower is better). Our method addresses this issue by reducing the FLOP count of producing bidirectional features in the streaming setting, while also improving performance and generalization on incomplete inputs (partials). It has two components: a partially bidirectional encoder model and an adapter that guides the restarts of the bidirectional layers. Our evaluations on four sequence tagging datasets show these gains while maintaining similar performance on complete inputs.
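To make the baseline concrete, below is a minimal sketch of restart-from-scratch streaming inference with a bidirectional encoder, written against the Hugging Face transformers API. The model choice and the token stream are illustrative assumptions, not the paper's exact setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# Illustrative assumptions: the checkpoint and the input stream
# are placeholders, not the paper's experimental configuration.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased").eval()

stream = ["play", "the", "latest", "episode"]  # hypothetical streaming input
prefix_ids: list[int] = []

with torch.no_grad():
    for token in stream:
        # Each new (sub)token extends the prefix ...
        prefix_ids += tokenizer(token, add_special_tokens=False)["input_ids"]
        ids = torch.tensor(
            [[tokenizer.cls_token_id, *prefix_ids, tokenizer.sep_token_id]]
        )
        # ... and the baseline re-encodes the whole prefix from scratch:
        # nothing computed at the previous timestep is reused.
        features = model(input_ids=ids).last_hidden_state
```

Because the full prefix is re-encoded at every timestep, the total work grows quadratically with the stream length.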
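The FLOP-count gap can be seen with a back-of-the-envelope count of encoder token passes. The sketch below compares restarting at every step against a cached unidirectional encoder that pays for a full re-encode only at restart steps; the restart schedule here is a hypothetical input, not the adapter's actual policy.

```python
def restart_every_step(T: int) -> int:
    """Token passes when the encoder restarts at every timestep:
    the length-t prefix is re-encoded at step t, so 1 + 2 + ... + T."""
    return T * (T + 1) // 2


def cached_with_restarts(T: int, restart_steps: tuple[int, ...] = ()) -> int:
    """Token passes for a cached unidirectional encoder: each token is
    processed once, plus a full re-encode of the length-t prefix at each
    (hypothetical) restart step t chosen by an adapter."""
    return T + sum(restart_steps)


T = 32
print(restart_every_step(T))             # 528 token passes
print(cached_with_restarts(T, (8, 24)))  # 32 + 8 + 24 = 64 token passes
```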