MobileBERT: a Compact Task-Agnostic BERT for Resource-Limited Devices

Zhiqing Sun
Xiaodan Song
Renjie Liu
Yiming Yang
ACL (2020) (to appear)

Abstract

Natural Language Processing (NLP) has recently achieved great success by using huge pre-trained models with hundreds of millions of parameters. However, these models suffer from large model sizes and high latency and therefore cannot be deployed on resource-limited mobile devices. In this paper, we propose MobileBERT for compressing and accelerating the popular BERT model. Like the original BERT, MobileBERT is task-agnostic; that is, it can be generically applied to various downstream NLP tasks via simple fine-tuning. Basically, MobileBERT is a thin version of $\text{BERT}_\text{LARGE}$, equipped with bottleneck structures and a carefully designed balance between self-attention and feed-forward networks. To train MobileBERT, we first train a specially designed teacher model, an inverted-bottleneck incorporated $\text{BERT}_\text{LARGE}$ model, and then conduct knowledge transfer from this teacher to MobileBERT. Empirical studies show that MobileBERT is 4.3$\times$ smaller and 5.5$\times$ faster than $\text{BERT}_\text{BASE}$ while achieving competitive results on well-known benchmarks. On the natural language inference tasks of GLUE, MobileBERT achieves a GLUE score of $77.7$ ($0.6$ lower than $\text{BERT}_\text{BASE}$) and a latency of 62 ms on a Pixel 4 phone. On the SQuAD v1.1/v2.0 question answering tasks, MobileBERT achieves a dev F1 score of $90.0$/$79.2$ ($1.5$/$2.1$ higher than $\text{BERT}_\text{BASE}$).
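
As a rough illustration of the bottleneck structure mentioned above, the following is a minimal PyTorch sketch of a MobileBERT-style transformer block. The class name BottleneckTransformerBlock and the exact wiring are assumptions made for illustration; the dimensions (512-dimensional inter-block features, a 128-dimensional intra-block bottleneck, 4 attention heads, 4 stacked feed-forward networks) approximately follow the configuration reported in the paper. This is a simplified sketch under those assumptions, not the authors' implementation.

    # Illustrative sketch of a MobileBERT-style bottleneck block (assumed
    # structure, not the released implementation).
    import torch
    import torch.nn as nn

    class BottleneckTransformerBlock(nn.Module):
        def __init__(self, inter_size=512, intra_size=128,
                     num_heads=4, ffn_size=512, num_ffn=4):
            super().__init__()
            # Linear bottlenecks project the wide inter-block features down to
            # the narrow intra-block width and back up again.
            self.bottleneck_in = nn.Linear(inter_size, intra_size)
            self.bottleneck_out = nn.Linear(intra_size, inter_size)
            # Multi-head self-attention operates in the narrow intra-block space.
            self.attention = nn.MultiheadAttention(intra_size, num_heads,
                                                   batch_first=True)
            self.attn_norm = nn.LayerNorm(intra_size)
            # Several stacked feed-forward networks rebalance the ratio of
            # feed-forward to attention parameters inside the thin block.
            self.ffns = nn.ModuleList([
                nn.Sequential(nn.Linear(intra_size, ffn_size), nn.ReLU(),
                              nn.Linear(ffn_size, intra_size))
                for _ in range(num_ffn)
            ])
            self.ffn_norms = nn.ModuleList(
                [nn.LayerNorm(intra_size) for _ in range(num_ffn)])
            self.out_norm = nn.LayerNorm(inter_size)

        def forward(self, hidden):                  # hidden: (batch, seq, inter_size)
            x = self.bottleneck_in(hidden)          # down-project to intra_size
            attn_out, _ = self.attention(x, x, x)   # self-attention in the bottleneck
            x = self.attn_norm(x + attn_out)
            for ffn, norm in zip(self.ffns, self.ffn_norms):
                x = norm(x + ffn(x))                # stacked FFNs with residuals
            # Up-project and add the residual in the wide inter-block space.
            return self.out_norm(hidden + self.bottleneck_out(x))

The intent of this sketch is only to show the shape of the design: the down- and up-projections keep the block thin between layers, while the stacked feed-forward networks restore a reasonable balance between self-attention and feed-forward computation that a single narrow FFN would otherwise skew.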