Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.

Ensemble Distillation for BERT-Based Ranking Models
Shuguang Han
Mike Bendersky
Proceedings of the 2021 ACM SIGIR International Conference on the Theory of Information Retrieval (ICTIR ’21)
How multilingual is Multilingual BERT?
Telmo Pires
Eva Schlinger
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics (2019)
Sparse Mixers: Combining MoE and Mixing to build a more efficient BERT
James Patrick Lee-Thorp
Joshua Ainslie
Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, pp. 58–75
What Happens To BERT Embeddings During Fine-tuning?
Amil Merchant
Elahe Rahimtoroghi
Proceedings of the 2020 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, Association for Computational Linguistics (to appear)
The MultiBERTs: BERT Reproductions for Robustness Analysis
Steve Yadlowsky
Jason Wei
Naomi Saphra
Iulia Raluca Turc
2022
Assessing ASR Model Quality on Disordered Speech using BERTScore
Qisheng Li
Katie Seaver
Richard Jonathan Noel Cave
Katrin Tomanek
Proc. 1st Workshop on Speech for Social Good (S4SG) (2022), pp. 26-30 (to appear)
Jigsaw @ AMI and HaSpeeDe2: Fine-Tuning a Pre-Trained Comment-Domain BERT Model
Alyssa Whitlock Lees
Ian Kivlichan
Proceedings of Seventh Evaluation Campaign of Natural Language Processing and Speech Tools for Italian. Final Workshop (EVALITA 2020), CEUR.org, Online (to appear)