Junwen Bai

I'm a Research Scientist at Google. I am interested in the general areas of machine learning and language technology, with a focus on sequence representation learning and probabilistic modeling, often in low-supervision scenarios. I have developed scalable, general machine learning methods for real-world problems including automatic speech recognition, climate change, and scientific discovery.
Authored Publications
    Joint Unsupervised and Supervised Training for Multilingual ASR
    Yu Zhang
    IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE (2022), pp. 6402-6406
    Abstract: Self-supervised training has shown promising gains in pretraining models and facilitating downstream finetuning for speech recognition. Effective self-supervised losses designed for large-scale unlabeled data can help learn useful latent structures. Most existing methods adopt a 2-stage scheme in which the self-supervised loss is optimized in the first pretraining stage, and standard supervised finetuning resumes in the second stage. However, pretrained checkpoint selection is known to be tricky and tedious, and pure finetuning can cause catastrophic forgetting of the learned representations. To address these concerns, we propose an end-to-end (E2E) Joint Unsupervised and Supervised Training (JUST) method that combines the supervised RNN-T loss with the self-supervised contrastive and masked language modeling (MLM) losses. We apply our method to a challenging multilingual automatic speech recognition (ASR) task and validate its performance on the public dataset Multilingual LibriSpeech (MLS), which includes 8 languages and is extremely imbalanced. On MLS, we explore (1) JUST trained from scratch, and (2) JUST finetuned from a pretrained checkpoint. Experiments show that JUST consistently outperforms other existing state-of-the-art (SOTA) methods by 10%, and beats the monolingual baseline by a significant margin, demonstrating JUST's capability of handling low-resource languages in multilingual ASR. Our average WER across all languages outperforms monolingual baselines by 33.3%, and the state-of-the-art 2-stage XLSR by 32%. On low-resource languages like Polish, our WER is less than half of the monolingual WER baseline and even beats the supervised transfer learning method using external supervision.
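    The core idea of joint training described in the abstract can be sketched as a single weighted objective over the supervised and self-supervised losses. This is a minimal illustration, not the paper's implementation: the loss functions and weight values below are hypothetical stand-ins for the RNN-T, contrastive, and MLM terms.

    ```python
    # Hypothetical sketch of the JUST-style joint objective: a weighted sum
    # of a supervised loss and two self-supervised losses, optimized jointly
    # end-to-end rather than in two separate pretrain/finetune stages.
    def joint_loss(rnnt_loss: float, contrastive_loss: float, mlm_loss: float,
                   w_sup: float = 1.0, w_con: float = 0.1, w_mlm: float = 0.1) -> float:
        """Combine supervised and self-supervised losses into one objective.

        The weights are illustrative hyperparameters, not values from the paper.
        """
        return w_sup * rnnt_loss + w_con * contrastive_loss + w_mlm * mlm_loss

    # Example with scalar stand-ins for per-batch loss values:
    total = joint_loss(2.5, 1.0, 1.5)  # 1.0*2.5 + 0.1*1.0 + 0.1*1.5 = 2.75
    ```

    Because all three losses contribute to every gradient step, there is no separate pretrained-checkpoint selection, and the self-supervised terms keep regularizing the representations during supervised training.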