- Ajit Apte
- Allen Wu
- Ambarish Jash
- Amol H Wankhede
- Ankit Kumar
- Ayooluwakunmi Jeje
- Dima Kuzmin
- Ellie Ka In Chio
- Harry Fung
- Heng-Tze Cheng
- Jon Effrat
- Moustafa Farid Taha Mohammed Alzantot
- Nitin Jindal
- Pei Cao
- Santiago Ontanon
- Sarvjeet Singh
- Senqiang Zhou
- Sukhdeep S. Sodhi
- Tameen Khan
- Tarush Bali
- Tushar Deepak Chandra
Abstract
As more and more online search queries come from voice, automatic speech recognition (ASR) becomes a key component in delivering relevant search results. Errors introduced by ASR lead to irrelevant results being returned to the user and thus to user dissatisfaction. In this paper, we introduce Mondegreen, an approach to correcting voice queries in text space without depending on audio signals, which may not always be available due to system constraints or privacy and bandwidth considerations (for example, some ASR systems run on-device). We focus on voice queries transcribed by several proprietary commercial ASR systems; these queries come from users issuing internet or online-service search queries. We first present an analysis showing how much the language distribution of user voice queries differs from that of the traditional text corpora used to train off-the-shelf ASR systems. We then demonstrate that Mondegreen achieves significant increases in user interaction by correcting user voice queries in one of the largest search systems at Google. Finally, we see Mondegreen as complementing existing, highly optimized production ASR systems, which may not be retrained frequently and can therefore lag behind due to vocabulary drift.