
Bhuvana Ramabhadran
Bhuvana Ramabhadran (IEEE Fellow 2017, ISCA Fellow 2017) currently leads a team of researchers at Google, focusing on semi-supervised learning for speech recognition and multilingual speech recognition. Previously, she was a Distinguished Research Staff Member and Manager in IBM Research AI at the IBM T. J. Watson Research Center, Yorktown Heights, NY, USA, where she led a team of researchers in the Speech Technologies Group and coordinated activities across IBM’s worldwide laboratories in the areas of speech recognition, synthesis, and spoken term detection. She has served as an elected member of the IEEE SPS Speech and Language Technical Committee (SLTC) for two terms since 2010, as its elected Vice Chair and Chair (2014–2016), and currently serves as an Advisory Member. She has served as the Area Chair for ICASSP (2011–2018), on the editorial board of the IEEE Transactions on Audio, Speech, and Language Processing (2011–2015), on the IEEE SPS conference board (2017–2018), during which she also served as the conference board’s liaison with the ICASSP organizing committees, and as Regional Director-At-Large (2018–2020), where she coordinated work across all US IEEE chapters. She currently serves as the Chair of the IEEE Flanagan Speech & Audio Award Committee and as a Member-at-Large of the IEEE SPS Board of Governors. She serves on the International Speech Communication Association (ISCA) board and has served as an area chair for Interspeech conferences since 2012. In addition to organizing several workshops at ICML, HLT-NAACL, and NeurIPS, she has also served as an adjunct professor at Columbia University, where she co-taught a graduate course on speech recognition. She has served as the (Co-)Principal Investigator on several projects funded by the National Science Foundation, the EU, and IARPA, spanning speech recognition, information retrieval from spoken archives, and keyword spotting in many languages. She has published over 150 papers and been granted over 40 U.S. patents. Her research interests include speech recognition and synthesis algorithms, statistical modeling, signal processing, and machine learning. Some of her recent work has focused on the use of speech synthesis to improve core speech recognition performance and on self-supervised learning.
Authored Publications
Virtuoso: Massive Multilingual Speech-Text Joint Semi-Supervised Learning for Text-to-Speech
Takaaki Saeki
Zhehuai Chen
Nobuyuki Morioka
Yu Zhang
ICASSP (2023)
Twenty-Five Years of Evolution in Speech and Language Processing
Michael Picheny
Dilek Hakkani-Tur
IEEE Signal Processing Magazine, 40 (2023), pp. 27-39
Reducing domain mismatch in self-supervised speech pretraining
Yu Zhang
Interspeech 2022 (2022) (to appear)
Multilingual Second-Pass Rescoring for Automatic Speech Recognition Systems
Pedro Moreno Mengibar
ICASSP (2022)
On Weight Interpolation of the Hybrid Autoregressive Transducer Model
David Rybach
Interspeech 2022 (2022) (to appear)
Maestro-U: Leveraging joint speech-text representation learning for zero supervised speech ASR
Zhehuai Chen
Yu Zhang
Pedro Moreno Mengibar
Nanxin Chen
IEEE SLT (2022)
Ask2Mask: Guided Data Selection for Masked Speech Modeling
Pedro Jose Moreno Mengibar
Yu Zhang
IEEE Journal of Selected Topics in Signal Processing (2022)
MAESTRO: Matched Speech Text Representations through Modality Matching
Pedro Jose Moreno Mengibar
Yu Zhang
Zhehuai Chen
Interspeech 2022 (2022) (to appear)