Joint Speech Recognition and Speaker Diarization via Sequence Transduction

Hagen Soltau
Proc. Interspeech (2019) (to appear)

Abstract

Speech applications dealing with conversations require not only recognizing the spoken words but also determining who spoke when. The task of assigning words to speakers is typically addressed by merging the outputs of two separate systems: an automatic speech recognition (ASR) system and a speaker diarization (SD) system. The two systems are trained independently with different objective functions. Often the SD systems operate directly on the acoustics and are not constrained to respect word boundaries; this deficiency is overcome in an ad hoc manner. Motivated by recent advances in sequence-to-sequence learning, we propose a novel approach that tackles the two tasks jointly with a single recurrent neural network transducer (RNN-T). Our approach utilizes both linguistic and acoustic cues to infer speaker roles, as opposed to typical SD subsystems, which rely only on acoustic cues. We evaluate the performance of our model on a large corpus of medical conversations between physicians and patients and find that our approach reduces the word-level diarization error rate by about 86% relative to a competitive conventional baseline.
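
To make the joint formulation concrete, the sketch below shows one way reference transcripts for a two-party conversation could be serialized into a single target sequence in which speaker-role tags are interleaved with the words, so that one transducer model predicts both. This is only an illustrative assumption, not the authors' implementation: the tag strings (e.g. "<spk:doctor>"), their placement after each turn, and the helper function name are hypothetical.

```python
# Minimal sketch (assumed, not the paper's code): serialize a two-party
# conversation into one token sequence with interleaved speaker-role tags,
# the kind of target an RNN-T could be trained to emit from audio alone.

from typing import List, Tuple


def serialize_turns(turns: List[Tuple[str, str]]) -> List[str]:
    """Turn (speaker_role, words) pairs into a single target token sequence.

    Each turn contributes its words followed by a role tag marking who spoke
    them; a joint model trained on such targets can draw on both acoustic and
    linguistic cues when deciding where to place the role tags.
    """
    tokens: List[str] = []
    for role, text in turns:
        tokens.extend(text.split())
        tokens.append(f"<spk:{role}>")  # hypothetical role-tag format
    return tokens


if __name__ == "__main__":
    conversation = [
        ("doctor", "how are you feeling today"),
        ("patient", "much better thank you"),
    ]
    print(serialize_turns(conversation))
    # ['how', 'are', 'you', 'feeling', 'today', '<spk:doctor>',
    #  'much', 'better', 'thank', 'you', '<spk:patient>']
```

With targets of this form, diarization reduces to predicting a few extra tokens in the same output vocabulary as the words, which is what allows a single sequence transducer to handle both tasks without a separate diarization subsystem.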