An Investigation Into On-device Personalization of End-to-end Automatic Speech Recognition Models
Abstract
Speaker-independent speech recognition systems trained with data from many users are generally robust against speaker variability and work well for many unseen speakers. However, such systems still do not generalize well to users with very different speech characteristics. This issue can be addressed by building a personalized system that works well for each specific user. In this paper, we investigate securely training personalized end-to-end speech recognition models on mobile devices, so that user data and models are kept on the device without communicating with a server. We study how the mobile training environment impacts performance by simulating on-device data consumption. We conduct experiments using data collected from speech-impaired users for personalization. Our results show that personalization achieved a 63.7% relative word error rate reduction when trained in a server environment and 58.1% in a mobile environment. Moving to on-device personalization resulted in an 18.7% performance degradation, in exchange for improved scalability and data privacy. To train the model on device, we split the gradient computation into two passes and achieved a 45% memory reduction at the expense of a 42% increase in training time.
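The memory-for-time trade-off from splitting the gradient computation can be illustrated with a minimal sketch. This is not the paper's implementation (the actual models are end-to-end ASR networks); it is a plain-NumPy toy with a hypothetical two-block network, showing how backpropagating in two passes lets the hidden activation be freed early and recomputed, reducing peak memory while adding compute:

```python
import numpy as np

# Hypothetical 2-block network: h = ReLU(x @ W1), y = h @ W2.
# All names (W1, W2, train_step_*) are illustrative, not from the paper.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W1 = rng.standard_normal((8, 16)) * 0.1
W2 = rng.standard_normal((16, 2)) * 0.1
g_y = np.ones((4, 2))  # upstream gradient of the loss w.r.t. y

def train_step_cached(x, g_y):
    """One-pass backprop: the activation h stays in memory throughout."""
    h = np.maximum(x @ W1, 0.0)          # cached for the whole backward pass
    gW2 = h.T @ g_y                      # gradient for block 2
    g_pre = (g_y @ W2.T) * (h > 0)       # backprop through ReLU
    gW1 = x.T @ g_pre                    # gradient for block 1
    return gW1, gW2

def train_step_split(x, g_y):
    """Two-pass backprop: free h after block 2, recompute it for block 1.
    Lower peak memory, extra forward compute (more training time)."""
    # Pass 1: gradients for block 2, then drop the activation.
    h = np.maximum(x @ W1, 0.0)
    gW2 = h.T @ g_y
    g_h = g_y @ W2.T
    del h                                # peak memory reduced here
    # Pass 2: recompute block 1's forward output and backprop through it.
    h = np.maximum(x @ W1, 0.0)          # recomputation = extra time
    gW1 = x.T @ (g_h * (h > 0))
    return gW1, gW2

a = train_step_cached(x, g_y)
b = train_step_split(x, g_y)
# Both strategies produce identical gradients.
assert all(np.allclose(u, v) for u, v in zip(a, b))
```

In a real on-device trainer the same idea applies per layer group (as in gradient checkpointing): only the split boundary's inputs are kept, and everything between boundaries is recomputed during the backward pass.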