- Aishanee Shah
- Andrew Hard
- Cameron Nguyen
- Ignacio Lopez Moreno
- Kurt Partridge
- Niranjan Subrahmanya
- Pai Zhu
- Rajiv Mathews
Interspeech (2020)
We demonstrate that a production-quality keyword-spotting model can be trained on-device using federated learning and achieve false-accept and false-reject rates comparable to those of a centrally trained model. To overcome the algorithmic constraints of fitting on-device data (which are inherently non-independent and identically distributed), we conduct thorough empirical studies of optimization algorithms and hyperparameter configurations using large-scale federated simulations. We also explore techniques for utterance augmentation and data labeling to overcome the physical limitations of on-device training.
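For readers unfamiliar with the training setup the abstract describes, the core federated learning loop (federated averaging) can be sketched as follows. This is a minimal illustrative example on a linear model, not the paper's actual keyword-spotting implementation; `local_update`, the learning rate, and the synthetic non-IID clients are all assumptions made for illustration.

```python
import numpy as np

def local_update(weights, X, y, lr=0.05, epochs=1):
    """One client's local gradient descent on a linear model (squared loss).

    In federated learning this runs on-device; only the updated
    weights (not the raw data) are sent back to the server.
    """
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, client_datasets, rounds=20):
    """Server loop: broadcast weights, collect client updates,
    and average them weighted by each client's dataset size."""
    for _ in range(rounds):
        updates, sizes = [], []
        for X, y in client_datasets:
            updates.append(local_update(global_w, X, y))
            sizes.append(len(y))
        total = sum(sizes)
        global_w = sum(n / total * w for n, w in zip(sizes, updates))
    return global_w

# Two synthetic clients with different input distributions (non-IID),
# mimicking the heterogeneity of on-device data.
rng = np.random.default_rng(0)
true_w = np.array([1.0, -2.0])
clients = []
for shift in (0.0, 3.0):
    X = rng.normal(shift, 1.0, size=(50, 2))
    clients.append((X, X @ true_w))

w0 = np.zeros(2)
w = federated_averaging(w0, clients)
```

The weighted average keeps clients with more data from being drowned out, while the non-IID shift between clients is exactly the regime the paper studies with larger-scale simulations.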