FEDAQT: ACCURATE QUANTIZED TRAINING WITH FEDERATED LEARNING

Renkun Ni
Yonghui Xiao
Oleg Rybakov
Phoenix Meadowlark
Tom Goldstein

Abstract

Federated learning has been widely used to train automatic speech recognition models, where the training procedure is decentralized to client devices so that the training data stays local and data privacy concerns are avoided. However, the limited computational resources on client devices prevent training with large models. Recently, quantization-aware training has shown the potential to train a quantized neural network with performance comparable to the full-precision model while keeping the model size small and inference fast. However, these quantization methods do not save memory during training because they still maintain a full-precision copy of the model. To address this issue, we propose a new quantized training framework for federated learning that reduces memory usage by training directly with quantized variables on local devices. We empirically show that our method achieves comparable word error rate (WER) while using only 60% of the memory required by the full-precision model.
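To make the memory argument concrete, the sketch below (not the paper's implementation; all names and the int8 scheme are illustrative assumptions) contrasts standard quantization-aware training, which keeps a full-precision master copy of the weights, with training that stores only quantized variables and dequantizes on the fly.

```python
# Minimal sketch, assuming symmetric per-tensor int8 quantization.
import numpy as np

def quantize_int8(w, scale):
    """Map float weights to int8 codes (1 byte per parameter)."""
    return np.clip(np.round(w / scale), -127, 127).astype(np.int8)

def dequantize(q, scale):
    """Recover approximate float weights from int8 codes."""
    return q.astype(np.float32) * scale

w_fp32 = np.random.randn(4, 4).astype(np.float32)   # example weight tensor
scale = np.abs(w_fp32).max() / 127.0
grad = np.random.randn(4, 4).astype(np.float32)     # stand-in gradient

# Standard quantization-aware training: the fp32 master copy stays in memory
# (4 bytes/param); the forward pass uses "fake-quantized" weights.
w_fake_quant = dequantize(quantize_int8(w_fp32, scale), scale)
w_fp32 -= 0.01 * grad                                # update hits the fp32 copy

# Quantized-variable training: only the int8 codes are stored (1 byte/param);
# weights are dequantized for the update and immediately re-quantized.
q_w = quantize_int8(w_fp32, scale)
w_tmp = dequantize(q_w, scale) - 0.01 * grad
q_w = quantize_int8(w_tmp, scale)
```

In the second scheme the persistent state is the int8 tensor rather than an fp32 master copy, which is the source of the training-time memory savings the abstract refers to.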