Zero-Shot Cross-Domain Knowledge Distillation: A Case Study on YouTube Music

Srivaths Ranganathan
Chieh Lo
Bernardo Cunha
Li Wei
Aniruddh Nath
Shawn Andrews
Gergo Varady
Yanwei Song
Jochen Klingenhoefer
2025

Abstract

Knowledge Distillation (KD) has been widely used to improve the quality of latency-sensitive models serving live traffic. However, applying KD in production recommender systems with low traffic is challenging: the limited amount of data restricts the teacher model size, and the cost of training a large dedicated teacher may not be justified. Cross-domain KD offers a cost-effective alternative by leveraging a teacher from a data-rich source domain, but it introduces unique technical difficulties, as the features, user interfaces, and prediction tasks can differ significantly.
We present a case study of using zero-shot cross-domain KD for multi-task ranking models, transferring knowledge from a large-scale video recommendation platform (YouTube), with roughly 100X the traffic, to a music recommendation application with significantly lower traffic. We present offline and live experiment results and share learnings from evaluating different KD techniques in this setting across two ranking models on the YouTube Music application. Our results demonstrate that zero-shot cross-domain KD is a practical and effective approach to improving the performance of a ranking model on a low-traffic surface.
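As a rough illustration of the kind of objective the abstract refers to, the sketch below combines a hard-label loss on the target (music) domain with a distillation term toward soft labels from a source-domain teacher that is never fine-tuned on the target domain (the zero-shot setting). The function name, the single binary task, and the `alpha` weighting are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def cross_domain_kd_loss(student_logits, hard_labels, teacher_probs, alpha=0.5):
    """Illustrative KD objective for one binary ranking task.

    student_logits: [batch] raw logits from the student ranking head.
    hard_labels:    [batch] observed labels on the target (music) domain.
    teacher_probs:  [batch] soft predictions from the source-domain teacher,
                    assumed pre-computed for the same examples.
    alpha:          weight on the distillation term (a tuning assumption).
    """
    # Standard supervised loss on target-domain labels.
    hard_loss = F.binary_cross_entropy_with_logits(student_logits, hard_labels)
    # Distillation term: pull the student toward the teacher's soft labels.
    soft_loss = F.binary_cross_entropy_with_logits(student_logits, teacher_probs)
    return (1.0 - alpha) * hard_loss + alpha * soft_loss

# Example usage with dummy data for four scored items.
logits = torch.randn(4)
labels = torch.tensor([1.0, 0.0, 0.0, 1.0])
teacher = torch.tensor([0.9, 0.2, 0.1, 0.7])
loss = cross_domain_kd_loss(logits, labels, teacher)
```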