- Ankit Kumar
- Cosmo Du
- Dima Kuzmin
- Ed H. Chi
- Ellie Chio
- Heng-Tze Cheng
- John Roberts Anderson
- Li Zhang
- Nitin Jindal
- Pei Cao
- Ritesh Agarwal
- Sarvjeet Singh
- Steffen Rendle
- Tao Wu
- Tushar Deepak Chandra
- Wen Li
- Xiang Ma
Abstract
Most search retrieval and recommender systems predict top-K items given a query by learning directly from a large training set of (query, item) pairs, where a query can include natural language (NL), user, and context features. These approaches fall into the traditional supervised learning framework where the algorithm trains on labeled data from the target task. In this paper, we propose a new zero-shot transfer learning framework, which first learns representations of items and their NL features by predicting (item, item) correlation graphs as an auxiliary task, followed by transferring learned representations to solve the target task (query-to-item prediction), without having seen any (query, item) pairs in training. The advantages of applying this new framework include: (1) Cold-starting search and recommenders without abundant query-item data; (2) Generalizing to previously unseen or rare (query, item) pairs and alleviating the "rich get richer" problem; (3) Transferring knowledge of (item, item) correlation from domains outside of search. We show that the framework is effective on a large-scale search and recommender system.
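The abstract's two-stage idea (learn item representations from an (item, item) correlation graph, then reuse the shared NL encoder to embed queries) can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: the `NLEncoder` bag-of-words tower, the tiny shared vocabulary, the co-engagement pairs, and the in-batch softmax loss are all illustrative stand-ins for the paper's models and data. It only shows the mechanism that makes zero-shot transfer possible, namely that items and queries are encoded through the same NL feature space, so a query can be scored against items without any (query, item) training pairs.

```python
# Minimal sketch of the zero-shot transfer idea (illustrative only).
# Stage 1: train a two-tower model on (item, item) correlation pairs.
# Stage 2: embed a query with the same NL encoder and retrieve items,
# having never trained on (query, item) pairs.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Shared NL token vocabulary used by both item features and queries (assumed toy data).
VOCAB = {w: i for i, w in enumerate(
    ["jazz", "piano", "guitar", "rock", "classical", "violin", "blues", "solo"])}
EMB_DIM = 16


def encode_tokens(words):
    """Map a list of NL tokens to an id tensor (shared by items and queries)."""
    return torch.tensor([VOCAB[w] for w in words if w in VOCAB], dtype=torch.long)


def batch(token_lists):
    """Flatten variable-length token lists into (ids, offsets) for EmbeddingBag."""
    ids = torch.cat([encode_tokens(t) for t in token_lists])
    lengths = [len([w for w in t if w in VOCAB]) for t in token_lists]
    offsets = torch.tensor([0] + lengths[:-1], dtype=torch.long).cumsum(0)
    return ids, offsets


class NLEncoder(nn.Module):
    """Bag-of-words encoder over the shared NL vocabulary (a stand-in tower)."""

    def __init__(self, vocab_size, dim):
        super().__init__()
        self.emb = nn.EmbeddingBag(vocab_size, dim, mode="mean")

    def forward(self, token_ids, offsets):
        return F.normalize(self.emb(token_ids, offsets), dim=-1)


# Auxiliary-task data: items with NL features and (item, item) correlation edges,
# e.g. co-engagement from a domain outside of search (toy values).
item_nl = {
    0: ["jazz", "piano", "solo"],
    1: ["jazz", "guitar"],
    2: ["classical", "violin"],
    3: ["classical", "piano"],
    4: ["rock", "guitar"],
}
item_item_pairs = [(0, 1), (2, 3), (1, 4)]

encoder = NLEncoder(len(VOCAB), EMB_DIM)
opt = torch.optim.Adam(encoder.parameters(), lr=0.05)

# Stage 1: learn NL representations by predicting item-item correlation
# with an in-batch softmax loss over the correlated pairs.
for step in range(200):
    left = [item_nl[a] for a, _ in item_item_pairs]
    right = [item_nl[b] for _, b in item_item_pairs]
    za = encoder(*batch(left))
    zb = encoder(*batch(right))
    logits = za @ zb.t() / 0.1                    # temperature-scaled similarity
    labels = torch.arange(len(item_item_pairs))   # each pair is its own positive
    loss = F.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2 (zero-shot transfer): encode a query's NL tokens with the same encoder
# and rank items by dot product; no (query, item) pair was ever seen in training.
with torch.no_grad():
    item_vecs = encoder(*batch([item_nl[i] for i in sorted(item_nl)]))
    query_vec = encoder(*batch([["jazz", "piano"]]))
    scores = (query_vec @ item_vecs.t()).squeeze(0)
    print("top items:", scores.argsort(descending=True).tolist())
```

The sketch relies on the one structural choice the abstract emphasizes: because queries are expressed in the same NL feature space as items, representations learned purely from the (item, item) auxiliary task transfer directly to query-to-item retrieval, which is what enables cold-starting and generalization to unseen (query, item) pairs.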