Aditi Chaudhary

Aditi is a Research Scientist at Google Research. Her primary interests lie in NLP, with a current focus on synthetic data generation and multilinguality. Prior to joining Google, Aditi earned her PhD from Carnegie Mellon University. For more details, see https://aditi138.github.io/webpage/.
Authored Publications

Recent developments in large language models (LLMs) have shown promise in their ability to generate synthetic query-document pairs when prompted with as few as 8 demonstrations (Dai et al., 2022, Promptagator). This has enabled building better IR models, especially for tasks with no readily available training data. Typically, such synthetic query generation (QGen) approaches condition on an input context (e.g., a document) and generate a query relevant to that context, or additionally condition the QGen model on a relevance label (e.g., relevant vs. irrelevant) to generate queries across relevance buckets. However, we find that such QGen approaches are sub-optimal, as they require the model to reason about both the desired label and the input from only a handful of examples, which is not trivial, especially when the relevance buckets are nuanced. In this work, we propose to reduce this burden on LLMs by generating queries simultaneously for different labels (e.g., relevance buckets). We hypothesize that instead of asking the model to generate, say, an irrelevant query given an input context, asking it to generate an irrelevant query with respect to a relevant query is a much simpler setup for the model to reason about. Extensive experimentation across seven IR datasets shows that synthetic queries generated in this fashion translate to better downstream performance, suggesting that the generated queries are indeed of higher quality.
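
The pairwise setup described in this abstract can be made concrete with a small sketch. The prompt wording, few-shot examples, and function names below are illustrative assumptions, not the paper's actual prompts or code; the LLM call itself is left as a stand-in.

# Illustrative sketch of pairwise query generation: rather than asking the
# model for an irrelevant query from the document alone, we ask for a
# relevant and an irrelevant query together, so the irrelevant query is
# defined relative to the relevant one. All prompts/names are hypothetical.

FEW_SHOT = """\
Document: Waterproof hiking boots with ankle support.
Relevant query: waterproof hiking boots
Irrelevant query: waterproof phone case

Document: Stainless steel chef's knife, 8 inch blade.
Relevant query: 8 inch chef knife
Irrelevant query: steel water bottle
"""

def build_pairwise_prompt(document: str) -> str:
    """Ask the model to emit a relevant and an irrelevant query in one shot."""
    return (
        "For each document, write one relevant query, then one query that is "
        "irrelevant to the document but plausible on its own.\n\n"
        + FEW_SHOT
        + "\nDocument: " + document + "\nRelevant query:"
    )

def parse_pair(completion: str) -> tuple[str, str]:
    """Split a completion of the form 'REL\\nIrrelevant query: IRR'."""
    relevant, _, rest = completion.partition("\n")
    irrelevant = rest.replace("Irrelevant query:", "", 1).strip()
    return relevant.strip(), irrelevant

if __name__ == "__main__":
    # The LLM call is omitted; print the prompt that would be sent.
    print(build_pairwise_prompt("Ceramic pour-over coffee dripper, size 02."))

The design intuition is that contrasting against an already-generated relevant query gives the model an explicit anchor, which is an easier reasoning task than inferring what "irrelevant" means from the document and a handful of demonstrations alone.
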
Query-document relevance prediction is a critical problem in Information Retrieval systems. It has increasingly been tackled with (pretrained) transformer-based models that are finetuned on large collections of labeled data. However, in specialized domains such as e-commerce and healthcare, the viability of this approach is limited by the dearth of large in-domain datasets. To address this paucity, recent methods leverage these powerful models to generate high-quality, task- and domain-specific synthetic data. Prior work has largely explored synthetic data generation, or query generation (QGen), for Question-Answering (QA) and binary (yes/no) relevance prediction, where, for instance, a QGen model is given a document and trained to generate a query relevant to that document. In many problems, however, we have a more fine-grained notion of relevance than a simple yes/no label. Thus, in this work, we conduct a detailed study into how QGen approaches can be leveraged for nuanced relevance prediction. We demonstrate that, contrary to claims in prior work, current QGen approaches fall short of more conventional cross-domain transfer-learning approaches. Via empirical studies spanning three public e-commerce benchmarks, we identify new shortcomings of existing QGen approaches, including their inability to distinguish between different grades of relevance. To address this, we introduce label-conditioned QGen models, which incorporate knowledge of the different relevance grades. While our experiments demonstrate that these modifications help improve the performance of QGen techniques, we also find that QGen approaches struggle to capture the full nuance of the relevance label space, and as a result the generated queries are not faithful to the desired relevance label.
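
The label-conditioned variant can be sketched in the same spirit: the desired relevance grade is written into the prompt so a single model can generate queries for every bucket. The grade names, prompt wording, and helpers below are illustrative assumptions; the paper's e-commerce benchmarks use their own label sets, and llm_generate stands in for any text-completion call.

# Hypothetical sketch of label-conditioned QGen: one query is generated per
# (document, grade) pair, with the target grade spelled out in the prompt.
# Grade names here are placeholders, not the benchmarks' actual labels.

GRADES = {
    "exact": "fully answered by the document",
    "partial": "only partially related to the document",
    "irrelevant": "unrelated to the document",
}

def build_label_conditioned_prompt(document: str, grade: str) -> str:
    """Condition the query-generation prompt on the target relevance grade."""
    if grade not in GRADES:
        raise ValueError(f"unknown grade: {grade}")
    return (
        f"Write a search query that is {GRADES[grade]}.\n\n"
        f"Document: {document}\nQuery ({grade}):"
    )

def synthesize_training_rows(documents, llm_generate):
    """Build (query, document, label) rows for finetuning a relevance model.

    llm_generate is any stand-in text-completion callable, e.g. a thin
    wrapper around an LLM API.
    """
    rows = []
    for doc in documents:
        for grade in GRADES:
            query = llm_generate(build_label_conditioned_prompt(doc, grade))
            rows.append({"query": query, "document": doc, "label": grade})
    return rows

As the abstract notes, even with such label conditioning the generated queries may drift from the requested grade, so the synthetic rows are best treated as noisy supervision rather than gold labels.
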