
India research lab

At India Research Lab, our mission is to contribute fundamental advances in computer science and to apply our research to big problems, delivering impact for India, Google, and communities around the world.


About the team

Our goals and intended impact for the next five years are organized along three dimensions:

Scientific impact

Advance the state of the art for every problem that we pursue.

  • Build models of human cognition and cognition-inspired AI algorithms.
  • Understand the limitations of deep learning systems, improve their safety and robustness, and address issues including calibration, fairness, and explainability of ML solutions.
  • Combine computer vision-based methods with personalized knowledge graphs to better understand images.
  • Pursue basic research in HCI to ensure that our technologies that touch end users are informed by a deep understanding of human factors.

Societal impact

Demonstrate positive societal impact through our research in areas such as health, ecology and wildlife conservation.

  • Transform healthcare using ML: Our researchers have developed deep learning based solutions for more effective screening of diabetic retinopathy, which are being deployed at hospitals in India.
  • Run the AI for Social Good program to address issues like public health, education, and wildlife conservation in partnership with NGOs and academic researchers, while making fundamental advances to the underlying scientific areas like multi-agent systems, ML, and HCI.
  • Develop additional novel solutions focused on prevention and wellness for major diseases like cardiovascular disease (CVD) and diabetes, improving health outcomes at lower cost and addressing the acute shortage of doctors in countries like India.

Product impact

Make significant improvements to Google products to make them more helpful to our users.

  • Advance the state of the art and apply ML in areas like natural language understanding (NLU) and user understanding to address the unique challenges of the Indian context (e.g., code-mixing in Search; diversity of languages, dialects, and accents in Assistant).
  • Enhance overall capabilities (e.g., the Assistant's ability to handle conversations requiring a combination of knowledge, reasoning, and personalization), improve user modeling, and improve fraud detection in GPay.

Team focus summaries

Advertising sciences

The next billion users (NBU) present a unique set of challenges to the search and advertising business. We aim to solve the research challenges around NBU ads first and then extend these solutions to the first-billion-users (FBU) market as well.

Key technical areas

  • Machine Learning
  • Natural Language Processing
  • Predictive Modeling

Group members

  • Aravindan Raghuveer, Engineering Manager (Team Lead)
  • Abhirut Gupta, Research Software Engineer
  • Anand Brahmbhatt, Pre-doctoral Researcher
  • Anirban Laha, Research Software Engineer
  • Kushal Chauhan, Research Software Engineer
  • Navodita Sharma, Research Software Engineer
  • Preksha Nema, Research Scientist
  • Rishi Saket, Research Scientist
  • Shreyas Havaldar, Pre-doctoral Researcher
  • Sneha Mondal, Research Software Engineer
  • Soumya Sharma, Pre-doctoral Researcher
  • Yukti Makhija, Pre-doctoral Researcher

Cognitive modeling and machine learning (CogML)

We build expressive, robust machine learning systems, drawing functional and algorithmic inspiration from human cognition. We are also working to develop a deeper understanding of human cognition, with applications in ML systems design, user modeling, and personalization.

Key technical areas

  • Multi-task and continual learning
  • Meta-learning
  • Neuro-symbolic architectures
  • Interpretability & attribution
  • User modeling & personalization
  • Computational cognitive modeling

Group members

  • Pradeep Shenoy, Research Scientist (Team Lead)
  • Jeevesh Juneja, Pre-doctoral Researcher
  • Nishant Jain, Pre-doctoral Researcher
  • Rishabh Tiwari, Pre-doctoral Researcher
  • Shubham Mittal, Pre-doctoral Researcher
  • Tarun Verma, Software Engineer

Earth observation science (EOS)

We use machine learning and earth observation imagery to solve geoscience research questions, motivated by applications in climate adaptation and social impact.

Key technical areas

  • Computer Vision
  • Machine Learning
  • Remote Sensing

Group members

Health

Our health research aims to create a mobile platform that collects vital healthcare data in novel ways, uses it to predict medical conditions, and subsequently suggests behavioral changes as preventive measures.

Key technical areas

  • Behaviour Sciences
  • Natural Language Processing
  • Machine Learning
  • Computer Vision

Group members

  • Narayan Hegde, Software Engineer (Team Lead)
  • Abhimanyu Singh, Product Manager
  • Jatin Alla, Pre-doctoral Researcher
  • Pradeep Kumar S, Technical Program Manager
  • Sriram Lakshminarasimhan, Software Engineer

Machine learning and optimization

We conduct fundamental research in ML. ML algorithms, and deep learning in particular, are effective at optimizing accuracy on a given training/test dataset, but their generalization ability can be fragile, typically requiring multiple rounds of tuning and iteration before real-world deployment. Our goal is to design robust, rigorous ML algorithms that work (almost) out of the box.

Key technical areas

  • Algorithm robustness
  • ML optimization

Group members

  • Prateek Jain, Research Scientist (Team Lead)
  • Aishwarya P S, Software Engineer
  • Aniket Das, Pre-doctoral Researcher
  • Anirudh GP, Software Engineer
  • Arun Suggala, Research Scientist
  • Dheeraj Nagaraj, Research Scientist
  • Gaurav Srivastava, Software Engineer
  • Harshit Varma, Pre-doctoral Researcher
  • Karthikeyan Shanmugam, Research Scientist
  • Nithi Gupta, Software Engineer
  • Pranav Nair, Pre-doctoral Researcher
  • Praneeth Netrapalli, Research Scientist
  • Ramnath Kumar, Pre-doctoral Researcher
  • Soumyabrata Pal, Visiting Researcher
  • Varun Yerram, Pre-doctoral Researcher
  • Yashas Samaga B L, Pre-doctoral Researcher

Mixed-mode user understanding (M2U2)

We work on fundamental computer vision and machine learning research problems with a focus on creating impact in the areas of healthcare, agriculture and accessibility.

Key technical areas

  • Computer vision
  • Machine learning
  • Graph neural networks

Group members

  • Alok Talekar, Software Engineer
  • Amandeep Kaur, Pre-doctoral Researcher
  • Anirban Santara, Research Software Engineer
  • Debapriya Tula, Pre-doctoral Researcher
  • Gagan Jain, Pre-doctoral Researcher
  • Ishan Deshpande, Research Engineer
  • Nidhi Hegde, Pre-doctoral Researcher
  • Nikita Saxena, Pre-doctoral Researcher
  • Radhika Dua, Pre-doctoral Researcher
  • Sharad Shriram, Pre-doctoral Researcher
  • Sujoy Paul, Research Scientist

Multi-agent systems for societal impact (MASSI)

We apply AI methodologies, such as reinforcement learning and multi-agent systems, to big problems in public health, education, disaster prevention, and conservation by supporting projects led by social organizations spanning seventeen countries across Asia-Pacific and Sub-Saharan Africa.

Key technical areas

  • Public health
  • Conservation
  • Agriculture

Group members

  • Milind Tambe, Principal Scientist (Team Lead)
  • Aparna Taneja, Software Engineer
  • Arpan Dasgupta, Pre-doctoral Researcher
  • Arshika Lalan, Pre-doctoral Researcher
  • Divy Thakkar, Program Manager
  • Jashn Arora, Pre-doctoral Researcher
  • Manish Jain, Software Engineer

Natural language understanding (NLU)

We aim to democratize information access and make Google products awesome for Indian language users. At a global level, we aim to treat time as a first-class dimension in natural language problems and enable reasoning over it.

Key technical areas

  • Multilingual learning
  • Representation learning
  • Learning from low resources
  • Conversational AI
  • Knowledge graphs
  • Temporal reasoning

Group members

  • Partha Talukdar, Research Scientist (Team Lead)
  • Bidisha Samanta, Research Engineer
  • Dinesh Tewari, Research Program Manager
  • Harman Singh, Pre-doctoral Researcher
  • Kartikeya Badola, Software Engineer
  • Nitish Gupta, Research Scientist
  • Palak Jain, Software Engineer
  • Rachit Bansal, Pre-doctoral Researcher
  • Sagar Gubbi, Visiting Researcher
  • Shachi Dave, Software Engineer
  • Shikhar Bharadwaj, Pre-doctoral Researcher
  • Shikhar Vashishth, Research Scientist
  • Siddhesh Pawar, Pre-doctoral Researcher
  • Sriram (Sri) Ganapathy, Visiting Faculty Researcher

Featured publications

Evaluating Inclusivity, Equity, and Accessibility of NLP Technology: A Case Study for Indian Languages
Simran Khanuja
Sebastian Ruder
Findings of the Association for Computational Linguistics: EACL 2023
In order for NLP technology to be widely applicable and useful, it needs to be inclusive of users across the world's languages, equitable, i.e., not unduly biased towards any particular language, and accessible to users, particularly in low-resource settings where compute constraints are common. In this paper, we propose an evaluation paradigm that assesses NLP technologies across all three dimensions, hence quantifying the diversity of users they can serve. While inclusion and accessibility have received attention in recent literature, quantifying equity is relatively unexplored. We propose to address this gap using the Gini coefficient, a well-established metric used for estimating societal wealth inequality. Using our paradigm, we highlight the distressed state of utility and equity of current technologies for Indian (IN) languages. Our focus on IN is motivated by their linguistic diversity and their large, varied speaker population. To improve upon these metrics, we demonstrate the importance of region-specific choices in model building and dataset creation and also propose a novel approach to optimal resource allocation in pursuit of building linguistically diverse, equitable technologies.
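
The Gini coefficient itself is a standard quantity; as a rough illustration of the equity measurement the paper proposes, here is a minimal sketch that computes it over hypothetical per-language utility scores (the scores below are made up for illustration):

```python
import numpy as np

def gini(utilities):
    """Gini coefficient of per-language utility scores.

    0 = perfectly equitable (all languages served equally well);
    values near 1 = highly inequitable.
    """
    u = np.sort(np.asarray(utilities, dtype=float))
    n = u.size
    index = np.arange(1, n + 1)
    # Standard closed form for sorted values:
    # G = (2 * sum_i i * u_i) / (n * sum_i u_i) - (n + 1) / n
    return (2.0 * np.sum(index * u)) / (n * np.sum(u)) - (n + 1.0) / n

# Hypothetical per-language accuracies of an NLP system (made-up numbers).
scores = [0.91, 0.88, 0.62, 0.35, 0.20]
print(f"Gini coefficient: {gini(scores):.3f}")
```
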
We explore a fundamental question in language model pre-training with huge amounts of unlabeled and randomly sampled text data: should every data sample contribute equally to model learning? To this end, we use self-influence (SI) scores as an indicator of sample importance, analyzing the relationship of self-influence scores with sample quality and probing the efficacy of SI scores for offline pre-training dataset filtering. Building upon this, we propose PRESENCE: Pre-training data REweighting with Self-influENCE, an online and adaptive pre-training data re-weighting strategy using self-influence scores. PRESENCE is a two-phased learning method: in the first phase of learning, data samples with higher SI scores are emphasized more, while in the subsequent phase of learning, data samples with higher SI scores are de-emphasized to limit the impact of noisy and unreliable samples. We validate PRESENCE over two model sizes of multilingual T5 with five datasets across three tasks, obtaining significant performance improvements over the baseline methods considered. Through extensive ablations and qualitative analyses, we put forward a new research direction for language model pre-training.
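
As a toy illustration of the two-phase re-weighting idea described above, a sketch follows; the softmax weighting, temperature, and hard phase switch are assumptions for illustration, not the paper's exact scheme:

```python
import numpy as np

def presence_style_weights(si_scores, step, phase_switch_step, temperature=1.0):
    """Toy two-phase re-weighting from self-influence (SI) scores.

    Phase 1 (early training): up-weight high-SI samples.
    Phase 2 (late training): down-weight them, treating high SI as a
    signal of noisy or unreliable data.
    """
    s = np.asarray(si_scores, dtype=float)
    sign = 1.0 if step < phase_switch_step else -1.0
    logits = sign * s / temperature
    w = np.exp(logits - logits.max())     # numerically stable softmax
    return w / w.sum()                    # normalized per-batch weights

si = np.array([0.1, 0.5, 2.0, 0.3])
print(presence_style_weights(si, step=100, phase_switch_step=1000))   # emphasizes high SI
print(presence_style_weights(si, step=5000, phase_switch_step=1000))  # de-emphasizes high SI
```
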
Pretrained multilingual models such as mBERT and multilingual T5 (mT5) have been successful at many natural language processing tasks. The shared representations learned by these models facilitate cross-lingual transfer in low-resource settings. In this work, we study the usability of these models for morphology analysis tasks such as root word extraction and morphological feature tagging for Indian languages. In particular, we use the mT5 model to train gender, number, and person (GNP) taggers for languages from two Indian language families. We use data from six Indian languages: Marathi, Hindi, Bengali, Tamil, Telugu, and Kannada to fine-tune a multilingual GNP tagger and root word extractor. We demonstrate the usability of multilingual models for few-shot cross-lingual transfer through an average 7% increase in GNP tagging in cross-lingual settings as compared to a monolingual setting, and through controlled experiments. We also provide insights into cross-lingual transfer of morphological tags for verbs and nouns, which also provides a proxy for the quality of the multilingual representations of word markers learned by the model.
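
One way such a tagger could be framed for a text-to-text model like mT5 is sketched below; the prompt wording and field layout are assumptions for illustration, not the paper's exact format:

```python
# Hypothetical text-to-text framing of GNP (gender/number/person) tagging
# for a model like mT5; prompt wording and field layout are assumptions.
def make_gnp_example(sentence, word, gender, number, person):
    return {
        "input": f"tag morphology: {sentence} word: {word}",
        "target": f"gender: {gender} number: {number} person: {person}",
    }

ex = make_gnp_example("वह घर जाती है", "जाती", "feminine", "singular", "third")
print(ex["input"], "->", ex["target"])
```
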
1-Pager: One Pass Answer Generation and Evidence Retrieval
Palak Jain
The 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP 2023) (to appear)
We present 1-PAGER, the first system that answers a question and retrieves evidence using a single Transformer-based model and decoding process. 1-PAGER incrementally partitions the retrieval corpus using constrained decoding to select a document and answer string, and we show that this is competitive with comparable retrieve-and-read alternatives according to both retrieval and answer accuracy metrics. 1-PAGER also outperforms the equivalent 'closed-book' question answering model by grounding predictions in an evidence corpus. While 1-PAGER is not yet on par with more expensive systems that read many more documents before generating an answer, we argue that it provides an important step toward attributed generation by folding retrieval into the sequence-to-sequence paradigm that is currently dominant in NLP. We also show that the search paths used to partition the corpus are easy to read and understand, paving a way forward for interpretable neural retrieval.
Learning from label proportions (LLP) is a generalization of supervised learning in which the training data is available as sets or bags of feature-vectors (instances) along with the average instance-label of each bag. The goal is to train a good instance classifier. While most previous works in LLP have focused on training models on such training data, computational learnability in LLP has only recently been explored by [Saket21, Saket22], who showed worst-case intractability of properly learning linear threshold functions (LTFs) from label proportions, while not ruling out efficient algorithms for this problem under distributional assumptions. In this work we show that it is indeed possible to efficiently learn LTFs using LTFs when given access to random bags of some label proportion in which feature-vectors are independently sampled from a fixed Gaussian distribution N(μ, Σ), conditioned on the label assigned by the target LTF. Our method estimates a matrix by sampling pairs of feature-vectors from the bags with and without replacement, and we prove that the principal component of this matrix necessarily yields the normal vector of the LTF. For some special cases with N(0, I) we provide a simpler expectation-based algorithm. We include an experimental evaluation of our learning algorithms, along with a comparison with those of [Saket21, Saket22] and random LTFs, demonstrating the effectiveness of our techniques.
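
To make the LLP setting concrete, here is a small sketch of the kind of training data the abstract describes: bags of Gaussian feature-vectors labeled by a target LTF, where only each bag's label proportion is observed (bag sizes and dimensions below are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)

def make_llp_bags(w, n_bags=100, bag_size=10, d=5):
    """Bags of Gaussian feature-vectors labeled by a target LTF sign(w . x);
    only each bag's label proportion is observed by the learner."""
    bags, proportions = [], []
    for _ in range(n_bags):
        X = rng.standard_normal((bag_size, d))   # x ~ N(0, I) special case
        y = (X @ w > 0).astype(float)            # hidden instance labels
        bags.append(X)
        proportions.append(y.mean())             # observed label proportion
    return bags, np.array(proportions)

w_true = rng.standard_normal(5)
bags, props = make_llp_bags(w_true)
print(props[:5])
```
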
The options framework in Hierarchical Reinforcement Learning breaks down overall goals into a combination of options or simpler tasks and associated policies, allowing for abstraction in the action space. Ideally, these options can be reused across different higher-level goals; indeed, many previous approaches have proposed limited forms of transfer of prelearned options to new task settings. We propose a novel "option indexing" approach to hierarchical learning (OI-HRL), where we learn an affinity function between options and the functionalities (or affordances) supported by the environment. This allows us to effectively reuse a large library of pretrained options, in zero-shot generalization at test time, by restricting goal-directed learning to only those options relevant to the task at hand. We develop a meta-training loop that learns the representations of options and environment affordances over a series of HRL problems, by incorporating feedback about the relevance of retrieved options to the higher-level goal. In addition to a substantial decrease in sample complexity compared to learning HRL policies from scratch, we also show significant gains over baselines that have the entire option pool available for learning the hierarchical policy.
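
A toy version of the option-retrieval step this suggests might look as follows; using a plain dot product as the affinity function is an assumption for illustration, not the learned affinity from the paper:

```python
import numpy as np

def retrieve_relevant_options(option_embs, affordance_emb, k=5):
    """Score each pretrained option against the environment's affordance
    embedding and keep only the top-k for goal-directed learning."""
    scores = option_embs @ affordance_emb       # affinity of each option
    return np.argsort(scores)[::-1][:k]         # indices of the k best options

rng = np.random.default_rng(0)
options = rng.standard_normal((100, 16))        # library of 100 option embeddings
affordance = rng.standard_normal(16)            # current environment's affordances
print(retrieve_relevant_options(options, affordance))
```
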
Invariant representations are transformations of the covariates such that the best model on top of the representation is invariant across training environments. In the context of linear Structural Equation Models (SEMs), invariant representations might allow us to learn models with out-of-distribution guarantees, i.e., models that are robust to interventions in the SEM. To address the invariant representation problem in a finite-sample setting, we consider the notion of ε-approximate invariance. We study the following question: if a representation is approximately invariant with respect to a given number of training interventions, will it continue to be approximately invariant on a larger collection of unseen intervened SEMs? Inspired by PAC learning, we obtain finite-sample out-of-distribution generalization guarantees for approximate invariance that hold probabilistically over a family of linear SEMs without faithfulness assumptions.
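
One plausible way to operationalize an approximate-invariance check is sketched below, under the assumption that "best model on top of the representation" means a per-environment least-squares fit; the paper's exact definition may differ:

```python
import numpy as np

def is_eps_invariant(reps, targets, eps):
    """Fit the best linear predictor on the representation separately in each
    training environment and test whether the fitted coefficients all stay
    within eps of their mean."""
    coefs = np.stack([np.linalg.lstsq(R, y, rcond=None)[0]
                      for R, y in zip(reps, targets)])
    dev = np.linalg.norm(coefs - coefs.mean(axis=0), axis=1)
    return bool(dev.max() <= eps)
```
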
Stein Variational Gradient Descent (SVGD) is a popular nonparametric variational inference algorithm which simulates an interacting particle system to approximate a target distribution. While SVGD has demonstrated promising empirical performance across various domains, and its population (i.e., infinite-particle) limit dynamics is well studied, the behavior of SVGD in the finite-particle regime is much less understood. In this work, we design two computationally efficient variants of SVGD, namely VP-SVGD and RB-SVGD, with provably fast finite-particle convergence rates. By introducing the notion of virtual particles, we develop novel stochastic approximations of population-limit SVGD dynamics in the space of probability measures, which are exactly implementable using only a finite number of particles. Our algorithms can be viewed as specific random-batch approximations of SVGD, which are computationally more efficient than ordinary SVGD. We establish that the n particles output by VP-SVGD and RB-SVGD, run for T steps, are i.i.d. samples from a distribution whose Kernel Stein Discrepancy to the target is at most O(T^(-1/6)) under standard assumptions. Our results hold under a mild growth condition on the potential function, which is significantly weaker than the isoperimetric assumptions (e.g., the Poincaré inequality) or information-transport conditions (e.g., Talagrand's inequality T1) generally considered in prior works. As a corollary, we consider the convergence of the empirical measure (of the particles output by VP-SVGD and RB-SVGD) to the target distribution and demonstrate a double-exponential improvement over the best known finite-particle analysis of SVGD.
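
For reference, here is a minimal sketch of one step of ordinary SVGD, the baseline that VP-SVGD and RB-SVGD approximate with random batches (an RBF kernel with a fixed bandwidth is a simplifying assumption):

```python
import numpy as np

def svgd_step(X, grad_log_p, step_size=0.1, bandwidth=1.0):
    """One step of ordinary SVGD with an RBF kernel:
    x_i <- x_i + eps/n * sum_j [k(x_j, x_i) grad log p(x_j) + grad_{x_j} k(x_j, x_i)]."""
    n = X.shape[0]
    diffs = X[:, None, :] - X[None, :, :]            # (n, n, d) pairwise differences
    sq = np.sum(diffs**2, axis=-1)                   # pairwise squared distances
    K = np.exp(-sq / (2 * bandwidth**2))             # k(x_j, x_i)
    grad_K = -diffs / bandwidth**2 * K[:, :, None]   # grad_{x_j} k(x_j, x_i)
    phi = (K @ grad_log_p(X) + grad_K.sum(axis=0)) / n
    return X + step_size * phi

# Example: drive particles toward a standard Gaussian target.
rng = np.random.default_rng(0)
X = rng.normal(loc=5.0, size=(50, 2))                # badly initialized particles
for _ in range(200):
    X = svgd_step(X, grad_log_p=lambda x: -x)        # grad log N(0, I) = -x
print(X.mean(axis=0), X.std(axis=0))                 # roughly [0, 0] and [1, 1]
```
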
We consider regression where the noise distribution depends on the covariates (i.e., heteroscedastic noise), which captures popular settings such as linear regression with multiplicative noise occurring due to covariate uncertainty. In particular, we consider linear regression where the noise variance is an unknown rank-1 quadratic function of the covariates. While an application of least squares regression can achieve an error rate of d/n, this ignores the fact that the magnitude of the noise can be very small for certain values of the covariates, which can aid faster learning. Our algorithm alternates between a parameter estimation step and a noise distribution model learning step, using each other's outputs to iteratively obtain better estimates of the parameter and the noise distribution model, respectively. This achieves an error rate of 1/n + d²/n², which we show is minimax optimal up to logarithmic factors. A subroutine of our algorithm performs phase estimation with multiplicative noise, which may be of independent interest.
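
A generic sketch of such an alternating scheme follows; the crude variance model and the weighted-least-squares refit are illustrative assumptions, not the paper's estimator (which exploits the rank-1 quadratic variance structure):

```python
import numpy as np

def alternating_heteroscedastic_fit(X, y, n_rounds=10):
    """Alternate between (1) weighted least squares for the parameter and
    (2) refitting a simple noise-variance model from the residuals."""
    w = np.linalg.lstsq(X, y, rcond=None)[0]         # OLS warm start
    for _ in range(n_rounds):
        resid = y - X @ w
        # Crude variance model: regress squared residuals on squared features.
        theta = np.linalg.lstsq(X**2, resid**2, rcond=None)[0]
        v = np.maximum(X**2 @ theta, 1e-6)           # floor to keep weights finite
        sw = 1.0 / np.sqrt(v)                        # inverse-std-dev weights
        w = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return w
```
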
Front-door Adjustment Beyond Markov Equivalence with Limited Graph Knowledge
Abhin Shah
Murat Kocaoglu
Neural Information Processing Systems 2023 (NeurIPS 2023) (to appear)
Causal effect estimation from data typically requires assumptions about the cause-effect relations either explicitly in the form of a causal graph structure within the Pearlian framework, or implicitly in terms of (conditional) independence statements between counterfactual variables within the potential outcomes framework. When the treatment variable and the outcome variable are confounded, front-door adjustment is an important special case where, given the graph, causal effect of the treatment on the target can be estimated using post-treatment variables. However, the exact formula for front-door adjustment depends on the structure of the graph, which is difficult to learn in practice. In this work, we provide testable conditional independence statements to compute the causal effect using front-door-like adjustment without knowing the graph under limited structural side information. We show that our method is applicable in scenarios where knowing the Markov equivalence class is not sufficient for causal effect estimation. We demonstrate the effectiveness of our method on a class of random graphs as well as real causal fairness benchmarks.
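
For context, the textbook front-door adjustment that this work generalizes can be computed directly for discrete variables: P(y | do(x)) = Σ_m P(m|x) Σ_x' P(y|m,x') P(x'). A minimal sketch (array shapes and the binary example are illustrative, not from the paper):

```python
import numpy as np

def front_door_effect(p_x, p_m_given_x, p_y_given_mx):
    """Classical front-door adjustment for discrete variables:
        P(y | do(x)) = sum_m P(m|x) * sum_x' P(y|m,x') P(x').

    p_x:          shape (X,)       -- P(x')
    p_m_given_x:  shape (X, M)     -- P(m|x)
    p_y_given_mx: shape (M, X, Y)  -- P(y|m,x')
    """
    inner = np.einsum('mxy,x->my', p_y_given_mx, p_x)   # sum_x' P(y|m,x') P(x')
    return np.einsum('xm,my->xy', p_m_given_x, inner)   # sum_m P(m|x) * inner

# Binary example: X, M, Y each take two values.
p_x = np.array([0.7, 0.3])
p_m_given_x = np.array([[0.9, 0.1], [0.2, 0.8]])
p_y_given_mx = np.random.default_rng(0).dirichlet(np.ones(2), size=(2, 2))
print(front_door_effect(p_x, p_m_given_x, p_y_given_mx))  # row x: P(y | do(x))
```
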
