Our mission is to spread useful AI effectively around the world.
About the team
The Google Cloud AI research team tackles underexplored, real-world challenges for Google Cloud customers. We work on a range of inspiring problems based on Google Cloud customer needs, identifying research topics that maximize both scientific and real-world impact. As such, we collaborate closely with product teams to put our research results in the hands of our customers, and publish the findings in top ML venues.
Innovations coming out of the Google Cloud AI research team help explain the behavior of sophisticated machine learning models and improve current capabilities by enabling more efficient use of data. Additionally, we proactively identify market needs, and work with customers to identify specific use cases where innovation is needed (e.g. recommendation systems, document understanding, infectious disease modeling).
Team focus summaries
Explainability is required to use AI effectively in real-world applications such as finance, healthcare, retail, and manufacturing. Data scientists, business decision makers, and regulators all need to know why AI models make certain decisions. Our researchers are working on a wide range of approaches to increase model explainability, including sample-based, feature-based, and concept-based methods that draw on reinforcement learning, attention-based architectures, prototypical learning, and surrogate model optimization, applied across the relevant data types and high-impact tasks.
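As a minimal illustration of one model-agnostic, feature-based explainability technique (not the team's own method), the sketch below computes permutation importance: how much a model's accuracy drops when a single feature's values are shuffled. All names and data here are hypothetical.

```python
# Sketch: permutation feature importance, a simple model-agnostic,
# feature-based explainability technique. Synthetic data for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Only the first of three features is actually predictive.
X = rng.normal(size=(500, 3))
y = (X[:, 0] > 0).astype(int)

model = LogisticRegression().fit(X, y)
baseline = model.score(X, y)

def permutation_importance(model, X, y, feature, n_repeats=10):
    """Average drop in accuracy when one feature's column is shuffled."""
    drops = []
    for _ in range(n_repeats):
        Xp = X.copy()
        Xp[:, feature] = rng.permutation(Xp[:, feature])
        drops.append(baseline - model.score(Xp, y))
    return float(np.mean(drops))

importances = [permutation_importance(model, X, y, f) for f in range(3)]
```

Because only the first feature drives the label, its importance score dominates, which is the kind of "why did the model decide this" signal such methods surface.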
Data-efficient learning is important because many AI deployments must train models with only hundreds of training examples. To this end, Cloud AI researchers conduct research into active learning, self-supervised representation learning, transfer learning, domain adaptation, and meta-learning.
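To make the active learning direction concrete, here is a hedged sketch (not the team's implementation) of pool-based active learning with uncertainty sampling: the model repeatedly asks for a label on the point it is least certain about, so that a small labeling budget is spent where it helps most. The data and model choice are illustrative assumptions.

```python
# Sketch: pool-based active learning with uncertainty sampling.
# Synthetic linearly separable data; logistic regression as the learner.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X_pool = rng.normal(size=(1000, 5))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)

labeled = list(range(10))          # start with a handful of labeled points
unlabeled = list(range(10, 1000))

model = LogisticRegression()
for _ in range(20):                # acquire 20 more labels, one at a time
    model.fit(X_pool[labeled], y_pool[labeled])
    probs = model.predict_proba(X_pool[unlabeled])[:, 1]
    # Query the point the model is least certain about (prob closest to 0.5).
    idx = int(np.argmin(np.abs(probs - 0.5)))
    labeled.append(unlabeled.pop(idx))

accuracy = model.fit(X_pool[labeled], y_pool[labeled]).score(X_pool, y_pool)
```

With only 30 labels, uncertainty sampling concentrates them near the decision boundary, which is where labels are most informative.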
High-impact enterprise data types
Cloud AI researchers are looking at ways to advance the state of the art for specific data types such as time series and tabular data (two of the most common data types in AI deployments), which have received significantly less focus in the research community compared to other data types. In time series, we are actively developing new deep learning models with complex inputs – for example, the team's novel Temporal Fusion Transformer architecture achieves state-of-the-art performance across a wide range of datasets. In tabular data, we developed TabNet, a new deep learning method that achieves state-of-the-art performance on many datasets and yields interpretable insights.
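For context on the time-series setting, the sketch below shows the classical baseline formulation that deep models like the Temporal Fusion Transformer improve on: turning a univariate series into a supervised tabular problem with lag features. This is a generic illustration on a toy seasonal signal, not the team's code.

```python
# Sketch: framing univariate time-series forecasting as supervised
# learning over lag features -- the classical baseline setup.
import numpy as np
from sklearn.linear_model import LinearRegression

t = np.arange(200)
series = np.sin(2 * np.pi * t / 20)   # toy seasonal signal, period 20

def make_lag_features(series, n_lags):
    """Each row holds the previous n_lags values; the target is the next one."""
    X = np.stack([series[i:i + n_lags] for i in range(len(series) - n_lags)])
    y = series[n_lags:]
    return X, y

X, y = make_lag_features(series, n_lags=20)
model = LinearRegression().fit(X[:150], y[:150])
r2 = model.score(X[150:], y[150:])
```

A linear model over lags handles this periodic toy signal well; the research challenge is complex real-world inputs (static covariates, known future inputs, multiple horizons), which is what architectures like TFT address.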
Specific important enterprise use cases
Cloud AI researchers also conduct research targeting specific enterprise use cases, such as recommendation systems, which play a key role in the retail industry and face challenges in personalization, contextualization, trending, and diversification. We develop recommendation models that support event time-aware features, which capture user history events effectively for homepage recommendations. We also work on end-to-end document understanding, which requires holistic comprehension of the structured information in a variety of documents. In addition, we recently contributed a novel approach to forecasting the progression of COVID-19 that integrates machine learning into compartmental disease modeling.
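For readers unfamiliar with compartmental disease modeling, the sketch below shows the basic SIR model that such forecasting work builds on; approaches like the one described above extend this classical framework with machine-learned, time-varying rates (the learned components are omitted here, and the parameter values are illustrative).

```python
# Sketch: a basic SIR compartmental model integrated with Euler steps.
# Population fractions: s (susceptible), i (infected), r (recovered).
def simulate_sir(beta, gamma, s0, i0, r0, steps, dt=0.1):
    """beta = transmission rate, gamma = recovery rate."""
    s, i, r = s0, i0, r0
    for _ in range(steps):
        new_infections = beta * s * i * dt
        new_recoveries = gamma * i * dt
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
    return s, i, r

# 1% of the population initially infected; basic reproduction number ~3.
s, i, r = simulate_sir(beta=0.3, gamma=0.1, s0=0.99, i0=0.01, r0=0.0,
                       steps=1000)
```

The three compartments always sum to the whole population, and with these rates the epidemic has largely run its course by the end of the simulation; ML-based extensions replace the fixed rates with learned functions of covariates such as mobility data.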
A novel graph-based architecture with a Rich Attention transformer designed for form document understanding. FormNet recovers local syntactic information and achieves state-of-the-art performance on public benchmarks.
A model agnostic approach that allows users to specify input-output relationships that their system should obey and integrates them with a DNN in a way that can be modified at inference time.
A novel continual learning framework that avoids catastrophic forgetting and maintains high accuracy without having to retain past training data.
An anomaly detection method using an ensemble of one-class classifiers and a self-supervised data representation and refinement process to achieve robust results on a completely uncurated dataset.
A framework that locally distills a black-box model into an interpretable model of our choice (e.g. a shallow decision tree or linear regression) without sacrificing performance.
A new deep learning model for time-series that beats other algorithms by a large margin and provides useful explanations in various forms.
A new deep learning method for tabular data that improves over other DNN and ensemble decision tree models on many datasets and provides interpretable insights.
A framework for quantitatively computing the importance of each training sample to the model, which yields better-quality estimates than the alternatives with significant time savings.
Association for Computational Linguistics (ACL) (2021)
Transactions on Machine Learning Research (TMLR) (2022)
International Journal of Forecasting (2021)
International Conference on Learning Representations (ICLR) (2021)