We create useful solutions to fundamental computational problems with impact on Google’s products and scientific progress.
About the team
Our team develops solutions to computational problems in theory and algorithms, machine learning, computational journalism, speech, and other data-driven disciplines, with impact on Google’s products and on scientific progress.
To achieve this dual objective, we focus on two tools: software libraries that carry research findings into products and services, and publications that make the work known to the research community.
Yesterday, we announced the launch of Android Wear 2.0, along with brand-new wearable devices that will run Google's first entirely on-device ML technology for powering smart messaging.
Today we are announcing tf.Transform, a library for TensorFlow that allows users to define preprocessing pipelines and run them using large-scale data processing frameworks.
We developed algorithms for Explore in Docs, a collaboration between the Coauthor and Apps teams that draws on powerful Google infrastructure and best-in-class information retrieval, machine learning, and machine translation technologies.
We’re making the Fact Check label in Google News available everywhere, and expanding it into Search globally in all languages.
Collaborative Machine Learning without Centralized Training Data
We recently provided many exciting improvements to Gboard for Android, working towards our vision of creating an intelligent mechanism that enables faster input while offering suggestions and correcting mistakes, in any language you choose.
Today we present TensorFlow Lattice, a set of prebuilt TensorFlow Estimators that are easy to use, and TensorFlow operators to build your own lattice models.
To provide better discovery and rich content for books, movies, events, recipes, reviews and a number of other content categories with Google Search, we rely on structured data that content providers embed in their sites using schema.org vocabulary.
Many real-world applications include nuanced information about the relationships between data items. We focus on extending ML approaches to better model the relationships contained in such information networks. These models (e.g. semi-supervised similarity ranking & clustering, neural graph embedding, and graph convolutional approaches) are useful in a wide range of ML applications for many Google products.
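To make the graph-based idea concrete, here is a toy sketch of semi-supervised label propagation over a small similarity graph; the graph, seed labels, and iteration count are invented for illustration and do not reflect our production systems.

```python
import numpy as np

# Toy illustration of graph-based semi-supervised learning via label propagation.
# The adjacency matrix, seed labels, and iteration count are invented for this
# sketch; production systems use far larger graphs and richer models.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 0, 1, 0],
    [0, 0, 1, 0, 1],
    [0, 0, 0, 1, 0],
], dtype=float)

# Row-normalize so each node averages the label distributions of its neighbors.
P = A / A.sum(axis=1, keepdims=True)

# Seed labels: node 0 is class 0, node 4 is class 1; the rest are unlabeled.
labels = np.zeros((5, 2))
labels[0] = [1, 0]
labels[4] = [0, 1]
seed_mask = np.array([True, False, False, False, True])

F = labels.copy()
for _ in range(50):
    F = P @ F                          # propagate label distributions along edges
    F[seed_mask] = labels[seed_mask]   # clamp the labeled seed nodes

print(F.argmax(axis=1))  # predicted class for every node in the toy graph
```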
Supervised machine learning
We have a long history of building and applying ML techniques at Google: we previously developed a core Google API for supervised machine learning, and more recently we have been researching and developing tools for the TensorFlow ecosystem (e.g. tf.Transform, kernel methods, gradient boosted trees). We also collaborate actively with product groups across Google (e.g. Docs, Search, Ads, Geo) to help deploy ML-based solutions, and we publish cutting-edge research (e.g. automatic hyperparameter tuning, compact multi-class gradient boosted trees).
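As a small illustration of one of these tools, defining a preprocessing pipeline with tf.Transform looks roughly like the sketch below; the feature names and transforms are placeholders, and running the pipeline end to end additionally requires an Apache Beam runner.

```python
import tensorflow_transform as tft

# Minimal tf.Transform preprocessing_fn sketch. The feature names ('age',
# 'query') are hypothetical. tf.Transform computes full-pass statistics
# (min/max, vocabulary) over the training data and emits a TensorFlow graph
# that applies the identical transforms at serving time.
def preprocessing_fn(inputs):
    return {
        'age_scaled': tft.scale_to_0_1(inputs['age']),
        'query_ids': tft.compute_and_apply_vocabulary(inputs['query']),
    }
```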
Large-scale machine learning
We focus on large-scale machine learning, including supervised learning (e.g. deep learning and kernel-based learning) and semi-supervised/unsupervised learning (e.g. streaming clustering and efficient similarity search). Research areas include distributed optimization, personalization and privacy-preserving learning, on-device learning and inference, recommendation systems, data-dependent hashing, and learning-based vision. We develop principled approaches and apply them to Google’s products, and we regularly publish in top-tier learning conferences and journals. Our work has been applied across Google, powering Search and Display Ads, YouTube, Android, Play, Gmail, the Assistant, and Google Shopping.
We provide fast clustering of datasets that scales to billions of data points, with streaming throughput of hundreds of thousands of points per second. The goal is scalable nonparametric clustering that avoids simplistic generative assumptions, such as convexity of clusters, which rarely hold in practice. The team develops techniques that can handle drift in data distributions over time. These techniques are used in a large number of applications, including dynamic spam detection in multiple products and semantic expansion in NLP.
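The toy sketch below illustrates the one-pass, nonparametric flavor of streaming clustering: a point joins the nearest center if it is close enough, otherwise it opens a new cluster. The distance threshold and the data are invented; this is not our production algorithm.

```python
import numpy as np

def stream_cluster(points, threshold=1.0):
    """Toy one-pass clustering sketch (not the production algorithm).

    Each point joins the nearest center within `threshold` (updating that
    center's running mean) or starts a new cluster, so the number of
    clusters is not fixed in advance.
    """
    centers, counts, assignments = [], [], []
    for x in points:
        x = np.asarray(x, dtype=float)
        if centers:
            dists = [np.linalg.norm(x - c) for c in centers]
            j = int(np.argmin(dists))
        if not centers or dists[j] > threshold:
            centers.append(x.copy())
            counts.append(1)
            assignments.append(len(centers) - 1)
        else:
            counts[j] += 1
            centers[j] += (x - centers[j]) / counts[j]  # running-mean update
            assignments.append(j)
    return centers, assignments

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 0.1, (100, 2)), rng.normal(3, 0.1, (100, 2))])
centers, _ = stream_cluster(data, threshold=1.0)
print(len(centers))  # expect 2 clusters for this toy stream
```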
Modeling and data science
We sift through data to discover, understand, and model implicit signals in user behavior. We partner with Product Areas such as Ads, YouTube, and Android to add machine learning functionality to products across Google. Due to the open-ended nature of data mining, ongoing projects vary; they currently include smart notifications on Android, Ads pricing optimizations, differential privacy work, and more.
Structured data plays an essential role in Google's products and features, including Fact Check in Google News and Search, the Knowledge Panel, Structured Snippets, Search Q&A, and more. The goals of the Structured Data group are: 1) to work closely with various product teams, leveraging our expertise in structured data to solve challenging technical problems and initiate new product features; 2) to provide scientific expertise in computational journalism across Google in the fight against digital misinformation; and 3) to drive a long-term agenda that advances state-of-the-art research in structured data with real-world impact. We use a wide range of techniques, including machine learning, data mining, NLP, and information retrieval and extraction.
We develop techniques for large-scale similarity search in massive databases with arbitrary data types (sparse or dense high-dimensional data) and similarity measures (metric or non-metric, potentially learned from data). The focus has been on developing data-dependent, ML-based hashing techniques and tree-hash hybrids that drive a multitude of applications at Google. The team also develops techniques for fast inference in machine learning models, including neural networks, often improving speed by over 50x while maintaining near-exact accuracy.
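For illustration only, the sketch below uses classic random-hyperplane hashing, a data-independent baseline for approximate cosine-similarity search; the learned, data-dependent methods described above go well beyond this.

```python
import numpy as np

# Baseline illustration: random-hyperplane hashing for cosine similarity.
# This scheme is data-independent, whereas the methods described above learn
# the hash functions from data. All sizes below are invented for the sketch.
rng = np.random.default_rng(0)
dim, num_bits = 128, 8
hyperplanes = rng.normal(size=(num_bits, dim))

def hash_vector(v):
    # One bit per hyperplane: which side of the hyperplane the vector lies on.
    bits = (hyperplanes @ v) > 0
    return ''.join('1' if b else '0' for b in bits)

# Bucket the database by hash code, then only compare within the query's bucket.
database = rng.normal(size=(10000, dim))
buckets = {}
for i, v in enumerate(database):
    buckets.setdefault(hash_vector(v), []).append(i)

query = rng.normal(size=dim)
candidates = buckets.get(hash_vector(query), [])
print(len(candidates), "candidates instead of", len(database))
```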
Speech and language algorithms
Our mission is to accurately and efficiently represent, combine, optimize, and search models of speech and text. In particular, we devise automata, grammars, neural and other models that represent word histories, context-dependent lexicons for speech and keyboard input, written-to-spoken transductions, extractions of dates, times, currencies, and measures, and transliteration and contextual models of language. These can be combined and optimized to give high-accuracy, efficient speech recognition and synthesis, text normalization, and more. We provide efficient decoding algorithms to search these models. This work is used extensively in Google's speech and text processing infrastructure.
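As a highly simplified illustration of written-to-spoken transduction and composition, the toy below chains two rewrites using plain Python dictionaries and functions; real systems use weighted finite-state transducers (e.g. OpenFst), and the rules here are invented.

```python
# Toy illustration only: production text normalization uses weighted
# finite-state transducers, not Python dicts. The rewrite rules are invented.
written_to_spoken = {
    "$5": "five dollars",
    "3:15": "three fifteen",
    "km": "kilometers",
}

spoken_casing = str.lower  # a second, trivial "transduction"

def normalize(token):
    # Compose the two transductions: first expand written forms, then apply casing.
    return spoken_casing(written_to_spoken.get(token, token))

print([normalize(t) for t in "Meet at 3:15 and bring $5".split()])
# ['meet', 'at', 'three fifteen', 'and', 'bring', 'five dollars']
```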
Sensitive content detection
Our mission is to create a comprehensive set of classifiers for detecting offensive, inappropriate & controversial content in images and video. We accomplish this using a variety of techniques, including ensembles of ML models that are trained on images and text from the web. We also apply transfer learning on deep vision models for domain-specific classifier creation.
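A minimal sketch of the transfer-learning step is shown below; the choice of MobileNetV2, the input size, and the four target classes are assumptions for illustration, not our production setup.

```python
import tensorflow as tf

# Sketch of building a domain-specific classifier by transfer learning from a
# pretrained vision backbone. MobileNetV2, 224x224 inputs, and 4 classes are
# illustrative choices only.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the pretrained features

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(4, activation="softmax"),  # domain-specific classes
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=5)  # with a labeled dataset
```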
ML for personalization and assistance
Multiple teams within Google AI develop algorithms and ML systems for understanding user preferences and delighting users by providing personalized and targeted experiences. Examples of projects in this research area include measuring and modeling user attention and satisfaction, characterizing users in terms of their actions on Google, understanding intents in user-to-user and user-to-business conversations, applying ML to understand topics of interest on the web, and modeling users' online journeys.
Semi-supervised and unsupervised machine learning
Semi-supervised learning is becoming increasingly critical to solving many real-world product problems where data is sparse, sparsely labeled, or noisy, and our mission is to develop semi-supervised ML systems that operate at Google scale. The research has a broad range of applications across Google in query understanding, conversation understanding, and media understanding.
ML model compression for mobile devices
We develop systems for transforming cloud-resident ML models to highly efficient models that run on resource-constrained mobile devices.
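One common public toolchain for this step is TensorFlow Lite conversion with post-training quantization, sketched below on a stand-in Keras model; this illustrates the general approach rather than our specific internal pipeline.

```python
import tensorflow as tf

# Stand-in "cloud-resident" model; in practice this would be a trained model.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to TensorFlow Lite with post-training quantization, shrinking the
# model and speeding up inference on resource-constrained devices.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```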
Media understanding in conversations
Our mission is to enrich electronic conversations or provide assistance in conversations by understanding media using multi-modal signals from images, video, text, and the web. We accomplish this by marrying machine vision models with ML-enabled natural language understanding and generation systems.
Combinatorial machine learning
Many fundamental learning problems we solve at Google have non-trivial combinatorial structure that prevents the application of general-purpose ML algorithms. They exhibit complex and discontinuous loss functions (e.g., in pricing) or combinatorial explosions (such as contextual bandits, feature selection, or integer programming), and they may require solutions that are robust against strategic behavior. Our team pushes the boundaries in these areas through research that blends techniques from learning theory, game theory, and discrete/continuous optimization.
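As one small instance of this problem class, the sketch below runs an epsilon-greedy contextual bandit with per-arm ridge-regression estimates on simulated data; the dimensions, arm count, and reward model are invented for illustration.

```python
import numpy as np

# Toy epsilon-greedy contextual bandit with a per-arm linear value estimate.
# The simulated reward model and all hyperparameters are invented; this only
# illustrates the problem class, not a production system.
rng = np.random.default_rng(0)
num_arms, dim, eps = 3, 5, 0.1
A = [np.eye(dim) for _ in range(num_arms)]       # per-arm X^T X + I (ridge)
b = [np.zeros(dim) for _ in range(num_arms)]     # per-arm X^T y
true_theta = rng.normal(size=(num_arms, dim))    # hidden reward model

for t in range(2000):
    x = rng.normal(size=dim)                                     # context
    estimates = [np.linalg.solve(A[a], b[a]) @ x for a in range(num_arms)]
    arm = rng.integers(num_arms) if rng.random() < eps else int(np.argmax(estimates))
    reward = true_theta[arm] @ x + rng.normal(scale=0.1)
    A[arm] += np.outer(x, x)                                     # ridge update
    b[arm] += reward * x

print("learned vs true (arm 0):",
      np.linalg.solve(A[0], b[0]).round(2), true_theta[0].round(2))
```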
The Glassbox Learning team researches and develops ways to make ML more controllable and interpretable without sacrificing accuracy. An important line of research is how to translate policy goals about metrics and fairness into machine learning training. For interpretability, Glassbox provides end-to-end guarantees on the relationship of inputs to outputs, such as monotonicity and other shape constraints. To achieve these goals, Glassbox researches and applies new algorithms for constrained optimization.
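A minimal sketch of a shape-constrained model, assuming the Keras layers of the open-source tensorflow_lattice package (the 2.x API): the two features, keypoints, and monotonicity directions below are illustrative assumptions, and the constraints guarantee the output is non-decreasing in both inputs.

```python
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

# Sketch of a shape-constrained model using tensorflow_lattice Keras layers.
# The two features, keypoint ranges, and monotonicity directions are invented;
# the constraints make the prediction non-decreasing in both inputs regardless
# of the training data.
inputs = [tf.keras.Input(shape=(1,), name=f"feature_{i}") for i in range(2)]
calibrated = [
    tfl.layers.PWLCalibration(
        input_keypoints=np.linspace(0.0, 1.0, num=5),
        output_min=0.0,
        output_max=1.0,
        monotonicity="increasing",
    )(inp)
    for inp in inputs
]
combined = tf.keras.layers.Concatenate(axis=1)(calibrated)
output = tfl.layers.Lattice(
    lattice_sizes=[2, 2],
    monotonicities=["increasing", "increasing"],
    output_min=0.0,
    output_max=1.0,
)(combined)
model = tf.keras.Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="mse")
# model.fit([x0, x1], y, epochs=10)  # with per-feature input arrays
```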
Dataset Search, also known as Science Search, is a project to index all datasets on the web and make their metadata (and, where possible, the data itself) searchable and useful. Datasets and related data tend to be spread across many data repositories on the web. In most cases, the data is neither linked nor indexed, which makes searching tedious or, in some cases, impossible.
AdaNet adaptively learns both the structure of a network and its weights. It is based on deep boosting, with a solid theoretical analysis including data-dependent generalization guarantees.
Machine learning has already transformed our computational solutions. The future is even more exciting: tackling more complex and more challenging learning problems with modern theoretical and algorithmic advances.