Make machines intelligent. Improve people’s lives.
About the team
History of research breakthroughs
Google Brain started in 2011 at X as an exploratory lab, founded by Jeff Dean, Greg Corrado, and Andrew Ng along with other engineers; it is now part of Google Research. Since then, we have continually rethought our approach to machine learning and are proud of our breakthroughs, which include:
- AI infrastructure (developing TensorFlow)
- Sequence-to-sequence learning, leading to Transformers and BERT
- AutoML, pioneering automated machine learning for production use
Our research breakthroughs enable Google’s mission to organize the world's information and make it universally accessible and useful.
Google impact
As part of Google and Alphabet, the Brain team has access to unparalleled resources and collaboration opportunities. Our broad, fundamental research goals allow us to collaborate with and contribute to many teams across Alphabet, which deploy our cutting-edge technology in products used by billions of people, positively impacting society and the research community.
Open and bottom-up culture
We believe that openly disseminating research is critical to a healthy exchange of ideas, leading to rapid progress in the field.
As such, we regularly publish our research at top academic conferences and in journals, and we release our tools, such as TensorFlow and JAX, as open-source projects.
Team members are encouraged to set their own research goals, allowing the Brain team to maintain a portfolio of projects across varied time horizons, research areas and levels of risk.
Research areas
Team focus summaries
Highlighted projects
Improving communication for people with speech impairments.
An open source research project exploring the role of machine learning as a tool in the creative process.
Hyper-local weather forecasts using deep learning.
Optimizing the long-term value of Google recommendation products using reinforcement learning and other ML techniques.
LaMDA (Language Model for Dialogue Applications) is a language model tasked with generating sensible, specific, interesting, and appropriate dialogue in open-domain, multi-turn conversations.
Approaching automated machine learning and “learning to learn”.
Machine learning can help read the language of life by reading the chemical formulas of proteins and telling us their purpose.
Using AI research to radically improve the software development process.
Featured publications