Google Research


Tackling fundamental questions in deep learning and physics using a scientific approach. Our main focus is on understanding and improving the capabilities of large language models.

About the team

Our goal is to understand the principles that govern machine learning and other physical systems by developing models and experimentally testing hypotheses. We focus on understanding the limitations of large-scale transformer models and on extending their capabilities to challenging problems in areas such as mathematics, science, programming, algorithms, and planning.

In these domains, agents can make use of very long context, adaptive inference-time compute (e.g., scratchpad, recurrence, memory), external tools (e.g., a library of functions, a search engine, a calculator, additional models), or other methods to solve out-of-training-domain problems when given instructions and a few examples.


Join us

We're looking for talented people to join our team.