Athena

Solving fundamental computational problems that deliver meaningful impact for Google’s products, society, and scientific progress.

About the team

Athena is an international team of research scientists and engineers who tackle product-inspired problems with novel solutions to assist, complement, empower, and inspire people — from the everyday to the imaginative. Our work spans algorithms, artificial intelligence (AI), language understanding, and many other fields, and yields state-of-the-art breakthroughs in areas like efficiency, privacy, and user engagement.

We collaborate closely with partners across Google to take discoveries from publication to implementation for the company's largest and most trusted products. Beyond Google's portfolio of products and services, our contributions to AI, computer science, and machine learning power scientific advances in climate science, journalism, microeconomics, and other data-driven disciplines.

We recognize that AI is a foundational and transformational technology, and we are proud to contribute to a long history of responsible innovation. Our commitment to Responsible AI principles ensures we develop and use technologies in ways that are socially beneficial, avoid bias, are built and tested for safety, are accountable to people, and are aligned with our values.

Featured publications

LaMDA: Language Models for Dialog Applications
Aaron Daniel Cohen
Alena Butryna
Alicia Jin
Apoorv Kulshreshtha
Ben Zevenbergen
Chung-ching Chang
Cosmo Du
Daniel De Freitas Adiwardana
Dehao Chen
Dmitry (Dima) Lepikhin
Erin Hoffman-John
Igor Krivokon
James Qin
Jamie Hall
Joe Fenton
Johnny Soraker
Kathy Meier-Hellstern
Maarten Paul Bosma
Marc Joseph Pickett
Marcelo Amorim Menegali
Marian Croak
Maxim Krikun
Noam Shazeer
Rachel Bernstein
Ravi Rajakumar
Ray Kurzweil
Romal Thoppilan
Steven Zheng
Taylor Bos
Toju Duke
Tulsee Doshi
Vincent Y. Zhao
Will Rusch
Yuanzhong Xu
Zhifeng Chen
arXiv (2022)
Hyperband: A Novel Bandit-Based Approach to Hyperparameter Optimization
Liam Li
Kevin Jamieson
Ameet Talwalkar
Journal of Machine Learning Research, 18-185 (2018), pp. 1-52
FNet: Mixing Tokens with Fourier Transforms
Ilya Eckstein
James Patrick Lee-Thorp
Joshua Ainslie
NAACL 2022 (Association for Computational Linguistics)
Understanding Robustness of Transformers for Image Classification
Daliang Li
Thomas Unterthiner
Proceedings of the IEEE/CVF International Conference on Computer Vision (2021)
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
Tu Vu
Rami Al-Rfou
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics (2022)
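The Hyperband paper above is built around successive halving: evaluate many hyperparameter configurations cheaply, then repeatedly discard the worst and spend more budget on the survivors. As a rough illustration only (not the authors' code), here is a minimal successive-halving sketch, where `evaluate(config, budget)` is a hypothetical user-supplied callback returning a loss:

```python
def successive_halving(configs, evaluate, min_budget=1, eta=3, rounds=3):
    """Minimal successive halving, the core subroutine of Hyperband.

    `evaluate(config, budget)` is a hypothetical stand-in for training a
    model with hyperparameters `config` for `budget` units of resource
    (e.g. epochs) and returning a loss, where lower is better.
    """
    survivors = list(configs)
    budget = min_budget
    for _ in range(rounds):
        # Score every surviving configuration at the current budget.
        scored = sorted(survivors, key=lambda c: evaluate(c, budget))
        # Keep the best 1/eta fraction and give them eta times the budget.
        survivors = scored[: max(1, len(scored) // eta)]
        budget *= eta
    return survivors[0]

# Toy usage: the "hyperparameter" is a single float and the loss is a
# fixed quadratic, so the config nearest 0.3 survives every round.
configs = [i / 10 for i in range(10)]
best = successive_halving(configs, lambda c, b: (c - 0.3) ** 2)
```

Hyperband itself goes further, running several such brackets that trade off the number of starting configurations against the per-configuration budget, which is what gives it its bandit-style guarantees.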
