Solving fundamental computational problems that deliver meaningful impact for Google’s products, society, and scientific progress.
About the team
Athena is an international team of research scientists and engineers who tackle product-inspired problems with novel solutions to assist, complement, empower, and inspire people — from the everyday to the imaginative. Our work spans algorithms, artificial intelligence (AI), language understanding, and many other fields, and yields state-of-the-art breakthroughs in areas like efficiency, privacy, and user engagement.
We collaborate closely with partners across Google to take discoveries from publication to implementation for the company's largest and most trusted products. Beyond Google's portfolio of products and services, our contributions to AI, computer science, and machine learning power scientific advances in climate science, journalism, microeconomics, and other data-driven disciplines.
We recognize that AI is a foundational and transformational technology and are proud to contribute to a long history of responsible innovation. Our commitment to Responsible AI principles ensures we develop and use technologies in ways that are socially beneficial, avoid bias, are built and tested for safety, and are accountable to people and aligned with our values.
Research areas
Team focus summaries
Highlighted projects
Introducing an efficient differential privacy (DP) algorithm for computing heatmaps with provable guarantees.
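As background for the guarantee mentioned above, here is a minimal sketch of the classic Laplace mechanism applied to a 2-D count heatmap. This is the textbook mechanism, not necessarily the algorithm introduced in this work; the function names, parameters, and grid values are illustrative.

```python
import math
import random

def laplace_sample(scale, rng):
    # Inverse-CDF sampling from Laplace(0, scale).
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_heatmap(counts, epsilon, sensitivity=1.0, seed=None):
    """Add Laplace(sensitivity/epsilon) noise to every cell of a 2-D
    count grid. This gives epsilon-DP when each user changes at most
    one cell by at most `sensitivity`."""
    rng = random.Random(seed)
    scale = sensitivity / epsilon
    return [[c + laplace_sample(scale, rng) for c in row] for row in counts]

# Toy 2x3 heatmap of raw counts (illustrative data).
raw = [[10, 0, 3], [2, 7, 1]]
noisy = dp_heatmap(raw, epsilon=1.0, seed=42)
```

The smaller `epsilon` is, the larger the noise scale, so the privacy/utility trade-off is explicit in the single `scale = sensitivity / epsilon` line.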
Of California's largest wildfires since 1930, half have occurred in the last five years. Learn how Google Research's Foresight team predicts wildfire behavior using AI and simulations.
PaLI is a simple, reusable and scalable architecture that can reuse previously trained models. It is trained on WebLI data to perform a wide range of tasks across image-only, language-only, and image-language domains.
In bandit-like settings, algorithms can achieve strong guarantees even when the feedback from the model comes only in the form of a weak hint.
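To make the hint model concrete, here is a toy simulation, not the algorithm from this research: a Bernoulli bandit where each round the learner receives a hint that points at the best arm only with some probability, and mixes following it with uniform exploration. The arm means, hint accuracy, and trust parameter are all made-up values.

```python
import random

def run_hinted_bandit(means, hint_accuracy=0.8, trust=0.9,
                      rounds=1000, seed=0):
    """Bernoulli bandit with a weak hint: each round the hint names the
    best arm with probability `hint_accuracy`, otherwise a random arm.
    The learner follows the hint with probability `trust`, else
    explores uniformly. Returns total reward over all rounds."""
    rng = random.Random(seed)
    best = max(range(len(means)), key=means.__getitem__)
    total = 0
    for _ in range(rounds):
        hint = best if rng.random() < hint_accuracy else rng.randrange(len(means))
        arm = hint if rng.random() < trust else rng.randrange(len(means))
        total += 1 if rng.random() < means[arm] else 0
    return total

reward = run_hinted_bandit([0.2, 0.5, 0.8])
```

Even this crude policy shows why weak hints are useful: the learner need not identify the best arm itself, only decide how much to trust an unreliable external signal.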
Dataset Search is a dedicated search engine for datasets with more than 45 million indexed datasets from more than 13,000 websites covering many disciplines and topics, including government, scientific, and commercial datasets.
Our research on efficient architectures reduces cost and latency to enable ML breakthroughs in production and business deployments.
We're pushing the bounds of clustering, optimization and scalability for algorithms that power Google-scale services. Innovation in algorithmic theory sets the foundation for our global knowledge graph with applications for Ads, Maps, YouTube and more.
An evaluation dataset used to measure machine translation systems’ ability to support regional varieties through a case study on Brazilian vs. European Portuguese and Mainland vs. Taiwan Mandarin Chinese.
A vision-only approach that aims to achieve general UI understanding completely from raw pixels — a key step towards achieving intelligent UI behaviors.
A novel zero-shot transfer learning approach that improves model performance on an unlabeled target domain using knowledge the model learned from a related source domain with adequate labeled data.
Transfer learning can be used to improve the accuracy of differentially private image classification models by leveraging knowledge learned from pre-training tasks. This is especially useful when there is limited or low-quality data available for the target problem.
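A minimal sketch of the underlying recipe: standard DP-SGD (per-example gradient clipping plus Gaussian noise) applied to a linear head over frozen pre-trained features. The feature values, clipping norm, and noise multiplier below are illustrative assumptions, not the settings used in this research.

```python
import math
import random

def dp_sgd_linear_head(features, labels, epochs=5, lr=0.5,
                       clip=1.0, noise_mult=0.5, seed=0):
    """Train a logistic-regression head with the DP-SGD recipe:
    clip each per-example gradient to norm `clip`, sum, add Gaussian
    noise scaled to `noise_mult * clip`, then take an averaged step.
    The features are assumed to come from a frozen pre-trained encoder."""
    rng = random.Random(seed)
    dim = len(features[0])
    n = len(features)
    w = [0.0] * dim
    for _ in range(epochs):
        grad_sum = [0.0] * dim
        for x, y in zip(features, labels):
            z = sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))          # sigmoid
            g = [(p - y) * xi for xi in x]           # per-example gradient
            norm = math.sqrt(sum(gi * gi for gi in g)) or 1.0
            scale = min(1.0, clip / norm)            # clip to norm <= clip
            grad_sum = [s + gi * scale for s, gi in zip(grad_sum, g)]
        noisy = [s + rng.gauss(0.0, noise_mult * clip) for s in grad_sum]
        w = [wi - lr * gi / n for wi, gi in zip(w, noisy)]
    return w

# "Pre-trained" 2-D features for a toy binary task (purely illustrative).
feats = [[1.0, 0.2], [0.9, 0.1], [0.1, 1.0], [0.2, 0.9]]
labs = [1, 1, 0, 0]
weights = dp_sgd_linear_head(feats, labs)
```

Training only the small head keeps the number of noisy updates low, which is exactly why pre-training helps when target data is limited or low quality.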
Featured publications
Journal of Machine Learning Research, vol. 18, no. 185 (2018), pp. 1-52
NAACL 2022 (Association for Computational Linguistics)
Proceedings of the IEEE/CVF International Conference on Computer Vision (2021)
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, Association for Computational Linguistics (2022)
International Conference on Machine Learning (ICML) (2021)