Accelerating the magic cycle of research breakthroughs and real-world applications
October 31, 2025
Yossi Matias, Vice President & Head of Google Research
From earth science to genomics to quantum, we share the latest scientific breakthroughs from Google Research and how today’s powerful AI tools and platforms are accelerating innovation.
Last week at our flagship Research@ event in Mountain View, we shared some of Google Research’s latest announcements, from understanding the Earth to advances in genomics and quantum computing. Working collaboratively with colleagues across the company, our teams drive breakthrough research and accelerate real-world solutions for products, businesses, science and society. As research becomes reality, we uncover new research opportunities, driving innovation further and faster. I call this powerful, cyclical relationship between research and real-world impact the magic cycle of research.
This cycle is accelerating significantly, propelled by more powerful models, new agentic tools that speed up scientific discovery, and open platforms and tools. We see this momentum across domains.
Our latest research breakthroughs
At Research@MTV last week, we highlighted three of our latest breakthroughs: Google Earth AI, DeepSomatic, and Quantum Echoes.
Google Earth AI: Unprecedented planetary understanding
Earth AI is a powerful collection of geospatial AI models and reasoning capabilities designed to address critical global challenges; it gives users an unprecedented level of understanding of what is happening across the planet.
For years we’ve been developing state-of-the-art geospatial AI models covering floods, wildfires, cyclones, air quality, pollen, weather nowcasting and long-range forecasting, agriculture, population dynamics and mobility, as well as AlphaEarth Foundations. These models, developed by teams across Google, are already helping millions of people worldwide, and we keep making progress. We have just expanded access to our new Remote Sensing Foundations and new global Population Dynamics Foundations. And we can now share that our riverine flood models — expanded over the years to cover 700 million people in 100 countries — now provide forecasts covering over 2 billion people in 150 countries for significant riverine flood events.
Earth AI is a Google-wide program building on our long-standing efforts. Our latest research updates to Earth AI integrate and synthesize these vast amounts of real-world imagery, population and environmental data. Using LLMs and their reasoning capabilities, the Earth AI geospatial reasoning agent can understand nuanced concepts and discover correlations across multiple datasets and models. This agent allows users to ask complex questions and receive answers in plain language, making Earth AI capabilities accessible even to non-experts. Users can quickly generate insights across use cases, from business and supply chain management to crisis resilience and international policy.
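To make the idea concrete, here is a minimal, hypothetical sketch of how such an agent might route a plain-language question to specialized Earth models and combine their outputs. The tool names, stub models and keyword routing heuristic below are illustrative stand-ins, not the actual Earth AI or Gemini APIs.

```python
# Hypothetical sketch of a geospatial reasoning agent: route a plain-language
# question to specialized Earth models, then combine their outputs into an answer.
# All tools and the routing logic are illustrative stand-ins, not real Earth AI APIs.

from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]


def flood_forecast(region: str) -> str:
    # Stand-in for a riverine flood forecasting model.
    return f"Flood risk for {region}: elevated over the next 72 hours."


def population_lookup(region: str) -> str:
    # Stand-in for a population dynamics model.
    return f"Estimated exposed population in {region}: roughly 1.2 million people."


TOOLS: Dict[str, Tool] = {
    "flood_forecast": Tool("flood_forecast", "Riverine flood forecasts", flood_forecast),
    "population_lookup": Tool("population_lookup", "Population dynamics estimates", population_lookup),
}


def plan_tools(question: str) -> List[str]:
    # In the real system an LLM would reason about which models to consult;
    # a trivial keyword heuristic stands in for that step here.
    plan = []
    if "flood" in question.lower():
        plan.append("flood_forecast")
    if "people" in question.lower() or "population" in question.lower():
        plan.append("population_lookup")
    return plan


def answer(question: str, region: str) -> str:
    evidence = [TOOLS[name].run(region) for name in plan_tools(question)]
    # An LLM would normally synthesize these findings into prose; we simply join them.
    return " ".join(evidence) or "No relevant geospatial models were selected."


if __name__ == "__main__":
    print(answer("How many people are at risk from flooding this week?", "the Lower Mekong basin"))
```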
In our evaluations, the Geospatial Reasoning Agent improved responses over baseline models that did not have access to Earth AI models and tools. We share the results in our research blog and our technical report.
Google Earth with Gemini capabilities will soon be powered by our Earth AI imagery models, enabling users to search for objects in satellite imagery. Plus, our powerful models are now available to trusted testers on Google Cloud. And we continue to hear from our partners about diverse important use cases, including testimonials from Give Directly, McGill and Partners, Cooper/Smith, WPP, WHO AFRO, Planet Labs and Airbus.
DeepSomatic & Cell2Sentence: Toward precision medicine to fight cancer
DeepSomatic, published in Nature Biotechnology, is the newest of our many AI tools designed to help the scientific community and health practitioners.
DeepSomatic builds on 10 years of genomics research at Google. Since 2015, we’ve been building models like DeepConsensus and DeepVariant to help us better understand the genome. With these models, we’ve helped map human and non-human genomes and used this information to inform our understanding of disease.
Some cancers have complex genetic signatures that may make them targets for tailored treatments based on their specific mutations. So, we asked ourselves if we could sequence the genomes of these cancerous cells more precisely. The result, DeepSomatic, is our new open-source AI-powered tool to help scientists and doctors make sense of genetic variants in cancer cells.
The model works by first turning genetic sequencing data into a set of images and then using a convolutional neural network to differentiate between the reference genome, the non-cancer germline variants in that individual, and the cancer-caused somatic variants in the tumor.
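As a rough illustration of that pipeline, the toy sketch below classifies multi-channel "pileup image" tensors into three classes with a small convolutional network. The channel count, window size and architecture are assumptions made for illustration only, not the published DeepSomatic model.

```python
# Toy sketch of the DeepSomatic idea: encode aligned reads around a candidate site
# as a multi-channel "pileup image" and classify it with a small CNN into three
# classes (reference, germline variant, somatic variant). Shapes, channels and the
# network itself are illustrative assumptions, not the published architecture.

import torch
from torch import nn

NUM_CHANNELS = 6          # e.g., bases, base quality, mapping quality, strand (assumed)
HEIGHT, WIDTH = 100, 221  # reads x window around the candidate site (assumed)
NUM_CLASSES = 3           # reference / germline variant / somatic variant


class PileupClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(NUM_CHANNELS, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(64 * 4 * 4, NUM_CLASSES)

    def forward(self, pileup: torch.Tensor) -> torch.Tensor:
        x = self.features(pileup)
        return self.classifier(x.flatten(start_dim=1))


if __name__ == "__main__":
    model = PileupClassifier()
    batch = torch.randn(8, NUM_CHANNELS, HEIGHT, WIDTH)  # random stand-in pileups
    logits = model(batch)
    print(logits.shape)  # torch.Size([8, 3]): per-class scores for each candidate site
```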
Identifying cancer variants could potentially lead to brand-new therapies, and it could help clinicians decide between treatments such as chemotherapy and immunotherapy. Our partners at Children’s Mercy are using it to pinpoint how and why a particular form of cancer affects a patient in order to create personalized cures.
DeepSomatic follows other breakthroughs that share the same goal of using AI to help fight cancer. We also just released C2S-Scale, a 27-billion-parameter foundation model for single-cell analysis, in collaboration with Google DeepMind. This builds upon our work from earlier this year, in collaboration with Yale, and recently generated a novel hypothesis about cancer cellular behavior. With more clinical tests, this may reveal a promising new pathway for developing therapies to fight cancer.
Quantum Echoes: A big step toward real-world applications
To accelerate the next exponential wave of scientific discovery, we’re looking to our strategic, long-term investment in quantum computing.
Our foundation rests on decades of research, leading to our hardware milestone on the Willow chip in late 2024. This work is supported by Michel Devoret, our Chief Scientist of Quantum Hardware, who, together with former Quantum AI hardware lead John Martinis and John Clarke of the University of California, Berkeley, was awarded the 2025 Nobel Prize in Physics for research in the 1980s that laid the groundwork for today's superconducting qubits.
Now we’ve announced a new verifiable quantum advantage, published on the cover of Nature. Our “Quantum Echoes” algorithm runs on our Willow chip 13,000 times faster than the best classical algorithm on one of the world’s fastest supercomputers. It offers a new way to explain interactions between atoms in a real-world molecule observed using nuclear magnetic resonance spectroscopy. This is the world’s first algorithm to demonstrate verifiable quantum advantage, and it points toward practical applications of quantum computing that are beyond the capabilities of classical computers.
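As the name suggests, the result is built around echo-style measurements: evolving a quantum system forward, applying a local perturbation, and then reversing the evolution to see how the disturbance spreads. The toy numpy sketch below illustrates only that generic echo concept on a few simulated qubits; it is a didactic assumption-laden illustration, not the published algorithm or anything resembling the Willow experiment.

```python
# Toy illustration of the generic "echo" concept: evolve a small system forward,
# apply a local perturbation, evolve backward, and compare with the initial state.
# This is a didactic sketch only, not the Quantum Echoes algorithm.

import numpy as np

rng = np.random.default_rng(0)
n_qubits = 3
dim = 2 ** n_qubits

# Random unitary U standing in for the forward time evolution
# (QR decomposition of a random complex matrix, with column phases fixed).
m = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
q, r = np.linalg.qr(m)
U = q * (np.diag(r) / np.abs(np.diag(r)))

# Local perturbation: Pauli-X on the first qubit.
X = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
B = np.kron(X, np.kron(I2, I2))

psi0 = np.zeros(dim, dtype=complex)
psi0[0] = 1.0  # start in |000>

# Echo: forward evolution, perturbation, reversed evolution.
psi = U.conj().T @ (B @ (U @ psi0))
echo = np.abs(np.vdot(psi0, psi)) ** 2
print(f"Echo (return probability) after perturbation: {echo:.4f}")
```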
Quantum computing has the potential to meaningfully advance drug design and help make fusion energy a reality. And given our latest breakthrough, we’re optimistic that we’ll start to see real-world applications within five years.
Fireside Chat about Quantum AI with James Manyika and Hartmut Neven.
From accelerating scientific discovery to algorithmic innovation
We also shared some of the work across various domains where teams are driving breakthrough research and accelerating real-world solutions. The breadth and depth of these opportunities are ever increasing. Here are a few recent examples.
Health & Science
AI co-scientist is a multi-agent AI system built as a virtual scientific collaborator to help scientists generate novel hypotheses and research proposals, and to accelerate scientific and biomedical discoveries. Our new AI-powered empirical software system, a Gemini-backed coding agent, helps scientists write expert-level empirical software. It accelerates the historically slow task of creating custom software to evaluate and iteratively improve scientific hypotheses. This opens the door to a future where scientists can easily, rapidly, and systematically investigate hundreds or thousands of potential solutions to the problems that motivate their research.
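As a loose illustration of that evaluate-and-iterate loop, the sketch below repeatedly generates candidate analysis code, scores each candidate against a toy benchmark, and keeps the best. The `generate_candidate` function stands in for a Gemini-backed code-writing step; every name and function here is hypothetical.

```python
# Illustrative sketch of an evaluate-and-iterate loop for empirical software:
# propose a candidate analysis function, score it on a benchmark, keep the best.
# `generate_candidate` is a stand-in for an LLM writing code; nothing here is the real system.

import random
from typing import Callable, List, Tuple

Dataset = List[Tuple[float, float]]  # (input, expected output) pairs


def benchmark() -> Dataset:
    # Toy task: recover y = 2x + 1 from examples.
    return [(x, 2 * x + 1) for x in range(10)]


def generate_candidate(rng: random.Random) -> Callable[[float], float]:
    # Stand-in for an LLM proposing a new program; here we just sample coefficients.
    a, b = rng.uniform(0, 4), rng.uniform(-2, 2)
    return lambda x: a * x + b


def score(candidate: Callable[[float], float], data: Dataset) -> float:
    # Lower is better: mean squared error against the benchmark.
    return sum((candidate(x) - y) ** 2 for x, y in data) / len(data)


def search(iterations: int = 200) -> float:
    rng = random.Random(0)
    data = benchmark()
    best = float("inf")
    for _ in range(iterations):
        best = min(best, score(generate_candidate(rng), data))
    return best


if __name__ == "__main__":
    print(f"Best benchmark error found: {search():.4f}")
```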
AMIE, a conversational medical AI agent, demonstrates clinical reasoning and communication on par with primary care physicians in both multimodal and multi-visit settings. As we explore how AMIE may translate to real-world environments, we are testing it under physician oversight, including in a partnership with Beth Israel Deaconess Medical Center to evaluate AMIE with real-world patients.
MedGemma, part of our Health AI Developer Foundations (HAI-DEF) collection, is Google's most capable open model for multimodal medical comprehension. Since launch, MedGemma and HAI-DEF have seen more than 1 million downloads and 40,000 unique users.
Factuality & Efficiency
We continue advancing our research on factuality and grounding for LLMs, including studying how LLMs convey uncertainty, assessing whether LLMs encode more factual knowledge in their parameters than they express in their outputs, and more. We are also expanding to multimodal content: for example, Time-Aligned Captions and our contrastive sequential video diffusion method focus on making scenes in videos visually consistent, helping improve the quality of our image and video models.
Improving the efficiency of LLMs remains a high-priority goal across the industry. Building on our speculative decoding work, which enabled substantial efficiency gains without any compromise on quality, we continue to develop new approaches, such as our recent speculative cascades. We also keep advancing other techniques for efficiency and energy innovation.
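For readers unfamiliar with the idea, here is a minimal sketch of the speculative decoding control flow: a cheap draft model proposes a block of tokens and the expensive target model keeps only the prefix it agrees with. In production the verification is done in a single batched target pass; this toy loops token by token for clarity, and both "models" are deterministic stand-ins.

```python
# Minimal sketch of speculative decoding control flow: a cheap draft model proposes
# a block of tokens; the target model keeps the agreeing prefix plus one corrected
# token. Real systems verify the whole block in one batched target pass; this toy
# checks tokens one at a time for readability. Both "models" are deterministic stand-ins.

from typing import List


def draft_model(prefix: List[int], k: int) -> List[int]:
    # Cheap proposer: guesses the next k tokens (here, a simple +1 pattern).
    return [(prefix[-1] + i + 1) % 100 for i in range(k)]


def target_model(prefix: List[int]) -> int:
    # Expensive verifier: the "true" next token given a prefix (toy rule that
    # occasionally disagrees with the draft's +1 guess).
    return (prefix[-1] + 1) % 100 if prefix[-1] % 7 else (prefix[-1] + 2) % 100


def speculative_step(prefix: List[int], k: int = 4) -> List[int]:
    proposal = draft_model(prefix, k)
    accepted: List[int] = []
    for tok in proposal:
        expected = target_model(prefix + accepted)
        if tok == expected:
            accepted.append(tok)       # draft agreed with target: keep it for free
        else:
            accepted.append(expected)  # first disagreement: take the target's token and stop
            break
    return prefix + accepted


if __name__ == "__main__":
    seq = [1]
    for _ in range(5):
        seq = speculative_step(seq)
    print(seq)
```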
Algorithmic innovation
Algorithmic research contributes to a new Ads model connecting advertisers to customers, continued research on large-scale optimization, enhancements to Google Maps routing and improved voice search in India. Privacy research includes recent advances such as confidential federated analytics, differentially private synthetic data and provably private insights into AI use. We are also making progress on TimesFM, which serves hundreds of millions of queries per month in BigQuery alone, and recently introduced a novel approach using in-context fine-tuning.
We keep exploring new ways to improve learning and education, building on our earlier work on LearnLM, such as Learn Your Way, which aims to improve learning efficacy. And we keep exploring AI innovations such as the use of diffusion models for real-time game engines, which open new horizons for simulating immersive world environments.
At Research@ Mountain View, Yossi Matias joins Alex Kantrowitz on the Big Technology Podcast to discuss our research efforts in areas like cancer treatment and Quantum.
AI as an amplifier of human ingenuity
The magic cycle of research is quickly gaining momentum, propelled by more powerful models, by agentic tools like the AI co-scientist and AI-based expert-level empirical software that help accelerate scientific discovery, and by open platforms and tools like MedGemma, HAI-DEF and DeepSomatic. Innovation today is happening at unprecedented speed.
The latest advancements point to a world where AI is not just a tool, but an essential partner and collaborator. This partnership is already taking shape in tangible ways, empowering researchers, engineers, healthcare workers, and educators. With humans at the steering wheel, we can leverage AI to bring new ideas to life and take on the challenges that matter most.
This fusion of human ingenuity with the powerful capabilities of AI will fuel further innovation and accelerate its impact for people at a global scale, defining a new era of scientific discovery for the benefit of everyone, everywhere.