Publications

Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.

    Productionizing Quantum Mass Production
    Bill Huggins
    Nathan Wiebe
    arXiv (2026) (to appear)
    For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform operations many times in parallel for a cost comparable to that of a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit when we consider a cost model that focuses purely on the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a cost reduction of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data-loading step.
    FreshBrew: A Benchmark for Evaluating AI Agents on Java Code Migration
    Diganta Misra
    Yanqi Luo
    Anjali Sridhar
    Justine Gehring
    Silvio Soares Ribeiro Junior
    2026
    AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative, but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals the critical strengths and limitations of current agentic approaches, offering actionable insights into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
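    The evaluation loop such a benchmark implies is straightforward: run an agent on each repository, rebuild against the target JDK, and count successes. A minimal sketch in Python, assuming hypothetical run_agent and builds_on_jdk17 helpers (FreshBrew's actual harness is not described in the abstract):

        def migration_success_rate(repos, run_agent, builds_on_jdk17):
            # Ask the agent to migrate each repo, then check whether the
            # migrated project compiles and passes its tests on JDK 17.
            successes = 0
            for repo in repos:
                migrated = run_agent(repo, target="jdk17")
                if builds_on_jdk17(migrated):
                    successes += 1
            return successes / len(repos)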
    Transformers, while powerful, suffer from quadratic computational complexity and the ever-growing Key-Value (KV) cache of the attention mechanism. This paper introduces Trellis, a novel Transformer architecture with bounded memory that learns how to compress its key-value memory dynamically at test time. Trellis replaces the standard KV cache with a fixed-size memory and trains a two-pass recurrent compression mechanism to store new keys and values into memory. To achieve this, it leverages an online gradient descent procedure with a forget gate, enabling the compressed memory to be updated recursively while learning to retain important contextual information from incoming tokens at test time. Extensive experiments on language modeling, common-sense reasoning, recall-intensive tasks, and time series show that the proposed architecture outperforms strong baselines. Notably, its performance gains increase as the sequence length increases, highlighting its potential for long-context applications.
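    To make the idea concrete, here is a minimal sketch in Python of a fixed-size linear associative memory updated by online gradient descent with a forget gate, the kind of test-time update the abstract describes (the names, the single-pass form, and the squared-error objective are illustrative assumptions; Trellis's actual two-pass mechanism is more involved):

        import numpy as np

        def memory_update(M, k, v, lr=0.1, forget=0.95):
            # M is a (d_k, d_v) memory, read out as v_hat = k @ M.
            v_hat = k @ M                      # reconstruct the value for key k
            grad = np.outer(k, v_hat - v)      # gradient of 0.5 * ||k @ M - v||^2 w.r.t. M
            return forget * M - lr * grad      # forget-gate decay plus one descent step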
    Google has a long tradition of open-source software, which extends to the field of operations research with OR-Tools. In development since 2008, it offers several solvers useful to many OR practitioners:
      • PDLP, a revolutionary first-order linear solver that is reshaping the landscape of linear optimisation;
      • CP-SAT, an award-winning constraint-programming solver;
      • Glop, an accurate linear solver;
      • Routing, a vehicle routing solver underpinning Google Maps Platform Route Optimization.
    OR-Tools has long had its features accessible from other languages: the core algorithms are implemented in C++ for performance, but users can tap into them in Python, Java, C#, or Go. It has recently become available in Julia too, with a current focus on the linear and constraint solvers, either locally or remotely. We provide a wrapper for our solvers that brings them to JuMP.jl through MathOptInterface.jl. This tutorial will walk you through the features of OR-Tools and its solvers, then show examples of using OR-Tools from within Julia, either through JuMP or a lower-level interface. We will also share our experience of C++-Julia interop.
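    As a taste of the API surface, here is a small CP-SAT model using the Python bindings the abstract mentions (the toy variables and constraint are mine):

        from ortools.sat.python import cp_model

        model = cp_model.CpModel()
        x = model.NewIntVar(0, 10, "x")
        y = model.NewIntVar(0, 10, "y")
        model.Add(x + 2 * y <= 14)   # a toy linear constraint
        model.Maximize(x + y)

        solver = cp_model.CpSolver()
        status = solver.Solve(model)
        if status in (cp_model.OPTIMAL, cp_model.FEASIBLE):
            print(solver.Value(x), solver.Value(y))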
    Understanding fine-grained temporal dynamics is crucial in egocentric videos, where continuous streams capture frequent, close-up interactions with objects. In this work, we bring to light that current egocentric video question-answering datasets often include questions that can be answered using only a few frames or commonsense reasoning, without necessarily being grounded in the actual video. Our analysis shows that state-of-the-art Multi-Modal Large Language Models (MLLMs) on these benchmarks achieve remarkably high performance using just text or a single frame as input. To address these limitations, we introduce EgoTempo, a dataset specifically designed to evaluate temporal understanding in the egocentric domain. EgoTempo emphasizes tasks that require integrating information across the entire video, ensuring that models must rely on temporal patterns rather than static cues or pre-existing knowledge. Extensive experiments on EgoTempo show that current MLLMs still fall short in temporal reasoning on egocentric videos, and we thus hope EgoTempo will catalyze new research in the field and inspire models that better capture the complexity of temporal dynamics. Dataset and code are available at https://github.com/google-research-datasets/egotempo.git.
    Matryoshka Model Learning for Improved Elastic Student Models
    Chetan Verma
    Cho-Jui Hsieh
    Ngot Bui
    Yang Zhang
    Wen Chen
    Xin Liu
    Inderjit Dhillon
    2025
    Industry-grade ML models are carefully designed to meet rapidly evolving serving constraints, which requires significant resources for model development. In this paper, we propose MatTA, a framework for training multiple accurate Student models using a novel Teacher-TA-Student recipe. TA models are larger versions of the Student models with higher capacity, and thus allow Student models to better relate to the Teacher model while also bringing in more domain-specific expertise. Furthermore, multiple accurate Student models can be extracted from the TA model. Therefore, despite only one training run, our methodology provides multiple servable options to trade off accuracy for lower serving cost. We demonstrate the proposed method, MatTA, on proprietary datasets and models. Its practical efficacy is underscored by live A/B tests within a production ML system, demonstrating a 20% improvement on a key metric. We also demonstrate our method on GPT-2 Medium, a public model, and achieve relative improvements of over 24% on SAT Math and over 10% on the LAMBADA benchmark.
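    A minimal sketch in Python (PyTorch) of the two-hop distillation this recipe implies; the KL loss form, the temperature, and the random tensors standing in for real Teacher, TA, and Student logits are illustrative assumptions, not the paper's exact objective:

        import torch
        import torch.nn.functional as F

        def distill_loss(student_logits, teacher_logits, T=2.0):
            # KL divergence between temperature-softened distributions.
            p_t = F.softmax(teacher_logits / T, dim=-1)
            log_p_s = F.log_softmax(student_logits / T, dim=-1)
            return F.kl_div(log_p_s, p_t, reduction="batchmean") * T * T

        # Stand-in logits; in MatTA the Students would be extracted from the
        # TA, so one training run yields several servable models.
        teacher = torch.randn(8, 100)
        ta = torch.randn(8, 100, requires_grad=True)
        students = [torch.randn(8, 100, requires_grad=True) for _ in range(2)]

        loss = distill_loss(ta, teacher) + sum(distill_loss(s, ta.detach()) for s in students)
        loss.backward()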
    The vast world of quantum advantage
    Robert Huang
    John Preskill
    Soonwon Choi
    arXiv (2025)
    The quest to identify quantum advantages, where quantum physics truly outperforms classical physics, lies at the heart of quantum technology. While quantum devices promise extraordinary capabilities, from exponential computational speedups to unprecedented measurement precision, distinguishing genuine advantages from mere illusions remains a formidable challenge. In this endeavor, quantum theorists are like prophets trying to foretell a future where quantum technologies reign supreme. Yet, the boundary between visionary insight and unfounded fantasy is perilously thin. In this perspective, we explore the properties defining an ideal quantum advantage and examine our mathematical tools for navigating the vast world of quantum advantages across computation, learning, sensing, communication, and beyond. We show that some quantum advantages are inherently unpredictable using classical resources alone, suggesting a landscape far richer than what we can currently foresee. While mathematical rigor remains our indispensable guide in this exploration, the ultimate power of quantum technologies may emerge from the quantum advantages we cannot yet conceive.
    On the Design of the Binaural Rendering Library for Eclipsa Audio Immersive Audio Container
    Tomasz Rudzki
    Gavin Kearney
    AES 158th Convention of the Audio Engineering Society (2025)
    Immersive Audio Media and Formats (IAMF), also known as Eclipsa Audio, is an open-source audio container developed to accommodate multichannel and scene-based audio formats. Headphone-based delivery of IAMF audio requires efficient binaural rendering. This paper introduces the Open Binaural Renderer (OBR), which is designed to render IAMF audio. It discusses the core rendering algorithm and the binaural filter design process, as well as the real-time implementation of the renderer in the form of an open-source C++ rendering library. Designed for multi-platform compatibility, the renderer incorporates a novel approach to binaural audio processing, leveraging a combination of a spherical harmonic (SH)-based virtual listening room model and anechoic binaural filters. Through its design, the IAMF binaural renderer provides a robust solution for delivering high-quality immersive audio across diverse platforms and applications.
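    The final stage of any such renderer is convolution of each virtual source with a pair of binaural filters. A minimal sketch in Python of that filtering step (generic HRIR convolution, not OBR's actual SH-based pipeline):

        import numpy as np
        from scipy.signal import fftconvolve

        def binauralize(mono, hrir_left, hrir_right):
            # Convolve a mono source with left/right binaural impulse
            # responses to produce a two-channel headphone signal.
            return np.stack([fftconvolve(mono, hrir_left),
                             fftconvolve(mono, hrir_right)])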
    Analyzing Similarity Metrics for Data Selection for Language Model Pretraining
    Dylan Sam
    Afshin Rostamizadeh
    Gui Citovsky
    Advances in Neural Information Processing Systems (NeurIPS) (2025) (to appear)
    Measuring similarity between training examples is critical for curating high-quality and diverse pretraining datasets for language models. However, similarity is typically computed with a generic off-the-shelf embedding model that has been trained for tasks such as retrieval. Whether these embedding-based similarity metrics are well-suited for pretraining data selection remains largely unexplored. In this paper, we propose a new framework to assess the suitability of a similarity metric specifically for data curation in language model pretraining applications. Our framework's first evaluation criterion captures how well distances reflect generalization in pretraining loss between different training examples. Next, we use each embedding model to guide a standard diversity-based data curation algorithm and measure its utility by pretraining a language model on the selected data and evaluating downstream task performance. Finally, we evaluate the capabilities of embeddings to distinguish between examples from different data sources. With these evaluations, we demonstrate that standard off-the-shelf embedding models are not well-suited for the pretraining data curation setting, underperforming even remarkably simple embeddings that are extracted from models trained on the same pretraining corpus. Our experiments are performed on the Pile, for pretraining a 1.7B parameter language model on 200B tokens. We believe our analysis and evaluation framework serves as a foundation for the future design of embeddings that specifically reason about similarity in pretraining datasets.
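    A typical diversity-based curation step that such a framework would plug an embedding model into is greedy farthest-point selection. A minimal sketch in Python (illustrative; not necessarily the authors' exact algorithm):

        import numpy as np

        def greedy_diverse_subset(embs, k):
            # Normalize rows so the inner product equals cosine similarity.
            embs = embs / np.linalg.norm(embs, axis=1, keepdims=True)
            chosen = [0]
            dist = 1.0 - embs @ embs[0]       # cosine distance to the chosen set
            for _ in range(k - 1):
                i = int(np.argmax(dist))      # farthest point from the set so far
                chosen.append(i)
                dist = np.minimum(dist, 1.0 - embs @ embs[i])
            return chosen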
    Necro-reaper: Pruning away Dead Memory Traffic in Warehouse-Scale Computers
    Proceedings of the 30th ACM International Conference on Architectural Support for Programming Languages and Operating Systems, Association for Computing Machinery (2025)
    Memory bandwidth is emerging as a critical bottleneck in warehouse-scale computing (WSC). This work reveals that a significant portion of memory traffic in WSC is surprisingly unnecessary: it consists of writebacks of deallocated data and fetches of uninitialized data. This issue is particularly acute in WSC, where short-lived heap allocations bigger than a cache line are prevalent. To address this problem, this work proposes a pragmatic approach tailored to WSC. Leveraging the existing WSC ecosystem of vertical integration, profile-guided compilation flows, and customized memory allocators, this work presents Necro-reaper, a novel software/hardware co-design that avoids dead memory traffic without requiring the hardware tracking of prior work. New ISA instructions enable the hardware to avoid dead traffic, while extended software components, including a profile-guided compiler and memory allocator, optimize the utilization of these instructions. Evaluation across a diverse set of 10 WSC workloads demonstrates that Necro-reaper achieves a geomean memory traffic reduction of 26% and a geomean IPC increase of 6%.
    Background: Providers spend a large percentage of their day using electronic health record (EHR) technology and frequently report frustration when EHR tasks are time-consuming and effortful. To solve these challenges, artificial intelligence (AI)–based enhancements to EHR technology are increasingly being deployed. However, AI-based implementations for EHR features often lack user-centered evaluation. Objective: This study evaluates, using a user-centered approach, the implementation of an AI-powered search and clinical discovery tool within an EHR system. Methods: We conducted a 5-month mixed methods study consisting of interviews, observations, and surveys. Results: High adoption rates for the AI-based features (163/176, 93% of users after 3 months) and significant increases across key metrics, including user satisfaction (U=49; P<.001) and perception of time saved (U=49; P<.001), demonstrated that the AI-based features were not only successfully integrated into various clinical workflows but also improved the user experience for clinicians. Conclusions: Our results underscore the feasibility and effectiveness of using a user-centered approach for the deployment of clinical AI tools. High adoption rates and positive user experiences were driven by our user-centered research program, which emphasized close collaboration with users, rapid incorporation of feedback, and tailored user training. This study program can be used as a starting framework for the design and integration of human-centered research methods for AI tool deployment in clinical settings.
    Scaling inference-time computation in Large Language Models (LLMs) dramatically improves their capabilities for solving complex problems. While test-time scaling has shown promise in many tasks such as code generation and mathematical reasoning, the integration of inference-time algorithms into multi-agent frameworks for planning and reasoning remains under-explored. To this end, we explore popular inference-time algorithms (Best of N, Tree of Thought (ToT), and REward BAlanced SEarch (REBASE)) together with a proposed feedback-driven refinement. Our feedback-driven refinement employs specialized agents: a constraint agent to enforce task instance-specific constraints, and a verifier agent to evaluate plan quality. Furthermore, we hypothesize that test-time scaling can be proportional to instance-level complexity. Thus, we propose an additional selection agent to dynamically optimize algorithm choice. We evaluate our proposed approaches on four different benchmarks, i.e., NATURAL PLAN, GPQA, OlympiadBench, and DocFinQA. Experimental results show that our methods outperform strong baselines, achieving state-of-the-art results on NATURAL PLAN, OlympiadBench, and DocFinQA. Our key findings demonstrate that constraint-guided iterative refinement and algorithm selection improve both planning and downstream reasoning in LLMs.
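    The simplest of these algorithms is easy to state precisely. A minimal Best-of-N sketch in Python, where generate stands in for an LLM sampler and score for a verifier agent (both interfaces are hypothetical):

        def best_of_n(generate, score, prompt, n=8):
            # Sample n candidate plans and keep the one the verifier
            # scores highest.
            candidates = [generate(prompt) for _ in range(n)]
            return max(candidates, key=score)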
    The increasing complexity of cybersecurity and artificial intelligence (AI) executive orders, frameworks, and policies has made translating high-level directives into actionable implementation a persistent challenge. Policymakers, framework authors, and engineering teams often lack a unified approach for interpreting and operationalizing these documents, resulting in inefficiencies, misalignment, and delayed compliance. While existing standards such as the Open Security Controls Assessment Language (OSCAL) address control-level specifications, no standardized, machine-readable format exists for authoring and structuring high-level governance documents. This gap hinders collaboration across disciplines and obscures critical directives’ underlying intent and rationale. This report introduces Governance Schema (GovSCH), an open-source schema designed to standardize the authoring and translation of cybersecurity and AI governance documents into a consistent, machine-readable format. By analyzing prior executive orders, regulatory frameworks, and policies, GovSCH identifies common structures and authoring practices to create an interoperable model that bridges policymakers, regulatory framework authors, and engineering teams. This approach enables more precise articulation of policy intent, improves transparency, and accelerates the technical implementation of regulations. Ultimately, GovSCH aims to enhance collaboration, standardization, and efficiency in cybersecurity and AI governance.
    Zero-Shot Image Moderation in Google Ads with LLM-Assisted Textual Descriptions and Cross-modal Co-embeddings
    Jimin Li
    Eric Xiao
    Katie Warren
    Enming Luo
    Krishna Viswanathan
    Ariel Fuxman
    Bill Li
    Yintao Liu
    (2025)
    We present a scalable and agile approach to ads image content moderation at Google, addressing the challenges of moderating massive volumes of ads with diverse content and evolving policies. The proposed method utilizes human-curated textual descriptions and cross-modal text-image co-embeddings to enable zero-shot classification of policy-violating ad images, bypassing the need for extensive supervised training data and human labeling. By leveraging large language models (LLMs) and user expertise, the system generates and refines a comprehensive set of textual descriptions representing policy guidelines. During inference, co-embedding similarity between incoming images and the textual descriptions serves as a reliable signal for policy violation detection, enabling efficient and adaptable ads content moderation. Evaluation results demonstrate the efficacy of this framework in significantly boosting the detection of policy-violating content.
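    The inference step reduces to a nearest-description lookup in a shared embedding space. A minimal sketch in Python, assuming precomputed image and policy-description embeddings from some co-embedding model (the threshold and names are illustrative):

        import numpy as np

        def flag_image(image_emb, policy_text_embs, threshold=0.3):
            # Cosine similarity between the image and each policy description.
            img = image_emb / np.linalg.norm(image_emb)
            txt = policy_text_embs / np.linalg.norm(policy_text_embs, axis=1, keepdims=True)
            sims = txt @ img
            best = int(np.argmax(sims))
            # Flag if the closest policy description is similar enough.
            return sims[best] >= threshold, best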
    Participatory AI Considerations for Advancing Racial Health Equity
    Andrea G. Parker
    Jatin Alla
    Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI) (2025) (to appear)
    Health-related artificial intelligence (health AI) systems are being rapidly created, largely without input from racially minoritized communities who experience persistent health inequities and stand to be negatively affected if these systems are poorly designed. Addressing this problematic trend, we critically review prior work focused on the participatory design of health AI innovations (participatory AI research), surfacing eight gaps in this work that inhibit racial health equity and providing strategies for addressing these gaps. Our strategies emphasize that “participation” in design must go beyond the typical focus areas of data collection, annotation, and application co-design to also include co-generating overarching health AI agendas and policies. Further, participatory AI methods must prioritize community-centered design that supports collaborative learning around health equity and AI, addresses root causes of inequity and AI stakeholder power dynamics, centers relationalism and emotion, supports flourishing, and facilitates longitudinal design. These strategies will help catalyze research that advances racial health equity.