Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
1 - 15 of 10827 publications
Preview abstract
AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative—but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals the critical strengths and limitations of current agentic approaches, offering actionable insights into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
View details
Preview abstract
For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform operations many times in parallel for a cost that is comparable to a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit when we consider a cost model that focuses purely on the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a reduction in cost of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data loading step.
View details
Preview abstract
JuMP and MathOptInterface.jl give access to many solvers, ranging from widely used industrial solvers to more specialised ones. Google offers its own in-house solvers as part of the open-source package OR-Tools: Glop, a simplex solver; CP-SAT, an award-winning constraint-programming solver; and PDLP, a first-order solver for large-scale linear programming.
ORTools.jl is a recent package that gives access to these solvers through MathOptInterface.jl. It supports both local and remote use, meaning that, thanks to Google Cloud, users do not need a local installation to solve linear and integer problems. More recently, ORTools.jl started offering a native interface for constraint programming, building upon the work in MathOptInterface.jl.
However, OR-Tools has more to offer, including a scalable routing solver for large-scale VRPs and a state-of-the-art set-cover solver. MathOptInterface.jl does not yet provide an interface for these problem classes, and we would like to gauge the community's interest in these specific solvers.
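For readers who have not used these solvers, below is a minimal sketch that calls Glop through the OR-Tools Python API (pywraplp) rather than through ORTools.jl; it illustrates only the underlying solver, not the Julia interface described in this abstract.

```python
# Minimal sketch: solving a tiny LP with Glop via the OR-Tools Python API.
# This shows the solver itself, not the ORTools.jl / MathOptInterface.jl path.
from ortools.linear_solver import pywraplp

solver = pywraplp.Solver.CreateSolver("GLOP")  # Glop: Google's simplex LP solver
x = solver.NumVar(0.0, 10.0, "x")
y = solver.NumVar(0.0, 10.0, "y")
solver.Add(x + 2 * y <= 14)
solver.Add(3 * x - y >= 0)
solver.Maximize(3 * x + 4 * y)

if solver.Solve() == pywraplp.Solver.OPTIMAL:
    print("objective =", solver.Objective().Value())
    print("x =", x.solution_value(), "y =", y.solution_value())
```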
View details
Deterministic Parallel High-Quality Hypergraph Partitioning
Nikolai Maas
Lars Gottesbüren
Robert Krause
2025
Preview abstract
We present a deterministic parallel multilevel algorithm for balanced hypergraph partitioning that matches the state of the art for non-deterministic algorithms. Deterministic parallel algorithms produce the same result in each invocation, which is crucial for reproducibility. Moreover, determinism is highly desirable in application areas such as VLSI design. While there has been tremendous progress in parallel hypergraph partitioning algorithms recently, deterministic counterparts for high-quality local search techniques are missing. Consequently, solution quality is severely lacking in comparison to the non-deterministic algorithms.
In this work we close this gap. First, we present a generalization of the recently proposed Jet refinement algorithm. While Jet is naturally amenable to determinism, significant changes are necessary to achieve competitive performance on hypergraphs. We also propose an improved deterministic rebalancing algorithm for Jet. Moreover, we consider the powerful but slower flow-based refinement and introduce a scheme that enables deterministic results while building upon a non-deterministic maximum flow algorithm.
As demonstrated in our thorough experimental evaluation, this results in the first deterministic parallel partitioner that is competitive with the highest-quality solvers. With Jet refinement, we match or exceed the quality of Mt-KaHyPar's non-deterministic default configuration while being only 15% slower on average. We observe self-relative speedups of up to 55x on 64 cores, with an average speedup of 22.5x.
Our deterministic flow-based refinement exceeds the quality of the non-deterministic variant by roughly 1% on average but requires 31% more running time.
View details
Preview abstract
Large language models (LLMs) require significant memory to store Key-Value (KV) embeddings in their KV cache, especially when handling long-range contexts. Quantization of these KV embeddings is a common technique to reduce memory consumption.
This work introduces PolarQuant, a novel quantization method employing random preconditioning and polar transformation. Our method first preconditions the embedding vectors using a random projection matrix. Then, we transform these vectors into polar coordinates and quantize the resulting polar representation.
Our key insight is that, after random preconditioning, the angles in the polar representation exhibit a tightly bounded and concentrated distribution with an analytically computable form. This eliminates the need for explicit normalization, a computationally expensive step required by traditional quantization methods.
Normalization introduces significant memory overhead because quantization parameters (e.g., zero point and scale) must be stored in full precision for each data block. This can add 1 to 2 bits per quantized value, depending on the block size. PolarQuant bypasses this normalization step, enabling substantial memory savings.
Empirical evaluation demonstrates that PolarQuant achieves lower memory overheads than existing normalization-based KV quantization techniques. Moreover, it improves performance across various generation tasks, particularly those involving long-context understanding.
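A rough sketch of the general recipe described above: randomly precondition the vector, group coordinates into 2-D pairs, and quantize the resulting angles on a fixed grid with no per-block scale or zero point. The pairing, bit width, and handling of the radii below are illustrative assumptions, not the paper's exact algorithm.

```python
# Illustrative sketch of angle quantization after random preconditioning.
# The 2-D pairing, 4-bit angle grid, and full-precision radii are assumptions
# for illustration; they are not the exact PolarQuant scheme.
import numpy as np

rng = np.random.default_rng(0)

def random_preconditioner(d):
    # Random orthogonal matrix via QR of a Gaussian matrix.
    q, _ = np.linalg.qr(rng.standard_normal((d, d)))
    return q

def polar_quantize(v, Q, angle_bits=4):
    z = Q @ v                           # randomly precondition the vector
    x, y = z[0::2], z[1::2]             # group coordinates into 2-D pairs
    r = np.hypot(x, y)                  # radii (kept in full precision here)
    theta = np.arctan2(y, x)            # angles in [-pi, pi]
    levels = 2 ** angle_bits
    # Fixed quantization grid: no per-block zero point or scale to store.
    codes = np.round((theta + np.pi) / (2 * np.pi) * levels).astype(int) % levels
    return r, codes.astype(np.uint8)

def polar_dequantize(r, codes, Q, angle_bits=4):
    theta_hat = codes / 2 ** angle_bits * 2 * np.pi - np.pi
    z = np.empty(2 * len(r))
    z[0::2], z[1::2] = r * np.cos(theta_hat), r * np.sin(theta_hat)
    return Q.T @ z                      # undo the orthogonal preconditioning

d = 8
Q = random_preconditioner(d)
v = rng.standard_normal(d)
r, codes = polar_quantize(v, Q)
print(np.linalg.norm(v - polar_dequantize(r, codes, Q)))  # small reconstruction error
```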
View details
DialogLab: Authoring, Simulating, and Testing Dynamic Group Conversations in Hybrid Human-AI Conversations
Erzhen Hu
Mingyi Li
Alex Olwal
Seongkook Heo
UIST '25: Proceedings of the 38th Annual ACM Symposium on User Interface Software and Technology, ACM (2025), 210:1-20
Preview abstract
Designing compelling multi-party conversations involving both humans and AI agents presents significant challenges, particularly in balancing scripted structure with emergent, human-like interactions. We introduce DialogLab, a prototyping toolkit for authoring, simulating, and testing hybrid human-AI dialogues. DialogLab provides a unified interface to configure conversational scenes, define agent personas, manage group structures, specify turn-taking rules, and orchestrate transitions between scripted narratives and improvisation. Crucially, DialogLab allows designers to introduce controlled deviations from the script—through configurable agents that emulate human unpredictability—to systematically probe how conversations adapt and recover. DialogLab facilitates rapid iteration and evaluation of complex, dynamic multi-party human-AI dialogues. An evaluation with both end users and domain experts demonstrates that DialogLab supports efficient iteration and structured verification, with applications in training, rehearsal, and research on social dynamics. Our findings show the value of integrating real-time, human-in-the-loop improvisation with structured scripting to support more realistic and adaptable multi-party conversation design.
View details
Earth AI: Unlocking Geospatial Insights with Foundation Models and Cross-Modal Reasoning
Aaron Bell
Aviad Barzilai
Roy Lee
Gia Jung
Charles Elliott
Adam Boulanger
Amr Helmy
Jacob Bien
Ruth Alcantara
Nadav Sherman
Hassler Thurston
Yotam Gigi
Bolous Jaber
Vered Silverman
Luke Barrington
Tim Thelin
Elad Aharoni
Kartik Hegde
Yuval Carny
Shravya Shetty
Yehonathan Refael
Stone Jiang
David Schottlander
Juliet Rothenberg
Luc Houriez
Yochai Blau
Joydeep Paul
Yang Chen
Yael Maguire
Aviv Slobodkin
Shlomi Pasternak
Alex Ottenwess
Jamie McPike
Per Bjornsson
Natalie Williams
Reuven Sayag
Thomas Turnbull
Ali Ahmadalipour
David Andre
Amit Aides
Ean Phing VanLee
Niv Efron
Monica Bharel
arXiv preprint arXiv:2510.18318 (2025)
https://doi.org/10.48550/arXiv.2510.18318
Preview abstract
Geospatial data offers immense potential for understanding our planet. However, the sheer volume and diversity of this data along with its varied resolutions, timescales, and sparsity pose significant challenges for thorough analysis and interpretation. The emergence of Foundation Models (FMs) and Large Language Models (LLMs) offers an unprecedented opportunity to tackle some of this complexity, unlocking novel and profound insights into our planet.
This paper introduces a comprehensive approach to developing Earth AI solutions, built upon foundation models across three key domains—Planet-scale Imagery, Population, and Environment—and an intelligent Gemini-powered reasoning engine. We present rigorous benchmarks showcasing the power and novel capabilities of our foundation models and validate that they provide complementary value to improve geospatial inference. We show that the synergy between these models unlocks superior predictive capabilities. To handle complex, multi-step queries, we developed a Gemini-powered agent that jointly reasons over our multiple foundation models along with large geospatial data sources and tools to unlock novel geospatial insights. On a new benchmark of real-world crisis scenarios, our agent demonstrates the ability to deliver critical and timely insights, effectively bridging the gap between raw geospatial data and actionable understanding.
View details
Preview abstract
We revisit the fundamental question of formally defining what constitutes a reconstruction attack. While the notion is often clear from context, our exploration reveals that a precise definition is much more nuanced than it appears, to the extent that a single all-encompassing definition may not exist. Thus, we employ a different strategy and aim to "sandwich" the concept of reconstruction attacks by addressing two complementary questions: (i) What conditions guarantee that a given system is protected against such attacks? (ii) Under what circumstances does a given attack clearly indicate that a system is not protected? More specifically,
* We introduce a new definitional paradigm -- Narcissus Resiliency -- to formulate a security definition for protection against reconstruction attacks. This paradigm has a self-referential nature that enables it to circumvent shortcomings of previously studied notions of security. Furthermore, as a side-effect, we demonstrate that Narcissus resiliency captures as special cases multiple well-studied concepts including differential privacy and other security notions of one-way functions and encryption schemes.
* We formulate a link between reconstruction attacks and Kolmogorov complexity. This allows us to put forward a criterion for evaluating when such attacks are convincingly successful.
View details
Safe Coding: Rigorous Modular Reasoning about Software Safety
Queue, 23(5) (2025)
Preview abstract
Many persistent and dangerous software vulnerabilities, including memory safety violations and code injection, arise from a common root cause: Developers inadvertently violate the implicit safety preconditions of widely-used programming constructs. These preconditions—such as pointer validity, array-access bounds, and the trustworthy provenance of code fragments to be evaluated as SQL, HTML, or JavaScript—are traditionally the developer's responsibility to ensure. In complex systems, meeting these obligations often relies on non-local, whole-program invariants that are notoriously difficult to reason about correctly, leading to vulnerabilities that are difficult to detect after the fact.
This article introduces Safe Coding, a collection of software design patterns and practices designed to cost-effectively provide a high degree of assurance against entire classes of such vulnerabilities. The core principle of Safe Coding is to shift responsibility for safety from individual developers to the programming language, software libraries, and frameworks. This is achieved by systematically eliminating the direct use of risky operations—those with complex safety preconditions—in application code. Instead, these operations are encapsulated within safe abstractions: modules with public APIs that are safe by design, whose implementations fully ensure all module-internal safety preconditions through a combination of local runtime checks and by elevating safety preconditions into type invariants.
Safe Coding facilitates a modular and compositional approach to whole-program safety: Difficult reasoning is localized to the implementation of safe abstractions, which undergo focused expert scrutiny. The composition of these abstractions with the majority of the codebase (which is kept free of risky operations) is then automatically verified by the language’s type checker. This form of compositional reasoning, drawing from patterns used in formal software verification, can be viewed as a semi-formal approach that balances rigor with broad applicability to large industrial codebases. We discuss the successful application of these practices at Google, where they have nearly eliminated vulnerabilities such as Cross-Site Scripting (XSS) and SQL injection, and their critical role in ensuring memory safety in Rust, collectively demonstrating a favorable cost-assurance tradeoff for achieving software safety at scale.
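To make the pattern concrete, here is a small sketch in Python (the class and function names are our own, and the provenance check is simplified to a comment; the production libraries and Rust type-system mechanisms discussed above are far more rigorous): the risky operation, executing a raw SQL string, appears only inside a safe abstraction, and application code can influence the query only through bound parameters.

```python
# Sketch of a "safe abstraction" (hypothetical names, simplified enforcement):
# application code never passes raw strings into the risky operation directly.
import sqlite3

class TrustedSql:
    """SQL text whose provenance is a developer-written literal, never user input."""
    def __init__(self, literal: str):
        # In a real system this constructor would be restricted (e.g., to
        # compile-time constants or to this module); the comment stands in
        # for that enforcement here.
        self._sql = literal

def run_query(conn: sqlite3.Connection, query: TrustedSql, params: tuple = ()):
    # The only call site of the risky operation. Its safety precondition
    # (no untrusted SQL fragments) is guaranteed by the TrustedSql invariant
    # plus bound parameters, not by every caller re-proving it.
    return conn.execute(query._sql, params).fetchall()

# Application code: user-controlled data can only flow into bound parameters.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
FIND_USER = TrustedSql("SELECT name FROM users WHERE name = ?")
print(run_query(conn, FIND_USER, ("alice",)))
```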
View details
Preview abstract
Motivated by the growing demand for serving large language model inference requests, we study distributed load balancing for global serving systems with network latencies. We consider a fluid model in which continuous flows of requests arrive at different frontends and need to be routed to distant backends whose processing rates are workload-dependent. Network latencies can lead to long travel times for requests and delayed feedback from backends. The objective is to minimize the average latency of requests, composed of the network latency and the serving latency at the backends.
We introduce Distributed Gradient Descent Load Balancing (DGD-LB), a probabilistic routing algorithm in which each frontend adjusts the routing probabilities dynamically using gradient descent. Our algorithm is distributed: there is no coordination between frontends, except by observing the delayed impact other frontends have on shared backends. The algorithm uses an approximate gradient that measures the marginal impact of an additional request evaluated at a delayed system state. Equilibrium points of our algorithm minimize the centralized optimal average latencies, and we provide a novel local stability analysis showing that our algorithm converges to an optimal solution when started sufficiently close to that point. Moreover, we present sufficient conditions on the step-size of gradient descent that guarantee convergence in the presence of network latencies. Numerical experiments show that our algorithm is globally stable and optimal, confirm our stability conditions are nearly tight, and demonstrate that DGD-LB can lead to substantial gains relative to other load balancers studied in the literature when network latencies are large.
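The sketch below shows the flavor of such a distributed update for a single frontend (the gradient estimate, latency model, and step size are our own placeholders; the paper's algorithm and analysis are more involved): take a gradient step on the routing probabilities using an estimate of marginal latency, then project back onto the probability simplex.

```python
# Illustrative DGD-style routing update for one frontend (simplified assumptions,
# not the paper's exact algorithm): projected gradient descent on routing probabilities.
import numpy as np

def project_to_simplex(v):
    # Euclidean projection onto the probability simplex.
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def update_routing(p, marginal_latency, network_latency, step_size=0.05):
    # Approximate gradient of average latency: network latency to each backend
    # plus the (possibly delayed) estimate of its marginal serving latency.
    grad = network_latency + marginal_latency
    return project_to_simplex(p - step_size * grad)

p = np.full(3, 1 / 3)                  # start by routing uniformly
net = np.array([0.01, 0.05, 0.12])     # network latency to each backend (seconds)
for _ in range(50):
    # Placeholder latency model: marginal latency grows with the load we send.
    marginal = 0.2 + 1.5 * p
    p = update_routing(p, marginal, net)
print(p)
```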
View details
Preview abstract
We describe an efficient quantum algorithm for solving the linear matrix equation AX+XB=C, where A, B and C are given complex matrices and X is unknown. This is known as the Sylvester equation, a fundamental equation with applications in control theory and physics. Rather than encoding the solution in a quantum state in a fashion analogous to prior quantum linear algebra solvers, our approach constructs the solution matrix X in a block-encoding, rescaled by some factor. This allows us to obtain certain properties of the entries of X exponentially faster than would be possible from preparing X as a quantum state. The query and gate complexities of the quantum circuit that implements this block-encoding are almost linear in a condition number that depends on A and B, and depend logarithmically on the dimension and the inverse error. We show how our quantum circuits can solve BQP-complete problems efficiently, discuss potential applications and extensions of our approach, its connection to the Riccati equation, and comment on open problems.
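For orientation, the Sylvester equation admits a classical reformulation as an ordinary linear system via vectorization; this is a standard linear-algebra identity stated here for context, not a result of the paper.

```latex
% Sylvester equation: A \in \mathbb{C}^{m \times m}, B \in \mathbb{C}^{n \times n},
% C, X \in \mathbb{C}^{m \times n}.
AX + XB = C
\;\Longleftrightarrow\;
\left(I_n \otimes A + B^{\top} \otimes I_m\right) \operatorname{vec}(X) = \operatorname{vec}(C)
```

The solution is unique exactly when A and -B share no eigenvalue.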
View details
Procurement Auctions via Approximate Submodular Optimization
Amin Karbasi
Grigoris Velegkas
Forty-second International Conference on Machine Learning (2025)
Preview abstract
We study the problem of procurement auctions, in which an auctioneer seeks to acquire services from a group of strategic sellers with private costs. The quality of the services is measured through some \emph{submodular} function that is known to the auctioneer. Our goal is to design \emph{computationally efficient} procurement auctions that (approximately) maximize the difference between the quality of the acquired services and the total cost of the sellers, in a way that is incentive compatible (IC) and individually rational (IR) for the sellers, and generates non-negative surplus (NAS) for the auctioneer.
Leveraging recent results from the literature on \emph{non-positive} submodular function maximization, we design computationally efficient frameworks that transform submodular function optimization algorithms to \emph{mechanisms} that are IC and IR for the sellers, NAS for the auctioneer, and \emph{approximation-preserving}. Our frameworks are general and work both in the \emph{offline} setting where the auctioneer can observe the bids and the services of all the sellers simultaneously, and in the \emph{online} setting where the sellers arrive in an adversarial order and the auctioneer has to make an irrevocable decision whether to purchase their service or not. We further investigate whether it is possible to convert state-of-the-art submodular optimization algorithms into a descending auction. We focus on the adversarial setting, meaning that the schedule of the descending prices is determined by an adversary. We show that a submodular optimization algorithm satisfying bi-criteria $(\alpha, 1)$-approximation in welfare can be effectively converted to a descending auction in the adversarial setting if and only if $\alpha \leq \frac 1 2$. Our result highlights the importance of a carefully designed schedule of descending prices to effectively convert a submodular optimization algorithm satisfying bi-criteria $(\alpha, 1)$-approximation in welfare with $\alpha > \frac 1 2$ to a descending auction. We also further establish a connection between descending auctions and online submodular optimization algorithms.
We demonstrate the practical applications of our frameworks by instantiating them with different state-of-the-art submodular optimization algorithms and comparing their welfare performance through empirical experiments on publicly available datasets that consist of thousands of sellers.
View details
Preview abstract
Modern foundation models are trained on diverse datasets to enhance generalization across tasks and domains. A central challenge in this process is determining how to effectively mix and sample data from multiple sources. This naturally leads to a multi-task learning (MTL) perspective. While prior work in MTL has emphasized mitigating gradient conflicts, we observe that large-scale pre-training scenarios, such as multilingual or multi-domain training, often exhibit little to no gradient conflict. Motivated by this observation, we propose PiKE (Positive gradient interaction-based K task weights Estimator), an adaptive data mixing algorithm that dynamically adjusts sampling weights during training. PiKE exploits non-conflicting gradient interactions to minimize a near-tight upper bound on the average loss decrease at each step, while incurring negligible computational overhead. We provide theoretical convergence guarantees and show that PiKE outperforms static and nonadaptive mixing baselines. Furthermore, we extend PiKE to promote balanced learning across tasks. Extensive experiments on large-scale language model pre-training confirm that PiKE achieves faster convergence and improved downstream performance compared to existing approaches.
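As a rough illustration of what an adaptive mixing step of this kind can look like (the scoring rule, normalization, and hyperparameters below are our own assumptions, not PiKE's actual estimator or bound), one can score each task by how its probe-batch gradient interacts with the current mixture gradient and renormalize the sampling weights accordingly.

```python
# Illustrative adaptive-mixing step (assumed update rule, not the paper's exact
# PiKE estimator): tasks whose gradients align with the mixture update direction
# get sampled more, since they promise a larger average loss decrease.
import numpy as np

def adaptive_mix_weights(task_grads, prev_weights, temperature=1.0):
    """task_grads: list of flattened per-task gradient vectors from probe batches."""
    G = np.stack(task_grads)                 # (num_tasks, num_params)
    mean_grad = prev_weights @ G             # current mixture gradient
    # Positive-interaction score: inner product with the mixture gradient.
    scores = G @ mean_grad
    logits = np.log(prev_weights + 1e-12)
    logits += temperature * scores / (np.linalg.norm(mean_grad) ** 2 + 1e-12)
    w = np.exp(logits - logits.max())
    return w / w.sum()

w = np.array([0.5, 0.3, 0.2])                      # initial weights for 3 tasks
grads = [np.random.randn(1000) for _ in range(3)]  # per-task probe gradients
print(adaptive_mix_weights(grads, w))
```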
View details
Automated loss of pulse detection on a commercial smartwatch
Kamal Shah
Yiwen Chen
Anthony Stange
Lawrence Cai
Matt Wimmer
Pramod Rudrapatna
Shelten Yuen
Anupam Pathak
Shwetak Patel
Mark Malhotra
Marc Stogaitis
Jeanie Phan
Ali Connell
Jim Taylor
Jacqueline Shreibati
Daniel McDuff
Tajinder Gadh
Jake Sunshine
Nature, 642 (2025), pp. 174-181
Preview abstract
Out-of-hospital cardiac arrest is a time-sensitive emergency that requires prompt identification and intervention: sudden, unwitnessed cardiac arrest is nearly unsurvivable. A cardinal sign of cardiac arrest is sudden loss of pulse. Automated biosensor detection of unwitnessed cardiac arrest, and dispatch of medical assistance, may improve survivability given the substantial prognostic role of time, but only if the false-positive burden on public emergency medical systems is minimized. Here we show that a multimodal, machine learning-based algorithm on a smartwatch can reach performance thresholds making it deployable at a societal scale. First, we show that wearable photoplethysmography measurements of peripheral pulselessness (induced through an arterial occlusion model) manifest similarly to pulselessness caused by a common cardiac arrest arrhythmia, ventricular fibrillation. On the basis of the similarity of the photoplethysmography signal (from ventricular fibrillation or arterial occlusion), we developed and validated a loss of pulse detection algorithm using data from peripheral pulselessness and free-living conditions. Following its development, we evaluated the end-to-end algorithm prospectively: there was 1 unintentional emergency call per 21.67 user-years across two prospective studies; the sensitivity was 67.23% (95% confidence interval of 64.32% to 70.05%) in a prospective arterial occlusion cardiac arrest simulation model. These results indicate an opportunity, deployable at scale, for wearable-based detection of sudden loss of pulse while minimizing societal costs of excess false detections.
View details
Better autoregressive regression with LLMs via regression-aware fine-tuning
Zhao Meng
Aditya Menon
The Thirteenth International Conference on Learning Representations (2025)
Preview abstract
Decoder-based large language models (LLMs) have proven highly versatile, with remarkable successes even on problems ostensibly removed from traditional language generation. One such example is solving regression problems, where the targets are real numbers rather than textual tokens. A common approach to use LLMs on such problems is to perform fine-tuning based on the cross-entropy loss, and use autoregressive sampling at inference time. Another approach relies on fine-tuning a separate predictive head with a suitable loss such as squared error. While each approach has had success, there has been limited study on principled ways of using decoder LLMs for regression. In this work, we compare different prior works under a unified view, and introduce regression-aware fine-tuning (RAFT), a novel approach based on the Bayes-optimal decision rule. We demonstrate how RAFT improves over established baselines on several benchmarks and model families.
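For intuition about the Bayes-optimal decision rule under squared error (a generic sketch built on our own assumptions; RAFT itself also modifies the fine-tuning objective, and `sample_completion` is a placeholder rather than an API from the paper): instead of trusting a single greedy decode, sample several numeric completions and average them, approximating the posterior mean.

```python
# Hedged sketch: approximate the Bayes-optimal prediction under squared error by
# averaging sampled numeric decodes. `sample_completion` is a placeholder for
# whatever LLM sampling interface is available; it is not from the paper.
import random
import re

def parse_number(text):
    m = re.search(r"-?\d+(\.\d+)?", text)
    return float(m.group()) if m else None

def bayes_optimal_predict(sample_completion, prompt, num_samples=16):
    values = []
    for _ in range(num_samples):
        v = parse_number(sample_completion(prompt))
        if v is not None:
            values.append(v)
    # Posterior-mean estimate: optimal for squared error, unlike one greedy decode.
    return sum(values) / len(values) if values else None

# Toy stand-in for an LLM that emits noisy numeric strings.
fake_llm = lambda prompt: f"{random.gauss(4.2, 0.3):.2f}"
print(bayes_optimal_predict(fake_llm, "Rate this review from 1 to 5:"))
```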
View details