Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
Abstract
For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform operations many times in parallel for a cost that is comparable to a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit when we consider a cost model that focuses purely on the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a cost reduction of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data loading step.
Phoenix: Rowhammer Attacks on DDR5 with Self-Correcting Synchronization
Michele Marazzi
Kaveh Razavi
Salman Qazi
Diego Meyer
Patrick Jattke
IEEE Security & Privacy (S&P) (2026)
Abstract
AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative, but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals the critical strengths and limitations of current agentic approaches, offering actionable insights into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
Abstract
We study the problem of allocating access point bandwidth to users of a wireless network in the presence of adversarial jamming. Specifically, we consider a setting in which the network designer acts first and allocates access point bandwidth to the users of the network, before an adversary applies a jamming strategy to reduce the bandwidth of a subset (or all) of the access points. We consider a strong adversary who has complete information and can optimize the jamming strategy, subject to power budget constraints. In turn, the network designer must allocate the resources in anticipation of the adversary's actions.
We show that our model gives rise to a special network interdiction problem, which differs from the standard setting in two ways. The first is that the interdictor responds to the allocation rather than leading the game. The second is that the interdiction is fractional and performed at the node level of the network; the interdiction then propagates to all edges incident to the access point.
In terms of technical results, we provide an allocation algorithm based on linear programming duality and show that it solves the problem optimally, assuming knowledge of the adversary's budget constraints. We conduct experiments on synthetic data to quantify how much the algorithm improves total utilized bandwidth over a baseline that optimizes the allocation while remaining oblivious to the adversary's existence.
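To make the duality argument concrete, here is a toy instantiation in Python: linear utilities, a single total-supply constraint, and a fractional node-level jamming budget. The model details below are our own illustrative assumptions, not the paper's exact formulation; the point is only how LP duality collapses the max-min into a single linear program.

```python
# Toy model (our assumption, not the paper's formulation): the designer
# allocates bandwidth x_i <= c_i across access points subject to a total
# supply S; the adversary then jams fractions y_i in [0, 1] with
# sum(y_i) <= B, and utilized bandwidth is sum x_i * (1 - y_i).
# Dualizing the adversary's inner (fractional-knapsack) LP turns the
# max-min into one LP over (x, t, s).
import numpy as np
from scipy.optimize import linprog

def robust_allocation(c, S, B):
    n = len(c)
    # Variables: [x_1..x_n, t, s_1..s_n]; minimize -(sum x) + B*t + sum s.
    cost = np.concatenate([-np.ones(n), [B], np.ones(n)])
    # Dual feasibility of the adversary's knapsack: x_i - t - s_i <= 0.
    A_dual = np.hstack([np.eye(n), -np.ones((n, 1)), -np.eye(n)])
    # Total supply: sum x_i <= S.
    A_supply = np.concatenate([np.ones(n), [0], np.zeros(n)])[None, :]
    A_ub = np.vstack([A_dual, A_supply])
    b_ub = np.concatenate([np.zeros(n), [S]])
    bounds = [(0, ci) for ci in c] + [(0, None)] * (n + 1)
    res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    x = res.x[:n]
    return x, -res.fun  # allocation and worst-case utilized bandwidth

x, value = robust_allocation(c=[10, 10, 10], S=24, B=1.0)
# Spreading the allocation evenly limits what one unit of jamming can
# destroy: here x = (8, 8, 8) with worst-case value 16.
```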
Abstract
How many T gates are needed to approximate an arbitrary n-qubit quantum state to within a given precision ϵ? Improving prior work of Low, Kliuchnikov and Schaeffer, we show that the optimal asymptotic scaling is Θ(√(2^n log(1/ϵ)) + log(1/ϵ)) if we allow an unlimited number of ancilla qubits. We also show that this is the optimal T-count for implementing an arbitrary diagonal n-qubit unitary to within error ϵ. We describe an application to batched synthesis of single-qubit unitaries: we can approximate a tensor product of m = O(log log(1/ϵ)) arbitrary single-qubit unitaries to within error ϵ with the same asymptotic T-count as is required to approximate just one single-qubit unitary.
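For readability, the same bound in display form (this restates the scaling above, nothing more):

```latex
% Optimal T-count, with unlimited ancilla qubits, for preparing an
% arbitrary n-qubit state or implementing an arbitrary diagonal
% n-qubit unitary to precision \epsilon:
T(n,\epsilon) = \Theta\!\left(\sqrt{2^{n}\log(1/\epsilon)} + \log(1/\epsilon)\right)
```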
CrossCheck: Input Validation for WAN Control Systems
Rishabh Iyer
Isaac Keslassy
Sylvia Ratnasamy
Networked Systems Design and Implementation (NSDI) (2026) (to appear)
Abstract
We present CrossCheck, a system that validates inputs to the Software-Defined Networking (SDN) controller in a Wide Area Network (WAN). By detecting incorrect inputs—often stemming from bugs in the SDN control infrastructure—CrossCheck alerts operators before they trigger network outages.
Our analysis at a large-scale WAN operator identifies invalid inputs as a leading cause of major outages, and we show how CrossCheck would have prevented those incidents. We deployed CrossCheck as a shadow validation system for four weeks in a production WAN, during which it accurately detected the single incident of invalid inputs that occurred while sustaining a 0% false positive rate under normal operation, hence imposing little additional burden on operators. In addition, we show through simulation that CrossCheck reliably detects a wide range of invalid inputs (e.g., detecting demand perturbations as small as 5% with 100% accuracy) and maintains a near-zero false positive rate for realistic levels of noisy, missing, or buggy telemetry data (e.g., sustaining zero false positives with up to 30% of corrupted telemetry data).
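The flavor of check involved can be sketched in a few lines of Python. The function below compares controller-input demands against independently measured telemetry and flags large relative deviations; names, thresholds, and data shapes are illustrative assumptions, not CrossCheck's actual interface.

```python
# Hypothetical sketch of input validation against telemetry: compare
# demands fed to the SDN controller with independently observed values
# and alert when they diverge beyond a relative threshold.

def validate_demands(input_demands, telemetry_demands, rel_threshold=0.05):
    """Flag (src, dst) pairs whose controller-input demand deviates from
    telemetry by more than rel_threshold (e.g. 5%)."""
    suspect = []
    for flow, expected in input_demands.items():
        observed = telemetry_demands.get(flow)
        if observed is None:
            continue  # missing telemetry: tolerate rather than alert
        denom = max(abs(observed), 1e-9)  # avoid division by zero
        if abs(expected - observed) / denom > rel_threshold:
            suspect.append((flow, expected, observed))
    return suspect

alerts = validate_demands(
    {("a", "b"): 100.0, ("a", "c"): 50.0},
    {("a", "b"): 100.5, ("a", "c"): 58.0},
)
# [(('a', 'c'), 50.0, 58.0)]: well beyond the 5% threshold, so an
# operator alert fires; the ("a", "b") pair is within tolerance.
```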
Abstract
Source-to-source compilers may perform inefficiently by executing transpilation passes on scripts that do not contain the specific language features a pass is designed to transform, potentially leading to redundant processing. A compiler can analyze a script to generate a per-script feature map, for example, by identifying language features in its abstract syntax tree (AST). Before executing a transpilation pass, the compiler can check this map and may bypass the pass for that script if the specific feature targeted by the pass is not present. This feature map can also be dynamically updated throughout the compilation process as other passes transform the code. This method of conditional pass execution based on content-aware analysis may reduce redundant AST traversals, which could decrease overall compilation time and computational resource consumption.
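A minimal Python sketch of the idea, using the standard-library ast module (illustrative only; the abstract describes the technique generically, not this code): build a per-script feature set from the AST, then skip any pass whose target feature is absent.

```python
# Conditional pass execution gated on a per-script feature map.
import ast

def build_feature_map(source: str) -> set[str]:
    """Record which language features appear in a script's AST."""
    features = set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.AsyncFunctionDef):
            features.add("async_function")
        elif isinstance(node, ast.JoinedStr):
            features.add("f_string")
        elif isinstance(node, ast.Match):
            features.add("pattern_matching")
    return features

class Pass:
    feature: str  # the language feature this pass transforms
    def run(self, tree: ast.AST) -> ast.AST: ...

def compile_script(source: str, passes: list[Pass]) -> ast.AST:
    tree = ast.parse(source)
    features = build_feature_map(source)
    for p in passes:
        if p.feature not in features:
            continue  # skip: nothing for this pass to transform
        tree = p.run(tree)
        # A pass may add or remove features; per the abstract, a full
        # system would update the feature map here rather than leave
        # it stale.
    return tree
```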
Abstract
This article delves into how Google Site Reliability Engineers (SREs) leverage Gemini 3 and the Gemini CLI to aggressively reduce Mean Time to Mitigation (MTTM) during real-world outages. By focusing on the SRE motto of "Eliminate Toil," the article walks through a simulated incident, demonstrating how an agentic CLI acts as a human-in-the-loop copilot across the entire incident lifecycle: from initial paging and investigation, through safe, tool-driven mitigation and root cause analysis, to automated postmortem generation and action item filing. This direct integration of Gemini's reasoning capabilities with operational data and internal tools creates a virtuous cycle where past incident learnings continuously inform and improve future solutions.
Abstract
Audio Description (AD) provides essential access to visual media for blind and low vision (BLV) audiences. Yet current AD production tools remain largely inaccessible to BLV video creators, who possess valuable expertise but face barriers due to visually driven interfaces. We present ADCanvas, a multimodal authoring system that supports non-visual control over audio description (AD) creation. ADCanvas combines conversational interaction with keyboard-based playback control and a plain-text, screen-reader-accessible editor to support end-to-end AD authoring and visual question answering (VQA). Combining screen-reader-friendly controls with a multimodal LLM agent, ADCanvas supports live VQA, script generation, and AD modification. Through a user study with 12 BLV video creators, we find that users adopt the conversational agent as an informational aide and drafting assistant, while maintaining agency through verification and editing. For example, participants saw themselves as curators who received information from the model and filtered it down for their audience. Our findings offer design implications for accessible media tools, including precise editing controls, accessibility support for creative ideation, and configurable rules for human-AI collaboration.
SNPeek: Side-Channel Analysis for Privacy Applications on Confidential VMs
Ruiyi Zhang
Albert Cheu
Adria Gascon
Michael Schwarz
Octavian Suciu
Network and Distributed System Security (NDSS) (2026)
Abstract
Confidential virtual machines (CVMs) based on trusted execution environments (TEEs) enable new privacy-preserving solutions. But CVMs are not a privacy panacea, as they are vulnerable to side-channel attacks that may compromise the confidentiality of workloads.
In this work, we develop the FARFETCH’D framework to help developers evaluate side-channel-assisted privacy attacks that are broadly applicable to CVMs. The privacy loss caused by these attacks depends heavily on the execution environment and the workload, which vary widely: What attack primitives are available? How does the particular privacy workload behave? This makes manually investigating and efficiently mitigating software-based side channels a cumbersome, often infeasible task. FARFETCH’D addresses this challenge by providing a set of configurable attack primitives that execute on real CVM hardware, together with automated ML-based analysis pipelines. We evaluate the effectiveness of FARFETCH’D on privacy-preserving workloads. Our results show that our approach is effective at pinpointing the vulnerability of privacy applications to side channels and helps evaluate mitigations based on oblivious memory and differential privacy.
ARM MTE Performance in Practice
Taehyun Noh
Yingchen Wang
Tal Garfinkel
Mahesh Madhav
Mattan Erez
Shravan Narayan
USENIX Security (2026)
A probabilistic framework for learning non-intrusive corrections to long-time climate simulations from short-time training data
Benedikt Barthel
Rob Carver
Fei Sha
Themistoklis Sapsis
Journal of Advances in Modeling Earth Systems (2026)
Abstract
Despite advances in high performance computing, accurate numerical simulations of global atmospheric dynamics remain a challenge. The resolution required to fully resolve the vast range of scales, as well as the strong coupling with often poorly understood physics, renders such simulations computationally infeasible over time horizons relevant for long-term climate risk assessment. While data-driven parameterizations have shown some promise of alleviating these obstacles, the scarcity of high-quality training data and their lack of long-term stability typically hinder their ability to capture the risk of rare extreme events. In this work we present a general strategy for training variational (probabilistic) neural network models to non-intrusively correct under-resolved long-time simulations of turbulent climate systems. The approach is based on the paradigm introduced by Barthel Sorensen et al. (2024, https://doi.org/10.1029/2023ms004122), which involves training a post-processing correction operator on under-resolved simulations nudged toward a high-fidelity reference. Our variational framework enables us to learn the dynamics of the underlying system from very little training data, even when the statistics of that training data are far from converged, and thus drastically improves the extrapolation capabilities of the previous deterministic state of the art. We investigate and compare three recently introduced variational network architectures and illustrate the benefits of our approach on an anisotropic quasi-geostrophic flow. For this prototype model our approach accurately captures not only global statistics but also the anisotropic regional variation and the statistics of multiple extreme event metrics, demonstrating significant improvement over previously introduced deterministic architectures.
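A minimal sketch of the kind of probabilistic correction operator described, under our own assumptions about shapes and architecture (this is not the paper's code): a small network maps snapshots of the under-resolved simulation to a Gaussian over the high-fidelity reference state and is trained by negative log-likelihood.

```python
# Probabilistic post-processing correction: predict a mean and variance
# for the reference state given a coarse (nudged) simulation snapshot.
import torch
import torch.nn as nn

class ProbabilisticCorrector(nn.Module):
    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.GELU(),
        )
        self.mean = nn.Linear(hidden, state_dim)     # corrected state
        self.log_var = nn.Linear(hidden, state_dim)  # predictive variance

    def forward(self, coarse_state):
        h = self.trunk(coarse_state)
        return self.mean(h), self.log_var(h)

def nll_loss(mean, log_var, reference):
    # Gaussian negative log-likelihood, averaged over batch and dimensions.
    return 0.5 * (log_var + (reference - mean) ** 2 / log_var.exp()).mean()

model = ProbabilisticCorrector(state_dim=64)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
coarse = torch.randn(32, 64)     # stand-in for nudged coarse snapshots
reference = torch.randn(32, 64)  # stand-in for high-fidelity states
mean, log_var = model(coarse)
loss = nll_loss(mean, log_var, reference)
opt.zero_grad()
loss.backward()
opt.step()
```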
Who Controls the Curriculum for AI? The Limits of Participatory Design for Educational AI
Michael Madaio
Learning Under Algorithmic Conditions, University of Minnesota Press (2026)
Abstract
Participatory design is a long-standing effort to shift control over technology design from technologists to users and communities impacted by technologies. For educational AI, this means involving students, families, teachers, and other stakeholders in shaping the design of AI systems. While promising, in this article, I situate the recent calls for participatory design of educational AI systems within a different historical tradition—that of contests over local control of educational curricula. I argue that approaches that attempt to steer the design and development of educational AI through participatory methods may inadvertently reproduce the history of political contestation of educational curricula, in ways that may privilege the most powerful communities, rather than those inequitably impacted. What might it look like to treat participatory AI design as a site for political contestation? How might these approaches avoid reproducing the same majoritarian tendencies that led to educational inequities in the first place?
ALF: Advertiser Large Foundation Model for Multi-Modal Advertiser Understanding
Sunny Rajagopalan
Alireza Golestaneh
Shubhra Chandra
Min Zhou
Jonathan Vronsky
Songbai Yan
2026
Abstract
We present ALF (Advertiser Large Foundation model), a multi-modal transformer architecture for understanding advertiser behavior and intent across text, image, video and structured data modalities. Through contrastive learning and multi-task optimization, ALF creates unified advertiser representations that capture both content and behavioral patterns. Our model achieves state-of-the-art performance on critical tasks including fraud detection, policy violation identification, and advertiser similarity matching. In production deployment, ALF reduces false positives by 90% while maintaining 99.8% precision on abuse detection tasks. The architecture's effectiveness stems from its novel combination of multi-modal transformations, intersample attention mechanism, spectrally normalized projections, and calibrated probabilistic outputs.
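As a generic illustration of the contrastive component (a standard symmetric InfoNCE-style construction, not necessarily ALF's published objective), paired embeddings from two modalities can be aligned as follows:

```python
# Symmetric InfoNCE contrastive loss over paired embeddings from two
# modalities (e.g. an advertiser's text and image towers). Matched
# pairs sit on the diagonal of the similarity matrix.
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb, image_emb, temperature: float = 0.07):
    text_emb = F.normalize(text_emb, dim=-1)
    image_emb = F.normalize(image_emb, dim=-1)
    logits = text_emb @ image_emb.t() / temperature  # pairwise similarities
    targets = torch.arange(len(logits))              # diagonal = matches
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```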
Abstract
The advent of 3D Gaussian Splatting has revolutionized graphics rendering by offering high visual quality and fast rendering speed. However, training large-scale scenes at high quality remains challenging due to the substantial memory demands of storing Gaussians and optimizer states. To address these limitations, we propose GS-Offload, a fast and memory-efficient training system for 3D Gaussian Splatting. GS-Offload stores Gaussians and optimizer states in host memory and selectively transfers only the necessary data to GPU memory on demand, significantly reducing GPU memory usage. With carefully designed software pipelining and CPU-side optimizer acceleration, GS-Offload achieves training speed near that of GPU-only setups while significantly lowering GPU memory demands.
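The offloading pattern the abstract describes can be sketched with standard PyTorch primitives; everything below (sizes, names, chunking policy) is an illustrative assumption rather than GS-Offload's implementation:

```python
# Keep the full parameter store in pinned host memory; copy only the
# chunk needed for the current step to the GPU on a side stream so the
# transfer can overlap with compute.
import torch

N, D = 100_000, 59  # e.g. many Gaussians, attributes per Gaussian
host_params = torch.zeros(N, D, pin_memory=True)  # lives in host RAM
copy_stream = torch.cuda.Stream()

def fetch_chunk(indices: torch.Tensor) -> torch.Tensor:
    """Stage one chunk of parameters onto the GPU."""
    with torch.cuda.stream(copy_stream):
        chunk = host_params[indices].to("cuda", non_blocking=True)
    torch.cuda.current_stream().wait_stream(copy_stream)  # pipeline barrier
    return chunk

visible = torch.randint(0, N, (10_000,))  # stand-in for visible Gaussians
gpu_chunk = fetch_chunk(visible)          # GPU holds only this slice
# Note: the gather above produces an unpinned temporary; a production
# system would gather into a reusable pinned staging buffer so the
# host-to-device copy is truly asynchronous.
```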