Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
1 - 15 of 10795 publications
Preview abstract
For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform operations many times in parallel for a cost that is comparable to a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit when we consider a cost model that focuses purely on the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a reduction in cost of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data loading step.
View details
Preview abstract
AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative—but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals novel insights into the critical strengths and limitations of current agentic approaches, offering actionable insights into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
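To make the evaluation setup concrete, a project-level migration benchmark of this kind can be pictured as the loop sketched below. This is an illustrative sketch only; the helper names (run_agent_migration, build_and_test) are hypothetical placeholders, not part of FreshBrew.

```python
# Hypothetical sketch of a FreshBrew-style evaluation loop; the callables
# run_agent_migration and build_and_test are illustrative placeholders,
# not the benchmark's actual API.
from dataclasses import dataclass

@dataclass
class MigrationResult:
    repo: str
    migrated: bool   # did the agent produce a JDK 17 build configuration?
    tests_pass: bool # does the migrated project still build and pass its tests?

def evaluate(repos, run_agent_migration, build_and_test):
    """Run an agent on each repository and report the project-level success rate."""
    results = []
    for repo in repos:
        migrated = run_agent_migration(repo, target_jdk=17)
        tests_pass = bool(migrated) and build_and_test(repo, jdk=17)
        results.append(MigrationResult(repo, migrated, tests_pass))
    success_rate = sum(r.tests_pass for r in results) / len(results)
    return success_rate, results
```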
View details
A Call to Action: Advancing the Conversation Around Neurodivergent Education-Employment Transitions
Dannie Lynn Fountain
Vicki Baker
Kevin Danley
Closing the Gap (2025)
Preview abstract
Neurodiversity is still largely stigmatized and excluded from DEIB frameworks and related organizational initiatives, despite increasing recognition of the benefits of neuroinclusion within the education and corporate spheres. We seek to address this knowledge-to-practice gap through the creation of the Neurodiversity Engagement Framework. By highlighting the supports needed by neurodivergent individuals, and by those who support them, the framework helps neurodivergent individuals navigate within and across higher education and industry contexts. Informed by an interdisciplinary review of literature from higher education, industry, and corporate leadership contexts, the Neurodiversity Engagement Framework brings to light prevailing challenges within practices and policies, serving as a guide for the creation of a more supportive foundation for neurodivergent individuals to thrive. In this manuscript, readers are encouraged to consider the myriad impacts that neurodiversity has on higher education and industry experiences and the ways that organizations can be more proactive in their support of this growing population. To conclude, we offer a roadmap for future research and practice to further elucidate ways academic and corporate leaders and policymakers can effectively support neurodivergent individuals.
View details
PolyPath: Adapting a Large Multimodal Model for Multislide Pathology Report Generation
Lin Yang
Shravya Shetty
Tiam Jaroensri
Faruk Ahmed
Daniel Golden
Modern Pathology (2025)
Preview abstract
The interpretation of histopathology cases underlies many important diagnostic and treatment decisions in medicine. Notably, this process typically requires pathologists to integrate and summarize findings across multiple slides per case. Existing vision-language capabilities in computational pathology have so far been largely limited to small regions of interest, larger regions at low magnification, or single whole-slide images (WSIs). This limits interpretation of findings that span multiple high-magnification regions across multiple WSIs. By making use of Gemini 1.5 Flash, a large multimodal model with a 1-million token context window, we demonstrate the ability to generate bottom-line diagnoses from up to 40,000 image patches of size 768 × 768 pixels from multiple WSIs at 10× magnification. This is the equivalent of up to 11 hours of video at 1 fps. Expert pathologist evaluations demonstrate that the generated report text is clinically accurate and equivalent to or preferred over the original reporting for 68% (95% CI, 60%-76%) of multi-slide examples with up to 5 slides. Although performance decreased for examples with ≥6 slides, this study demonstrates the promise of leveraging the long-context capabilities of modern large multimodal models for the uniquely challenging task of medical report generation where each case can contain thousands of image patches.
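The video-equivalence figure quoted above follows from treating each image patch as one frame at 1 fps:
\[
40{,}000~\text{patches} \times 1~\text{s per frame} = 40{,}000~\text{s} \approx 11.1~\text{hours}.
\]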
View details
Diamond Circuits for Surface Codes
arXiv (2025)
Preview abstract
This short paper describes a new circuit to measure surface codes, which allows them to be implemented on the heavy-square lattice. The circuits perform far worse than the usual surface code, but are more efficient in terms of the distance they can achieve for a given number of qubits and couplers.
Paper Abstract:
We present and benchmark an interesting subfamily of circuits within the LUCI framework, which we refer to as diamond circuits, that implement a surface code on a Lieb or “Heavy-Square” lattice. This makes them more qubit- and measurement-efficient than previous constructions. These circuits are built around a mid-cycle state that resembles a Bravyi-Bacon-Shor surface code on the data and measurement qubits. These circuits preserve the spacelike distance of the code, but suffer a penalty in timelike distance. This could be useful in regimes where quantum computers are limited by the number of control lines or frequency collisions.
View details
Faster electronic structure quantum simulation by spectrum amplification
Guang Hao Low
Robbie King
Alec White
Rolando Somma
Dominic Berry
Qiushi Han
Albert Eugene DePrince III
arXiv (2025) (to appear)
Preview abstract
We discover that many interesting electronic structure Hamiltonians have a compact and close-to-frustration-free sum-of-squares representation with a small energy gap. We show that this gap enables spectrum amplification in estimating ground state energies, which improves the cost scaling of previous approaches from the block-encoding normalization factor $\lambda$ to just $\sqrt{\lambda E_{\text{gap}}}$. For any constant-degree polynomial basis of fermionic operators, a sum-of-squares representation with optimal gap can be efficiently computed using semi-definite programming. Although the gap can be made arbitrarily small with an exponential-size basis, we find that the degree-$2$ spin-free basis in combination with approximating two-body interactions by a new Double-Factorized (DF) generalization of Tensor-Hyper-Contraction (THC) gives an excellent balance of gap, $\lambda$, and block-encoding costs. For classically-hard FeMoco complexes -- candidate applications for first useful quantum advantage -- this combination improves the Toffoli gate cost of the first estimates with DF [Phys. Rev. Research 3, 033055] or THC [PRX Quantum 2, 030305] by over two orders of magnitude.
https://drive.google.com/file/d/1hw4zFv_X0GeMpE4et6SS9gAUM9My98iJ/view?usp=sharing
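Taking the stated scalings at face value (a back-of-the-envelope reading, not a result beyond what the abstract claims), moving from a cost proportional to $\lambda$ to one proportional to $\sqrt{\lambda E_{\text{gap}}}$ gives an improvement factor of
\[
\frac{\lambda}{\sqrt{\lambda E_{\text{gap}}}} = \sqrt{\frac{\lambda}{E_{\text{gap}}}},
\]
which grows as the sum-of-squares gap $E_{\text{gap}}$ becomes small relative to $\lambda$.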
View details
Binamix -- A Python Library for Generating Binaural Audio Datasets
Dan Barry
Davoud Shariat Panah
Alessandro Ragano
Andrew Hines
AES 158th Audio Engineering Society Convention (2025)
Preview abstract
The increasing demand for spatial audio in applications such as virtual reality, immersive media, and spatial audio research necessitates robust solutions to generate binaural audio datasets for use in testing and validation. Binamix is an open-source Python library designed to facilitate programmatic binaural mixing using the extensive SADIE II Database, which provides Head Related Impulse Response (HRIR) and Binaural Room Impulse Response (BRIR) data for 20 subjects. The Binamix library provides a flexible and repeatable framework for creating large-scale spatial audio datasets, making it an invaluable resource for codec evaluation, audio quality metric development, and machine learning model training. A range of pre-built example scripts, utility functions, and visualization plots further streamline the process of custom pipeline creation. This paper presents an overview of the library’s capabilities, including binaural rendering, impulse response interpolation, and multi-track mixing for various speaker layouts. The tools utilize a modified Delaunay triangulation technique to achieve accurate HRIR/BRIR interpolation where desired angles are not present in the data. By supporting a wide range of parameters such as azimuth, elevation, subject Impulse Responses (IRs), speaker layouts, mixing controls, and more, the library enables researchers to create large binaural datasets for any downstream purpose. Binamix empowers researchers and developers to advance spatial audio applications with reproducible methodologies by offering an open-source solution for binaural rendering and dataset generation. We release the library under the Apache 2.0 License at https://github.com/QxLabIreland/Binamix/
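The interpolation step mentioned above can be illustrated with a generic barycentric scheme over a Delaunay triangulation of the measured (azimuth, elevation) grid. The sketch below uses scipy and is a conceptual illustration under simplifying assumptions (it ignores azimuth wrap-around, for example); it is not the Binamix API.

```python
# Conceptual sketch of HRIR interpolation via Delaunay triangulation and
# barycentric weights (not the Binamix API; names are illustrative).
import numpy as np
from scipy.spatial import Delaunay

def interpolate_hrir(angles, hrirs, query):
    """angles: (N, 2) array of measured (azimuth, elevation) in degrees.
    hrirs:  (N, 2, L) array of left/right impulse responses per measurement.
    query:  (2,) target (azimuth, elevation) not present in the grid."""
    tri = Delaunay(angles)
    simplex = tri.find_simplex(query)
    if simplex < 0:
        raise ValueError("query point lies outside the measured grid")
    verts = tri.simplices[simplex]
    # Barycentric coordinates of the query point within the enclosing triangle.
    T = tri.transform[simplex]
    b = T[:2].dot(query - T[2])
    weights = np.append(b, 1.0 - b.sum())
    # Weighted sum of the three surrounding HRIRs.
    return np.tensordot(weights, hrirs[verts], axes=1)
```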
View details
Deterministic Parallel High-Quality Hypergraph Partitioning
Nikolai Maas
Lars Gottesbüren
Robert Krause
2025
Preview abstract
We present a deterministic parallel multilevel algorithm for balanced hypergraph partitioning that matches the state of the art for non-deterministic algorithms. Deterministic parallel algorithms produce the same result in each invocation, which is crucial for reproducibility. Moreover, determinism is highly desirable in application areas such as VLSI design. While there has been tremendous progress in parallel hypergraph partitioning algorithms recently, deterministic counterparts for high-quality local search techniques are missing. Consequently, solution quality is severely lacking in comparison to the non-deterministic algorithms.
In this work we close this gap. First, we present a generalization of the recently proposed Jet refinement algorithm. While Jet is naturally amenable to determinism, significant changes are necessary to achieve competitive performance on hypergraphs. We also propose an improved deterministic rebalancing algorithm for Jet. Moreover, we consider the powerful but slower flow-based refinement and introduce a scheme that enables deterministic results while building upon a non-deterministic maximum flow algorithm.
As demonstrated in our thorough experimental evaluation, this results in the first deterministic parallel partitioner that is competitive to the highest quality solvers. With Jet refinement, we match or exceed the quality of Mt-KaHyPar's non-deterministic default configuration while being only 15% slower on average. We observe self-relative speedups of up to 55 on 64 cores with a 22.5x average speedup.
Our deterministic flow-based refinement exceeds the quality of the non-deterministic variant by roughly 1% on average but requires 31% more running time.
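One way to picture the determinism requirement is sketched below: candidate moves may be computed in parallel in any order, but they are committed in a canonical order so that every invocation yields the same partition. This is a generic illustration, not the Jet or Mt-KaHyPar implementation.

```python
# Generic sketch of deterministic move application in parallel local search
# (illustrative only; not the Jet/Mt-KaHyPar implementation).
from concurrent.futures import ThreadPoolExecutor

def deterministic_refinement_pass(nodes, compute_move):
    """compute_move(v) -> (gain, target_block), evaluated against a fixed
    snapshot of the current partition so results do not depend on thread timing."""
    with ThreadPoolExecutor() as pool:
        moves = list(pool.map(lambda v: (v, *compute_move(v)), nodes))
    # Commit moves in a canonical order (gain descending, node id as tie-break),
    # which makes the outcome independent of the parallel execution order.
    moves.sort(key=lambda m: (-m[1], m[0]))
    return [(v, block) for v, gain, block in moves if gain > 0]
```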
View details
Day-of-the-week Awareness in Time of Day Breakpoints for Traffic Light Plans
Ori Rottenstreich
Eliav Buchnik
Shai Ferster
Tom Kalvari
Ron Tsibulsky
Danny Veikherman
Jack Haddad
2025
Preview abstract
Time-of-day breakpoints (TODs) refer to the times over the day in which the plan of a traffic light is changed. Traditionally, TODs are selected jointly for all weekdays (Monday-Friday), typically with additional TODs dedicated to weekends. In this paper, we present an alternative approach motivated by traffic characteristics that can differ among the weekdays Monday-Friday and consider TODs which are day-of-the-week aware. The traffic-aware approach studies similarities among days and computes TODs that can be shared among days with similar characteristics but can also have other forms for weekdays with unique characteristics. Based on traffic properties derived from anonymized trajectories, we apply the new methodology to compute time-of-day breakpoints that are day-of-the-week aware in the city of Rio de Janeiro, Brazil and estimate the impact of the new methodology.
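The day-similarity idea can be sketched as grouping weekdays whose hourly traffic profiles are close and letting each group share TOD plans. The distance measure and threshold below are illustrative assumptions, not the paper's method.

```python
# Illustrative sketch of grouping weekdays by traffic-profile similarity
# (the distance metric and threshold are assumptions, not the paper's method).
import numpy as np

def group_similar_days(profiles, threshold=0.1):
    """profiles: dict mapping day name -> 24-element array of hourly traffic volumes.
    Returns groups of days that may share time-of-day breakpoints."""
    groups = []
    for day in profiles:
        p = np.asarray(profiles[day], dtype=float)
        p = p / p.sum()  # compare profile shapes, not absolute volumes
        for group in groups:
            q = np.asarray(profiles[group[0]], dtype=float)
            q = q / q.sum()
            if np.abs(p - q).mean() < threshold:
                group.append(day)
                break
        else:
            groups.append([day])
    return groups

# Example outcome: Monday-Thursday might cluster together while Friday stands
# alone, so Friday would receive its own day-of-the-week-aware breakpoints.
```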
View details
Spherical dimension
Bogdan Chornomaz
Shay Moran
Tom Waknine
2025
Preview abstract
We introduce and study the spherical dimension, a natural topological relaxation of the VC dimension that unifies several results in learning theory where topology plays a key role in the proofs. The spherical dimension is defined by extending the set of realizable datasets (used to define the VC dimension) to the continuous space of realizable distributions. In this space, a shattered set of size d (in the VC sense) is completed into a continuous object, specifically a d-dimensional sphere of realizable distributions. The spherical dimension is then defined as the dimension of the largest sphere in this space. Thus, the spherical dimension is at least the VC dimension.
The spherical dimension serves as a common foundation for leveraging the Borsuk-Ulam theorem and related topological tools. We demonstrate the utility of the spherical dimension in diverse applications, including disambiguations of partial concept classes, reductions from classification to stochastic convex optimization, stability and replicability, and sample compression schemes. Perhaps surprisingly, we show that the open question posed by Alon, Hanneke, Holzman, and Moran (FOCS 2021) of whether there exist non-trivial disambiguations for halfspaces with margin is equivalent to the basic open question of whether the VC and spherical dimensions are finite together.
View details
Preview abstract
Cloud platforms have been virtualizing storage devices like flash-based solid-state drives (SSDs) to make effective use of storage resources. They enable either software-isolated or hardware-isolated instances to facilitate storage sharing between multi-tenant applications. However, they have long had to combat the fundamental tussle between performance isolation and resource utilization: they suffer either long tail latency caused by weak isolation or low storage utilization caused by strong isolation.
In this paper, we present FleetIO, a learning-based storage virtualization framework that employs reinforcement learning (RL) for managing virtualized SSDs. FleetIO leverages the unique features of RL to handle the dynamic changes of application workloads and storage states, and integrates storage scheduling into the RL decision-making process. It achieves both performance isolation and improved storage utilization by enabling dynamic fine-grained storage harvesting across co-located application instances, while minimizing its negative impact on their service-level objectives (SLOs). FleetIO clusters workloads into different types (e.g., latency-sensitive and bandwidth-intensive) based on I/O traces collected at runtime, and fine-tunes the RL reward function for each type of workload. We implement FleetIO on a real programmable SSD board and evaluate it with diverse cloud applications. We show that FleetIO improves the overall storage utilization of the shared SSD by up to 1.4×, and decreases the tail latency of I/O requests by 1.5× on average, compared to state-of-the-art storage sharing approaches.
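A reward of the kind described, one that credits harvested bandwidth while penalizing SLO violations with weights tuned per workload cluster, might look like the following sketch. The weights and signal names are assumptions for illustration, not FleetIO's actual reward.

```python
# Illustrative sketch of a per-workload-type RL reward for storage harvesting
# (weights and signal names are assumptions, not FleetIO's actual reward).

REWARD_WEIGHTS = {
    # (utilization_bonus, latency_penalty) per workload cluster
    "latency_sensitive":   (0.2, 5.0),
    "bandwidth_intensive": (1.0, 0.5),
}

def reward(workload_type, harvested_bw_frac, p99_latency_us, slo_latency_us):
    """Reward harvested bandwidth; penalize tail-latency SLO violations."""
    util_w, lat_w = REWARD_WEIGHTS[workload_type]
    slo_violation = max(0.0, p99_latency_us / slo_latency_us - 1.0)
    return util_w * harvested_bw_frac - lat_w * slo_violation
```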
View details
Preview abstract
Several systems rely on traceroute to track a large number of Internet paths as they change over time. Monitoring systems perform this task by remapping paths periodically or whenever a change is detected. This paper shows that such complete remapping is inefficient, because most path changes are localized to a few hops of a path. We develop RemapRoute, a tool to remap a path locally given the previously known path and a change point. RemapRoute sends targeted probes to locate and remap the often few hops that have changed. Our evaluation with trace-driven simulations and in a real deployment shows that local remapping reduces the average number of probes issued during remapping by 63% and 79%, respectively, when compared with complete remapping. At the same time, our results show that local remapping has little impact on the accuracy of inferred paths.
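The local-remapping idea can be sketched as probing outward from the reported change point and splicing the newly measured hops into the previously known path. The sketch below is conceptual, and probe_hop is a hypothetical placeholder for a targeted TTL-limited probe; it is not the RemapRoute implementation.

```python
# Conceptual sketch of local path remapping (not the RemapRoute implementation;
# probe_hop is a hypothetical placeholder for a targeted TTL-limited probe).

def remap_locally(old_path, change_point, probe_hop, max_extra=3):
    """old_path: list of router addresses previously measured toward a destination.
    change_point: index of the first hop that no longer matches.
    Re-probes hops from the change point onward, keeping the untouched prefix,
    and stops once the measured path rejoins the previously known path."""
    new_middle = []
    hop = change_point
    while hop < len(old_path) + max_extra:
        router = probe_hop(ttl=hop + 1)
        new_middle.append(router)
        if router in old_path[change_point:]:
            rejoin = old_path.index(router, change_point)
            return old_path[:change_point] + new_middle + old_path[rejoin + 1:]
        hop += 1
    # No rejoin found within the probing budget: fall back to the probed hops.
    return old_path[:change_point] + new_middle
```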
View details
Simulation-Based Inference: A Practical Guide
Michael Deistler
Jan Boelts
Peter Steinbach
Guy Moss
Thomas Moreau
Manuel Gloeckler
Pedro L. C. Rodriguez
Julia Linhart
Janne K. Lappalainen
Benjamin Kurt Miller
Pedro J. Goncalves
Cornelius Schröder
Jakob H. Macke
arXiv (2025)
Preview abstract
A central challenge in many areas of science and engineering is to identify model parameters that are consistent with empirical data and prior knowledge. Bayesian inference offers a principled framework for this task, but can be computationally prohibitive when models are defined by stochastic simulators. Simulation-Based Inference (SBI) provides a suite of methods to overcome this limitation and has enabled scientific discoveries in fields such as particle physics, astrophysics and neuroscience. The core idea of SBI is to train neural networks on data generated by a simulator, without requiring access to likelihood evaluations. Once trained, the neural network can rapidly perform inference on empirical observations without requiring additional optimization or simulations. In this tutorial, we provide a practical guide for practitioners aiming to apply SBI methods. We outline a structured SBI workflow and offer practical guidelines and diagnostic tools for every stage of the process--from setting up the simulator and prior, choosing the SBI method and neural network architecture, training the inference model, to validating results and interpreting the inferred parameters. We illustrate these steps through examples from astrophysics, psychophysics, and neuroscience. This tutorial empowers researchers to apply state-of-the-art SBI methods, facilitating efficient parameter inference for scientific discovery.
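One widely used implementation of this workflow is the open-source sbi package. The sketch below assumes its SNPE interface (append_simulations / train / build_posterior); exact class names may vary across package versions.

```python
# Minimal SBI sketch using the open-source `sbi` package (interface as assumed
# from recent releases; class names may differ across versions).
import torch
from sbi.inference import SNPE
from sbi.utils import BoxUniform

# 1. Define a prior and a stochastic simulator (a toy Gaussian noise model here).
prior = BoxUniform(low=-2 * torch.ones(2), high=2 * torch.ones(2))

def simulator(theta):
    return theta + 0.1 * torch.randn_like(theta)

# 2. Generate training data from the simulator (no likelihood evaluations needed).
theta = prior.sample((2000,))
x = simulator(theta)

# 3. Train a neural posterior estimator on (theta, x) pairs.
inference = SNPE(prior=prior)
density_estimator = inference.append_simulations(theta, x).train()
posterior = inference.build_posterior(density_estimator)

# 4. Amortized inference: condition on an empirical observation and sample.
x_observed = torch.tensor([0.5, -0.3])
samples = posterior.sample((1000,), x=x_observed)
```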
View details
Reconfigurable Stream Network Architecture
Chengyue Wang
Jason Cong
James Hoe
International Symposium on Computer Architecture (ISCA) (2025)
Preview abstract
As AI systems grow increasingly specialized and complex, managing hardware heterogeneity becomes a pressing challenge. How can we efficiently coordinate and synchronize heterogeneous hardware resources to achieve high utilization? How can we minimize the friction of transitioning between diverse computation phases, reducing costly stalls from initialization, pipeline setup, or drain? Our insight is that a network abstraction at the ISA level naturally unifies heterogeneous resource orchestration and phase transitions. This paper presents a Reconfigurable Stream Network Architecture (RSN), a novel ISA abstraction designed for the DNN domain. RSN models the datapath as a circuit-switched network with stateful functional units as nodes and data streaming on the edges. Programming a computation corresponds to triggering a path. Software is explicitly exposed to the compute and communication latency of each functional unit, enabling precise control over data movement for optimizations such as compute-communication overlap and layer fusion. As nodes in a network naturally differ, the RSN abstraction can efficiently virtualize heterogeneous hardware resources by separating control from the data plane, enabling low instruction-level intervention. We build a proof-of-concept design RSN-XNN on VCK190, a heterogeneous platform with FPGA fabric and AI engines. Compared to the SOTA solution on this platform, it reduces latency by 6.1x and improves throughput by 2.4x-3.2x. Compared to the T4 GPU with the same FP32 performance, it matches latency with only 18% of the memory bandwidth. Compared to the A100 GPU at the same 7nm process node, it achieves 2.1x higher energy efficiency in FP32.
View details
Preview abstract
Recently, Chevignard et al. proposed a way to factor $n$-bit RSA integers using only $(0.5 + \epsilon)n$ logical qubits. In this paper, I streamline Chevignard's algorithm and estimate its physical cost, accounting for the overhead of error correction. I reduce its Toffoli count by more than 100x, and show that this implies a 2048-bit RSA integer could be factored in less than a week using less than one million noisy qubits (compared to 20 million in Gidney+Ekerå 2019). I make the same assumptions as in Gidney+Ekerå 2019: a square grid of qubits with nearest-neighbor connections, a gate error rate of $0.1\%$, a surface code cycle time of 1 microsecond, and a control system reaction time of $10$ microseconds.
View details