Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.

Abstract
AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative, but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals the critical strengths and limitations of current agentic approaches, offering actionable insights into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
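To make the evaluation concrete, here is a minimal sketch of how a harness like FreshBrew might judge one migration attempt: rebuild the migrated project under the target JDK and require the build and tests to pass. The Maven invocation and JDK path are illustrative assumptions, not details taken from the paper.

    import subprocess

    def migration_succeeded(repo_dir, jdk_home="/usr/lib/jvm/java-17"):
        # Hypothetical success check: the migrated project must compile and
        # its test suite must pass under the target JDK (here, JDK 17).
        env = {"JAVA_HOME": jdk_home, "PATH": f"{jdk_home}/bin:/usr/bin"}
        result = subprocess.run(
            ["mvn", "-q", "verify"],  # assumes a Maven-built repository
            cwd=repo_dir, env=env, capture_output=True, text=True,
        )
        return result.returncode == 0

A per-repository pass/fail signal of this kind is what makes a headline number like "56.5% of projects migrated" computable.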
Abstract
Despite the advent of legislation such as the General Data Protection Regulation (GDPR) with its associated "Right to be Forgotten" (RTBF), few, if any, studies have measured user reactions to realistic edge cases with public-interest content. Surveying both users covered by and excluded from RTBF, this vignette-based survey experiment sought to better understand how users think of delisting content from search engine results and what factors influence user perceptions. While leaving information accessible in search engine results generally leads to warmer feelings towards those search engines than delisting it, we find that users do prefer different outcomes depending on contextual elements specific to given cases. We also find that whether a country has active RTBF legislation does seem to be associated with both knowledge and attitudes about RTBF, but is unlikely to explain all of it. These results indicate a complex context around removing public-interest content from search engines' results; it is essential that experts sensitive to local context perform the review in order to ensure that removal requests are handled in a way that meets users' expectations.
Improved FPT Approximation Scheme and Approximate Kernel for Biclique-Free Max k-Weight SAT: Greedy Strikes Back
Theoretical Computer Science, 1028 (2025)
Abstract
In the Max k-Weight SAT (aka Max SAT with Cardinality Constraint) problem, we are given a CNF formula with n variables and m clauses together with a positive integer k. The goal is to find an assignment that sets at most k variables to one and satisfies as many clauses as possible. Recently, Jain et al. (SODA 2023) gave an FPT approximation scheme (FPT-AS) with running time 2^O((dk/ε)^d) * (n + m)^O(1) for Max k-Weight SAT when the incidence graph is K_{d,d}-free. They asked whether a polynomial-size approximate kernel exists. In this work, we answer this question positively by giving a (1 − ε)-approximate kernel with (dk/ε)^O(d) variables. This also implies an improved FPT-AS with running time (dk/ε)^O(dk) * (n+m)^O(1) for the problem. Our approximate kernel is based mainly on a couple of greedy strategies together with a sunflower lemma-style reduction rule.
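As a concrete illustration of the greedy flavor of such strategies, consider the monotone special case where every clause contains only positive literals; the sketch below is a toy example of ours, not the authors' reduction rule.

    def greedy_k_weight_sat(clauses, k):
        # clauses: list of sets of variables (positive literals only).
        # Greedily set to one, k times, the variable that satisfies the
        # most not-yet-satisfied clauses.
        chosen = set()
        unsatisfied = [set(c) for c in clauses]
        for _ in range(k):
            counts = {}
            for clause in unsatisfied:
                for v in clause:
                    counts[v] = counts.get(v, 0) + 1
            if not counts:
                break
            best = max(counts, key=counts.get)
            chosen.add(best)
            unsatisfied = [c for c in unsatisfied if best not in c]
        return chosen

In this monotone setting the problem is an instance of maximum coverage, for which the greedy choice already gives a (1 − 1/e)-approximation.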
Deterministic Parallel High-Quality Hypergraph Partitioning
Nikolai Maas
Lars Gottesbüren
Robert Krause
2025
Abstract
We present a deterministic parallel multilevel algorithm for balanced hypergraph partitioning that matches the state of the art for non-deterministic algorithms. Deterministic parallel algorithms produce the same result in each invocation, which is crucial for reproducibility. Moreover, determinism is highly desirable in application areas such as VLSI design. While there has been tremendous progress in parallel hypergraph partitioning algorithms recently, deterministic counterparts for high-quality local search techniques are missing. Consequently, solution quality is severely lacking in comparison to the non-deterministic algorithms.
In this work we close this gap. First, we present a generalization of the recently proposed Jet refinement algorithm. While Jet is naturally amenable to determinism, significant changes are necessary to achieve competitive performance on hypergraphs. We also propose an improved deterministic rebalancing algorithm for Jet. Moreover, we consider the powerful but slower flow-based refinement and introduce a scheme that enables deterministic results while building upon a non-deterministic maximum flow algorithm.
As demonstrated in our thorough experimental evaluation, this results in the first deterministic parallel partitioner that is competitive with the highest-quality solvers. With Jet refinement, we match or exceed the quality of Mt-KaHyPar's non-deterministic default configuration while being only 15% slower on average. We observe self-relative speedups of up to 55x on 64 cores, with a 22.5x average speedup.
Our deterministic flow-based refinement exceeds the quality of the non-deterministic variant by roughly 1% on average but requires 31% more running time.
Abstract
Virtual hand representation in Head-Mounted Displays (HMDs) offers immersive and intuitive interactions in Virtual Reality (VR). However, current hand tracking algorithms are prone to errors, which can disrupt the user experience and hinder task performance. This paper presents a novel method for providing users with visual feedback when the quality of hand tracking decreases. Our approach employs a notification modal that warns users of potential failures. We identified three common hand tracking failure scenarios and evaluated the effectiveness of our method in two distinct VR tasks: object manipulation and complex assembly tasks. Results show that our early warning system reduces task completion time, lowers hand-tracking failures by up to 83%, decreases errors, improves system usability, and reduces cognitive load. This work contributes to the development of more robust and user-friendly VR HMD applications by enhancing hand tracking reliability and usability while reducing workload.
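As a sketch of the kind of early-warning logic such a system needs (a hypothetical illustration; the paper's actual trigger conditions are its own), a warning modal can be driven by debouncing a per-frame tracking-confidence signal so that brief dips do not flash warnings:

    LOW_CONFIDENCE = 0.6   # assumed threshold below which tracking is suspect
    HOLD_SECONDS = 0.5     # confidence must stay low this long before warning

    class TrackingWarning:
        def __init__(self):
            self.low_since = None

        def update(self, confidence, now):
            # Returns True while the warning modal should be shown.
            if confidence >= LOW_CONFIDENCE:
                self.low_since = None
                return False
            if self.low_since is None:
                self.low_since = now
            return (now - self.low_since) >= HOLD_SECONDS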
Quantum Simulation of Chemistry via Quantum Fast Multipole Transform
Dominic Berry
Kianna Wan
Andrew Baczewski
Elliot Eklund
Arkin Tikku
arXiv:2510.07380 (2025)
Abstract
Here we describe an approach for simulating quantum chemistry on quantum computers with significantly lower asymptotic complexity than prior work. The approach uses a real-space first-quantised representation of the molecular Hamiltonian which we propagate using high-order product formulae. Essential for this low complexity is the use of a technique similar to the fast multipole method for computing the Coulomb operator with $\widetilde{\cal O}(\eta)$ complexity for a simulation with $\eta$ particles. We show how to modify this algorithm so that it can be implemented on a quantum computer. We ultimately demonstrate an approach with $t(\eta^{4/3}N^{1/3} + \eta^{1/3} N^{2/3})(\eta Nt/\epsilon)^{o(1)}$ gate complexity, where $N$ is the number of grid points, $\epsilon$ is the target precision, and $t$ is the duration of time evolution. This is roughly a speedup by ${\cal O}(\eta)$ over most prior algorithms. We provide lower complexity than all prior work for $N<\eta^6$ (the only regime of practical interest), with only first-quantised interaction-picture simulations providing better performance for $N>\eta^6$. However, we expect the algorithm to have large constant factors that are likely to limit its practical applicability.
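As a quick sanity check on the two terms in that bound (our arithmetic, not a claim from the paper), the crossover between them sits at $N = \eta^3$: solving $\eta^{4/3}N^{1/3} = \eta^{1/3}N^{2/3}$ gives $\eta = N^{1/3}$, i.e. $N = \eta^3$, so the $\eta^{4/3}N^{1/3}$ term dominates for $N < \eta^3$ and the $\eta^{1/3}N^{2/3}$ term for $N > \eta^3$.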
Abstract
Building on the linear programming approach to competitive equilibrium pricing, we develop a general method for constructing iterative auctions that achieve Vickrey-Clarke-Groves (VCG) outcomes. We show how to transform a linear program characterizing competitive equilibrium prices into one that characterizes universal competitive equilibrium (UCE) prices, which elicit precisely the information needed to compute VCG payments. By applying a primal-dual algorithm to these transformed programs, we derive iterative auctions that maintain a single price path, eliminating the overhead and incentive problems associated with multiple price paths used solely for payment calculations. We demonstrate the versatility of our method by developing novel UCE auctions for multi-unit settings and deriving an iterative UCE variant of the Product-Mix auction. The resulting auctions combine the transparency of iterative price discovery with the efficiency and incentive properties of the VCG mechanism.
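For reference, the VCG payments these auctions implement take the standard textbook form (our notation, not the paper's): bidder $i$ pays $p_i = \max_{x} \sum_{j \neq i} v_j(x) - \sum_{j \neq i} v_j(x^*)$, where $x^*$ maximizes total reported value $\sum_j v_j(x)$ over feasible allocations; that is, each bidder pays the externality it imposes on the others. UCE prices matter precisely because they reveal enough to evaluate both sums along a single price path.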
Abstract
During remote communication, participants often share both digital and physical content, such as product designs, digital assets, and environments, to enhance mutual understanding. Recent advances in augmented communication have enabled users to swiftly create and share digital 2D copies of physical objects from video feeds into a shared space. However, conventional 2D representations of digital objects limit spatial referencing in immersive environments. To address this, we propose Thing2Reality, an Extended Reality (XR) meeting platform that facilitates spontaneous discussions of both digital and physical items during remote sessions. With Thing2Reality, users can quickly materialize ideas or objects in immersive environments and share them as conditioned multiview renderings or 3D Gaussians. Thing2Reality enables users to interact with remote objects or discuss concepts in a collaborative manner. Our user studies revealed that the ability to interact with and manipulate 3D representations of objects significantly enhances the efficiency of discussions, with the potential to augment discussion of 2D artifacts.
Validation of Quantum Elliptic Curve Point Addition Circuits
(2025) (to appear)
Abstract
Specific quantum algorithms exist to, in theory, break elliptic curve cryptographic protocols. Implementing these algorithms requires designing quantum circuits that perform elliptic curve arithmetic. To accurately judge a cryptographic protocol's resistance against future quantum computers, researchers work out minimal resource-count circuits for performing these operations while still being correct. To assure the correctness of a circuit, it is integral to restore all ancilla qubits used to their original states. Failure to do so could result in decoherence of the computation's final result. Through rigorous classical simulation and unit testing, I surfaced four inconsistencies in the state-of-the-art quantum circuit for elliptic curve point addition where the circuit diagram states the qubits are returned in the original (|0⟩) state, but the intermediate values are not uncomputed. I provide fixes to the circuit without increasing the leading-order gate cost.
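The failure mode here can be seen in a standard one-line calculation (our illustration, not from the paper): a circuit that leaves garbage $g(x)$ in an ancilla maps $\sum_x \alpha_x |x\rangle|0\rangle$ to $\sum_x \alpha_x |x\rangle|g(x)\rangle$, and whenever the $g(x)$ differ across branches, discarding the ancilla collapses the data register toward the mixture $\sum_x |\alpha_x|^2 |x\rangle\langle x|$, destroying the interference that subsequent steps of the algorithm rely on. Hence the requirement that intermediate values be uncomputed back to $|0\rangle$.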
The ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning
Pratik Fegade
Proceedings of the 30th ACM International Conference on Architectural Support for Programming Languages and Operating Systems (2025), pp. 5-17
Abstract
A chief enabler of large-scale deep learning is the distribution of computation across multiple interconnected hardware accelerators. In order to unlock the maximum possible performance, a compiler must first select a reasonable strategy to parallelize a model's operations. Since neural network architectures admit multiple flavors of parallelism, determining the proper strategy for each instruction is a critical (albeit non-trivial) task. To solicit new ideas toward solving this challenging combinatorial optimization problem, we organized the ASPLOS 2025 / EuroSys 2025 Contest on Intra-Operator Parallelism for Distributed Deep Learning, a multi-month competition focused on advancing the state-of-the-art for model partitioning algorithms. In this paper, we offer a retrospective of this event, including the basic problem formulation, key challenges & opportunities, our new benchmark suite, and the quality of submissions received.
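To give a flavor of the optimization problem (a toy, chain-shaped instance of our own devising; the contest's actual cost model is richer), each operation picks one sharding strategy, and adjacent operations pay a resharding cost when their chosen layouts disagree:

    def best_partitioning(node_costs, reshard_costs):
        # node_costs[i][s]: runtime of op i under strategy s.
        # reshard_costs[i][s][t]: cost of converting op i's output layout s
        # to the layout expected when op i+1 uses strategy t.
        # Dynamic program over a chain of ops; returns the minimum total cost.
        best = list(node_costs[0])
        for i in range(1, len(node_costs)):
            best = [
                cost + min(best[s] + reshard_costs[i - 1][s][t]
                           for s in range(len(best)))
                for t, cost in enumerate(node_costs[i])
            ]
        return min(best)

Real model graphs are DAGs rather than chains, which is what pushes the problem from this easy dynamic program into hard combinatorial-optimization territory.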
Enhancing Performance of the Tesseract Decoder for Quantum Error Correction
Dragana Grbic
Laleh Beni
Noah Shutty
2025
Abstract
In this paper I describe the performance enhancements I implemented in Tesseract, an open-source quantum-error-correction decoder developed at Google, and document the speedups achieved.
Abstract
The integration of vector search into databases, driven by advancements in embedding models, semantic search, and Retrieval-Augmented Generation (RAG), enables powerful combined querying of structured and unstructured data. This paper focuses on filtered vector search (FVS), a core operation where relational predicates restrict the dataset before or during the vector similarity search (top-k). While approximate nearest neighbor (ANN) indices are commonly used to accelerate vector search by trading latency for recall, the addition of filters complicates performance optimization and makes achieving stable, declarative recall guarantees challenging. Filters alter the effective dataset size and distribution, impacting the search effort required. We discuss the primary FVS execution strategies (pre-filtering, post-filtering, and inline-filtering), whose efficiencies depend on factors like filter selectivity, cardinality, and data correlation. We review existing approaches that modify index structures and search algorithms (e.g., iterative post-filtering, filter-aware index traversal) to enhance FVS performance. This tutorial provides a comprehensive overview of filtered vector search, discussing its use cases, classifying current solutions and their trade-offs, and highlighting crucial research challenges and future directions for developing efficient and accurate FVS systems.
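The trade-off between the first two strategies is easy to state in code (a schematic sketch; index.search is a stand-in for whatever ANN index is in use, and rows are assumed to carry a vec embedding alongside their relational attributes):

    def pre_filter_search(query, rows, predicate, k, dist):
        # Pre-filtering: apply the predicate first, then take an exact
        # top-k over the survivors. Accurate, but expensive when the
        # filter keeps most of the dataset.
        survivors = [r for r in rows if predicate(r)]
        return sorted(survivors, key=lambda r: dist(query, r.vec))[:k]

    def post_filter_search(query, index, predicate, k, fetch=100):
        # Post-filtering: over-fetch from the ANN index, then drop rows
        # failing the predicate. Fast, but may return fewer than k results
        # (recall loss) when the filter is highly selective.
        neighbors = index.search(query, fetch)
        kept = [r for r in neighbors if predicate(r)]
        return kept[:k]

Inline-filtering sits between the two: the predicate is checked during index traversal, so the search expands only qualifying regions of the index.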
Improved Lower Bound for Differentially Private Facility Location
Information Processing Letters, 187 (2025)
Abstract
We consider the differentially private (DP) facility location problem in the so-called super-set output setting proposed by Gupta et al. [GLM+10]. The current best known expected approximation ratio for an ε-DP algorithm is O(log n / √ε) due to Cohen-Addad et al. [CEF+22], where n denotes the size of the metric space, while the best known lower bound is Ω(1/√ε) [EGLW19].
In this short note, we give a lower bound of Ω(min{log n, √(log n/ε)}) on the expected approximation ratio of any ε-DP algorithm, which is the first evidence that the approximation ratio has to grow with the size of the metric space.
Perceptual Evaluation of a Mix Presentation for Immersive Audio with IAMF
Carlos Tejeda-Ocampo
Toni Hirvonen
Ema Souza-Blanes
Mahmoud Namazi
AES 158th Convention of the Audio Engineering Society (2025)
Abstract
Immersive audio mix presentations involve transmitting and rendering several audio elements simultaneously. This enables next-generation applications, such as personalized playback. Using immersive loudspeaker and headphone MUSHRA tests, we investigate bitrate vs. quality for a typical mix presentation use case of a foreground stereo element plus a background Ambisonics scene. For coding, we use Immersive Audio Model and Formats, a recently proposed system for Next-Generation Audio. Excellent quality is achieved at 384 kbit/s even with a reasonable amount of personalization. We also propose a framework for content-aware analysis that can significantly reduce the bitrate when using underlying legacy audio coding instances.
The Case for Leveraging Transport Signals to Improve Internet Speed Test Efficiency
Cristina Leon
Computer Communication Review (2025) (to appear)
Abstract
Internet speed tests are an important tool to enable consumers and regulators to monitor the quality of Internet access. However, increased Internet speeds to the home and an increased demand for speed testing pose scaling challenges to providers of speed tests, who must maintain costly infrastructure to keep up with this demand. In recent years, this has led the popular NDT speed test to limit data transfer to a total of 250MB, which comes at the cost of accuracy for high bandwidth speed test clients.
In this paper, we observe that the NDT speed test server's congestion control algorithm (BBRv1) is also trying to estimate the capacity of the connection. We leverage this observation and signals from BBR to improve the accuracy and efficiency of speed tests. We first show how leveraging signals from BBR can more than double the accuracy of a 10MB test, from 17% to 43%, for clients with speeds over 400Mbps. We then show how using BBR signals to adaptively end the speed test reduces data transfer by 36% and increases accuracy by 13% for high-bandwidth clients, relative to a 100MB fixed-length test. Even accounting for clients that never observe enough samples to utilize the BBR signal, this adaptive approach still uses 25% less data than a fixed 100MB test with 37-44% higher accuracy.
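One simple way to picture adaptive termination (our sketch, not NDT's production logic): end the transfer once recent bandwidth estimates, e.g. derived from BBR's delivery-rate samples, have stabilized.

    def should_stop(bw_samples, window=5, tol=0.05):
        # Stop once the last `window` bandwidth estimates agree to within
        # a `tol` relative spread, i.e. the estimate has converged.
        if len(bw_samples) < window:
            return False
        recent = bw_samples[-window:]
        return (max(recent) - min(recent)) <= tol * max(recent)

Ending the test at convergence rather than at a fixed byte count is what yields the reported data savings without sacrificing accuracy.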