Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
CrossCheck: Input Validation for WAN Control Systems
Rishabh Iyer
Isaac Keslassy
Sylvia Ratnasamy
Networked Systems Design and Implementation (NSDI) (2026) (to appear)
Preview abstract
We present CrossCheck, a system that validates inputs to the Software-Defined Networking (SDN) controller in a Wide Area Network (WAN). By detecting incorrect inputs—often stemming from bugs in the SDN control infrastructure—CrossCheck alerts operators before they trigger network outages.
Our analysis at a large-scale WAN operator identifies invalid inputs as a leading cause of major outages, and we show how CrossCheck would have prevented those incidents. We deployed CrossCheck as a shadow validation system for four weeks in a production WAN, during which it accurately detected the single incident of invalid inputs that occurred while sustaining a 0% false positive rate under normal operation, hence imposing little additional burden on operators. In addition, we show through simulation that CrossCheck reliably detects a wide range of invalid inputs (e.g., detecting demand perturbations as small as 5% with 100% accuracy) and maintains a near-zero false positive rate for realistic levels of noisy, missing, or buggy telemetry data (e.g., sustaining zero false positives with up to 30% of corrupted telemetry data).
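The core idea of comparing controller inputs against independent telemetry can be sketched in a few lines. This is a hypothetical toy check, not CrossCheck's actual algorithm; the function name, data shapes, and flow labels are illustrative, and only the 5% deviation threshold comes from the abstract.

```python
# Toy input-validation check (illustrative only): compare the demand matrix
# handed to the SDN controller against independently measured telemetry and
# alert when any flow deviates by more than a relative tolerance.
def flag_invalid_demands(input_demands, measured_demands, tolerance=0.05):
    """Return flows whose claimed demand deviates from telemetry by more
    than `tolerance` (relative error), mirroring the 5% perturbation
    threshold mentioned in the abstract."""
    alerts = []
    for flow, expected in measured_demands.items():
        claimed = input_demands.get(flow, 0.0)
        if expected > 0 and abs(claimed - expected) / expected > tolerance:
            alerts.append(flow)
    return sorted(alerts)

measured = {"a->b": 100.0, "a->c": 200.0}
buggy_inputs = {"a->b": 100.0, "a->c": 230.0}  # 15% off for a->c
print(flag_invalid_demands(buggy_inputs, measured))  # ['a->c']
```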
Preview abstract
How many T gates are needed to approximate an arbitrary n-qubit quantum state to within a given precision ε? Improving prior work of Low, Kliuchnikov and Schaeffer, we show that the optimal asymptotic scaling is Θ(√(2^n log(1/ε)) + log(1/ε)) if we allow an unlimited number of ancilla qubits. We also show that this is the optimal T-count for implementing an arbitrary diagonal n-qubit unitary to within error ε. We describe an application to batched synthesis of single-qubit unitaries: we can approximate a tensor product of m = O(log log(1/ε)) arbitrary single-qubit unitaries to within error ε with the same asymptotic T-count as is required to approximate just one single-qubit unitary.
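The scaling claimed above can be explored numerically. This is a toy evaluation of the asymptotic expression with all constant factors ignored; the log base is unspecified in the asymptotics, and base 2 is an assumption made here for illustration.

```python
import math

# Toy evaluation of the optimal scaling from the abstract,
# Theta(sqrt(2^n * log(1/eps)) + log(1/eps)), constants ignored.
def t_count_scaling(n: int, eps: float) -> float:
    log_term = math.log2(1.0 / eps)  # base 2 chosen for illustration
    return math.sqrt(2**n * log_term) + log_term

# The additive log(1/eps) term dominates for few qubits; the sqrt(2^n)
# term takes over as n grows.
print(t_count_scaling(1, 1e-10))
print(t_count_scaling(30, 1e-10))
```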
Preview abstract
For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform operations many times in parallel for a cost that is comparable to a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit when we consider a cost model that focuses purely on the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a reduction in cost of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data loading step.
Preview abstract
AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative—but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals the critical strengths and limitations of current agentic approaches, offering actionable insights into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
Preview abstract
Semantic data models express high-level business concepts and metrics, capturing the business logic needed to query a database correctly. Most data modeling solutions are built as layers above SQL query engines, with bespoke query languages or APIs. The layered approach means that semantic models can’t be used directly in SQL queries. This paper focuses on an open problem in this space – can we define semantic models in SQL, and make them naturally queryable in SQL?
In parallel, graph query is becoming increasingly popular, including in SQL. SQL/PGQ extends SQL with an embedded subset of the GQL graph query language, adding property graph views and making graph traversal queries easy.
We explore a surprising connection: semantic data models are graphs, and defining graphs is a data modeling problem. In both domains, users start by defining a graph model, and need query language support to easily traverse edges in the graph, which means doing joins in the underlying data.
We propose some useful SQL extensions that make it easier to use higher-level data model abstractions in queries. Users can define a “semantic data graph” view of their data, encapsulating the complex business logic required to query the underlying tables correctly. Then they can query that semantic graph model easily with SQL.
Our SQL extensions are useful independently, simplifying many queries – particularly, queries with joins. We make declared foreign key relationships usable for joins at query time – a feature that seems obvious but is notably missing in standard SQL.
In combination, these extensions provide a practical approach to extend SQL incrementally, bringing semantic modeling and graph query together with the relational model and SQL.
Consensus or Conflict? Fine-Grained Evaluation of Conflicting Answers in Question-Answering
Eviatar Nachshoni
Arie Cattan
Shmuel Amar
Ori Shapira
Ido Dagan
2025
Preview abstract
Large Language Models (LLMs) have demonstrated strong performance in question answering (QA) tasks. However, Multi-Answer Question Answering (MAQA), where a question may have several valid answers, remains challenging. Traditional QA settings often assume consistency across evidence, but MAQA can involve conflicting answers. Constructing datasets that reflect such conflicts is costly and labor-intensive, while existing benchmarks often rely on synthetic data, restrict the task to yes/no questions, or apply unverified automated annotation. To advance research in this area, we extend the conflict-aware MAQA setting to require models not only to identify all valid answers, but also to detect specific conflicting answer pairs, if any. To support this task, we introduce a novel cost-effective methodology for leveraging fact-checking datasets to construct NATCONFQA, a new benchmark for realistic, conflict-aware MAQA, enriched with detailed conflict labels for all answer pairs. We evaluate eight high-end LLMs on NATCONFQA, revealing their fragility in handling various types of conflicts and the flawed strategies they employ to resolve them.
Digital Shadow AI Risk Theoretical Framework (DART): A Framework for Managing Data Disclosure and Privacy Risks of AI tools at Work
Master's Thesis (2025) (to appear)
Preview abstract
The accelerated integration of generative AI technologies and agentic AI tools, particularly those like ChatGPT, into workplace settings has introduced complex challenges concerning data governance, regulatory compliance, and organizational privacy (GDPR 2016; CCPA/CPRA). This study introduces the Digital Shadow AI Risk Theoretical Framework (DART)—a novel theoretical framework designed to systematically identify, classify, and address the latent risks arising from the widespread, and often unregulated, use of AI systems in professional environments (NIST, 2023; OECD AI Policy Observatory, 2023). DART introduces six original, interrelated constructs developed in this study: Unintentional Disclosure Risk, Trust-Dependence Paradox, Data Sovereignty Conflict, Knowledge Dilution Phenomenon, Ethical Black Box Problem, and Organizational Feedback Loops. Each construct reflects a unique dimension of risk that emerges as organizations increasingly rely on AI-driven tools for knowledge work and decision-making.
The framework is empirically tested through a mixed-methods research design involving hypothesis testing and statistical analysis of behavioral data gathered from cross-sectional surveys of industry professionals. Two cross-industry surveys (Survey-1: 416 responses, 374 analyzed; Survey-2: 203 responses, 179 analyzed) and CB-SEM tests supported seven of eight hypotheses; H4 (sovereignty) was not significant; H7 (knowledge dilution) was confirmed in replication. The findings highlight critical gaps in employee training, policy awareness, and risk mitigation strategies—underscoring the urgent need for updated governance frameworks, comprehensive AI-use policies, and targeted educational interventions. This paper contributes to emerging scholarship by offering a robust model for understanding and mitigating digital risks in AI-enabled workplaces, providing practical implications for compliance officers, risk managers, and organizational leaders aiming to harness the benefits of generative AI responsibly and securely. The novelty of DART lies in its explicit theorization of workplace-level behavioral risks—especially Shadow AI, which unlike Shadow IT externalizes organizational knowledge into adaptive systems—thereby offering a unified framework that bridges fragmented literatures and grounds them in empirical evidence.
The Power of Context: How Multimodality Improves Image Super-Resolution
Mojtaba Ardakani
Vishal M Patel
2025 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
Preview abstract
Single-image super-resolution (SISR) remains challenging due to the inherent difficulty of recovering fine-grained details and preserving perceptual quality from low-resolution inputs. Existing methods often rely on limited image priors, leading to suboptimal results. We propose a novel approach that leverages the rich contextual information available in multiple modalities, including depth, segmentation, edges, and text prompts, to learn a powerful generative prior for SISR within a diffusion model framework. We introduce a flexible network architecture that effectively fuses multimodal information, accommodating an arbitrary number of input modalities without requiring significant modifications to the diffusion process. Crucially, we mitigate hallucinations, often introduced by text prompts, by using spatial information from other modalities to guide regional text-based conditioning. Each modality's guidance strength can also be controlled independently, allowing steering outputs toward different directions, such as increasing bokeh through depth or adjusting object prominence via segmentation. Extensive experiments demonstrate that our model surpasses state-of-the-art generative SISR methods, achieving superior visual quality and fidelity.
The FLuid Allocation of Surface Code Qubits (FLASQ) Cost Model for Early Fault-Tolerant Quantum Algorithms
Bill Huggins
Amanda Xu
Matthew Harrigan
Christopher Kang
Guang Hao Low
Austin Fowler
arXiv:2511.08508 (2025)
Preview abstract
Holistic resource estimates are essential for guiding the development of fault-tolerant quantum algorithms and the computers they will run on. This is particularly true when we focus on highly constrained early fault-tolerant devices. Many attempts to optimize algorithms for early fault-tolerance focus on simple metrics, such as the circuit depth or T-count. These metrics fail to capture critical overheads, such as the spacetime cost of Clifford operations and routing, or miss key optimizations. We propose the FLuid Allocation of Surface code Qubits (FLASQ) cost model, tailored for architectures that use a two-dimensional lattice of qubits to implement the two-dimensional surface code. FLASQ abstracts away the complexity of routing by assuming that ancilla space and time can be fluidly rearranged, allowing for the tractable estimation of spacetime volume while still capturing important details neglected by simpler approaches. At the same time, it enforces constraints imposed by the circuit's measurement depth and the processor's reaction time. We apply FLASQ to analyze the cost of a standard two-dimensional lattice model simulation, finding that modern advances (such as magic state cultivation and the combination of quantum error correction and mitigation) reduce both the time and space required for this task by an order of magnitude compared with previous estimates. We also analyze the Hamming weight phasing approach to synthesizing parallel rotations, revealing that despite its low T-count, the overhead from imposing a 2D layout and from its use of additional ancilla qubits will make it challenging to benefit from in early fault-tolerance. We hope that the FLASQ cost model will help to better align early fault-tolerant algorithmic design with actual hardware realization costs without demanding excessive knowledge of quantum error correction from quantum algorithmists.
Preview abstract
A growing body of research has demonstrated that the behavior of large language models can be effectively controlled at inference time by directly modifying their internal states, either through vector additions to their activations or through updates to their weight matrices. These techniques, while powerful, are often guided by empirical heuristics, such as deriving "steering vectors" from the average activations of contrastive prompts. This work provides a theoretical foundation for these interventions, explaining how they emerge from the fundamental computations of the transformer architecture. Building on the recent finding that a prompt's influence can be mathematically mapped to implicit weight updates (Dherin et al., 2025), we generalize this theory to deep, multi-block transformers. We show how the information contained in any chunk of a user prompt is represented and composed internally through virtual weight vectors and virtual weight matrices. We then derive a principled method for condensing this information into token-independent thought vectors and thought matrices. These constructs provide a theoretical explanation for existing vector- and matrix-based model editing techniques and offer a direct, computationally-grounded method for transforming textual input into reusable weight updates.
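The contrastive-prompt heuristic this abstract refers to can be sketched numerically. This is a minimal illustration, not the paper's method: the activations below are random stand-ins for a model's hidden states at one layer, and the `steer` function is a hypothetical name for the inference-time vector addition.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16  # toy hidden dimension

# Stand-ins for hidden activations collected at one layer while running
# contrastive prompt pairs (e.g., two phrasings that differ in one trait).
positive_acts = rng.normal(loc=1.0, size=(8, d_model))
negative_acts = rng.normal(loc=-1.0, size=(8, d_model))

# The steering vector is the difference of the mean activations.
steering_vector = positive_acts.mean(axis=0) - negative_acts.mean(axis=0)

def steer(hidden_state: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Add the steering vector to a hidden state at inference time."""
    return hidden_state + alpha * steering_vector

h = rng.normal(size=d_model)
print(steer(h).shape)  # (16,)
```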
REGEN: A Dataset and Benchmarks with Natural Language Critiques and Narratives
Kun Su
Krishna Sayana
Hubert Pham
James Pine
Yuri Vasilevski
Raghavendra Vasudeva
Liam Hebert
Ambarish Jash
Anushya Subbiah
Sukhdeep Sodhi
(2025)
Preview abstract
This paper introduces a novel dataset REGEN (Reviews Enhanced with GEnerative Narratives), designed to benchmark the conversational capabilities of recommender Large Language Models (LLMs), addressing the limitations of existing datasets that primarily focus on sequential item prediction. REGEN extends the Amazon Product Reviews dataset by inpainting two key natural language features: (1) user critiques, representing user "steering" queries that lead to the selection of a subsequent item, and (2) narratives, rich textual outputs associated with each recommended item taking into account prior context. The narratives include product endorsements, purchase explanations, and summaries of user preferences.
Further, we establish an end-to-end modeling benchmark for the task of conversational recommendation, where models are trained to generate both recommendations and corresponding narratives conditioned on user history (items and critiques). For this joint task, we introduce a modeling framework LUMEN (LLM-based Unified Multi-task Model with Critiques, Recommendations, and Narratives) which uses an LLM as a backbone for critiquing, retrieval and generation. We also evaluate the dataset's quality using standard auto-rating techniques and benchmark it by training both traditional and LLM-based recommender models. Our results demonstrate that incorporating critiques enhances recommendation quality by enabling the recommender to learn language understanding and integrate it with recommendation signals. Furthermore, LLMs trained on our dataset effectively generate both recommendations and contextual narratives, achieving performance comparable to state-of-the-art recommenders and language models.
Preview abstract
Large Language Models (LLMs) have revolutionized the AI landscape, demonstrating remarkable capabilities across a wide range of tasks. Each new model seemingly reinforces the notion that modern transformer-based AI can conquer any challenge if armed with sufficient compute and data. However, the scaling-driven paradigm is far from a universal solution to AI’s diverse challenges. For example, while scaling has accelerated certain applications, such as robotics, it has yet to show significant impact in others, such as identifying misinformation. Currently, there is no clear framework for distinguishing which use cases thrive from scaling with more data and which demand alternative approaches.
We are beginning to observe that the shape of data itself may hold valuable clues that could inform the success of data-driven scaling. For instance, insights from topological data analysis suggest that examining structural patterns and stability of data across multiple scales can help determine when scaling will be advantageous.
Moreover, the practicalities of data acquisition impose additional constraints that we must factor into the scaling equation upfront. Factors such as the availability of high-quality data (itself a highly nuanced notion), the complexity and resource intensity of data collection, and the availability of proper evaluation benchmarks determine not just the effectiveness but also the viability of scaling.
We have translated these emerging insights about data shape and nature of data acquisition into a practical framework of questions that evaluate predictiveness of historical data, stability of data patterns, clarity of data requirements, feasibility of high-quality data collection, and ease of assessing data quality. Together, these answers can help practitioners make more informed decisions about when scaling is more likely to yield successful outcomes. We have applied the framework to several AI use cases as an example. These early observations highlight a critical need for continued research in this domain.
ESAM++: Efficient Online 3D Perception on the Edge
Qin Liu
Lavisha Aggarwal
Vikas Bahirwani
Lin Li
Aleksander Holynski
Saptarashmi Bandyopadhyay
Zhengyang Shen
Marc Niethammer
Ehsan Adeli
Andrea Colaco
2025
Preview abstract
Online 3D scene perception in real time is critical for robotics, AR/VR, and autonomous systems, particularly in edge computing scenarios where computational resources are limited. Recent state-of-the-art methods like EmbodiedSAM (ESAM) demonstrate the promise of online 3D perception by leveraging the 2D visual foundation model (VFM) with efficient 3D query lifting and merging. However, ESAM depends on a computationally expensive sparse 3D U-Net for point cloud feature extraction, which we identify as the primary efficiency bottleneck. In this paper, we propose a lightweight and scalable alternative for online 3D scene perception tailored to edge devices. Our method introduces a 3D Sparse Feature Pyramid Network (SFPN) that efficiently captures multi-scale geometric features from streaming 3D point clouds while significantly reducing computational overhead and model size. We evaluate our approach on four challenging segmentation benchmarks—ScanNet, ScanNet200, SceneNN, and 3RScan—demonstrating that our model achieves competitive accuracy with up to 3× faster inference and a 3× smaller model size compared to ESAM, enabling practical deployment in real-world edge scenarios. Code and models will be released.
Preview abstract
Large Language Models (LLMs) have demonstrated impressive capabilities across a range of natural language processing tasks. In particular, improvements in reasoning abilities and the expansion of context windows have opened new avenues for leveraging these powerful models.
NL2SQL is challenging in that the natural language question is inherently ambiguous, while SQL generation requires a precise understanding of complex data schemas and semantics. One approach to resolving this ambiguity is to provide richer, more complete contextual information.
In this work, we explore the performance and latency trade-offs of the extended context window (a.k.a. long context) offered by Google's state-of-the-art LLM (gemini-1.5-pro).
We study the impact of various contextual information, including column example values, question and SQL query pairs, user-provided hints, SQL documentation, and schema. To the best of our knowledge, this is the first work to study how the extended context window and extra contextual information can help NL2SQL generation with respect to both accuracy and latency cost.
We show that long-context LLMs are robust and do not get lost in the extended contextual information. Additionally, our long-context NL2SQL pipeline based on Google's gemini-1.5-pro achieves strong performance of 67.41% on the BIRD benchmark (dev) without finetuning or expensive self-consistency-based techniques.
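The kinds of contextual signals studied here (schema, column example values, question/SQL pairs, hints) might be packed into a single long-context prompt roughly as follows. This is a minimal sketch under stated assumptions: every function name, section header, and the toy schema are illustrative, not the paper's actual pipeline.

```python
# Hypothetical helper (illustrative only): assemble contextual signals
# for NL2SQL into one long-context prompt string.
def build_nl2sql_prompt(question, schema, example_values, qa_pairs, hints):
    sections = [
        "-- Database schema --\n" + schema,
        "-- Example column values --\n" + "\n".join(
            f"{col}: {', '.join(vals)}" for col, vals in example_values.items()),
        "-- Similar question/SQL pairs --\n" + "\n\n".join(
            f"Q: {q}\nSQL: {sql}" for q, sql in qa_pairs),
        "-- Hints --\n" + "\n".join(hints),
        "-- Task --\nTranslate to SQL: " + question,
    ]
    return "\n\n".join(sections)

prompt = build_nl2sql_prompt(
    question="How many singers are older than 40?",
    schema="CREATE TABLE singer(id INT, name TEXT, age INT);",
    example_values={"singer.age": ["23", "41", "57"]},
    qa_pairs=[("List all singers.", "SELECT name FROM singer;")],
    hints=["Ages are stored as integers."],
)
print(prompt.splitlines()[0])  # -- Database schema --
```

With a long-context model, many such sections can be included at once; the paper's question is how much each kind of section helps accuracy versus what it costs in latency.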
Linear-Time Multilevel Graph Partitioning via Edge Sparsification
Peter Sanders
Dominik Rosch
Nikolai Maas
Lars Gottesbüren
Daniel Seemaier
2025
Preview abstract
The current landscape of balanced graph partitioning is divided into high-quality but expensive multilevel algorithms and cheaper approaches with linear running time, such as single-level algorithms and streaming algorithms.
We demonstrate how to achieve the best of both worlds with a linear time multilevel algorithm.
Multilevel algorithms construct a hierarchy of increasingly smaller graphs by repeatedly contracting clusters of nodes.
Our approach preserves their distinct advantage, allowing refinement of the partition over multiple levels with increasing detail.
At the same time, we use edge sparsification to guarantee geometric size reduction between the levels and thus linear running time.
We provide a proof of the linear running time as well as additional insights into the behavior of multilevel algorithms, showing that graphs with low modularity are most likely to trigger worst-case running time.
We evaluate multiple approaches for edge sparsification and integrate our algorithm into the state-of-the-art multilevel partitioner KaMinPar, maintaining its excellent parallel scalability.
As demonstrated in detailed experiments, this results in a 1.49x average speedup (up to 4x for some instances) with only 1% loss in solution quality.
Moreover, our algorithm clearly outperforms state-of-the-art single-level and streaming approaches.