Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative—but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals novel insights into the critical strengths and limitations of current agentic approaches, offering actionable guidance on their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
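FreshBrew's evaluation harness is not described beyond the abstract, but the core check it implies — does a migrated project still build under the target JDK? — is easy to picture. Below is a minimal, hypothetical sketch in Python; the `migrated_repos` directory, the JDK path, and the use of Maven are assumptions for illustration, not details from the paper.

```python
import os
import subprocess
from pathlib import Path

def builds_under_jdk(repo: Path, jdk_home: str) -> bool:
    """Attempt a Maven build with JAVA_HOME pointed at the target JDK."""
    env = dict(os.environ, JAVA_HOME=jdk_home)
    try:
        result = subprocess.run(
            ["mvn", "-B", "-q", "verify"],  # -B: batch mode, no interactive prompts
            cwd=repo, env=env, capture_output=True, text=True, timeout=1800,
        )
    except subprocess.TimeoutExpired:
        return False
    return result.returncode == 0

# Hypothetical usage: overall migration success rate across a benchmark set.
repos = [p for p in Path("migrated_repos").iterdir() if p.is_dir()]
passed = sum(builds_under_jdk(r, "/usr/lib/jvm/java-17-openjdk") for r in repos)
print(f"{passed}/{len(repos)} projects build under JDK 17")
```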
For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform operations many times in parallel for a cost comparable to a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit when we consider a cost model that focuses purely on the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a cost reduction of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data loading step.
Beyond the Phone: Exploring Context-aware Interaction Between Mobile and Mixed Reality Devices
Fengyuan Zhu
Daniel Kalmar
Mahdi Tayarani
2025
Despite the surge in popularity of virtual reality (VR), mobile phones remain the primary medium for accessing digital content, offering both privacy and portability. This short paper presents Beyond the Phone, a novel framework that enhances mobile phones in VR with context-aware controls and spatial augmentation. We first establish a comprehensive design space through brainstorming and iterative discussions with VR experts. We then develop a proof-of-concept system that analyzes UI layouts to offer context-aware controls and spatial augmentation, targeting six key application areas within our design space. Finally, we demonstrate that our system can effectively adapt to a broad spectrum of applications at runtime, and discuss future directions informed by reviews with seven experts.
InstructPipe: Generating Visual Blocks Pipelines with Human Instructions and LLMs
Jing Jin
Xiuxiu Yuan
Jun Jiang
Jingtao Zhou
Yiyi Huang
Zheng Xu
Kristen Wright
Jason Mayes
Mark Sherwood
Johnny Lee
Alex Olwal
Ram Iyengar
Na Li
Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI), ACM, pp. 23
Visual programming has the potential to provide novice programmers with a low-code experience for building customized processing pipelines. Existing systems typically require users to build pipelines from scratch, implying that novice users are expected to set up and link appropriate nodes from a blank workspace. In this paper, we introduce InstructPipe, an AI assistant for prototyping machine learning (ML) pipelines with text instructions. We contribute two large language model (LLM) modules and a code interpreter as part of our framework. The LLM modules generate pseudocode for a target pipeline, and the interpreter renders the pipeline in the node-graph editor for further human-AI collaboration. Both technical and user evaluations (N=16) show that InstructPipe empowers users to streamline their ML pipeline workflow, reduce their learning curve, and leverage open-ended commands to spark innovative ideas.
The Pseudo-Dimension of Contracts
Paul Duetting
Michal Feldman
Tomasz Ponitka
Ermis Soumalis
EC '25: Proceedings of the 26th ACM Conference on Economics and Computation (2025), 514 - 539
Algorithmic contract design studies scenarios where a principal incentivizes an agent to exert effort on her behalf. In this work, we focus on settings where the agent's type is drawn from an unknown distribution, and formalize an offline learning framework for learning near-optimal contracts from sample agent types. A central tool in our analysis is the notion of pseudo-dimension from statistical learning theory. Beyond its role in establishing upper bounds on the sample complexity, pseudo-dimension measures the intrinsic complexity of a class of contracts, offering a new perspective on the tradeoffs between simplicity and optimality in contract design. Our main results provide essentially optimal tradeoffs between pseudo-dimension and representation error (defined as the loss in principal's utility) with respect to linear and bounded contracts. Using these tradeoffs, we derive sample- and time-efficient learning algorithms, and demonstrate their near-optimality by providing almost matching lower bounds on the sample complexity. Conversely, for unbounded contracts, we prove an impossibility result showing that no learning algorithm exists.
Finally, we extend our techniques in three important ways. First, we provide refined pseudo-dimension and sample complexity guarantees for the combinatorial actions model, revealing a novel connection between the number of critical values and sample complexity. Second, we extend our results to menus of contracts, showing that their pseudo-dimension scales linearly with the menu size. Third, we adapt our algorithms to the online learning setting, where we show that a polynomial number of type samples suffices to learn near-optimal bounded contracts. Combined with prior work, this establishes a formal separation between expert advice and bandit feedback for this setting.
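As background on the central tool (textbook material from statistical learning theory, not a contribution of this paper), the pseudo-dimension of a class $\mathcal{F}$ of real-valued functions is defined via pseudo-shattering:

```latex
% F pseudo-shatters x_1, ..., x_n if witness thresholds t_1, ..., t_n
% realize every sign pattern b in {0,1}^n:
\exists\, t_1,\dots,t_n \in \mathbb{R}\ \text{s.t.}\ \forall\, b \in \{0,1\}^n\
\exists\, f \in \mathcal{F}: \quad \bigl(f(x_i) > t_i \iff b_i = 1\bigr)\ \text{for all } i.
% Pdim(F) is the size of the largest pseudo-shattered set:
\operatorname{Pdim}(\mathcal{F}) \;=\; \max\bigl\{\, n : \text{some } x_1,\dots,x_n \text{ are pseudo-shattered by } \mathcal{F} \,\bigr\}.
```

For binary-valued classes this reduces to the VC dimension, which is why it yields sample-complexity upper bounds via uniform convergence.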
Procurement Auctions via Approximate Submodular Optimization
Amin Karbasi
Grigoris Velegkas
Forty-second International Conference on Machine Learning (2025)
We study the problem of procurement auctions, in which an auctioneer seeks to acquire services from a group of strategic sellers with private costs. The quality of the services is measured through some submodular function that is known to the auctioneer. Our goal is to design computationally efficient procurement auctions that (approximately) maximize the difference between the quality of the acquired services and the total cost of the sellers, in a way that is incentive compatible (IC) and individually rational (IR) for the sellers, and generates non-negative surplus (NAS) for the auctioneer.
Leveraging recent results from the literature on non-positive submodular function maximization, we design computationally efficient frameworks that transform submodular function optimization algorithms into mechanisms that are IC and IR for the sellers, NAS for the auctioneer, and approximation-preserving. Our frameworks are general and work both in the offline setting, where the auctioneer can observe the bids and the services of all the sellers simultaneously, and in the online setting, where the sellers arrive in an adversarial order and the auctioneer has to make an irrevocable decision whether to purchase their service or not. We further investigate whether it is possible to convert state-of-the-art submodular optimization algorithms into a descending auction. We focus on the adversarial setting, meaning that the schedule of descending prices is determined by an adversary. We show that a submodular optimization algorithm satisfying a bi-criteria $(\alpha, 1)$-approximation in welfare can be effectively converted to a descending auction in the adversarial setting if and only if $\alpha \leq \frac{1}{2}$. Our result highlights the importance of a carefully designed schedule of descending prices for effectively converting a submodular optimization algorithm satisfying a bi-criteria $(\alpha, 1)$-approximation in welfare with $\alpha > \frac{1}{2}$ into a descending auction. We also establish a connection between descending auctions and online submodular optimization algorithms.
We demonstrate the practical applications of our frameworks by instantiating them with different state-of-the-art submodular optimization algorithms and comparing their welfare performance through empirical experiments on publicly available datasets that consist of thousands of sellers.
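The abstract does not spell out the mechanisms themselves, but the reason posted-price-style designs are IC and IR is mechanical and easy to illustrate. The following Python sketch is a generic illustration under assumed posted prices, not the paper's framework: each seller receives a take-it-or-leave-it offer fixed in advance, so no report can move the price, and a rational seller accepts exactly when the price covers her cost; buying only when the marginal value covers the price keeps the auctioneer's surplus non-negative.

```python
from typing import Callable

def posted_price_procurement(
    sellers: list[str],
    costs: dict[str, float],    # private; used here only to simulate accept/reject
    prices: dict[str, float],   # posted before any bids, so reports can't move them
    value: Callable[[set], float],  # submodular quality function known to auctioneer
) -> set:
    """Greedy posted-price procurement: buy a service iff the seller accepts the
    posted price AND the marginal value covers that price (non-negative surplus)."""
    acquired: set = set()
    for s in sellers:
        marginal = value(acquired | {s}) - value(acquired)
        # IC/IR: the offer does not depend on the seller's report, and a
        # seller accepts only when the price is at least her cost.
        if costs[s] <= prices[s] <= marginal:
            acquired.add(s)
    return acquired

# Toy example with a coverage (hence submodular) value function.
elements = {"a": {1, 2}, "b": {2, 3}, "c": {4}}
value = lambda S: float(len(set().union(*(elements[s] for s in S)) if S else set()))
print(posted_price_procurement(["a", "b", "c"], {"a": 1, "b": 2, "c": 5},
                               {"a": 1.5, "b": 1.0, "c": 0.8}, value))
```

Here the IC/IR argument is the comment in the loop; the paper's contribution is achieving this while (approximately) preserving the guarantee of the underlying submodular optimization algorithm.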
Safe Coding: Rigorous Modular Reasoning about Software Safety
Queue, 23(5) (2025)
Many persistent and dangerous software vulnerabilities, including memory safety violations and code injection, arise from a common root cause: Developers inadvertently violate the implicit safety preconditions of widely-used programming constructs. These preconditions—such as pointer validity, array-access bounds, and the trustworthy provenance of code fragments to be evaluated as SQL, HTML, or JavaScript—are traditionally the developer's responsibility to ensure. In complex systems, meeting these obligations often relies on non-local, whole-program invariants that are notoriously difficult to reason about correctly, leading to vulnerabilities that are difficult to detect after the fact.
This article introduces Safe Coding, a collection of software design patterns and practices designed to cost-effectively provide a high degree of assurance against entire classes of such vulnerabilities. The core principle of Safe Coding is to shift responsibility for safety from individual developers to the programming language, software libraries, and frameworks. This is achieved by systematically eliminating the direct use of risky operations—those with complex safety preconditions—in application code. Instead, these operations are encapsulated within safe abstractions: modules with public APIs that are safe by design, whose implementations fully ensure all module-internal safety preconditions through a combination of local runtime checks and by elevating safety preconditions into type invariants.
Safe Coding facilitates a modular and compositional approach to whole-program safety: Difficult reasoning is localized to the implementation of safe abstractions, which undergo focused expert scrutiny. The composition of these abstractions with the majority of the codebase (which is kept free of risky operations) is then automatically verified by the language’s type checker. This form of compositional reasoning, drawing from patterns used in formal software verification, can be viewed as a semi-formal approach that balances rigor with broad applicability to large industrial codebases. We discuss the successful application of these practices at Google, where they have nearly eliminated vulnerabilities such as Cross-Site Scripting (XSS) and SQL injection, and their critical role in ensuring memory safety in Rust, collectively demonstrating a favorable cost-assurance tradeoff for achieving software safety at scale.
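As a concrete (if simplified) illustration of the safe-abstraction pattern described above — not Google's actual API — consider a SafeHtml wrapper in Python whose only sanctioned builders escape untrusted text, so any value of the type carries the invariant that it is safe to render:

```python
import html
from dataclasses import dataclass

@dataclass(frozen=True)
class SafeHtml:
    # Type invariant (maintained by the builders below): markup contains
    # no attacker-controlled, unescaped content.
    markup: str

def html_from_text(text: str) -> SafeHtml:
    """Safe builder: escapes all HTML metacharacters in untrusted text."""
    return SafeHtml(html.escape(text))

def html_concat(*parts: SafeHtml) -> SafeHtml:
    """Composing safe values preserves the invariant."""
    return SafeHtml("".join(p.markup for p in parts))

def render(sink_name: str, content: SafeHtml) -> str:
    """A 'sink' that accepts only SafeHtml: the type, not the developer at
    the call site, carries the safety precondition."""
    return f"<div id={sink_name!r}>{content.markup}</div>"

# Application code never concatenates raw strings into HTML itself:
user_input = '<img src=x onerror="alert(1)">'
print(render("comment", html_from_text(user_input)))
```

In a statically typed language the constructor would be module-private, making the invariant compiler-enforced rather than conventional; the point is that the risky operation (string interpolation into HTML) lives only inside the abstraction's implementation.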
We study the effect of a firm's new information disclosure on the information asymmetry between its informed and uninformed investors and its liquidity. To do this, we employ advanced natural language processing (NLP) methods to introduce a novel measure of firms' 10-K filing predictability that quantifies the amount of new information in these reports. Our findings show that more new information is associated with higher bid-ask spreads and lower trading volumes, indicating increased information asymmetry and reduced liquidity, respectively. Notably, institutional ownership moderates these effects, suggesting that sophisticated investors can mitigate the adverse consequences of disclosure unpredictability. An event study analysis further reveals that more new information triggers increased trading activity and abnormal returns immediately after disclosure, though these effects are short-lived.
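The abstract does not disclose the predictability measure itself; as a purely illustrative proxy (an assumption for exposition, not the authors' method), one could score the lexical novelty of a filing against the prior year's using TF-IDF cosine similarity:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def novelty_score(prior_filing: str, current_filing: str) -> float:
    """Illustrative proxy: 1 - cosine similarity of TF-IDF vectors.
    Higher values = more new language relative to last year's 10-K."""
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(
        [prior_filing, current_filing])
    return 1.0 - float(cosine_similarity(tfidf[0], tfidf[1])[0, 0])

print(novelty_score("risk factors unchanged from the prior year",
                    "new segment launched; novel regulatory risks disclosed"))
```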
HEART: Emotionally-driven test-time scaling of Language Models
Souradip Chakraborty
Gabriela Pinto
2025
Test-time scaling has shown considerable success in improving the performance of language models on complex reasoning tasks without requiring fine-tuning. However, current strategies, such as self-reflection or ensembling, primarily focus on logical or structural refinement. They do not leverage the guiding potential of affective feedback. Inspired by psychological research showing that emotions can modulate cognitive performance, we introduce HEART, a novel framework that uses emotionally driven prompts for iterative self-correction. HEART provides feedback on a model's incorrect response using a curated set of concise, emotionally charged phrases based on Paul Ekman's six basic emotions. By systematically varying the emotional tone of the feedback across iterations, our method guides the model to escape flawed reasoning paths and explore more promising alternatives. We evaluate our framework on challenging reasoning benchmarks including OlympiadBench, Humanity's Last Exam, and SimpleQA. Across these benchmarks, our approach delivers significantly deeper reasoning, which leads to consistent and significant increases in accuracy compared to existing prompting methods. Crucially, these gains are observed across a diverse range of model architectures, demonstrating the broad applicability of our technique. Overall, our findings suggest that the next frontier in machine reasoning may lie not just in refining logic, but also in understanding and leveraging the 'HEART' of the models.
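The paper's curated phrases and evaluation harness are not reproduced here; the loop it describes might look like the following sketch, where `query_model` and `is_correct` are hypothetical stand-ins for an LLM call and an answer checker, and the feedback phrases are illustrative, not the paper's set:

```python
# One short feedback phrase per Ekman emotion -- illustrative stand-ins.
EMOTION_FEEDBACK = {
    "anger":    "That answer is simply wrong. Do better.",
    "fear":     "A mistake here would be costly. Re-examine every step.",
    "sadness":  "It's disappointing to see this error. Please try again.",
    "joy":      "You're close, and getting this right would be wonderful!",
    "disgust":  "This reasoning is sloppy. Clean it up.",
    "surprise": "Unexpected! That result contradicts your own earlier steps.",
}

def heart_solve(question: str, query_model, is_correct, max_iters: int = 6) -> str:
    """Iterative self-correction: vary the emotional tone of feedback across
    iterations to push the model off a flawed reasoning path."""
    answer = query_model(question)
    for emotion in list(EMOTION_FEEDBACK)[:max_iters]:
        if is_correct(answer):
            break
        prompt = (f"{question}\n\nYour previous answer: {answer}\n"
                  f"{EMOTION_FEEDBACK[emotion]}\nAnswer again, step by step.")
        answer = query_model(prompt)
    return answer
```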
We study differential privacy (DP) in a multi-party setting where each party only trusts a (known) subset of the other parties with its data. Specifically, given a trust graph where vertices correspond to parties and neighbors are mutually trusting, we give a DP algorithm for aggregation with a much better privacy-utility trade-off than in the well-studied local model of DP (where each party trusts no other party). We further study a robust variant where each party trusts all but an unknown subset of at most t of its neighbors (where t is a given parameter), and give an algorithm for this setting. We complement our algorithms with lower bounds, and discuss implications of our work to other tasks in private learning and analytics.
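The trust-graph algorithm itself is beyond the abstract, but the local-model baseline it improves on is standard and simple to state: with no trust at all, every party must randomize its own value, so noise accumulates across all n parties. A minimal sketch of that baseline (standard local DP, not the paper's algorithm):

```python
import numpy as np

def local_dp_sum(values: np.ndarray, epsilon: float, sensitivity: float = 1.0) -> float:
    """Local-model baseline: each party trusts no one, so each adds
    Laplace(sensitivity/epsilon) noise to its own value before releasing it.
    The sum's noise variance grows linearly in n -- the trade-off the
    trust-graph model improves on by letting trusted neighbors aggregate first."""
    rng = np.random.default_rng(0)
    noisy = values + rng.laplace(scale=sensitivity / epsilon, size=values.shape)
    return float(noisy.sum())

print(local_dp_sum(np.ones(1000), epsilon=1.0))  # true sum is 1000
```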
Envisioning Aboriginal and Torres Strait Islander AI Futures
Journal of Global Indigeneity (2025)
In January 2025, over forty Aboriginal and Torres Strait Islander researchers, practitioners, community members, and allies gathered at the Centre for Global Indigenous Futures at the Wallumattagal Campus of Macquarie University in Sydney to envisage Aboriginal and Torres Strait Islander AI futures. This publication reports on attendees' vision for the future of AI for Aboriginal and Torres Strait Islander people.
Online scams are a growing threat in India, impacting millions and causing substantial financial losses year over year. This white paper presents ShieldUp!, a novel mobile game prototype designed to inoculate users against common online scams by leveraging the principles of psychological inoculation theory. ShieldUp! exposes users to weakened versions of manipulation tactics frequently used by scammers, and teaches them to recognize and pre-emptively refute these techniques. A randomized controlled trial (RCT) with 3,000 participants in India was conducted to evaluate the game's efficacy in helping users better identify scam scenarios. Participants were assigned to one of three groups: the ShieldUp! group (play time: 15 min), a general scam awareness group (watching videos and reading tips for 10-15 min), and a control group (playing "Chrome Dino", an unrelated game, for 10 minutes). Scam discernment ability was measured using a newly developed Scam Discernment Ability Test (SDAT-10) before the intervention, immediately after, and at a 21-day follow-up. Results indicated that participants who played ShieldUp! showed a significant improvement in their ability to identify scams compared to the other two groups, and this improvement was maintained at follow-up. Importantly, while both interventions initially led users to show increased skepticism towards even genuine online offers (NOT Scam scenarios), this effect dissipated after 21 days, suggesting no long-term negative impact on user trust. This study demonstrates the potential of game-based inoculation as a scalable and effective scam prevention strategy, offering valuable insights for product design, policy interventions, and future research, including the need for longitudinal studies and cross-cultural adaptations.
Permission Rationales in the Web Ecosystem: An Exploration of Rationale Text and Design Patterns
Yusra Elbitar
Soheil Khodayari
Marian Harbach
Gianluca De Stefano
Balazs Engedy
Giancarlo Pellegrino
Sven Bugiel
CHI 2025, ACM
Modern web applications rely on features like camera and geolocation for personalized experiences, requiring user permission via browser prompts. To explain these requests, applications provide rationales—contextual information on why permissions are needed. Despite their importance, little is known about how rationales appear on the web or their influence on user decisions.
This paper presents the first large-scale study of how the web ecosystem handles permission rationales, covering three areas: (i) identifying webpages that use permissions, (ii) detecting and classifying permission rationales, and (iii) analyzing their attributes to understand their impact on user decisions. We examined over 770K webpages from Chrome telemetry, finding 3.6K unique rationale texts and 749 rationale UIs across 85K pages. We extracted key rationale attributes and assessed their effect on user behavior by cross-referencing them with Chrome telemetry data. Our findings reveal nine key insights, providing the first evidence of how different rationales affect user decisions.
Triaging mammography with artificial intelligence: an implementation study
Sarah M. Friedewald
Sunny Jansen
Fereshteh Mahvar
Timo Kohlberger
David V. Schacht
Sonya Bhole
Dipti Gupta
Scott Mayer McKinney
Stacey Caron
David Melnick
Mozziyar Etemadi
Samantha Winter
Alejandra Maciel
Luca Speroni
Martha Sevenich
Arnav Agharwal
Rubin Zhang
Gavin Duggan
Shiro Kadowaki
Atilla Kiraly
Jie Yang
Basil Mustafa
Krish Eswaran
Shravya Shetty
Breast Cancer Research and Treatment (2025)
Purpose
Many breast centers are unable to provide immediate results at the time of screening mammography, which delays patient care. Implementing artificial intelligence (AI) could identify patients who may have breast cancer and accelerate the time to diagnostic imaging and biopsy diagnosis.
Methods
In this prospective randomized, unblinded, controlled implementation study we enrolled 1000 screening participants between March 2021 and May 2022. The experimental group used an AI system to prioritize a subset of cases for same-visit radiologist evaluation, and same-visit diagnostic workup if necessary. The control group followed the standard of care. The primary operational endpoints were time to additional imaging (TA) and time to biopsy diagnosis (TB).
Results
The final cohort included 463 experimental and 392 control participants. The one-sided Mann-Whitney U test was employed for analysis of TA and TB. In the control group, the mean TA was 25.6 days [95% CI 22.0–29.9] and the mean TB was 55.9 days [95% CI 45.5–69.6]. In comparison, the experimental group's mean TA was reduced by 25% (6.4 fewer days [one-sided 95% CI > 0.3], p<0.001) and mean TB was reduced by 30% (16.8 fewer days [one-sided 95% CI > 5.1], p=0.003). The time reduction was more pronounced for AI-prioritized participants in the experimental group. All participants eventually diagnosed with breast cancer were prioritized by the AI.
Conclusions
Implementing AI prioritization can accelerate care timelines for patients requiring additional workup, while maintaining the efficiency of delayed interpretation for most participants. Reducing diagnostic delays could contribute to improved patient adherence, decreased anxiety, and reduced disparities in access to timely care.
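The one-sided Mann-Whitney U test reported in the Results is straightforward to reproduce on synthetic stand-in data (the values below are simulated for illustration and are not the study's data), e.g. with SciPy:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic stand-in data: days to additional imaging (TA) per group.
rng = np.random.default_rng(42)
control_ta = rng.gamma(shape=2.0, scale=12.8, size=392)       # mean ~25.6 days
experimental_ta = rng.gamma(shape=2.0, scale=9.6, size=463)   # mean ~19.2 days

# One-sided test, as in the paper: is experimental TA stochastically smaller?
stat, p = mannwhitneyu(experimental_ta, control_ta, alternative="less")
print(f"U = {stat:.0f}, one-sided p = {p:.4g}")
```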
Snap-it, Tap-it, Splat-it: Tactile-Informed 3D Gaussian Splatting for Reconstructing Challenging Surfaces
Mauro Comi
Max Yang
Jonathan Tremblay
Valts Blukis
Yijiong Lin
Nathan Lepora
Laurence Aitchison
2025
Touch and vision go hand in hand, mutually enhancing our ability to understand the world. From a research perspective, the problem of mixing touch and vision is underexplored and presents interesting challenges. To this end, we propose Tactile-Informed 3DGS, a novel approach that incorporates touch data (local depth maps) with multi-view vision data to achieve surface reconstruction and novel view synthesis. Our method optimises 3D Gaussian primitives to accurately model the object's geometry at points of contact. By creating a framework that decreases the transmittance at touch locations, we achieve a refined surface reconstruction, ensuring a uniformly smooth depth map. Touch is particularly useful when considering non-Lambertian objects (e.g. shiny or reflective surfaces), since contemporary methods tend to fail to faithfully reconstruct specular highlights. By combining vision and tactile sensing, we achieve more accurate geometry reconstructions with fewer images than prior methods. We conduct evaluations on objects with glossy and reflective surfaces and demonstrate the effectiveness of our approach, offering significant improvements in reconstruction quality.
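For context on "decreasing the transmittance at touch locations": in standard 3D Gaussian Splatting, a pixel color is alpha-composited front to back, with the transmittance term gating each later Gaussian's contribution (this is the generic 3DGS rendering equation, not the paper's specific objective):

```latex
C \;=\; \sum_{i=1}^{N} T_i\, \alpha_i\, c_i,
\qquad
T_i \;=\; \prod_{j=1}^{i-1} \bigl(1 - \alpha_j\bigr)
```

Driving $T$ toward zero at a contact point forces opacity to concentrate at the touched surface, which is one way to read the constraint the abstract describes.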