Publications

Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.



1 - 15 of 10795 publications
    Productionizing Quantum Mass Production
    Bill Huggins
    Nathan Wiebe
    arXiv (2026) (to appear)
    Abstract: For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass production techniques, which can sometimes allow us to perform operations many times in parallel for a cost comparable to a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass production techniques offer no benefit under a cost model that counts only the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a cost reduction of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data-loading step.
    FreshBrew: A Benchmark for Evaluating AI Agents on Java Code Migration
    Diganta Misra
    Yanqi Luo
    Anjali Sridhar
    Justine Gehring
    Silvio Soares Ribeiro Junior
    2026
    Abstract: AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative, but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals the critical strengths and limitations of current agentic approaches, offering actionable insights into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
    Reasoning-SQL: Reinforcement Learning with Partial Rewards for Reasoning-Enhanced Text-to-SQL
    Mohammadreza Pourreza
    Shayan Talaei
    Hailong Li
    Azalia Mirhoseini
    Amin Saberi
    Conference on Language Modeling (COLM) (2025) (to appear)
    Abstract: Text-to-SQL is a challenging task involving multiple reasoning-intensive subtasks, including natural language understanding, database schema comprehension, and precise SQL query formulation. Existing approaches often rely on handcrafted reasoning paths with inductive biases that can limit their overall effectiveness. Motivated by the recent success of reasoning-enhanced models such as DeepSeek R1 and OpenAI o1, which effectively leverage reward-driven self-exploration to enhance reasoning capabilities and generalization, we propose a novel set of partial rewards tailored specifically for the Text-to-SQL task. Our reward set includes schema-linking, AI feedback, n-gram similarity, and syntax checking, explicitly designed to address the reward-sparsity issue prevalent in reinforcement learning (RL). Leveraging group relative policy optimization (GRPO), our approach explicitly encourages large language models (LLMs) to develop the intrinsic reasoning skills necessary for accurate SQL query generation. With models of different sizes, we demonstrate that RL-only training with our proposed rewards consistently achieves higher accuracy and superior generalization compared to supervised fine-tuning (SFT). Remarkably, our RL-trained 14B-parameter model significantly outperforms larger proprietary models, e.g., o3-mini by 4% and Gemini-1.5-Pro-002 by 3%, on the BIRD benchmark. These results highlight the efficacy of our proposed RL-training framework with partial rewards for enhancing both accuracy and reasoning capabilities in Text-to-SQL tasks.
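    To illustrate how dense partial rewards of this kind can be blended, here is a minimal Python sketch. The n-gram-similarity and syntax-check components follow the abstract's description, but the whitespace tokenization, the use of SQLite's EXPLAIN as the parser, and the reward weights are our own hypothetical choices, not the paper's implementation.

```python
import sqlite3


def ngram_similarity(pred_sql, gold_sql, n=2):
    """Jaccard overlap of token n-grams between predicted and gold SQL
    (whitespace tokenization is a simplification)."""
    def grams(s):
        toks = s.lower().split()
        return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}
    a, b = grams(pred_sql), grams(gold_sql)
    return len(a & b) / len(a | b) if a | b else 0.0


def syntax_reward(pred_sql, schema_ddl):
    """1.0 if the query plans against the schema in SQLite, else 0.0.
    (Using SQLite's EXPLAIN as the syntax/schema check is our choice.)"""
    conn = sqlite3.connect(":memory:")
    conn.executescript(schema_ddl)
    try:
        conn.execute("EXPLAIN " + pred_sql)
        return 1.0
    except sqlite3.Error:
        return 0.0
    finally:
        conn.close()


def partial_reward(pred_sql, gold_sql, schema_ddl, w_sim=0.6, w_syntax=0.4):
    """Weighted blend of dense partial rewards; weights are hypothetical."""
    return (w_sim * ngram_similarity(pred_sql, gold_sql)
            + w_syntax * syntax_reward(pred_sql, schema_ddl))
```

    In a GRPO loop, a dense score like this would supplement the sparse execution-match signal, so partially correct queries still receive gradient signal.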
    Abstract: Generative AI (GenAI), particularly Large Language Models (LLMs), offers powerful capabilities for interpreting the complex data landscape in healthcare. In this paper, we present a comprehensive overview of the capabilities, requirements and applications of GenAI for deriving clinical insights and improving clinical efficiency. We first provide some background on the forms and sources of patient data, namely real-time Remote Patient Monitoring (RPM) streams and traditional Electronic Health Records (EHR). The sheer volume and heterogeneity of this combined data present significant challenges to clinicians and contribute to information overload. In addition, we explore the potential of LLM-powered applications for improving clinical efficiency. These applications can enhance navigation of longitudinal patient data and provide actionable clinical decision support through natural language dialogue. We discuss the opportunities this presents for streamlining clinician workflows and personalizing care, alongside critical challenges such as data integration complexity, ensuring data quality and RPM data reliability, maintaining patient privacy, validating AI outputs for clinical safety, mitigating bias, and ensuring clinical acceptance. We believe this work represents the first summary of GenAI techniques for managing clinician data overload due to combined RPM/EHR data complexities.
    Abstract: In this paper I describe the performance enhancements I implemented in a quantum-error-correction decoder developed at Google. The decoder is an open-source project, and I document the speedups achieved in this paper.
    Abstract: Tesseract is a most-likely-error decoder designed for quantum error-correcting codes. Tesseract conducts a search through a graph over the set of all subsets of errors to find the lowest-cost subset of errors consistent with the input syndrome. Although this set is exponentially large, the search can be made efficient in practice for random errors using A* along with a variety of pruning heuristics. We show through benchmark circuits for surface, color, and bivariate-bicycle codes that Tesseract is competitive with integer-programming-based decoders at moderate physical error rates. Finally, we compare surface and bivariate-bicycle codes using most-likely-error decoding.
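    To make the search concrete, here is a toy best-first search for the minimum-cost subset of errors whose combined syndrome matches a target. This is a simplified sketch, not Tesseract's implementation: it uses a zero heuristic (so it degenerates to Dijkstra), omits the pruning heuristics, and encodes syndromes as integer bitmasks of our own choosing.

```python
import heapq


def mle_decode(errors, target):
    """Return the minimum total cost of a subset of `errors` whose XORed
    syndrome bitmasks equal `target`, or None if no subset matches.
    `errors` is a list of (cost, syndrome_bitmask) pairs."""
    # State: (accumulated cost, next error index to consider, syndrome so far).
    heap = [(0.0, 0, 0)]
    best = {(0, 0): 0.0}
    while heap:
        cost, i, syn = heapq.heappop(heap)
        if i == len(errors):
            if syn == target:
                return cost  # popped in cost order, so this is the cheapest
            continue
        c, s = errors[i]
        # Branch on excluding or including error i.
        for nc, ns in ((cost, syn), (cost + c, syn ^ s)):
            key = (i + 1, ns)
            if nc < best.get(key, float("inf")):
                best[key] = nc
                heapq.heappush(heap, (nc, i + 1, ns))
    return None
```

    A real decoder replaces the zero heuristic with an admissible lower bound on the remaining cost, which is what makes A* with pruning practical at scale.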
    A personal health large language model for sleep and fitness coaching
    Anastasiya Belyaeva
    Zhun Yang
    Nick Furlotte
    Chace Lee
    Erik Schenck
    Yojan Patel
    Jian Cui
    Logan Schneider
    Robby Bryant
    Ryan Gomes
    Allen Jiang
    Roy Lee
    Javier Perez
    Jamie Rogers
    Cathy Speed
    Shyam Tailor
    Megan Walker
    Jeffrey Yu
    Tim Althoff
    Conor Heneghan
    Mark Malhotra
    Shwetak Patel
    Shravya Shetty
    Jiening Zhan
    Daniel McDuff
    Nature Medicine (2025)
    Abstract: Although large language models (LLMs) show promise for clinical healthcare applications, their utility for personalized health monitoring using wearable device data remains underexplored. Here we introduce the Personal Health Large Language Model (PH-LLM), designed for applications in sleep and fitness. PH-LLM is a version of the Gemini LLM that was finetuned for text understanding and reasoning when applied to aggregated daily-resolution numerical sensor data. We created three benchmark datasets to assess multiple complementary aspects of sleep and fitness: expert domain knowledge, generation of personalized insights and recommendations and prediction of self-reported sleep quality from longitudinal data. PH-LLM achieved scores that exceeded a sample of human experts on multiple-choice examinations in sleep medicine (79% versus 76%) and fitness (88% versus 71%). In a comprehensive evaluation involving 857 real-world case studies, PH-LLM performed similarly to human experts for fitness-related tasks and improved over the base Gemini model in providing personalized sleep insights. Finally, PH-LLM effectively predicted self-reported sleep quality using a multimodal encoding of wearable sensor data, further demonstrating its ability to effectively contextualize wearable modalities. This work highlights the potential of LLMs to revolutionize personal health monitoring via tailored insights and predictions from wearable data and provides datasets, rubrics and benchmark performance to further accelerate personal health-related LLM research.
    Integer Programming for Generalized Causal Bootstrap Designs
    Adel Javanmard
    Nick Doudchenko
    Proceedings of the 42nd International Conference on Machine Learning (2025)
    Abstract: In experimental causal inference, we distinguish between two sources of uncertainty: design uncertainty, due to the treatment assignment mechanism, and sampling uncertainty, when the sample is drawn from a super-population. This distinction matters in settings with small fixed samples and heterogeneous treatment effects, as in geographical experiments. The standard bootstrap procedure most often used by practitioners primarily estimates sampling uncertainty, and the causal bootstrap procedure, which accounts for design uncertainty, was developed for the completely randomized design and the difference-in-means estimator, whereas non-standard designs and estimators are often used in these low-power regimes. We address this gap by proposing an integer program which computes numerically the worst-case copula used as an input to the causal bootstrap method, in a wide range of settings. Specifically, we prove the asymptotic validity of our approach for unconfounded, conditionally unconfounded, and individualistic with bounded confoundedness assignments, as well as generalizing to any linear-in-treatment and quadratic-in-treatment estimators. We demonstrate the refined confidence intervals achieved through simulations of small geographical experiments.
    Abstract: This tutorial examines the progress and scaling limitations of IM-DD-based optical technologies and explores how coherent technology optimized for datacenter use cases, including a newly proposed polarization-folding, time-diversity approach and a novel single-sideband coherent detection technology, can address some of these challenges.
    Abstract: In my talk, I will present a historical overview of the different numerical methods used to find optimal contraction paths. I will introduce TNSA, which was used to optimize the contraction in [Nature, 634 (8033), 2024], and TNCO, an ongoing effort to build upon TNSA.
    Landscape of Thoughts: Visualizing the Reasoning Process of Large Language Models
    Zhanke Zhou
    Xuan Li
    Zhaocheng Zhu
    Michael Galkin
    Xiao Feng
    Sanmi Koyejo
    Jian Tang
    Bo Han
    Reasoning and Planning for LLMs @ ICLR 2025
    Abstract: Numerous applications of large language models (LLMs) rely on their ability to perform step-by-step reasoning. However, the reasoning behavior of LLMs remains poorly understood, posing challenges to research, development, and safety. To address this gap, we introduce the landscape of thoughts, the first visualization tool for users to inspect the reasoning paths of chain-of-thought and its derivatives on any multiple-choice dataset. Specifically, we represent the states in a reasoning path as feature vectors that quantify their distances to all answer choices. These features are then visualized in two-dimensional plots using t-SNE. Qualitative analysis shows that the landscape of thoughts effectively distinguishes between strong and weak models, correct and incorrect answers, as well as different reasoning tasks. It also uncovers undesirable reasoning patterns, such as low consistency and high uncertainty. Additionally, users can adapt our tool to a neural model that predicts any property they observe. We showcase this advantage by adapting our tool to a lightweight verifier, which significantly improves reasoning by evaluating the correctness of reasoning paths.
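    The core featurization step, mapping each reasoning state to its distances to all answer choices, can be sketched as follows. The paper derives distances from the model itself; the bag-of-words cosine similarity here is a deliberately crude stand-in, and the t-SNE projection step is omitted.

```python
import math
from collections import Counter


def bow(text):
    """Bag-of-words vector as a Counter."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two Counter vectors (0.0 if either is empty)."""
    dot = sum(a[k] * b[k] for k in set(a) | set(b))
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def state_features(state_text, choices):
    """One feature vector per reasoning state: its similarity to each answer
    choice. Stacking these vectors over all states of many reasoning paths
    gives the matrix that would then be projected to 2D (e.g. with t-SNE)."""
    v = bow(state_text)
    return [cosine(v, bow(c)) for c in choices]
```

    The key design choice is that every state lives in the same low-dimensional "answer space," so trajectories from different questions and models become directly comparable.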
    Beyond Retrieval: Generating Narratives in Conversational Recommender Systems
    Krishna Sayana
    Raghavendra Vasudeva
    Yuri Vasilevski
    Kun Su
    Liam Hebert
    James Pine
    Hubert Pham
    Ambarish Jash
    Sukhdeep Sodhi
    (2025)
    Abstract: Large Language Models (LLMs) have shown remarkable progress in generating human-quality text and engaging in complex reasoning. This presents a unique opportunity to revolutionize conversational recommender systems by enabling them to generate rich, engaging and personalized narratives that go beyond recommendations. However, the lack of suitable datasets limits research in this area. This paper addresses this challenge by making two key contributions. First, we introduce REGEN (Reviews Enhanced with GEnerative Narratives), a new dataset extending the Amazon Product Reviews with rich user narratives. Furthermore, we perform an extensive automated evaluation of the dataset using a rater LLM. Second, the paper introduces a fusion architecture (a CF model with an LLM) which serves as a baseline for REGEN. To the best of our knowledge, this represents the first attempt to analyze the capabilities of LLMs in understanding recommender signals and generating rich narratives. We demonstrate that LLMs can effectively learn from simple fusion architectures utilizing interaction-based CF embeddings, and this can be further enhanced using the metadata and personalization data associated with items. Our experiments show that combining CF and content embeddings leads to improvements of 4-12% in key language metrics compared to using either type of embedding individually. We also provide an analysis to interpret their contributions to this new generative task.
    Syntactic and Semantic Gender Biases in the Language on Children’s Television: Evidence from a Corpus of 98 Shows from 1960 to 2018
    Andrea Vial
    Ruyuan Zuo
    Shreya Havaldar
    Morteza Dehghani
    Andrei Cimpian
    Psychological Science (2025)
    Abstract: Biased media content shapes children’s social concepts and identities. We examined gender bias in a large corpus of scripts from 98 children’s television programs spanning 1960 to 2018 (6,600 episodes, ~2.7 million sentences, ~16 million words). We focused on agency and communion, the fundamental psychological dimensions underlying gender stereotypes. At the syntactic level, words referring to men/boys (vs. women/girls) appear more often in the agent (vs. patient) role. This syntactic bias remained stable between 1960 and 2018. At the semantic level, words referring to men/boys (vs. women/girls) co-occurred more often with words denoting agency. Words denoting communion showed both stereotypical and counterstereotypical associations. Some semantic gender biases have remained unchanged or weakened over time; others have grown. These findings suggest gender stereotypes are built into the core of children’s stories. Whether we are closer to gender equality in children’s media depends on where one looks.
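    The semantic-level measurement, how often gendered words co-occur with agency- or communion-denoting words, can be sketched with a simple window-based rate. This is a minimal stand-in for the corpus analysis; the word lists, window size, and tokenization here are illustrative assumptions, not the paper's lexicons or method.

```python
def cooccurrence_rate(tokens, targets, context_words, window=5):
    """Fraction of occurrences of words in `targets` that have at least one
    word from `context_words` within +/- `window` tokens. Comparing this
    rate for male- vs. female-referring targets against an agency lexicon
    gives a crude semantic-bias measure."""
    hits = total = 0
    for i, tok in enumerate(tokens):
        if tok in targets:
            total += 1
            lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
            if any(tokens[j] in context_words for j in range(lo, hi) if j != i):
                hits += 1
    return hits / total if total else 0.0
```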
    Abstract: In the Max k-Weight SAT (aka Max SAT with Cardinality Constraint) problem, we are given a CNF formula with n variables and m clauses together with a positive integer k. The goal is to find an assignment where at most k variables are set to one that satisfies as many constraints as possible. Recently, Jain et al. (SODA 2023) gave an FPT approximation scheme (FPT-AS) with running time 2^O((dk/ε)^d) * (n + m)^O(1) for Max k-Weight SAT when the incidence graph is K_{d,d}-free. They asked whether a polynomial-size approximate kernel exists. In this work, we answer this question positively by giving a (1 − ε)-approximate kernel with (dk/ε)^O(d) variables. This also implies an improved FPT-AS running in time (dk/ε)^O(dk) * (n+m)^O(1) for the problem. Our approximate kernel is based mainly on a couple of greedy strategies together with a sunflower lemma-style reduction rule.
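    The greedy strategies at the heart of the kernel can be illustrated with a basic greedy step for the problem, restricted here (as our own simplification, not the paper's setting) to monotone formulas containing only positive literals.

```python
def greedy_max_k_sat(clauses, k):
    """Greedily choose up to k variables to set to one, each step picking the
    variable that newly satisfies the most clauses. Clauses are lists of
    positive integer variables (monotone CNF, a simplification). Returns the
    chosen variables and the number of satisfied clauses."""
    chosen, satisfied = set(), set()
    variables = {v for cl in clauses for v in cl}
    for _ in range(k):
        best, gain = None, 0
        for v in sorted(variables - chosen):  # sorted for determinism
            g = sum(1 for i, cl in enumerate(clauses)
                    if i not in satisfied and v in cl)
            if g > gain:
                best, gain = v, g
        if best is None:  # no remaining variable satisfies any new clause
            break
        chosen.add(best)
        satisfied |= {i for i, cl in enumerate(clauses) if best in cl}
    return chosen, len(satisfied)
```

    Since the satisfied-clause count is monotone submodular, this greedy step already gives a (1 − 1/e)-approximation in the monotone case; the paper's kernel combines such greedy arguments with a sunflower-lemma-style reduction to bound the kernel size.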
    Text to 3D Object Generation for Scalable Room Assembly
    Sonia Laguna
    Alberto García García
    Marie-Julie Rakotosaona
    Stylianos Moschoglou
    Leonhard Helminger
    2025
    Abstract: Modern machine learning models for scene understanding, such as depth estimation and object tracking, rely on large, high-quality datasets that mimic real-world deployment scenarios. To address data scarcity, we introduce an end-to-end system for synthetic data generation that produces scalable, high-quality, and customizable 3D indoor scenes. By integrating text-to-image and multi-view diffusion models with NeRF-based meshing, this system generates high-fidelity 3D assets from text prompts and incorporates them into pre-defined floor plans using a rendering tool, Blender. By incorporating novel loss functions and training strategies into prior existing methods, our method supports on-demand object generation, bridging the domain gap between synthetic and real-world data. This system advances synthetic data’s role in addressing machine learning training limitations, enabling more robust and generalizable models for real-world applications.