Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
1–15 of 10,822 publications
mmMUSE: An mmWave-based Motion-resilient Universal Speech Enhancement System
Chenming He
Yanyong Zhang
Kai Wang
Dequan Wang
Lingyu Wang
Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies (IMWUT), ACM (2026) (to appear)
Abstract
Voice-based smart systems can greatly enhance user experiences by allowing higher-quality interactions through better voice perception. Speech enhancement can benefit such systems by isolating noise from speech. Recently, integrating millimeter-wave (mmWave) sensing with audio for speech perception has gained increasing attention due to microphones' limitations in noisy environments. However, mmWave-based vocal extraction is severely affected by motion, which disperses vocal signals across ranges and introduces distortions. In this paper, we propose an mmWave-based motion-resilient universal speech enhancement system called mmMUSE, which fuses mmWave and audio signals. To mitigate motion interference, we develop a Doppler-based method for motion-robust vocal signal extraction. Moreover, by introducing the Vocal-Noise-Ratio metric to assess the prominence of vocal signals in the mmWave return, we achieve real-time voice activity detection that gains 3.81 dB of SISDR on noisy speech. Additionally, we design a two-stage complex-valued network comprising an attention-based fusion network for cross-modal complementation and a time-frequency masking network that corrects the amplitude and phase of speech to isolate noise.
Using mmWave and audio datasets from 46 participants, mmMUSE outperforms state-of-the-art speech enhancement models, achieving an average SISDR improvement of 3.12 dB. It further achieves SISDR improvements of 16.51 dB, 17.93 dB, 14.93 dB, and 18.95 dB in controlled environments involving intense noise, extensive motion, multiple speakers, and various obstructive materials, respectively. Finally, we evaluate mmMUSE in real-world scenarios, including running, public spaces, and driving, maintaining a word error rate (WER) below 10%.
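The reported gains are in SI-SDR (scale-invariant signal-to-distortion ratio), the standard speech enhancement metric. As background, here is a minimal NumPy sketch of the textbook SI-SDR computation (the standard definition, not code from the paper):

```python
import numpy as np

def si_sdr(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Scale-invariant signal-to-distortion ratio (SI-SDR), in dB.

    Projects the estimate onto the reference to factor out scale,
    then compares the target component against the residual.
    """
    reference = reference - reference.mean()
    estimate = estimate - estimate.mean()
    # Optimal scaling of the reference toward the estimate.
    alpha = np.dot(estimate, reference) / np.dot(reference, reference)
    target = alpha * reference
    residual = estimate - target
    return 10 * np.log10(np.sum(target**2) / np.sum(residual**2))
```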
Abstract
AI coding assistants are rapidly becoming integral to modern software development. A key challenge in this space is the continual need to migrate and modernize codebases in response to evolving software ecosystems. Traditionally, such migrations have relied on rule-based systems and human intervention. With the advent of powerful large language models (LLMs), AI-driven agentic frameworks offer a promising alternative, but their effectiveness remains underexplored. In this paper, we introduce FreshBrew, a novel benchmark for evaluating AI-based agentic frameworks on project-level Java migrations. We benchmark several such frameworks, powered by state-of-the-art LLMs, and compare their performance against established rule-based tools. Our evaluation of AI agents on this benchmark of 228 repositories shows that the top-performing model, Gemini 2.5 Flash, can successfully migrate 56.5% of projects to JDK 17. Our empirical analysis reveals the critical strengths and limitations of current agentic approaches, offering actionable insight into their real-world applicability. By releasing FreshBrew publicly upon acceptance, we aim to facilitate rigorous, reproducible evaluation and catalyze progress in AI-driven codebase modernization.
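To make the evaluation protocol concrete, here is a hedged sketch of what a project-level migration harness can look like; the `agent` interface, JDK path, and Maven invocation are illustrative stand-ins, not FreshBrew's actual API:

```python
import os
import subprocess
from pathlib import Path

def attempt_migration(repo: Path, agent) -> bool:
    """Run one migration attempt and validate it by building and
    testing under the target JDK. `agent` and all names here are
    hypothetical stand-ins for whatever framework is under test."""
    agent.run(task="Migrate this project to JDK 17", workdir=repo)
    # A migration counts as successful only if the project still
    # builds and its tests pass on JDK 17.
    build = subprocess.run(
        ["mvn", "-q", "clean", "verify"],
        cwd=repo,
        env={**os.environ, "JAVA_HOME": "/usr/lib/jvm/jdk-17"},
        capture_output=True,
    )
    return build.returncode == 0

def success_rate(repos, agent) -> float:
    """Fraction of repositories migrated successfully."""
    return sum(attempt_migration(r, agent) for r in repos) / len(repos)
```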
Abstract
For many practical applications of quantum computing, the slowest and most costly steps involve coherently accessing classical data. We help address this challenge by applying mass-production techniques, which can sometimes allow us to perform an operation many times in parallel for a cost comparable to a single execution [1-3]. We combine existing mass-production results with modern approaches for loading classical data using "quantum read-only memory." We show that quantum mass-production techniques offer no benefit when we consider a cost model that focuses purely on the number of non-Clifford gates. However, analyzing the constant factors in a more nuanced cost model, we find that it may be possible to obtain a cost reduction of an order of magnitude or more for a variety of reasonably sized fault-tolerant quantum algorithms. We present several applications of quantum mass-production techniques beyond naive parallelization, including a strategy for reducing the cost of serial calls to the same data-loading step.
Cortina Conference Opening Remarks
Yu Chen
(2025)
Abstract
A short opening-remarks presentation at a Google-hosted conference on superconducting qubits (https://ai-quantum.cortinadampezzo.it/). The talk is a high-level review of progress and challenges in the field of superconducting qubits.
Abstract
Large language models (LLMs), optimized through human feedback, have rapidly emerged as a leading paradigm for developing intelligent conversational assistants. However, despite strong performance across many benchmarks, LLM-based agents may still lack conversational skills such as disambiguation: when faced with ambiguity, they often overhedge or implicitly guess users' true intents rather than asking clarification questions. Under task-specific settings, high-quality conversation samples are often limited, constituting a bottleneck for LLMs' ability to learn optimal dialogue action policies. We propose Action-Based Contrastive Self-Training (ACT), a quasi-online preference optimization algorithm based on Direct Preference Optimization (DPO) that enables data-efficient dialogue policy learning in multi-turn conversation modeling. We demonstrate ACT's efficacy under data-efficient tuning scenarios, even when no action labels are available, using multiple real-world conversational tasks: tabular-grounded question answering, machine reading comprehension, and AmbigSQL, a novel task for disambiguating information-seeking requests for complex SQL generation in data-analysis agents. Additionally, we propose evaluating LLMs' ability to function as conversational agents by examining whether they can implicitly recognize and reason about ambiguity in conversation. ACT demonstrates substantial conversation modeling improvements over standard tuning approaches like supervised fine-tuning and DPO.
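ACT builds on DPO, so the standard DPO objective is useful background; in ACT, the preference pair contrasts dialogue actions (e.g., asking a clarifying question versus answering directly). A minimal sketch of the standard loss, not the paper's exact implementation:

```python
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO objective. Inputs are summed log-probabilities
    of the preferred ("chosen") and dispreferred ("rejected")
    responses under the policy being trained and under a frozen
    reference model; beta scales the implicit reward."""
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Maximize the margin between preferred and dispreferred actions.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```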
Speculative Knowledge Distillation: Bridging the Teacher-Student Gap Through Interleaved Sampling
Lei Li
Wenda Xu
Rishabh Agarwal
William Wang
Dhruv Madeka
ICLR 2025
Abstract
Recent knowledge distillation (KD) research has made significant progress on improving smaller student models to match larger teachers' performance. Two notable methods, supervised KD and on-policy KD, have emerged as state-of-the-art approaches. However, supervised KD for auto-regressive models suffers from a distribution mismatch between training over a fixed dataset and inference over student-generated outputs. Conversely, on-policy KD, which uses student-generated samples for training, can suffer from low-quality training examples and the teacher's potential inaccuracies in assessing these samples. To address these limitations, we introduce Speculative Knowledge Distillation (SKD). Instead of training solely on teacher- or student-proposed samples, SKD leverages the student model to initially propose tokens following its own generation distribution. Subsequently, the teacher model is employed to replace tokens that are deemed out-of-distribution. Compared with prior approaches: 1) the samples generated by SKD are more likely to align with the student's inference-time distribution, and 2) SKD can mitigate the generation of low-quality sequences by incorporating the teacher's feedback at each token. Furthermore, we demonstrate that SKD is a generic framework capable of implementing both supervised and on-policy knowledge distillation as specific instances. To validate SKD's effectiveness, we apply it to distill autoregressive large language models for various tasks, including translation, summarization, math, and instruction following. Our experiments consistently demonstrate SKD's superior performance compared to existing methods across different domains, tasks, data sizes, and model initialization strategies.
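The interleaved sampling loop described above can be sketched as follows, assuming Hugging Face-style causal LMs and batch size 1; the top-k acceptance rule is one plausible out-of-distribution criterion, not necessarily the paper's exact test:

```python
import torch

@torch.no_grad()
def skd_sample(student, teacher, prompt_ids, max_new_tokens=64, top_k=25):
    """Sketch of SKD-style interleaved sampling: the student proposes
    each token; the teacher replaces tokens it deems
    out-of-distribution. Assumes batch size 1."""
    ids = prompt_ids
    for _ in range(max_new_tokens):
        student_logits = student(ids).logits[:, -1, :]
        proposal = torch.multinomial(
            torch.softmax(student_logits, dim=-1), num_samples=1)
        teacher_logits = teacher(ids).logits[:, -1, :]
        teacher_topk = teacher_logits.topk(top_k, dim=-1).indices
        if (teacher_topk == proposal).any():
            next_id = proposal               # teacher accepts the student token
        else:
            next_id = torch.multinomial(     # teacher overrides the proposal
                torch.softmax(teacher_logits, dim=-1), num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids
```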
Integer Programming for Generalized Causal Bootstrap Designs
Adel Javanmard
Nick Doudchenko
Proceedings of the 42nd International Conference on Machine Learning (ICML 2025)
Abstract
In experimental causal inference, we distinguish between two sources of uncertainty: design uncertainty, due to the treatment assignment mechanism, and sampling uncertainty, when the sample is drawn from a super-population. This distinction matters in settings with small fixed samples and heterogeneous treatment effects, as in geographical experiments. The standard bootstrap procedure most often used by practitioners primarily estimates sampling uncertainty, and the causal bootstrap procedure, which accounts for design uncertainty, was developed for the completely randomized design and the difference-in-means estimator, whereas non-standard designs and estimators are often used in these low-power regimes. We address this gap by proposing an integer program that numerically computes the worst-case copula used as an input to the causal bootstrap method in a wide range of settings. Specifically, we prove the asymptotic validity of our approach for unconfounded, conditionally unconfounded, and individualistic with bounded confoundedness assignments, and generalize to any linear-in-treatment and quadratic-in-treatment estimator. We demonstrate the refined confidence intervals achieved through simulations of small geographical experiments.
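As a toy illustration of the worst-case-copula idea: for the difference-in-means estimator, the worst case corresponds to the coupling of the two outcome marginals that minimizes the variance of unit-level effects, and since the mean effect is fixed by the marginals this reduces to a linear program over the transportation polytope. This SciPy sketch covers only that special case under those stated assumptions; the paper's integer program handles more general designs and estimators:

```python
import numpy as np
from scipy.optimize import linprog

def worst_case_coupling(y1, y0):
    """Minimize E[(Y(1) - Y(0))^2] over couplings of the empirical
    marginals of Y(1) and Y(0). With E[Y(1) - Y(0)] fixed by the
    marginals, this minimizes the variance of unit-level effects."""
    y1, y0 = np.asarray(y1, float), np.asarray(y0, float)
    n, m = len(y1), len(y0)
    cost = ((y1[:, None] - y0[None, :]) ** 2).ravel()
    A_eq, b_eq = [], []
    for i in range(n):                # row sums: uniform marginal on y1
        row = np.zeros((n, m)); row[i, :] = 1.0
        A_eq.append(row.ravel()); b_eq.append(1.0 / n)
    for j in range(m):                # column sums: uniform marginal on y0
        col = np.zeros((n, m)); col[:, j] = 1.0
        A_eq.append(col.ravel()); b_eq.append(1.0 / m)
    res = linprog(cost, A_eq=np.array(A_eq), b_eq=np.array(b_eq),
                  bounds=(0, None), method="highs")
    return res.x.reshape(n, m)

# Sanity check: the solution is the isotone (comonotone) coupling.
print(worst_case_coupling([1.0, 2.0, 3.0], [10.0, 20.0, 30.0]).round(3))
```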
Consideration on CMAS arriving as discrete particles
Eric H. Jordan
Stephen Jordan
Hiram Diaz
Byung-gun Jun
(2025)
Abstract
Turbine contaminants known as CMAS mostly arrive at turbine hot sections as individual particles spanning a range of mineral compositions; they are deposited as splats that, within a small area, can be treated as arriving at random locations. By the time the particles reach the hot section, the maximum particle size is believed to be 10 microns. A simplified heat transfer analysis suggests that the arrival temperature will be the turbine inlet temperature. Using AFRL03 as a representative set of possible minerals, a mixture of melted and un-melted particles will arrive at most turbine inlet temperatures. The 31 combinations of the 5 minerals of AFRL03 present a wide range of melting points, which are experimentally investigated in this paper. As expected, combinations generally melt at lower temperatures than the highest-melting mineral in each combination. The progression of conditions, starting with the arrival of isolated individual minerals, is modeled using Monte Carlo simulations and known results from percolation theory. This allows understanding of the development of the coverage fraction, and of the potential for mineral mixing that is important to melt behavior, as a function of normalized CMAS dose. Using the normalized CMAS dose, it is also possible to comment on the likely relative fraction of coating life during which less than fully homogenized CMAS dominates behavior. It is noteworthy that 4 out of 5 minerals and 4 mineral combinations lack either calcium or silicon or both, and also melt below 1300°C. Interaction in the early deposition stage involves non-CMAS-like chemistries.
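For intuition about the coverage-fraction modeling, here is a small Monte Carlo sketch of random splat arrival; all parameters are illustrative rather than taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def coverage_fraction(normalized_dose, n_splats=5_000, grid=256):
    """Drop equal-area circular splats uniformly at random on a unit
    substrate and measure the covered area fraction at a given
    normalized dose (total splat area / substrate area)."""
    splat_area = normalized_dose / n_splats
    radius = np.sqrt(splat_area / np.pi)
    yy, xx = np.mgrid[0:grid, 0:grid] / grid
    covered = np.zeros((grid, grid), dtype=bool)
    for x, y in zip(rng.random(n_splats), rng.random(n_splats)):
        covered |= (xx - x) ** 2 + (yy - y) ** 2 < radius**2
    return covered.mean()

# For fully random placement, coverage approaches 1 - exp(-dose)
# (edge effects make this simple estimate run slightly low).
for dose in (0.5, 1.0, 2.0):
    print(dose, round(coverage_fraction(dose), 3), round(1 - np.exp(-dose), 3))
```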
Rapid Initial-State Preparation for the Quantum Simulation of Strongly Correlated Molecules
Dominic Berry
Yu Tong
Alec White
Tae In Kim
Lin Lin
Seunghoon Lee
Garnet Chan
PRX Quantum, 6 (2025), 020327
Abstract
Studies on quantum algorithms for ground-state energy estimation often assume perfect ground-state preparation; however, in reality the initial state will have imperfect overlap with the true ground state. Here, we address that problem in two ways: by faster preparation of matrix-product-state (MPS) approximations and by more efficient filtering of the prepared state to find the ground-state energy. We show how to achieve unitary synthesis with a Toffoli complexity about 7× lower than that in prior work and use that to derive a more efficient MPS-preparation method. For filtering, we present two different approaches: sampling and binary search. For both, we use the theory of window functions to avoid large phase errors and minimize the complexity. We find that the binary-search approach provides better scaling with the overlap at the cost of a larger constant factor, such that it will be preferred for overlaps less than about 0.003. Finally, we estimate the total resources to perform ground-state energy estimation of Fe-S cluster systems, including the FeMo cofactor, by estimating the overlap of different MPS initial states with potential ground states of the FeMo cofactor using an extrapolation procedure. With a modest MPS bond dimension of 4000, our procedure produces an estimate of approximately 0.9 overlap squared with a candidate ground state of the FeMo cofactor, yielding a total resource estimate of 7.3×10^10 Toffoli gates; neglecting the search over candidates and assuming the accuracy of the extrapolation, this validates prior estimates that have used perfect ground-state overlap. This presents an example of a practical path to preparing states of high overlap in a challenging-to-compute chemical system.
Round Elimination via Self-Reduction: Closing Gaps for Distributed Maximal Matching
Seri Khoury
Aaron Schild
2025
Abstract
We show that there is no randomized LOCAL algorithm for maximal matching (MM) that takes o(min(log D, sqrt(log n))) rounds, even on regular graphs and trees. This improves upon the KMW lower bound from 21 years ago and shows a surprising separation between MM and maximal independent set (MIS) on trees, among other implications.
Matryoshka Model Learning for Improved Elastic Student Models
Chetan Verma
Cho-Jui Hsieh
Ngot Bui
Yang Zhang
Wen Chen
Xin Liu
Inderjit Dhillon
2025
Abstract
Production machine learning models in the industry are often developed with a primary focus on maximizing model quality. However, these models must ultimately operate within the resource constraints of their serving infrastructure, including limitations on compute, memory, and bandwidth. The rapid evolution of serving hardware, particularly with advancements in accelerator technology, necessitates periodic retraining to leverage newer, more efficient infrastructure. This cyclical retraining process is resource-intensive, demanding significant model development time and incurring substantial training costs. This challenge is further amplified by the trend towards increasingly complex models, which inherently require greater computational resources for training and deployment. While prior work has explored techniques like supernet sub-model extraction to address training efficiency, a critical gap remains: the efficient generation of a spectrum of high-quality models from an existing production model, a common requirement in diverse industrial applications. To bridge this gap, we introduce a novel approach leveraging a "Teaching Assistant" (TA) model, derived from a given production model (referred to as the Student model). We demonstrate that by co-training the Student and TA models with a Matryoshka structure while using online distillation, we not only enhance the Student model's performance but also enable the flexible creation of a model family offering a compelling trade-off between model quality and model size.
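A minimal sketch of the Matryoshka idea, in which nested sub-models reuse prefix slices of shared weights and every width is trained with a distillation term toward the TA's logits; the architecture, widths, and names are illustrative only, not the paper's setup:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MatryoshkaMLP(nn.Module):
    """One set of weights yields a family of student sizes: smaller
    sub-models use prefix slices of the full hidden layer."""
    def __init__(self, d_in=128, d_hidden=256, d_out=10,
                 widths=(64, 128, 256)):
        super().__init__()
        self.widths = widths
        self.fc1 = nn.Linear(d_in, d_hidden)
        self.fc2 = nn.Linear(d_hidden, d_out)

    def forward(self, x, width):
        h = F.relu(self.fc1(x)[..., :width])   # prefix slice of hidden units
        w = self.fc2.weight[:, :width]         # matching slice of output weights
        return F.linear(h, w, self.fc2.bias)

def matryoshka_distill_loss(model, x, y, ta_logits, alpha=0.5):
    """Train every nested width on task loss plus online distillation
    toward a Teaching Assistant (TA) model's logits."""
    loss = 0.0
    for width in model.widths:
        logits = model(x, width)
        task = F.cross_entropy(logits, y)
        distill = F.kl_div(F.log_softmax(logits, -1),
                           F.softmax(ta_logits, -1), reduction="batchmean")
        loss = loss + (1 - alpha) * task + alpha * distill
    return loss
```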
Mastering Multiple-Expert Routing: Realizable H-Consistency and Strong Guarantees for Learning to Defer
Anqi Mao
Proceedings of the 42nd International Conference on Machine Learning (ICML 2025)
Abstract
The problem of learning to defer with multiple experts consists of optimally assigning input instances to experts, balancing the trade-off between their accuracy and computational cost. This is a critical challenge in natural language generation, but also in other fields such as image processing and medical diagnostics. Recent studies have proposed surrogate loss functions to optimize deferral, but challenges remain in ensuring their consistency properties. This paper introduces novel surrogate loss functions and efficient algorithms with strong theoretical learning guarantees. We address open questions regarding realizable $H$-consistency, $H$-consistency bounds, and Bayes-consistency for both single-stage (jointly learning predictor and deferral function) and two-stage (learning only the deferral function with a fixed expert) learning scenarios. For single-stage deferral, we introduce a family of new realizable $H$-consistent surrogate losses and further prove $H$-consistency for a selected member. For two-stage deferral, we derive new surrogate losses that achieve realizable $H$-consistency, $H$-consistency bounds, and Bayes-consistency for the two-expert scenario and, under natural assumptions, the multiple-expert scenario. Additionally, we provide enhanced theoretical guarantees under low-noise assumptions for both scenarios. Finally, we report the results of experiments using our proposed surrogate losses, comparing their performance against existing baselines.
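For readers new to the setting, the decision being learned can be illustrated with a toy routing rule; the paper's contribution is the surrogate losses and consistency guarantees for learning such rules, not this heuristic:

```python
import numpy as np

def defer(expert_error_estimates, expert_costs):
    """Toy deferral rule: for one input, route to the expert
    minimizing estimated error probability plus query cost."""
    scores = np.asarray(expert_error_estimates) + np.asarray(expert_costs)
    return int(np.argmin(scores))

# Example: a cheap small model vs. a costly but accurate large model.
print(defer([0.30, 0.05], [0.00, 0.20]))  # -> 1 (accuracy outweighs cost)
```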
Nearly Tight Regret Bounds for Revenue Maximization in Bilateral Trade
Simone di Gregorio
Paul Duetting
Federico Fusco
Chris Schwiegelshohn
FOCS 2025
Abstract
Bilateral trade models the task of intermediating between two strategic agents, a seller and a buyer, willing to trade a good for which they hold private valuations. We study this problem from the perspective of a broker, in a regret minimization framework. At each time step, a new seller and buyer arrive, and the broker has to propose a mechanism that is incentive-compatible and individually rational, with the goal of maximizing profit.
We propose a learning algorithm that guarantees a nearly tight regret in the stochastic setting when seller and buyer valuations are drawn i.i.d. from a fixed and possibly correlated unknown distribution. We further show that it is impossible to achieve sublinear regret in the non-stationary scenario where valuations are generated upfront by an adversary. Our ambitious benchmark for these results is the best incentive-compatible and individually rational mechanism. This separates us from previous works on efficiency maximization in bilateral trade, where the benchmark is a single number: the best fixed price in hindsight.
A particular challenge we face is that uniform convergence for all mechanisms' profits is impossible. We overcome this difficulty via a careful chaining analysis that proves convergence for a provably near-optimal mechanism at (essentially) optimal rate. We further showcase the broader applicability of our techniques by providing nearly optimal results for the joint ads problem.
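For intuition, one simple member of the benchmark class is a pair of posted prices, which is incentive-compatible and individually rational by construction; a minimal sketch:

```python
def posted_prices(seller_value, buyer_value, seller_price, buyer_price):
    """Pay the seller `seller_price` and charge the buyer
    `buyer_price`. Each agent accepts iff the trade is profitable for
    them, so truthful behavior is optimal (IC) and no one trades at a
    loss (IR); the broker keeps the spread when both accept."""
    trade = (seller_value <= seller_price) and (buyer_value >= buyer_price)
    profit = (buyer_price - seller_price) if trade else 0.0
    return trade, profit
```

The paper's benchmark ranges over all IC and IR mechanisms, a strictly richer class than such fixed prices.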
RemapRoute: Local Remapping of Internet Path Changes
Renata Cruz Teixeira
Christophe Diot
Italo Cunha
Elverton Fazzion
Darryl Veitch
2025
Abstract
Several systems rely on traceroute to track a large number of Internet paths as they change over time. Monitoring systems perform this task by remapping paths periodically or whenever a change is detected. This paper shows that such complete remapping is inefficient, because most path changes are localized to a few hops of a path. We develop RemapRoute, a tool to remap a path locally given the previously known path and a change point. RemapRoute sends targeted probes to locate and remap the often few hops that have changed. Our evaluation with trace-driven simulations and in a real deployment shows that local remapping reduces the average number of probes issued during remapping by 63% and 79%, respectively, when compared with complete remapping. At the same time, our results show that local remapping has little impact on the accuracy of inferred paths.
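The local-remapping idea can be sketched as follows; `probe` stands in for a targeted traceroute measurement toward the destination, and RemapRoute's actual probing logic is more sophisticated than this illustration:

```python
def remap_locally(old_path, change_hop, probe):
    """Given the previously known path (a list of hop addresses
    indexed by TTL) and a detected change point, re-probe hop by hop
    from the change point until the responses rejoin the old path,
    instead of re-tracing the whole route. Assumes the path length is
    unchanged; handling length changes needs extra logic."""
    new_path = list(old_path)
    ttl = change_hop
    while ttl < len(old_path):
        hop = probe(ttl)
        if hop == old_path[ttl]:
            break                 # paths have rejoined; stop probing
        new_path[ttl] = hop       # replace only the changed hop
        ttl += 1
    return new_path
```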
Abstract
A high-level talk about quantum computing at Google, given as an invited talk at the Kavli Frontiers of Science. *Please note that I am only using slides that have already been presented publicly by others on the team; all slides have previously passed review.*