Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
1 - 15 of 10100 publications
Concordance of randomised controlled trials for artificial intelligence interventions with the CONSORT-AI reporting guidelines
Aditya U Kale
Alastair Dennison
Alexander Martindale
An Wen Chan
Andrew Beam
Benjamin Ng
Cecilia S. Lee
Christopher Yau
David Moher
Gary Collins
Lauren Oakden-Rayner
Lavinia Ferrante di Ruffano
Melanie Calvert
Melissa D McCradden
Pearse Keane
Robert Golub
Samantha Cruz Rivera
Victoria Ngai
Xiaoxuan Liu
Nature Communications (2024)
Preview abstract
The Consolidated Standards of Reporting Trials extension for Artificial Intelligence interventions (CONSORT-AI) was published in September 2020. Since its publication, several randomised controlled trials (RCTs) of AI interventions have been published but their completeness and transparency of reporting is unknown. This systematic review assesses the completeness of reporting of AI RCTs following publication of CONSORT-AI and provides a comprehensive summary of RCTs published in recent years. 65 RCTs were identified, mostly conducted in China (37%) and USA (18%). Median concordance with CONSORT-AI reporting was 90% (IQR 77–94%), although only 10 RCTs explicitly reported its use. Several items were consistently under-reported, including algorithm version, accessibility of the AI intervention or code, and references to a study protocol. Only 3 of 52 included journals explicitly endorsed or mandated CONSORT-AI. Despite a generally high concordance amongst recent AI RCTs, some AI-specific considerations remain systematically poorly reported. Further encouragement of CONSORT-AI adoption by journals and funders may enable more complete adoption of the full CONSORT-AI guidelines.
View details
Preview abstract
Background. Wildfire research uses ensemble methods to analyze fire behaviors and assess uncertainties. Nonetheless, current research methods are confined either to simplified models or to complex simulations with practical limitations. Modern computing tools could allow for efficient, high-fidelity ensemble simulations. Aims. This study proposes a high-fidelity ensemble wildfire simulation framework for studying wildfire behavior, machine-learning tasks, fire-risk assessment, and uncertainty analysis. Methods. In this research, we present a simulation framework that integrates the Swirl-Fire large-eddy simulation tool for wildfire predictions with the Vizier optimization platform for automated run-time management of ensemble simulations and large-scale batch processing. All simulations are executed on tensor-processing units to enhance computational efficiency. Key results. A dataset of 117 simulations is created, each with 1.35 billion mesh points. The simulations are compared to existing experimental data and show good agreement in terms of fire rate of spread. Computations are performed for fire acceleration, mean rate of spread, and fireline intensity. Conclusions. Strong coupling between these parameters is observed for fire spread and intermittency. A critical Froude number that delineates plume-driven from convection-driven fires is identified and confirmed with observations from the literature. Implications. The ensemble simulation framework is efficient in facilitating parametric wildfire studies.
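For orientation only, a conventional form of the Froude number (an assumption for illustration; the precise definition and critical value used in the study may differ) compares the driving wind speed with the buoyant velocity scale of the fire plume:

```latex
% Hedged illustration: one common definition of the fire Froude number.
% U = ambient wind speed, g = gravitational acceleration, L = a flame length scale.
% The precise definition and threshold used in the study may differ.
\mathrm{Fr} = \frac{U}{\sqrt{g\,L}}, \qquad
\mathrm{Fr} \lesssim 1 \;\text{(plume-driven)}, \qquad
\mathrm{Fr} \gtrsim 1 \;\text{(convection-driven)}.
```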
View details
Understanding and Designing for Trust in AI Powered Developer Tooling
Ugam Kumar
Quinn Madison
IEEE Software (2024)
Preview abstract
Trust is central to how developers engage with AI. In this article, we discuss what we learned from developers about their level of trust in AI-enhanced developer tooling, how we translated those findings into product design recommendations to support customization, and the challenges we encountered along the way.
View details
FieldSwap: Data Augmentation for Effective Form-Like Document Extraction
Seth Ebner
IEEE 40th International Conference on Data Engineering (ICDE) (2024), pp. 4722-4732
Preview abstract
Extracting structured data from visually rich documents like invoices, receipts, financial statements, and tax forms is key to automating many business workflows. However, building extraction models in this domain often demands a large collection of high-quality training examples. To address this challenge, we introduce FieldSwap, a novel data augmentation technique specifically designed for such extraction problems. FieldSwap generates synthetic training examples by replacing key phrases indicative of one field with those corresponding to another. Our experiments on five diverse datasets demonstrate that incorporating FieldSwap-augmented data into the training process can enhance model performance by 1-11 F1 points, particularly when dealing with limited training data (10--100 documents). Additionally, we propose algorithms for automatically inferring key phrases from the training data. Our findings indicate that FieldSwap is effective regardless of whether key phrases are manually provided by human experts or inferred automatically.
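To make the augmentation idea concrete, here is a minimal sketch, not the authors' implementation; the field names, key phrases, and the (text, label) data representation are illustrative assumptions:

```python
# Minimal FieldSwap-style sketch: replace a key phrase indicative of one field
# with a key phrase of another field, relabeling the associated value so the
# example becomes synthetic training data for the target field.
# Field names, key phrases, and the representation are assumptions.
import random

KEY_PHRASES = {
    "invoice_date": ["Invoice Date", "Date of Issue"],
    "due_date": ["Due Date", "Payment Due"],
}

def field_swap(example, source_field, target_field, rng=random):
    """Return a synthetic copy of `example` (a list of (text, label) pairs)
    in which `source_field` key phrases and value labels become `target_field`."""
    swapped = []
    for text, label in example:
        if label == source_field and text in KEY_PHRASES[source_field]:
            swapped.append((rng.choice(KEY_PHRASES[target_field]), target_field))
        elif label == f"{source_field}_value":
            swapped.append((text, f"{target_field}_value"))   # relabel the value
        else:
            swapped.append((text, label))
    return swapped

example = [("Invoice Date", "invoice_date"), ("2024-01-31", "invoice_date_value")]
print(field_swap(example, "invoice_date", "due_date"))
```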
View details
Understanding the Dataset Practitioners Behind Large Language Models
Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (CHI EA '24), ACM, Honolulu, HI, USA (2024)
Preview abstract
As large language models (LLMs) become more advanced and impactful, it is increasingly important to scrutinize the data that they rely upon and produce. What is it to be a dataset practitioner doing this work? We approach this in two parts: first, we define the role of "dataset practitioners" by performing a retrospective analysis on the responsibilities of teams contributing to LLM development at a technology company, Google. Then, we conduct semi-structured interviews with a cross-section of these practitioners (N=10). We find that although data quality is a top priority, there is little consensus around what data quality is and how to evaluate it. Consequently, practitioners either rely on their own intuition or write custom code to evaluate their data. We discuss potential reasons for this phenomenon and opportunities for alignment.
View details
Preview abstract
Inter-sentence pauses are the silences that occur between sentences in a paragraph or a dialogue. They are an important aspect of long-form speech prosody, as they can affect the naturalness, intelligibility, and effectiveness of communication. However, the user perception of inter-sentence pauses in long-form speech synthesis is not well understood. Previous work often evaluates pause modelling in conjunction with other prosodic features, making it hard to explicitly study how raters perceive differences in inter-sentence pause lengths. In this paper, using multiple text-to-speech (TTS) datasets that cover different content types, domains, and settings, we investigate how sensitive raters are to changes in the durations of inter-sentence pauses in long-form speech by comparing ground-truth audio samples with renditions that have manipulated pause durations. This experimental design is meant to allow us to draw conclusions regarding the utility that can be expected from similar evaluations when applied to synthesized long-form speech. We find that, using standard evaluation methodologies, raters are not sensitive to variations in pause lengths unless these deviate markedly from the norms or expectations of the speech context.
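As an illustration of the kind of manipulation involved (the actual pipeline, sample rate, and pause-detection method used in the paper are not specified here and are assumed), each inter-sentence pause in a ground-truth waveform can be stretched or shortened before re-concatenation:

```python
# Hedged sketch (not the paper's pipeline): rescale each inter-sentence pause
# in a waveform by a chosen factor before re-concatenation. The sample rate,
# pause spans, and scale factor are illustrative assumptions.
import numpy as np

def rescale_pauses(audio, pause_spans, scale):
    """Return a copy of `audio` in which every (start, end) sample span in
    `pause_spans` is replaced by silence of `scale` times its original length."""
    pieces, cursor = [], 0
    for start, end in pause_spans:
        pieces.append(audio[cursor:start])                   # speech before the pause
        new_len = int(round((end - start) * scale))          # scaled pause duration
        pieces.append(np.zeros(new_len, dtype=audio.dtype))  # synthetic silence
        cursor = end
    pieces.append(audio[cursor:])                            # remaining speech
    return np.concatenate(pieces)

# Example: halve a single 0.5 s pause in a 3 s clip sampled at 24 kHz.
sample_rate = 24000
audio = np.random.randn(3 * sample_rate).astype(np.float32)
manipulated = rescale_pauses(audio, pause_spans=[(24000, 36000)], scale=0.5)
```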
View details
Preview abstract
What is it to explain the outputs of an opaque machine learning model? Popular strategies in the literature are to develop explainable machine learning techniques. These techniques approximate how the model works by providing local or global information about its inner workings. In this paper, we argue that, in some cases, explaining machine learning outputs requires appealing to a third kind of explanation that we call socio-structural explanations. The importance of socio-structural explanations is motivated by the observation that machine learning models are not autonomous mathematico-computational entities. Instead, their very existence is intrinsically tied to the social context in which they operate. Sometimes, social structures are mirrored in the design and training of machine learning models, and appealing to socio-structural explanations then offers the relevant explanation for why the output is obtained. By thoroughly examining a well-known case of racially biased algorithmic resource allocation in healthcare, we highlight the significance of socio-structural explanations. One ramification of our proposal is that, to understand how machine learning models perpetuate unjust social harms, interpreting them with model interpretability methods is not enough; providing socio-structural explanations adds explanatory adequacy as to how and why machine learning outputs are obtained.
View details
Preview abstract
Knowledge-grounded dialogue generation is a challenging task because it requires satisfying two fundamental yet often competing constraints: being responsive in a manner that is specific to what the conversation partner has said while also being attributable to an underlying source document. In this work, we bring this trade-off between these two objectives (specificity and attribution) to light and ask the question: Can explicit content planning before the response generation help the model to address this challenge? To answer this question, we design a framework called PLEDGE, which allows us to experiment with various plan variables explored in prior work, supporting both metric-agnostic and metric-aware approaches. While content planning shows promise, our results on whether it can actually help to navigate this trade-off are mixed -- planning mechanisms that are metric-aware (use automatic metrics during training) are better at automatic evaluations but underperform in human judgment compared to metric-agnostic mechanisms. We discuss how this may be caused by over-fitting to automatic metrics and the need for future work to better calibrate these metrics towards human judgment. We hope the observations from our analysis will inform future work that aims to apply content planning in this context.
View details
Meta-Manager: A Tool for Collecting and Exploring Meta Information about Code
Amber Horvath
Brad A. Myers
CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems (2024)
Preview abstract
Modern software engineering is in a state of flux. With more development utilizing AI code generation tools and the continued reliance on online programming resources, understanding code and the original intent behind it is becoming more important than it ever has been. To this end, we have developed the “Meta-Manager”, a Visual Studio Code extension, with a supplementary browser extension, that automatically collects and organizes changes made to code while keeping track of the provenance of each part of the code, including code that has been copy-pasted from popular programming resources online. These sources and subsequent changes are represented in the editor and may be explored using searching and filtering mechanisms to help developers answer historically hard-to-answer questions about code, its provenance, and its design rationale. In our evaluation of Meta-Manager, we found developers were successfully able to use it to answer otherwise unanswerable questions about an unfamiliar code base.
View details
Take it, Leave it, or Fix it: Measuring Productivity and Trust in Human-AI Collaboration
29th International Conference on Intelligent User Interfaces (IUI ’24), ACM, New York, NY, USA (2024)
Preview abstract
Although recent developments in generative AI have greatly enhanced the capabilities of conversational agents such as Google's Bard or OpenAI's ChatGPT, it's unclear whether the usage of these agents aids users across various contexts. To better understand how access to conversational AI affects productivity and trust, we conducted a mixed-methods, task-based user study, observing 76 software engineers (N=76) as they completed a programming exam with and without access to Bard. Effects on performance, efficiency, satisfaction, and trust vary depending on user expertise, question type (open-ended "solve" questions vs. definitive "search" questions), and measurement type (demonstrated vs. self-reported). Our findings include evidence of automation complacency, increased reliance on the AI over the course of the task, and increased performance for novices on “solve”-type questions when using the AI. We discuss common behaviors, design recommendations, and impact considerations to improve collaborations with conversational AI.
View details
Ubiquitous and Low-Cost Generation of Elevation Pseudo Ground Control Points
Etienne Le Grand
Moustafa Youssef
14th International Conference on Indoor Positioning and Indoor Navigation (IPIN). Hong Kong, China, 2024.
Preview abstract
In this paper, we design a system to generate Pseudo Ground Control Points (PGCPs) using standard, low-cost, widely available GNSS receivers in a crowd-sourced manner. We propose a number of GNSS point filters that remove different causes of errors and biases, and design a linear-regression height estimator that yields high-accuracy PGCP elevations. Evaluation of our system shows that the PGCPs can achieve a median accuracy of 22.5 cm across 25 metropolitan areas in the USA.
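A minimal sketch of the overall recipe (filter low-quality fixes, then fit a simple regression to estimate elevation); the filtering criterion, regressor, and thresholds below are illustrative assumptions rather than the paper's actual design:

```python
# Hedged sketch of a filter-then-regress PGCP elevation estimate.
# The HDOP threshold and the choice of regressing elevation on time are
# illustrative assumptions; the paper's filters and estimator may differ.
import numpy as np

def estimate_pgcp_elevation(timestamps, elevations, hdop, max_hdop=2.0):
    """Drop fixes with poor horizontal dilution of precision, then fit
    elevation ~ time by least squares and evaluate at the median time."""
    t, z, q = map(np.asarray, (timestamps, elevations, hdop))
    keep = q <= max_hdop                              # quality filter
    t, z = t[keep], z[keep]
    A = np.stack([t, np.ones_like(t)], axis=1)        # design matrix [t, 1]
    slope, intercept = np.linalg.lstsq(A, z, rcond=None)[0]
    return slope * np.median(t) + intercept

# Example with synthetic crowd-sourced fixes around a true elevation of 120 m.
rng = np.random.default_rng(0)
ts = np.linspace(0, 3600, 200)
obs = 120 + rng.normal(0, 0.5, ts.size)
print(f"estimated elevation: {estimate_pgcp_elevation(ts, obs, np.full(ts.size, 1.0)):.2f} m")
```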
View details
First Passage Percolation with Queried Hints
Kritkorn Karntikoon
Aaron Schild
Yiheng Shen
Ali Sinop
AISTATS (2024)
Preview abstract
Optimization problems are ubiquitous throughout the modern world. In many of these applications, the input is inherently noisy and it is expensive to probe all of the noise in the input before solving the relevant optimization problem. In this work, we study how much of that noise needs to be queried in order to obtain an approximately optimal solution to the relevant problem. We focus on the shortest path problem in graphs, where one may think of the noise as coming from real-time traffic. We consider the following model: start with a weighted base graph $G$ and multiply each edge weight by an independently chosen, uniformly random number in $[1,2]$ to obtain a random graph $G'$. This model is called \emph{first passage percolation}. Mathematicians have studied this model extensively when $G$ is a $d$-dimensional grid graph, but the behavior of shortest paths in this model is still poorly understood in general graphs. We make progress in this direction for a class of graphs that resembles real-world road networks. Specifically, we prove that if the geometric realization of $G$ has constant doubling dimension, then for a given $s-t$ pair, we only need to probe the weights on $((\log n) / \epsilon)^{O(1)}$ edges in $G'$ in order to obtain a $(1 + \epsilon)$-approximation to the $s-t$ distance in $G'$. We also demonstrate experimentally that this result is pessimistic -- one can even obtain a short path in $G'$ with a small number of probes to $G'$.
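The random-perturbation model itself is easy to state in code; the sketch below (using networkx and a grid graph purely for illustration, neither of which is prescribed by the paper) multiplies each base edge weight by an independent Uniform[1, 2] draw to form $G'$:

```python
# Hedged sketch of the first passage percolation model described above:
# each edge weight of a base graph G is scaled by an independent Uniform[1, 2]
# factor to obtain G'. The library and the grid example are assumptions.
import random
import networkx as nx

def perturb(G, low=1.0, high=2.0, seed=0):
    """Return G' with each edge weight multiplied by an independent Uniform[low, high] draw."""
    rng = random.Random(seed)
    Gp = G.copy()
    for u, v, data in Gp.edges(data=True):
        data["weight"] = data.get("weight", 1.0) * rng.uniform(low, high)
    return Gp

G = nx.grid_2d_graph(10, 10)             # base graph with unit edge weights
Gp = perturb(G)
dist = nx.shortest_path_length(Gp, (0, 0), (9, 9), weight="weight")
print(f"s-t distance in G': {dist:.2f}")
```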
View details
Open Se Cura: First Silicon Results of an Auditable and Transparent Hardware Root of Trust System using Open EDA in 16-nm
Guanchen Tao
Ming-Hung Chen
Bangfei Pan
Kai Yick
Dennis Sylvester
Mehdi Saligane
IEEE Solid-State Circuits Magazine, 16(2024), pp. 58-66
Preview abstract
Hardware Root of Trust (HRoT) is essential for Internet-of-Things (IoT) devices, as it provides critical protection of user data. However, each novel use case significantly lengthens the development time for an HRoT system. Furthermore, most HRoT solutions are proprietary, and users lack permission to inspect and audit such systems [1-2]. This paper introduces Open Se Cura, an open-source framework designed to expedite the implementation of secure and transparent HRoT systems. It utilizes open-source Electronic Design Automation (EDA) tools such as OpenROAD [3-4] and OpenFASOC [5-6], along with open-source Process Design Kits (PDKs), to provide a transparent and auditable approach to hardware-software co-design. This approach enables fast and trustworthy HRoT system implementation and is made openly available so that its results and security efficacy can be reproduced [7]. Our reference design is showcased through FPGA emulation, and we present the first measurement results of a 16-nm silicon implementation of Open Se Cura security-domain subsets integrated using open-source EDA.
View details
Nteasee: A qualitative study of expert and general population perspectives on deploying AI for health in African countries
Iskandar Haykel
Florence Ofori
Kerrie Kauer
Tousif Ahmad
Preview abstract
Background: Artificial intelligence (AI) for health has the potential to significantly change and improve healthcare. However, in most African countries, culturally and contextually attuned approaches for deploying these solutions are not well understood. To bridge this gap, we conducted a qualitative study to investigate the best practices, fairness indicators, and potential biases to mitigate when deploying AI for health in African countries, as well as opportunities where AI could make a positive impact in health.
Methods: We used a mixed-methods approach combining in-depth interviews (IDIs) and surveys. We conducted 1.5-2 hour IDIs with 50 experts in health, policy, and AI across 17 countries, and performed an inductive qualitative thematic analysis of the expert IDI responses. We administered a blinded 30-minute survey with thought-cases to 672 general population participants across 5 countries in Africa (Ghana, South Africa, Rwanda, Kenya, and Nigeria), analyzed responses on quantitative scales, statistically comparing responses by country, age, gender, and level of familiarity with AI, and thematically summarized the open-ended survey responses.
Results and Conclusion: We find generally positive attitudes, high levels of trust, and moderate levels of concern about AI usage for health among general population participants. This contrasts with expert responses, where major themes revolved around trust/mistrust, AI ethics concerns, and systemic barriers to overcome, among others. This work presents a first-of-its-kind qualitative study of the potential of AI for health in Africa with perspectives from both experts and the general population. We hope that this work guides policy makers and underscores the need for education and for including general population perspectives in decision-making around AI usage.
View details
A Toolbox for Surfacing Health Equity Harms and Biases in Large Language Models
Heather Cole-Lewis
Nenad Tomašev
Liam McCoy
Leo Anthony Celi
Alanna Walton
Akeiylah DeWitt
Philip Mansfield
Sushant Prakash
Joelle Barral
Ivor Horn
Karan Singhal
Nature Medicine (2024)
Preview abstract
Large language models (LLMs) hold promise to serve complex health information needs but also have the potential to introduce harm and exacerbate health disparities. Reliably evaluating equity-related model failures is a critical step toward developing systems that promote health equity. We present resources and methodologies for surfacing biases with potential to precipitate equity-related harms in long-form, LLM-generated answers to medical questions and conduct a large-scale empirical case study with the Med-PaLM 2 LLM. Our contributions include a multifactorial framework for human assessment of LLM-generated answers for biases and EquityMedQA, a collection of seven datasets enriched for adversarial queries. Both our human assessment framework and our dataset design process are grounded in an iterative participatory approach and review of Med-PaLM 2 answers. Through our empirical study, we find that our approach surfaces biases that may be missed by narrower evaluation approaches. Our experience underscores the importance of using diverse assessment methodologies and involving raters of varying backgrounds and expertise. While our approach is not sufficient to holistically assess whether the deployment of an artificial intelligence (AI) system promotes equitable health outcomes, we hope that it can be leveraged and built upon toward a shared goal of LLMs that promote accessible and equitable healthcare.
View details