Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
1–15 of 10,129 publications
WSDM 2024 Workshop on Large Language Models for Individuals, Groups, and Society
Qiaozhu Mei
Vanessa Murdock
Jie Tang
Hongning Wang
Hamed Zamani
(2024) (to appear)
Abstract
This workshop discusses the cutting-edge developments in research and applications of personalizing large language models (LLMs) and adapting them to the demands of diverse user populations and societal needs. The full-day workshop plan includes several keynotes and invited talks, a poster session and a panel discussion.
Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding
Zilong Wang
Hao Zhang
Chun-Liang Li
Jingbo Shang
ICLR (2024)
Abstract
Table-based reasoning with large language models (LLMs) is a promising direction to tackle many table understanding tasks, such as table-based question answering and fact verification. Compared with generic reasoning, table-based reasoning requires the extraction of underlying semantics from both free-form questions and semi-structured tabular data. Chain-of-Thought and similar approaches incorporate the reasoning chain in the form of textual context, but it is still an open question how to effectively leverage tabular data in the reasoning chain. We propose the Chain-of-Table framework, where tabular data is explicitly used in the reasoning chain as a proxy for intermediate thoughts. Specifically, we guide LLMs using in-context learning to iteratively generate operations and update the table to represent a tabular reasoning chain. LLMs can therefore dynamically plan the next operation based on the results of the previous ones. This continuous evolution of the table forms a chain, showing the reasoning process for a given tabular problem. The chain carries structured information of the intermediate results, enabling more accurate and reliable predictions. Chain-of-Table achieves new state-of-the-art performance on WikiTQ, FeTaQA, and TabFact benchmarks across multiple LLM choices.
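To make the loop concrete, here is a minimal sketch of the table-evolution process the abstract describes, assuming a hypothetical `call_llm` completion function and a small pandas-based operation set; the prompts and operations are illustrative, not the paper's exact design.

```python
# Illustrative sketch of the Chain-of-Table loop described above.
# `call_llm` is a hypothetical completion function; the real prompt design,
# operation set, and stopping rule are those of the paper.
import pandas as pd

OPERATIONS = ["select_rows", "select_columns", "sort_by", "end"]

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM completion call (assumption)."""
    raise NotImplementedError

def chain_of_table(table: pd.DataFrame, question: str) -> str:
    chain = []  # the tabular reasoning chain: (operation, argument) pairs
    while True:
        # Ask the model to plan the next operation given the evolved table.
        plan = call_llm(
            f"Table:\n{table.to_string(index=False)}\n"
            f"Question: {question}\n"
            f"Previous operations: {chain}\n"
            f"Next operation (one of {OPERATIONS}) and its argument:"
        )
        op, _, arg = plan.partition(" ")
        if op == "end":
            break
        chain.append((op, arg))
        # Execute the operation to evolve the table (the intermediate thought).
        if op == "select_columns":
            table = table[[c.strip() for c in arg.split(",")]]
        elif op == "select_rows":
            table = table.query(arg)
        elif op == "sort_by":
            table = table.sort_values(arg)
    # The final answer is read off the fully evolved table.
    return call_llm(
        f"Table:\n{table.to_string(index=False)}\nQuestion: {question}\nAnswer:"
    )
```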
Abstract
Detecting offensive content in text is an increasingly central challenge for both social-media platforms and AI-driven technologies. However, offensiveness remains a subjective phenomenon, as perspectives differ across sociodemographic characteristics, as well as cultural norms and moral values. This intricacy is largely ignored in current AI-focused approaches for detecting offensiveness or related concepts such as hate speech and toxicity. We frame the task of determining offensiveness as essentially a matter of moral judgment: deciding the boundaries of ethically wrong vs. right language to be used or generated within an implied set of sociocultural norms. In this paper, we investigate how judgments of offensiveness vary across diverse global cultural regions, and the crucial role of moral values in shaping these variations. Our findings highlight substantial cross-cultural differences in perceiving offensiveness, with moral concerns about Caring and Purity as the mediating factors driving these differences. These insights are important because AI safety protocols, shaped by human annotators' inputs and perspectives, embed moral values that do not align with notions of right and wrong in all contexts or for all individuals.
(In)Security of File Uploads in Node.js
Harun Oz
Abbas Acar
Ahmet Aris
Amin Kharraz
Selcuk Uluagac
The Web Conference (WWW) (2024)
Abstract
File upload is a critical feature incorporated by a myriad of web applications to enable users to share and manage their files conveniently. It has been used in many useful services such as file-sharing and social media. While file upload is an essential component of web applications, the lack of rigorous checks on the file name, type, and content of the uploaded files can result in security issues, often referred to as Unrestricted File Upload (UFU). In this study, we analyze the (in)security of popular file upload libraries and real-world applications in the Node.js ecosystem. To automate our analysis, we propose NodeSec, a tool designed to analyze file upload insecurities in Node.js applications and libraries. NodeSec generates unique payloads and thoroughly evaluates an application's file upload security against 13 distinct UFU-type attacks. Utilizing NodeSec, we analyze the most popular file upload libraries and real-world applications in the Node.js ecosystem. Our results reveal that some real-world web applications are vulnerable to UFU attacks and disclose serious security bugs in file upload libraries. As of this writing, we have received 19 CVEs and two US-CERT cases for the security issues that we reported. Our findings provide strong evidence that the dynamic features of Node.js applications introduce security shortcomings and that web developers should be cautious when implementing file upload features in their applications.
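For context, the UFU attack classes the paper targets typically exploit missing filename and extension checks. The sketch below (in Python, the language used for the other sketches on this page) illustrates the kinds of checks whose absence enables such attacks; it is not NodeSec's code, and the payload patterns are well-known UFU examples rather than the paper's exact test set.

```python
# Generic illustration of checks whose absence enables common UFU attack
# classes. Not NodeSec's implementation; payloads are well-known examples.
import re

ALLOWED_EXTENSIONS = {"png", "jpg", "jpeg", "gif", "pdf"}

def is_safe_upload_name(filename: str) -> bool:
    # Path traversal: "../../etc/cron.d/job" escapes the upload directory.
    if "/" in filename or "\\" in filename or ".." in filename:
        return False
    # Null-byte tricks: "shell.php\x00.jpg" can defeat naive extension checks.
    if "\x00" in filename:
        return False
    # Double extensions: "shell.php.jpg" may still execute on misconfigured
    # servers, so validate every extension, not just the last one.
    extensions = filename.lower().split(".")[1:]
    if not extensions or any(ext not in ALLOWED_EXTENSIONS for ext in extensions):
        return False
    # Finally, restrict to a conservative character set.
    return bool(re.fullmatch(r"[\w.\- ]+", filename))

assert not is_safe_upload_name("../../etc/passwd")
assert not is_safe_upload_name("shell.php.jpg")
assert is_safe_upload_name("holiday photo.jpg")
```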
Quantifying urban park use in the USA at scale: empirical estimates of realised park usage using smartphone location data
Michael T Young
Swapnil Vispute
Stylianos Serghiou
Akim Kumok
Yash Shah
Kevin J. Lane
Flannery Black-Ingersoll
Paige Brochu
Monica Bharel
Sarah Skenazy
Shailesh Bavadekar
Mansi Kansal
Evgeniy Gabrilovich
Gregory A. Wellenius
Lancet Planetary Health (2024)
Abstract
Background: A large body of evidence connects access to greenspace with substantial benefits to physical and mental health. In urban settings where access to greenspace can be limited, park access and use have been associated with higher levels of physical activity, improved physical health, and lower levels of markers of mental distress. Despite the potential health benefits of urban parks, little is known about how park usage varies across locations (between or within cities) or over time.

Methods: We estimated park usage among urban residents (identified as residents of urban census tracts) in 498 US cities from 2019 to 2021 from aggregated and anonymised opted-in smartphone location history data. We used descriptive statistics to quantify differences in park usage over time, between cities, and across census tracts within cities, and used generalised linear models to estimate the associations between park usage and census tract-level descriptors.

Findings: In spring (March 1 to May 31) 2019, 18·9% of urban residents visited a park at least once per week, with average use higher in the northwest and southwest USA, and lowest in the southeast. Park usage varied substantially both within and between cities; was unequally distributed across census tract-level markers of race, ethnicity, income, and social vulnerability; and was only moderately correlated with established markers of census tract greenspace. In spring 2019, a doubling of walking time to parks was associated with a 10·1% (95% CI 5·6–14·3) lower average weekly park usage, adjusting for city and social vulnerability index. The median decline in park usage from spring 2019 to spring 2020 was 38·0% (IQR 28·4–46·5), coincident with the onset of physical distancing policies across much of the country. We estimated that the COVID-19-related decline in park usage was more pronounced for those living further from a park and those living in areas of higher social vulnerability.

Interpretation: These estimates provide novel insights into the patterns and correlates of park use and could enable new studies of the health benefits of urban greenspace. In addition, the availability of an empirical park usage metric that varies over time could be a useful tool for assessing the effectiveness of policies intended to increase such activities.
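For intuition on the "per doubling" effect size above: under one plausible specification (a log link with walking time entered on a log scale; the paper's exact GLM may differ), the reported 10·1% decline corresponds to a fixed multiplicative factor per doubling, as the short calculation below shows.

```python
# Back-of-envelope arithmetic for a "per doubling" effect under a log link,
# assuming walking time enters as log2(minutes). This is one common
# parameterisation; the paper's exact model specification may differ.
import math

beta = math.log(1 - 0.101)  # coefficient implying a 10.1% drop per doubling
print(f"beta on log2(walk time): {beta:.3f}")                  # ~ -0.106
print(f"usage multiplier per doubling: {math.exp(beta):.3f}")  # ~ 0.899
print(f"per quadrupling (two doublings): {math.exp(2 * beta):.3f}")  # ~ 0.808
```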
Abstract
To tackle the challenge of optimizing middle-mile logistics, the crucial link between warehouses and final deliveries, we introduce a novel instance generator that aims to create a rich and adaptable dataset of diverse instances to empower researchers and developers. Each instance defines a logistics network with hubs, vehicles, routes, lines, and rotations. Additionally, it specifies a list of shipments that need to be transported through this network. To customize the instance, the user can adjust various parameters, such as the number of hubs, the density of the space graphs, the distribution of shipment weights, or the maximum number of vehicles.
The generator reflects real-world complexities through variations in network size and structure. We developed a random graph generator to mimic real-world middle-mile networks by generating space graphs for hubs. Subsequently, lines and routes are randomly constructed on the generated space graphs while adhering to user-defined constraints.
The tool takes the form of an optimized C++ library that enables the generation of instances with a large number of hubs and shipments. It offers immense potential for advancing middle-mile logistics optimization by providing a comprehensive and adaptable dataset for benchmarking optimization approaches, training machine learning models, and analyzing the impact of network configurations and shipment characteristics on overall efficiency.
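As a rough illustration of what such an instance's tunable parameters could look like, here is a hypothetical Python schema with a random space-graph stub; the field names and defaults are assumptions, not the C++ library's actual interface.

```python
# Illustrative schema for the user-tunable parameters named above.
# Field names and defaults are hypothetical; the actual tool is a C++ library.
import random
from dataclasses import dataclass

@dataclass
class InstanceParams:
    num_hubs: int = 50
    space_graph_density: float = 0.2    # edge probability between hubs
    num_shipments: int = 1000
    max_vehicles: int = 200
    shipment_weight_mean: float = 80.0  # arbitrary units
    shipment_weight_std: float = 25.0

def random_space_graph(p: InstanceParams, seed: int = 0) -> set[tuple[int, int]]:
    """Erdos-Renyi-style hub graph; the paper's generator mimics real networks."""
    rng = random.Random(seed)
    return {
        (i, j)
        for i in range(p.num_hubs)
        for j in range(i + 1, p.num_hubs)
        if rng.random() < p.space_graph_density
    }
```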
Abstract
In this paper, we study users' opinions about the privacy of their mobile health apps, based on what they write in app reviews in the 'Health & Fitness' category on the Google Play store. We identified 2832 apps in this category (with a minimum of 1000 installs). Using NLP/LLM analyses, we find that 76% of these apps have at least some privacy reviews. In total this yields over 164,000 reviews about privacy, from over 150 countries and in 25 languages. Our analyses identify the top themes and offer an approximation of how widespread these issues are around the world. We show that the top four themes (Data Sharing and Exposure, Permission Requests, Location Tracking, and Data Collection) are issues of concern in over 70 countries. Our automatically generated thematic summaries reveal interesting aspects that deserve further research around user suspicions (unneeded data collection), user requests (more fine-grained control over data collection and data access), and user behavior (uninstalling apps).
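A minimal sketch of a two-stage analysis of the kind described: a cheap keyword pre-filter followed by LLM theme labeling. The keyword list, theme labels, and `call_llm` placeholder are illustrative assumptions, not the paper's pipeline.

```python
# Sketch of a two-stage review analysis: keyword filter, then LLM labeling.
# Keywords, themes, and `call_llm` are illustrative assumptions.
PRIVACY_HINTS = ("privacy", "data", "permission", "track", "location", "share")
THEMES = [
    "Data Sharing and Exposure", "Permission Requests",
    "Location Tracking", "Data Collection", "Other/None",
]

def call_llm(prompt: str) -> str:
    raise NotImplementedError  # placeholder for an LLM call (assumption)

def theme_of(review: str) -> str | None:
    if not any(hint in review.lower() for hint in PRIVACY_HINTS):
        return None  # skip reviews that are clearly not about privacy
    return call_llm(
        f"Label this app review with one theme from {THEMES}:\n{review}"
    )
```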
Health equity assessment of machine learning performance (HEAL): a framework and dermatology AI model case study
Terry Spitz
Malcolm Chelliah
Heather Cole-Lewis
Stephanie Farquhar
Qinghan Xue
Jenna Lester
Cían Hughes
Patricia Strachan
Fraser Tan
Peggy Bui
Craig Mermel
Lily Peng
Sunny Virmani
Ivor Horn
Cameron Chen
The Lancet eClinicalMedicine (2024)
Abstract
Background
Artificial intelligence (AI) has repeatedly been shown to encode historical inequities in healthcare. We aimed to develop a framework to quantitatively assess the performance equity of health AI technologies and to illustrate its utility via a case study.
Methods
Here, we propose a methodology, complementary to existing fairness metrics, to assess whether health AI technologies prioritise performance for patient populations experiencing worse outcomes. We developed the Health Equity Assessment of machine Learning performance (HEAL) framework, designed to quantitatively assess the performance equity of health AI technologies via a four-step interdisciplinary process to understand and quantify domain-specific criteria, and the resulting HEAL metric. As an illustrative case study (analysis conducted between October 2022 and January 2023), we applied the HEAL framework to a dermatology AI model. A set of 5420 teledermatology cases (store-and-forward cases from patients of 20 years or older, submitted from primary care providers in the USA and skin cancer clinics in Australia), enriched for diversity in age, sex and race/ethnicity, was used to retrospectively evaluate the AI model's HEAL metric, defined as the likelihood that the AI model performs better for subpopulations with worse average health outcomes as compared to others. The likelihood that AI performance was anticorrelated to pre-existing health outcomes was estimated using bootstrap methods as the probability that the negated Spearman's rank correlation coefficient (i.e., "R") was greater than zero. Positive values of R suggest that subpopulations with poorer health outcomes have better AI model performance. Thus, the HEAL metric, defined as p(R > 0), measures how likely the AI technology is to prioritise performance for subpopulations with worse average health outcomes as compared to others (presented as a percentage below). Health outcomes were quantified as disability-adjusted life years (DALYs) when grouping by sex and age, and years of life lost (YLLs) when grouping by race/ethnicity. AI performance was measured as top-3 agreement with the reference diagnosis from a panel of 3 dermatologists per case.
Findings
Across all dermatologic conditions, the HEAL metric was 80.5% for prioritizing AI performance of racial/ethnic subpopulations based on YLLs, and 92.1% and 0.0% respectively for prioritizing AI performance of sex and age subpopulations based on DALYs. Certain dermatologic conditions were significantly associated with greater AI model performance compared to a reference category of less common conditions. For skin cancer conditions, the HEAL metric was 73.8% for prioritizing AI performance of age subpopulations based on DALYs.
Interpretation
Analysis using the proposed HEAL framework showed that the dermatology AI model prioritised performance for race/ethnicity, sex (all conditions) and age (cancer conditions) subpopulations with respect to pre-existing health disparities. More work is needed to investigate ways of promoting equitable AI performance across age for non-cancer conditions and to better understand how AI models can contribute towards improving equity in health outcomes.
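A minimal sketch of the HEAL metric computation as defined in the Methods above, assuming DALYs/YLLs are used as a burden measure (higher is worse), so that the abstract's R > 0 corresponds to a positive rank correlation between subgroup performance and burden; the exact resampling scheme of the paper's bootstrap may differ.

```python
# Minimal sketch of HEAL = p(R > 0), estimated by bootstrap over subgroups.
# With burden oriented "higher = worse", R > 0 in the abstract's convention
# corresponds to rho(performance, burden) > 0. Implementation details assumed.
import numpy as np
from scipy.stats import spearmanr

def heal_metric(performance: np.ndarray, burden: np.ndarray,
                n_boot: int = 10_000, seed: int = 0) -> float:
    """performance[i], burden[i]: AI performance and health burden of subgroup i."""
    rng = np.random.default_rng(seed)
    n, hits = len(performance), 0
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # bootstrap resample of subgroups
        rho, _ = spearmanr(performance[idx], burden[idx])
        hits += bool(rho > 0)  # performance prioritises worse-off subgroups
    return hits / n_boot

# Toy example: performance is higher where the pre-existing burden is higher.
perf = np.array([0.70, 0.72, 0.78, 0.81, 0.85, 0.88])
dalys = np.array([8.0, 11.0, 15.0, 18.0, 24.0, 30.0])
print(f"HEAL = {heal_metric(perf, dalys):.1%}")
```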
AI-powered patching: the future of automated vulnerability fixes
Jan Keller
Jan Nowakowski
Google Security Engineering Technical Report (2024) (to appear)
Abstract
As AI continues to advance at rapid speed, so has its ability to unearth hidden security vulnerabilities in all types of software. Every bug uncovered is an opportunity to patch and strengthen code—but as detection continues to improve, we need to be prepared with new automated solutions that bolster our ability to fix those bugs. That’s why our Secure AI Framework (SAIF) includes a fundamental pillar addressing the need to “automate defenses to keep pace with new and existing threats.”
This paper shares lessons from our experience leveraging AI to scale our ability to fix bugs, specifically those found by sanitizers in C/C++, Java, and Go code. By automating a pipeline to prompt Large Language Models (LLMs) to generate code fixes for human review, we have harnessed our Gemini model to successfully fix 15% of sanitizer bugs discovered during unit tests, resulting in hundreds of bugs patched. Given the large number of sanitizer bugs found each year, this seemingly modest success rate will, over time, save significant engineering effort. We expect this success rate to continually improve and anticipate that LLMs can be used to fix bugs in various languages across the software development lifecycle.
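A sketch of the shape of such a pipeline, under stated assumptions: all function names, the prompt, and the candidate-sampling strategy below are illustrative, not the internal system described in the report.

```python
# Sketch of an LLM-assisted patching pipeline of the kind described above:
# prompt an LLM with the sanitizer report, then keep only candidate patches
# that apply, build, and pass tests before routing to human review.
def call_llm(prompt: str) -> str:
    raise NotImplementedError  # LLM call (placeholder/assumption)

def applies_cleanly(patch: str, source: str) -> bool:
    raise NotImplementedError  # patch application check (placeholder)

def builds_and_passes_tests(patch: str) -> bool:
    raise NotImplementedError  # hermetic build + unit tests (placeholder)

def generate_fix_candidates(bug_report: str, source: str, n: int = 8) -> list[str]:
    prompt = (f"Sanitizer report:\n{bug_report}\n"
              f"Affected code:\n{source}\n"
              "Propose a minimal patch fixing the root cause:")
    return [call_llm(prompt) for _ in range(n)]  # sample several candidates

def auto_patch(bug_report: str, source: str) -> str | None:
    for patch in generate_fix_candidates(bug_report, source):
        if applies_cleanly(patch, source) and builds_and_passes_tests(patch):
            return patch  # forwarded to a human reviewer, never auto-submitted
    return None  # fall back to manual triage
```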
AGILE3D: Attention Guided Interactive Multi-object 3D Segmentation
Yuanwen Yue
Sabarinath Mahadevan
Jonas Schult
Francis Engelmann
Bastian Leibe
Konrad Schindler
Theodora Kontogianni
ICLR (2024)
Abstract
During interactive segmentation, a model and a user work together to delineate objects of interest in a 3D point cloud. In an iterative process, the model assigns each data point to an object (or the background), while the user corrects errors in the resulting segmentation and feeds them back into the model. The current best practice formulates the problem as binary classification and segments objects one at a time. The model expects the user to provide positive clicks to indicate regions wrongly assigned to the background and negative clicks on regions wrongly assigned to the object. Sequentially visiting objects is wasteful since it disregards synergies between objects: a positive click for a given object can, by definition, serve as a negative click for nearby objects. Moreover, a direct competition between adjacent objects can speed up the identification of their common boundary. We introduce AGILE3D, an efficient, attention-based model that (1) supports simultaneous segmentation of multiple 3D objects, (2) yields more accurate segmentation masks with fewer user clicks, and (3) offers faster inference. Our core idea is to encode user clicks as spatial-temporal queries and enable explicit interactions between click queries as well as between them and the 3D scene through a click attention module. Every time new clicks are added, we only need to run a lightweight decoder that produces updated segmentation masks. In experiments with four different 3D point cloud datasets, AGILE3D sets a new state-of-the-art. Moreover, we verify its practicality in real-world setups with real user studies. Project page: https://ywyue.github.io/AGILE3D.
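A hypothetical interface sketch of the interactive loop the abstract describes: encode the scene once, then rerun only a lightweight decoder as clicks accumulate. Class and method names are illustrative assumptions, not the released code's API (see the project page above).

```python
# Hypothetical interface for AGILE3D-style interactive multi-object
# segmentation; encode/decode bodies are placeholders, not the real model.
import numpy as np

class InteractiveSegmenter:
    def __init__(self, point_cloud: np.ndarray):
        self.scene_features = self._encode(point_cloud)  # heavy, done once
        self.clicks: list[tuple[np.ndarray, int]] = []   # (xyz, object_id); 0 = background

    def add_click(self, xyz: np.ndarray, object_id: int) -> np.ndarray:
        # A positive click on one object is, by definition, a negative click
        # for nearby objects: the cross-object synergy the abstract highlights.
        self.clicks.append((xyz, object_id))
        return self._decode(self.scene_features, self.clicks)  # lightweight

    def _encode(self, point_cloud):
        raise NotImplementedError  # backbone forward pass (placeholder)

    def _decode(self, features, clicks):
        raise NotImplementedError  # click-attention decoder (placeholder)
```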
Leveraging Function Space Aggregation for Federated Learning at Scale
Nikita Dhawan
Karolina Dziugaite
Transactions on Machine Learning Research (2024)
Abstract
The federated learning paradigm has motivated the development of methods for aggregating multiple client updates into a global server model, without sharing client data. Many federated learning algorithms, including the canonical Federated Averaging (FedAvg), take a direct (possibly weighted) average of the client parameter updates, motivated by results in distributed optimization. In this work, we adopt a function space perspective and propose a new algorithm, FedFish, that aggregates local approximations to the functions learned by clients, using an estimate based on their Fisher information. We evaluate FedFish on realistic, large-scale cross-device benchmarks. While the performance of FedAvg can suffer as client models drift further apart, we demonstrate that FedFish is more robust to longer local training. Our evaluation across several settings in image and language benchmarks shows that FedFish outperforms FedAvg as local training epochs increase. Further, FedFish results in global networks that are more amenable to efficient personalization via local fine-tuning on the same or shifted data distributions. For instance, federated pretraining on the C4 dataset, followed by few-shot personalization on Stack Overflow, results in a 7% improvement in next-token prediction by FedFish over FedAvg.
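As a simplified illustration of Fisher-based aggregation (a per-coordinate approximation for intuition, not the paper's full function-space formulation), compare FedAvg's uniform average with a diagonal-Fisher-weighted average:

```python
# FedAvg vs. a diagonal-Fisher-weighted average, in the spirit of FedFish.
# A simplification of the paper's function-space aggregation, for intuition.
import numpy as np

def fedavg(params: list[np.ndarray]) -> np.ndarray:
    return np.mean(params, axis=0)  # uniform parameter average

def fisher_weighted(params: list[np.ndarray],
                    fishers: list[np.ndarray],
                    eps: float = 1e-8) -> np.ndarray:
    # fishers[i] ~ E[grad^2] on client i's data (diagonal Fisher estimate):
    # coordinates a client's function depends on strongly get more weight.
    num = sum(f * p for f, p in zip(fishers, params))
    den = sum(fishers) + eps
    return num / den
```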
DORSal: Diffusion for Object-centric Representations of Scenes
Allan Jabri
Emiel Hoogeboom
Thomas Kipf
International Conference on Learning Representations (2024)
Abstract
Recent progress in 3D scene understanding enables scalable learning of representations across large datasets of diverse scenes. As a consequence, generalization to unseen scenes and objects, rendering novel views from just a single or a handful of input images, and controllable scene generation that supports editing, are now possible. However, training jointly on a large number of scenes typically compromises rendering quality when compared to single-scene optimized models such as NeRFs. In this paper, we leverage recent progress in diffusion models to equip 3D scene representation learning models with the ability to render high-fidelity novel views, while retaining benefits such as object-level scene editing to a large degree. In particular, we propose DORSal, which adapts a video diffusion architecture for 3D scene generation conditioned on frozen object-centric slot-based representations of scenes. On both complex synthetic multi-object scenes and on the real-world large-scale Street View dataset, we show that DORSal enables scalable neural rendering of 3D scenes with object-level editing and improves upon existing approaches.
Developer Ecosystems for Software Safety
Commun. ACM, 67 (2024), 52–60
Abstract
This paper reflects on work at Google over the past decade to address common types of software safety and security defects. Our experience has shown that software safety is an emergent property of the software and tooling ecosystem it is developed in and the production environment into which it is deployed. Thus, to effectively prevent common weaknesses at scale, we need to shift-left the responsibility for ensuring safety and security invariants to the end-to-end developer ecosystem, that is, programming languages, software libraries, application frameworks, build and deployment tooling, the production platform and its configuration surfaces, and so forth.
Doing so is practical and cost effective when developer ecosystems are designed with application archetypes in mind, such as web or mobile apps: The design of the developer ecosystem can address threat model aspects that apply commonly to all applications of the respective archetype, and investments to ensure safety invariants at the ecosystem level amortize across many applications.
Applying secure-by-design principles to developer ecosystems at Google has achieved drastic reductions, and in some cases near-zero residual rates, of common classes of defects across hundreds of applications being developed by thousands of developers.
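One concrete, and hypothetical, instance of the pattern: a library type that makes the unsafe state unrepresentable, so the invariant is enforced at the ecosystem level rather than by each application developer. This sketch is illustrative, not Google's actual hardened APIs.

```python
# Illustrative safe-by-construction API: the driver accepts only SafeQuery,
# so string concatenation of user input never reaches the database. This is
# a sketch of the ecosystem-level pattern, not a real Google library.
class SafeQuery:
    """A SQL statement built only from a literal template plus bound
    parameters, so injection-prone string concatenation is unrepresentable."""

    def __init__(self, template: str, params: tuple = ()):
        if template.count("?") != len(params):
            raise ValueError("every parameter must be bound via a placeholder")
        self.template, self.params = template, params

def run(db, query: SafeQuery):
    # The invariant lives in the library: applications cannot pass raw strings.
    return db.execute(query.template, query.params)

example = SafeQuery("SELECT * FROM users WHERE id = ?", (42,))
```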
Beyond the Code: AI Regulations as the Secret Compass of Engineering Managers
Proceedings of the American Society for Engineering Management 2024 International Annual Conference (2024)
Abstract
Technology is a product of society. As technology evolves, the norms governing it must mature to enable its proper use within society. Interest in Artificial Intelligence (AI) has surged following the introduction of ChatGPT. Firms, both large and small, are competing to develop new products and solutions involving AI. Amidst these developments, leading corporations such as Google and Microsoft have proactively committed to upholding responsible innovation in AI development. Governments worldwide are responding with the creation of guidelines and regulations in the field. Notably, in March 2024, the United Nations General Assembly (UNGA) adopted a landmark resolution on AI.
At the heart of these developments in AI are engineering managers who leverage technical advances to build products and services that create value. To effectively harness AI for human benefit, engineering managers must be aware of these evolving regulations governing AI. Some regulations, such as the Digital Markets Act (DMA) and the General Data Protection Regulation (GDPR), have far-reaching consequences for organizations globally. Having a working knowledge of these statutory requirements will enable engineering managers to identify the opportunities and constraints in leveraging AI technology while building products and services. It will allow them to make informed decisions about data collection methods, model training processes, the deployment of AI systems, and metrics for their evaluation. At scale, this can become a competitive advantage for the firms they work in, as explored through real-world examples in this paper.
Optimization by Decoded Quantum Interferometry
Stephen Jordan
Noah Shutty
Mary Wootters
Alexander Schmidhuber
Robbie King
Sergei Isakov
arXiv:2408.08292 (2024)
Abstract
We introduce Decoded Quantum Interferometry (DQI), a quantum algorithm for reducing classical optimization problems to classical decoding problems by exploiting structure in the Fourier spectrum of the objective function. DQI reduces sparse max-XORSAT to decoding LDPC codes, which can be decoded using powerful classical algorithms such as belief propagation. As an initial benchmark, we compare DQI using belief propagation decoding against classical optimization via simulated annealing. In this setting we identify a family of max-XORSAT instances where DQI achieves a better approximation ratio on average than simulated annealing, although not better than specialized classical algorithms tailored to those instances. We also analyze a combinatorial optimization problem corresponding to finding polynomials that intersect the maximum number of points. There, DQI efficiently achieves a better approximation ratio than any polynomial-time classical algorithm known to us, thus realizing an apparent exponential quantum speedup. Finally, we show that the problem defined by Yamakawa and Zhandry in order to prove an exponential separation between quantum and classical query complexity is a special case of the optimization problem efficiently solved by DQI.
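For reference, the classical objective DQI targets is easy to state: a max-XORSAT instance is a set of parity constraints over bits, and the objective counts how many constraints an assignment satisfies. A tiny illustrative encoding follows (the quantum algorithm itself is beyond a short sketch):

```python
# Evaluating a max-XORSAT objective: count satisfied parity constraints
# B @ x = v (mod 2). Illustrative encoding of the problem DQI reduces
# to decoding, not an implementation of the algorithm.
import numpy as np

def max_xorsat_value(B: np.ndarray, v: np.ndarray, x: np.ndarray) -> int:
    """Number of constraints satisfied by assignment x, over GF(2)."""
    return int(np.sum((B @ x) % 2 == v))

B = np.array([[1, 1, 0],   # each row is one parity constraint on the bits
              [0, 1, 1],
              [1, 0, 1]])
v = np.array([0, 1, 1])
x = np.array([1, 1, 0])
print(max_xorsat_value(B, v, x), "of", len(v), "constraints satisfied")
```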