Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
Efficiency of the Generalized Second-Price Auction for Value Maximizers
Hanrui Zhang
Proceedings of the ACM on Web Conference 2024, 46–56
Abstract
We study the price of anarchy of the generalized second-price auction where bidders are value maximizers (i.e., autobidders). We show that in general the price of anarchy can be as bad as 0. For comparison, the price of anarchy of running VCG is 1/2 in the autobidding world. We further show a fine-grained price of anarchy with respect to the discount factors (i.e., the ratios of click probabilities between lower slots and the highest slot in each auction) in the generalized second-price auction, which highlights the qualitative relation between the smoothness of the discount factors and the efficiency of the generalized second-price auction.
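For reference, a generic statement of the price of anarchy in this setting (our own paraphrase; the paper's exact welfare notion may differ) is

\[
\mathrm{PoA} \;=\; \inf_{\text{instances}}\;\min_{\text{equilibria } b}\; \frac{\mathrm{Wel}(b)}{\mathrm{Wel}(\mathrm{OPT})},
\qquad
\delta_j \;=\; \frac{\alpha_j}{\alpha_1},
\]

where $\alpha_j$ is the click probability of slot $j$, $\alpha_1$ that of the highest slot, and $\delta_j$ the discount factor referenced above; a price of anarchy of 0 means no constant fraction of the optimal welfare is guaranteed at equilibrium.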
Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines
Yuchen Li
Alexandre Kirchmeyer
Aashay Mehta
Yilong Qin
Andrej Risteski
International Conference on Machine Learning (2024) (to appear)
Abstract
Autoregressive language models are the currently dominant paradigm for text generation; however, they have some fundamental limitations that cannot be remedied by scale, for example inherently sequential and unidirectional generation. While alternate classes of models have been explored, we have limited mathematical understanding of their fundamental power and limitations. In this paper we focus on Generative Masked Language Models (GMLMs), a non-autoregressive paradigm in which we train a model to fit conditional probabilities of the data distribution via masking, which are subsequently used as inputs to a Markov chain to draw samples from the model. These models empirically strike a promising speed-quality trade-off, as each step can typically be parallelized by decoding the entire sequence in parallel. We develop a mathematical framework for analyzing and improving such models which sheds light on questions of sample complexity and inference speed and quality. Empirically, we adapt the T5 model for iteratively-refined parallel decoding, achieving 2-3x speedup in machine translation with minimal sacrifice in quality compared with autoregressive models. We run careful ablation experiments to give recommendations on key design choices, and make fine-grained observations on the common error modes in connection with our theory. Our mathematical analyses and empirical observations characterize both the potential and the limitations of this approach, and can be applied to future work on improving the understanding and performance of GMLMs.
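As a rough illustration of iteratively-refined parallel decoding, here is a toy sketch: all positions are predicted in parallel each step and the least confident ones are re-masked. The stand-in model `toy_logits`, the vocabulary size, and the remasking schedule are all assumptions, not the paper's T5 setup.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, MASK, LENGTH, STEPS = 32, 0, 12, 4  # toy sizes (assumptions)

def toy_logits(tokens):
    """Stand-in for a masked language model (ignores its input;
    a real model would condition on the currently unmasked tokens)."""
    return rng.normal(size=(len(tokens), VOCAB))

def parallel_decode(length=LENGTH, steps=STEPS):
    tokens = np.full(length, MASK)                 # start fully masked
    for step in range(steps):
        logits = toy_logits(tokens)                # predict every position at once
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        tokens = probs.argmax(-1)                  # commit greedy choices everywhere
        conf = probs.max(-1)
        # Re-mask the least confident positions, fewer on each iteration.
        n_mask = int(length * (1 - (step + 1) / steps))
        if n_mask > 0:
            tokens[np.argsort(conf)[:n_mask]] = MASK
    return tokens

print(parallel_decode())
```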
On the Benefits of Traffic “Reprofiling” The Multiple Hops Case – Part I
Henry Sariowan
Jiaming Qiu
Jiayi Song
Roch Guerin
IEEE/ACM Transactions on Networking (2024)
Abstract
This paper considers networks where user traffic is regulated through deterministic traffic profiles, e.g., token buckets, and requires guaranteed hard delay bounds. The network's goal is to minimize the resources it needs to meet those bounds. The paper explores how reprofiling, i.e., proactively modifying how user traffic enters the network, can be of benefit. Reprofiling produces “smoother” flows but introduces an up-front access delay that forces tighter network delays. The paper explores this trade-off and demonstrates that, unlike what holds in the single-hop case, reprofiling can be of benefit even when sophisticated schedulers are available at each hop.
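Since the abstract refers to deterministic traffic profiles such as token buckets, here is a minimal token-bucket sketch; the rate and burst parameters are illustrative assumptions only.

```python
from dataclasses import dataclass

@dataclass
class TokenBucket:
    """Deterministic traffic profile: rate `rate` tokens/sec, burst size `burst` tokens."""
    rate: float
    burst: float
    tokens: float = 0.0
    last: float = 0.0

    def conforms(self, now: float, packet_size: float) -> bool:
        """Return True if a packet of `packet_size` conforms at time `now`."""
        self.tokens = min(self.burst, self.tokens + self.rate * (now - self.last))
        self.last = now
        if packet_size <= self.tokens:
            self.tokens -= packet_size
            return True
        return False

bucket = TokenBucket(rate=100.0, burst=300.0, tokens=300.0)
print(bucket.conforms(now=0.0, packet_size=200.0))  # True: within the burst allowance
print(bucket.conforms(now=0.5, packet_size=200.0))  # False: only 150 tokens accumulated
```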
Sandwiched Compression: Repurposing Standard Codecs with Neural Network Wrappers
Phil A. Chou
Hugues Hoppe
Danhang Tang
Jonathan Taylor
Philip Davidson
arXiv:2402.05887 (2024)
Abstract
We propose sandwiching standard image and video codecs between pre- and post-processing neural networks. The networks are jointly trained through a differentiable codec proxy to minimize a given rate-distortion loss. This sandwich architecture not only improves the standard codec’s performance on its intended content, it can also effectively adapt the codec to other types of image/video content and to other distortion measures. Essentially, the sandwich learns to transmit “neural code images” that optimize overall rate-distortion performance even when the overall problem is well outside the scope of the codec’s design. Through a variety of examples, we apply the sandwich architecture to sources with different numbers of channels, higher resolution, higher dynamic range, and perceptual distortion measures. The results demonstrate substantial improvements (up to 9 dB gains) across these adaptations. We derive VQ equivalents for the sandwich, establish optimality properties, and design differentiable codec proxies approximating current standard codecs. We further analyze model complexity, visual quality under perceptual metrics, as well as sandwich configurations that offer interesting potential in image/video compression and streaming.
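A minimal sketch of the sandwich idea follows: pre- and post-processing networks wrapped around a differentiable codec proxy, trained end to end on a rate-distortion-style loss. Here the proxy is plain uniform quantization with a straight-through gradient and the rate term is a crude surrogate; both are our assumptions, not the paper's proxies.

```python
import torch
import torch.nn as nn

class QuantProxy(nn.Module):
    """Stand-in differentiable codec proxy: uniform quantization, straight-through gradient."""
    def forward(self, x):
        return x + (torch.round(x * 255) / 255 - x).detach()

class Sandwich(nn.Module):
    def __init__(self):
        super().__init__()
        self.pre = nn.Conv2d(3, 3, 3, padding=1)    # preprocessor -> "neural code image"
        self.proxy = QuantProxy()                    # standard-codec stand-in
        self.post = nn.Conv2d(3, 3, 3, padding=1)    # postprocessor
    def forward(self, x):
        code = self.pre(x)
        return self.post(self.proxy(code)), code

model = Sandwich()
x = torch.rand(1, 3, 32, 32)
recon, code = model(x)
# Rate-distortion-style objective: distortion plus a crude rate surrogate on the code image.
loss = nn.functional.mse_loss(recon, x) + 1e-3 * code.abs().mean()
loss.backward()
print(float(loss))
```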
Websites Need Your Permission Too – User Sentiment and Decision Making on Web Permission Prompts in Desktop Chrome
Marian Harbach
CHI 2024, ACM (to appear)
Abstract
The web utilizes permission prompts to moderate access to certain capabilities. We present the first investigation of user behavior and sentiment around this security and privacy measure on the web, using 28 days of telemetry data from more than 100M Chrome installations on desktop platforms and experience sampling responses from 25,706 Chrome users. Based on this data, we find that ignoring and dismissing permission prompts are most common for geolocation and notifications. Permission prompts are perceived as more annoying and interrupting when they are not allowed, and most respondents cite a rational reason for the decision they made. Our data also suggests that the perceived availability of contextual information from the requesting website is associated with allowing access to a requested capability. More usable permission controls could facilitate adoption of best practices that address several of the identified challenges, and ultimately could lead to better user experiences and a safer web.
Ubiquitous and Low-Cost Generation of Elevation Pseudo Ground Control Points
Etienne Le Grand
Moustafa Youssef
14th International Conference on Indoor Positioning and Indoor Navigation (IPIN). Hong Kong, China, 2024.
Abstract
In this paper, we design a system to generate Pseudo Ground Control Points (PGCPs) using standard, low-cost, widely available GNSS receivers in a crowd-sourcing manner. We propose a number of GNSS point filters that remove different sources of error and bias, and design a linear regression height estimator leading to high-accuracy PGCP elevations. Evaluation of our system shows that the PGCPs can achieve a median accuracy of 22.5 cm in 25 metropolitan areas in the USA.
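A toy sketch of a linear-regression height estimator over filtered GNSS samples; the synthetic data, the quality feature, and the filter threshold are all assumptions, not the paper's actual filters or features.

```python
import numpy as np

rng = np.random.default_rng(1)
true_height = 120.0
samples = true_height + rng.normal(0.0, 3.0, size=500)   # noisy crowd-sourced elevations
hdop = rng.uniform(0.5, 5.0, size=500)                    # per-sample quality proxy

# Filter step (assumption): drop low-quality samples before fitting.
keep = hdop < 2.0
X = np.column_stack([np.ones(keep.sum()), hdop[keep]])    # intercept + quality feature
beta, *_ = np.linalg.lstsq(X, samples[keep], rcond=None)
estimate = beta[0] + beta[1] * hdop[keep].min()           # predict at the best quality level
print(round(estimate, 2), "vs true", true_height)
```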
Abstract
Referring Image Segmentation (RIS) is the task of segmenting an object referred to by a textual query from an image. By nature, the difficulty of this task is affected by the existence of similar objects and the complexity of the referring expression. Recent RIS models still show a significant performance gap between easy and hard scenarios. We posit that the bottleneck lies in the data, and propose a simple but powerful data augmentation method, Negative-mined Mosaic Augmentation (NeMo). This method augments a training image into a mosaic with three other negative images carefully curated by a pretrained multimodal alignment model, e.g., CLIP, to make the sample more challenging. We discover that it is critical to properly adjust the difficulty level, making samples neither too ambiguous nor too trivial. The augmented training data encourages the RIS model to recognize subtle differences and relationships between similar visual entities and to understand the whole expression in order to better locate the right target. Our approach shows consistent improvements on various datasets and models, verified by extensive experiments.
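A rough sketch of the mosaic construction: numpy arrays stand in for images, `similarity` is a placeholder for a pretrained image-text alignment model such as CLIP, and the difficulty-band thresholds are assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def similarity(img, text):
    """Placeholder for an image-text alignment score (e.g., from a CLIP-like model)."""
    return float(rng.uniform(0, 1))

def nemo_mosaic(target, pool, query, low=0.3, high=0.8):
    """Pick three negatives whose similarity to the query falls in a mid band
    (hard but not ambiguous), then tile target + negatives into a 2x2 mosaic."""
    scored = sorted(((similarity(img, query), img) for img in pool), key=lambda p: p[0])
    band = [img for s, img in scored if low <= s <= high]
    if len(band) < 3:                      # fall back to the full pool if the band is thin
        band = [img for _, img in scored]
    a, b, c = band[:3]
    top = np.concatenate([target, a], axis=1)
    bottom = np.concatenate([b, c], axis=1)
    return np.concatenate([top, bottom], axis=0)

images = [rng.random((64, 64, 3)) for _ in range(8)]
mosaic = nemo_mosaic(images[0], images[1:], "the red mug on the left")
print(mosaic.shape)  # (128, 128, 3)
```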
Abstract
The effect of regularizers such as weight decay when training deep neural networks is not well understood. We study the influence of weight decay as well as $L2$-regularization when training neural network models in which parameter matrices interact multiplicatively. This combination is of particular interest as this parametrization is common in attention layers, the workhorse of transformers. Here, key-query, as well as value-projection parameter matrices, are multiplied directly with each other: $W_K^TW_Q$ and $PW_V$.
We extend previous results and show, on the one hand, that any local minimum of an $L2$-regularized loss of the form $L(AB^\top) + \lambda (\|A\|^2 + \|B\|^2)$ coincides with a minimum of the nuclear norm-regularized loss $L(AB^\top) + \lambda\|AB^\top\|_*$, and on the other hand that the two losses become identical exponentially quickly during training. We thus complement existing works linking $L2$-regularization with low-rank regularization, and in particular, explain why such regularization on the matrix product affects early stages of training.
Based on these theoretical insights, we verify empirically that the key-query and value-projection matrix products $W_K^TW_Q, PW_V$ within attention layers, when optimized with weight decay, as usually done in vision tasks and language modelling, indeed induce a significant reduction in the rank of $W_K^TW_Q$ and $PW_V$, even in fully online training.
We find that, in accordance with existing work, inducing low rank in attention matrix products can damage language model performance, and observe advantages when decoupling weight decay in attention layers from the rest of the parameters.
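The connection to nuclear-norm regularization rests on the standard variational identity (stated here for completeness)

\[
\|M\|_* \;=\; \min_{A,B:\;AB^\top = M} \tfrac{1}{2}\left(\|A\|_F^2 + \|B\|_F^2\right),
\]

so penalizing $\|A\|^2 + \|B\|^2$ on a factored parametrization implicitly penalizes the nuclear norm of the product $AB^\top$, which is the low-rank bias the abstract describes.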
Abstract
This paper introduces a novel deep neural network architecture for solving the inverse scattering problem in frequency domain with wide-band data, by directly approximating the inverse map, thus avoiding the expensive optimization loop of classical methods. The architecture is motivated by the filtered back-projection formula in the full aperture regime and with homogeneous background, and it leverages the underlying equivariance of the problem and compressibility of the integral operator. This drastically reduces the number of training parameters, and therefore the computational and sample complexity of the method. In particular, we obtain an architecture whose number of parameters scales sub-linearly with respect to the dimension of the inputs, while its inference complexity scales super-linearly but with very small constants. We provide several numerical tests that show that the current approach results in better reconstruction than optimization-based techniques such as full-waveform inversion, but at a fraction of the cost while being competitive with state-of-the-art machine learning methods.
CodeQueries: A Dataset of Semantic Queries over Code
Surya Prakash Sahu
Madhurima Mandal
Shikhar Bharadwaj
Aditya Kanade
Shirish Shevade
Innovations in Software Engineering (ISEC), ACM, Bangalore, India (2024)
Abstract
Developers often have questions about semantic aspects of the code they are working on, e.g., “Is there a class whose parent classes declare a conflicting attribute?”. Answering them requires understanding code semantics such as attributes and inheritance relations of classes. An answer to such a question should identify code spans constituting the answer (e.g., the declaration of the subclass) as well as supporting facts (e.g., the definitions of the conflicting attributes). The existing work on question-answering over code has considered yes/no questions or method-level context. We contribute a labeled dataset, called CodeQueries, of semantic queries over Python code. Compared to the existing datasets, in CodeQueries the queries are about code semantics, the context is file level, and the answers are code spans. We curate the dataset based on queries supported by a widely-used static analysis tool, CodeQL, and include both positive and negative examples, and queries requiring single-hop and multi-hop reasoning.

To assess the value of our dataset, we evaluate baseline neural approaches. We study a large language model (GPT3.5-Turbo) in zero-shot and few-shot settings on a subset of CodeQueries. We also evaluate a BERT-style model (CuBERT) with fine-tuning. We find that these models achieve limited success on CodeQueries. CodeQueries is thus a challenging dataset to test the ability of neural models to understand code semantics in the extractive question-answering setting.
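To make the running example concrete, here is a toy Python snippet (our own illustration, not taken from the dataset) where a class's parent classes declare a conflicting attribute; the answer span would be the subclass declaration and the supporting facts the two attribute definitions.

```python
class JsonWriter:
    output_format = "json"   # supporting fact: first definition of the attribute

class XmlWriter:
    output_format = "xml"    # supporting fact: conflicting second definition

class ReportWriter(JsonWriter, XmlWriter):
    # answer span: this subclass inherits `output_format` from two parents
    # that declare it with conflicting values (the MRO picks JsonWriter's "json").
    pass

print(ReportWriter.output_format)  # "json"
```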
Abstract
Current approaches to analog layout automation apply ML techniques such as Graph Convolutional Neural Networks (GCN) to translate netlists to layouts. While these ML approaches have proven to be effective, they lack the powerful reasoning capabilities, intuitive human interface, and standard evaluation benchmarks that have been improving at a rapid development pace in Large Language Models (LLMs). The GLayout framework introduced in this work translates analog layout into an expressive, technology-generic, compact text representation. Then, an LLM is taught to understand analog layout through fine-tuning and in-context learning using Retrieval Augmented Generation (RAG). The LLM is able to successfully lay out unseen circuits based on new information provided in-context. We train 3.8, 7, and 22 billion parameter quantized LLMs on a dataset of less than 50 unique circuits and text documents providing layout knowledge. The 22B parameter model is tuned in 2 hours on a single NVIDIA A100 GPU. The open-source evaluation set is proposed as an automation benchmark for LLM layout automation tasks, and ranges from 2-transistor circuits to a ∆Σ ADC. The 22B model completes 70% of the tasks in the evaluation set and is able to pass DRC and LVS verification on unseen 4-transistor blocks.
DiffHuman: Probabilistic Photorealistic 3D Reconstruction of Humans
Akash Sengupta
Enric Corona
Andrei Zanfir
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2024)
Abstract
We present DiffHuman, a probabilistic method for photorealistic 3D human reconstruction from a single RGB image. Despite the ill-posed nature of this problem, most methods are deterministic and output a single solution, often resulting in a lack of geometric detail and blurriness in unseen or uncertain regions. In contrast, DiffHuman predicts a distribution over 3D reconstructions conditioned on an image, which allows us to sample multiple detailed 3D avatars that are consistent with the input image. DiffHuman is implemented as a conditional diffusion model that denoises partial observations of an underlying pixel-aligned 3D representation. In testing, we can sample a 3D shape by iteratively denoising renderings of the predicted intermediate representation. Further, we introduce an additional generator neural network that approximates rendering with considerably reduced runtime (55x speed up), resulting in a novel dual-branch diffusion framework. We evaluate the effectiveness of our approach through various experiments. Our method can produce diverse, more detailed reconstructions for the parts of the person not observed in the image, and has competitive performance for the surface reconstruction of visible parts.
Abstract
Given copies of a quantum state $\rho$, a shadow tomography protocol aims to learn all expectation values from a fixed set of observables, to within a given precision $\epsilon$. We say that a shadow tomography protocol is \textit{triply efficient} if it is sample- and time-efficient, and only employs measurements that entangle a constant number of copies of $\rho$ at a time. The classical shadows protocol based on random single-copy measurements is triply efficient for the set of local Pauli observables. This and other protocols based on random single-copy Clifford measurements can be understood as arising from fractional colorings of a graph $G$ that encodes the commutation structure of the set of observables. Here we describe a framework for two-copy shadow tomography that uses an initial round of Bell measurements to reduce to a fractional coloring problem in an induced subgraph of $G$ with bounded clique number. This coloring problem can be addressed using techniques from graph theory known as \textit{chi-boundedness}. Using this framework we give the first triply efficient shadow tomography scheme for the set of local fermionic observables, which arise in a broad class of interacting fermionic systems in physics and chemistry. We also give a triply efficient scheme for the set of all $n$-qubit Pauli observables. Our protocols for these tasks use two-copy measurements, which is necessary: sample-efficient schemes are provably impossible using only single-copy measurements. Finally, we give a shadow tomography protocol that compresses an $n$-qubit quantum state into a $\mathrm{poly}(n)$-sized classical representation, from which one can extract the expected value of any of the $4^n$ Pauli observables in $\mathrm{poly}(n)$ time, up to a small constant error.
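Stated as a promise problem (our own paraphrase of the task above): given copies of $\rho$ and observables $O_1,\dots,O_M$, output estimates $\hat{o}_i$ with

\[
\bigl|\hat{o}_i - \mathrm{tr}(O_i\rho)\bigr| \le \epsilon \quad \text{for all } i \in \{1,\dots,M\},
\]

and a protocol is triply efficient when the number of copies and the runtime are polynomial while each measurement entangles only $O(1)$ copies of $\rho$ at a time.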
PRISM: A New Lens for Improved Color Understanding
Garima Pruthi
Inderjit Dhillon
Varun Jampani
EMNLP (2024)
Abstract
While image-text pre-trained models, such as CLIP, have demonstrated impressive capabilities in learning robust text and image representations, a critical area for substantial improvement remains—precise color understanding. In this paper, we address this limitation by introducing PRISM, a simple yet highly effective method that extends CLIP's capability to grasp the nuances of precise colors. PRISM seamlessly adapts to both recognized HTML colors and out-of-vocabulary RGB inputs through the utilization of our curated dataset of 100 image-text pairs, which can be effortlessly repurposed for fine-tuning with any desired color. Importantly, PRISM achieves these enhancements without compromising CLIP's performance on established benchmarks. During the fine-tuning process, PRISM encourages the disentanglement of color-relevant information from color-irrelevant details. Furthermore, we introduce a novel evaluation framework, ColorLens, featuring both seen and unseen test sets that can be readily repurposed to assess a model's precision in understanding precise colors. Our comprehensive evaluation and results demonstrate significant improvements over baseline models.
Abstract
Cloud computing architectures are popular largely because they are scalable and economical, but they bring their own set of challenges for workload scheduling and resource utilization, since virtual machines (VMs) and applications must share different types of resources such as servers and storage. Historically, strategies for workload balancing and resource management have relied on manual configuration or simplistic heuristics that do not effectively optimize resource usage and performance. In this technical brief, we propose an approach built on unsupervised learning techniques to detect usage patterns and improve resource utilization, yielding both better performance and automatically balanced workloads among VMs. We use clustering algorithms to group similar workloads and then allocate resources to each group based on demand, so that resources are used more effectively and resource exhaustion is avoided. We also integrate anomaly detection methods into our system to identify and handle abnormal behavior in both monitoring and resource placement. We experiment with region traces from production workloads to demonstrate the benefits of our approach, showing marked improvements in workload balancing and resource utilization over current practices.
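A minimal sketch of the clustering-plus-anomaly-flagging idea: synthetic per-VM utilization vectors, scikit-learn KMeans for grouping, and a simple distance-to-centroid anomaly flag. The features, cluster count, and thresholds are illustrative assumptions, not the brief's actual pipeline.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)
# Synthetic per-VM utilization vectors: [cpu, memory, disk, network] in [0, 1].
workloads = rng.random((200, 4))

kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(workloads)
labels = kmeans.labels_

# Per-cluster demand estimate guides how much capacity each group is allocated.
cluster_demand = np.array([workloads[labels == k].mean(axis=0) for k in range(4)])

# Simple anomaly flag: VMs unusually far from their cluster centroid.
dist = np.linalg.norm(workloads - kmeans.cluster_centers_[labels], axis=1)
anomalies = np.where(dist > dist.mean() + 3 * dist.std())[0]
print(cluster_demand.round(2))
print("flagged VMs:", anomalies)
```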