Tania Bedrax-Weiss
Tania Bedrax-Weiss is a Director of Research at Google. Her current focus is on identifying untapped problems at the intersection of Recommender Systems, Natural Language Understanding, and Vision at Google, and on outlining research agendas to advance the state of the art. During the 15+ years she has been at Google she has launched transformative products in Google Play, Ads, and Search. Previously, she worked at NASA Ames Research Center and was part of the team that wrote the software used to schedule daily observations for the Mars rovers Spirit and Opportunity. She has also worked in industry on automated configuration and pricing systems. She holds a PhD in Artificial Intelligence from the University of Oregon.
Authored Publications
Simple and Principled Uncertainty Estimation with Deterministic Deep Learning via Distance Awareness
Zi Lin
Shreyas Padhy
Advances in Neural Information Processing Systems 33, Curran Associates, Inc. (2020)
Abstract
Bayesian neural networks (BNNs) and Deep Ensembles are principled approaches to estimating the predictive uncertainty of a deep learning model. However, their practicality in real-time, industrial-scale applications is limited by their heavy memory and inference cost. This motivates us to study principled approaches to high-quality uncertainty estimation that require only a single deep neural network (DNN). By formalizing uncertainty quantification as a minimax learning problem, we first identify input distance awareness, i.e., the model's ability to quantify the distance of a test example from the training data in the input space, as a necessary condition for a DNN to achieve high-quality (i.e., minimax optimal) uncertainty estimation. We then propose Spectral-normalized Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs by adding a weight normalization step during training and replacing the activation of the penultimate layer. We visually illustrate the properties of the proposed method on two-dimensional datasets, and benchmark its performance against Deep Ensembles and other single-model approaches across both vision and language understanding tasks and on modern architectures (ResNet and BERT). Despite its simplicity, SNGP is competitive with Deep Ensembles in prediction, calibration, and out-of-domain detection, and significantly outperforms the other single-model approaches.
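To make the two ingredients named above concrete, here is a minimal, hypothetical PyTorch sketch of a distance-aware classifier in the spirit of SNGP: spectrally normalized hidden layers followed by a random-Fourier-feature approximation of a Gaussian process output layer. The layer sizes, the feature approximation, and the variance computation are illustrative assumptions, not the paper's exact recipe.

# Minimal sketch (not the authors' implementation): a distance-aware
# classifier in the spirit of SNGP -- spectrally normalized hidden layers
# plus a random-Fourier-feature Gaussian-process output layer.
import math
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm

class TinySNGP(nn.Module):
    def __init__(self, in_dim=32, hidden=128, num_rff=256, num_classes=10):
        super().__init__()
        # Spectral normalization bounds each layer's Lipschitz constant, so the
        # hidden representation roughly preserves distances in input space.
        self.body = nn.Sequential(
            spectral_norm(nn.Linear(in_dim, hidden)), nn.ReLU(),
            spectral_norm(nn.Linear(hidden, hidden)), nn.ReLU(),
        )
        # A fixed random projection followed by a cosine gives random Fourier
        # features, an approximation to an RBF-kernel Gaussian process.
        self.register_buffer("rff_w", torch.randn(hidden, num_rff))
        self.register_buffer("rff_b", 2 * math.pi * torch.rand(num_rff))
        self.out = nn.Linear(num_rff, num_classes, bias=False)
        self.num_rff = num_rff

    def features(self, x):
        h = self.body(x)
        return math.sqrt(2.0 / self.num_rff) * torch.cos(h @ self.rff_w + self.rff_b)

    def forward(self, x):
        return self.out(self.features(x))  # logits; trained with cross-entropy

    @torch.no_grad()
    def predictive_variance(self, x, precision):
        # Laplace-style variance from a precision matrix over the GP weights:
        # var(x) = phi(x)^T precision^{-1} phi(x).
        phi = self.features(x)
        cov = torch.linalg.inv(precision)
        return (phi @ cov * phi).sum(-1)

In the paper's approach the precision matrix is accumulated over the training data (roughly an identity plus a sum of phi(x) phi(x)^T terms), so the predictive variance grows for inputs that lie far from anything seen during training.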
How Large Are Lions? Inducing Distributions over Quantitative Attributes
Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL 2019)
Abstract
Most current NLP systems have little knowledge about quantitative attributes of objects and events. We propose an unsupervised method for collecting quantitative information from large amounts of web data, and use it to create a new, very large resource consisting of distributions over physical quantities associated with objects, adjectives, and verbs, which we call Distribution over Quantities (DoQ). This contrasts with recent work in this area, which has focused on making only relative comparisons such as "Is a lion bigger than a wolf?". Our evaluation shows that DoQ compares favorably with state-of-the-art results on existing datasets for relative comparisons of nouns and adjectives, and on a new dataset we introduce.
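As a purely hypothetical illustration of what such a resource enables (the objects, units, and numbers below are invented, not taken from DoQ), a distribution over a quantity for an (object, attribute) pair can be stored directly and reused for relative comparisons:

# Hypothetical illustration (objects, units, and numbers invented, not taken
# from DoQ): a distribution over a physical quantity for an (object, attribute)
# pair, used to answer a relative comparison.
from dataclasses import dataclass
import statistics

@dataclass
class QuantityDistribution:
    obj: str
    attribute: str
    unit: str
    samples: list  # values mined from web text, converted to a common unit

    def median(self):
        return statistics.median(self.samples)

lion_mass = QuantityDistribution("lion", "mass", "kg", [130, 170, 190, 210, 250])
wolf_mass = QuantityDistribution("wolf", "mass", "kg", [30, 38, 45, 50, 60])

# "Is a lion bigger than a wolf?" answered from the two distributions.
print(lion_mass.median() > wolf_mass.median())  # True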
Incremental Learning from Text for Question Answering
Samira Abnar
Continual Learning Workshop, Neural Information Processing Systems (NIPS) 2018
Abstract
Any system that performs goal-directed continual learning must not only learn incrementally but also process and absorb information incrementally. Such a system also has to understand when its goals have been achieved. In this paper, we consider these issues in the context of question answering. Current state-of-the-art question answering models reason over an entire passage, not incrementally. As we will show, naive approaches to incremental reading, such as restricting the model to a unidirectional language model, perform poorly. We present extensions to the DocQA [2] model that allow incremental reading without loss of accuracy. The model also jointly learns to provide the best answer given the text seen so far and to predict whether this best-so-far answer is sufficient.
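A minimal sketch of the control loop this implies is shown below; score_answer and is_sufficient are hypothetical stand-ins for the extended model's components, not the paper's actual interfaces.

# Illustrative sketch only: the incremental-reading control loop with a
# best-so-far answer and a learned sufficiency check. `score_answer` and
# `is_sufficient` are hypothetical stand-ins for model components.
def answer_incrementally(question, sentences, score_answer, is_sufficient):
    """Read `sentences` one at a time, tracking the best answer seen so far."""
    seen, best_answer, best_score = [], None, float("-inf")
    for sentence in sentences:
        seen.append(sentence)
        answer, score = score_answer(question, seen)  # best span given text so far
        if score > best_score:
            best_answer, best_score = answer, score
        if is_sufficient(question, seen, best_answer):
            break  # the model judges the best-so-far answer to be good enough
    return best_answer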
Points, Paths, and Playscapes: Large-scale Spatial Language Understanding Tasks Set in the Real World
Daphne Luong
Proceedings of the First International Workshop on Spatial Language Understanding, Association for Computational Linguistics, New Orleans, Louisiana, USA (2018), pp. 46-52
Abstract
Spatial language understanding is important for practical applications and as a building block for better abstract language understanding. Much progress has been made through work on understanding spatial relations and values in images and texts as well as on giving and following navigation instructions in restricted domains. We argue that the next big advances in spatial language understanding can be best supported by creating large-scale datasets that focus on points and paths based in the real world, and then extending these to create online, persistent playscapes that mix human and bot players. The bot players can begin play having undergone a prior training regime, but then must learn, evolve, and survive according to their depth of understanding of scenes, navigation, and interactions.
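As a rough, hypothetical sketch of what a task instance in such a dataset might contain (the field names and structure below are assumptions, not a proposed standard):

# Hypothetical schema sketch (field names and structure are assumptions, not a
# proposed standard): what "point" and "path" task instances grounded in the
# real world might contain.
from dataclasses import dataclass
from typing import List, Tuple

LatLng = Tuple[float, float]  # (latitude, longitude)

@dataclass
class PointTask:
    description: str   # e.g. "the fountain in front of the museum"
    target: LatLng     # real-world location the description refers to

@dataclass
class PathTask:
    instructions: str  # e.g. "walk two blocks north, then turn left at the cafe"
    start: LatLng
    waypoints: List[LatLng]  # the path a follower should trace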