Elad Eban
Dr. Elad Eban is a staff research scientist at Google Research (Perception). Since joining Google in 2015, he has focused his research on leveraging ML theory to design better algorithms that can be integrated directly into production systems. Models based on his work have been deployed in various Google products serving billions of users. Previously, Elad founded an ML consulting firm supporting startups and small companies.
Elad's research has been published in top-tier journals such as Nature Neuroscience and presented at leading machine learning and computer vision conferences such as ICML (including a best paper award), CVPR, and UAI. He obtained his PhD in 2015 from the Hebrew University of Jerusalem under the supervision of Amir Globerson and Shai Shalev-Shwartz.
Research Areas
Authored Publications
Abstract
Knowledge Distillation is a popular method to reduce model size by transferring the knowledge of a large teacher model to a smaller student network. We show that it is possible to independently replace sub-parts of a network without accuracy loss. Based on this, we propose a distillation method that breaks the end-to-end paradigm by splitting the teacher architecture into smaller sub-networks - also called neighbourhoods. For each neighbourhood we distill a student independently and then merge them into a single student model. We show that this process is significantly faster than Knowledge Distillation, and produces students of the same quality.
From Neighbourhood Distillation, we design Student Search, an architecture search that leverages the independently distilled candidates to explore an exponentially large search space of architectures and locally selects the best candidate to use for the student model.
We demonstrate Neighbourhood Distillation and Student Search on model-reduction and sparsification problems with CIFAR-10 and ImageNet models. Our method offers up to a 4.6× speed-up over end-to-end distillation while retaining the same performance.
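As a rough illustration of the per-neighbourhood distillation step described in the abstract, the sketch below trains each student block to match the corresponding teacher sub-network on cached activations and then stitches the blocks together; the training interface, loss, and hyperparameters are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch of per-neighbourhood distillation (assumed interface, not the
# authors' code): each teacher sub-network ("neighbourhood") is distilled into a
# smaller student block independently; the blocks are then merged into one model.
import torch
import torch.nn as nn

def distill_neighbourhood(teacher_block, student_block, activations, steps=1000):
    """Train student_block to reproduce teacher_block's input -> output map.

    `activations` is assumed to yield inputs to this neighbourhood, e.g. teacher
    activations cached from a pass over the training data.
    """
    opt = torch.optim.Adam(student_block.parameters(), lr=1e-3)
    for _, x in zip(range(steps), activations):
        with torch.no_grad():
            target = teacher_block(x)  # teacher output for this sub-network
        loss = nn.functional.mse_loss(student_block(x), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return student_block

def merge_students(student_blocks):
    # Stitch the independently distilled blocks back into a single student model.
    return nn.Sequential(*student_blocks)
```

Since each neighbourhood is distilled in isolation, the per-block jobs can in principle run in parallel, which is consistent with the reported speed-up over end-to-end distillation.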
Fine-Grained Stochastic Architecture Search
Shraman Ray Chaudhuri
Hanhan Li
Max Moroz
ICLR Workshop on Neural Architecture Search (2020)
Abstract
State-of-the-art deep networks are often too large to deploy on mobile devices and embedded systems. Mobile neural architecture search (NAS) methods automate the design of small models, but state-of-the-art NAS methods are expensive to run. Differentiable neural architecture search (DNAS) methods reduce the search cost but explore a limited subspace of candidate architectures. In this paper, we introduce Fine-Grained Stochastic Architecture Search (FiGS), a differentiable search method that searches over a much larger set of candidate architectures. FiGS simultaneously selects and modifies operators in the search space by applying a structured sparse regularization penalty based on the Logistic-Sigmoid distribution. We show results across 3 existing search spaces, matching or outperforming the original search algorithms and producing state-of-the-art parameter-efficient models on ImageNet (e.g., 75.4% top-1 with 2.6M params). Using our architectures as backbones for object detection with SSDLite, we achieve significantly higher mAP on COCO (e.g., 25.8 with 3.0M params) than MobileNetV3 and MnasNet.
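A minimal sketch of the kind of stochastic gating the abstract describes: each unit (or operator) receives a learnable gate sampled from a Logistic-Sigmoid relaxation, and a cost-weighted penalty on the expected gate activity induces structured sparsity. The class below is an illustrative assumption, not the paper's exact formulation.

```python
# Hedged sketch of a stochastic channel/operator gate in the spirit of FiGS.
# Constants, the temperature, and the test-time rule are illustrative assumptions.
import torch
import torch.nn as nn

class StochasticGate(nn.Module):
    def __init__(self, num_units, temperature=0.5):
        super().__init__()
        self.log_alpha = nn.Parameter(torch.zeros(num_units))  # per-unit gate logits
        self.temperature = temperature

    def forward(self, x):
        if self.training:
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1 - 1e-6)
            logistic = torch.log(u) - torch.log(1 - u)  # Logistic noise
            gate = torch.sigmoid((self.log_alpha + logistic) / self.temperature)
        else:
            gate = (self.log_alpha > 0).float()  # deterministic gate at test time
        return x * gate, gate

    def expected_cost(self, unit_cost=1.0):
        # Differentiable proxy for the (resource-weighted) number of active units;
        # added to the task loss as a structured sparsity penalty.
        return (torch.sigmoid(self.log_alpha) * unit_cost).sum()
```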
Sky Optimization: Semantically aware image processing in low-light photography
Yun-Ta Tsai
Huizhong Chen
(2020), pp. 526-527
Abstract
The sky is a major component of the appearance of a photograph, and its color and tone can strongly influence the mood of a picture. In nighttime photography, the sky can also suffer from noise and color artifacts. For this reason, there is a strong desire to process the sky in isolation from the rest of the scene to achieve an optimal look. In this work, we propose an automated method, which can run as part of a camera pipeline, for creating accurate sky alpha-masks and using them to improve the appearance of the sky. Our method performs end-to-end sky optimization in less than half a second per image on a mobile device.
We introduce a method for creating an accurate sky-mask dataset based on partially annotated images that are inpainted and refined by our modified weighted guided filter, and we use this dataset to train a neural network for semantic sky segmentation. Due to the compute and power constraints of mobile devices, sky segmentation is performed at a low image resolution; our modified weighted guided filter is then used for edge-aware upsampling to resize the alpha-mask to a higher resolution. With this detailed mask we automatically apply post-processing steps to the sky in isolation, such as automatic spatially varying white balance, brightness adjustments, contrast enhancement, and noise reduction.
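To make the edge-aware upsampling step concrete, here is a simplified sketch that uses a plain (unweighted) guided filter to refine a bilinearly upsampled low-resolution sky mask against the full-resolution luma. The paper uses a modified weighted guided filter, so the code below is only a stand-in with assumed parameters.

```python
# Simplified edge-aware upsampling of a low-res alpha mask with a plain guided
# filter (the paper's weighted variant is not reproduced here).
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def guided_filter(guide, src, radius=8, eps=1e-3):
    size = 2 * radius + 1
    mean_i = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_ip = uniform_filter(guide * src, size) - mean_i * mean_p
    var_i = uniform_filter(guide * guide, size) - mean_i * mean_i
    a = cov_ip / (var_i + eps)          # local linear coefficients
    b = mean_p - a * mean_i
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

def upsample_sky_mask(low_res_mask, high_res_luma):
    scale = (high_res_luma.shape[0] / low_res_mask.shape[0],
             high_res_luma.shape[1] / low_res_mask.shape[1])
    coarse = zoom(low_res_mask.astype(np.float32), scale, order=1)  # bilinear upsample
    refined = guided_filter(high_res_luma.astype(np.float32), coarse)
    return np.clip(refined, 0.0, 1.0)
```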
Seq2Slate: Re-ranking and Slate Optimization with RNNs
Irwan Bello
Sagar Jain
Alan Mackey
arXiv (2019)
Abstract
Ranking is a central task in machine learning and information retrieval. In this task, it is especially important to present the user with a slate of items that is appealing as a whole. This in turn requires taking into account interactions between items, since intuitively, placing an item on the slate affects the decision of which other items should be placed alongside it. In this work, we propose a sequence-to-sequence model for ranking called seq2slate. At each step, the model predicts the next "best" item to place on the slate given the items already selected. The sequential nature of the model allows complex dependencies between the items to be captured directly in a flexible and scalable way. We show how to learn the model end-to-end from weak supervision in the form of easily obtained click-through data. We further demonstrate the usefulness of our approach in experiments on standard ranking benchmarks as well as in a real-world recommendation system.
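A minimal sketch of the sequential decoding idea: at each step a learned scorer, conditioned on the slate built so far, scores the remaining candidates, and the highest-scoring item is appended. The score_next function stands in for the paper's RNN model and is a hypothetical placeholder.

```python
# Greedy slate construction in the seq2slate spirit; the learned model is
# abstracted behind `score_next` (an assumed interface for illustration).
import numpy as np

def rerank_greedy(item_features, score_next, slate_size):
    remaining = list(range(len(item_features)))
    slate = []
    for _ in range(min(slate_size, len(remaining))):
        scores = score_next([item_features[i] for i in slate],
                            [item_features[i] for i in remaining])
        best = remaining[int(np.argmax(scores))]
        slate.append(best)
        remaining.remove(best)
    return slate

# Toy scorer standing in for the trained network: prefer items dissimilar
# to what is already on the slate.
def toy_scorer(selected, remaining):
    if not selected:
        return [float(np.sum(r)) for r in remaining]
    sel = np.mean(selected, axis=0)
    return [-float(np.dot(r, sel)) for r in remaining]
```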
Abstract
Image compression using neural networks has reached or exceeded non-neural methods (such as JPEG, WebP, and BPG). While these networks are state of the art in rate-distortion performance, the computational feasibility of these models remains a challenge. Our work provides three novel contributions: a run-time improvement to the Generalized Divisive Normalization formulation, a regularization technique targeted at optimizing neural image decoders, and an analysis of the trade-offs in 207 architecture variations across multiple distortion loss functions, leading to a recommended architecture that is twice as fast while maintaining state-of-the-art image compression performance.
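For reference, a sketch of a Generalized Divisive Normalization layer together with a cheaper variant that drops the square and square-root; simplifications of this kind are a common way to cut GDN's run-time cost, though whether this matches the paper's exact change is an assumption here.

```python
# Standard GDN and a simplified (square/sqrt-free) variant; initialization
# and the `simplified` switch are illustrative assumptions.
import torch
import torch.nn as nn

class GDN(nn.Module):
    def __init__(self, channels, simplified=False):
        super().__init__()
        self.beta = nn.Parameter(torch.ones(channels))
        self.gamma = nn.Parameter(0.1 * torch.eye(channels))
        self.simplified = simplified

    def forward(self, x):                      # x: (N, C, H, W)
        n, c, h, w = x.shape
        flat = x.reshape(n, c, -1)
        if self.simplified:
            # Cheaper normalization: no square, no square root.
            norm = torch.einsum('ij,njk->nik', self.gamma.abs(), flat.abs())
            denom = self.beta.abs().view(1, c, 1) + norm
        else:
            norm = torch.einsum('ij,njk->nik', self.gamma.abs(), flat ** 2)
            denom = torch.sqrt(self.beta.abs().view(1, c, 1) + norm)
        return (flat / denom).reshape(n, c, h, w)
```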
Structured Multi-Hashing for Model Compression
Andrew Poon
Yerlan Idelbayev
Miguel A. Carreira-Perpinan
(2019)
Abstract
Despite the success of deep neural networks (DNNs), state-of-the-art models are too large to deploy on low-resource devices or common server configurations in which multiple models are held in memory. Model compression methods address this limitation by reducing the memory footprint, latency, or energy consumption of a model with minimal impact on accuracy. We focus on the task of reducing the number of learnable variables in the model. In this work we combine ideas from weight hashing and dimensionality reduction, resulting in a simple and powerful structured multi-hashing method based on matrix products that allows direct control of the model size of any deep network and is trained end-to-end. We demonstrate the strength of our approach by compressing models from the ResNet, EfficientNet, and MobileNet architecture families. Our method allows us to drastically decrease the number of variables while maintaining high accuracy. For instance, by applying our approach to EfficientNet-B4 (16M parameters) we reduce it to the size of B0 (5M parameters), while gaining over 3% in accuracy over the B0 baseline. On the commonly used CIFAR-10 benchmark, we reduce the ResNet32 model by 75% with no loss in quality, and achieve a 10x compression while still reaching above 90% accuracy.
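A loose sketch of the underlying idea: instead of storing every weight, the network's flattened weight vector is generated from a much smaller pool of trainable variables via a matrix product, so the parameter count is controlled by the pool size. The specific structure below is an illustrative assumption, not the paper's construction.

```python
# Generate all network weights from a small trainable pool via a matrix product,
# so model size is set by `rank` and the pool dimensions (illustrative assumption).
import torch
import torch.nn as nn

class ProductCompressedWeights(nn.Module):
    def __init__(self, total_params, rank=64):
        super().__init__()
        side = int(total_params ** 0.5) + 1
        # All `total_params` weights are read off the entries of U @ V.
        self.u = nn.Parameter(torch.randn(side, rank) * 0.01)
        self.v = nn.Parameter(torch.randn(rank, side) * 0.01)
        self.total_params = total_params

    def materialize(self, shapes):
        flat = (self.u @ self.v).reshape(-1)[: self.total_params]
        out, offset = [], 0
        for shape in shapes:
            n = int(torch.tensor(shape).prod())
            out.append(flat[offset: offset + n].reshape(shape))
            offset += n
        return out

# e.g. weights for two layers (500x400 and 400x250 = 300k values) generated from
# roughly 2 * 548 * 64 ≈ 70k trainable variables:
# gen = ProductCompressedWeights(total_params=300_000)
# w1, w2 = gen.materialize([(500, 400), (400, 250)])
```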
Abstract
We present MorphNet, an approach to automate the design of neural network structures. MorphNet iteratively shrinks and expands a network, shrinking via a resource-weighted sparsifying regularizer on activations and expanding via a uniform multiplicative factor on all layers. In contrast to previous approaches, our method is scalable to large networks, adaptable to specific resource constraints (e.g. the number of floating-point operations per inference), and capable of increasing the network's performance. When applied to standard network architectures on a wide variety of datasets, our approach discovers novel structures in each domain, obtaining higher performance while respecting the resource constraint.
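A compact sketch of the shrink-and-expand loop under the scheme described above: shrink by adding a resource-weighted L1 penalty on per-channel batch-norm scales (channels driven near zero are removed), then expand the surviving widths by a uniform multiplier and retrain. Cost weights, the threshold, and the expansion factor are illustrative assumptions.

```python
# MorphNet-style shrink/expand helpers (illustrative assumptions, not the
# released implementation).
import torch.nn as nn

def sparsity_regularizer(model, channel_cost):
    """L1 penalty on BatchNorm scales, weighted by the resource cost per channel
    (e.g. FLOPs contributed by one channel of that layer)."""
    reg = 0.0
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            reg = reg + channel_cost.get(name, 1.0) * module.weight.abs().sum()
    return reg

def proposed_widths(model, threshold=1e-2, expand_factor=1.3):
    """Count surviving channels per layer after sparsification, then expand all
    layers by a uniform multiplicative factor."""
    widths = {}
    for name, module in model.named_modules():
        if isinstance(module, nn.BatchNorm2d):
            alive = int((module.weight.abs() > threshold).sum())
            widths[name] = max(1, int(round(alive * expand_factor)))
    return widths
```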
Abstract
In most machine learning applications, classification accuracy is not the primary metric of interest. Binary classifiers which face class imbalance are often evaluated by the Fβ score, area under the precision-recall curve, Precision at K, and more. The maximization of many of these metrics can be expressed as a constrained optimization problem, where the constraint is a function of the classifier's predictions.
In this paper we propose a novel framework for learning with constraints that can be expressed as a predicted positive rate (or negative rate) on a subset of the training data. We explicitly model the threshold at which a classifier must operate to satisfy the constraint, yielding a surrogate loss function which avoids the complexity of constrained optimization.
The method is model-agnostic and only marginally more expensive than minimization of the unconstrained loss. Experiments on a variety of benchmarks show competitive performance relative to existing baselines.
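One simplified reading of the approach, sketched below under stated assumptions: if the classifier may predict positive on at most a given fraction of a subset, its operating threshold is the corresponding score quantile of that subset, and plugging that threshold into a standard hinge surrogate avoids explicit constrained optimization. This is not claimed to be the paper's exact objective.

```python
# Surrogate loss with an explicitly modeled operating threshold for a
# predicted-positive-rate constraint (simplified illustration).
import torch

def rate_constrained_loss(scores, labels, subset_scores, rate):
    # Threshold = (1 - rate) quantile of scores on the constrained subset, so
    # roughly a `rate` fraction of that subset scores above it.
    threshold = torch.quantile(subset_scores.detach(), 1.0 - rate)
    margins = (scores - threshold) * (2.0 * labels - 1.0)   # labels in {0, 1}
    return torch.clamp(1.0 - margins, min=0.0).mean()       # hinge loss at the threshold
```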
Improper Deep Kernels
Uri Heinemann
Roi Livni
Proceedings of the 19th International Conference on Artificial Intelligence and Statistics (2016)
Abstract
Neural networks have recently re-emerged as a powerful hypothesis class, yielding impressive classification accuracy in multiple domains. However, their training is a non-convex optimization problem which poses theoretical and practical challenges. Here we address this difficulty by turning to "improper" learning of neural nets. In other words, we learn a classifier that is not a neural net but is competitive with the best neural net model given a sufficient number of training examples. Our approach relies on a novel kernel construction scheme in which the kernel is the result of integration over the set of all possible instantiations of neural models. It turns out that the corresponding integral can be evaluated in closed form via a simple recursion. Thus we translate the non-convex, hard learning problem of a neural net to an SVM with an appropriate kernel. We also provide sample complexity results which depend on the stability of the optimal neural net.
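To illustrate what a layer-wise kernel recursion of this kind can look like, the sketch below uses the arc-cosine (ReLU) kernel recursion purely as a stand-in; it is not claimed to be the paper's construction, only an example of evaluating a depth-L kernel in closed form and feeding it to an SVM.

```python
# Layer-wise kernel recursion (arc-cosine/ReLU kernel used as a stand-in).
import numpy as np

def relu_kernel(kxx, kyy, kxy):
    cos = np.clip(kxy / np.sqrt(kxx * kyy), -1.0, 1.0)
    theta = np.arccos(cos)
    return (np.sqrt(kxx * kyy) / np.pi) * (np.sin(theta) + (np.pi - theta) * cos)

def deep_kernel(x, y, depth=3):
    # Start from the linear kernel and apply one recursion step per layer.
    kxx, kyy, kxy = float(x @ x), float(y @ y), float(x @ y)
    for _ in range(depth):
        kxy = relu_kernel(kxx, kyy, kxy)
        kxx = relu_kernel(kxx, kxx, kxx)
        kyy = relu_kernel(kyy, kyy, kyy)
    return kxy
```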
Scalable Learning of Non-Decomposable Objectives
Alan Mackey
Ariel Gordon
arXiv preprint arXiv:1608.04802 (2016)
Abstract
Modern retrieval systems are often driven by an underlying machine learning model. The goal of such systems is to identify and possibly rank the few most relevant items for a given query or context. Thus, such systems are typically evaluated using a ranking-based performance metric such as the area under the precision-recall curve, the Fβ score, precision at fixed recall, etc. Obviously, it is desirable to train such systems to optimize the metric of interest.
In practice, due to the scalability limitations of existing approaches for optimizing such objectives, large-scale retrieval systems are instead trained to maximize classification accuracy, in the hope that performance as measured via the true objective will also be favorable. In this work we present a unified framework that, using straightforward building block bounds, allows for highly scalable optimization of a wide range of ranking-based objectives. We demonstrate the advantage of our approach on several real-life retrieval problems that are significantly larger than those considered in the literature, while achieving substantial improvement in performance over the accuracy-objective baseline.
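A brief sketch of the building-block-bound idea for one such objective, precision at a fixed recall: true positives are lower-bounded and false positives upper-bounded by hinge losses, and the recall constraint is handled with a Lagrange multiplier updated alongside the model. The constants and update rule below are assumptions for illustration.

```python
# Hinge-based bounds on TP and FP counts, combined via a Lagrange multiplier
# into a surrogate for precision at a target recall (illustrative assumptions).
import torch

def precision_at_recall_surrogate(scores, labels, lam, target_recall):
    pos, neg = labels > 0.5, labels <= 0.5
    tp_lower = (1.0 - torch.clamp(1.0 - scores[pos], min=0.0)).sum()  # tp >= this
    fp_upper = torch.clamp(1.0 + scores[neg], min=0.0).sum()          # fp <= this
    num_pos = pos.float().sum()
    # Minimize false positives while keeping recall >= target_recall.
    recall_slack = target_recall * num_pos - tp_lower
    return fp_upper + lam * recall_slack, recall_slack

# Training fragment: `lam` is a non-negative multiplier, increased when the
# recall constraint is violated.
# loss, slack = precision_at_recall_surrogate(model(x), y, lam, 0.9)
# loss.backward(); opt.step()
# lam = max(0.0, lam + lr_lam * float(slack))
```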