# Tomer Koren

I'm a Senior Research Scientist at Google Research, Tel Aviv, and an Assistant Professor at the Blavatnik School of Computer Science at Tel Aviv University. My research interests are in machine learning and optimization.

### Publications


Stochastic Optimization with Laggard Data Pipelines

Cyril Zhang, Kunal Talwar, Rohan Anil

Thirty-Fourth Conference on Neural Information Processing Systems (NeurIPS), 2020 (to appear)

Abstract:
State-of-the-art optimization has increasingly moved toward massively parallel pipelines with extremely large batches. As a consequence, the performance bottleneck is shifting towards the CPU- and disk-bound data loading and preprocessing, as opposed to hardware-accelerated backpropagation. In this regime, a recently proposed approach is data echoing (Choi et al. '19), which takes repeated gradient steps on the same batch. We provide the first convergence analysis of data echoing-based extensions of ubiquitous optimization methods, exhibiting provable improvements over their synchronous counterparts. Specifically, we show that asynchronous batch reuse can magnify the gradient signal in a stochastic batch, without harming the statistical rate.
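The idea behind data echoing can be sketched in a few lines: each expensive batch fetch is amortized over several cheap gradient steps. This is a minimal illustrative sketch, not the paper's algorithm or reference code; the function and parameter names (`sgd_with_echoing`, `echo`) are assumptions made for illustration.

```python
# Hypothetical sketch of data echoing: take several gradient steps on each
# fetched batch, amortizing slow data loading over multiple updates.
# Names here (`load_batch`, `grad`, `echo`) are illustrative, not from the paper.
import numpy as np

def sgd_with_echoing(w, load_batch, grad, lr=0.1, num_fetches=10, echo=3):
    """Run SGD, reusing each fetched batch `echo` times before fetching again."""
    for _ in range(num_fetches):
        batch = load_batch()      # expensive: CPU- and disk-bound
        for _ in range(echo):     # cheap: accelerator-bound gradient steps
            w = w - lr * grad(w, batch)
    return w
```

With `echo=1` this reduces to ordinary minibatch SGD; the analysis in the paper concerns how large the reuse factor can be before the repeated steps stop helping.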

Robust Bi-Tempered Logistic Loss Based on Bregman Divergences

Ehsan Amid, Rohan Anil

Thirty-Third Annual Conference on Neural Information Processing Systems (NeurIPS), 2019

Abstract:
We introduce a temperature into the exponential function and replace the softmax output layer of neural nets with a high-temperature generalization. Similarly, the logarithm in the log loss used for training is replaced by a low-temperature logarithm. By tuning the two temperatures, we create loss functions that are non-convex already in the single-layer case. When the last layer of a neural net is replaced by our bi-temperature generalization of the logistic loss, training becomes more robust to noise. We visualize the effect of tuning the two temperatures in a simple setting and demonstrate the efficacy of our method on large datasets. Our methodology is based on Bregman divergences and outperforms a related two-temperature method based on the Tsallis divergence.
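The tempered functions mentioned in the abstract have simple closed forms: the tempered logarithm and its inverse, both reducing to the standard `log`/`exp` as the temperature approaches 1. This is a minimal sketch of those building blocks only, assuming the standard definitions; the function names are illustrative and this is not the authors' reference implementation (the full bi-tempered loss also requires an iteratively computed normalization).

```python
# Tempered log/exp building blocks of the bi-tempered loss (illustrative sketch).
import numpy as np

def log_t(x, t):
    """Tempered logarithm: log_t(x) = (x^(1-t) - 1) / (1 - t); log for t = 1."""
    if t == 1.0:
        return np.log(x)
    return (x ** (1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    """Tempered exponential, the inverse of log_t on its domain.
    exp_t(x) = max(1 + (1-t) x, 0)^(1/(1-t)); exp for t = 1."""
    if t == 1.0:
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - t) * x, 0.0) ** (1.0 / (1.0 - t))
```

Using a temperature below 1 in the loss bounds it from above (robustness to outliers), while a temperature above 1 in the output transfer function yields heavier tails than softmax.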
