
Utku Evci
Utku joined Google through the AI Residency Program in the summer of 2018 after completing his M.Sc. degree in Computer Science at NYU Courant on a Fulbright Scholarship. During his time at NYU, he worked with Levent Sagun on understanding the energy landscape of neural networks. Following his interest in network pruning, Utku wrote his M.Sc. thesis on detecting dead units in neural networks (advised by Prof. Leon Bottou). Prior to his M.Sc. degree, Utku completed his undergraduate degree at Koç University, Istanbul, double majoring in Electrical Engineering and Computer Engineering. His earlier research experience includes summer internships at EPFL, Switzerland (2015) and the University of Amsterdam, Netherlands (2016).
Utku is excited to do research at Google and to leverage its computational resources to conduct research at scale. He believes neural network training is far from optimal in the speed and resources it requires, and he wants to make networks faster, smaller, and more agile at learning a variety of tasks.
His off-research interests include running, yoga, climbing, and maker projects.
His personal page and blog posts related to ML and engineering can be found here.
Research Areas
Authored Publications
Scaling Vision Transformers to 22 Billion Parameters
Josip Djolonga
Basil Mustafa
Piotr Padlewski
Justin Gilmer
Mathilde Caron
Rodolphe Jenatton
Lucas Beyer
Michael Tschannen
Anurag Arnab
Carlos Riquelme
Matthias Minderer
Gamaleldin Elsayed
Fisher Yu
Avital Oliver
Fantine Huot
Mark Collier
Vighnesh Birodkar
Yi Tay
Alexander Kolesnikov
Filip Pavetić
Thomas Kipf
Xiaohua Zhai
Neil Houlsby
arXiv (2023)
GradMax: Growing Neural Networks using Gradient Information
Bart van Merriënboer
Thomas Unterthiner
International Conference on Learning Representations (2022)
Training Recipe for N:M Structured Sparsity with Decaying Pruning Mask
Sheng-Chun Kao
Shivani Agrawal
Suvinay Subramanian
Tushar Krishna
(2022, to appear)
Gradient Flow in Sparse Neural Networks and How Lottery Tickets Win
Yani Ioannou
Cem Keskin
AAAI Conference on Artificial Intelligence (2022)
Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning
Mike Mozer
Proceedings of the 39th International Conference on Machine Learning, PMLR (2022)
The State of Sparse Training in Deep Reinforcement Learning
Erich Elsen
Proceedings of the 39th International Conference on Machine Learning, PMLR (2022)
A Unified Few-Shot Classification Benchmark to Compare Transfer and Meta Learning Approaches
Neil Houlsby
Xiaohua Zhai
Sylvain Gelly
NeurIPS Datasets and Benchmarks Track (2021)
Meta-Dataset: A Dataset of Datasets for Learning to Learn from Few Examples
Eleni Triantafillou
Tyler Zhu
Kelvin Xu
Carles Gelada
International Conference on Learning Representations (submission) (2020)
Rigging The Lottery: Making All Tickets Winners
Jacob Menick
Erich Elsen
International Conference on Machine Learning (2020)