Google at ICML 2018

July 9, 2018

Posted by Christian Howard, Editor-in-Chief, Google AI Communications



Machine learning is a key strategic focus at Google, with highly active groups pursuing research in virtually all aspects of the field, from deep learning to more classical algorithms, and exploring theory as well as applications. We use scalable tools and architectures to build machine learning systems that enable us to solve deep scientific and engineering challenges in areas such as language, speech, translation, music, and visual processing.

As a leader in machine learning research, Google is proud to be a Platinum Sponsor of the thirty-fifth International Conference on Machine Learning (ICML 2018), a premier annual event supported by the International Machine Learning Society and taking place this week in Stockholm, Sweden. With more than 130 Googlers attending the conference to present publications and host workshops, we look forward to our continued collaboration with the larger ML research community.

If you're attending ICML 2018, we hope you'll visit the Google booth and talk with our researchers to learn more about the exciting work, creativity and fun that go into solving some of the field's most interesting challenges. Our researchers will also be available to discuss TensorFlow Hub, the latest work from the Magenta project, the Google AI Residency program (including a Q&A session), and much more. You can also learn more about our research being presented at ICML 2018 in the list below (Googlers highlighted in blue).
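
If you'd like to try TensorFlow Hub before stopping by the booth, the sketch below shows the general idea of loading a reusable, pre-trained module and using it to embed text. It assumes a TensorFlow 1.x environment with the tensorflow_hub package installed; the particular module URL is just an illustrative choice from tfhub.dev.

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load a reusable, pre-trained sentence-embedding module from TensorFlow Hub.
# (Illustrative module choice; any text-embedding module on tfhub.dev works similarly.)
embed = hub.Module("https://tfhub.dev/google/universal-sentence-encoder/2")
embeddings = embed(["Machine learning is a key strategic focus at Google."])

with tf.Session() as sess:
    # The module ships its own variables and lookup tables; initialize both.
    sess.run([tf.global_variables_initializer(), tf.tables_initializer()])
    print(sess.run(embeddings).shape)  # e.g. (1, 512) for this encoder
```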

ICML 2018 Committees
Board Members include: Andrew McCallum, Corinna Cortes, Hugo Larochelle, William Cohen
Sponsorship Co-Chair: Ryan Adams

Accepted Publications
Predict and Constrain: Modeling Cardinality in Deep Structured Prediction
Nataly Brukhim, Amir Globerson

Quickshift++: Provably Good Initializations for Sample-Based Mean Shift
Heinrich Jiang, Jennifer Jang, Samory Kpotufe

Learning a Mixture of Two Multinomial Logits
Flavio Chierichetti, Ravi Kumar, Andrew Tomkins

Structured Evolution with Compact Architectures for Scalable Policy Optimization
Krzysztof Choromanski, Mark Rowland, Vikas Sindhwani, Richard E Turner, Adrian Weller

Fixing a Broken ELBO
Alexander Alemi, Ben Poole, Ian Fischer, Joshua Dillon, Rif Saurous, Kevin Murphy

Hierarchical Long-term Video Prediction without Supervision
Nevan Wichers, Ruben Villegas, Dumitru Erhan, Honglak Lee

Self-Consistent Trajectory Autoencoder: Hierarchical Reinforcement Learning with Trajectory Embeddings
John Co-Reyes, Yu Xuan Liu, Abhishek Gupta, Benjamin Eysenbach, Pieter Abbeel, Sergey Levine

Well Tempered Lasso
Yuanzhi Li, Yoram Singer

Programmatically Interpretable Reinforcement Learning
Abhinav Verma, Vijayaraghavan Murali, Rishabh Singh, Pushmeet Kohli, Swarat Chaudhuri

Dynamical Isometry and a Mean Field Theory of CNNs: How to Train 10,000-Layer Vanilla Convolutional Neural Networks
Lechao Xiao, Yasaman Bahri, Jascha Sohl-Dickstein, Samuel Schoenholz, Jeffrey Pennington

On the Optimization of Deep Networks: Implicit Acceleration by Overparameterization
Sanjeev Arora, Nadav Cohen, Elad Hazan

Scalable Deletion-Robust Submodular Maximization: Data Summarization with Privacy and Fairness Constraints
Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi

Data Summarization at Scale: A Two-Stage Submodular Approach
Marko Mitrovic, Ehsan Kazemi, Morteza Zadimoghaddam, Amin Karbasi

Machine Theory of Mind
Neil Rabinowitz, Frank Perbet, Francis Song, Chiyuan Zhang, S. M. Ali Eslami, Matthew Botvinick

Learning to Optimize Combinatorial Functions
Nir Rosenfeld, Eric Balkanski, Amir Globerson, Yaron Singer

Proportional Allocation: Simple, Distributed, and Diverse Matching with High Entropy
Shipra Agrawal, Morteza Zadimoghaddam, Vahab Mirrokni

Path Consistency Learning in Tsallis Entropy Regularized MDPs
Yinlam Chow, Ofir Nachum, Mohammad Ghavamzadeh

Efficient Neural Architecture Search via Parameters Sharing
Hieu Pham, Melody Guan, Barret Zoph, Quoc Le, Jeff Dean

Adafactor: Adaptive Learning Rates with Sublinear Memory Cost
Noam Shazeer, Mitchell Stern

Learning Memory Access Patterns
Milad Hashemi, Kevin Swersky, Jamie Smith, Grant Ayers, Heiner Litz, Jichuan Chang, Christos Kozyrakis, Parthasarathy Ranganathan

SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation
Bo Dai, Albert Shaw, Lihong Li, Lin Xiao, Niao He, Zhen Liu, Jianshu Chen, Le Song

Scalable Bilinear π Learning Using State and Action Features
Yichen Chen, Lihong Li, Mengdi Wang

Distributed Asynchronous Optimization with Unbounded Delays: How Slow Can You Go?
Zhengyuan Zhou, Panayotis Mertikopoulos, Nicholas Bambos, Peter Glynn, Yinyu Ye, Li-Jia Li, Li Fei-Fei

Shampoo: Preconditioned Stochastic Tensor Optimization
Vineet Gupta, Tomer Koren, Yoram Singer

Parallel and Streaming Algorithms for K-Core Decomposition
Hossein Esfandiari, Silvio Lattanzi, Vahab Mirrokni

Can Deep Reinforcement Learning Solve Erdos-Selfridge-Spencer Games?
Maithra Raghu, Alexander Irpan, Jacob Andreas, Bobby Kleinberg, Quoc Le, Jon Kleinberg

Is Generator Conditioning Causally Related to GAN Performance?
Augustus Odena, Jacob Buckman, Catherine Olsson, Tom Brown, Christopher Olah, Colin Raffel, Ian Goodfellow

The Mirage of Action-Dependent Baselines in Reinforcement Learning
George Tucker, Surya Bhupatiraju, Shixiang Gu, Richard E Turner, Zoubin Ghahramani, Sergey Levine

MentorNet: Learning Data-Driven Curriculum for Very Deep Neural Networks on Corrupted Labels
Lu Jiang, Zhengyuan Zhou, Thomas Leung, Li-Jia Li, Li Fei-Fei

Loss Decomposition for Fast Learning in Large Output Spaces
En-Hsu Yen, Satyen Kale, Felix Xinnan Yu, Daniel Holtmann-Rice, Sanjiv Kumar, Pradeep Ravikumar

A Hierarchical Latent Vector Model for Learning Long-Term Structure in Music
Adam Roberts, Jesse Engel, Colin Raffel, Curtis Hawthorne, Douglas Eck

Smoothed Action Value Functions for Learning Gaussian Policies
Ofir Nachum, Mohammad Norouzi, George Tucker, Dale Schuurmans

Fast Decoding in Sequence Models Using Discrete Latent Variables
Lukasz Kaiser, Samy Bengio, Aurko Roy, Ashish Vaswani, Niki Parmar, Jakob Uszkoreit, Noam Shazeer

Accelerating Greedy Coordinate Descent Methods
Haihao Lu, Robert Freund, Vahab Mirrokni

Approximate Leave-One-Out for Fast Parameter Tuning in High Dimensions
Shuaiwen Wang, Wenda Zhou, Haihao Lu, Arian Maleki, Vahab Mirrokni

Image Transformer
Niki Parmar, Ashish Vaswani, Jakob Uszkoreit, Lukasz Kaiser, Noam Shazeer, Alexander Ku, Dustin Tran

Towards End-to-End Prosody Transfer for Expressive Speech Synthesis with Tacotron
RJ Skerry-Ryan, Eric Battenberg, Ying Xiao, Yuxuan Wang, Daisy Stanton, Joel Shor, Ron Weiss, Robert Clark, Rif Saurous

Dynamical Isometry and a Mean Field Theory of RNNs: Gating Enables Signal Propagation in Recurrent Neural Networks
Minmin Chen, Jeffrey Pennington, Samuel Schoenholz

Style Tokens: Unsupervised Style Modeling, Control and Transfer in End-to-End Speech Synthesis
Yuxuan Wang, Daisy Stanton, Yu Zhang, RJ Skerry-Ryan, Eric Battenberg, Joel Shor, Ying Xiao, Ye Jia, Fei Ren, Rif Saurous

Constrained Interacting Submodular Groupings
Andrew Cotter, Mahdi Milani Fard, Seungil You, Maya Gupta, Jeff Bilmes

Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training
Xi Wu, Uyeong Jang, Jiefeng Chen, Lingjiao Chen, Somesh Jha

Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors (TCAV)
Been Kim, Martin Wattenberg, Justin Gilmer, Carrie Cai, James Wexler, Fernanda Viégas, Rory Sayres

Online Learning with Abstention
Corinna Cortes, Giulia DeSalvo, Claudio Gentile, Mehryar Mohri, Scott Yang

Online Linear Quadratic Control
Alon Cohen, Avinatan Hassidim, Tomer Koren, Nevena Lazic, Yishay Mansour, Kunal Talwar

Competitive Caching with Machine Learned Advice
Thodoris Lykouris, Sergei Vassilvitskii

Efficient Neural Audio Synthesis
Nal Kalchbrenner, Erich Elsen, Karen Simonyan, Seb Noury, Norman Casagrande, Edward Lockhart, Florian Stimberg, Aäron van den Oord, Sander Dieleman, Koray Kavukcuoglu

Gradient Descent with Identity Initialization Efficiently Learns Positive Definite Linear Transformations by Deep Residual Networks
Peter Bartlett, Dave Helmbold, Phil Long

Understanding and Simplifying One-Shot Architecture Search
Gabriel Bender, Pieter-Jan Kindermans, Barret Zoph, Vijay Vasudevan, Quoc Le

Approximation Algorithms for Cascading Prediction Models
Matthew Streeter

Learning Longer-term Dependencies in RNNs with Auxiliary Losses
Trieu Trinh, Andrew Dai, Thang Luong, Quoc Le

Self-Imitation Learning
Junhyuk Oh, Yijie Guo, Satinder Singh, Honglak Lee

Adaptive Sampled Softmax with Kernel Based Sampling
Guy Blanc, Steffen Rendle

Workshops
2018 Workshop on Human Interpretability in Machine Learning (WHI)
Organizers: Been Kim, Kush Varshney, Adrian Weller
Invited Speakers include: Fernanda Viégas, Martin Wattenberg

Exploration in Reinforcement Learning
Organizers: Ben Eysenbach, Surya Bhupatiraju, Shane Gu, Junhyuk Oh, Vincent Vanhoucke, Oriol Vinyals, Doina Precup

Theoretical Foundations and Applications of Deep Generative Models
Invited Speakers include: Honglak Lee