This week, Toulon, France hosts the 5th International Conference on Learning Representations (ICLR 2017), a conference focused on how one can learn meaningful and useful representations of data for machine learning. ICLR includes conference and workshop tracks, with invited talks along with oral and poster presentations of some of the latest research on deep learning, metric learning, kernel learning, compositional models, non-linear structured prediction, and issues regarding non-convex optimization.
At the forefront of research in neural networks and deep learning, Google focuses on both theory and application, developing learning approaches that understand and generalize. As a Platinum Sponsor of ICLR 2017, Google will have a strong presence, with over 50 researchers attending (many from the Google Brain team and Google Research Europe), contributing to and learning from the broader academic research community by presenting papers and posters, in addition to participating on organizing committees and in workshops.
If you are attending ICLR 2017, we hope you'll stop by our booth and chat with our researchers about the projects and opportunities at Google that go into solving interesting problems for billions of people. You can also learn more about our research being presented at ICLR 2017 in the list below (Googlers highlighted in blue).
Area Chairs include:
George Dahl, Slav Petrov, Vikas Sindhwani

Program Chairs include:
Hugo Larochelle, Tara Sainath

Contributed Talks

Understanding Deep Learning Requires Rethinking Generalization (Best Paper Award)
Chiyuan Zhang*, Samy Bengio, Moritz Hardt, Benjamin Recht*, Oriol Vinyals

Semi-Supervised Knowledge Transfer for Deep Learning from Private Training Data (Best Paper Award)
Nicolas Papernot*, Martín Abadi, Úlfar Erlingsson, Ian Goodfellow, Kunal Talwar

Q-Prop: Sample-Efficient Policy Gradient with An Off-Policy Critic
Shixiang (Shane) Gu*, Timothy Lillicrap, Zoubin Ghahramani, Richard E. Turner, Sergey Levine

Neural Architecture Search with Reinforcement Learning
Barret Zoph, Quoc Le

Posters

Adversarial Machine Learning at Scale
Alexey Kurakin, Ian J. Goodfellow†, Samy Bengio

Capacity and Trainability in Recurrent Neural Networks
Jasmine Collins, Jascha Sohl-Dickstein, David Sussillo

Improving Policy Gradient by Exploring Under-Appreciated Rewards
Ofir Nachum, Mohammad Norouzi, Dale Schuurmans

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Noam Shazeer, Azalia Mirhoseini, Krzysztof Maziarz, Andy Davis, Quoc Le, Geoffrey Hinton, Jeff Dean

Unrolled Generative Adversarial Networks
Luke Metz, Ben Poole*, David Pfau, Jascha Sohl-Dickstein

Categorical Reparameterization with Gumbel-Softmax
Eric Jang, Shixiang (Shane) Gu*, Ben Poole*

Decomposing Motion and Content for Natural Video Sequence Prediction
Ruben Villegas, Jimei Yang, Seunghoon Hong, Xunyu Lin, Honglak Lee

Density Estimation Using Real NVP
Laurent Dinh*, Jascha Sohl-Dickstein, Samy Bengio

Latent Sequence Decompositions
William Chan*, Yu Zhang*, Quoc Le, Navdeep Jaitly*

Learning a Natural Language Interface with Neural Programmer
Arvind Neelakantan*, Quoc V. Le, Martín Abadi, Andrew McCallum*, Dario Amodei*

Deep Information Propagation
Samuel Schoenholz, Justin Gilmer, Surya Ganguli, Jascha Sohl-Dickstein

Identity Matters in Deep Learning
Moritz Hardt, Tengyu Ma

A Learned Representation For Artistic Style
Vincent Dumoulin*, Jonathon Shlens, Manjunath Kudlur

Adversarial Training Methods for Semi-Supervised Text Classification
Takeru Miyato, Andrew M. Dai, Ian Goodfellow†

HyperNetworks
David Ha, Andrew Dai, Quoc V. Le

Learning to Remember Rare Events
Lukasz Kaiser, Ofir Nachum, Aurko Roy*, Samy Bengio

Deep Learning with Dynamic Computation Graphs
Moshe Looks, Marcello Herreshof, DeLesley Hutchins, Peter Norvig

HolStep: A Machine Learning Dataset for Higher-order Logic Theorem Proving
Cezary Kaliszyk, François Chollet, Christian Szegedy

Hyperband: Bandit-based Configuration Evaluation for Hyperparameter Optimization
Lisha Li, Kevin Jamieson, Giulia DeSalvo, Afshin Rostamizadeh, Ameet Talwalkar

Workshop Track Abstracts

Particle Value Functions
Chris J. Maddison, Dieterich Lawson, George Tucker, Nicolas Heess, Arnaud Doucet, Andriy Mnih, Yee Whye Teh

Neural Combinatorial Optimization with Reinforcement Learning
Irwan Bello, Hieu Pham, Quoc V. Le, Mohammad Norouzi, Samy Bengio

Short and Deep: Sketching and Neural Networks
Amit Daniely, Nevena Lazic, Yoram Singer, Kunal Talwar

Explaining the Learning Dynamics of Direct Feedback Alignment
Justin Gilmer, Colin Raffel, Samuel S. Schoenholz, Maithra Raghu, Jascha Sohl-Dickstein

Training a Subsampling Mechanism in Expectation
Colin Raffel, Dieterich Lawson

Tuning Recurrent Neural Networks with Reinforcement Learning
Natasha Jaques*, Shixiang (Shane) Gu*, Richard E. Turner, Douglas Eck

REBAR: Low-Variance, Unbiased Gradient Estimates for Discrete Latent Variable Models
George Tucker, Andriy Mnih, Chris J. Maddison, Jascha Sohl-Dickstein

Adversarial Examples in the Physical World
Alexey Kurakin, Ian Goodfellow†, Samy Bengio

Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton

Unsupervised Perceptual Rewards for Imitation Learning
Pierre Sermanet, Kelvin Xu, Sergey Levine

Changing Model Behavior at Test-time Using Reinforcement Learning
Augustus Odena, Dieterich Lawson, Christopher Olah

* Work performed while at Google
† Work performed while at OpenAI