“Beyond being incredibly instructive, the Google Brain Residency program has been a truly affirming experience. Working alongside people who truly love what they do – and are eager to help you develop your own passion – has vastly increased my confidence in my interests, my ability to explore them, and my plans for the near future.”
– Akosua Busia, B.S. Mathematical and Computational Science, Stanford University ‘16, 2016 Google Brain Resident
In October 2015 we launched the Google Brain Residency, a 12-month program focused on jumpstarting a career for those interested in machine learning and deep learning research. The program offers hands-on experience with the state-of-the-art infrastructure available at Google, along with the chance to work alongside top researchers on the Google Brain team.
Our first group of residents arrived in June 2016 and began working with researchers on problems at the forefront of machine learning. The wide array of topics studied by residents reflects the diversity of the residents themselves: some came to the program as new graduates, holding degrees from BAs to PhDs in fields ranging from computer science and mathematics to physics, biology, and neuroscience, while others arrived with years of industry experience under their belts. All came with a passion for learning how to conduct machine learning research.
The breadth of research conducted by the Google Brain team, along with flexible resident-mentor pairing, ensures that residents interested in machine learning algorithms, reinforcement learning, natural language understanding, robotics, neuroscience, genetics, and more can find good mentors to help them pursue their ideas and publish interesting work. And just seven months into the program, the residents are already making an impact in the research field.
To date, Google Brain Residents have submitted a total of 21 papers to leading machine learning conferences, spanning topics from enhancing low-resolution images to building neural networks that in turn design novel, task-specific neural network architectures. Of those 21 papers, 5 were accepted to the recent BayLearn Conference (two of which, “Mean Field Neural Networks” and “Regularizing Neural Networks by Penalizing Their Output Distribution”, were presented in oral sessions), 2 were accepted to the NIPS 2016 Adversarial Training workshop, and another to ISMIR 2016 (see the full list of papers, including the 14 submissions to ICLR 2017, after the figures below).
An LSTM cell (left) and a state-of-the-art RNN cell found using a neural network (right). This is an example of a novel architecture found using the approach presented in “Neural Architecture Search with Reinforcement Learning” (B. Zoph and Q. V. Le, submitted to ICLR 2017). The paper uses a neural network to generate novel RNN cell architectures that outperform the widely used LSTM on a variety of tasks.
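For readers less familiar with the LSTM baseline that these searched cells are compared against, here is a minimal sketch of one step of a standard LSTM cell in NumPy. This is our own illustrative code, not code from the paper, and the variable names and gate ordering are our own choices:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_cell(x, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell (illustrative sketch).

    x: input (n_in,); h_prev, c_prev: previous hidden and cell states
    (n_hidden,); W: (4*n_hidden, n_in + n_hidden) weights; b: (4*n_hidden,)
    biases. Gate order here (our convention): input, forget, candidate, output.
    """
    n = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    i = sigmoid(z[0:n])        # input gate: how much new information to write
    f = sigmoid(z[n:2*n])      # forget gate: how much old cell state to keep
    g = np.tanh(z[2*n:3*n])    # candidate cell update
    o = sigmoid(z[3*n:4*n])    # output gate: how much cell state to expose
    c = f * c_prev + i * g     # new cell state
    h = o * np.tanh(c)         # new hidden state
    return h, c

# Tiny smoke test with random weights.
rng = np.random.default_rng(0)
n_in, n_hidden = 3, 4
W = rng.normal(size=(4 * n_hidden, n_in + n_hidden))
b = np.zeros(4 * n_hidden)
h, c = lstm_cell(rng.normal(size=n_in), np.zeros(n_hidden), np.zeros(n_hidden), W, b)
print(h.shape, c.shape)
```

The architecture-search approach replaces this fixed, hand-designed wiring of gates with cell structures proposed by a learned controller network.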
The training accuracy for neural networks, colored from black (random chance) to red (high accuracy). Overlaid in white dashed lines are the theoretical predictions showing the boundary between trainable and untrainable networks. (a) Networks with no dropout. (b)-(d) Networks with dropout rates of 0.01, 0.02, and 0.06, respectively. This research explores whether theoretical calculations can replace large hyperparameter searches. For more details, read “Deep Information Propagation” (S. S. Schoenholz, J. Gilmer, S. Ganguli, J. Sohl-Dickstein, submitted to ICLR 2017).
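To give a flavor of the mean-field analysis behind such predictions, here is a toy Monte Carlo sketch (our own code, not the paper's) of how the variance of pre-activations in a wide, random tanh network can be tracked layer by layer; the parameter names are ours, and this illustrates only the basic variance recursion, not the full trainability calculation:

```python
import numpy as np

def variance_map(q, sigma_w, sigma_b, n_samples=200_000, seed=0):
    """One layer of the mean-field variance recursion for a random tanh
    network: q_{l+1} = sigma_w^2 * E_z[tanh(sqrt(q_l) * z)^2] + sigma_b^2,
    with z ~ N(0, 1), estimated here by Monte Carlo with a fixed seed."""
    z = np.random.default_rng(seed).standard_normal(n_samples)
    return sigma_w**2 * np.mean(np.tanh(np.sqrt(q) * z) ** 2) + sigma_b**2

# Iterate the map to its fixed point q* for one (sigma_w, sigma_b) setting;
# the speed of convergence to such fixed points sets the depth scales over
# which signals can propagate.
q = 1.0
for _ in range(50):
    q = variance_map(q, sigma_w=1.5, sigma_b=0.1)
print(f"approximate fixed-point pre-activation variance q* = {q:.3f}")
```

Sweeping the weight and bias variances in this style yields phase diagrams like the figure above, where theory predicts which hyperparameter settings admit trainable networks.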
Accepted conference papers (Google Brain Residents marked with asterisks)

Unrolled Generative Adversarial Networks
Luke Metz*, Ben Poole, David Pfau, Jascha Sohl-Dickstein
NIPS 2016 Adversarial Training Workshop (oral presentation)

Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena*, Chris Olah, Jon Shlens
NIPS 2016 Adversarial Training Workshop (oral presentation)

Regularizing Neural Networks by Penalizing Their Output Distribution
Gabriel Pereyra*, George Tucker, Lukasz Kaiser, Geoff Hinton
BayLearn 2016 (oral presentation)

Mean Field Neural Networks
Samuel S. Schoenholz*, Justin Gilmer*, Jascha Sohl-Dickstein
BayLearn 2016 (oral presentation)

Learning to Remember
Aurko Roy, Ofir Nachum*, Łukasz Kaiser, Samy Bengio
BayLearn 2016 (poster session)

Towards Generating Higher Resolution Images with Generative Adversarial Networks
Augustus Odena*, Jonathon Shlens
BayLearn 2016 (poster session)

Multi-Task Convolutional Music Models
Diego Ardila, Cinjon Resnick*, Adam Roberts, Douglas Eck
BayLearn 2016 (poster session)

Audio DeepDream: Optimizing Raw Audio With Convolutional Networks
Diego Ardila, Cinjon Resnick*, Adam Roberts, Douglas Eck
ISMIR 2016 (poster session)

Papers under review (Google Brain Residents marked with asterisks)

Learning to Remember Rare Events
Lukasz Kaiser, Ofir Nachum*, Aurko Roy, Samy Bengio
Submitted to ICLR 2017

Neural Combinatorial Optimization with Reinforcement Learning
Irwan Bello*, Hieu Pham*, Quoc V. Le, Mohammad Norouzi, Samy Bengio
Submitted to ICLR 2017

HyperNetworks
David Ha*, Andrew Dai, Quoc V. Le
Submitted to ICLR 2017

Outrageously Large Neural Networks: The Sparsely-Gated Mixture-of-Experts Layer
Noam Shazeer, Azalia Mirhoseini*, Krzysztof Maziarz, Quoc Le, Jeff Dean
Submitted to ICLR 2017

Neural Architecture Search with Reinforcement Learning
Barret Zoph* and Quoc Le
Submitted to ICLR 2017

Deep Information Propagation
Samuel Schoenholz*, Justin Gilmer*, Surya Ganguli, Jascha Sohl-Dickstein
Submitted to ICLR 2017

Capacity and Trainability in Recurrent Neural Networks
Jasmine Collins*, Jascha Sohl-Dickstein, David Sussillo
Submitted to ICLR 2017

Unrolled Generative Adversarial Networks
Luke Metz*, Ben Poole, David Pfau, Jascha Sohl-Dickstein
Submitted to ICLR 2017

Conditional Image Synthesis with Auxiliary Classifier GANs
Augustus Odena*, Chris Olah, Jon Shlens
Submitted to ICLR 2017

Generating Long and Diverse Responses with Neural Conversation Models
Louis Shao, Stephan Gouws, Denny Britz*, Anna Goldie, Brian Strope, Ray Kurzweil
Submitted to ICLR 2017

Intelligible Language Modeling with Input Switched Affine Networks
Jakob Foerster, Justin Gilmer*, Jan Chorowski, Jascha Sohl-Dickstein, David Sussillo
Submitted to ICLR 2017

Regularizing Neural Networks by Penalizing Confident Output Distributions
Gabriel Pereyra*, George Tucker*, Jan Chorowski, Lukasz Kaiser, Geoffrey Hinton
Submitted to ICLR 2017

Unsupervised Perceptual Rewards for Imitation Learning
Pierre Sermanet, Kelvin Xu*, Sergey Levine
Submitted to ICLR 2017

Improving Policy Gradient by Exploring Under-Appreciated Rewards
Ofir Nachum*, Mohammad Norouzi, Dale Schuurmans
Submitted to ICLR 2017

Protein Secondary Structure Prediction Using Deep Multi-scale Convolutional Neural Networks and Next-Step Conditioning
Akosua Busia*, Jasmine Collins*, Navdeep Jaitly

The diverse and collaborative atmosphere fostered by the Brain team has enabled a group of researchers to make great strides on a wide range of research areas, which we are excited to share with the broader community. We look forward to even more innovative research from our 2016 residents, and are excited for the program to continue into its second year!
We are currently accepting applications for the 2017 Google Brain Residency Program. To learn more about the program and to submit your application, visit g.co/brainresidency. Applications close January 13th, 2017.