I joined Google in 2018 as an AI Resident. After seeing the startling progress of deep learning from afar, I wanted to understand why deep models work so well, and the Residency seemed like an excellent way to do this. The Residency is an exciting opportunity that surrounds you with great researchers and gives you access to Google’s extensive resources.
I am interested in theories of deep learning that can help explain the effectiveness of neural nets. I want to learn more about the geometry of the loss surface and how signals propagate through a neural net. As a math undergraduate at Harvard, I researched random matrices, and I am excited to see them appear in the theory of neural networks.
Before starting the AI Residency, I was a PhD student in applied math (also at Harvard). In my research, I applied techniques from probability theory and stochastic processes to study fundamental mathematical principles of evolution. I am fascinated by how evolution produces novel forms and functions, and by the capacity of evolutionary systems to generate adaptive variation (sometimes called evolvability). I hope that some of the intuition and tools I developed while studying evolution can carry over to topics in AI like architecture search, meta-learning, and GANs.