Two things inspired me to pursue machine learning. First: Magenta, a team within Google Brain that uses ML for music and art generation. Second: WaveNet, a groundbreaking architecture from Google DeepMind used for audio and speech synthesis. Joining the AI Residency was thus a dream come true! It has given me the opportunity to work with, and be mentored by, the incredible researchers and engineers behind these technologies, all while defining my own research direction that combines my different passions.

I have undergraduate degrees in Physics and Music, both from Yale University. At Georgia Tech, my M.S. focused on ML and ultrasound signal processing for use in prosthetic hands built for musicians. I also play jazz piano professionally. How does all this relate to AI? My Residency projects explore how machine learning can enable more robust and expressive music and audio synthesis.

Prior to Google, I spent a year designing and building “fidular”, an award-winning modular fiddle that demonstrates the principle of “transcultural” design and engineering. The system shows how hardware can be made cross-cultural, enabling musical traditions across Asia and the Middle East to fluidly interchange with one another. In conjunction with PAIR (People + AI Research) and AIY, I am very excited by how these ideas can translate and scale to the world of ML and AI, especially for non-Western cultures and communities.