
Sneha Reddy Kudugunta
I joined the Google AI Residency right after receiving my bachelor's degree in Computer Science and Engineering from the Indian Institute of Technology, Hyderabad. While there, I worked on viewing neural networks in terms of quasi-convex optimization problems. I've also spent time at the Institute for Pure and Applied Mathematics at UCLA, modeling cryptographic side-channel attacks, and at the Information Sciences Institute at USC, using machine learning to detect social bots. These experiences sparked my interest both in understanding language - especially in less structured contexts - and in using mathematics to understand machine learning.
Given my diverse interests, what drew me to this program was the opportunity to explore different research areas before grad school at a place with teams working on a broad variety of topics. I'm currently working on improving transfer learning in the context of natural language understanding - the difficulties associated with natural language understanding make transfer learning in this setting an especially exciting challenge. My time here so far has been amazing, and over the next year, I'm excited to learn and make use of all the opportunities available at Google.
In my free time, I enjoy reading, exposing my friends to cringe-pop, doodling (sketchpad optional!), and being outdoors.
Research Areas
Authored Publications
A Loss Curvature Perspective On Training Instability in Deep Learning
Justin Gilmer
Behrooz Ghorbani
Ankush Garg
Behnam Neyshabur
David Cardoze
ICLR (2022)
Quality at a Glance: An Audit of Web-Crawled Multilingual Datasets
Julia Kreutzer
Lisa Wang
Ahsan Wahab
Nasanbayar Ulzii-Orshikh
Allahsera Auguste Tapo
Nishant Subramani
Artem Sokolov
Claytone Sikasote
Monang Setyawan
Supheakmungkol Sarin
Sokhar Samb
Benoît Sagot
Clara E. Rivera
Annette Rios
Isabel Papadimitriou
Salomey Osei
Pedro Javier Ortiz Suárez
Iroro Fred Ọ̀nọ̀mẹ̀ Orife
Kelechi Ogueji
Rubungo Andre Niyongabo
Toan Nguyen
Mathias Müller
André Müller
Shamsuddeen Hassan Muhammad
Nanda Muhammad
Ayanda Mnyakeni
Jamshidbek Mirzakhalov
Tapiwanashe Matangira
Colin Leong
Nze Lawson
Yacine Jernite
Mathias Jenny
Bonaventure F. P. Dossou
Sakhile Dlamini
Nisansa de Silva
Sakine Çabuk Ballı
Stella Biderman
Alessia Battisti
Ahmed Baruwa
Pallavi Baljekar
Israel Abebe Azime
Ayodele Awokoya
Duygu Ataman
Orevaoghene Ahia
Oghenefego Ahia
Sweta Agrawal
Mofetoluwa Adeyemi
TACL (2022)
MURAL: Multimodal, Multitask Retrieval Across Languages
Aashi Jain
Krishna Srinivasan
Ting Chen
Chao Jia
Yinfei Yang
EMNLP (2021)
Beyond Distillation: Task-level Mixture-of-Experts for Efficient Inference
Dmitry (Dima) Lepikhin
Maxim Krikun
(2021)
Leveraging Monolingual Data with Self-Supervision for Multilingual Neural Machine Translation
Naveen Ari
Yonghui Wu
ACL (2020)