Google at SALA 2026
Google is proud to support the Summit of AI in LatAm (SALA 2026), taking place from Monday, March 9th to Thursday, March 12th, at the Universidad San Francisco de Quito (USFQ) - Campus Cumbayá in Quito, Ecuador. As a biennial conference, SALA serves as a premier gathering for the Latin American artificial intelligence community, connecting local talent from academia, industry, and government with global experts to foster regional opportunities.
Our researchers are deeply engaged in this year’s summit, serving in key leadership and advisory roles including SALA Director, General Chair, Communications Chair, and Advisor.
The conference offers a fully immersive experience: mornings are dedicated to guest lectures from renowned AI researchers, while afternoons feature a hackathon where collaborative teams tackle impactful, real-world projects.
Attending SALA 2026 in person? We invite you to visit the Google booth to meet our team and discover our initiatives to empower AI talent across more than 20 countries in Latin America and beyond. Stay updated on our latest announcements and activities by following @GoogleResearch on X and Google Research on LinkedIn.
All session times are provided in ECT (Ecuador Time). Dates and times are subject to change.
Organizing Committee
- Pablo Samuel Castro - SALA Director, General Chair
- Isabelle Simpson - SALA Director, Communications Chair
- Doina Precup - Advisor
Specialized Session
Vibe coding
Monday, March 9th | 1:30 pm - 2:30 pm
This session will cover the history and meaning of "vibe coding," and, most importantly, show you how to use it to make your coding workflow faster and more creative. Learn practical techniques for jump-starting new ideas, a skill that will be particularly useful for those participating in the afternoon hackathon.
Speaker: Pablo Samuel Castro
Foundational Session
The surprising effectiveness of generative diffusion models
Tuesday, March 10th | 11:00 am - 12:00 pm
Diffusion models are advanced AI tools that create realistic new content, like images and video. This talk will demonstrate that these powerful models are surprisingly effective for a wide range of visual tasks, not just content creation.
Abstract: Diffusion models have emerged as a powerful class of likelihood-based generative models. They have been particularly effective for modeling image and video data, as demonstrated by their striking performance in text-to-image and text-to-video generation. This talk will address several questions concerning model capabilities on a variety of downstream tasks beyond text-to-X generation. These include generative data augmentation, fine-tuning for visual estimation, representation learning and alignment with human perception, and zero-shot inference with video models. Together, these results suggest that large text-to-image and text-to-video models are surprisingly effective for a broad spectrum of visual tasks, through a combination of fine-tuning, in-context learning, or zero-shot application.
Speaker: David Fleet