Google Research at ICLR 2024
The Twelfth International Conference on Learning Representations (ICLR 2024), a premier conference on deep learning, is being held this week as a hybrid event in Vienna, Austria. We are proud to be a Diamond Sponsor of ICLR 2024, where Google researchers will contribute at all levels. This year we are presenting over 85 papers and are actively involved in organizing and hosting a number of different events, including 9 workshops, an EXPO talk, and several interactive sessions.
Attending ICLR 2024 in person? Stop by the Google Research booth to learn more about the exciting work we’re doing across topics spanning reinforcement learning, enterprise AI, large language models, theory and optimization, societal impact, safety and privacy, and more. Visit the @GoogleAI X (formerly Twitter) and Google Research LinkedIn accounts to find out about Google booth activities (e.g., demos and Q&A sessions, which are also listed below).
Continue below to learn more about how Google researchers are engaged at ICLR 2024 (Google affiliations highlighted in bold). See Google DeepMind’s blog to learn about their technical participation at ICLR 2024.
All session times are provided in CEST.
Quick links
Board and Organizing Committee
- Cordelia Schmid, Senior Area Chair
- Fei Sha, Senior Area Chair
- Inderjit S. Dhillon, Senior Area Chair
- Ming-Hsuan Yang, Senior Area Chair
- Abhimanyu Das, Area Chair
- Aditya Krishna Menon, Area Chair
- Afshin Rostamizadeh, Area Chair
- Ahmad Beirami, Area Chair
- Arun Narayanan, Area Chair
- Asma Ghandeharioun, Area Chair
- Badih Ghazi, Area Chair
- Boqing Gong, Area Chair
- Charles Herrmann, Area Chair
- Chen-Yu Lee, Area Chair
- Cho-Jui Hsieh, Area Chair
- Da-Cheng Juan, Area Chair
- Deqing Sun, Area Chair
- Feng Yang, Area Chair
- Hossein Mobahi, Area Chair
- Jennifer J. Sun, Area Chair
- Kelvin C.K. Chan, Area Chair
- Leonardo Zepeda-Núñez, Area Chair
- Lin Chen, Area Chair
- Manaal Faruqui, Area Chair
- Markus Freitag, Area Chair
- Mingyuan Zhou, Area Chair
- Pasin Manurangsi, Area Chair
- Richard Nock, Area Chair
- Sashank J. Reddi, Area Chair
- Sercan O Arik, Area Chair
- Silvio Lattanzi, Area Chair
- Sjoerd van Steenkiste, Area Chair
- Srinadh Bhojanapalli, Area Chair
- Weiran Wang, Area Chair
- Yang Li, Area Chair
- Yinlam Chow, Area Chair
- Yu-Chuan Su, Area Chair
- Aisha Walcott-Bryant, Workshop Chair
- Mercy Asiedu, Workshop Chair
Expo & oral talks
- Thu, May 9 | 12:45PM — 2:15PM
  Expo Talk: Advances in Private Training for Production On-device Language Models
  Speaker: Zheng Xu
- Fri, May 10 | 10:00AM — 10:15AM
  Oral Talk: One-shot Empirical Privacy Estimation for Federated Learning
  Speakers: Galen Andrew, Peter Kairouz, Sewoong Oh, Alina Oprea, Hugh Brendan McMahan, Vinith Menon Suriyakumar
Accepted papers
Conformal Risk Control
Anastasios N. Angelopoulos, Stephen Bates, Adam Fisch, Lihua Lei, Tal Schuster
Learning to Reject Meets Long-tail Learning
Harikrishna Narasimhan, Aditya Krishna Menon, Wittawat Jitkrittum, Neha Gupta, Sanjiv Kumar
On Bias-Variance Alignment in Deep Models
Lin Chen, Michal Lukasik, Wittawat Jitkrittum, Chong You, Sanjiv Kumar
Role of Locality and Weight Sharing in Image-Based Tasks: A Sample Complexity Separation between CNNs, LCNs, and FCNs
Aakash Lahoti, Stefani Karp, Ezra Winston, Aarti Singh, Yuanzhi Li
Massively Scalable Inverse Reinforcement Learning in Google Maps (see blog post)
Matt Barnes, Matthew Abueg, Oliver F. Lange, Matt Deeds, Jason Trader, Denali Molitor, Markus Wulfmeier, Shawn O'Banion
A Benchmark for Learning to Translate a New Language from One Grammar Book
Garrett Tanzer, Mirac Suzgun, Eline Visser, Dan Jurafsky, Luke Melas-Kyriazi
On the Foundations of Shortcut Learning
Katherine Hermann, Hossein Mobahi, Thomas Fel, Michael Curtis Mozer
Dictionary Contrastive Learning for Efficient Local Supervision without Auxiliary Networks
Suhwan Choi, Myeongho Jeon, Yeonjung Hwang, Jeonglyul Oh, Sungjun Lim, Joonseok Lee, Myungjoo Kang
Distributionally Robust Optimization with Bias and Variance Reduction
Ronak Mehta, Vincent Roulet, Krishna Pillutla, Zaid Harchaoui
Learning Performance-Improving Code Edits
Alexander G Shypula, Aman Madaan, Yimeng Zeng, Uri Alon, Jacob R. Gardner, Yiming Yang, Milad Hashemi, Graham Neubig, Parthasarathy Ranganathan, Osbert Bastani, Amir Yazdanbakhsh
Leveraging Unpaired Data for Vision-Language Generative Models via Cycle Consistency
Tianhong Li*, Sangnie Bhardwaj, Yonglong Tian, Han Zhang, Jarred Barber, Dina Katabi, Guillaume Lajoie, Huiwen Chang*, Dilip Krishnan
DyST: Towards Dynamic Neural Scene Representations on Real-World Videos
Maximilian Seitzer, Sjoerd van Steenkiste, Thomas Kipf, Klaus Greff, Mehdi S. M. Sajjadi
DreamFlow: High-Quality Text-to-3D Generation by Approximating Probability Flow
Kyungmin Lee, Kihyuk Sohn, Jinwoo Shin
Enhancing Group Fairness in Online Settings Using Oblique Decision Forests
Somnath Basu Roy Chowdhury*, Nicholas Monath, Ahmad Beirami, Rahul Kidambi, Avinava Dubey, Amr Ahmed, Snigdha Chaturvedi
Privacy Amplification for Matrix Mechanisms
Christopher A. Choquette-Choo, Arun Ganesh, Thomas Steinke, Abhradeep Guha Thakurta
Towards Optimal Regret in Adversarial Linear MDPs with Bandit Feedback
Haolin Liu, Chen-Yu Wei, Julian Zimmert
CausalLM is not Optimal for in-Context Learning
Nan Ding, Tomer Levinboim, Jialin Wu, Sebastian Goodman, Radu Soricut
Delphic Offline Reinforcement Learning under Nonidentifiable Hidden Confounding
Alizée Pace, Hugo Yèche, Bernhard Schölkopf, Gunnar Ratsch, Guy Tennenholtz
Neural SDF Flow for 3D Reconstruction of Dynamic Scenes
Wei Mao, Richard Hartley, Mathieu Salzmann, Miaomiao Liu
Retrieval-Enhanced Contrastive Vision-Text Models
Ahmet Iscen, Mathilde Caron, Alireza Fathi, Cordelia Schmid
TEMPO: Prompt-Based Generative Pre-trained Transformer for Time Series Forecasting
Defu Cao, Furong Jia, Sercan Ö. Arik, Tomas Pfister, Yixiang Zheng, Wen Ye, Yan Liu
The Cost of Scaling Down Large Language Models: Reducing Model Size Affects Memory before In-context Learning
Tian Jin, Nolan Clement, Xin Dong, Vaishnavh Nagarajan, Michael Carbin, Jonathan Ragan-Kelley, Gintare Karolina Dziugaite
Combining Axes Pre-Conditioners through Kronecker Approximation for Deep Learning
Sai Surya Duvvuri*, Fnu Devvrit, Rohan Anil, Cho-Jui Hsieh, Inderjit S Dhillon
Demystifying Embedding Spaces using Large Language Models
Guy Tennenholtz, Yinlam Chow, ChihWei Hsu, Jihwan Jeong, Lior Shani, Azamat Tulepbergenov, Deepak Ramachandran, Martin Mladenov, Craig Boutilier
Denoising Diffusion via Image-Based Rendering
Titas Anciukevičius, Fabian Manhardt, Federico Tombari, Paul Henderson
Don't Trust: Verify — Grounding LLM Quantitative Reasoning with Autoformalization
Jin Peng Zhou*, Charles E Staats*, Wenda Li, Christian Szegedy*, Kilian Q Weinberger, Yuhuai Wu*
Enabling Language Models to Implicitly Learn Self-Improvement
Ziqi Wang*, Le Hou, Tianjian Lu, Yuexin Wu, Yunxuan Li, Hongkun Yu, Heng Ji
The Importance of Feature Pre-Processing for Differentially Private Linear Optimization
Ziteng Sun, Ananda Theertha Suresh, Aditya Krishna Menon
OmniControl: Control Any Joint at Any Time for Human Motion Generation
Yiming Xie, Varun Jampani, Lei Zhong, Deqing Sun, Huaizu Jiang
Perceptual Group Tokenizer: Building Perception with Iterative Grouping
Zhiwei Deng, Ting Chen, Yang Li
Scalable Neural Network Kernels
Arijit Sehanobish, Krzysztof Marcin Choromanski, Yunfan Zhao, Avinava Dubey, Valerii Likhosherstov
Sharpness-Aware Minimization Enhances Feature Quality via Balanced Learning
Jacob Mitchell Springer, Vaishnavh Nagarajan, Aditi Raghunathan
Adaptive Regret for Bandits Made Possible: Two Queries Suffice
Zhou Lu, Qiuyi Zhang, Xinyi Chen, Fred Zhang, David Woodruff, Elad Hazan
Cleanba: A Reproducible and Efficient Distributed Reinforcement Learning Platform
Shengyi Huang, Jiayi Weng, Rujikorn Charakorn, Min Lin, Zhongwen Xu, Santiago Ontanon
CoBIT: A Contrastive Bi-Directional Image-Text Generation Model
Haoxuan You*, Mandy Guo, Zhecan Wang, Kai-Wei Chang, Jason Michael Baldridge, Jiahui Yu
LabelDP-Pro: Learning with Label Differential Privacy via Projections
Badih Ghazi, Yangsibo Huang*, Pritish Kamath, Ravi Kumar, Pasin Manurangsi, Chiyuan Zhang
Learning to Reject with a Fixed Predictor: Application to Decontextualization
Christopher Mohri, Daniel Andor, Eunsol Choi, Michael Collins, Anqi Mao, Yutao Zhong
A Restoration Network as an Implicit Prior
Yuyang Hu, Mauricio Delbracio, Peyman Milanfar, Ulugbek Kamilov
Spoken Question Answering and Speech Continuation Using Spectrogram-Powered LLM
Eliya Nachmani, Alon Levkovitch*, Roy Hirsch, Julian Salazar, Chulayuth Asawaroengchai, Soroosh Mariooryad, Ehud Rivlin, RJ Skerry-Ryan, Michelle Tadmor Ramanovich
Conformal Language Modeling
Victor Quach, Adam Fisch, Tal Schuster, Adam Yala, Jae Ho Sohn, Tommi S. Jaakkola, Regina Barzilay
The Hidden Language of Diffusion Models
Hila Chefer*, Oran Lang, Mor Geva, Volodymyr Polosukhin, Assaf Shocher, Michal Irani, Inbar Mosseri, Lior Wolf
Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation
Shreyas Havaldar, Navodita Sharma, Shubhi Sareen, Karthikeyan Shanmugam, Aravindan Raghuveer
Learning Low Dimensional State Spaces with Overparameterized Recurrent Neural Nets
Edo Cohen-Karlik, Itamar Menuhin-Gruman, Raja Giryes, Nadav Cohen, Amir Globerson
LLM Augmented LLMs: Expanding Capabilities through Composition
Rachit Bansal, Bidisha Samanta, Siddharth Dalmia, Nitish Gupta, Shikhar Vashishth, Sriram Ganapathy, Abhishek Bapna, Prateek Jain, Partha Talukdar
Transformers can Optimally Learn Regression Mixture Models
Reese Pathak*, Rajat Sen, Weihao Kong, Abhimanyu Das
AGILE3D: Attention Guided Interactive Multi-Object 3D Segmentation
Yuanwen Yue, Sabarinath Mahadevan, Jonas Schult, Francis Engelmann, Bastian Leibe, Konrad Schindler, Theodora Kontogianni
Correlated Noise Provably Beats Independent Noise for Differentially Private Learning
Christopher A. Choquette-Choo, Krishnamurthy (Dj) Dvijotham, Krishna Pillutla, Arun Ganesh, Thomas Steinke, Abhradeep Thakurta
Discovering Modular Solutions that Generalize Compositionally
Simon Schug, Seijin Kobayashi, Yassir Akram, Maciej Wolczyk, Alexandra Maria Proca, Johannes Von Oswald, Razvan Pascanu, Joao Sacramento, Angelika Steger
DistillSpec: Improving Speculative Decoding via Knowledge Distillation
Yongchao Zhou*, Kaifeng Lyu*, Ankit Singh Rawat, Aditya Krishna Menon, Afshin Rostamizadeh, Sanjiv Kumar, Jean-François Kagy, Rishabh Agarwal
FeatUp: A Model-Agnostic Framework for Features at Any Resolution
Stephanie Fu, Mark Hamilton, Laura E. Brandt, Axel Feldmann, Zhoutong Zhang, William T. Freeman
Language Model Cascades: Token-Level Uncertainty and Beyond
Neha Gupta, Harikrishna Narasimhan, Wittawat Jitkrittum, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar
Learning Thresholds with Latent Values and Censored Feedback
Jiahao Zhang, Tao Lin, Weiqiang Zheng, Zhe Feng, Yifeng Teng, Xiaotie Deng
MBR and QE Fine-Tuning: Training-Time Distillation of the Best and Most Expensive Decoding Methods
Mara Finkelstein, Markus Freitag
Mixture-of-Experts Meets Instruction Tuning: A Winning Combination for Large Language Models
Sheng Shen*, Le Hou, Yanqi Zhou, Nan Du, Shayne Longpre*, Jason Wei*, Hyung Won Chung*, Barret Zoph*, William Fedus*, Xinyun Chen, Tu Vu*, Yuexin Wu, Wuyang Chen*, Albert Webson, Yunxuan Li, Vincent Y Zhao, Hongkun Yu, Kurt Keutzer, Trevor Darrell, Denny Zhou
Statistical Rejection Sampling Improves Preference Optimization
Tianqi Liu, Yao Zhao, Rishabh Joshi, Misha Khalman, Mohammad Saleh, Peter J. Liu, Jialu Liu
Talk Like a Graph: Encoding Graphs for Large Language Models (see blog post)
Bahare Fatemi, Jonathan Halcrow, Bryan Perozzi
When Scaling Meets LLM Fine-Tuning: The Effect of Data, Model and Fine-Tuning Methods
Biao Zhang, Zhongtao Liu, Colin Cherry, Orhan Firat
Davidsonian Scene Graph: Improving Reliability in Fine-Grained Evaluation for Text-to-Image Generation
Jaemin Cho*, Yushi Hu, Roopal Garg, Peter Anderson, Ranjay Krishna, Jason Baldridge, Mohit Bansal, Jordi Pont-Tuset, Su Wang
Learning from Aggregate Responses: Instance Level versus Bag Level Loss Functions
Adel Javanmard, Lin Chen, Vahab Mirrokni, Ashwinkumar Badanidiyuru, Gang Fu
Magnushammer: A Transformer-Based Approach to Premise Selection
Maciej Mikuła, Szymon Tworkowski, Szymon Antoniak, Bartosz Piotrowski, Albert Q. Jiang, Jin Peng Zhou*, Christian Szegedy*, Łukasz Kuciński, Piotr Miłoś, Yuhuai Wu*
RETSim: Resilient and Efficient Text Similarity
Marina Zhang, Owen Skipper Vallis, Aysegul Bumin*, Tanay Vakharia, Elie Bursztein
Think Before you Speak: Training Language Models with Pause Tokens
Sachin Goyal*, Ziwei Ji, Ankit Singh Rawat, Aditya Krishna Menon, Sanjiv Kumar, Vaishnavh Nagarajan
Two-Stage LLM Fine-Tuning with Less Specialization and More Generalization
Yihan Wang*, Si Si, Daliang Li, Michal Lukasik, Felix Yu, Cho-Jui Hsieh, Inderjit S Dhillon, Sanjiv Kumar
Finite Scalar Quantization: VQ-VAE Made Simple
Fabian Mentzer, David Minnen, Eirikur Agustsson, Michael Tschannen
HyperAttention: Long-Context Attention in Near-Linear Time
Insu Han, Rajesh Jayaram, Amin Karbasi, Vahab Mirrokni, David Woodruff, Amir Zandieh
The Unreasonable Effectiveness of Linear Prediction as a Perceptual Metric
Daniel Severo*, Lucas Theis, Johannes Ballé
Dual Associated Encoder for Face Restoration
Yu-Ju Tsai, Yu-Lun Liu, Lu Qi, Kelvin C. K. Chan, Ming-Hsuan Yang
Locality-Aware Graph Rewiring in GNNs
Christopher Fifty, Dennis Duan, Ronald Guenther Junkins, Ehsan Amid, Jure Leskovec, Christopher Re, Sebastian Thrun
Context-Aware Meta-Learning
Tian Li, Manzil Zaheer, Ziyu Liu, Sashank Reddi, Brendan McMahan, Virginia Smith
Learning Model Uncertainty as Variance-Minimizing Instance Weights
Nishant Jain, Karthikeyan Shanmugam, Pradeep Shenoy
Batch Calibration: Rethinking Calibration for In-Context Learning and Prompt Engineering
Han Zhou*, Xingchen Wan, Lev Proleev, Diana Mincu, Jilin Chen, Katherine A Heller, Subhrajit Roy
Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding (see blog post)
Zilong Wang*, Hao Zhang, Chun-Liang Li, Julian Martin Eisenschlos, Vincent Perot, Zifeng Wang, Lesly Miculicich, Yasuhisa Fujii, Jingbo Shang, Chen-Yu Lee, Tomas Pfister
A Differentially Private Clustering Algorithm for Well-Clustered Graphs
Weiqiang He, Hendrik Fichtenberger, Pan Peng
Dual-Encoders for Extreme Multi-Label Classification
Nilesh Gupta, Devvrit Khatri, Ankit Singh Rawat, Srinadh Bhojanapalli, Prateek Jain, Inderjit S Dhillon
Idempotent Generative Network
Assaf Shocher, Amil V Dravid, Yossi Gandelsman, Inbar Mosseri, Michael Rubinstein, Alexei A Efros
OpenNeRF: Open Set 3D Neural Scene Segmentation with Pixel-Wise Features and Rendered Novel Views
Francis Engelmann, Fabian Manhardt, Michael Niemeyer, Keisuke Tateno, Marc Pollefeys, Federico Tombari
Plugin Estimators for Selective Classification with Out-of-Distribution Detection
Harikrishna Narasimhan, Aditya Krishna Menon, Wittawat Jitkrittum, Sanjiv Kumar
Repeated Random Sampling for Minimizing the Time-to-Accuracy of Learning
Patrik Okanovic, Roger Waleffe, Vasilis Mageirakos, Konstantinos E. Nikolakakis, Amin Karbasi, Dionysis Kalogerias, Nezihe Merve Gürel, Theodoros Rekatsinas
Diffusion Sampling with Momentum for Mitigating Divergence Artifacts
Suttisak Wizadwongsa, Worameth Chinchuthakun, Pramook Khungurn, Amit Raj, Supasorn Suwajanakorn
DORSal: Diffusion for Object-Centric Representations of Scenes et al.
Allan Jabri*, Sjoerd van Steenkiste, Emiel Hoogeboom, Mehdi S. M. Sajjadi, Thomas Kipf
Language Model Beats Diffusion - Tokenizer is Key to Visual Generation
Lijun Yu*, Jose Lezama, Nitesh Bharadwaj Gundavarapu, Luca Versari, Kihyuk Sohn, David Minnen, Yong Cheng, Agrim Gupta, Xiuye Gu, Alexander G Hauptmann, Boqing Gong, Ming-Hsuan Yang, Irfan Essa, David A Ross, Lu Jiang
Functional Interpolation for Relative Positions Improves Long Context Transformers
Shanda Li*, Chong You, Guru Guruganesh, Joshua Ainslie, Santiago Ontanon, Manzil Zaheer, Sumit Sanghai, Yiming Yang, Sanjiv Kumar, Srinadh Bhojanapalli
Structured Video-Language Modeling with Temporal Grouping and Spatial Grounding
Yuanhao Xiong*, Long Zhao, Boqing Gong, Ming-Hsuan Yang, Florian Schroff, Ting Liu, Cho-Jui Hsieh, Liangzhe Yuan
Inversion by Direct Iteration: An Alternative to Denoising Diffusion for Image Restoration
Mauricio Delbracio, Peyman Milanfar
Scaling Autoregressive Models for Content-Rich Text-to-Image Generation
Jiahui Yu, Yuanzhong Xu, Jing Yu Koh, Thang Luong, Gunjan Baid, Zirui Wang, Vijay Vasudevan, Alexander Ku, Yinfei Yang, Burcu Karagol Ayan, Ben Hutchinson, Wei Han, Zarana Parekh, Xin Li, Han Zhang, Jason Baldridge, Yonghui Wu
SPADE: Semi-Supervised Anomaly Detection Under Distribution Mismatch
Jinsung Yoon, Kihyuk Sohn, Chun-Liang Li, Sercan Ö. Arik, Tomas Pfister
Workshops
- Sat, May 11 | 9:00AM — 5:00PM
  Data-centric Machine Learning Research (DMLR): Harnessing Momentum for Science
  Speaker: Baharan Mirzasoleiman
  Organizer: Alicia Parrish
- Sat, May 11 | 10:00AM — 5:00PM
  Global AI Cultures
  Organizers: Rida Qadri, Fernando Diaz, Sunipa Dev, Jessica Quaye
- Sat, May 11 | 9:00AM — 5:00PM
  Machine Learning for Genomics Explorations (MLGenX)
  Organizer: Arman Hasanzadeh
- Sat, May 11 | 9:00AM — 5:00PM
  Navigating and Addressing Data Problems for Foundation Models (DPFM)
  Speakers: Remi Denton, Haifeng Xu
- Sat, May 11 | 8:25AM — 5:00PM
  Privacy Regulation and Protection in Machine Learning
  Organizers: Sewoong Oh, Zheng Xu
  Speakers: Kobbi Nissim, Daniel Ramage
- Sat, May 11 | 8:50AM — 6:00PM
  Reliable and Responsible Foundation Models
  Speaker: Mor Geva Pipek
- Sat, May 11 | 9:00AM — 5:00PM
  Secure and Trustworthy Large Language Models
  Speaker: Cho-Jui Hsieh
- Sat, May 11 | 8:00AM — 5:00PM
  Tackling Climate Change with Machine Learning: Fostering the Maturity of ML Applications for Climate Change
  Speaker: Antonia Gawel
Google Research Booth Demo/Q&A Schedule
*Dates and times may be subject to change. Stop by the Google Research booth (#23) for more info.
- Tuesday, May 7 | starting at 9:30AM
  Q&A: Enterprise AI Research Challenges
  Presenters: Sercan Arik & Tomas Pfister
- Tuesday, May 7 | starting at 12:45PM
  Inverse Reinforcement Learning in Google Maps
  Presenters: Matt Barnes & Matthew Abueg
- Tuesday, May 7 | starting at 3:15PM
  LLM Augmented LLMs: Expanding Capabilities through Composition
  Presenters: Rachit Bansal, Siddharth Dalmia & Nitish Gupta
- Wednesday, May 8 | starting at 9:30AM
  Chain-of-Table: Evolving Tables in the Reasoning Chain for Table Understanding
  Presenter: Zilong Wang
- Wednesday, May 8 | starting at 12:45PM
  Break it, Imitate it, Fix it: Robustness by Generating Human-Like Attacks
  Presenters: Ananth Balashankar, Anu Sinha & Ahmad Beirami
- Wednesday, May 8 | starting at 3:15PM
  Encoding Structure for Large Language Models
  Presenters: Bahare Fatemi, Jonathan Halcrow & Bryan Perozzi
- Thursday, May 9 | starting at 9:30AM
  Q&A: Cloud AI Research
  Presenters: Tomas Pfister & Sercan Arik
- Thursday, May 9 | starting at 3:15PM
  Developing and Evaluating AI for Maternal Health
  Presenters: Mercy Asiedu & Nicole Chiou (former Student Researcher)
- Thursday, May 9 | starting at 3:15PM
  Confidence-Aware Language Modeling and Adaptive Compute
  Presenters: Tal Schuster & Adam Fisch
- Friday, May 10 | starting at 9:30AM
  OpenMask3D: Open-Vocabulary 3D Instance Segmentation
  Presenter: Francis Engelmann
- Friday, May 10 | starting at 12:45PM
  Learning from Label Proportions: Bootstrapping Supervised Learners via Belief Propagation
  Presenter: Shreyas Havaldar
- Friday, May 10 | starting at 3:15PM
  Q&A: Learning with Public and Private Data: LLM for Privacy, and Privacy for LLM
  Presenter: Zheng Xu
* Work done while at Google