This hybrid workshop will be held on Thursday, 4 May 2023.
All times are in local Kigali time (UTC+2), i.e. ET+6, PT+9, BRT+5, SGT−6, WAT+1.
9:15 - 9:45 Invited Talk by Yann Dauphin: Leveraging Multiple Models and Multiple Tasks (Pre-recorded)
9:50 - 10:20 Invited Talk by Jared Kaplan: AI Safety, RLHF, and Self-Supervision (Pre-recorded)
10:20 - 10:35 Coffee Break
10:35 - 11:10 Invited Talk by Lenka Zdeborová: Insights from exactly solvable high-dimensional models (Live + Q&A)
11:10 - 11:45 Invited Talk by Sanjeev Arora: Task-specific Skill Localization in Fine-tuned Language Models (Live + Q&A)
11:45 - 13:00 Lunch Break
13:00 - 14:00 Poster Session
14:00 - 15:00 Spotlight Talks
Diffusion Models are Minimax Optimal Distribution Estimators. Kazusato Oko, Shunta Akiyama, Taiji Suzuki
Exploring Demonstration Ensembling for In-context Learning. Muhammad Khalifa, Lajanugen Logeswaran, Moontae Lee, Honglak Lee, Lu Wang
Objectives Matter: Understanding the Impact of Self-Supervised Objectives on Vision Transformer Representations. Shashank Shekhar, Florian Bordes, Pascal Vincent, Ari S. Morcos
Effective Data Augmentation With Diffusion Models. Brandon Trabucco, Kyle Doherty, Max Gurinas, Ruslan Salakhutdinov
Text-to-Image Diffusion Models are Zero-Shot Classifiers. Kevin Clark, Priyank Jain
A Kernel-Based View of Language Model Fine-Tuning. Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, Sanjeev Arora
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners. Seonghyeon Ye, Doyoung Kim, Joel Jang, Joongbo Shin, Minjoon Seo
15:00 - 15:30 Invited Talk by Yasaman Bahri: Understanding Neural Scaling Laws (Live + Q&A)
15:35 - 16:10 Invited Talk by Danqi Chen: Analyzing Training Objectives and Trajectories in Language Pre-training (Live + Q&A)
16:10 - 16:45 Invited Talk by Jonathan Frankle: Faster Neural Network Training, Algorithmically (Live + Q&A)