Schedule
8:00 - 8:30 - Coffee/snacks
8:30 - 12:00 - morning session: each talk 15min (incl. questions)
8:40 - 8:45 Irina Rish (CERC-AAI Lab @ University of Montreal/Mila) Introduction and Overview (slides)
(a brief talk from the Alignment Workshop on the CERC-AAI program and a complex-systems view of large-scale AI systems) (video)
8:45 - 9:00 Andrey Gromov (University of Maryland / Meta) An interpretable model of emergent capabilities and grokking (video, slides)
9:00 - 9:15 Percy Liang (Stanford/Together) Scaling Agents (video, slides)
9:15 - 9:30 Luke Sernau (Google) Time Matters: Using wall clock time to achieve better scaling laws (video, slides)
9:30 - 9:40 - coffee break
9:40 - 9:55 Luke Zettlemoyer (University of Washington; Meta) CM3Leon: A Tokenize Everything Approach to Multimodal Language Models
9:55 - 10:20 Matthias Bethge and Vishaal Udandarao (Tubingen AI Center)
Matthias Bethge : Compositional continual learning with scalable augmented architectures
Vishaal Udandarao : Visual Data-Type Understanding Does Not Emerge from Scaling Vision-Language Models
10:20 - 10:35 Quentin Anthony (EleutherAI & CERC-AAI Lab) EleutherAI: DL Research In the Open (video, slides)
10:45 - 10:50 - coffee break
10:50 - 11:00 Charlie Catlett (Argonne Natl Lab) Trillion Parameter Consortium (video, slides)
11:00 - 12:30
Irina Rish: Open-Source Foundation Models on Supercomputers: projects and models built by CERC-AAI Lab and INCITE 2023 Collab (15 min)
Kshitij Gupta, Benjamin Thérien, Adam Ibrahim et al: Continual Pretraining of Foundation Models (15min) (paper, blog, tweet, slides, video)
Kshitij Gupta, Daniel Kaplan: Robin Suite of Open-Source Multimodal Foundation Models (15 min) (paper, blog, tweet, slides, video)
Alexis Roger: Multimodal Alignment: Towards ethical multimodal systems (10 min) (paper, blog, tweet, slides, video)
Arjun Ashok, Andrew Williams: Time-Series Foundation Models (15 min) (paper, blog, tweet, slides, video)
12:10 - 12:15 - coffee break
Nolano.ai team (Tejas Vaidhya, Ayush Kaushal, Irina Rish) (video)
Irina Rish: Introducing Nolano.ai (2 min)
Ayush Kaushal: Nolano: Compression and Fast Inference in Foundation Models (8 min) (paper, blog, tweet, slides)
Ayush Kaushal: Nolano: Introducing Hi-NOLIN - the First Hindi-English LLM (5 min) (tweet, blog, slides)
12:30 - 1:30 - lunch (on your own)
1:30 - 5:00 - afternoon session: each talk 15min (w/ questions)
1:30 - 1:45 Luke Sernau (Google) Finding our Moat: Cultivating a relationship between open source and industry (video, slides)
1:45 - 2:00 Dirk Groeneveld (Allen Institute for AI) Overview of the OLMo project (video, slides)
2:00 - 2:15 Abhi Venigalla (Databricks) Data + Training Infrastructure for Custom LLMs (video, slides)
2:15 - 2:30 Julien Launay (Adaptive ML) Challenges in training the next open frontier model (video, slides)
2:30 - 2:40 - coffee break
2:50 - 3:05 Huu Nguyen (Ontocord) Overview of Ontocord (video, slides)
3:05 - 3:20 Jenia Jitsev (LAION) Open-source foundation models (video, slides)
3:20 - 3:35 Ce Zhang (Together) Building an Ecosystem for Open Foundation Models, Together
3:35 - 3:45 - coffee break
3:45 - 5:30 Panel: Open-source and the Future of AI: Maximizing Benefits while Reducing Risks (video)
Panelists: Nick Bostrom (remotely), Yann LeCun (remotely), Yoshua Bengio, Max Tegmark, Percy Liang, Jenia Jitsev, Julia Bossmann, Ethan Caballero, Irina Rish, Nora Belrose, Joscha Bach, Quintin Pope
Panel snapshots in a series of tweets (thank you, Alex!)
5:30 - 5:45 Natalia Vassilieva and William Marshall (Cerebras) Cerebras and Open Source: Our Journey
5:45 - 6:00 Maxence Ernoult (RAIN) Accelerating Inference and Training at the Edge (video, slides)
🎉🎉🎉 7:00 - 11:00 - Open Source AI party! (organized by CERC-AAI, Cerebras, Together, LAION) 🎉🎉🎉 waitlist