Schedule

8:15 - 9:00 Registration & breakfast

9:00 - 9:40 Intro & Overview: Irina Rish & Natalia Vassilieva

9:40 - 10:30 Plenary talk: Yuhai Tu

Statistical Physics of Deep Learning: On Learning Dynamics and Generalization

10:30 - 11:00 Coffee break

Track 1: Scaling & Emergence (room A)

11:00 - 11:30 Paul Bogdan (USC)

Theoretical Foundations for Artificial Intelligence (AI) Inspired from Understanding Biological Intelligence (BI): Detecting Phase Transitions and Quantifying the Degree of Emergence in Deep Learning 

11:30 - 12:00 Hailey Schoelkopf (EleutherAI)

Why Has Predicting Downstream Capabilities of Frontier AI Models with Scale Remained Elusive?

Track 2: Multilingual LLMs (room B)

11:00 - 11:30 Rio Yokota (Tokyo Institute of Technology)

Continual Pre-training of Open-Source Models on Japanese Text 

11:30 - 12:00 Tatiana Shavrina (Meta)

Towards Full Linguistic Diversity in Language Models


12:00 - 1:30 Lunch break

Track 1: Scaling & Emergence (room A)

1:30 - 2:00 Darshil Doshi (University of Maryland)

Emergence of In-Context Learning and Skill Composition

2:00 - 3:00 CERC-AAI Open-Source Foundation Models

Robin: A Suite of Vision-Language Models  

Track 2: Multilingual LLMs (room B)

1:30 - 2:00 Neha Sengupta (G42)

Comparing Adaptation and Training from Scratch for Bilingual Models

2:00 - 2:30 Preslav Nakov (Mohamed bin Zayed University of Artificial Intelligence)

Multilinguality Challenges and Datasets for Evaluating LLMs

2:30 - 3:00 Yishi Xu (Cerebras Systems)

Bilingual Adaptation of Monolingual Foundation Models

3:30 - 3:45 Coffee break

3:45 - 4:45 Panel

4:45 - 5:15 Open discussion & interactive working session (compiling a shared Google Doc of open questions & future directions)