Towards AGI: Scaling, Emergence, Alignment
Time: Mon & Wed, 4:30-6:30pm EST Location: Mila Auditorium 2 & online Discussions: AGI discord, channel #ift6760a-general
Course Info
Instructor: Irina Rish (irina.rish at mila.quebec)
TAs: TBA
Previous courses:
Winter 2023: Towards AGI (IFT6760A)
Winter 2022: Neural Scaling Laws (IFT6760B/6167)
This course: Towards AGI: Scaling, Emergence and Alignment (IFT6760A)
Location: Auditorium 2 at Mila, 6650, boul. St-Urbain, Montréal
Course Description
This seminar-style course will focus on recent advances in the rapidly developing area of foundation models, i.e. large-scale neural network models pretrained on very large, diverse datasets (GPT-4, Grok, Claude, DALL-E, and many others). Such models often demonstrate significant improvements in few-shot generalization across a wide range of downstream tasks, as compared to their smaller-scale counterparts - what one could call a "transformation of quantity into quality" or "emergent behavior". This is an important step towards the long-standing objective of achieving Artificial General Intelligence (AGI). By AGI we mean literally a "general", i.e. broad, versatile AI capable of quickly adapting to a wide range of situations and tasks, both novel and previously encountered - in continual learning terminology, achieving a good stability (memory) vs. plasticity (adaptation) trade-off.
In this course, we will survey the most recent advances in large-scale pretrained models, focusing specifically on empirical scaling laws of such systems' performance as compute, model size, and pretraining data increase (power laws, phase transitions). We will also explore the trade-off between increasing AI capabilities and AI safety/alignment with human values, considering a range of evaluation metrics beyond predictive performance. Finally, we will touch upon several related fields, including transfer, continual, and meta-learning, as well as out-of-distribution generalization, robustness, and invariant/causal predictive modeling.
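To make the notion of an empirical scaling law concrete, the sketch below fits a power law L(N) = a * N^(-b) to hypothetical (model size, loss) pairs. The numbers are invented for illustration only - they are not measurements from any real model family - and the fit is done by simple least squares in log-log space.

```python
import numpy as np

# Hypothetical (model_size, validation_loss) pairs, invented purely
# for illustration -- not real measurements of any model family.
n = np.array([1e6, 1e7, 1e8, 1e9, 1e10])
loss = np.array([4.2, 3.1, 2.3, 1.7, 1.3])

# A power law L(N) = a * N^(-b) is linear in log-log space:
#   log L = log a - b * log N,
# so an ordinary least-squares line fit recovers the exponent b.
slope, intercept = np.polyfit(np.log(n), np.log(loss), 1)
b = -slope           # scaling exponent
a = np.exp(intercept)

print(f"fitted exponent b = {b:.3f}")
# Extrapolate the fitted law one order of magnitude beyond the data.
print(f"predicted loss at N = 1e11: {a * 1e11 ** (-b):.2f}")
```

Fitting in log-log space is the standard first pass; papers covered in the course refine this with saturating terms (irreducible loss) and joint compute/data/parameter fits.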
Besides introductory lectures by the instructor and invited talks by guest speakers, the course will center on surveying and presenting recent papers listed in the "Topics & Papers" section of the menu at the top of this page. If you have suggestions for papers to review, please contact the instructor and/or the TAs.
Evaluation Criteria
Paper presentations: 40%
Class project (report + poster presentation): 50%
Class participation: asking questions and participating in discussions (on Discord/in class): 10%
Note: due to time zone differences, it may be difficult for some students to join all classes in person; classes will be recorded, and questions about the papers under discussion can be submitted on the course Discord.