The 4th workshop on

  Neural Scaling Laws: 

Towards Maximally Beneficial AGI


Dec 2, 2022, New Orleans, USA


This is the 4th workshop in a series that started in October 2021, motivated by recent advances in the rapidly developing area of large-scale neural network models (e.g., GPT-3, CLIP, DALL-E, Gato, Stable Diffusion, and many others), pretrained in an unsupervised way on very large and diverse datasets. Such models often demonstrate significant improvements in their few-shot generalization abilities, compared to their smaller-scale counterparts, across a wide range of downstream tasks - what one could call a "transformation of quantity into quality" or "emergent behavior".


The rapid increase in the generalization capabilities of such models is an important step towards the long-standing objective of achieving Artificial General Intelligence (AGI). By AGI here we mean literally a "general", i.e. broad, versatile AI capable of quickly adapting to a wide range of situations and tasks, both novel and previously encountered - that is, achieving a good stability (memory) vs. plasticity (adaptation) trade-off, in continual learning terminology.

The goal of the workshop series is to provide a forum for discussion of the most recent advances in large-scale pretrained models, focusing specifically on empirical scaling laws of such systems' performance with increasing compute, model size, and pretraining data (power laws, "broken" power laws, "phase transitions", etc.). We will also explore the trade-off between increasing AI capabilities and AI safety/alignment with human values, considering a range of evaluation metrics beyond predictive performance. These topics are closely related to transfer, continual, and meta-learning, as well as to out-of-distribution generalization, robustness, and invariant/causal predictive modeling.
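As a concrete illustration of the functional form such scaling laws typically take (a sketch using assumed placeholder notation, not the workshop's own), the test loss L is often well described empirically as a power law in model size N,

    L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N},

where N_c and \alpha_N are constants fitted to data, with analogous laws for dataset size and compute. A "broken" power law allows the exponent to change between regimes, while a "phase transition" refers to an abrupt change in a downstream capability once scale crosses a threshold.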



AI Helps Ukraine: Charity Conference

Mila (Quebec AI Institute) is hosting the “AI Helps Ukraine: Charity Conference”, which aims to raise funds to support Ukraine with medical and humanitarian aid.

The conference includes a series of online talks (from November to December) and a full-day in-person event (Dec 8) at Mila under the theme "AI for Good". We invite you to join us in person or online!

Some of the world's most outstanding AI researchers will participate in the conference, including Yoshua Bengio, Timnit Gebru, Max Welling, Alexei Efros, Regina Barzilay, Anna Goldenberg, and many others.

You can find the complete schedule here.

We encourage everyone to DONATE to our cause using this link and to help us spread the word about the event by sharing our tweets with your network.