The 7th Workshop on Neural Scaling Laws

Scaling, Transfer & Multilingual Models


Monday, July 22, 2024, Vienna, Courtyard Vienna Prater/Messe



This joint event combines the 7th workshop in the Scaling Workshop Series and the 2nd Multilingual Models Workshop. The objective of our joint workshop, co-organized by the CERC-AAI Lab@UdeM and Cerebras, is to provide a forum for discussing recent advances in foundation models: large-scale neural networks pretrained in an unsupervised way on large, diverse datasets. The rapid growth in the generalization capabilities of such models is an important step towards the long-standing objective of Artificial General Intelligence (AGI), that is, a truly broad, versatile AI as opposed to a "narrow" specialist. Pushing the capabilities of state-of-the-art AI systems towards AGI, while working to better understand and steer their behavior towards alignment with human values and intentions, is the high-level objective of the CERC in Autonomous AI program, the initiator and primary driver behind the Scaling Workshop Series.


One particular example of the "narrow" behavior of existing language foundation models is their focus on English. The great majority of existing LLMs have been built primarily for English and perform far better on English-language text. These models lack the linguistic and cultural knowledge needed to adequately serve a global audience. Pushing state-of-the-art foundation models to match their English-language capabilities in other languages is important for many production use cases around the world, as well as for further understanding the promises and limitations of "narrow" versus "broad" models.


The topics of this workshop include (but are not limited to):


We aim to bring together experts in the field, engage in meaningful dialogue, and foster solutions that promote equity and inclusivity in the AI landscape.