IFT 6760b - Continual Learning: Towards "Broad" AI

Winter 2020 (Jan 6 to Apr 16, 2020), a machine learning course offered at the Université de Montréal

Course Description:

Stephen Hawking famously said, "Intelligence is the ability to adapt to change." While today's AI systems can achieve impressive performance on specific tasks, from accurate image recognition to super-human play in games such as Go and chess, they are still quite "narrow": they cannot easily adapt to a wide range of new tasks and environments without forgetting what they have learned before, something that humans and animals seem to do naturally throughout their lifetimes. This course focuses on the rapidly growing research area of machine learning called continual (lifelong) learning, which aims to push modern AI from "narrow" to "broad", i.e., to develop learning models and algorithms capable of never-ending, lifelong, continual learning over a large, and potentially infinite, set of drastically different tasks and environments. We will review the state-of-the-art literature on continual lifelong learning in modern AI, including the catastrophic forgetting problem and recent approaches to overcoming it in deep neural networks: augmenting the stochastic gradient descent algorithm, alternative optimization approaches, architecture adaptation/evolution based on expansion/compression, dynamic routing/selective execution ("internal" attention), and more. We will also survey related work on the stability vs. plasticity dilemma in neuroscience, as well as related topics in the biology of adaptation and memory.
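To make the "augmenting SGD" family of approaches concrete, here is a minimal, self-contained sketch of one canonical example, Elastic Weight Consolidation (EWC; Kirkpatrick et al., 2017). This is an illustration only, not part of the official course materials: the toy tasks, hyperparameters, and function names are assumptions made for this example. After training on task A, training on task B adds a quadratic penalty that pulls the parameters deemed important for task A (importance estimated by a diagonal empirical Fisher) back toward their task-A values.

    # Illustrative sketch of an EWC-style penalty on toy logistic-regression
    # tasks; all names and settings here are assumptions for the example.
    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def grad_nll(w, X, y):
        # Gradient of the average logistic negative log-likelihood.
        return X.T @ (sigmoid(X @ w) - y) / len(y)

    def fisher_diag(w, X, y):
        # Diagonal empirical Fisher: mean squared per-example gradient,
        # used as a per-weight importance estimate for the old task.
        g = X * (sigmoid(X @ w) - y)[:, None]
        return (g ** 2).mean(axis=0)

    def train(w, X, y, lr=0.1, steps=500, penalty=None):
        # Full-batch gradient descent for simplicity (stands in for SGD).
        # If penalty=(lam, F, w_anchor) is given, add the EWC-style gradient
        # lam * F * (w - w_anchor), anchoring important weights.
        for _ in range(steps):
            g = grad_nll(w, X, y)
            if penalty is not None:
                lam, F, w_anchor = penalty
                g = g + lam * F * (w - w_anchor)
            w = w - lr * g
        return w

    # Two toy binary tasks whose labels depend on different input features.
    Xa = rng.normal(size=(200, 5)); ya = (Xa[:, 0] > 0).astype(float)
    Xb = rng.normal(size=(200, 5)); yb = (Xb[:, 1] > 0).astype(float)

    w_a = train(np.zeros(5), Xa, ya)        # train on task A
    F = fisher_diag(w_a, Xa, ya)            # per-weight importance for task A
    w_naive = train(w_a.copy(), Xb, yb)     # plain training on B: forgets A
    w_ewc = train(w_a.copy(), Xb, yb, penalty=(50.0, F, w_a.copy()))

    acc = lambda w, X, y: float((((X @ w) > 0) == (y > 0.5)).mean())
    print("task A accuracy after naive training on B:", acc(w_naive, Xa, ya))
    print("task A accuracy after EWC-style training on B:", acc(w_ewc, Xa, ya))

On this toy problem, plain training on task B typically erodes task A accuracy, while the penalized run retains much more of it, illustrating the stability vs. plasticity trade-off discussed above.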


Evaluation

  • Class project: 40%
  • Paper presentations: 25%
  • Peer reviewing (in class): 25%
  • Class participation (asking questions, discussion participation, motivation, curiosity, etc.): 10%

Tentative topics (to be updated as we go along; a mix of lectures and seminars/paper presentations):

  • Lecture 1: Intro to continual learning in artificial and natural neural nets: catastrophic forgetting, plasticity vs. stability, and all that
  • Continual reinforcement learning

Schedule


Resources

Surveys:

Continual Learning reading group @ Mila (Mondays at 1:30pm): https://github.com/optimass/continual_learning_papers/tree/master/summaries

Also, a somewhat outdated set of links is available here: https://sites.google.com/site/irinarish/metalearning


Background on machine learning:

