Humans have the extraordinary ability to learn continually from experience. Not only can we apply previously learned knowledge and skills to new situations, but we can also use these as the foundation for later learning. One of the grand goals of AI is to build an artificial "continual learning" agent that constructs a sophisticated understanding of the world from its own experience, through the autonomous, incremental development of ever more complex skills and knowledge.

Hallmarks of continual learning include: interactive, incremental, online learning (learning occurs at every moment, with no fixed tasks or data sets); hierarchy or compositionality (previous learning can become the foundation for later learning); "isolaminar" construction (the same algorithm is used at all stages of learning); resistance to catastrophic forgetting (new learning does not destroy old learning); and unlimited temporal abstraction (both knowledge and skills may refer to or span arbitrary periods of time).

Continual learning is an unsolved problem that presents particular difficulties for the deep-architecture approach that is currently the favoured workhorse for many applications. Some strides have been made recently, and many diverse research groups have continual learning on their road map. Hence we believe this is an opportune moment for a workshop focusing on this theme. The goals would be to define the different facets of the continual-learning problem, to tease out the relationships between relevant fields (such as reinforcement learning, deep learning, lifelong learning, transfer learning, developmental learning, computational neuroscience, etc.), and to propose and explore promising new research directions.


Razvan Pascanu (Google DeepMind)
Mark Ring (Cogitai)
Tom Schaul (Google DeepMind)


Please send any questions to cldlworkshop@gmail.com