Uncertainty-based Continual Learning with Adaptive Regularization (NeurIPS 2019)

Hongjoon Ahn*, Sungmin Cha*, Donggyu Lee and Taesup Moon

M.IN.D Lab

Sungkyunkwan University

{hong0805, csm9493, ldk308, tsmoon}@skku.edu

(* Equal contribution)

Abstract

We introduce a new neural network-based continual learning algorithm, dubbed Uncertainty-regularized Continual Learning (UCL), which builds on the traditional Bayesian online learning framework with variational inference. We focus on two significant drawbacks of the recently proposed regularization-based methods: a) considerable additional memory cost for determining the per-weight regularization strengths and b) the absence of a graceful forgetting scheme, which can prevent performance degradation when learning new tasks. In this paper, we show that UCL solves these two problems by introducing a fresh interpretation of the Kullback-Leibler (KL) divergence term of the variational lower bound for the Gaussian mean-field approximation. Based on this interpretation, we propose the notion of node-wise uncertainty, which drastically reduces the number of additional parameters needed to implement per-weight regularization. Moreover, we devise two additional regularization terms that enforce stability by freezing important parameters for past tasks and allow plasticity by controlling the actively learning parameters for a new task. Through extensive experiments, we show that UCL convincingly outperforms most of the recent state-of-the-art baselines not only on popular supervised learning benchmarks, but also on challenging lifelong reinforcement learning tasks.
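The node-wise uncertainty idea above can be illustrated with a short PyTorch-style sketch. The layer name MeanFieldLinear, the helper node_uncertainty_penalty, and the exact penalty form are illustrative assumptions made here for exposition; they are not the released UCL implementation or its exact loss.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MeanFieldLinear(nn.Module):
    """Linear layer with a Gaussian mean-field posterior q(w) = N(mu, sigma^2).
    One learnable log-sigma is kept per input node rather than per weight,
    which is the node-wise uncertainty idea and shrinks the parameter overhead."""
    def __init__(self, in_features, out_features):
        super().__init__()
        self.mu = nn.Parameter(0.01 * torch.randn(out_features, in_features))
        self.log_sigma = nn.Parameter(torch.full((in_features,), -3.0))

    def forward(self, x):
        sigma = self.log_sigma.exp()                            # one sigma per input node
        weight = self.mu + torch.randn_like(self.mu) * sigma    # reparameterized weight sample
        return F.linear(x, weight)

def node_uncertainty_penalty(layer, mu_prev, log_sigma_prev):
    """Penalize moving mu away from the previous task's solution, with strength set
    node-wise: low-uncertainty (important) nodes are held nearly frozen, while
    high-uncertainty nodes stay free to adapt. An illustrative approximation."""
    strength = log_sigma_prev.exp() ** -2                       # 1 / sigma_prev^2, per node
    return (strength * (layer.mu - mu_prev) ** 2).sum()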

Overview of UCL

  • Information loss and negative transfer of an important node


  • Final loss function of UCL (a sequential-training sketch follows after this list)


  • Colored hidden nodes and edges denote important nodes


  • Experimental results on supervised learning


  • Experimental results on reinforcement learning
    • Task: 8 tasks in Roboschool; Algorithm: PPO
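The final loss function referenced above is used inside a standard sequential-task training loop. Below is a hedged sketch of how the uncertainty-weighted penalty from the earlier snippet could be applied between tasks; the function train_continually, the model.fc attribute (a single MeanFieldLinear layer), and the reg_coeff value are assumptions for illustration, not the authors' training schedule.

import torch

def train_continually(model, tasks, reg_coeff=1e-2, epochs=5, lr=1e-3):
    """Train on tasks sequentially; after each task, snapshot the posterior and
    use it to regularize the next task (illustrative loop, not the paper's schedule)."""
    snapshot = None                                    # (mu, log_sigma) from the previous task
    for task_loader in tasks:
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(epochs):
            for x, y in task_loader:
                loss = torch.nn.functional.cross_entropy(model(x), y)
                if snapshot is not None:               # add the node-wise penalty
                    mu_prev, log_sigma_prev = snapshot
                    loss = loss + reg_coeff * node_uncertainty_penalty(model.fc, mu_prev, log_sigma_prev)
                opt.zero_grad()
                loss.backward()
                opt.step()
        # Freeze a copy of the posterior learned on this task for the next one.
        snapshot = (model.fc.mu.detach().clone(), model.fc.log_sigma.detach().clone())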

Paper link

Implementation code

Paper, Poster and Summarizing Paper

UCL_arxiv.pdf
NeurIPS2019_UCL_poster_final.pdf

Citation

@incollection{NIPS2019_UCL,
  title = {Uncertainty-based Continual Learning with Adaptive Regularization},
  author = {Hongjoon Ahn and Sungmin Cha and Donggyu Lee and Taesup Moon},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  pages = {4394--4404},
  year = {2019},
  url = {http://papers.nips.cc/paper/8690-uncertainty-based-continual-learning-with-adaptive-regularization.pdf}
}