Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks


We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning problems, including classification, regression, and reinforcement learning. The goal of meta-learning is to train a model on a variety of learning tasks, such that it can solve new learning tasks using only a small number of training samples. In our approach, the parameters of the model are explicitly trained such that a small number of gradient steps with a small amount of training data from a new task will produce good generalization performance on that task. In effect, our method trains the model to be easy to fine-tune. We demonstrate that this approach leads to state-of-the-art performance on a few-shot image classification benchmark, produces good results on few-shot regression, and accelerates fine-tuning for policy gradient reinforcement learning with neural network policies.
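The meta-training idea in the abstract (train parameters so that a few gradient steps on a new task generalize well) can be sketched concretely. The sketch below is a minimal illustration, not the paper's implementation: it uses a linear model over fixed sin/cos features on few-shot sinusoid regression, so the meta-gradient through one inner gradient step has a closed form. The feature map, task distribution, and step sizes `alpha` and `beta` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(x, freqs=np.arange(1, 6)):
    # Fixed sin/cos features; the model is linear in its parameters.
    return np.concatenate([np.sin(np.outer(x, freqs)),
                           np.cos(np.outer(x, freqs))], axis=1)

def sample_task(k=5):
    # Random sinusoid task; k support and k query points.
    amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0, np.pi)
    x = rng.uniform(-5, 5, size=2 * k)
    y = amp * np.sin(x - phase)
    phi = features(x)
    return (phi[:k], y[:k]), (phi[k:], y[k:])

def mse_grad(phi, y, w):
    # Gradient of 1/2 * mean squared error with respect to w.
    return phi.T @ (phi @ w - y) / len(y)

alpha, beta, dim = 0.1, 0.01, 10
theta = np.zeros(dim)
for _ in range(2000):
    (ps, ys), (pq, yq) = sample_task()
    w_adapted = theta - alpha * mse_grad(ps, ys, theta)   # inner step on support set
    H = ps.T @ ps / len(ys)                               # Hessian of inner 1/2-MSE
    # Full meta-gradient: chain rule through the inner step, d(w_adapted)/d(theta) = I - alpha*H.
    meta_grad = (np.eye(dim) - alpha * H) @ mse_grad(pq, yq, w_adapted)
    theta = theta - beta * meta_grad                      # outer step on query loss
```

For a deep network the Hessian-vector term is computed by automatic differentiation rather than in closed form; the structure of the inner/outer loop is the same.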

New Comparisons:

Rei '15: To design a baseline representative of Rei '15 for meta-learning problems, we concatenated a set of free parameters z to the input x and allowed the gradient steps to modify only z, rather than modifying theta as in MAML. For Omniglot, z was concatenated channel-wise with the input image. We ran this baseline on Omniglot and on the RL domains. As seen in the results below, it performed well on the toy pointmass problem but sub-par on the more difficult problems, likely because its meta-optimization is less flexible.
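The adaptation mechanism of this baseline can be sketched as follows. This is a hypothetical minimal version with a linear model: prediction is made from the concatenated input [x, z], and the inner loop takes gradient steps on z only while theta stays frozen. The model, step size, and number of steps are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
dim_x, dim_z = 1, 3
theta = rng.normal(size=(dim_x + dim_z, 1))   # model weights, frozen during adaptation

def predict(x, z):
    # Concatenate the free task code z to every input, then apply the model.
    inp = np.concatenate([x, np.tile(z, (len(x), 1))], axis=1)
    return inp @ theta

def adapt_z(x, y, z, lr=0.1, steps=10):
    # Inner loop: gradient of 1/2 * MSE with respect to z only (not theta).
    for _ in range(steps):
        err = predict(x, z) - y                        # (N, 1) residuals
        grad_z = (err * theta[dim_x:].T).mean(axis=0)  # (dim_z,)
        z = z - lr * grad_z
    return z

x = rng.normal(size=(5, dim_x))
y = np.sin(x)                       # toy regression targets
z_new = adapt_z(x, y, np.zeros(dim_z))
```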

Multi-task Baselines: We trained 500 separate regressors on 500 random sinusoid tasks, one at a time in sequence. The error of each individual regressor was low (<0.02 on its respective sinusoid). We then took the average parameter vector across the regressors and fine-tuned it on 5 datapoints.
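This averaging baseline can be sketched as below. The sketch is illustrative, not the experiment's exact setup: it uses a linear model over fixed sin/cos features, fewer tasks than the 500 in the experiment, and assumed step sizes.

```python
import numpy as np

rng = np.random.default_rng(1)
freqs = np.linspace(0.5, 5.0, 10)

def features(x):
    # Fixed sin/cos features so each regressor is linear in its parameters.
    return np.concatenate([np.sin(np.outer(x, freqs)),
                           np.cos(np.outer(x, freqs))], axis=1)

def sample_task(n):
    # Random sinusoid task with n datapoints.
    amp, phase = rng.uniform(0.1, 5.0), rng.uniform(0, np.pi)
    x = rng.uniform(-5, 5, size=n)
    return features(x), amp * np.sin(x - phase)

def fit(phi, y, w, lr=0.05, steps=200):
    # Plain gradient descent on 1/2 * MSE.
    for _ in range(steps):
        w = w - lr * phi.T @ (phi @ w - y) / len(y)
    return w

dim = 2 * len(freqs)
weights = [fit(*sample_task(40), np.zeros(dim)) for _ in range(50)]
w_mean = np.mean(weights, axis=0)          # average parameter vector

phi5, y5 = sample_task(5)                  # 5 datapoints from a new task
w_ft = fit(phi5, y5, w_mean, steps=50)     # fine-tune the averaged parameters
```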

When training the separate regressors, we tried three variants: no regularization, standard l2 regularization, and regularization toward the mean parameter vector of the regressors trained so far (to tie the parameter vectors together). We manually tuned the fine-tuning step size for these baselines, and set the regularization weight as high as possible without significantly affecting per-task performance.
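The third scheme, regularizing toward the running mean, can be sketched as below. This is a hypothetical minimal version: each regressor's loss gets an added l2 penalty toward an anchor vector standing in for the mean of the previously trained regressors. The data, penalty weight, and step sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def fit_tied(phi, y, w_anchor, reg, lr=0.05, steps=300):
    # Gradient descent on 1/2 * MSE(phi @ w, y) + (reg / 2) * ||w - w_anchor||^2,
    # pulling the new regressor toward the anchor parameter vector.
    w = np.zeros_like(w_anchor)
    for _ in range(steps):
        grad = phi.T @ (phi @ w - y) / len(y) + reg * (w - w_anchor)
        w = w - lr * grad
    return w

phi = rng.normal(size=(40, 8))
y = rng.normal(size=40)
w_anchor = rng.normal(size=8)    # stands in for the mean of earlier regressors

w_free = fit_tied(phi, y, w_anchor, reg=0.0)   # ordinary, unregularized fit
w_tied = fit_tied(phi, y, w_anchor, reg=10.0)  # strongly tied to the anchor
```

A larger `reg` keeps the parameter vectors closer together, at the cost of per-task fit, which is why the weight was set as high as possible without significantly hurting performance.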

As seen in the results below, none of these variants outperformed the multi-task baseline reported in the paper, suggesting that MAML is doing something more sophisticated than finding the mean optimal parameter vector.

Videos of RL Results: