Probabilistic Model-Agnostic Meta-Learning

Chelsea Finn*, Kelvin Xu*, Sergey Levine

University of California, Berkeley

Preprint: https://arxiv.org/abs/1806.02817

Abstract

Meta-learning for few-shot learning entails acquiring a prior over previous tasks and experiences, such that new tasks can be learned from small amounts of data. However, a critical challenge in few-shot learning is task ambiguity: even when a powerful prior can be meta-learned from a large number of prior tasks, a small dataset for a new task can simply be too ambiguous to acquire a single accurate model (e.g., a classifier) for that task. In this paper, we propose a probabilistic meta-learning algorithm that can sample models for a new task from a model distribution. Our approach extends model-agnostic meta-learning, which adapts to new tasks via gradient descent, to incorporate a parameter distribution that is trained via a variational lower bound. At meta-test time, our algorithm adapts via a simple procedure that injects noise into gradient descent, and at meta-training time, the model is trained such that this stochastic adaptation procedure produces samples from the approximate model posterior. Our experimental results show that our method can sample plausible classifiers and regressors in ambiguous few-shot learning problems.
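To make the meta-test-time adaptation procedure concrete, the sketch below illustrates the general idea of noise-injected gradient descent on a few-shot training set: parameters are drawn from a learned Gaussian, then updated with noisy gradient steps. This is a minimal illustration, not the paper's reference implementation; the toy linear regressor, the parameter names (`mu_theta`, `log_sigma_theta`), and all hyperparameter values are assumptions for exposition.

```python
import torch

def sample_adapted_params(mu_theta, log_sigma_theta, x_train, y_train,
                          inner_lr=0.01, num_steps=5, noise_scale=0.01):
    """Sample one plausible task model via noisy gradient-based adaptation.

    mu_theta and log_sigma_theta parameterize a diagonal Gaussian over the
    weights of a toy linear regressor; all names and hyperparameters are
    illustrative assumptions, not values from the paper.
    """
    # Draw an initial parameter vector from the learned prior N(mu, sigma^2).
    theta = mu_theta + torch.exp(log_sigma_theta) * torch.randn_like(mu_theta)
    theta = theta.detach().requires_grad_(True)

    for _ in range(num_steps):
        # Few-shot loss on the task's small training set.
        loss = ((x_train @ theta - y_train) ** 2).mean()
        (grad,) = torch.autograd.grad(loss, theta)
        # Noisy gradient step: the injected Gaussian noise makes repeated
        # calls return different plausible models for an ambiguous task.
        theta = (theta - inner_lr * grad
                 + noise_scale * torch.randn_like(theta))
        theta = theta.detach().requires_grad_(True)

    return theta.detach()
```

Calling this function several times on the same few-shot dataset yields an ensemble of distinct but plausible models, which is the behavior exploited in the qualitative figures described below: each sampled model can resolve an ambiguous task in a different way.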

Below, we present a set of additional qualitative figures for our approach. We refer to our method as PLATIPUS (short for Probabilistic LATent model for Incorporating Priors and Uncertainty in few-Shot learning).

  • For each column, we show the classifications (indicated with pink borders) made by the sampled classifier that performed best on the test images in that column, after adapting to the training images in the leftmost column. Note that, for each pair of attributes, there is always at least one sampled classifier that is sensitive to that attribute pair while ignoring the third attribute. A code sketch of this best-of-samples selection appears after this list.
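For concreteness, the selection procedure the caption describes (adapt several sampled models, then report the one that performs best on a column's test images) might look like the following sketch. It reuses the hypothetical `sample_adapted_params` helper from above; the data tensors and the squared-error scoring metric are illustrative placeholders, not the evaluation protocol from the paper.

```python
def best_sampled_model(mu_theta, log_sigma_theta, x_train, y_train,
                       x_test, y_test, num_samples=10):
    """Adapt several sampled models and keep the one that scores best on
    the held-out test images (toy squared-error metric; illustrative)."""
    candidates = [sample_adapted_params(mu_theta, log_sigma_theta,
                                        x_train, y_train)
                  for _ in range(num_samples)]
    # Lower test error = better candidate.
    return min(candidates,
               key=lambda th: ((x_test @ th - y_test) ** 2).mean().item())
```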