Diversify and Disambiguate: Learning From Underspecified Data


Yoonho Lee, Huaxiu Yao, Chelsea Finn

Paper | Code

Abstract

Many datasets are underspecified, meaning that multiple equally viable solutions fit the data. Underspecified datasets can be problematic for methods that learn a single hypothesis because different functions that achieve low training loss can focus on different predictive features and thus have widely varying predictions on out-of-distribution data. We propose DivDis, a simple two-stage framework that first learns a diverse collection of hypotheses for a task by leveraging unlabeled data from the test distribution. We then disambiguate by selecting one of the discovered hypotheses using minimal additional supervision, in the form of additional labels or inspection of function visualizations. We demonstrate the ability of DivDis to find hypotheses that use robust features in image classification and natural language processing problems with underspecification.

We propose Diversify and Disambiguate (DivDis), a two-stage framework for learning from underspecified data. In the Diversify stage, we train a set of diverse functions that each achieve low predictive risk on the source domain while making differing predictions on unlabeled data from a separate target domain. In the Disambiguate stage, a small amount of additional supervision is used to select the best function from the diversified set.
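To make the two stages concrete, below is a minimal PyTorch-style sketch of one way to implement them with a shared backbone and multiple prediction heads. All names (`MultiHeadClassifier`, `agreement_penalty`, `lambda_div`) and the particular form of the repulsive term are illustrative assumptions, not the exact objective from the paper or the released code.

```python
# Illustrative sketch of the two DivDis stages with a multi-head classifier.
# The specific agreement penalty below is a placeholder for a diversification term.
import torch
import torch.nn.functional as F

class MultiHeadClassifier(torch.nn.Module):
    def __init__(self, backbone, feat_dim, n_classes, n_heads):
        super().__init__()
        self.backbone = backbone
        self.heads = torch.nn.ModuleList(
            [torch.nn.Linear(feat_dim, n_classes) for _ in range(n_heads)]
        )

    def forward(self, x):
        z = self.backbone(x)
        # One set of logits per head: list of (batch, n_classes) tensors.
        return [head(z) for head in self.heads]

def agreement_penalty(logits_a, logits_b):
    # Simple repulsive term: penalize probability mass that two heads place
    # on the same class for the same unlabeled target example.
    p_a, p_b = logits_a.softmax(-1), logits_b.softmax(-1)
    return (p_a * p_b).sum(-1).mean()

def diversify_step(model, opt, x_src, y_src, x_tgt, lambda_div=1.0):
    """One Diversify-stage update: fit labeled source data while pushing
    heads to disagree on unlabeled target data."""
    logits_src = model(x_src)
    logits_tgt = model(x_tgt)
    # Every head must achieve low risk on the labeled source data.
    loss = sum(F.cross_entropy(l, y_src) for l in logits_src)
    # Repulsive term applied to every pair of heads on the target batch.
    for i in range(len(logits_tgt)):
        for j in range(i + 1, len(logits_tgt)):
            loss = loss + lambda_div * agreement_penalty(logits_tgt[i], logits_tgt[j])
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

def disambiguate(model, x_few, y_few):
    """Disambiguate stage: pick the head that best fits a handful of
    labeled target examples."""
    with torch.no_grad():
        accs = [(l.argmax(-1) == y_few).float().mean() for l in model(x_few)]
    return int(torch.stack(accs).argmax())
```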

As an illustrative task, consider 2D binary classification with ambiguity, shown on the right. Note that multiple hypotheses each describe the data equally well, potentially leading to different predictions on the out-of-distribution unlabeled data.
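As a concrete stand-in for this toy setting, the snippet below generates one possible underspecified dataset: on the labeled source data both coordinates predict the label perfectly, so a classifier thresholding either coordinate fits equally well, while on the unlabeled target data the two coordinates are decorrelated and those classifiers disagree. The construction is our own illustration, not the exact dataset used in the paper's figures.

```python
# Toy underspecified data: two hypotheses ("use x1" and "use x2") are equally
# good on the source data but make different predictions on the target data.
import numpy as np

def make_underspecified_data(n=200, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, size=n)
    sign = 2 * y - 1                      # map {0, 1} -> {-1, +1}
    noise = lambda: 0.1 * rng.standard_normal(n)
    # Source: both coordinates are aligned with the label.
    x_src = np.stack([sign + noise(), sign + noise()], axis=1)
    # Target: the two coordinates vary independently, so the features conflict.
    s1 = 2 * rng.integers(0, 2, size=n) - 1
    s2 = 2 * rng.integers(0, 2, size=n) - 1
    x_tgt = np.stack([s1 + noise(), s2 + noise()], axis=1)
    return x_src, y, x_tgt

x_src, y_src, x_tgt = make_underspecified_data()
# A classifier thresholding x1 and one thresholding x2 both fit (x_src, y_src)
# perfectly, yet they disagree on roughly half of x_tgt.
```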

Shown above is a visualization of the Diversify stage when training with two heads. Our diversification loss acts as a repulsive force in function space, producing the two functions that are, in a sense, farthest from each other within the set of near-optimal functions. Conventional methods such as ERM do not discover such functions even though they achieve near-zero training loss.
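One way to instantiate such a repulsive force for a pair of heads is an information-theoretic penalty on an unlabeled target batch: estimate the joint distribution over the two heads' predicted classes from batch statistics and minimize its mutual information, which drives the heads toward independent, and therefore different, predictions. The sketch below shows this assumed form; consult the paper and released code for the exact diversification objective.

```python
# Sketch of a mutual-information-style repulsive term between two heads,
# estimated from an unlabeled target batch. Assumed form for illustration.
import torch

def mutual_information_penalty(logits_a, logits_b, eps=1e-8):
    p_a = logits_a.softmax(-1)                                    # (batch, C)
    p_b = logits_b.softmax(-1)                                    # (batch, C)
    joint = torch.einsum("bi,bj->ij", p_a, p_b) / p_a.shape[0]    # (C, C)
    marg_a = joint.sum(dim=1, keepdim=True)                       # (C, 1)
    marg_b = joint.sum(dim=0, keepdim=True)                       # (1, C)
    # MI = sum_ij joint_ij * log(joint_ij / (marg_a_i * marg_b_j))
    mi = (joint * (torch.log(joint + eps) - torch.log(marg_a * marg_b + eps))).sum()
    return mi  # minimizing this pushes the two heads toward independent predictions
```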

Running our Diversify stage with a large number of heads results in a dense covering of the set of functions compatible with the given data.