Semantic Supervision

Enabling Generalization over Output Spaces

Austin W. Hanjie*, Ameet Deshpande*, Karthik Narasimhan

Department of Computer Science, Princeton University

* = Equal contribution

Motivation


Illustration of semantic supervision

The standard supervised classification setup involves assigning inputs to one or more of a predefined set of outputs. As a result, generalization efforts in machine learning have focused largely on the input space.

We propose semantic supervision (SemSup), a unified paradigm that enables generalization over the output space as well. SemSup facilitates generalization to unseen descriptions, unseen classes, unseen superclasses, and unseen tasks.

Abstract

In this paper, we propose semantic supervision (SemSup) - a unified paradigm for training classifiers that generalize over output spaces. In contrast to standard classification, which treats classes as discrete symbols, SemSup represents them as dense vector features obtained from descriptions of classes (e.g., "The cat is a small carnivorous mammal"). This allows the output space to be unbounded (in the space of descriptions) and enables models to generalize both over unseen inputs and unseen outputs (e.g., "The aardvark is a nocturnal burrowing mammal with long ears"). Specifically, SemSup enables four types of generalization to (1) unseen class descriptions, (2) unseen classes, (3) unseen super-classes, and (4) unseen tasks. Through experiments on four classification datasets across two variants (multi-class and multi-label), two input modalities (text and images), and two output description modalities (text and JSON), we show that our SemSup models significantly outperform standard supervised models and existing models that leverage word embeddings over class names. For instance, our model outperforms baselines by 40% and 20% precision points on unseen descriptions and classes, respectively, on a news categorization dataset (RCV1). SemSup can serve as a pathway for scaling neural models to large unbounded output spaces and enabling better generalization and model reuse for unseen tasks and domains.

Semantic supervision paradigm

While the standard supervised classification framework treats classes as discrete symbols, our semantic supervision (SemSup) paradigm uses rich descriptions of different kinds to learn semantic representations for classes. In this example, we depict two variants of our model, which use descriptions in (1) natural language and (2) structured JSON.


We sample one of many automatically collected descriptions of each class throughout training to capture its different aspects (a minimal code sketch follows these examples):

  1. "Tigers are striped big cats"

  2. "The tiger is the largest living cat species"

  3. "It has dark vertical stripes on an orange fur"

Capabilities


Instances (shown as images) and outputs in the form of class descriptions are embedded into a joint input-output space. Semantic supervision allows models to generalize to (see the inference sketch after this list):

(a) unseen descriptions ("a large orange striped carnivore"),

(b) unseen classes (penguin),

(c) unseen superclasses (felines), and

(d) unseen tasks (flower classification instead of animal classification).
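Because classes are represented by their descriptions rather than by fixed indices, all four kinds of generalization amount to the same operation at test time: encode a new set of descriptions and score them with the same trained model. A short sketch, reusing the hypothetical SemSupClassifier from above:

```python
import torch

@torch.no_grad()
def predict_unseen(model, inputs, unseen_class_descriptions):
    """Score inputs against descriptions of classes never seen during training."""
    logits = model(inputs, unseen_class_descriptions)  # (batch, num_unseen_classes)
    return logits.argmax(dim=-1)                       # index into the new description set
```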



Results

Generalization to unseen descriptions

SemSup (our model, green) outperforms baselines when unseen descriptions are used at test time.

Example:

Seen description: Flatfish is a fish belonging to the order Pleuronectiformes.

Unseen description: A flatfish moves its fins up and down like a fan.


Generalization to unseen classes

SemSup (our model, green) outperforms baselines on unseen (new) classes.

Example:

Seen classes: Dog 🐶, cat 🐱, tiger 🐅

Unseen classes: Wolf 🐺, lion 🦁


Generalization to unseen superclasses

SemSup (our model, green) outperforms baselines when tested on superclasses not seen during training.

It performs well even though these superclasses are at a coarser level of granularity than the classes seen during training.

Example:

Seen classes: Dog 🐶, cat 🐱, parrot 🦜, eagle 🦅

Unseen superclasses: Mammal, Bird

Generalization to unseen tasks

SemSup (our model, green) outperforms baselines when we perform task transfer from RCV1 to 20 Newsgroups (20 NG).

Example classes:

RCV1: Asset transfer 🔄, inflation 📈, science and technology 🧪

20NG: Motorcycle 🏍️, electronics ⚡, Microsoft Windows 🖥️