A Comprehensive Overhaul of Feature Distillation

Announcements

Nov 19, 2019: Segmentation code was released on GitHub

Nov 6, 2019: Slides and poster were released

Sep 10, 2019: Code was released on GitHub

Jul 23, 2019: The paper was accepted to ICCV 2019 (poster)

Apr 3, 2019: The paper was released on arXiv

Publication

A Comprehensive Overhaul of Feature Distillation

Byeongho Heo, Jeesoo Kim, Sangdoo Yun, Hyojin Park, Nojun Kwak, and Jin Young Choi

IEEE International Conference on Computer Vision (ICCV), 2019

Fig 1. Performance of distillation methods on ImageNet

Abstract

We investigate the design aspects of feature distillation methods that achieve network compression, and we propose a novel feature distillation method whose distillation loss is designed to create a synergy among several aspects: the teacher transform, the student transform, the distillation feature position, and the distance function. The proposed distillation loss includes a feature transform with a newly designed margin ReLU, a new distillation feature position, and a partial L2 distance function that skips redundant information which adversely affects the compression of the student. On ImageNet, the proposed method achieves a top-1 error of 21.65% with ResNet50, outperforming the teacher network, ResNet152. The proposed method is evaluated on various tasks, including image classification, object detection, and semantic segmentation, and achieves significant performance improvements on all of them.
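The sketch below is a minimal PyTorch illustration of the two loss components described in the abstract: a margin-ReLU teacher transform and a partial L2 distance, applied to features taken at the pre-ReLU position of a stage. The function names, the `regressor` student transform, and the way the margin tensor is supplied are illustrative assumptions; refer to the released code on GitHub for the authors' implementation.

```python
import torch

def margin_relu(teacher_feat, margin):
    """Teacher transform: keep positive responses unchanged and clip
    negative responses to a per-channel negative margin.
    `margin` is assumed to have shape (1, C, 1, 1) with negative values."""
    return torch.max(teacher_feat, margin)

def partial_l2_distance(student_feat, teacher_target):
    """Partial L2: ignore positions where the teacher target is
    non-positive and the student response is already below it."""
    skip = (student_feat <= teacher_target) & (teacher_target <= 0)
    diff = (student_feat - teacher_target) ** 2
    return torch.where(skip, torch.zeros_like(diff), diff).sum()

def distillation_loss(student_feat, teacher_feat, margin, regressor):
    """Compare the student feature (after a student transform such as a
    1x1 conv + BN regressor) with the margin-ReLU-transformed teacher
    feature, both taken before the last ReLU of the corresponding stage."""
    teacher_target = margin_relu(teacher_feat, margin).detach()
    student_feat = regressor(student_feat)
    return partial_l2_distance(student_feat, teacher_target)
```

In the paper, the per-channel margin is derived from the teacher's batch-normalization statistics as the expected value of the negative responses; the sketch simply treats it as a precomputed tensor.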

Table 1. Design differences among various feature distillation methods

Fig 2. Position of the distillation target layer

Fig 3. Framework of the proposed distillation method

Table 2. Experimental settings and performance of the proposed method on the CIFAR-100 dataset

Table 3. Performance on the ImageNet dataset