AutoLR: Layer-wise Pruning and Auto-tuning of Learning Rates in Fine-tuning of Deep Networks
Abstract
Existing fine-tuning methods use a single learning rate for all layers. In this paper, we first show that the layer-wise weight variations produced by fine-tuning with a single learning rate do not match the well-known notion that lower layers extract general features while higher layers extract task-specific features. Based on this observation, we propose an algorithm that improves fine-tuning performance and reduces network complexity through layer-wise pruning and auto-tuning of layer-wise learning rates. The proposed algorithm achieves state-of-the-art performance on the image retrieval benchmark datasets CUB-200, Cars-196, Stanford Online Products, and In-Shop, demonstrating its effectiveness.
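To illustrate the layer-wise learning rate idea, below is a minimal PyTorch sketch that builds one optimizer parameter group per layer so each layer's rate can be set and adjusted independently. The linear low-to-high scaling rule and the helper name layerwise_param_groups are illustrative assumptions for this sketch, not the paper's exact auto-tuning rule, which adapts the rates during training based on observed weight variations.

```python
import torch
import torchvision

def layerwise_param_groups(model, low_lr=1e-5, high_lr=1e-3):
    """Assign one parameter group per top-level layer, with the learning
    rate interpolated linearly from low_lr (first layer) to high_lr (last).
    The interpolation rule is an assumption for illustration only."""
    layers = list(model.children())
    n = len(layers)
    groups = []
    for i, layer in enumerate(layers):
        params = [p for p in layer.parameters() if p.requires_grad]
        if not params:
            continue  # skip parameter-free layers such as ReLU or pooling
        lr = low_lr + (high_lr - low_lr) * i / max(n - 1, 1)
        groups.append({"params": params, "lr": lr})
    return groups

model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
optimizer = torch.optim.SGD(layerwise_param_groups(model),
                            lr=1e-4, momentum=0.9)

# Each group's rate can later be adjusted independently during training:
# optimizer.param_groups[k]["lr"] = new_lr_for_layer_k
```

Because each layer owns its own parameter group, an auto-tuning loop only needs to rewrite optimizer.param_groups[k]["lr"] between epochs, without rebuilding the optimizer.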
Fig 1. Conceptual overview of the proposed algorithm
Fig 2. (a) Layer-wise weight variations and (b) layer-wise learning rate adaptations by our AutoLR algorithm
Fig 3. The class activation map (CAM) visualization of several layers (1, 4, 8, 14) according to the sorting quality
News
12/02, 2020: Our paper was accepted to AAAI 2021.
Publication
AutoLR: Layer-wise Pruning and Auto-tuning of Learning Rates in Fine-tuning of Deep Networks
Youngmin Ro and Jin Young Choi
35th AAAI Conference on Artificial Intelligence (AAAI), 2021
If you have questions, please contact youngmin4920@gmail.com.