Our two Deep Learning papers have been accepted!!!

Post date: Sep 14, 2020 2:18:52 AM

We are very happy to have two DEEP LEARNING papers accepted at the ICTC 2020 conference:

The 11th International Conference on ICT Convergence

October 21-23, 2020 / Ramada Plaza Hotel, Jeju Island, Korea

These two papers are the first "Deep Learning" papers published by our lab.

We hope these two papers will be good seeds for much higher-quality future papers on Deep Learning research.

[Paper 1]=================================

Dear Mr. Jaemin Jeong

Congratulations! - your paper #1570655882 ("Filter Combination Learning for Convolutional Neural Network") for ICTC 2020 has been accepted.

Abstract—In this paper, we propose a method for representing the convolution filters of a Convolutional Neural Network (CNN) model as linear combinations of a small number of basis filters that are provided as input features. In our approach, the combination coefficients are searched (trained) with the given input basis filters (IBFs) to best generate the convolution filter parameters. Since all the convolution filters are generated by linear combinations of the IBFs, the size of a CNN model can be compressed if the number of coefficients for the linear combinations is less than the number of filter parameters. In our experiments, widely used deep learning models such as VGG-16 and ResNet-18 are utilized. The number of learnable parameters in those models is reduced by about 70%, with an observed accuracy drop of 1-2%. Note that the aim of our approach is not to reduce the number of parameters or the involved arithmetic operations. The primary goal of our work is to investigate the possibility of expressing filters by linear combinations of a small set of IBFs. The second goal is to compress a model by these linear combinations, which is beneficial when the model needs to be distributed and stored (particularly when downloaded to mobile devices over Wi-Fi). The code for the experiments is available at https://github.com/jjeamin/Filter Generation Network.
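
For readers curious how filter combination learning might look in code, here is a minimal PyTorch sketch (our illustration, not the paper's actual implementation; the class name, the random fixed bases, and the layer shapes are assumptions). Each filter slice is generated as a linear combination of a few shared basis filters, and only the combination coefficients are trained:

import torch
import torch.nn as nn
import torch.nn.functional as F

class FilterCombinationConv2d(nn.Module):
    # Illustrative layer: filters are linear combinations of fixed basis filters (IBFs).
    def __init__(self, in_ch, out_ch, kernel_size=3, num_bases=4, stride=1, padding=1):
        super().__init__()
        # Fixed basis filters; random here for illustration (the paper
        # provides the IBFs as input features).
        self.register_buffer("bases",
                             torch.randn(num_bases, kernel_size, kernel_size))
        # Learnable combination coefficients: num_bases per (out, in) filter slice.
        # Storage shrinks when num_bases < kernel_size * kernel_size.
        self.coeffs = nn.Parameter(torch.randn(out_ch, in_ch, num_bases) * 0.1)
        self.stride, self.padding = stride, padding

    def forward(self, x):
        # Generate the full filter bank: weight[o, i] = sum_b coeffs[o, i, b] * bases[b]
        weight = torch.einsum("oib,bkl->oikl", self.coeffs, self.bases)
        return F.conv2d(x, weight, stride=self.stride, padding=self.padding)

With num_bases=4, each 3x3 filter slice is stored as 4 coefficients instead of 9 weights (plus the small shared bases), which is the kind of distribution/storage saving the abstract describes.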

[Paper 2]=================================

Dear Ms. Yunhee Woo

Congratulations! - your paper #1570668577 ("Zero-Keep Filter Pruning for Energy Efficient Deep Neural Network") for ICTC 2020 has been accepted.

Abstract—Recent deep learning models succeed in achieving high accuracy and fast inference time, but they require high-performance computing resources because of their large number of parameters. However, not all systems have high-performance hardware. Sometimes a deep learning model needs to run on edge devices such as IoT devices or smartphones, which have limited performance and energy budgets, so the amount of computation must be reduced. Pruning is one of the well-known approaches to this problem. In this work, we propose “zero-keep filter pruning” for an energy-efficient deep neural network. The proposed method maximizes the number of zero elements in the filters by replacing small values with zero, and prunes the filters that have the lowest number of zeros; in the conventional approach, the filters that have the highest number of zeros are generally pruned. As a result, through this zero-keep filter pruning, the model retains filters that contain many zeros. We compared the results of the proposed method with random filter pruning and showed that our method performs better, leaving far fewer non-zero elements with only a marginal accuracy drop. We also compare the number of remaining filters after pruning for the random and proposed methods. Finally, we discuss a possible multiplier architecture, a zero-skip multiplier circuit, which skips multiplications by zero to accelerate computation and reduce energy consumption.
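
The core idea of zero-keep pruning fits in a few lines. Below is a minimal sketch (again PyTorch; the function name, threshold, and pruning ratio are assumed values, not the paper's): small-magnitude weights are replaced with zero, and then the filters with the fewest zeros are pruned, so the surviving filters are the zero-rich ones that a zero-skip multiplier could exploit:

import torch

def zero_keep_prune(weight, zero_threshold=1e-2, prune_ratio=0.5):
    # weight: (out_channels, in_channels, k, k) convolution filter bank.
    # Step 1: maximize the number of zeros by zeroing small-magnitude values.
    w = torch.where(weight.abs() < zero_threshold,
                    torch.zeros_like(weight), weight)
    # Step 2: count the zero elements in each output filter.
    zeros_per_filter = (w == 0).flatten(1).sum(dim=1)
    # Step 3: prune the filters with the FEWEST zeros (the reverse of the
    # conventional "prune the most-zero filters" criterion).
    num_keep = int(w.size(0) * (1 - prune_ratio))
    keep = torch.argsort(zeros_per_filter, descending=True)[:num_keep]
    return w[keep.sort().values], keep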

=================================