Abstract: Inducing and leveraging sparse activations during training and inference is a promising avenue for improving the computational efficiency of deep networks, which is increasingly important as network sizes continue to grow and their application becomes more widespread. Here we use the large-width Gaussian process limit to analyze the behaviour, at random initialization, of nonlinear activations that induce sparsity in the hidden outputs. A previously unreported form of training instability is proven for arguably the two most natural candidates for hidden-layer sparsification: a shifted ReLU (ϕ(x) = max(0, x − τ) for τ ≥ 0) and soft thresholding (ϕ(x) = 0 for |x| ≤ τ and ϕ(x) = x − sign(x)τ for |x| > τ). We show that this instability is overcome by clipping the nonlinear activation magnitude, at a level prescribed by the shape of the associated Gaussian process variance map. Numerical experiments verify the theory and show that the proposed magnitude-clipped sparsifying activations can be trained with training and test fractional sparsity as high as 85% while retaining close to full accuracy.
In plain English: Our main contribution in this paper is the identification of a training instability that arises with natural sparsity-encouraging modifications to common activation functions, and the proposal of a solution to that problem: clipping.
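For concreteness, here is a minimal NumPy sketch of the two clipped sparsifying activations (our own illustration, not the paper's code). The threshold τ and clip level m below are placeholder values; the paper prescribes the clip level from the shape of the Gaussian process variance map.

```python
import numpy as np

def clipped_shifted_relu(x, tau=1.0, m=2.0):
    # Shifted ReLU max(0, x - tau), with its output clipped at magnitude m.
    # tau >= 0 controls sparsity; m is the clip level (illustrative value here).
    return np.minimum(np.maximum(x - tau, 0.0), m)

def clipped_soft_threshold(x, tau=1.0, m=2.0):
    # Soft thresholding sign(x) * max(0, |x| - tau), clipped in magnitude at m.
    return np.sign(x) * np.minimum(np.maximum(np.abs(x) - tau, 0.0), m)

# Fractional activation sparsity under standard Gaussian pre-activations.
z = np.random.default_rng(0).standard_normal(100_000)
print("shifted-ReLU sparsity:", np.mean(clipped_shifted_relu(z) == 0.0))
print("soft-threshold sparsity:", np.mean(clipped_soft_threshold(z) == 0.0))
```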
We have two proofs of concept of our methodology: the first is a very deep feedforward neural network (DNN), 100 layers of width 300, trained to classify the digits in the MNIST dataset; the second is a convolutional neural network (CNN) with 50 layers and 300 channels per layer. The depth of these networks necessitates some care in their initialisation, particularly when the activation functions are sparsity inducing. We handle this with an edge-of-chaos (EoC) analysis, which uses the variance map of the neural network to prescribe how the weights and biases should be initialised.
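To give a flavour of the object the EoC analysis studies, here is a minimal Monte Carlo sketch of the variance map, assuming the standard mean-field setup with weight variance σ_w²/N and bias variance σ_b². The names sigma_w and sigma_b, the sample size, and all numerical values are our own illustrative choices; the paper's analysis works with the map itself rather than this numerical estimate.

```python
import numpy as np

def variance_map(q, phi, sigma_w, sigma_b, n_samples=200_000, seed=0):
    # One step of the large-width variance map,
    #   q_{l+1} = sigma_w^2 * E[ phi(sqrt(q_l) * z)^2 ] + sigma_b^2,  z ~ N(0, 1),
    # estimated here by Monte Carlo.
    z = np.random.default_rng(seed).standard_normal(n_samples)
    return sigma_w**2 * np.mean(phi(np.sqrt(q) * z) ** 2) + sigma_b**2

# Iterate the map to inspect its fixed-point behaviour for a clipped shifted ReLU;
# an EoC analysis then picks (sigma_w, sigma_b) so that forward signal propagation
# through many layers neither explodes nor collapses.
phi = lambda x: np.minimum(np.maximum(x - 1.0, 0.0), 2.0)  # illustrative tau = 1, clip = 2
q = 1.0
for _ in range(30):
    q = variance_map(q, phi, sigma_w=2.0, sigma_b=0.1)
print("approximate fixed point of the variance map:", q)
```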
When we use our methodology of clipping the activations, along with the EoC initialisation, we observe that we can train an activation-sparse DNN just as well as, and with the same accuracy as, the benchmark (non-sparse) DNN, at 85% activation sparsity. That is, 85% of the outputs of the DNN's nodes are zero, with no sacrifice of classification accuracy. We observe similar results for the CNN, where 85% activation sparsity costs only 4% accuracy, and full accuracy is retained at 70% activation sparsity.
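To make the recipe concrete, here is a hedged PyTorch sketch (ours, not the authors' code) of a deep feedforward stack that combines the clipped shifted ReLU with Gaussian initialisation of the kind an EoC analysis prescribes. All numerical values (sigma_w, sigma_b, tau, m) are placeholders rather than the paper's prescribed constants.

```python
import torch
import torch.nn as nn

class ClippedShiftedReLU(nn.Module):
    # min(max(0, x - tau), m): the sparsifying activation with magnitude clipping.
    def __init__(self, tau=1.0, m=2.0):
        super().__init__()
        self.tau, self.m = tau, m

    def forward(self, x):
        return torch.clamp(x - self.tau, min=0.0, max=self.m)

def make_deep_mlp(depth=100, width=300, sigma_w=2.0, sigma_b=0.1, tau=1.0, m=2.0):
    # Deep feedforward stack with N(0, sigma_w^2 / fan_in) weights and
    # N(0, sigma_b^2) biases; the constants here are placeholders for the
    # values an EoC analysis of the clipped activation would prescribe.
    layers = []
    for _ in range(depth):
        linear = nn.Linear(width, width)
        nn.init.normal_(linear.weight, std=sigma_w / width**0.5)
        nn.init.normal_(linear.bias, std=sigma_b)
        layers += [linear, ClippedShiftedReLU(tau, m)]
    return nn.Sequential(*layers)

net = make_deep_mlp()
h = net(torch.randn(8, 300))
print("fraction of zero activations in the output:", (h == 0).float().mean().item())
```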
So why would we want activation sparsity in the first place? Sparse activations can significantly improve the computational efficiency of deep networks during both training and inference by reducing floating point operations (FLOPs) and memory requirements. This is especially useful when computing the forward pass of a very large neural network.
Aren't there simpler ways to increase sparsity? Our method is baked into the training of the networks and comes with its own theoretical underpinnings, so there is no need for the "post-production" step that is common with pruning methods. Furthermore, the training cost of our method is reduced, as the network is already sparse during training. Finally, little to no attention has been given to the activation sparsity of CNNs: prior to our paper, the only way to achieve it was via such "post-production" methods.
TLDR: Our paper provides a theoretical understanding of why certain sparsifying activations fail during deep network training and offers a practical solution (magnitude clipping), thereby enabling the successful training of deep neural networks with high activation sparsity.
Short gallery: