ILFO: Adversarial Attack on Adaptive Neural Networks
With the increase in the number of layers and parameters in neural networks, their energy consumption has become a great concern to society, especially to users of handheld or embedded devices. In this paper, we investigate the robustness of neural networks against energy-oriented attacks. Specifically, we propose ILFO (Intermediate Output-Based Loss Function Optimization), an attack against a type of energy-saving neural network, the Adaptive Neural Network (AdNN). An AdNN can dynamically deactivate parts of its model based on the needs of the input to decrease energy consumption. ILFO leverages intermediate outputs as a proxy to infer the relation between an input and its corresponding energy consumption. ILFO has restored up to 100% of the FLOPs (floating-point operations) reduced by AdNNs, with minimal noise added to the input images. To our knowledge, this is the first attempt to attack the energy consumption of a DNN.
Overview of AdNN
AdNNs are a type of neural network that saves energy and time during inference by selectively deactivating certain parts of the model. An AdNN generally uses computing units between two layers or residual blocks to compute intermediate outputs, and decides, based on those outputs, which parts to deactivate.
Types of AdNN
Attacking Early-termination AdNN
SACT computes a halting score for each input position after each layer within a block. A position is called active while its cumulative halting score is less than one, and becomes inactive once the cumulative halting score reaches one. The ponder cost (ρ) of a position is expressed as the number of residual units for which the position is active plus the remainder of the cumulative halting score when it first reaches one. We formulate an attack that increases the ponder cost of each input position, as shown in the figure.
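The ponder-cost rule above can be sketched in a few lines. This is a hypothetical helper, not the authors' implementation; in particular, we take the "remainder" to be one minus the cumulative score accumulated before the final active unit, following the standard adaptive-computation-time definition.

```python
def ponder_cost(halting_scores):
    """Ponder cost for one spatial position: the number of residual
    units for which the position is active (cumulative score < 1)
    plus the remainder of the cumulative halting score when it first
    reaches one. Hypothetical sketch, not the authors' code."""
    cumulative = 0.0
    for n, h in enumerate(halting_scores, start=1):
        cumulative += h
        if cumulative >= 1.0:
            # Remainder: what was still missing before the final
            # unit pushed the cumulative score past one.
            remainder = 1.0 - (cumulative - h)
            return n + remainder
    # Position stayed active through every unit of the block.
    return float(len(halting_scores))
```

An attack that keeps each per-unit halting score small delays the moment the cumulative score reaches one, so more residual units stay active and the ponder cost (and FLOPs count) grows.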
Attacking Conditional-skipping AdNN
SkipNet uses a gating unit before each residual block. If the output of a gate is less than 0.5, the next block is skipped. For the attack formulation, we therefore try to increase each gate output to reach or exceed 0.5.
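One simple loss realizing this idea penalizes every gate output that falls below the skip threshold; minimizing it pushes all gates to at least 0.5 so no block is skipped. This is a minimal sketch of the loss form, assuming access to the gate outputs; the function name and squared-hinge shape are our illustration, not necessarily the paper's exact loss.

```python
import numpy as np

def skip_attack_loss(gate_outputs, threshold=0.5):
    """Squared-hinge penalty on gate outputs below the skip
    threshold. Zero loss means every residual block executes.
    Hypothetical loss form for illustration."""
    g = np.asarray(gate_outputs, dtype=float)
    return float(np.sum(np.maximum(threshold - g, 0.0) ** 2))
```

Gates already at or above 0.5 contribute nothing, so the optimizer spends its perturbation budget only on the blocks that would otherwise be skipped.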
The algorithm has two main parts. The first part (a) formulates the loss function based on the AdNN; the second part (b) uses that loss function in an iterative optimization process to attack the model and generate the attacked image.
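The iterative part (b) can be sketched as a generic penalty-based optimization: descend on a weighted sum of the attack loss and the distortion from the original image, keeping pixels in a valid range. This is a simplified skeleton under our own assumptions (plain gradient descent, squared-L2 distortion, a caller-supplied gradient of the attack loss), not the paper's exact optimization procedure.

```python
import numpy as np

def iterative_attack(image, attack_loss_grad, steps=100, lr=0.01, c=1.0):
    """Minimize  c * attack_loss(x) + ||x - image||^2  by gradient
    descent, clipping pixels to [0, 1] after each step.
    `attack_loss_grad(x)` returns the gradient of the attack loss
    (hypothetical interface for illustration)."""
    x = image.copy()
    for _ in range(steps):
        grad = c * attack_loss_grad(x) + 2.0 * (x - image)
        x = np.clip(x - lr * grad, 0.0, 1.0)
    return x
```

The constant c trades off raising the model's energy consumption against keeping the added noise small, which is why the attacked images in the experiments remain visually close to the originals.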
We use three metrics to evaluate our attack: effectiveness, quality, and efficiency. We used the Imagenet and Cifar10 datasets for evaluation, along with two baseline techniques: (1) Gaussian distribution noise and (2) a basic constraint-based model.
Percentage of FLOPs increased by attacks. (a) Results after attacking SkipNet using Cifar10 images. (b) Results after attacking SACT using Cifar10 images. (c) Results after attacking SkipNet using Imagenet images. (d) Results after attacking SACT using Imagenet images. The results suggest that ILFO performed better than all other techniques on all datasets.
Percentage of AdNN-reduced FLOPs restored by attacks. (a) Results after attacking SkipNet using Imagenet images. (b) Results after attacking SkipNet using Cifar10 images. (c) Results after attacking SACT using Imagenet images. (d) Results after attacking SACT using Cifar10 images. The results suggest that ILFO restored up to 100% of the FLOPs reduced by the AdNNs.
We used three metrics to measure the quality of the generated images: (1) Euclidean distance between pixels, (2) Canny edge detection, and (3) FFT. Figures (a) and (b) show the results of Canny edge detection and Euclidean distance between pixels. The results suggest that ILFO and the constraint-based technique change image quality in a similar way.
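The three quality metrics can be sketched with numpy alone. The Euclidean distance and FFT comparisons below match the standard definitions; the edge metric is a crude gradient-magnitude stand-in for Canny (which needs an image-processing library), so treat it as an approximation of the measurement, not the paper's exact procedure.

```python
import numpy as np

def pixel_l2_distance(img_a, img_b):
    """Euclidean distance between the pixel vectors of two images."""
    diff = img_a.astype(float).ravel() - img_b.astype(float).ravel()
    return float(np.linalg.norm(diff))

def edge_pixel_count(img, threshold=0.2):
    """Crude stand-in for Canny: count pixels whose gradient
    magnitude exceeds a threshold (assumed simplification)."""
    gy, gx = np.gradient(img.astype(float))
    return int(np.count_nonzero(np.hypot(gx, gy) > threshold))

def spectrum_shift(img_a, img_b):
    """FFT-based comparison: L2 distance between the magnitude
    spectra of the two images."""
    fa = np.abs(np.fft.fft2(img_a.astype(float)))
    fb = np.abs(np.fft.fft2(img_b.astype(float)))
    return float(np.linalg.norm(fa - fb))
```

Comparing an original image with its attacked version under all three views checks that the perturbation is small in pixel space and does not introduce spurious structure in edges or frequency content.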
Efficiency was measured by the power consumption of the CPU during the attack process. The results in the left column are for the Cifar10 dataset and those in the right column for the Imagenet dataset. The results suggest that ILFO requires less energy than the constraint-based technique.
Effectiveness Comparison Against More Baselines (Mean Filter, Salt-and-Pepper Noise)
We used a mean filter and salt-and-pepper noise to modify the input images and repeated the experiments. Experiments with Gaussian distribution noise are already shown in the paper. For the mean filter, we used 2×2 filters for Cifar images and 9×9 filters for Imagenet images.
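Both baseline perturbations are standard image operations and can be sketched as follows. These are minimal numpy versions for illustration (the mean filter here returns the "valid" region only, and the noise fraction is a parameter we chose); they are not the exact implementations used in the experiments.

```python
import numpy as np

def salt_and_pepper(img, amount=0.05, seed=None):
    """Set a random fraction `amount` of pixels to 0 or 1
    (salt-and-pepper noise baseline)."""
    rng = np.random.default_rng(seed)
    out = img.astype(float).copy()
    mask = rng.random(img.shape) < amount
    out[mask] = rng.integers(0, 2, size=int(mask.sum())).astype(float)
    return out

def mean_filter(img, k):
    """k x k mean filter over a 2-D image, 'valid' region only
    (e.g. k=2 for Cifar, k=9 for Imagenet in the experiments)."""
    h, w = img.shape
    out = np.empty((h - k + 1, w - k + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = img[i:i + k, j:j + k].mean()
    return out
```

Because neither operation adapts to the AdNN's gating behavior, these baselines perturb all regions of the image uniformly, which is consistent with ILFO's targeted optimization outperforming them in the figure below.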
Percentage of AdNN-reduced FLOPs restored by ILFO and the baseline techniques. (a) Results after attacking SACT using Cifar10 images. (b) Results after attacking SkipNet using Cifar10 images. (c) Results after attacking SkipNet using Imagenet images. (d) Results after attacking SACT using Imagenet images. The results suggest that ILFO restored up to 100% of the FLOPs reduced by the AdNNs and performed significantly better than the other two baseline techniques.