In this section, we further explore the repair performance of ArchRepair by applying it at different architecture levels, i.e., layer level and block level, and comparing the accuracy of the repaired models. The results show that models repaired at the block level achieve the highest accuracy, demonstrating that ArchRepair takes full advantage of block-level repairing.
In this experiment, we repair 4 different DNN models (i.e., ResNet-18, ResNet-50, ResNet-101, and DenseNet-121) at 2 architecture levels (i.e., layer level and block level). Note that in the Empirical Study we divide the architecture into four levels (neuron level, layer level, block level, and network level); since ArchRepair cannot be applied at the neuron level or the network level, we only evaluate it at the other two:
Layer-level: ArchRepair first identifies the most vulnerable layer and repairs it, then replaces the original layer with the repaired one.
Block-level: ArchRepair first identifies the most vulnerable block and repairs it, then replaces the original block with the repaired block (a minimal sketch of this replacement step is given below).
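As an illustration of the replacement step only (a minimal PyTorch sketch under our own assumptions, not ArchRepair's actual implementation), the located layer or block can be swapped by replacing the corresponding sub-module. The helper `replace_submodule`, the choice of `layer3` as the vulnerable block, and the stand-in "repaired" block are all hypothetical:

```python
import torch
import torchvision.models as models

def replace_submodule(model: torch.nn.Module, target: str, new_module: torch.nn.Module) -> None:
    """Replace the sub-module named `target` (dot-separated path) with `new_module`."""
    parent_path, _, child_name = target.rpartition(".")
    parent = model.get_submodule(parent_path) if parent_path else model
    setattr(parent, child_name, new_module)

# Example: block-level replacement on ResNet-18.
model = models.resnet18(num_classes=10)

# `vulnerable_block` is an assumed output of the localization step; here a
# freshly initialized `layer3` serves as a stand-in for the repaired block.
vulnerable_block = "layer3"
repaired_block = models.resnet18(num_classes=10).layer3

replace_submodule(model, vulnerable_block, repaired_block)

# Layer-level repair would instead target a single layer, e.g. "layer3.0.conv1".
```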
We repair and evaluate on the original dataset (which dataset?), and the results are displayed in the following table.
According to the table, repairing at the block level is the better choice: the accuracy of ResNet-18 is improved by about 3.55% compared with layer-level repairing, and by about 4.5% for ResNet-50 and ResNet-101, indicating that block-level repairing is more effective.
Besides, we also compare with the results in the Empirical Study. We notice that although layer-level repairing of ResNet-18 by adjusting the architecture does not perform better than repairing by adjusting the weights, the accuracies of the models repaired by block-level ArchRepair are all higher than those of the models repaired by adjusting the weights, indicating that adjusting the architecture is more effective than adjusting the weights alone, especially for block-level repairing.