RQ4

Experiment Setup

Model Enhancement Task: In this experiment, we use four adversarial attacks, i.e., DeepFool, C&W, FGSM, and PGD, to generate 1000 adversarial samples. We compare the retraining accuracy and efficiency of our modularized training approach against the traditional retraining approach. Both approaches aim to make the original model perform well on these adversarial samples.
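The sketch below illustrates one way the adversarial samples could be generated with the torchattacks library; the attack hyper-parameters (eps, steps, etc.) are illustrative assumptions, not the exact settings used in the experiments.

```python
# Sketch: generating adversarial samples with the four attacks.
# Hyper-parameters below are illustrative and may differ from the paper's settings.
import torch
import torchattacks

def generate_adversarial_samples(model, loader, device, n_samples=1000):
    """Collect up to n_samples adversarial images and labels per attack."""
    model.eval()
    attacks = {
        "FGSM":     torchattacks.FGSM(model, eps=8 / 255),
        "PGD":      torchattacks.PGD(model, eps=8 / 255, alpha=2 / 255, steps=10),
        "CW":       torchattacks.CW(model, c=1, steps=50, lr=0.01),
        "DeepFool": torchattacks.DeepFool(model, steps=50),
    }
    adv_sets = {name: ([], []) for name in attacks}
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        for name, atk in attacks.items():
            imgs, lbls = adv_sets[name]
            if sum(x.size(0) for x in imgs) >= n_samples:
                continue  # this attack already has enough samples
            imgs.append(atk(images, labels).cpu())
            lbls.append(labels.cpu())
    return {name: (torch.cat(i)[:n_samples], torch.cat(l)[:n_samples])
            for name, (i, l) in adv_sets.items()}
```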

Model Repair Task: In this experiment, given a trained model m, we mutate m by progressively replacing its weights with random values until its training accuracy drops by 10%, 30%, and 50%. For each mutation configuration (i.e., a model architecture, a dataset, and an accuracy-drop rate), we generate three model mutants.
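A minimal sketch of this weight-mutation procedure is shown below; the parameter-selection strategy and the evaluate_train_accuracy helper are assumptions for illustration, not the repository's exact implementation.

```python
# Sketch: progressively corrupting model weights until training accuracy
# drops by a target amount. The selection strategy and the
# evaluate_train_accuracy helper are illustrative assumptions.
import copy
import random
import torch

def mutate_until_drop(model, evaluate_train_accuracy, drop_rate, frac_per_step=0.01):
    """Return a mutant whose training accuracy is at least `drop_rate` lower than the original's."""
    baseline = evaluate_train_accuracy(model)
    mutant = copy.deepcopy(model)
    params = [p for p in mutant.parameters() if p.requires_grad]
    while evaluate_train_accuracy(mutant) > baseline - drop_rate:
        with torch.no_grad():
            p = random.choice(params)
            mask = torch.rand_like(p) < frac_per_step   # pick a small fraction of weights
            p[mask] = torch.randn_like(p)[mask]         # replace them with random values
    return mutant
```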

For each model mutant, we compare DeepArc with traditional full-model retraining in terms of retraining accuracy and efficiency.
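The following is a minimal sketch of the comparison, assuming modularized retraining keeps some layers frozen while traditional retraining updates all layers; the frozen_layers argument is an illustrative stand-in for the modules DeepArc would keep fixed, not the actual DeepArc interface.

```python
# Sketch: modularized retraining (only unfrozen layers are updated)
# vs. traditional retraining (all layers are updated).
import time
import torch

def retrain(model, loader, device, epochs=5, lr=1e-3, frozen_layers=()):
    # Freeze parameters whose names fall under the given (illustrative) module prefixes.
    for name, param in model.named_parameters():
        param.requires_grad = not any(name.startswith(f) for f in frozen_layers)
    optimizer = torch.optim.SGD(
        (p for p in model.parameters() if p.requires_grad), lr=lr, momentum=0.9)
    criterion = torch.nn.CrossEntropyLoss()
    start = time.time()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return time.time() - start  # wall-clock cost, used to compare efficiency

# Traditional retraining:   retrain(model, loader, device)
# Modularized retraining:   retrain(model, loader, device, frozen_layers=("layer1", "layer2"))
```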

Results

Code Demo

https://github.com/hnurxn/Deep-Arc/RQ4