To answer RQ3, we conduct an experiment using two widely used adversarial sample generation techniques, the Fast Gradient Sign Method (FGSM) and Projected Gradient Descent (PGD). We generate 1,000 adversarial samples, with FGSM reducing the model's average accuracy to 18.53% and PGD to 3.8%. We aim to compare the Training Parameter Ratio (TPR) and the Retrained Accuracy (RA) of existing approaches and our proposed method. Specifically, the experiment settings include full training (i.e., the comparison baseline), DeepArc, NeuSemSlice, and a combination of NeuSemSlice with DeepArc under identical training epochs. Additionally, this RQ further assesses the resilience of each method against similar adversarial attacks. We generate another 1,000 adversarial samples after the model re-adaptation and measure the model's accuracy on these samples (denoted Re-Attack Accuracy, RAA).
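For intuition, the two attacks can be sketched as follows. This is a minimal NumPy illustration on a toy logistic-regression model, not the paper's implementation: FGSM takes a single step of size $\epsilon$ along the sign of the input gradient of the loss, while PGD iterates smaller steps and projects back into the $\epsilon$-ball around the original input. All function and variable names here are illustrative.

```python
import numpy as np

def grad_wrt_input(x, y, w, b):
    # Gradient of the binary cross-entropy loss w.r.t. the input x
    # for a logistic model p = sigmoid(w . x + b): dL/dx = (p - y) * w.
    p = 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))
    return (p - y) * w

def fgsm(x, y, w, b, eps):
    # FGSM: one step of size eps along the sign of the input gradient.
    return x + eps * np.sign(grad_wrt_input(x, y, w, b))

def pgd(x, y, w, b, eps, alpha, steps):
    # PGD: iterated sign-gradient steps of size alpha, each followed by
    # projection back into the L-infinity ball of radius eps around x.
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_wrt_input(x_adv, y, w, b))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv
```

For example, with `w = [2, -1]`, `b = 0`, a positive sample `x = [1, 1]`, and `eps = 0.5`, FGSM returns `[0.5, 1.5]`, which lowers the model's logit and hence its confidence in the correct label; PGD stays within the same $\epsilon$-ball but typically degrades accuracy further, consistent with the larger drop reported above.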