Implementation of Defensive Distillation

Adversarial attacks are a proven security-critical vulnerability for DNN models, so different adversarial defense strategies have been explored to protect against them. One of these is Defensive Distillation, introduced in a 2016 paper (paper link). Defensive distillation masks the model's gradients, which makes gradient-based attacks such as FGSM and IGSM less effective against it.
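To make the idea concrete, here is a minimal sketch of the distillation step, assuming PyTorch and a teacher network that was already trained with a temperature-T softmax. The function name, temperature value, and optimizer settings are illustrative assumptions, not the exact code from my repo:

```python
import torch
import torch.nn.functional as F

def train_distilled_model(teacher, student, loader, T=20.0, epochs=10, lr=1e-3):
    """Defensive distillation sketch: the student is trained on the teacher's
    soft labels, with both softmaxes divided by the same temperature T.
    At test time the student is used with T = 1, which shrinks the gradients
    an attacker can exploit."""
    opt = torch.optim.Adam(student.parameters(), lr=lr)
    teacher.eval()
    for _ in range(epochs):
        for x, _ in loader:
            with torch.no_grad():
                # Soft labels from the teacher, softened by temperature T
                soft_targets = F.softmax(teacher(x) / T, dim=1)
            # Student trained at the same temperature on the soft labels
            log_probs = F.log_softmax(student(x) / T, dim=1)
            loss = F.kl_div(log_probs, soft_targets, reduction="batchmean")
            opt.zero_grad()
            loss.backward()
            opt.step()
    return student
```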

I studied this method as part of a larger project and implemented it; the code can be found in my GitHub repo. The image below gives a glimpse of how the FGSM attack becomes less effective when defensive distillation is applied to the DNN model.
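For reference, a hedged sketch of the FGSM attack used in this kind of evaluation, again assuming PyTorch and inputs scaled to [0, 1]; the function name and epsilon value are illustrative, not the exact implementation in the repo:

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.1):
    """Fast Gradient Sign Method: perturb the input in the direction of the
    sign of the loss gradient with respect to the input."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Gradient masking from distillation tends to shrink these input gradients,
    # which is why FGSM becomes less effective against the distilled model.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0, 1).detach()
```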