Robust Malware Detection Challenge
Robust Malware Detection Challenge Summary Video
The bulk of adversarial machine learning research has focused on crafting attacks and defenses for image classification. In this challenge, we consider adversarial machine learning in the context of robust malware detection. In the era of modern cyber warfare, cyber adversaries craft adversarial perturbations to malicious code to evade malware detectors. Crafting adversarial examples in the malware classification setting is more challenging than in image classification: malware adversarial examples must not only fool the classifier; their perturbations must also not alter the malicious payload. The gist of this challenge is to defend against adversarial attacks by building robust detectors and/or to attack robust malware detectors, based on binary indicators of the imported functions used by the malware. The challenge has two tracks:
- Defense Track: Build high-accuracy deep models that are robust to adversarial attacks.
- Defense Track Winners: Laurens Bliek, Christian Hammerschmidt, Azqa Nadeem, Sicco Verwer
- Defense Track Winners: Changming Xu, Ananditha Raghunath, Steven Jorgensen, Karla Mejia
- Attack Track: Craft adversarial malware that evades detection on adversarially trained models.
- Attack Track Winners: Laurens Bliek, Christian Hammerschmidt, Azqa Nadeem, Sicco Verwer
- Detailed descriptions: https://github.com/ALFA-group/malware_challenge/blob/master/docs/challenge.pdf
- Toolkit for both tracks can be found at https://github.com/ALFA-group/malware_challenge
- Challenge Submission Deadline: 15 July 2019
The winners of the challenge will be announced at the workshop and will receive cash prizes sponsored by the MIT-IBM Watson AI Lab. The winners will also be invited to submit a technical report/poster about their techniques.
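To make the attack setting concrete: with binary indicators of imported functions, a payload-preserving perturbation can only add imports (flip a feature from 0 to 1), since removing an import could break the malicious functionality. The sketch below illustrates this constraint with a toy linear detector and a greedy feature-addition attack; the weights, feature layout, and function names are illustrative assumptions, not part of the challenge toolkit.

```python
# Hypothetical sketch: evading a linear malware detector over binary
# import-feature vectors. Feature j = 1 iff the binary imports function j.
# Payload-preserving perturbations may only ADD imports (flip 0 -> 1).
# All weights below are toy values for illustration only.
import math

def score(weights, bias, x):
    """Probability the detector assigns to 'malicious' (logistic model)."""
    z = bias + sum(w * xi for w, xi in zip(weights, x))
    return 1.0 / (1.0 + math.exp(-z))

def greedy_evasion(weights, bias, x, budget, threshold=0.5):
    """Greedily add benign-looking imports (most negative weights first)
    until the detector's score drops below threshold or the budget runs out."""
    x = list(x)
    # Candidate flips: absent features whose weights lower the score, best first.
    candidates = sorted(
        (j for j, xj in enumerate(x) if xj == 0 and weights[j] < 0),
        key=lambda j: weights[j],
    )
    for j in candidates[:budget]:
        x[j] = 1  # add import j; the malicious payload is untouched
        if score(weights, bias, x) < threshold:
            break
    return x

# Toy detector: positive weights = suspicious imports, negative = benign-looking.
weights = [2.0, 1.5, -1.0, -1.5, -2.0]
bias = -0.5
malware = [1, 1, 0, 0, 0]            # imports only the suspicious functions
adv = greedy_evasion(weights, bias, malware, budget=3)
print(score(weights, bias, malware) > 0.5)  # originally flagged as malicious
print(score(weights, bias, adv) < 0.5)      # evades after adding imports
```

The defense track amounts to training models for which such feature-addition attacks fail; adversarial training over these constrained perturbations is one approach, as described in the challenge documentation linked above.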