Course number: CS5331 and CS4331
Time/Location: Mon/Wed/Fri 6:00-6:50 pm, Livermore Center 00101
Class material will be posted here and on Blackboard.
Syllabus for CS5331 Syllabus for CS4331
Piazza for the course is now live. Please post all questions and discussions on Piazza.
Instructor: Tara Salman (tsalman@ttu.edu)
TA: Samin Dehbashi Sani (samin.dehbashi@ttu.edu)
Office hours (Salman): MWF 4:00-5:00 pm, EC211D
TA Office hours: Th/F 1:30-3 pm.
Course objectives: The course is an introduction to adversarial attacks on Machine Learning (ML), focusing on recent advances in the principles of the attacks, their effects, and possible defense strategies. Specific emphasis is placed on attacks against deep learning models, given their prevalence in modern machine learning applications. The course is designed to be practical while covering as much theory as possible. Specific objectives include:
Outline the different categories of adversarial attacks against machine learning models.
Describe common defense approaches against adversarial attacks for improved robustness of machine learning models.
Understand the basics of adversarial privacy attacks and privacy-preserving defense methods.
Content: The topics covered include an introduction to deep learning, evasion attacks against white-box and black-box machine learning models, privacy attacks, poisoning attacks, defense strategies against common adversarial attacks, and robust machine learning models. If time allows, the course will also discuss adversarial attacks in specific applications, such as cybersecurity or medicine.
Key topics
Evasion attacks and defenses,
Privacy attacks and defenses,
Poisoning attacks and defenses.
Learning outcomes
By the end of this course, you will have built adversarial attacks against machine learning and practically applied them to real-world applications. You will be able to see the effect of the attacks and how defenses can protect against them. The techniques you learn in this course apply to various machine learning security problems and serve as the foundation for further study in any application area you pursue.
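By way of illustration (a toy sketch, not part of any assignment), here is a minimal one-step evasion attack in the style of FGSM against a hand-written logistic-regression model, using only NumPy. All names here (the weights, the `fgsm` helper, the sample point) are made up for the example; the course tooling covered later provides full implementations.

```python
import numpy as np

# Toy binary logistic-regression "victim" model: p(y=1|x) = sigmoid(w.x + b)
w = np.array([2.0, -1.5])
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    return sigmoid(x @ w + b)

def input_gradient(x, y):
    # Gradient of the cross-entropy loss w.r.t. the INPUT x (not the weights).
    # For logistic regression this is (p - y) * w.
    return (predict(x) - y) * w

def fgsm(x, y, eps):
    # One-step FGSM: move x in the direction of the sign of the input gradient,
    # which increases the loss and pushes the model away from the true label y.
    return x + eps * np.sign(input_gradient(x, y))

x = np.array([0.5, 0.5])     # a point the model classifies as class 1
y = 1.0
x_adv = fgsm(x, y, eps=0.4)

print(predict(x))            # confident prediction for class 1
print(predict(x_adv))        # confidence collapses after the perturbation
```

The same sign-of-gradient idea underlies the white-box evasion attacks covered in the lectures; deep models simply require backpropagation to obtain the input gradient.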
The course is intended for undergraduate and graduate CS students.
If you are unsure about any of the prerequisites below, please speak to the instructor.
Knowledge Prerequisites:
Substantial Python programming experience, especially for Machine Learning. This is critical for completing the programming assignments. If you do not know Python, or are rusty, you may find the resources below helpful.
Basic Machine Learning Knowledge
It is preferred that students be familiar with at least one of the following machine learning libraries: TensorFlow, Keras, or PyTorch.
Assignments: 35% (submitted via Blackboard): Assignment 0 is 5%; Assignments 1, 2, and 3 are 10% each.
Class contributions: 5%: attendance 1%, knowledge quiz 1%, class participation 3%.
Course project: 40%
Exam: 25% (Friday, Nov 22nd, 6:00-8:00 PM)

Assignments: 75% (submitted via Blackboard): Assignment 0 is 10%; Assignments 1, 2, and 3 are 15% each; Assignment 4 is 20%.
Class contributions: 5%: attendance 1%, knowledge quiz 1%, class participation 3%.
Exam: 25% (Friday, Nov 22nd, 6:00-8:00 PM)
Please check the syllabus for details
Assignment 0 and its ipynb file are now available. It is due on September 20th before the class.
The proposal document and the template file are now available. It is due on October 18th before the class.
Assignment 1 and its ipynb file are now available. It is due on October 14th before the class.
Assignment 2 and its ipynb file are now available. It is due on Nov 4th before the class.
Assignment 3 and its ipynb file are now available. It is due on Dec 9th at noon.
Project evaluation rubric. The project implementation is due on Dec 9th at noon.
Note that you need your TTU account to access the videos.
Lecture 1 (Introduction to the course)
Intro to AML lectures
Lecture 2 (Introduction to AML)
Lecture 3 (Introduction to AML)
Lecture 4 (Introduction to AML)
Lecture 5 (Introduction to AML)
Date: Sep 4th
Reading Resources:
The repository https://github.com/AndrewZhou924/Awesome-model-inversion-attack may be helpful for the inversion attacks covered on Sep 6th.
Deep learning background lectures
Lecture 6 (DL 1)
Lecture 7 (DL 2)
Lecture 8 (DL 3)
Date: Sep 11th
Reading Resources:
Evasion attacks and defenses lectures
Lecture 8 (Evasion 1)
Date: Sep 11th
Reading Resources:
Lecture 9 (Evasion 2)
Date: Sep 13th
Reading Resources:
Lecture 10 (Evasion 3)
Date: Sep 16th
Reading Resources:
Lecture 11 (Evasion on Black-box 1)
Date: Sep 20th
Reading Resources:
Bhagoji et al. (2017) Exploring the Space of Black-box Attacks on Deep Neural Networks
Lecture 12 (Evasion on Black-box 2)
Date: Sep 23rd
Reading Resources:
Brendel, Rauber, and Bethge (2018) Decision-Based Adversarial Attacks: Reliable Attacks Against Black-Box Machine Learning Models
Chen and Jordan (2019) HopSkipJumpAttack: A Query-efficient Decision-based Adversarial Attack
Liu et al. (2017) Delving into Transferable Adversarial Examples and Black-box Attacks
Lecture 13 (Evasion on Black-box 3)
Date: Sep 25th
Reading Resources:
Lecture 14 (Evasion Defenses 1)
Date: Sep 30th
Reading Resources:
Lecture 15 (Evasion Defenses 2)
Date: Oct 2nd
Reading Resources:
Xu (2017) Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks
Carlini (2017) Adversarial Examples Are Not Easily Detected: Bypassing Ten Detection Methods
Papernot (2016) Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks
Samangouei et al. (2018) Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models
Lecture 16 (Evasion Defenses 3)
Date: Oct 4th
Reading Resources:
Madry (2017) Towards Deep Learning Models Resistant to Adversarial Attacks
Tramer (2017) Ensemble Adversarial Training: Attacks and Defenses
Zhang (2019) Theoretically Principled Trade-off between Robustness and Accuracy
Raghunathan (2020) Understanding and Mitigating the Tradeoff Between Robustness and Accuracy
Croce (2021) RobustBench: A Standardized Adversarial Robustness Benchmark
Lecture 17 (Evasion Defenses 4)
Date: Oct 7th
Reading Resources:
Privacy attack and defenses lectures
Lecture 18 (Membership inference attack 1)
Date: Oct 9th
Reading Resources:
October 11th: guest lecture on evasion attacks.
Lecture 19 (Membership inference attack 2 and feature inference attacks)
Date: Oct 14th
Reading Resources:
Lecture 20 (XAI privacy leakage and model inversion attacks)
Date: Oct 16th
Reading Resources:
Lecture 21 (Privacy Defenses)
Lecture 22 (Privacy Defenses 2)
Lecture 23 (Privacy Defenses 3)
Date: Oct 25th
Reading Resources:
Poisoning attacks and defenses
Lecture 24 (Poisoning attacks)
Lecture 25 (Poisoning attacks 2)
Date: Nov 1st
Reading Resources:
Lecture 26 (Poisoning attacks 3)
Date: Nov 4th
Reading Resources:
Lecture 27 (Poisoning attacks 4)
Date: Nov 6th
Reading Resources:
Lecture 28 (Poisoning defenses)
Date: Nov 6th
Reading Resources:
Lecture 29 (Poisoning defenses 2)
Date: Nov 6th
Reading Resources:
There is no required textbook. All papers will be made available per lecture above.
For attacks and defenses, we will use the Adversarial Robustness Toolbox (ART) library. Other libraries include cleverhans and scratchai (not used in the class, but they could be used in projects).
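To give a feel for the poisoning topic without requiring any of those libraries, here is a self-contained NumPy toy (everything in it is invented for illustration): flipping training labels shifts the model a victim learns. The sketch trains a nearest-centroid classifier on clean data, then on data where a chunk of one class's labels has been flipped, and compares accuracy on the clean labels.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two Gaussian clusters: class 0 around (-2, 0), class 1 around (+2, 0)
X = np.vstack([rng.normal([-2.0, 0.0], 1.0, (100, 2)),
               rng.normal([+2.0, 0.0], 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train_centroids(X, y):
    # "Training" is just computing the per-class mean.
    return X[y == 0].mean(axis=0), X[y == 1].mean(axis=0)

def accuracy(c0, c1, X, y):
    # Predict the class whose centroid is nearer.
    d0 = np.linalg.norm(X - c0, axis=1)
    d1 = np.linalg.norm(X - c1, axis=1)
    return np.mean((d1 < d0).astype(int) == y)

# Clean training
c0, c1 = train_centroids(X, y)
acc_clean = accuracy(c0, c1, X, y)

# Label-flipping poisoning: relabel 70 random class-1 points as class 0.
y_poison = y.copy()
flip = rng.choice(np.arange(100, 200), size=70, replace=False)
y_poison[flip] = 0

p0, p1 = train_centroids(X, y_poison)
acc_poisoned = accuracy(p0, p1, X, y)

print("clean accuracy:", acc_clean)
print("poisoned accuracy:", acc_poisoned)   # lower: the class-0 centroid was dragged toward class 1
```

ART's poisoning modules implement far more subtle attacks (and defenses) against deep models, but the mechanism is the same: corrupted training data moves the learned decision boundary.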
AML Tutorial – by Bo Li, Dawn Song, and Yevgeniy Vorobeychik
Nicholas Carlini website
The official Python tutorial is quite comprehensive. There is also a useful glossary.
You might also find Google Python Class interesting.
Any Python for machine learning online crash course.