This is a preliminary schedule and is subject to change at any time as the term progresses.
Paper Summaries
Due on January 12 (Tuesday) at 12 pm (noon)
1. SoK: Security and Privacy in Machine Learning. Nicolas Papernot, Patrick McDaniel, Arunesh Sinha, and Michael P. Wellman. (pdf)
Due on January 14 (Thursday) at 12 pm (noon)
1. Explaining and Harnessing Adversarial Examples. Ian Goodfellow, Jonathon Shlens, Christian Szegedy. (pdf) [Presentation by Anmol Chachra]
2. Intriguing Properties of Neural Networks. Christian Szegedy, Wojciech Zaremba, Ilya Sutskever, Joan Bruna, Dumitru Erhan, Ian Goodfellow, Rob Fergus. (pdf) [Presentation by Maxwell Aladago]
Due on January 19 (Tuesday) at 12 pm (noon)
1. The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks. Yuheng Zhang, Ruoxi Jia, Hengzhi Pei, Wenxiao Wang, Bo Li, Dawn Song. (pdf) [Presentation by Danielle Fang]
2. Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures. Matt Fredrikson, Somesh Jha, Thomas Ristenpart. (pdf) [Presentation by Kevin Ge]
Due on January 21 (Thursday) at 12 pm (noon)
Any paper listed under the Model Inversion Attacks topic here (except the papers whose summaries are due on January 19).
Due on January 26 (Tuesday) at 12 pm (noon)
1. Membership Inference Attacks Against Machine Learning Models. Reza Shokri, Marco Stronati, Congzheng Song, Vitaly Shmatikov. (pdf) [Presentation by Kang Gu]
2. Machine Learning with Membership Privacy using Adversarial Regularization. Milad Nasr, Reza Shokri, Amir Houmansadr. (pdf) [Presentation by Jason Linehan]
Due on January 28 (Thursday) at 12 pm (noon)
1. Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning. Milad Nasr, Reza Shokri, and Amir Houmansadr. (pdf) [Presentation by Brody McNutt]
Due on February 2 (Tuesday) at 12 pm (noon)
1. Towards Evaluating the Robustness of Neural Networks. Nicholas Carlini and David Wagner. (pdf) [Presentation by Joshua Ackerman]
2. Distillation as a Defense to Adversarial Perturbations against Deep Neural Networks. Nicolas Papernot, Patrick McDaniel, Xi Wu, Somesh Jha, Ananthram Swami. (pdf) [Presentation by Lessley J. Hernández]
Due on February 4 (Thursday) at 12 pm (noon)
1. Practical Black-Box Attacks against Machine Learning. Nicolas Papernot, Patrick McDaniel, Ian Goodfellow, Somesh Jha, Z. Berkay Celik, Ananthram Swami. (pdf) [Presentation by Julia Martin]
2. Adversarial Examples for Evaluating Reading Comprehension Systems. Robin Jia, Percy Liang. (pdf) [Presentation by Chongyang Gao]
Due on February 11 (Thursday) at 12 pm (noon)
1. Evasion Attacks against Machine Learning at Test Time. Battista Biggio, Igino Corona, Davide Maiorca, Blaine Nelson, Nedim Srndic, Pavel Laskov, Giorgio Giacinto, Fabio Roli. (pdf) [Presentation by Maxwell Aladago]
Due on February 16 (Tuesday) at 12 pm (noon)
1. Manipulating Machine Learning: Poisoning Attacks and Countermeasures for Regression Learning. Matthew Jagielski, Alina Oprea, Battista Biggio, Chang Liu, Cristina Nita-Rotaru, Bo Li. (pdf) [Presentation by Julia Martin]
2. Certified Defenses for Data Poisoning Attacks. Jacob Steinhardt, Pang Wei Koh, Percy Liang. (pdf) [Presentation by Anmol Chachra]
Due on February 18 (Thursday) at 12 pm (noon)
1. High Accuracy and High Fidelity Extraction of Neural Networks. Matthew Jagielski, Nicholas Carlini, David Berthelot, Alex Kurakin, and Nicolas Papernot. (pdf) [Presentation by Joshua Ackerman]
2. Stealing Machine Learning Models via Prediction APIs. Florian Tramèr, Fan Zhang, Ari Juels, Michael K. Reiter, and Thomas Ristenpart. (pdf) [Presentation by Kevin Ge]
Due on February 23 (Tuesday) at 12 pm (noon)
1. Not one but many Tradeoffs: Privacy Vs. Utility in Differentially Private Machine Learning. Benjamin Zi Hao Zhao, Mohamed Ali Kaafar, Nicolas Kourtellis. (pdf) [Presentation by Brody McNutt]
2. Deep Learning with Differential Privacy. Martín Abadi, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. (pdf) [Presentation by Kang Gu]
3. Model Explanations with Differential Privacy. Neel Patel, Reza Shokri, and Yair Zick. (pdf) [Presentation by Jason Linehan]
Due on February 25 (Thursday) at 12 pm (noon)
1. Fair prediction with disparate impact: A study of bias in recidivism prediction instruments. Alexandra Chouldechova. (pdf) [Presentation by Danielle Fang]
2. Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai. (pdf) [Presentation by Chongyang Gao]
3. On the Privacy Risks of Model Explanations. Reza Shokri, Martin Strobel, and Yair Zick. (pdf)