SPSL: Secure and Private Systems for Machine Learning

ISCA Workshop, June 18th, 2021

Machine learning (ML) has found applications throughout our daily lives, from health care and smart homes to autonomous vehicles and personal assistants. While the performance and convergence of ML have been studied extensively from the beginning, its security and privacy have drawn serious attention only in recent years, driven in part by new official regulations in this domain. Although various approaches to ML security and privacy have recently been proposed and explored, existing solutions with strong theoretical protection guarantees incur high overhead, which limits their applicability. Practical ML systems with strong security and privacy needs will require more system-level research to reduce overhead and provide holistic protection. Because ML security and privacy research focusing on system aspects is still a new frontier, the goal of this workshop is to explore practical system design and implementation solutions for security and privacy.


There will be a panel discussion; you can submit your questions here.


Schedule (all times in Eastern Time)

Keynote Speakers

Vitaly Shmatikov (Cornell University), Title: Breaking and Salvaging Federated Learning

Vitaly Shmatikov is a Professor of Computer Science at Cornell University and Cornell Tech. Before joining Cornell, he worked at the University of Texas at Austin and SRI International. He received the Caspar Bowden PET Award for Outstanding Research in Privacy Enhancing Technologies three times, in 2008, 2014, and 2018, and was a runner-up in 2013. Dr. Shmatikov’s research group won the Test-of-Time Awards from the IEEE Symposium on Security and Privacy and the ACM Conference on Computer and Communications Security (CCS), multiple Best Practical Paper and Best Student Paper Awards from IEEE S&P and NDSS, and the NYU-Poly AT&T Best Applied Security Paper Award. Dr. Shmatikov earned his PhD in computer science and MS in engineering-economic systems from Stanford University.

Kristin Lauter (Facebook), Title: Private AI: Machine Learning on Encrypted Data

Kristin Lauter is currently the West Coast Head of Research Science at Facebook AI Research, leading groups in Core Machine Learning, Computer Vision, Robotics, Natural Language Processing, and other areas. Prior to joining Facebook, she was at Microsoft, where she worked on developing new cryptographic systems, post-quantum cryptography, and cryptanalysis of existing cryptographic systems. With colleagues at Microsoft, she developed a cryptographic hash function based on supersingular isogeny graphs, which was presented at the NIST hash function competition. Prior to joining Microsoft, she held positions as a visiting scholar at the Max-Planck-Institut für Mathematik in Bonn, Germany, as the T.H. Hildebrandt Research Assistant Professor at the University of Michigan, and as a visiting researcher at the Institut de Mathématiques de Luminy in France. Dr. Lauter earned her MS and PhD in mathematics from the University of Chicago.


Invited Speakers

Nicholas Carlini (Google Brain), Title: Large Underspecified Models: Less Secure, Less Private

Nicholas Carlini is a research scientist at Google Brain. He studies the security and privacy of machine learning, for which he has received best paper awards at ICML and IEEE S&P. He obtained his Ph.D. from the University of California, Berkeley in 2018.

Abstract of the Talk: Driven by recent advances in hardware acceleration, it is now possible to train ever larger neural network models on ever larger datasets. Because supervised datasets are often comparatively small, making use of these vast compute resources requires training on underspecified objectives: objectives where the model itself defines its own objective function.

In this talk, he argues that while training underspecified models at scale may benefit accuracy, it comes at a cost to security and privacy. Compared to their supervised counterparts, large underspecified models are less adversarially robust, less stable, and less private. Addressing these challenges will require new solutions.

Ahmad-Reza Sadeghi (TU Darmstadt), Title: Enclaved AI: AI Security & Privacy with Enclave Computing

Ahmad-Reza Sadeghi is a professor of Computer Science at TU Darmstadt, Germany, where he heads the Systems Security Lab at the Cybersecurity Research Center. Since 2012 he has also led three Intel Collaborative Research Centers: on Secure Mobile and Embedded Computing, on Trustworthy Autonomous Systems, and, since 2020, on Private AI.

Prof. Sadeghi holds a Ph.D. in Computer Science and M.Sc. degrees in Electrical Engineering and Industrial Engineering. Prior to academia, he worked in R&D at telecommunications companies, including Ericsson. For his influential research on trusted and trustworthy computing, he received the renowned German "Karl Heinz Beckurts" award, which honors excellent scientific achievements with high impact on industrial innovation in Germany. In 2018, Prof. Sadeghi received the ACM SIGSAC Outstanding Contributions Award for dedicated research, education, and management leadership in the security community and for pioneering contributions in content protection, mobile security, and hardware-assisted security.

Jakub Szefer (Yale University), Title: Machine Learning Security on Cloud FPGAs

Jakub Szefer’s research focuses on computer architecture and hardware security, encompassing secure processor architectures, cloud security, FPGA attacks and defenses, and FPGA implementations of cryptographic algorithms. He is currently an Associate Professor of Electrical Engineering at Yale University, where he leads the Computer Architecture and Security Laboratory (CASLAB). Prior to joining Yale, he received Ph.D. and M.A. degrees in Electrical Engineering from Princeton University and a B.S. degree with highest honors in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign. He received the NSF CAREER award in 2017, and he is the author of the first book focusing on processor architecture security, “Principles of Secure Processor Architecture Design”. Recently, he received the 2021 Ackerman Award for Teaching and Mentoring from Yale’s School of Engineering and Applied Science.

Abstract of the Talk: In this talk, we will show how, in a multi-tenant FPGA setting, adversaries can steal machine learning inputs or architecture details through voltage-based channels. The talk will cover our recent work on analyzing the security of custom-built neural network accelerators, as well as attacks on off-the-shelf, processor-like accelerators, both realized on commercial cloud-based FPGAs.

If you have any questions for the invited speakers, you can enter them here, and we will address them during the panel discussion.

Call For Papers

We invite research papers and practitioner reports to address the challenges associated with secure and private systems for machine learning (SPSL).

The broader context of this workshop is the growing need to protect data and model information in machine learning (ML) systems in the presence of malicious entities in the compute and communication fabric. This inaugural SPSL workshop, held in conjunction with ISCA 2021, hopes to bring together researchers designing computer systems for machine learning and experts in security and privacy, fostering collaboration and providing a common space to exchange ideas.


There are no published proceedings associated with this workshop; hence, per IEEE/ACM rules, a workshop submission may be concurrently submitted to a conference.


Topics of interest include, but are not limited to, the following areas, with particular emphasis on system design and implementation aspects:

  • Systems for privacy-preserving training and/or inference

  • Systems for robustness of ML computations

  • Secure hardware (such as trusted execution environment) for secure and private ML

  • System support and acceleration for algorithmic solutions such as secure multi-party computation (MPC), homomorphic encryption (HE), federated learning, and differential privacy

  • Hardware-algorithm co-design for ML security and/or privacy

  • System-level challenges in applied cryptography for ML

  • New attacks and defenses on ML systems, including side-channel attacks

  • Study and evaluation of practical ML system security and privacy

Submission Instructions

  • Submissions are restricted to 4 pages of main content, with no more than 2 additional pages for references and appendices.

  • All submissions must be in PDF format and should follow the ISCA'21 LaTeX Template.

  • Please follow the guidelines provided at ISCA 2021 Paper Submission Guidelines.

  • Submissions should be anonymized for double-blind review.

  • Please submit your paper no later than April 30th, midnight Anywhere on Earth (AoE).

Important Dates

  • Abstract submission deadline: April 23rd, 2021, midnight Anywhere on Earth (no abstract submission is required)

  • Full paper submission deadline: April 30th, 2021, midnight Anywhere on Earth

  • Author Notification: June 1st, 2021

  • Workshop Date: June 18th, 2021 - 10:45 AM to 4:45 PM (ET)

Organizing Committee

  • Murali Annavaram (University of Southern California)

  • Edward Suh (Cornell University, Facebook AI Research)

  • Wenjie Xiong (Facebook AI Research)

  • Hanieh Hashemi (University of Southern California)

Program Committee

  • Moinuddin Qureshi (Georgia Tech)

  • Tim Sherwood (UC Santa Barbara)

  • Nael Abu-Ghazaleh (UC Riverside)

  • Russell Tessier (University of Massachusetts Amherst)

  • Christopher Fletcher (University of Illinois at Urbana-Champaign)

  • Chuan Guo (Facebook AI Research)

  • Daniel Holcomb (University of Massachusetts Amherst)

  • Mark Silberstein (Technion)

  • Jakub Szefer (Yale University)

  • Nishanth Chandran (Microsoft Research, India)

  • Brandon Reagen (New York University)

  • Byoungyoung Lee (Seoul National University)

  • Mengjia Yan (Massachusetts Institute of Technology)

  • Rosario Cammarota (Intel Labs)

  • Ruoxi Jia (Virginia Tech)

  • Aydin Aysu (North Carolina State University)

  • Nader Sehatbakhsh (University of California, Los Angeles)

  • Fan Yao (University of Central Florida)

  • Hoda Naghibijouybari (Binghamton University)

  • Sadegh Riazi (UC San Diego)

  • Simha Sethumadhavan (Columbia University)

  • Mohit Tiwari (UT Austin)

  • Houman Homayoun (UC Davis)

If you have any questions, please contact us at spsl.workshop.chair@gmail.com