Secure multi-party computation allows a network of mutually distrustful players, each holding a secret input, to run an interactive protocol that evaluates a function on their joint inputs securely, i.e., without revealing anything beyond what the output of the function itself reveals. It enables powerful mechanisms for minimizing the risk of data breaches and makes it possible to remove single points of failure through decentralization. It can be instantiated in highly interactive settings or, alternatively, through the use of homomorphic encryption.
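To give a concrete flavor of the idea, the following is a minimal sketch (in Python; the names `share` and `secure_sum` are illustrative, not taken from any library) of how n parties can jointly compute the sum of their private inputs via additive secret sharing over a prime field. Real protocols add communication, commitments, and active-security machinery on top of this core trick.

```python
import secrets

P = 2**61 - 1  # a large prime; all arithmetic happens in GF(P)

def share(x, n):
    """Split secret x into n random additive shares that sum to x mod P."""
    shares = [secrets.randbelow(P) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % P)
    return shares

def secure_sum(inputs):
    """Each party splits its input into one share per party.
    Party j locally adds the j-th share of every input; publishing
    only these partial sums reveals the total and nothing else."""
    n = len(inputs)
    all_shares = [share(x, n) for x in inputs]
    partials = [sum(all_shares[i][j] for i in range(n)) % P
                for j in range(n)]
    return sum(partials) % P

salaries = [50_000, 62_000, 58_000]
assert secure_sum(salaries) == sum(salaries) % P
```

Any strict subset of the shares of an input is uniformly random, which is why no coalition of fewer than all parties learns anything about an individual input.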
Neural networks are function approximators that can approximate a remarkably broad class of functions. Mathematically, they are a composition of several functions (layers) whose parameters are learned through gradient-based optimization. Neural networks are exposed to many possible attacks, with consequences ranging from poor learning or distorted classification results to breaches of privacy and disclosure of sensitive training data. Research on how to address these vulnerabilities is ongoing, and secure multi-party computation techniques are being applied in some settings.
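As a toy illustration of "a composition of layers learned through gradient-based optimization", the sketch below (assuming NumPy is available; all variable names are illustrative) trains a one-hidden-layer network on the XOR function with plain full-batch gradient descent.

```python
import numpy as np

# XOR: the classic example of a function no single linear layer can fit.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # forward pass: two composed layers
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass (chain rule) for the squared loss 0.5*(out - y)^2
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h**2)
    # gradient-descent parameter updates
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(0)

print(np.round(out.ravel(), 2))  # should approach [0, 1, 1, 0]
```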
The course is intended as an introduction to secure computation and covers both its theoretical foundations and its applications in practical settings. It also introduces neural networks from both theoretical and practical standpoints, and analyzes the security of the functions learned by these networks through the study of different attack techniques and countermeasures.
Secure Communication: TLS Protocol
Secure two-party computation (2PC): Two-party protocols and definitions for passive and active security. Zero-knowledge proofs. Coin tossing. Oblivious transfer. Yao's garbled circuits protocol.
Secure multi-party computation (MPC): Multi-party protocols and definitions for passive and active security. Secret sharing schemes (a small sketch of Shamir's scheme appears after this topic list). MPC with honest majority.
Homomorphic encryption (HE)
Neural Network Basics: Security issues in NNs, introduction to Machine Learning (ML), Gradient-Based Optimization, Overfitting, Regularization and Hyperparameters
Deep Neural Networks (DNN): DNNs as function approximators, Logistic Regression and Gradient Descent, Activation Functions, Backpropagation, Convolutions
Secure DNNs: Adversarial examples, NN poisoning, trojaning, data privacy
Federated Learning: Learning in a distributed way, MPC in federated learning (secure aggregation)
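As a taste of the secret-sharing topic above, here is a minimal sketch of Shamir's (t, n) scheme in Python (function names are illustrative): the secret is the constant term of a random degree-(t-1) polynomial, any t shares recover it by Lagrange interpolation at x = 0, and fewer than t shares reveal nothing. Modular inverses are computed via Fermat's little theorem since P is prime.

```python
import secrets

P = 2**61 - 1  # prime modulus; shares and secrets live in GF(P)

def shamir_share(secret, t, n):
    """Split `secret` into n points on a random degree-(t-1) polynomial."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(t - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def shamir_reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the secret from >= t shares."""
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

shares = shamir_share(42, t=3, n=5)
assert shamir_reconstruct(shares[:3]) == 42   # any 3 shares suffice
assert shamir_reconstruct(shares[1:4]) == 42
```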
The slides for the course can be downloaded from here.
The following books are also suggested as further references:
[1] David Evans, Vladimir Kolesnikov, and Mike Rosulek. A Pragmatic Introduction to Secure Multi-Party Computation. NOW Publishers, 2018 (https://securecomputation.org/).
[2] Carmit Hazay and Yehuda Lindell. Efficient Secure Two-Party Protocols. Springer, 2010.
[3] Ronald Cramer, Ivan Bjerre Damgård, and Jesper Buus Nielsen. Secure Multiparty Computation and Secret Sharing. Cambridge University Press, 2015.
[4] Dan Boneh and Victor Shoup. A Graduate Course in Applied Cryptography. 2024.
[5] Ian Goodfellow, Yoshua Bengio, and Aaron Courville. Deep Learning. MIT Press, 2016. Available online.
[6] Michael Nielsen. Neural Networks and Deep Learning. 2019. Available online.
Lecture times:
Tuesday 16:00 - 19:00 Aula G50, Palazzina G, Viale Regina Elena.
Wednesday 14:00 - 16:00 Aula 1L, Via del Castro Laurenziano.
Classroom (class code: 7t7t74m): https://classroom.google.com/
Exam: oral exam (for both parts) and a project (for the ML part only).
[24/02/2025] The course will start on February 26, 2025.