ML Model Stealing
Machine learning models may be considered confidential due to their sensitive training data, commercial value, or use in security applications. Often, these confidential ML models are deployed with publicly accessible query interfaces. For example, ML-as-a-service allows users to train models on potentially sensitive data and charge others for access on a pay-per-query basis.
Model extraction attacks have been developed to “steal” these publicly accessible ML models. In such attacks, an adversary with black-box query access, but no prior knowledge of the model’s parameters or training data, can near-perfectly extract target models from popular model classes, including logistic regression, neural networks, and decision trees.
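As a rough illustration of the idea, the sketch below simulates a black-box extraction attack: the attacker only calls a query interface (here a hypothetical `target_predict` function wrapping a simulated victim model), labels self-generated inputs with its responses, and trains a substitute model on those query/response pairs. All names, models, and data are illustrative, not part of the project.

```python
# Minimal sketch of a black-box model extraction attack. The attacker only
# calls `target_predict` (a hypothetical query interface) and never sees the
# victim's training data or parameters. Everything here is illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# --- "Victim" side: a confidential model behind a query API (simulated) ---
X_private, y_private = make_classification(n_samples=2000, n_features=10, random_state=0)
victim = LogisticRegression(max_iter=1000).fit(X_private, y_private)

def target_predict(x):
    """Black-box query interface: returns only predicted labels."""
    return victim.predict(x)

# --- Attacker side: no access to X_private, y_private, or victim's weights ---
# 1. Craft query inputs (here: random points in a plausible feature range).
X_query = rng.normal(size=(2000, 10))
# 2. Label them by querying the deployed model.
y_query = target_predict(X_query)
# 3. Train a substitute model on the query/response pairs.
substitute = LogisticRegression(max_iter=1000).fit(X_query, y_query)

# Measure how closely the substitute agrees with the victim on fresh inputs.
X_test = rng.normal(size=(1000, 10))
agreement = (substitute.predict(X_test) == target_predict(X_test)).mean()
print(f"Label agreement with the target model: {agreement:.2%}")
```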
Because developing an ML model can involve expensive data collection and training procedures, techniques for proving original model ownership offer one way to mitigate model stealing.
To learn more, check out the full project proposal
Machine Unlearning
Welcome to the frontier of ML research! Our cutting-edge team is dedicated to pioneering Machine Unlearning techniques. As technology evolves, so do the threats it poses, and our mission is to stay one step ahead. By improving the techniques that allow models to unlearn data, we can enable the removal of biases from models and return privacy and data sovereignty to individuals.
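As a point of reference, the sketch below shows the trivial but expensive baseline for unlearning: retraining the model from scratch on the dataset with the forgotten records removed. The data, model choice, and indices are illustrative assumptions; the research interest lies in approximating this result without paying for full retraining.

```python
# Minimal sketch of "exact" unlearning as a baseline: retrain the model from
# scratch without the records to be forgotten. All names and data here are
# illustrative; practical unlearning methods aim to approximate this result
# far more cheaply.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import SGDClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Original model trained on the full dataset.
model_full = SGDClassifier(random_state=0).fit(X, y)

# A user requests deletion of their records (indices are illustrative).
forget_idx = np.arange(0, 50)
keep_idx = np.setdiff1d(np.arange(len(X)), forget_idx)

# Exact unlearning baseline: retrain on the retained data only.
model_unlearned = SGDClassifier(random_state=0).fit(X[keep_idx], y[keep_idx])

# The retrained model is, by construction, independent of the forgotten rows;
# approximate unlearning methods try to reach this state without retraining.
disagreement = (model_full.predict(X[forget_idx]) !=
                model_unlearned.predict(X[forget_idx])).mean()
print(f"Prediction change on forgotten points: {disagreement:.2%}")
```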
To learn more, check out the full project proposal
Signature Forgery Detection
Project summary: an introduction to image recognition using deep learning with convolutional neural networks (CNNs), followed by applying that knowledge to build an application that uses CNNs and/or their derivative models to detect potential signature forgeries.
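As a hedged starting point, the sketch below defines a small CNN in PyTorch that maps a fixed-size grayscale signature image to a single genuine-versus-forged logit. The architecture, the 105x105 input size, and the binary labelling are illustrative assumptions, not prescribed by the proposal.

```python
# Minimal sketch of a CNN that could serve as a starting point for signature
# forgery detection, assuming fixed-size grayscale signature crops and a
# binary genuine/forged label. Layer sizes are illustrative choices.
import torch
import torch.nn as nn

class SignatureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # one logit: genuine vs. forged

    def forward(self, x):
        x = self.features(x)       # (N, 64, 1, 1)
        x = torch.flatten(x, 1)    # (N, 64)
        return self.classifier(x)  # (N, 1) raw logits

# Smoke test on a batch of fake 105x105 grayscale "signatures".
model = SignatureCNN()
fake_batch = torch.randn(8, 1, 105, 105)
logits = model(fake_batch)
probs = torch.sigmoid(logits)      # probability each signature is a forgery
print(probs.shape)                 # torch.Size([8, 1])
```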
To learn more, check out the full project proposal