The Effectiveness of Membership Inference Attacks on Quantized Machine Learning Models


Authors: Charles Kowalski, Azadeh Famili, and Dr. Yingjie Lao

Faculty Mentor: Dr. Yingjie Lao

College: College of Engineering, Computing, and Applied Sciences

ABSTRACT

Advances in artificial intelligence have propelled machine learning models into widespread use. Their ability to process information through neuron-like interactions and draw generally accurate conclusions has made them invaluable in many industries, especially the sales and medical fields. However, machine learning requires intensive computational infrastructure, which has largely confined its deployment to enterprise settings. Neural network compression is one proposed method for enabling deployment on edge devices, allowing portable devices and consumer electronics to tap into the possibilities of machine learning.
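
One common compression technique is quantization, in which a model's weights are stored at reduced numeric precision. The abstract does not name a specific compression method, framework, or architecture, so the following is only a minimal illustrative sketch using PyTorch's post-training dynamic quantization on a toy model; none of these choices should be read as the networks actually studied here.

```python
import torch
import torch.nn as nn

# A small full-precision (float32) model standing in for the networks
# studied in this work (the real architecture is not given in the abstract).
model_fp32 = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Linear(128, 10),
)

# Post-training dynamic quantization: the weights of the Linear layers are
# converted to 8-bit integers, shrinking the model for edge deployment.
model_int8 = torch.quantization.quantize_dynamic(
    model_fp32, {nn.Linear}, dtype=torch.qint8
)

# Both models expose the same inference interface.
x = torch.randn(1, 64)
print(model_fp32(x).shape, model_int8(x).shape)
```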


However useful, the adoption of machine learning methods presently comes with associated risks. Machine learning models can inadvertently reveal information about the data on which they were trained. This danger is especially acute in applications that involve private, personal data, such as the healthcare industry. If machine learning is to see wider use, the methods by which models are made accessible must be evaluated and revised to defend against such data leaks. This research evaluates a membership inference attack against both a full-precision network and a compressed network.
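
In a membership inference attack, an adversary attempts to decide whether a particular record was part of a model's training set. The abstract does not describe which attack was used, so below is a hedged sketch of one common variant, confidence thresholding, which exploits the tendency of models to output higher confidence on examples they were trained on. The threshold and confidence values are illustrative assumptions, not results from this study.

```python
import numpy as np

def confidence_attack(confidences, threshold=0.9):
    """Predict 'member' when the target model's top-class confidence
    exceeds a threshold; models are often more confident on training data."""
    return confidences >= threshold

# Hypothetical confidence scores: members (training records) tend to
# receive higher confidence than non-members (unseen records).
member_conf = np.array([0.99, 0.97, 0.95, 0.88])
nonmember_conf = np.array([0.91, 0.80, 0.70, 0.60])

preds = confidence_attack(np.concatenate([member_conf, nonmember_conf]))
labels = np.array([1] * 4 + [0] * 4)  # 1 = member, 0 = non-member
print(f"attack accuracy: {(preds == labels).mean():.2f}")
```

Comparing the accuracy of such an attack on the full-precision model against the quantized model is the kind of evaluation this research performs.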

Video Introduction: Charles Kowalski, 2021 Undergraduate Poster Forum