Security & Privacy in Machine Learning & Deep Learning
Machine learning and deep learning models can inadvertently expose sensitive information contained in their training data. I am currently involved in research aimed at developing new algorithms that enhance data and model security by integrating diverse security technologies into AI models. In particular, my present research centers on two key technologies:
Homomorphic encryption, which safeguards data privacy during training by allowing computation to be performed directly on encrypted data (a minimal sketch follows this list).
Differential privacy, which injects calibrated randomness into algorithms to protect data and models during inference (a corresponding sketch appears after the next paragraph).
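To make the first of these concrete, below is a minimal sketch of homomorphic computation using the open-source python-paillier library (`phe`). The library choice and the toy values are illustrative assumptions, not the scheme or code from my work; Paillier is only additively homomorphic, while encrypted training typically relies on fully homomorphic schemes such as CKKS that also support ciphertext multiplication.

```python
# A minimal sketch of additively homomorphic encryption with python-paillier
# (the `phe` package, an illustrative stand-in for a production HE scheme).
# Paillier allows adding two ciphertexts and multiplying a ciphertext by a
# plaintext scalar, covering the linear operations at the heart of training.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

# The data owner encrypts two sensitive values before sharing them.
enc_a = public_key.encrypt(3.5)
enc_b = public_key.encrypt(2.0)

# An untrusted server computes on ciphertexts without ever seeing the data.
enc_sum = enc_a + enc_b      # Enc(a) + Enc(b) -> Enc(a + b)
enc_scaled = enc_a * 4       # k * Enc(a)      -> Enc(k * a)

# Only the secret-key holder can decrypt the results.
assert abs(private_key.decrypt(enc_sum) - 5.5) < 1e-9
assert abs(private_key.decrypt(enc_scaled) - 14.0) < 1e-9
```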
Security technologies are well known to substantially reduce the efficiency and effectiveness of the algorithms they protect, whether through computational overhead or injected noise. It is therefore necessary to develop new algorithms that are both secure and effective, which constitutes the primary focus of my research. Future research will focus on developing an integrated AI security system that leverages various privacy-enhancing technologies, including multi-party computation and federated learning.
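The trade-off noted above is already visible in the simplest differentially private primitive, the Laplace mechanism. The sketch below is a generic textbook illustration, not my proposed algorithm; the query, sensitivity, and epsilon values are assumptions chosen for the example. Smaller values of epsilon give stronger privacy but noisier answers.

```python
# The Laplace mechanism: release a query answer with epsilon-differential
# privacy by adding noise scaled to sensitivity / epsilon. Smaller epsilon
# means stronger privacy but larger error -- the utility trade-off above.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_answer, sensitivity, epsilon):
    return true_answer + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

# Toy example: privately release the mean of 1,000 values bounded in [0, 1].
data = rng.uniform(0.0, 1.0, size=1_000)
true_mean = data.mean()
sensitivity = 1.0 / len(data)  # one record changes the mean by at most 1/n

for epsilon in (0.01, 0.1, 1.0):
    noisy = laplace_mechanism(true_mean, sensitivity, epsilon)
    print(f"epsilon={epsilon:5.2f}  error={abs(noisy - true_mean):.5f}")
```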
Fairness in Machine Learning & Deep Learning
Machine learning models have been shown to produce discriminatory outcomes for different demographic groups, including those defined by race, gender, and age. A primary factor contributing to these outcomes is historical bias ingrained in the training data. For example, if the data reflects a historical trend of one gender being hired more frequently for a specific job, a model trained on that data is more likely to favor that gender under the same conditions. Fair machine learning aims to develop new algorithms that mitigate such discrimination, and it aligns with global objectives such as the UN's Sustainable Development Goals.
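To make the hiring example measurable, a common group-fairness criterion is demographic parity: positive decisions should occur at similar rates across groups. The sketch below, with made-up decisions and a hypothetical binary protected attribute, shows how the violation is quantified; it is a generic illustration rather than a method from my research.

```python
# Measuring a demographic-parity gap on toy hiring decisions.
# A gap near 0 means both groups are hired at similar rates.
import numpy as np

group = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute (e.g., gender)
hired = np.array([1, 1, 1, 0, 1, 0, 0, 0])  # model's binary hiring decisions

rate_g0 = hired[group == 0].mean()  # positive rate for group 0
rate_g1 = hired[group == 1].mean()  # positive rate for group 1
dp_gap = abs(rate_g0 - rate_g1)

print(f"P(hire|g=0)={rate_g0:.2f}  P(hire|g=1)={rate_g1:.2f}  gap={dp_gap:.2f}")
```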
In general, adding fairness constraints to an algorithm comes at the cost of decreased utility. Therefore, improving the trade-off between utility and fairness is a key focus of research. I am particularly interested in understanding the relationship between generalization error and fairness constraints through statistical complexity analysis. Models that simultaneously address security, privacy, and fairness represent an intriguing and emerging research area for ensuring the ethical use of AI.
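One standard way to formalize this utility-fairness trade-off is fairness-constrained empirical risk minimization; the notation below (empirical risk $\hat{R}_n$, demographic-parity gap $\Delta_{\mathrm{DP}}$, tolerance $\tau$) is chosen here for illustration and is not tied to a specific paper.

```latex
% Fairness-constrained empirical risk minimization: minimize training loss
% subject to a bound on the demographic-parity gap.
\begin{aligned}
\hat{h} = \arg\min_{h \in \mathcal{H}} \;& \hat{R}_n(h)
    = \frac{1}{n} \sum_{i=1}^{n} \ell\bigl(h(x_i), y_i\bigr) \\
\text{subject to} \;& \Delta_{\mathrm{DP}}(h)
    = \bigl|\Pr[h(X)=1 \mid A=0] - \Pr[h(X)=1 \mid A=1]\bigr| \le \tau .
\end{aligned}
```

Tightening $\tau$ shrinks the feasible hypothesis class, which is where statistical complexity tools can connect the fairness constraint to bounds on the generalization error.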
Industrial Applications
As represented by the GDPR in Europe and the revised Data 3 Act in Korea, nations around the world are strengthening legal restrictions on the use of personal data. As industries become increasingly hyper-personalized, the importance of data protection will only grow. In the training and use of AI models, performance has so far been the paramount concern; in the future, how well models address ethical issues will be an even greater one.
I am currently conducting research to protect customer information in the financial sector, especially for robo-advisors, a core component of fintech. Thus far, I have developed secure portfolio optimization methods using homomorphic encryption; a simplified illustration of the underlying idea appears below. Future research will focus on developing secure transaction methods using blockchain technology to build a more robust and practical system.
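To give a flavor of how homomorphic encryption meshes with portfolio computations, the sketch below evaluates a portfolio's expected return on encrypted asset returns with python-paillier. The weights, returns, and library are illustrative assumptions; the actual optimization methods are considerably more involved.

```python
# Evaluating a portfolio's expected return without revealing asset returns.
# Because Paillier is additively homomorphic, the weighted sum sum_i w_i * r_i
# can be computed on ciphertexts (plaintext weights times encrypted returns).
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair()

expected_returns = [0.05, 0.02, 0.08]  # private to the data owner
weights = [0.5, 0.3, 0.2]              # known to the computing server

# The data owner encrypts the expected returns before outsourcing.
enc_returns = [public_key.encrypt(r) for r in expected_returns]

# The untrusted server computes the encrypted portfolio return.
enc_portfolio_return = sum(w * r for w, r in zip(weights, enc_returns))

# Only the key holder decrypts the final scalar: 0.047 here.
print(private_key.decrypt(enc_portfolio_return))
```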
In addition, I aim to develop integrated AI security systems tailored to protecting personal and corporate data across various industries, including manufacturing and healthcare. The healthcare industry in particular is expected to show very high demand for security technologies because it handles highly sensitive health information.