Research
Privacy-preserving machine learning (PPML) is a research field that combines cryptographic techniques such as Homomorphic Encryption (HE), Multi-party Computation (MPC), and Differential Privacy (DP) with artificial intelligence systems to maintain data privacy while training and running AI models. We are conducting cutting-edge research in PPML to open an era where Large Language Models (LLMs) such as ChatGPT can be used freely without concerns about data leakage.
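As a minimal illustration of one PPML building block, the sketch below applies the standard Laplace mechanism from differential privacy to a counting query. The sensitivity and epsilon values are illustrative assumptions, not parameters from our work.

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with epsilon-DP by adding Laplace noise
    whose scale is sensitivity / epsilon."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

rng = np.random.default_rng(0)
# Counting query over a private dataset: adding or removing one
# person changes the count by at most 1, so the sensitivity is 1.
true_count = 42  # hypothetical query result
noisy_count = laplace_mechanism(true_count, sensitivity=1.0, epsilon=0.5, rng=rng)
print(noisy_count)
```

A smaller epsilon gives stronger privacy at the cost of noisier answers; the same trade-off governs DP training of AI models.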
Federated learning, which allows AI models to be trained without gathering each user's data in one place, is an attractive technique but has significant security vulnerabilities. Secure federated learning analyzes the security issues that arise in the federated learning process, studies plausible passive and active attack scenarios, and develops effective defense techniques against them.
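For concreteness, the sketch below shows the standard FedAvg aggregation step and how a single model-poisoning client can skew the global update; the toy parameter vectors and client sizes are hypothetical.

```python
import numpy as np

def fed_avg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg):
    each client's update is weighted by its local dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three honest clients send similar local updates...
clients = [np.array([1.0, 2.0]), np.array([1.2, 1.8]), np.array([0.9, 2.1])]
sizes = [100, 100, 100]
print(fed_avg(clients, sizes))  # benign global update

# ...and one active attacker submits a scaled, poisoned update,
# illustrating why the aggregation step must be made robust.
clients.append(np.array([100.0, -100.0]))
sizes.append(100)
print(fed_avg(clients, sizes))  # the attacker dominates the average
```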
Beyond traditional encryption, which is primarily used to transmit data over public channels without the risk of leakage, more advanced encryption schemes are being actively researched that allow secure computation on encrypted data or more fine-grained access to it. We aim to add such new functionalities while rigorously guaranteeing well-defined mathematical security. Representative fields include Homomorphic Encryption (HE), Multi-party Computation (MPC), and Functional Encryption (FE).
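As one small example of computing on hidden data, the sketch below implements additive secret sharing, a basic MPC primitive; the modulus, party count, and inputs are illustrative choices, not a description of any specific protocol we use.

```python
import secrets

P = 2**61 - 1  # public prime modulus (illustrative choice)

def share(secret, n_parties):
    """Split secret into n additive shares mod P; any n-1 of the
    shares together reveal nothing about the secret."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    return sum(shares) % P

# Two private inputs, three parties: each party adds its shares
# locally, so the sum is computed without any party (or the server)
# ever seeing the raw inputs.
x_shares, y_shares = share(123, 3), share(456, 3)
sum_shares = [(x + y) % P for x, y in zip(x_shares, y_shares)]
print(reconstruct(sum_shares))  # 579
```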
As the development of quantum computers advances, it is now necessary to consider how existing cryptographic algorithms must be replaced or transformed. Quantum computers can attack existing cryptographic algorithms far more effectively than classical computers, and quantum techniques can also be used to improve the performance of cryptographic algorithms themselves. Our goal is to analyze the comprehensive impact of quantum computers on cryptographic algorithms.
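To illustrate why quantum period-finding threatens RSA-style cryptography, the sketch below performs the classical reduction used in Shor's algorithm, with the period found by brute force in place of the quantum step; N = 15 is a toy modulus.

```python
import math
import random

def find_period(a, N):
    # Brute-force the multiplicative order r of a mod N, i.e. the
    # smallest r > 0 with a^r = 1 (mod N). This is exactly the step
    # that Shor's algorithm performs with efficient quantum
    # period-finding; classically it takes exponential time.
    x, r = a % N, 1
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

N = 15  # toy RSA-style modulus
while True:
    a = random.randrange(2, N)
    if math.gcd(a, N) != 1:
        continue  # a already shares a factor with N; retry for the generic case
    r = find_period(a, N)
    x = pow(a, r // 2, N)
    if r % 2 == 0 and x != N - 1:
        # With the period in hand, the factors fall out classically.
        print(N, "=", math.gcd(x - 1, N), "*", math.gcd(x + 1, N))
        break
```

Because the quantum step runs in polynomial time, the same reduction breaks real-world RSA key sizes, which is what motivates the migration to post-quantum algorithms.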