Safe AI Lab specializes in the security of AI models, with a focus on privacy-preserving machine learning, fairness verification and certification, and security-enhanced machine learning. We are particularly interested in the inherent vulnerabilities of AI models.
Privacy-preserving machine learning: secure computations and differential privacy
Fairness-aware machine learning: fairness verification and certification
Security-enhanced machine learning: adversarial robustness, poisoning attacks, and model extraction attacks
Deep representation learning
Transfer learning
50 UNIST-gil, Eonyang-eup, Ulju-gun, Ulsan, Republic of Korea
☎️ +82 52 217 7599
✉️ srompark@unist.ac.kr