Sungyoon Lee
I'm an assistant professor of Computer Science at Hanyang University and the principal investigator of LRNING.
(Joint Appointments: Data Science / Artificial Intelligence / AI Systems / Institute for AI Research)
My research goal is to build robust and reliable AI, grounded in an understanding of deep learning, in order to solve important problems and develop technologies that benefit people and society.
Research Interests
Deep learning
How does a machine learn? How can we improve it?
Optimization and Generalization [NeurIPS 22]
Learning Algorithms [ICML 23]
Implicit Bias of SGD [ICLR 23, ICML 23]
Neural Networks
Overparameterized Models
Transformers
Large Language Models (LLMs)
How can we build a robust and reliable machine?
Trustworthy AI, AI Safety and Alignment
Robustness
Adversarial Robustness [TPAMI 23]
Certified Defense [NeurIPS 20, NeurIPS 21]
Distribution Shift
Education & Research Experience
Korea Institute for Advanced Study (Sep. 2021 - Feb. 2023)
Research Fellow at Center for AI and Natural Sciences
Ph.D. (- Aug. 2021)
in Mathematical Sciences (Advisor: Jaewook Lee / Co-advisor: Seung-Yeal Ha)
Thesis: Robustness of Deep Neural Networks to Adversarial Attack: from Heuristic Methods to Certified Methods
B.S. (- Feb. 2016)
(Double Major) in Material Science and Engineering + Mathematical Sciences
Selected Ongoing Works
Prediction Risk and Estimation Risk of the Ridgeless Least Squares Estimator under General Assumptions on Regression Errors
SL, Sokbae Lee
arXiv
Selected Publications
Implicit Jacobian regularization weighted with impurity of probability output
SL, Jinseong Park, Jaewook Lee
ICML 2023
paper/poster/slides
We show that SGD has an implicit regularization effect on the logit-weight Jacobian norm in neural networks. This regularization effect is weighted by the impurity of the probability output, and thus it is active only during a certain phase of training. Based on these findings, we propose a novel optimization method that achieves performance comparable to state-of-the-art sharpness-aware optimization methods such as SAM and ASAM.
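The impurity weighting above can be illustrated with a small sketch (illustrative only; the paper's exact weighting may differ). Using the Gini impurity of the probability output: a near-uniform output early in training gives high impurity, so an impurity-weighted penalty is active, while a confident near-one-hot output late in training gives impurity close to zero, switching it off.

```python
def gini_impurity(probs):
    """Gini impurity 1 - sum_k p_k^2 of a probability output.

    Illustrative sketch: near-uniform outputs (early training) yield
    high impurity, so an impurity-weighted regularizer stays active;
    near one-hot outputs (late training) yield impurity ~0, turning
    the regularizer off.
    """
    return 1.0 - sum(p * p for p in probs)
```

For example, a uniform output over 10 classes gives impurity 0.9, while a one-hot output gives exactly 0.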
A New Characterization of the Edge of Stability Based on a Sharpness Measure Aware of Batch Gradient Distribution
SL, Cheongjae Jang
ICLR 2023
paper/poster/slides
We provide a clearer characterization of the Edge of Stability and extend it to general mini-batch SGD. Based on this analysis, we propose LSSR (Linear and Saturation Scaling Rule), a new scaling rule between learning rate and batch size.
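The linear-then-saturation idea behind such a rule can be sketched as follows (a hypothetical form for illustration; the `saturation_batch` parameter and the exact rule are stated in the paper, not here): scale the learning rate linearly with batch size up to a saturation point, and keep it constant beyond that.

```python
def lssr_lr(base_lr, base_batch, batch, saturation_batch):
    """Hypothetical sketch of a linear-and-saturation scaling rule
    (see the paper for the exact form): the learning rate grows
    linearly with batch size up to saturation_batch, then stays
    constant for larger batches."""
    return base_lr * min(batch, saturation_batch) / base_batch
```

For instance, doubling the batch size below saturation doubles the learning rate, while any batch size beyond the saturation point yields the same rate.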
A Reparametrization-Invariant Sharpness Measure Based on Information Geometry
Cheongjae Jang, SL, Yung-Kyun Noh, Frank C. Park
NeurIPS 2022
paper/poster
We provide a reparametrization-invariant sharpness measure based on information geometry.
GradDiv: Adversarial Robustness of Randomized Neural Networks via Gradient Diversity Regularization
SL, Hoki Kim, Jaewook Lee
IEEE TPAMI (Feb. 2023)
paper/code
The effectiveness of proxy-gradient-based adversarial attacks on randomized neural networks depends strongly on the directional distribution of the loss-input gradients. We propose Gradient Diversity (GradDiv) regularizations that minimize the concentration of these gradients to weaken such proxy-gradient-based attacks.
Towards Better Understanding of Training Certifiably Robust Models against Adversarial Examples
SL, Woojin Lee, Jinseong Park, Jaewook Lee
NeurIPS 2021
paper/code/poster/slides
We identify smoothness of the objective loss landscape as an important factor in building certifiably robust models against adversarial attacks.
Lipschitz-Certifiable Training with a Tight Outer Bound
SL, Jaewook Lee, Saerom Park
NeurIPS 2020
paper/code/poster/slides
Certifiable training minimizes an upper bound on the worst-case loss over valid adversarial perturbations, so the tightness of that upper bound is crucial. We propose a certified defense method with a tight upper bound.
CV
Theory without practice is empty, practice without theory is aimless.
學而不思則罔, 思而不學則殆 ("Learning without thought is lost; thought without learning is perilous.") -論語 爲政篇 (Analects, Wei Zheng)