We, LRNING (pronounced "learning"), study deep learning.
Team A (Learning and Reasoning)
Representation Learning (Feature Learning, Self-Supervised Learning, ...)
Optimization and Generalization (Learning Dynamics, Implicit Bias, Sharpness, Edge-of-Stability, ...)
Large Language Models (Transformers, In-Context Learning, Task Learning, Emergent Abilities, ...)
Team B (Safety and Alignment)
AI Safety (Adversarial Robustness, LLM Jailbreak, Adversarial Prompting, AI Alignment, Hallucinations, Privacy-Preserving ML, ...)
Team C (LLMs and Generative Models)
Large Language Models (Transformers, In-Context Learning, Physics of LMs, ...)
Generative Models (Generalization, Memorization, Overoptimization, Controllable Generation, ...)
If you are interested in joining our group, please email me a short CV (covering research interests, future goals, achievements, etc.) to arrange an interview. In particular, if you are a student at Hanyang University, I highly recommend taking one of my courses.
Group Study (3rd/4th year undergraduate students)
Textbook: "Probabilistic Machine Learning" by Kevin Murphy
Probabilistic Machine Learning: An Introduction (Ch 1-8, 13-15 + Ch 16, 19, 20)
Probabilistic Machine Learning: Advanced Topics (Ch 19 + Ch 4, 10, 20, 21, 24, 25, 35)
Winter (Jan. 10 - Feb. 28) ~8 weeks (Application Deadline = Jan. 1)
Optional Test (Anytime)
Spring (May 10 - Jun. 10) ~5 weeks (Application Deadline = May 1)
Summer (Jul. 10 - Aug. 31) ~8 weeks (Application Deadline = Jul. 1)
Optional Test (Anytime)
Fall (Nov. 10 - Dec. 10) ~5 weeks (Application Deadline = Nov. 1)
Undergraduate Intern (4th year students after the group study)
Paper Reading
Members
XXX XXX (MS Student, Mar. 2026-)
(undergraduate intern, Aug. 2025-Feb. 2026)
My research goal is to ...
XXX XXX (MS Student, Mar. 2026-)
(undergraduate intern, Jan. 2025-Feb. 2026)
My research goal is to ...
Jimin Yeom (MS Student, Sep. 2025-)
(undergraduate intern, May 2024-Aug. 2025)
My research goal is to build trustworthy, interpretable AI by understanding how models learn.
Trustworthy AI
Adversarial Robustness
Robustness-Accuracy Trade-off
Catastrophic Overfitting
LLM Jailbreaking
Interpretable AI
In-Context Learning
Linear Transformers
Model Unlearning
Gwangho Kim (MS Student, Mar. 2025-)
My research goal is to understand why deep learning works.
Diffusion Models
Why do diffusion models generalize well?
Transformers
Hee-Sung Kim (MS-PhD Student, Mar. 2025-)
(undergraduate intern, Jan. 2025-Feb. 2025)
My research aims to develop improved machine learning methodologies by deepening the understanding of deep learning.
Deep Learning
Optimization and Generalization
Implicit Bias
Training Dynamics
Inconsistency (Disagreement)
Sharpness
Large Language Models
(Linear) Transformers
In-Context Learning
Changsu Shin (MS Student, Mar. 2025-)
(undergraduate intern, May 2024-Feb. 2025)
My research focuses on developing an in-depth understanding of generative models.
Deep Learning
What are the factors influencing the generalization performance of diffusion models?
Diffusion Model Architecture
Training Dynamics and Generalization Trade-offs
Data Augmentation for Improved Generalization
Mathematical Framework for Generalization
Over-Optimization in Diffusion Models
Mode Collapse and Sample Diversity in Generalization
Juhwan Kim (MS Student, Mar. 2024-)
(undergraduate intern, Oct. 2023-Feb. 2024)
Deep Learning
How can we build a better feature extractor?
Self-Supervised Learning
Optimization and Generalization
Advanced Learning Algorithms
Continual Learning
Jonghyun Hong (MS Student, Mar. 2024-)
EMNLP 2025 [hong2025variance]
(undergraduate intern, Aug. 2023-Feb. 2024)
My research goal is to build interpretable and trustworthy LLMs.
Interpretable LLMs
An interpretability analysis of training instabilities, such as loss spikes and exploding gradients, observed during LLM pre-training.
(EMNLP 2025) Variance Sensitivity Induces Attention Entropy Collapse and Instability in Transformers
Trustworthy LLMs
Sungyoon Lee (PI, Mar. 2023-)
See Home.