Hi! I'm currently a research intern at KAIST AI under the supervision of Prof. Seong Joon Oh, and I’m excited to continue my PhD there starting in September 2026.
My research lies in the field of AI Safety, with a particular interest in building models that are fair, responsible, and transparent. Specifically, I work on machine unlearning, which seeks to selectively remove the influence of specific data in order to protect user privacy, copyright, and data ownership. Recently, I have been exploring unlearning techniques in both large language models (LLMs) and large vision-language models (LVLMs). I have also developed a growing interest in model explainability, since understanding how a model forgets certain knowledge is crucial for ensuring trustworthy and verifiable unlearning. By combining unlearning and explainability, my goal is to contribute toward safer and more accountable AI systems.
If you are interested in my research or would like to explore potential collaborations, please feel free to reach out; I am always open to new ideas.
News
02/2026: Joined the STAI group at KAIST AI.
01/2026: 1 paper accepted @ ICLR 2026.
11/2025: 🏆 Awarded the AI SeoulTech Graduate Scholarship by the Seoul Scholarship Foundation ($3.5K).
09/2025: 1 paper accepted @ NeurIPS 2025.
07/2025: 1 paper accepted @ COLM 2025.
01/2025: Began serving as a visiting scholar at the Tübingen AI Center (Host: Prof. Dr. Seong Joon Oh).
03/2024: 🏆 Awarded the Digital Human & Entertainment Scholarship by Smilegate ($27K).
03/2024: 🏆 Awarded the Albatross Fellowship by Sogang University ($21K; graduated in the top 10% of the class).