Hi! I am a Ph.D. student at KAIST AI (Advisors: Se-Young Yun and Hwanjun Song).
My research focuses on enhancing AI's Reasoning Capabilities and improving its Inference Efficiency. Specifically, I strengthen models' reasoning abilities by scaling synthetic data without direct human supervision [C2, C3, C5]. Concurrently, I accelerate inference through novel model architectures, adaptive computation, and efficient decoding strategies [C1, C6, W3].
Contact me: yujin [dot] src [at] gmail [dot] com [CV / Scholar / GitHub / LinkedIn]
(C: Conference, W: Workshop, P: Preprint, *: Equal Contribution (1st Authors), ^: Equal Advising)
2025
[C6/W4] Mixture-of-Recursions: Learning Dynamic Recursive Depths for Adaptive Token-Level Computation
Sangmin Bae*, Yujin Kim*, Reza Bayat*, Sungnyun Kim, Jiyoun Ha, Tal Schuster, Adam Fisch, Hrayr Harutyunyan, Ziwei Ji, Aaron Courville^, Se-Young Yun^
The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS). 2025. San Diego [paper] [code (468+ ★)]
ICML Workshop on Efficient Systems for Foundation Models (ES-FoMo III). 2025
2024
[C4] BAPO: Base-Anchored Preference Optimization for Personalized Alignment in Large Language Models
Gihun Lee, Minchan Jeong, Yujin Kim, Hojung Jung, Jaehoon Oh, Sangmook Kim, Se-Young Yun
Findings of the Association for Computational Linguistics: EMNLP 2024. Miami [paper]
[C3/W2] Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Yujin Kim, Jaehong Yoon, Seonghyeon Ye, Sangmin Bae, Namgyu Ho, Sung Ju Hwang^, Se-Young Yun^
Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics (NAACL). 2024. Mexico City [paper] [code]
NeurIPS Workshop on Synthetic Data Generation with Generative AI. 2023. (Oral)
2023
[C1] NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models
2022
[W1] Revisiting the Updates of a Pre-trained Model for Few-shot Learning
Yujin Kim*, Jaehoon Oh*, Sungnyun Kim, Se-Young Yun
ICML Workshop on Updatable Machine Learning (UpML). 2022. (Oral) [paper]
Research Intern @NAVER Cloud
Bundang-Gu, Gyeonggi, South Korea
Mar. 2024 - Aug. 2024 (6 months)
Developed an inference-efficient adapter network for multiple task-specific serving scenarios.
NVIDIA-HPE TensorRT-LLM Hackathon
SKT AI Fellowship, Grand Prize (2021)
Developed a 2D-3D keypoint matching model incorporating a graph attention network.
Built a panel detection model by constructing a panel dataset via synthetic data generation.
Korea Advanced Institute of Science and Technology (KAIST), Seoul, Korea, Mar. 2025 - Ongoing
Ph.D. in Kim Jaechul Graduate School of Artificial Intelligence (Advisors: Se-Young Yun and Hwanjun Song)
Korea Advanced Institute of Science and Technology (KAIST), Seoul, Korea, Jun. 2022 - Feb. 2025
M.S. in Kim Jaechul Graduate School of Artificial Intelligence (Advisor: Se-Young Yun)
Thesis: Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Sogang University, Seoul, Korea, Mar. 2017 - Aug. 2022
B.A. in Economics and B.S. in Artificial Intelligence