(J: Journal, C: Conference, W: Workshop, *: Equal Contribution (1st Authors), ^: Equal Advising)
2025
[C16/W8] Flex-Judge: Text-Only Reasoning Unleashes Zero-Shot Multimodal Evaluators
[W7] Revisiting Multi-Agent Debate as Test-Time Scaling: A Systematic Study of Conditional Effectiveness
Yongjin Yang*, Euiin Yi*, Jongwoo Ko, Kimin Lee, Zhijing Jin, Se-Young Yun
ICML 2025 Workshop on Multi-Agent Systems (MAS). 2025. Vancouver [paper] [code]
[C15] DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs
[C14] SeRA: Self-Reviewing and Alignment of LLMs using Implicit Reward Margins
Jongwoo Ko, Saket Dingliwal, Bhavana Ganesh, Sailik Sengupta, Sravan Bodapati, Aram Galstyan
The Thirteenth International Conference on Learning Representations (ICLR). 2025. Singapore [paper]
[C13] Beyond Correlation: The Impact of Human Uncertainty in Measuring the Effectiveness of Automatic Evaluation and LLM-as-a-Judge
Aparna Elangovan, Lei Xu, Jongwoo Ko, Mahsa Elyasi, Ling Liu, Sravan Bodapati, Dan Roth
The Thirteenth International Conference on Learning Representations (ICLR). 2025. Singapore [paper]
2024
[C10] DistiLLM: Towards Streamlined Distillation for Large Language Models
[C9] Fine-tuning Pre-trained Models for Robustness Under Noisy Labels
Sumyeong Ahn, Sihyeon Kim, Jongwoo Ko, Se-Young Yun
International Joint Conference on Artificial Intelligence (IJCAI). 2024. Jeju [paper]
2023
[C8] NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models
[C7] Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding
[C6] Revisiting Intermediate Layer Distillation for Compressing Language Models: An Overfitting Perspective
[C5/W2/W3] CUDA: Curriculum of Data Augmentation for Long-tailed Recognition
Sumyeong Ahn*, Jongwoo Ko*, Se-Young Yun
[C5] The Eleventh International Conference on Learning Representations (ICLR). 2023. Kigali. (Notable-Top-25%) [paper] [code]
[W2] NeurIPS 2022 Workshop on Distribution Shifts (DistShift). 2022. New Orleans [paper] [website]
[W3] NeurIPS 2022 ML Safety Workshop (MLSW). 2022. New Orleans [paper] [website]
[C3/W1] A Gift from Label Smoothing: Robust Training with Adaptive Label Smoothing via Auxiliary Classifier under Label Noise
2022
[J2] Deep Learning-Based Cataract Detection and Grading from Slit-Lamp and Retro-Illumination Photographs: Model Development and Validation Study
Ki Young Son*, Jongwoo Ko*, Eunseok Kim, Si Young Lee, Min-Ji Kim, Jisang Han, Eunhae Shin, Tae-Young Chung, Dong Hui Lim
Ophthalmology Science. 2(2). 100147 [paper]
2021
[C1] FINE Samples for Learning with Noisy Labels