Deep Learning Researcher @ KRAFTON
email: jwcho at krafton dot com
webpage: sites.google.com/view/jaewoongcho
I am a deep learning researcher at KRAFTON.
I received my B.S., M.S., and Ph.D. degrees in electrical engineering from KAIST in 2014, 2016, and 2022, respectively, under the supervision of Prof. Changho Suh.
Information theory, machine learning
(Jan. 2021) Selected as one of the TOP 10 KAIST Research Achievements of 2020 (KAIST Annual R&D Report)
(Aug. 2020) Won the NAVER Paper Award
(2019) Won the Outstanding Teaching Assistant Award
EE623: Information Theory (Fall 2019)
(2025)
Distilling LLM Agent into Small Models with Retrieval and Code Tools
Minki Kang, Jongwon Jeong, Seanie Lee, Jaewoong Cho, Sung Ju Hwang
NeurIPS 2025 (spotlight)
Delving into Large Language Models for Effective Time-Series Anomaly Detection
Junwoo Park, Kyudan Jung, Dohyun Lee, Hyuck Lee, Daehoon Gwak, ChaeHun Park, Jaegul Choo, Jaewoong Cho
NeurIPS 2025
FlashAdventure: A Benchmark for GUI Agents Solving Full Story Arcs in Diverse Adventure Games
Jaewoo Ahn, Junseo Kim, Heeseung Yun, Jaehyeon Son, Dongmin Park, Jaewoong Cho, Gunhee Kim
EMNLP 2025
Alignment as Distribution Learning: Your Preference Model is Explicitly a Language Model
Jihun Yun, Juno Kim, Jongho Park, Junhyuck Kim, Jongha Jon Ryu, Jaewoong Cho, Kwang-Sung Jun
arXiv preprint
Orak: A Foundational Benchmark for Training and Evaluating LLM Agents on Diverse Video Games
Dongmin Park, Minkyu Kim, Beongjun Choi, Junhyuck Kim, Keon Lee, Jonghyun Lee, Inkyu Park, Byeong-Uk Lee, Jaeyoung Hwang, Jaewoo Ahn, Ameya S. Mahabaleshwarkar, Bilal Kartal, Pritam Biswas, Yoshi Suhara, Kangwook Lee, Jaewoong Cho
arXiv preprint
T1: Tool-integrated Self-verification for Test-time Compute Scaling in Small Language Models
Minki Kang, Jongwon Jeong, Jaewoong Cho
arXiv preprint
Efficient Generative Modeling with Residual Vector Quantization-Based Tokens
Jaehyeon Kim, Taehong Moon, Keon Lee, Jaewoong Cho
ICML 2025
Lexico: Extreme KV Cache Compression via Sparse Coding over Universal Dictionaries
Junhyuck Kim, Jongho Park, Jaewoong Cho, Dimitris Papailiopoulos
ICML 2025
Rare-to-Frequent: Unlocking Compositional Generation Power of Diffusion Models on Rare Concepts with LLM Guidance [GitHub]
Dongmin Park, Sebin Kim, Taehong Moon, Minkyu Kim, Kangwook Lee, Jaewoong Cho
ICLR 2025 (spotlight)
DiTTo-TTS: Efficient and Scalable Zero-Shot Text-to-Speech with Diffusion Transformer [🔊Demo]
Keon Lee, Dong Won Kim, Jaehyeon Kim, Jaewoong Cho
ICLR 2025
Bridging the Gap between Expert and Language Models: Concept-guided Chess Commentary Generation and Evaluation
Jaechang Kim, Jinmin Goh, Inseok Hwang, Jaewoong Cho, Jungseul Ok
NAACL 2025
(2024)
Task Diversity Shortens the ICL Plateau
Jaeyeon Kim, Sehyun Kwon, Joo Young Choi, Jongho Park, Jaewoong Cho, Jason D. Lee, Ernest K. Ryu
TMLR
Latent Paraphrasing: Perturbation on Layers Improves Knowledge Injection in Language Models
Minki Kang, Sung Ju Hwang, Gibbeum Lee, Jaewoong Cho
NeurIPS 2024
Simple Drop-in LoRA Conditioning on Attention Layers Will Improve Your Diffusion Model
Joo Young Choi, Jaesung R. Park, Inkyu Park, Jaewoong Cho, Albert No, Ernest K. Ryu
TMLR
Accelerating Multilingual Language Model for Excessively Tokenized Languages
Jimin Hong, Gibbeum Lee, Jaewoong Cho
ACL 2024
A Simple Early Exiting Framework for Accelerated Sampling in Diffusion Models
Taehong Moon, Moonseok Choi, EungGu Yun, Jongmin Yoon, Gayoung Lee, Jaewoong Cho, Juho Lee
ICML 2024
Can Mamba Learn How to Learn? A Comparative Study on In-Context Learning Tasks
Jongho Park, Jaeseung Park, Zheyang Xiong, Nayoung Lee, Jaewoong Cho, Samet Oymak, Kangwook Lee, Dimitris Papailiopoulos
ICML 2024
CLaM-TTS: Improving Neural Codec Language Model for Zero-Shot Text-to-Speech [🔊Demo]
Jaehyeon Kim, Keon Lee, Seungjun Chung, Jaewoong Cho
ICLR 2024
Image Clustering Conditioned on Text Criteria
Sehyun Kwon, Jaeseung Park, Minkyu Kim, Jaewoong Cho, Ernest K. Ryu, Kangwook Lee
ICLR 2024
(2023)
Mini-Batch Optimization of Contrastive Loss
Jaewoong Cho*, Kartik Sreenivasan*, Keon Lee, Kyunghoo Mun, Soheun Yi, Jeong-Gwan Lee, Anna Lee, Jy-yong Sohn, Dimitris Papailiopoulos, Kangwook Lee
TMLR
Predictive Pipelined Decoding: A Compute-Latency Trade-off for Exact LLM Decoding
Seongjun Yang, Gibbeum Lee, Jaewoong Cho, Dimitris Papailiopoulos, Kangwook Lee
TMLR
Censored Sampling of Diffusion Models Using 3 Minutes of Human Feedback
TaeHo Yoon, Kibeom Myoung, Keon Lee, Jaewoong Cho, Albert No, Ernest K. Ryu
NeurIPS 2023
(2022)
Equal Experience in Recommender Systems
Jaewoong Cho, Moonseok Choi, Changho Suh
arXiv preprint
(2020)
A Fair Classifier Using Kernel Density Estimation
Jaewoong Cho, Gyeongjo Hwang, Changho Suh
NeurIPS 2020
A Fair Classifier Using Mutual Information
Jaewoong Cho, Gyeongjo Hwang, Changho Suh
Proceedings of the IEEE International Symposium on Information Theory
(2019)
Wasserstein GAN Can Perform PCA
Jaewoong Cho, Changho Suh
Proceedings of Allerton Conference on Communication, Control, and Computing
(2018)
Two-Way Interference Channel Capacity: How to Have the Cake and Eat It Too
Changho Suh, Jaewoong Cho, David Tse
IEEE Transactions on Information Theory, vol. 64, no. 6
(2017)
Two-Way Interference Channel Capacity: How to Have the Cake and Eat It Too
Changho Suh, Jaewoong Cho, David Tse
Proceedings of the IEEE International Symposium on Information Theory
(2016)
To Feedback or Not to Feedback
Changho Suh, David Tse, Jaewoong Cho
Proceedings of the IEEE International Symposium on Information Theory