Jongwon Jeong
Google Scholar / LinkedIn / Twitter / Github
Ph.D. Student at University of Wisconsin-Madison
Contact
Email: jsjsjs964@gmail.com (Permanent), jongwon.jeong@wisc.edu
Hello. I am a first-year Ph.D. student in Electrical and Computer Engineering (ECE) at the University of Wisconsin-Madison, advised by Prof. Robert Nowak and Prof. Kangwook Lee. I will join IBM Research (AI Foundation Models) as a research intern in Summer 2026.
Previously, I was an Applied AI Research Scientist in the Deep Learning Division at KRAFTON, and before that an AI Research Scientist at the Applied AI Lab, NLP Center, NCSOFT Corp. I received my B.S. and M.S. degrees in Electrical Engineering (EE) from the Korea Advanced Institute of Science and Technology (KAIST) in 2018 and 2020, respectively, where I was advised by Prof. Sae-Young Chung.
My research focuses on language model (LM) agents that are practically usable in real-world environments. To this end, I pursue two directions: (1) improving agent capability in real-world usage by identifying key limitations of current agents and developing empirical and theoretical methods to address them, and (2) improving agent efficiency by building computationally efficient agents, often based on small language models (sLMs), that close the performance gap with large models at lower compute cost. On the application side, I am also interested in applying LM agents to domains such as Embodied AI, Game AI, and Recommender Systems (RS).
Language Model Agents; Agentic Reasoning; Small Language Models (sLMs); Knowledge Distillation
Embodied AI; Game AI; Recommender Systems (RS)
(Jan. 2026) Our paper "T1: Tool-integrated Verification for Test-time Compute Scaling in Small Language Models" has been accepted at ICLR 2026! See you in Rio de Janeiro!
(Dec. 2025) I will join IBM Research (AI Foundation Models) as a Research Intern in Summer 2026.
(Sep. 2025) Our paper "Distilling LLM Agent into Small Models with Retrieval and Code Tools" has been accepted at NeurIPS 2025 (Spotlight)! Huge thanks to my collaborators, especially Minki, for the incredible collaboration.
(Aug. 2025) Excited to start my new academic journey at UW–Madison!
KRAFTON Inc. – Applied AI Research Scientist (Full-time)
Natural Language DL Team, Applied DL Department, Deep Learning Division
Oct 2023 – Aug 2025
NCSOFT Corp. – AI Research Scientist (Full-time)
Applied AI Lab, NLP Center
Aug 2020 – Oct 2023
Fulfilled alternative mandatory military service (Sep 2020 – Sep 2023)
Koh Young Technology – Research Intern (Internship)
Medical Vision Team (KAIST EE Co-op Program)
Mar 2017 – Aug 2017
TAPE: Tool-Guided Adaptive Planning and Constrained Execution in Language Model Agents
Jongwon Jeong, Jungtaek Kim, Kangwook Lee
Preprint, 2026
T1: Tool-integrated Verification for Test-time Compute Scaling in Small Language Models
Minki Kang*, Jongwon Jeong*, Jaewoong Cho (* equal contribution)
International Conference on Learning Representations (ICLR) 2026
How to Correctly Report LLM-as-a-Judge Evaluations [Code]
Chungpa Lee, Thomas Zeng, Jongwon Jeong, Jy-yong Sohn, Kangwook Lee
arXiv, 2025
Distilling LLM Agent into Small Models with Retrieval and Code Tools [Code]
Minki Kang, Jongwon Jeong, Seanie Lee, Jaewoong Cho, Sung Ju Hwang
Neural Information Processing Systems (NeurIPS) 2025 (Spotlight)
A full list of my work is available here.
Ph.D. in Electrical and Computer Engineering, University of Wisconsin-Madison, Sep. 2025 – Present.
M.S. in Electrical Engineering, KAIST, Aug. 2018 – Aug. 2020.
B.S. in Electrical Engineering (Cum Laude), KAIST, Mar. 2014 – Aug. 2018.
Korea Science Academy of KAIST, Mar. 2011 – Feb. 2014.
Top Reviewer, Learning on Graphs (LoG), Nov. 2024.
Graduation with Honors (Cum Laude), KAIST, Aug. 2018.
National Science & Engineering Scholarship, Mar. 2016 – Mar. 2018.
(Nov. 2024) Graduate seminar, Ajou University
Title: Data-centric Approaches for Graph Deep Learning and Beyond: Theory, Challenges, and Real-world Applications
Conference Reviewer: AAAI (2024, 2025, 2026), LoG (2024, 2025), ICML (2026)
Journal Reviewer: TMLR (2025)