Computer Science
New York University
CV / Google Scholar / Github / LinkedIn / Email
Email: dongkyu dot cho at nyu dot edu
ABOUT
I’m a Computer Science Ph.D. student at NYU’s Courant Institute, advised by Prof. Rumi Chunara. I also work closely with NYU Langone. Previously, I was an Applied Scientist Intern at Amazon, advised by Prof. Hengrui Cai and Dr. Rui Song. I completed my master’s and bachelor’s degrees at Seoul National University, advised by Prof. Sanghack Lee. Before my doctoral studies, I was a Researcher at LG AI Research.
I am primarily interested in Foundation Models, broadly in understanding their memorization and generalization. I am enthusiastic about a wide range of subjects, including:
Memorization
Continual Learning / Unlearning for Foundation Models [W3]
Multi-agent Memory for Agentic Collaboration
Generalization
Causal / Verifiable Reasoning Models [W4]
Foundation Models for safety-critical domains (e.g., Healthcare) [C1, W5, W2, W1]
Post Training (e.g., Weight Space Learning [W3], Model Collaboration [C1, W5], RLHF)
Previously, I worked on Causal Representation Learning & Causality-inspired Generalization Algorithms. Apart from research, I’m a big fan of History 👨🏻‍🏫 and Jazz 🎷. I'm also interested in the application of Artificial Intelligence to humanities research (e.g., Histoinformatics). Please feel free to reach out for multi-disciplinary research collaborations!
NEWS
[2025.11.10] 🔥 Our paper on Inference-Time Model Collaboration was accepted at both the ResponsibleFM Workshop (NeurIPS 2025) and the Responsible Synthetic Data Workshop (AAAI 2026) as a Spotlight paper!
[2025.10.10] 🔥 Our paper on Semi-Verifiable Reasoning was accepted at AMLC 2025, Evaluation Paradigms for GenAI: Planning, Tools, and Real-World Performance, as an Oral paper!
[2025.09.23] 🔥 Our paper on Semi-Verifiable Reasoning was accepted at FoRLM - NeurIPS 2025 and ER - NeurIPS 2025 as a Spotlight!
[2025.08.13] Our paper on Continual Learning was accepted at CLVision - ICCV 2025. See you in Honolulu!
[2025.04.04] This summer, I will join Amazon in Seattle as an Applied Scientist Intern.
[2025.03.05] Our paper on Continual Learning was accepted at WSL - ICLR 2025. See you in Singapore!
[2025.02.26] Our paper on Model-to-Model Regularization was accepted at CVPR 2025. See you in Nashville!
[2024.09.01] I've begun my Ph.D. journey in Computer Science at New York University!
PUBLICATIONS
(C: Conference, W: Workshop, J: Journal, P: Preprint)
[C1] PEER Pressure: Model-to-Model Regularization for Single Source Domain Generalization
Dongkyu Cho, Inwoo Hwang, Sanghack Lee
CVPR 2025 (Acceptance Rate: 22.12%)
[W5] Expert-guided Clinical Text Augmentation via Query-Based Model Collaboration
Dongkyu Cho*, Miao Zhang*, Gregory D. Lyng, Rumi Chunara
NeurIPS 2025, Workshop on Responsible FM
AAAI 2026, Workshop on Responsible Synthetic Data (Spotlight)
Collaboration with Optum / UnitedHealthcare
[W4] Correct Reasoning Paths Visit Shared Decision Pivots
Dongkyu Cho, Amy B.Z. Zhang, Bilel Fehri, Rumi Chunara, Hengrui Cai, Rui Song
NeurIPS 2025, Workshop on Foundations of Reasoning in Language Models
NeurIPS 2025, Workshop on Efficient Reasoning (Spotlight)
AMLC 2025, Evaluation Paradigms for GenAI: Planning, Tools, and Real-World Performance (Oral)
Work done during Dongkyu's internship @ Amazon
[W3] Cost-Efficient Continual Learning with Sufficient Exemplar Memory
Dongkyu Cho, Taesup Moon, Rumi Chunara, Kyunghyun Cho, Sungmin Cha
ICLR 2025, Workshop on Weight Space Learning
ICCV 2025, 6th Workshop on Continual Learning in Computer Vision
[W2] ShERPA: Leveraging Neuron Alignment for Knowledge-preserving Fine-tuning
Dongkyu Cho, Jinseok Yang, Jun Seo, Seohui Bae, Dongwan Kang, Hyeokjun Choe, Woohyung Lim
ICLR 2024, Workshop on Mathematical and Empirical Understanding of Foundation Models
[W1] Learning to ignore: Single Source Domain Generalization via Oracle Regularization
Dongkyu Cho, Sanghack Lee
NeurIPS 2023, Causal Representation Learning Workshop
[P4] Towards Factual Measure of Verifiable Reasoning
[P3] Tree of Concepts: Towards Interpretable Continual Learning Models (Working Paper with NYU Langone)
[P2] Towards Clinical Reasoning Models: Verifiable Rewards via Expert Supervision
EDUCATION
[Ph.D.] Doctor of Philosophy, New York University
Courant Institute School of Mathematics, Computing, and Data Science
Ph.D. in Computer Science (September 2024 ~ May 2029)
Advisor: Professor Rumi Chunara
Field of Research: Model Generalization, Foundation Models, Causality
[MS] Master of Science, Seoul National University
Master of Data Science (March 2021 ~ August 2023)
Advisor: Professor Sanghack Lee
Field of Research: Causality, Causal Representation Learning
[BA] Bachelor of Arts, Seoul National University
Information Science & Culture / Western History (March 2014 ~ February 2021)
Field of Research: Quantitative Historical Research
EXPERIENCE
Amazon Science - Applied Scientist Intern (May 2025 ~ )
Applied Scientist Intern at Amazon, Worldwide Trust.
Advised by Prof. Hengrui Cai and Dr. Rui Song
LG AI Research - Research Scientist Intern (July 2023 ~ July 2024)
Research Scientist Intern at LG AI Research, Data Intelligence Lab
Research Field: Time-Series Foundation Models, Loss Landscapes and Model Merging, Alignment of Large Language Models for Time-Series Forecasting, LLM-driven Causal Discovery
Causality Lab, SNU GSDS - Research Assistant (July 2021 ~ August 2023)
Research Assistant at Causality Lab, Seoul National University GSDS
Research Field: Leveraging causality for effective Out-of-Distribution Generalization
VAIV Company - Analyst Intern (January 2019 ~ February 2019)
Analyst Intern at VAIV Company
NLP-based Market Sentiment Analysis
TALKS
Invited Talk at NYU Center for Health Data Science (09.2025)
Invited Talk at NYU Digital Health Work (03.2025)
Invited Talk at SNU GSDS Student Seminar (05.2023)