My research lies at the intersection of applied mathematics, optimal control, and machine learning. I focus on Hamilton–Jacobi–Bellman (HJB) and Hamilton–Jacobi–Isaacs (HJI) equations that arise in stochastic optimal control, differential games, and risk-sensitive decision-making. By combining rigorous PDE theory with physics-informed neural networks (PINNs), I develop neural policy iteration frameworks that mitigate the curse of dimensionality while retaining theoretical guarantees.
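For concreteness, a representative form of this problem class (generic notation, not tied to any particular paper below) is the discounted stochastic HJB equation
\[
\lambda V(x) \;=\; \sup_{a \in A}\Big\{ f(x,a)\cdot\nabla V(x) \;+\; \tfrac{1}{2}\,\operatorname{tr}\big(\sigma\sigma^{\top}(x,a)\,\nabla^{2}V(x)\big) \;+\; r(x,a) \Big\},
\]
where \(f\), \(\sigma\), \(r\), and \(\lambda\) denote generic dynamics, diffusion, running reward, and discount rate. Policy iteration replaces this nonlinear equation by a sequence of linear PDEs: given a current policy \(\pi_k\), the evaluation step solves
\[
\lambda V^{k}(x) \;=\; f(x,\pi_k(x))\cdot\nabla V^{k}(x) \;+\; \tfrac{1}{2}\,\operatorname{tr}\big(\sigma\sigma^{\top}(x,\pi_k(x))\,\nabla^{2}V^{k}(x)\big) \;+\; r(x,\pi_k(x)),
\]
and the improvement step sets \(\pi_{k+1}(x) \in \arg\max_{a \in A}\{\cdots\}\). In the PINN-based variant, \(V^{k}\) is parameterized by a neural network trained to minimize the residual of this linear equation at sampled collocation points, which avoids spatial grids in high dimension.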
(with T. Costa, J. Cummings, M. Jenkinson, J. Martinez, N. Olivares, A. Sezginer) Fast calculation of diffraction by photomasks, IMA, University of Minnesota, 2014
Constrained Hamilton--Jacobi equations and further applications via optimal control theory, Ph.D. thesis, 2019
Well-posedness for constrained Hamilton--Jacobi equations, Acta Applicandae Mathematicae, 2020
On uniqueness for one-dimensional constrained Hamilton--Jacobi equation, Minimax Theory and its Applications, 2020
(with H. V. Tran, S. N. T. Tu) State-constraint static Hamilton--Jacobi equations in nested domains, SIAM Journal on Mathematical Analysis, 2020
(with K. Jun, I. Yang) Improved Regret Analysis for Variance-Adaptive Linear Bandits and Horizon-Free Linear Mixture MDPs, NeurIPS, 2022
(with J. Shin, A. Hakobyan, M. Park, G. Kim, I. Yang) Infusing model predictive control into meta-reinforcement learning for mobile robots in dynamic environments, IEEE Robotics and Automation Letters, 2022
(with I. Yang) On Representation Formulas for Optimal Control: A Lagrangian Perspective, IET Control Theory & Applications, 2022
(with K. Kim, I. Yang) On concentration bounds for Bayesian identification of linear non-Gaussian systems, Proceedings of the 62nd IEEE Conference on Decision and Control (CDC), 2023
(with K. Kim, I. Yang) Approximate Thompson sampling for learning linear quadratic regulators with O(\sqrt{T}) regret, Learning for Dynamics & Control Conference (L4DC, selected as an oral presentation), 2025
(with J. Jang) On a minimum eradication time for the SIR model with time-dependent coefficients, Proceedings of the AMS, 2025
(with Y. Park, M. Kim) Acceleration of grokking in learning arithmetic operations via Kolmogorov--Arnold representation, Neurocomputing, 2025
(with Y. Choi, K. Park) Deep reinforcement learning for optimal design of compliant mechanisms based on digitized cell structures, Engineering Applications of Artificial Intelligence, 2025
(with Y. Kim, M. Kim) Physics-Informed Neural Networks for optimal vaccination plan in SIR epidemic models, Mathematical Biosciences and Engineering, 2025
(with J. Lee) Hamilton--Jacobi based policy-iteration via deep operator learning, Neurocomputing, 2025
(with Y. Kim, K. Jun) Instance-dependent fixed-budget pure exploration in reinforcement learning, ICLR, 2026 (ICML EXAIT Workshop, 2025)
(with N. Cho, Y. Kim, K. Kim) Physics-informed approach for exploratory Hamilton--Jacobi--Bellman equations via policy iterations, AAAI, 2026
(with N. Cho) On the stability of Lipschitz continuous control problems and its application to reinforcement learning, submitted
(with M. Gim, H. Yang) Solving nonconvex Hamilton--Jacobi--Isaacs equations with PINN-based policy iteration, submitted
(with S. Choi, K. Kim) A diffusion-based generative model for financial time-series via geometric Brownian motion, submitted
(with N. Cho, M. Kim, Y. Kim) Neural policy iteration for stochastic optimal control: A physics-informed approach, submitted
(with K. Park, E. Kim) A human-guided generative model based on diffusion processes for topology optimization, submitted
(with D. Lee, M. Kim, S. Son) A physics-informed, global-in-time neural particle method for the spatially homogeneous Landau equation, submitted
(with M. Gim, H. Yang) Hamilton--Jacobi--Isaacs formulation of probabilistic reachable sets via mesh-free policy iteration, submitted
(with K. Lee) How diffusion shapes greedy updates: A semigroup perspective, submitted
(with S. Jeong, J. Huh) Neural policy iteration for dynamic portfolio choice with control-dependent diffusion, submitted
(with N. Cho) Policy iteration for stationary discounted Hamilton--Jacobi--Bellman equations: A viscosity approach, submitted
(with M. Kim, Y. Kim, N. Cho) Stabilized neural Hamilton--Jacobi--Bellman solvers: Error analysis and applications in model-based reinforcement learning, submitted
(with D. Kwon, G. Montufar, I. Yang) Training Wasserstein GANs without gradient penalties, preprint
(with K. Park, Y. Choi) Transferable reinforcement learning for targeted deformation shaping in functionally graded lattice structures, preprint
Ongoing research
Methodologies in Scientific AI
(with Y. Kim) Physics-informed approach for solving state-constraint Hamilton--Jacobi equations
(with Y. Park) Neural network approximation
(with M. Gim, H. Yang) FDM-PINN for stochastic differential games and Hamilton--Jacobi--Isaacs equations
From concentration to conditioning: Langevin sampling for smoothly constrained targets
(with S. Son, M. Kim, D. Lee) BGK simulator
(with J. Kim) Reaction simulator
Complex System Modeling & Simulation
(with Y. Park) Analysis of optimal vaccination policies for a controlled SIR model with state-dependent transmission and recovery rates
(with M. Kim, H. Yang) Erlang-distributed SIR model: a PINN approach
(with S. Choi, K. Kim) Generation of financial time-series via CEV process
(with J. Ahn, K. Lee, K. Lee) Discrete vs. continuous: Analyzing the impact of action space on reinforcement learning for facility layout planning, preprint