My research lies at the intersection of applied mathematics, optimal control, and machine learning. I focus on Hamilton–Jacobi–Bellman (HJB) and Hamilton–Jacobi–Isaacs (HJI) equations that arise in stochastic optimal control, differential games, and risk-sensitive decision-making. By combining rigorous PDE theory with physics-informed neural networks (PINNs), I develop neural policy iteration frameworks that overcome the curse of dimensionality while retaining theoretical guarantees.
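A minimal sketch of the underlying scheme, in generic notation not tied to any single paper below: for a controlled diffusion $dX_t = b(X_t, a_t)\,dt + \sigma\, dW_t$ with discount rate $\lambda > 0$ and running cost $r$, the value function $V$ solves the HJB equation
$$\lambda V(x) = \min_{a}\big\{ r(x,a) + b(x,a)\cdot \nabla V(x)\big\} + \tfrac{\sigma^2}{2}\,\Delta V(x).$$
Policy iteration decouples the nonlinearity: given a policy $a^k$, (i) solve the linear PDE $\lambda V^k = r(x,a^k) + b(x,a^k)\cdot\nabla V^k + \tfrac{\sigma^2}{2}\,\Delta V^k$ (policy evaluation), then (ii) set $a^{k+1}(x) \in \arg\min_a \{ r(x,a) + b(x,a)\cdot\nabla V^k(x)\}$ (policy improvement). In the neural variant, each $V^k$ is a network trained to minimize the residual of the linear PDE in (i) at sampled collocation points, so no spatial grid is required in high dimensions.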
(with T. Costa, J. Cummings, M. Jenkinson, J. Martinez, N. Olivares, A. Sezginer) Fast calculation of diffraction by photomasks, IMA, University of Minnesota, 2014
Constrained Hamilton--Jacobi equations and further applications via optimal control theory, Ph.D. thesis, 2019
Well-posedness for constrained Hamilton--Jacobi equations, Acta Applicandae Mathematicae, 2020
On uniqueness for the one-dimensional constrained Hamilton--Jacobi equation, Minimax Theory and its Applications, 2020
(with H. V. Tran and S. N. T. Tu) State-constraint static Hamilton--Jacobi equations in nested domains, SIAM Journal on Mathematical Analysis, 2020
(with K. Jun, I. Yang) Improved Regret Analysis for Variance-Adaptive Linear Bandits and Horizon-Free Linear Mixture MDPs, NeurIPS, 2022
(with J. Shin, A. Hakobyan, M. Park, G. Kim, I. Yang) Infusing model predictive control into meta-reinforcement learning for mobile robots in dynamic environments, IEEE Robotics and Automation Letters, 2022
(with I. Yang) On Representation Formulas for Optimal Control: A Lagrangian Perspective, IET Control Theory & Applications, 2022
(with K. Kim, I. Yang) On concentration bounds for Bayesian identification of linear non-Gaussian systems, Proceedings of the 62nd IEEE Conference on Decision and Control (CDC), 2023
(with K. Kim, I. Yang) Approximate Thompson sampling for learning linear quadratic regulators with O(\sqrt{T}) regret, Learning for Dynamics and Control Conference (L4DC), 2025
(with J. Jang) On a minimum eradication time for the SIR model with time-dependent coefficients, Proceedings of the AMS, 2025
(with Y. Park, M. Kim) Acceleration of grokking in learning arithmetic operations via Kolmogorov--Arnold representation, Neurocomputing, 2025
(with Y. Choi, K. Park) Deep reinforcement learning for optimal design of compliant mechanisms based on digitized cell structures, Engineering Applications of Artificial Intelligence, 2025
(with Y. Kim, M. Kim) Physics-Informed Neural Networks for optimal vaccination plan in SIR epidemic models, Mathematical Biosciences and Engineering, 2025
(with J. Lee) Hamilton--Jacobi based policy-iteration via deep operator learning, Neurocomputing, 2025
(with Y. Kim, K. Jun) Instance-dependent fixed-budget pure exploration in reinforcement learning, ICLR, 2026 (ICML EXAIT Workshop, 2025)
(with N. Cho, Y. Kim, K. Kim) Physics-informed approach for exploratory Hamilton--Jacobi--Bellman equations via policy iterations, AAAI, 2026
(with N. Cho) On the stability of Lipschitz continuous control problems and its application to reinforcement learning, under review
(with M. Gim, H. Yang) Solving nonconvex Hamilton--Jacobi--Isaacs equations with PINN-based policy iteration, under review
(with S. Choi, K. Kim) A diffusion-based generative model for financial time-series via geometric Brownian motion, under review
(with N. Cho, Y. Kim) Neural policy iteration for stochastic optimal control: A physics-informed approach, under review
(with K. Park, E. Kim) Human-in-the-loop diffusion for AI-driven topology optimization, under review
Preprints and projects in progress
(with D. Kwon, G. Montufar, I. Yang) Training Wasserstein GANs without gradient penalties
(with J. Ahn, K. Lee, K. Lee) Action-space design for stable reinforcement learning in office layout optimization
(with D. Lee, M. Kim, S. Son) Physics-informed approach for solving the space-homogeneous Landau equation
(with Y. Park) Analysis of optimal vaccination policies for a controlled SIR model with state-dependent transmission and recovery rates
(with K. Park, Y. Choi) Flexible functionally graded lattice structure via reinforcement learning
(with S. Choi, K. Kim) Generation of financial time-series via CEV process
(with N. Cho, M. Kim) Physics-informed neural network for model-based reinforcement learning
(with M. Gim, H. Yang) Stochastic reachability via neural policy iteration
(with M. Gim, H. Yang) FDM-PINN for stochastic differential games and Hamilton--Jacobi--Isaacs equations
(with Y. Kim) Physics-informed approach for solving state-constraint Hamilton--Jacobi equations
(with M. Yoo) TBA