Breaking the Dimensional Barrier: Dynamic Portfolio Choice with Parameter Uncertainty via Pontryagin Projection, with Hyeng-Keun Koo
From AlphaGo to PGDPO: How Neural Networks Learn Adjoint Dynamics, with Hyeng-Keun Koo, Alain Bensoussan [GitHub]
Finite Horizon and Optimal Portfolio Choice with Stochastic Income: A Reinforcement Learning Approach, with Seyoung Park, Hojin Ko, Alain Bensoussan
Breaking the Dimensional Barrier for Continuous-Time Time-Inconsistent Control, with Hyeng-Keun Koo, Byung Hwa Lim
Dynamic RP-IPCA: Endogenizing Factor Timing in Latent Factor Models, with Seungwon Jeong
DeepONet Surrogate Modeling for Interest Rate Option Pricing, Greeks, and Robustness, with Sang-Hyun Lee, Seungwon Jeong
Deep Operator Learning for Forecasting Multi-scale Implied Volatility Surfaces, with Minji Lee
End-to-End Learning of Asset Betas for Sharpe-Optimal Portfolios, with Dongwan Shin
Model-Based Reinforcement Learning with Non-Exponential Discounting: Pontryagin-Guided Direct Policy Optimization in Continuous Time, with Hojin Ko
Model-Based Reinforcement Learning for Continuous-Time Delay Systems: A Pontryagin-Guided Direct Policy Optimization Framework, with Ji-Hun Kim
Adversarial Time-Series Domain Adaptation for Early-Stage IPO Price Prediction, with Youngwoo Lee
Beyond its current scope, PG-DPO admits natural extensions to 1) transaction costs, 2) taxation, 3) mean-field control, 4) optimal stopping, 5) Epstein-Zin (EZ) utility, and 6) multi-agent frameworks.