>> PG-DPO
Pontryagin-Projected Schrödinger Bridges
Iterative Pontryagin-Guided Policy Optimization: Achieving Convergence and Stability via Fixed-Point BPTT, with Seungwon Jeong
Breaking the Dimensional Barrier in Dynamic Portfolio Choice with Transaction Costs
Finite Horizon and Optimal Portfolio Choice with Stochastic Income: A Reinforcement Learning Approach, with Seyoung Park, Hojin Ko, Alain Bensoussan
Scalable Deep Hedging: Breaking the Curse of Dimensionality in High-Dimensional Portfolios, with Seungho Na
>> DeepONet
Deep Operator Learning for Forecasting Multi-scale Implied Volatility Surfaces, with Minji Lee
Learning the Black-Scholes Operator: Handling Time-Dependent Parameters via Deep Neural Operators, with Myeongsik Kim
Pricing the Portfolio Cube: Deep Operator Learning for Dynamic Structured Product Books, with Yoonyoung Byun
Universal Deep Hedging: A Deep Operator Learning Approach, with Seungho Na
>> Others
Physics-Informed Deep Operator Learning for Finite-Horizon Stochastic Optimal Control, with Seungwon Jeong, Yeoneung Kim
End-to-End Learning of Asset Betas for Sharpe-Optimal Portfolios, with Dongwan Shin
Adversarial Time-Series Domain Adaptation for Early-Stage IPO Price Prediction, with Youngwoo Lee
Beyond its current scope, PG-DPO admits natural extensions to partial equilibrium settings (including transaction costs, taxation, rough volatility, optimal stopping, Epstein-Zin utility, and belief-state dynamics) as well as to general equilibrium frameworks such as mean-field control and multi-agent games.