>> PG-DPO
Breaking the Dimensional Barrier in Dynamic Portfolio Choice with Transaction Costs, with Hojin Ko
Breaking the Dimensional Barrier in Tax-Efficient Dynamic Portfolio Choice, with Seungwon Jeong
Finite Horizon and Optimal Portfolio Choice with Stochastic Income: A Reinforcement Learning Approach, with Seyoung Park, Hojin Ko, Alain Bensoussan
Pontryagin-Guided Deep Hedging for Large-Scale Portfolios, with Seungho Na
>> DeepONet
Learning the Black-Scholes Operator: Handling Time-Dependent Parameters via Deep Neural Operators, with Myeongsik Kim
Pricing the Portfolio Cube: Deep Operator Learning for Dynamic Structured Product Books, with Yoonyoung Byun
Deep Operator Learning for Forecasting Multi-scale Implied Volatility Surfaces, with Minji Lee
>> Others
End-to-end Learning of Asset Betas for Sharpe-Optimal Portfolios, with Dongwan Shin
PINN-Policy Iteration for Merton-Type Diffusion Control with Volatility Control, with Seungwon Jeong, Yeoneung Kim
Scalable Dynamic Portfolio Allocation via Physics-Informed Neural Networks, with Seungwon Jeong, Yeoneung Kim
Adversarial Time-Series Domain Adaptation for Early-Stage IPO Price Prediction, with Youngwoo Lee
Beyond its current scope, PG-DPO admits natural extensions to partial equilibrium settings (including transaction costs, taxation, rough volatility, optimal stopping, Epstein-Zin utility, smooth ambiguity, and belief-state dynamics), as well as to general equilibrium frameworks such as mean-field control and multi-agent games.