Abstracts and Titles

Faryan Amir-Ghassemi:

Hedge Fund Aggregate Performance and Best Ideas

Abstract: Our research finds that an aggregate portfolio of actively managed hedge funds has historically delivered abnormal excess return relative to the market portfolio. However, this abnormal return declined significantly after the 2008 financial crisis. We show this by analyzing over twenty years of regulator-mandated, holdings-level data across an index of thousands of hedge fund firms filing domestic equity holdings with the SEC (Form 13F). This data source lets us avoid the common biases that plague hedge fund return analysis based on self-reported databases. By analyzing holdings rather than reported returns, we can further decompose the security selection of this universe of hedge funds through the lens of canonical ‘Best Ideas’ methodologies. We demonstrate that the abnormal return of hedge fund ‘Best Ideas’ relative to the funds’ aggregate portfolios is insignificant. These findings contrast with similar studies of mutual funds.

Bahman Angoshtari:

“Predictable Forward Performance Processes”

Abstract: Predictable Forward Performance Processes (PFPPs) are stochastic control frameworks for an agent who controls a dynamically evolving system but can only prescribe the system dynamics for a short period ahead. This is a common scenario in which the controller must periodically re-calibrate her model for the underlying system. In an optimal investment setting with a classical expected utility objective, and assuming that a complete market model is re-calibrated, we show that the construction of PFPPs reduces to a one-period problem involving an integral equation, which is then solved by the Fourier transform. We also discuss a generalization to rank-dependent objectives and show that the corresponding integral equation reduces to a linear (abstract) Volterra equation. This is joint work with Shida Duan.


Anna Bykhovskaya:

"High-Dimensional Canonical Correlation Analysis" 

Abstract: In the talk I will discuss high-dimensional canonical correlation analysis (CCA), with an emphasis on the vectors that define the canonical variables. I will show that when the two dimensions of the data grow to infinity jointly and proportionally, the classical CCA procedure fails to deliver a consistent estimate of those vectors. This provides the first result on the impossibility of identifying the canonical variables in the CCA procedure when all dimensions are large. As a remedy, I will derive the magnitude of the estimation error, which can be used in practice to assess the precision of CCA estimates. Finally, I will show an application of these results to the analysis of cyclical vs. non-cyclical stocks.
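For reference, the classical (fixed-dimension) CCA procedure whose high-dimensional behavior is discussed above can be computed by whitening each data block and taking an SVD of the cross-covariance. A minimal sketch, with synthetic data whose dimensions and single common factor are invented purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 500, 3, 3

# A latent common factor z induces correlation between the two data sets.
z = rng.standard_normal((n, 1))
X = z @ rng.standard_normal((1, p)) + rng.standard_normal((n, p))
Y = z @ rng.standard_normal((1, q)) + rng.standard_normal((n, q))

def cca(X, Y):
    """Classical CCA: whiten each block, then SVD the cross-covariance."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Sxx = Xc.T @ Xc / len(X)
    Syy = Yc.T @ Yc / len(Y)
    Sxy = Xc.T @ Yc / len(X)

    def inv_sqrt(S):
        # Inverse square root via the eigendecomposition of a covariance.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    U, s, Vt = np.linalg.svd(inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy))
    # s: canonical correlations; the columns below are the canonical vectors.
    return s, inv_sqrt(Sxx) @ U, inv_sqrt(Syy) @ Vt.T

rho, a, b = cca(X, Y)
print(rho)  # one strong canonical correlation from the shared factor
```

In the proportional-growth regime of the talk, it is exactly the vectors `a` and `b` from this procedure that fail to be consistent.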


Igor Cialenco:

“On time consistency of dynamic risk and performance measures generated by distortion functions”

Abstract: We extend the notion of risk measures generated by distortion functions to the dynamic, discrete-time setup. Consequently, using dual representations, we define coherent acceptability indices – a special class of performance measures – generated by families of dynamic risk measures that are themselves generated by distortion functions. In this talk:


This is joint work with T. R. Bielecki and H. Liu.


Gökçe Dayanıklı:

“Mean Field Models to Regulate Carbon Emissions in Electricity Production”

Abstract: The most serious threat to ecosystems is global climate change, fueled by the uncontrolled increase in carbon emission levels. In this talk, we introduce a game between a regulator who controls carbon tax levels and a mean field of electricity producers. We first introduce the model of the electricity producers, who choose how much to invest in renewable energy (a fixed decision) and how much nonrenewable energy to use in electricity production (a time-dependent decision). Producers face a trade-off between higher revenue from electricity production and the negative effects of carbon emissions on the environment when deciding how much cheap, reliable, but polluting nonrenewable energy versus clean but intermittent renewable energy to use in production. We compare these decisions in two different settings, where the producers are competitive (Nash equilibrium) or cooperative (social optimum). We show that both the mean field Nash equilibrium and the mean field social optimum can be characterized by a nonstandard forward-backward stochastic differential equation system, and we discuss the existence and uniqueness of solutions. Later, we introduce the regulator who controls the carbon tax levels and analyze the Stackelberg equilibrium between the regulator and the mean field agents. (This is joint work with René Carmona and Mathieu Laurière.)


Ibrahim Ekren:

"Second order PDEs on the Wasserstein space"

Abstract: We prove a comparison result for viscosity solutions of second order parabolic partial differential equations in the Wasserstein space. The comparison is valid for semisolutions that are Lipschitz continuous in the measure, in a Fourier-Wasserstein metric, and uniformly continuous in time. The class of equations we consider is motivated by McKean-Vlasov control problems with common noise and by filtering problems. We also mention applications to prediction problems with expert advice. The proof of comparison relies on a novel version of Ishii's lemma, tailor-made for the class of equations we consider.


Zach Feinstein:

“Implied Volatility of the Constant Product Market Maker in Decentralized Finance”

Abstract: Automated Market Makers (AMMs) are a decentralized approach to creating financial markets: investors fund liquidity pools of assets against which traders can transact. Liquidity providers are compensated for making the market with fees on transactions. The collected fees, along with the final value of the pooled portfolio, act as a derivative of the underlying assets, with price given by the pooled assets. Following this notion, we study the implied volatility constructed from the constant product market maker.
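For context, a constant product market maker keeps the product of its two reserves fixed across trades, which pins down the payout of every swap. A minimal Python sketch of the mechanism (pool sizes, fee level, and asset names are illustrative, not taken from the talk):

```python
# Hypothetical pool reserves; all numbers are purely illustrative.
x, y = 1_000.0, 2_000_000.0    # e.g. 1000 units of X against 2,000,000 of Y
k = x * y                      # the constant product invariant x * y = k
fee = 0.003                    # 0.3% fee retained by liquidity providers

def swap_x_for_y(dx, x, y, fee=0.003):
    """Trade dx of asset X into the pool; the invariant determines the payout."""
    dx_net = dx * (1 - fee)    # the fee is skimmed off the input amount
    new_x = x + dx_net
    new_y = x * y / new_x      # keep x * y constant on the net amounts
    return y - new_y           # amount of Y paid out to the trader

dy = swap_x_for_y(10.0, x, y)
print(dy / 10.0)  # effective price per unit, below the marginal price y/x
```

The fee income plus the terminal pool value give the liquidity provider a path-dependent payoff in the underlying, which is the derivative whose implied volatility the talk studies.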


Qi Feng:

“Deep Signature Algorithm for Multi-dimensional Path-Dependent Options”

Abstract: In this talk, I will present deep signature algorithms for pricing path-dependent options. We extend the backward scheme in [Huré-Pham-Warin, Mathematics of Computation 89, no. 324 (2020)] for state-dependent FBSDEs with reflections to path-dependent FBSDEs with reflections by adding a signature layer to the backward scheme. Our algorithm applies to both European- and American-type option pricing problems in which the payoff function depends on the whole path of the underlying forward stock process. We provide a convergence analysis of our numerical algorithm, with explicit dependence on the truncation order of the signature and on the neural network approximation errors. Numerical examples are provided, including an Amerasian option under the Black-Scholes model, an American option with a path-dependent geometric mean payoff function, and Shiryaev's optimal stopping problem. In the end, I will discuss generalizations and future work.
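The signature layer mentioned above feeds iterated integrals of the path into the network. As a minimal illustration of the underlying object (not the authors' implementation), the signature truncated at level two can be accumulated segment by segment via Chen's identity:

```python
import numpy as np

def signature_level2(path):
    """Level-2 truncated signature of a piecewise-linear path.

    path: array of shape (T, d). Returns (S1, S2), where S1 is the total
    increment and S2 the matrix of second iterated integrals, built up
    one linear segment at a time via Chen's identity.
    """
    d = path.shape[1]
    S1 = np.zeros(d)
    S2 = np.zeros((d, d))
    for dx in np.diff(path, axis=0):
        # Concatenating a linear segment: S2 gains S1 (x) dx + dx (x) dx / 2.
        S2 += np.outer(S1, dx) + 0.5 * np.outer(dx, dx)
        S1 += dx
    return S1, S2

# Sanity check: for a straight-line path the level-2 term collapses to
# outer(S1, S1) / 2, independently of how finely the line is sampled.
t = np.linspace(0.0, 1.0, 50)[:, None]
line = t * np.array([[2.0, -1.0]])
S1, S2 = signature_level2(line)
print(S1, S2)
```

The antisymmetric part of `S2` is the Lévy area, the first genuinely path-dependent feature the truncated signature contributes.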


Camilo Hernández:

“Propagation of chaos for Schrödinger problems with interacting particles”

Abstract: The mean field Schrödinger problem (MFSP) is the problem of finding the most likely path of a McKean-Vlasov type particle with constrained initial and final configurations. It was first introduced by Backhoff et al. (2020), who studied its existence and long-time behavior. This talk aims to show how ideas from propagation of chaos for backward particle systems allow us to derive the MFSP as the (large population) limit of a sequence of classical Schrödinger problems among finite (but interacting) particles. The method rests upon the study of suitably penalized problems using stochastic control techniques, and it further allows us to derive other interesting results on the MFSP. This talk is based on a joint work with Ludovic Tangpi.


Emma Hubert:

“Stackelberg games and moral hazard with constraints: a stochastic target approach”

Abstract: In this talk, we provide a general approach to reformulating any stochastic Stackelberg differential game as a single-level optimisation problem with a target constraint. More precisely, by considering the backward stochastic differential equation associated with the continuation utility of the follower as a controlled state variable for the leader, in the spirit of what is done in principal-agent problems, we are able to rewrite the leader’s unconventional stochastic control problem as a more standard stochastic control problem, albeit with stochastic target constraints. Then, using the methodology developed by Soner and Touzi (2002) [2] or Bouchard, Élie, and Imbert (2010) [1], the optimal strategies, as well as the corresponding value of the Stackelberg equilibrium, can be obtained by solving a well-specified system of Hamilton-Jacobi-Bellman equations. We will illustrate these results through a simple example and briefly explain how this approach can also be used in principal-agent problems to incorporate constraints on the terminal payment.


Joint work with Camilo Hernández, Nicolás Hernández-Santibáñez and Dylan Possamaï

[1] B. Bouchard, R. Élie, and C. Imbert. Optimal control under stochastic target constraints. SIAM Journal on Control and Optimization, 48(5):3501–3531, 2010.

[2] H. Soner and N. Touzi. Dynamic programming for stochastic target problems and geometric flows. Journal of the European Mathematical Society, 4(3):201–236, 2002.


Adriana Ocejo Monge:

“The effect of fees on optimal allocation and utility of payoff with financial guarantees”

Abstract: We propose a novel fee structure for a class of insurance contracts with market exposure and financial guarantees at maturity. The problem is to modify the typical investment strategy used in this financial product by incorporating an investment-dependent fee structure, with the goal of increasing the policyholder’s utility of payout while keeping the contract fairly priced for the insurer. This yields a constrained, non-concave utility maximization problem. We solve the associated constrained stochastic control problem using a martingale approach and analyze the impact of the fee structure on the optimal investment strategies and the utility of payout. Numerical results show that it is possible to find an optimal portfolio for a wide range of fees while keeping the contract fairly priced.


Silvana Pesenti:

“Uncertainty Propagation and Dynamic Robust Risk Measures”

Abstract: We introduce a framework for quantifying the propagation of uncertainty arising in a dynamic setting. Specifically, we define dynamic uncertainty sets designed explicitly for discrete stochastic processes over a finite time horizon. These dynamic uncertainty sets capture the uncertainty surrounding stochastic processes and models, accounting for factors such as distributional ambiguity. Examples of uncertainty sets include those induced by the Wasserstein distance and by $f$-divergences.


We further define dynamic robust risk measures as the supremum of the risks of all candidates within the uncertainty set. In an axiomatic way, we discuss conditions on the uncertainty sets that lead to well-known properties of dynamic robust risk measures, such as convexity and coherence. Furthermore, we discuss necessary and sufficient properties of dynamic uncertainty sets that lead to time-consistency of robust dynamic risk measures. We find that uncertainty sets stemming from $f$-divergences lead to strong time-consistency, while the Wasserstein distance results in a new notion of non-normalised time-consistency. Moreover, we show that a dynamic robust risk measure is strong or non-normalised time-consistent if and only if it admits a recursive representation in terms of one-step conditional robust risk measures arising from static uncertainty sets.


This is joint work with Marlon Moresco and Melina Mailhot, Concordia University.


Konstantinos Spiliopoulos: 

“Normalization effects and mean field theory for deep neural networks” 

Abstract: We study the effect of normalization on the layers of deep neural networks. A given layer $i$ with $N_{i}$ hidden units is allowed to be normalized by $1/N_{i}^{\gamma_{i}}$ with $\gamma_{i}\in[1/2,1]$, and we study the effect of the choice of the $\gamma_{i}$ on the statistical behavior of the neural network’s output (such as its variance) as well as on the test accuracy on the MNIST and CIFAR10 data sets. We find that, in terms of the variance of the neural network’s output and the test accuracy, the best choice is to set the $\gamma_{i}$’s equal to one, which is the mean-field scaling. We also find that this is particularly true for the outer layer: the neural network’s behavior is more sensitive to the scaling of the outer layer than to the scaling of the inner layers. The mechanism for the mathematical analysis is an asymptotic expansion for the neural network’s output and a corresponding mean field analysis. An important practical consequence of the analysis is that it provides a systematic and mathematically informed way to choose the learning rate hyperparameters. Such a choice guarantees that the neural network behaves in a statistically robust way as the $N_i$ grow to infinity.
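The effect of the scaling choice is easy to see empirically in a single random layer. The sketch below (width, activation, and repetition counts are all illustrative choices, not the paper's experiments) estimates the output variance across random initialisations under γ = 1/2 and under the mean-field choice γ = 1:

```python
import numpy as np

rng = np.random.default_rng(1)

def output_variance(N, gamma, x=1.0, reps=2000):
    """Variance, across random initialisations, of a one-hidden-layer
    network sum_i c_i * tanh(w_i * x), scaled by 1 / N**gamma."""
    c = rng.standard_normal((reps, N))
    w = rng.standard_normal((reps, N))
    out = (c * np.tanh(w * x)).sum(axis=1) / N ** gamma
    return out.var()

# gamma = 1/2 keeps the output variance O(1) as the width N grows;
# gamma = 1 (the mean-field scaling) makes it shrink like 1/N.
v_half = [output_variance(N, 0.5) for N in (100, 1000)]
v_one = [output_variance(N, 1.0) for N in (100, 1000)]
print(v_half, v_one)
```

The vanishing output variance under γ = 1 is the concentration that the mean-field analysis in the talk exploits.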


Kim Weston:

“A multi-agent targeted trading equilibrium with transaction costs”

Abstract: We prove the existence of a continuous-time Radner equilibrium with multiple agents and transaction costs. The agents are incentivized to trade towards a targeted number of shares throughout the trading period and seek to maximize their expected wealth minus a penalty for deviating from their targets. Their wealth is further reduced by transaction costs that are proportional to the number of stock shares traded. The agents’ targeted numbers of shares are publicly known. In equilibrium, each agent optimally chooses to trade for an initial time interval before stopping trading. Our equilibrium construction and analysis involve identifying the order in which the agents stop trading. The transaction cost level impacts the equilibrium stock price drift. We analyze the equilibrium outcomes and provide numerical examples. This model provides the first example of an equilibrium with proportional transaction costs and an arbitrary finite number of agents.


This work is joint with Jin Hyuk Choi and Jetlir Duraj.


Jiongmin Yong:

“Stochastic Linear-Quadratic Optimal Control Problems in Large Time Horizons --- Turnpike Properties”

Abstract: For deterministic optimal control problems over very large time horizons (either finite- or infinite-dimensional), under proper conditions, the optimal pair will stay near the solution of a suitable static optimization problem. This phenomenon is called the “turnpike” property. The static optimization problem usually takes the running cost rate as its objective function, constrained to the equilibria of the vector field of the state equation. For stochastic problems, however, mimicking the idea of the deterministic problems leads to an incorrect static optimization problem. In this talk, we will look at stochastic linear-quadratic optimal control problems over large time horizons. We will correctly formulate the proper limit optimization problem and establish the turnpike properties of the corresponding stochastic linear-quadratic problem.


Bin Zou:

“Equilibrium Loss Reporting for a Risk-Averse Insured of Deductible Insurance”

Abstract: We consider a risk-averse insured who purchases a deductible insurance contract and follows a barrier strategy to decide whether to report a loss. The insurer adopts a bonus-malus system with two rate classes in pricing, and the insured moves to or stays in the more expensive class upon reporting a loss. When the deductibles are exogenously given, we establish a necessary and sufficient condition under which the insured will underreport losses, and we obtain her equilibrium barrier strategy in semi-closed form. Next, we allow the insured to choose the deductibles of her insurance contract and show that the equilibrium deductibles are strictly positive, suggesting that full insurance, often assumed in the related literature, is not optimal. Our study provides a theoretical justification for the prevalent phenomenon of underreporting losses across non-life insurance sectors.


Student Presentations


Vedant Choudhary:

“FuNVol: A Multi-Asset Implied Volatility Market Simulator using Functional Principal Components and Neural SDEs”

Abstract: We introduce a novel multi-asset market simulator for generating sequences of implied volatility (IV) surfaces and the corresponding equity price paths that is faithful to historical data. We do so using functional data analysis and neural stochastic differential equations (SDEs), combined with a probability integral transform penalty to reduce model misspecification. The simulator leverages functional principal component (FPC) analysis to decompose the implied volatility surfaces into a set of FPCs that represent the stylized facts while capturing the maximum variability. The neural SDEs model the stochastic evolution of the FPC coefficients over time, allowing for realistic and flexible volatility simulation. The neural network architecture embedded within the SDE framework is trained on historical data, enabling it to learn and replicate complex patterns.

The performance of FuNVol is extensively evaluated using real-world financial data. We demonstrate that learning the joint dynamics of IV surfaces and prices produces market scenarios that are consistent with historical features and lie within the sub-manifold of surfaces that are essentially free of static arbitrage, without the need to explicitly impose such constraints. Moreover, the simulator showcases its versatility by simulating multi-asset scenarios, allowing for comprehensive analysis of inter-asset volatility dependencies.
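On a discretised strike-maturity grid, the FPC decomposition of a panel of surfaces reduces in practice to an SVD of the centred data matrix. A minimal sketch on synthetic surfaces (the grid, the level/skew factors, and all magnitudes are invented for illustration and bear no relation to the data used in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a history of IV surfaces: T days, each surface
# flattened onto a grid of G strike-maturity points.
T, G = 250, 40
grid = np.linspace(-1.0, 1.0, G)
level = 0.2 + 0.05 * rng.standard_normal((T, 1))   # day-to-day level moves
skew = 0.03 * rng.standard_normal((T, 1))           # day-to-day skew moves
surfaces = level + skew * grid + 0.002 * rng.standard_normal((T, G))

# Functional PCA on the grid = SVD of the centred data matrix.
mean_surface = surfaces.mean(axis=0)
U, s, Vt = np.linalg.svd(surfaces - mean_surface, full_matrices=False)
scores = U * s                    # time series of FPC coefficients
explained = s**2 / (s**2).sum()   # variance captured by each component
print(explained[:3])              # two components dominate: level and skew
```

It is the low-dimensional time series in `scores` that FuNVol then models with neural SDEs, rather than the full surface.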


April Nellis:

"A neural network approach to high-dimensional optimal switching problems with jumps in energy markets"

 

Abstract: We develop a backward-in-time machine learning algorithm that uses a sequence of neural networks to solve optimal switching problems in energy production, where electricity and fossil fuel prices are subject to stochastic jumps. We then apply this algorithm to a variety of energy scheduling problems, including novel high-dimensional energy production problems. Our experimental results show that the algorithm performs accurately and experiences only linear to sub-linear slowdowns as the dimension increases, demonstrating its value for solving high-dimensional switching problems.


This is joint work with Dr. Bayraktar and Dr. Cohen.

 

Lu Vy:

“A unified approach to informed trading via Monge-Kantorovich duality”

Abstract: We solve a generalized Kyle-model-type problem using Monge-Kantorovich duality and backward stochastic partial differential equations. First, we show that the generalized Kyle model with dynamic information can be recast as a terminal optimization problem with distributional constraints; the theory of optimal transport between spaces of unequal dimension therefore arises as a natural tool. Second, the pricing rule of the market maker and an optimality criterion for the problem of the informed trader are established using the Kantorovich potentials and transport maps. Finally, we completely characterize the optimal strategies by analyzing the filtering problem from the market maker's point of view. In this context, the Kushner-Zakai filtering SPDE leads to an interesting backward stochastic partial differential equation whose measure-valued terminal condition comes from the optimal coupling of measures.


Kevin Zhang:

“A Probabilistic Approach to Discounted Infinite Horizon Mean-Field Games”


Abstract: We study discounted infinite horizon mean field games. In particular, we consider the probabilistic weak formulation of the game, as introduced in [1] for the finite horizon case. We prove the existence of solutions to both the extended and non-extended versions of the mean field game under assumptions similar to those of the finite horizon case. The key idea is to construct local versions of the stable topologies usually considered in the finite horizon case. Uniqueness follows under standard Lasry-Lions monotonicity assumptions. Furthermore, we show how sequences of mean field games on finite time horizons approximate the infinite horizon game. Interestingly, under Lasry-Lions monotonicity, we can quantify the rate of convergence to the infinite horizon game using a newly found stability result for mean field games. The talk is based on joint work with René Carmona and Ludovic Tangpi.

 

 

[1] Carmona, René, and Daniel Lacker. “A probabilistic weak formulation of mean field games and applications.” The Annals of Applied Probability 25, no. 3 (2015): 1189–1231. http://www.jstor.org/stable/24520471.


Student Posters

Andrei Kulumbetov:

A Cointegration-Based Algorithm for Optimal Portfolio Construction


Abstract: In this project, an algorithm for optimal portfolio construction is proposed. At its core, the algorithm uses k-means clustering with the DTW metric, along with cointegration-based selection of assets. The time series are smoothed beforehand using regression splines. The final portfolio is optimized using Markowitz theory and the Sharpe ratio, and the Omega ratio is used to measure the algorithm's performance on a test set. The results show that the algorithm can produce portfolios that remain profitable at least one year ahead. The algorithm was also applied to assess the impact of the Russian-Ukrainian war on market predictability: it generated a profitable portfolio on the pre-invasion training set and an unprofitable one on the post-invasion training set.
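The final Markowitz/Sharpe step can be sketched in a few lines: given an estimated mean vector and covariance matrix for the pre-selected assets, the unconstrained maximum-Sharpe (tangency) portfolio is proportional to Σ⁻¹μ. The returns below are simulated and purely illustrative; they are not the project's data, and the asset count is arbitrary:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative daily returns for 4 assets assumed to have passed the
# clustering / cointegration selection step (entirely synthetic).
mu_true = np.array([0.0005, 0.0004, 0.0003, 0.0002])
R = mu_true + 0.01 * rng.standard_normal((5000, 4))

mu = R.mean(axis=0)
Sigma = np.cov(R, rowvar=False)

# Tangency portfolio with zero risk-free rate: weights proportional to
# Sigma^{-1} mu, normalised to sum to one.
w = np.linalg.solve(Sigma, mu)
w /= w.sum()

sharpe = (w @ mu) / np.sqrt(w @ Sigma @ w)
print(w, sharpe * np.sqrt(252))   # weights and annualised Sharpe ratio
```

In the project, the out-of-sample quality of such weights is then judged by the Omega ratio on a held-out test window rather than by the in-sample Sharpe ratio.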


Soham Mudalgikar:

"Behavior of Largest Eigenvalue of Sample Correlation and Moment Matching"

Abstract: The largest eigenvalue of a sample correlation matrix derived from a matrix with independent and identically distributed (i.i.d.) Gaussian entries converges to the Tracy-Widom distribution as both matrix dimensions approach infinity with their ratio approaching a positive constant. Additionally, refining the centering and scaling parameters used for normalization enhances numerical accuracy when the sample size is small. Employing moment matching between the sample covariance matrix of a multivariate t-distribution and a Wishart matrix offers further improvement in the convergence of the largest eigenvalue of the normalized sample correlation matrix, which can be used to set up a hypothesis test for estimating the rank of the low-rank (information) part of an information-plus-noise matrix.
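A quick simulation illustrates the first claim: centering and scaling the largest eigenvalue of a Gaussian sample correlation matrix with the classical Wishart (Johnstone-type) constants produces values roughly matching the Tracy-Widom (TW1) law, whose mean is about −1.21 and standard deviation about 1.27. The dimensions and repetition count below are illustrative, and the refined constants studied in the poster would improve on this crude normalization:

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 400, 100   # sample size and dimension, chosen for illustration

def normalised_lmax(n, p, reps=200):
    """Largest eigenvalue of the sample correlation matrix, centred and
    scaled with the classical Wishart-type constants."""
    mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
    sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (
        1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
    out = []
    for _ in range(reps):
        X = rng.standard_normal((n, p))
        R = np.corrcoef(X, rowvar=False)        # p x p sample correlation
        lmax = np.linalg.eigvalsh(R)[-1]        # eigenvalues in ascending order
        out.append((n * lmax - mu) / sigma)     # approximately TW1
    return np.array(out)

z = normalised_lmax(n, p)
print(z.mean(), z.std())  # should sit near the TW1 mean and s.d.
```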


Md Arafatur Rahman:

A Deep Learning Scheme for Discrete-Time Control Problems


Abstract: We have developed a Deep Neural Network (DNN) algorithm designed to solve optimal control problems in discrete time. We tested our algorithm on the optimal execution problem in a limit order book proposed by Obizhaeva and Wang in 2006. With linear price impact and a small number of time steps, our DNN prediction is almost identical to the closed-form solution. Furthermore, our algorithm agrees with a field estimate for the optimal execution and is adaptable for use in both deterministic and stochastic control problems. It also provides numerous advantages over traditional numerical optimization methods in scenarios involving non-convex optimization, non-linear price impact, time-varying resilience, and time-varying liquidity. Currently, we are working on a multiscale version of our scheme aimed at significantly reducing training time for larger numbers of time steps.
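For context, the continuous-time Obizhaeva-Wang benchmark that the DNN is compared against has a well-known closed form: a block trade at the start, a block trade at the end, and a constant trading rate in between. The sketch below writes that schedule down under our recollection of the standard result, with all parameter values purely illustrative:

```python
import numpy as np

# Obizhaeva-Wang closed form (continuous-time version, as standardly
# stated): to acquire X shares over [0, T] with exponential resilience
# rho, trade blocks of size X / (rho*T + 2) at t = 0 and t = T, and a
# constant rate rho*X / (rho*T + 2) in between.
X, T, rho = 100_000.0, 1.0, 2.0

block = X / (rho * T + 2)
rate = rho * X / (rho * T + 2)

# Discretise onto N+1 trading times (half-interval convention at the
# endpoints) to compare against a discrete-time numerical policy.
N = 10
tau = T / N
trades = np.full(N + 1, rate * tau)
trades[0] = block + rate * tau / 2
trades[-1] = block + rate * tau / 2
print(trades)  # U-shaped schedule; the entries sum to the target X
```

The hallmark of the solution is this U-shape: large endpoint blocks with a flat interior, which a discrete-time DNN policy should reproduce as the step count grows.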

We gratefully acknowledge support from the National Science Foundation.