10th Western Conference on Mathematical Finance

All times are in Pacific Time (PT).

Friday, January 15

2:00 - 2:10 p.m.

Opening


2:10 - 3:10 p.m.

Darrell Duffie (Stanford University)

Using Exchange Prices for Off-Exchange Trade is Inefficient

In our modeled setting, we show that the now-common practice of size discovery detracts from overall financial market efficiency. A continually operating exchange uses double auctions to discover prices and clear markets. At each of a series of size-discovery sessions, efficient asset allocations are achieved using terms of trade that are based on the most recent exchange price. Traders can mitigate their exchange price impacts by waiting for size-discovery sessions. This waiting causes socially costly delays in the rebalancing of asset positions across traders. As the frequency of size-discovery sessions is increased, exchange market depth is further lowered and position rebalancing is further delayed, more than offsetting the gains from trade that occur at each of the size-discovery sessions.

From joint work with Sam Antill (Harvard University): https://www.darrellduffie.com/uploads/pubs/AntillDuffieSep2020.pdf


3:10 - 3:45 p.m.

Andrea Angiuli (UC Santa Barbara)

Bridging the Gap of Reinforcement Learning for MFG and MFC problems [slides]

In this talk we present a unified approach to solving mean field problems based on model-free reinforcement learning. A mean field problem is a game with an infinite population of symmetric players. We distinguish the cooperative game (Mean Field Control, MFC) from the non-cooperative game (Mean Field Game, MFG). In general, the solutions of these problems differ, reflecting the particular nature of the agents’ behaviors. The proposed Unified 2-scale Q-learning algorithm (U2-QL) is based on two learning rules: the first targets the optimal strategy, while the second tracks the distribution of the population at equilibrium. We show how different calibrations of the two-scale learning scheme allow convergence to the solution of an MFC or an MFG problem. Numerical results from applying U2-QL to examples from finance are discussed. Joint work with Jean-Pierre Fouque and Mathieu Laurière.
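
The two-timescale idea can be sketched in a few lines. The toy Python snippet below is a hypothetical illustration, not the authors' implementation: the environment, reward, and learning rates are all made up. It runs tabular Q-learning alongside an estimate of the population distribution, each quantity updated with its own learning rate:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 2, 2

# Hypothetical mean-field setup: the reward depends on the agent's
# state-action pair and on the current population distribution mu.
def reward(s, a, mu):
    return -abs(a - mu[s])          # illustrative crowd-aversion payoff

def step(s, a):
    return int(rng.integers(n_states))   # placeholder dynamics

Q = np.zeros((n_states, n_actions))
mu = np.full(n_states, 1.0 / n_states)   # population distribution estimate
gamma = 0.9
rho_Q, rho_mu = 0.1, 0.01   # two learning rates on two timescales

s = 0
for t in range(10_000):
    a = int(rng.integers(n_actions)) if rng.random() < 0.1 else int(Q[s].argmax())
    s_next = step(s, a)
    # Update of the action values on its own timescale ...
    Q[s, a] += rho_Q * (reward(s, a, mu) + gamma * Q[s_next].max() - Q[s, a])
    # ... and of the distribution estimate on the other.
    e = np.zeros(n_states)
    e[s_next] = 1.0
    mu += rho_mu * (e - mu)
    s = s_next
```

Choosing rho_mu much smaller than rho_Q lets the strategy adapt against a slowly moving population (the MFG calibration); reversing the ordering of the two rates targets the MFC calibration, mirroring the role of the two scales in U2-QL.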

Discussant: Qi Feng


4:00 - 4:35 p.m.

Ran Zhao (Claremont Graduate University)

Credit Risk Contagion in a Network Economy: Evidence from Supply Chain Data

We extend the work of Cossin and Schellhorn (2007) by examining how credit risk spreads through operating cash-flow uncertainty in a network economy, where upstream suppliers sell to downstream customers. We develop a Merton-type structural credit risk model based on this input-output network. The model's implications are important for credit risk measurement and management: the credit risk of a firm depends not only on endogenous characteristics, but also on ex ante cash-flow risk from downstream customers. Using empirical input-output data, we establish a statistically significant and economically meaningful relation between customer credit risk and supplier credit risk. The magnitude of the credit risk spread effect is positively related to the proportion of total yearly sales to the customer.

Discussant: Yisub Kye


4:35 - 5:35 p.m.

Thaleia Zariphopoulou (University of Texas at Austin)

Human-machine interaction models and stochastic optimization

I will introduce a model of human-machine interaction (HMI) in portfolio choice (robo-advising). Modeling difficulties stem from the limited ability to quantify the human's risk preferences and describe their evolution, but also from the fact that the stochastic environment, in which the machine optimizes, adapts to real-time incoming information that is exogenous to the human. Furthermore, the human's risk preferences and the machine's states may evolve at different scales.

This interaction creates an adaptive cooperative game with both asymmetric and incomplete information exchange between the two parties. As a result, challenging questions arise about, among other things, how frequently the two parties should communicate, what information the machine can accurately detect, infer, and predict, how the human reacts to exogenous events, and how to improve the inter-linked reliability between the human and the machine. Such HMI models give rise to new, non-standard optimization problems that combine adaptive stochastic control, stochastic differential games, optimal stopping, multiple scales, and learning.

Saturday, January 16

9:30 - 10:30 a.m.

Charles-Albert Lehalle (Capital Fund Management & Imperial College London)

Using and understanding machine learning for high frequency finance [slides]

Since it is now possible to use only Machine Learning (ML) techniques along the path of optimal liquidation, my talk will leverage the main steps of building a trading algorithm with ML to explain important features of learning tools. First I will show how to use neural nets to solve the optimal scheduling of the liquidation of large orders, focusing on the difference between stochastic control and ML from an "explainable AI" angle. Then I will solve the optimal placement problem in an order book twice: via reinforcement learning and via direct stochastic approximation, putting the emphasis on their potential differences. I will finish by using deep learning on order books to extract trading signals that can then be embedded into the two former frameworks; this will be the occasion to obtain a better understanding of information diffusion at the smallest time scale in markets, to show how stochastic gradient descent can be seen as a fixed-point iteration, and to emphasize the role of the learning rate.
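
As a small illustration of the last point, stochastic gradient descent can be viewed as a stochastic fixed-point iteration: the map x ↦ x − η g(x) has the loss minimizer as its fixed point. The Python sketch below (a generic illustration with invented numbers, not material from the talk) runs SGD on a one-dimensional quadratic with noisy gradients under a Robbins-Monro learning-rate schedule:

```python
import numpy as np

rng = np.random.default_rng(1)

# Quadratic loss f(x) = 0.5*(x - 2)**2, so the exact gradient is x - 2 and
# the minimizer x* = 2 is the fixed point of the SGD map.
def sgd(eta, n_steps=20_000):
    x = 0.0
    for k in range(1, n_steps + 1):
        g = (x - 2.0) + rng.normal(scale=0.5)  # noisy gradient observation
        x -= (eta / k) * g                     # decreasing learning rate
    return x

x_hat = sgd(eta=1.0)   # converges near the fixed point x* = 2
```

With a constant learning rate the iterates instead hover in a noise band around the fixed point, which is one way to see why the choice of learning rate matters.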



10:30 - 11:05 a.m.

Jorge Guijarro Ordonez (Stanford University)

Deep Learning Statistical Arbitrage

We propose a general framework for statistical arbitrage. Our approach generalizes the ideas of pairs trading and mean reversion by finding commonality and time series patterns in a flexible way. First, we remove all commonality based on linear and non-linear risk factors of individual stock returns, with the option of incorporating exogenous data like stock characteristics. Second, we extract in a flexible and data driven way the time series patterns in the residual portfolios, which we use to form optimal arbitrage portfolios. One key contribution of our paper is a novel convolutional and attentional neural network architecture applied to the panel of residual portfolios. It detects mean reversion and trend patterns in the panel and constructs optimal trading strategies to exploit them in a completely unsupervised and end-to-end way. We apply our model to daily US stock returns. Our optimal trading strategy is orthogonal to common risk factors and obtains a consistent out-of-sample Sharpe ratio greater than 4. Our strategies remain profitable after taking into account trading frictions and costs. This is joint work with Markus Pelger (Assistant Professor, Stanford Management Science and Engineering) and Greg Zanotti (PhD student, Stanford Management Science and Engineering).
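
The first two steps can be mimicked with plain linear tools. The sketch below uses simulated data and a simple contrarian rule as a stand-in for the paper's convolutional/attention network (everything here is hypothetical): it strips out factor commonality by OLS and forms a signal from the residual panel:

```python
import numpy as np

rng = np.random.default_rng(2)
T, N, K = 500, 10, 3                      # days, stocks, risk factors

F = rng.normal(size=(T, K))               # hypothetical factor returns
B = rng.normal(size=(N, K))               # factor loadings
R = F @ B.T + 0.01 * rng.normal(size=(T, N))   # stock-return panel

# Step 1: remove factor commonality via OLS, keeping the residuals.
beta_hat, *_ = np.linalg.lstsq(F, R, rcond=None)   # shape (K, N)
resid = R - F @ beta_hat

# Step 2 (simplified): a contrarian rule on cumulative residuals stands in
# for the unsupervised neural architecture that detects time-series patterns.
cum = resid.cumsum(axis=0)
signal = -cum[-1]                          # bet on mean reversion
weights = signal / np.abs(signal).sum()    # unit gross exposure
```

By construction the residual panel is orthogonal to the factors, so any strategy built from it is (in-sample) factor-neutral, echoing the paper's orthogonality claim.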

Discussant: Yang Zhou


11:20 - 11:55 a.m.

Dangxing Chen (UC Berkeley)

Modeling the dynamics of the realized variance based on the high-frequency data

This paper studies in some detail a class of continuous-time stochastic volatility models. These are direct models of daily asset-return volatility based on realized measures constructed from high-frequency data. The models capture patterns observed in empirical data, including the asymmetric mean-reversion effect and different levels of innovations. We propose a new calibration method that is both robust and sensitive to all parameters. Along with the calibration techniques, rigorous assessments of performance are conducted. Empirical results suggest that our method is promising.
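
As a hypothetical instance of such a model (not the authors' specification; all parameters are invented for illustration), one can simulate a log realized-variance process whose speed of mean reversion differs above and below its long-run level:

```python
import numpy as np

rng = np.random.default_rng(3)

# Euler scheme for a mean-reverting log realized-variance process
#   d log V_t = kappa(log V_t) * (theta - log V_t) dt + xi dW_t,
# where kappa is larger above the long-run mean than below it, a crude way
# to mimic an asymmetric mean-reversion effect.
theta, xi, dt, n = -1.0, 0.5, 1 / 252, 252 * 4

def kappa(x):
    return 8.0 if x > theta else 3.0   # hypothetical asymmetric speeds

logv = np.empty(n)
logv[0] = theta
for t in range(1, n):
    drift = kappa(logv[t - 1]) * (theta - logv[t - 1])
    logv[t] = logv[t - 1] + drift * dt + xi * np.sqrt(dt) * rng.normal()
V = np.exp(logv)   # simulated daily realized-variance path
```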

Discussant: Andrea Angiuli


11:55 - 12:30 p.m.

Xiaoqi Xu (UC Irvine)

Dark Pool Effects on Price Discovery and Economic Efficiency

This paper studies dark pool effects on exchange market efficiency and real economic efficiency in a model featuring managerial learning from the exchange market. When the exchange market lacks liquidity trading, an informed investor surely trades in the dark pool if firm fundamentals are bad and randomizes between the exchange market and the dark pool if firm fundamentals are good. Such asymmetric trading behavior generates asymmetric limits to arbitrage in the exchange market and leads to asymmetric firm investments. At some liquidity trading levels in the exchange market, the dark pool increases exchange market efficiency and thus economic efficiency; at some others, the dark pool surprisingly increases economic efficiency even if it harms exchange market efficiency. Hence, using exchange market efficiency to assess dark pools may overestimate their adverse effects on economic efficiency.

Discussant: Ran Zhao

Sunday, January 17

9:30 - 10:05 a.m.

Nicole Yang (UC Santa Barbara)

Relative Arbitrage Opportunities in N Investors and Mean-Field Regimes [slides]

The relative arbitrage portfolio, formulated in Stochastic Portfolio Theory (SPT), outperforms a benchmark portfolio over a given time horizon with probability one. This paper analyzes market behavior and optimal investment strategies for attaining relative arbitrage in both the N-investor and mean-field regimes under certain market conditions. An investor competes with a benchmark of the market and peer investors, aiming to outperform the benchmark while minimizing the initial capital. With market-price-of-risk processes depending on the market portfolio and the investors, we develop a systematic way to solve a multi-agent optimization problem within SPT's framework. The objective can be characterized by the smallest nonnegative continuous solution of a Cauchy problem. By modifying the structure of the extended mean field game with common noise and its notion of uniqueness of the Nash equilibrium, we show a unique equilibrium in N-player games and mean field games under mild conditions on the equity market. Given the high-dimensional nature of the problem, numerical schemes will also be discussed. This talk is based on a paper with Tomoyuki Ichiba.

Discussant: Jorge Guijarro Ordonez


10:05 - 10:40 a.m.

Yisub Kye (UC Los Angeles)

A Reconciliation of the Top-Down and Bottom-Up Approaches to Risk Capital Allocations: Proportional Allocations Revisited [slides]

In the current reality of prudent risk management, the problem of determining aggregate risk capital in financial entities has been intensively studied. As a result, canonical methods have been developed and even embedded in regulatory accords. Though applauded by some and questioned by others, these methods provide a much-desired standard benchmark for everyone. The situation is very different when the aggregate risk capital needs to be allocated to the business units (BUs) of a financial entity. That is, there are overwhelmingly many ways to conduct the allocation exercise, and there is arguably no standard method on the horizon. Two overarching approaches to allocating the aggregate risk capital stand out: the top-down allocation (TDA) approach, under which the allocation exercise is imposed by the corporate center, and the bottom-up allocation (BUA) approach, under which the allocation of the aggregate risk to BUs is informed by those units. Briefly, the TDA starts with the aggregate risk capital, which is then distributed among the BUs according to the views of the center, thus limiting the inputs from the BUs. The BUA does start with the BUs, but it is, as a rule, too granular and so may miss the wood for the trees. Irrespective of whether the TDA or the BUA is assumed, it is the proportional contribution of the riskiness of a stand-alone BU to the aggregate riskiness of the financial entity that is of central importance, and it is routinely computed nowadays as the quotient of the allocated risk capital for the BU of interest and the aggregate risk capital of the financial entity. For instance, in the simplest case, when the mathematical expectation plays the role of the risk measure that generates the allocation rule, the desired proportional contribution is just a quotient of two means.
Clearly, in general, this quotient of means does not concur with the mean of the quotient random variable that captures the genuine stochastic proportional contribution of the riskiness of the BU of interest. Inspired by this observation, we reenvision the way in which the allocation problem is tackled in the state of the art. As a by-product, we unify the TDA and the BUA into one encompassing approach.
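
A quick Monte-Carlo check makes the observation concrete. With the expectation as the allocation-generating risk measure and two hypothetical lognormal business-unit losses (all numbers invented), the quotient of means and the mean of the stochastic quotient are both valid-looking proportions, yet in general they need not coincide:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two business-unit losses and the aggregate loss S = X1 + X2.
X1 = rng.lognormal(mean=0.0, sigma=1.0, size=1_000_000)
X2 = rng.lognormal(mean=0.5, sigma=0.5, size=1_000_000)
S = X1 + X2

# TDA-style proportional contribution: a quotient of two means ...
quotient_of_means = X1.mean() / S.mean()
# ... versus the mean of the genuine stochastic proportional contribution.
mean_of_quotient = (X1 / S).mean()
```

Both sets of proportions sum to one across the BUs, but allocating by one notion and reporting by the other mixes two different definitions of proportional contribution, which is the discrepancy the talk starts from.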

Discussant: Xiaoqi Xu


10:40 - 11:15 a.m.

Zimu Zhu (University of Southern California)

A dynamic Principal-Agent problem [slides]

In this talk we consider a dynamic Principal-Agent problem over the time period [0, T]. The standard literature considers only the static problem, namely the optimal contract at time 0. However, when we consider the problem dynamically, it is typically time inconsistent, in the sense that the optimal contract found at time 0 does not remain optimal at a later time t (when considering the problem over [t, T]). Such time inconsistency is irrelevant if the contract is enforced once it is signed. However, it becomes a serious issue if the principal can fire the agent or the agent can quit before the expiry date T. In this talk we focus on the case where the agent can quit, but at a certain cost. There is one principal and a family of agents parameterized by their quality. When an agent (of a certain quality) quits, the principal hires a new agent with possibly different quality (and a different individual reservation value). We take the equilibrium approach to deal with the time-inconsistency issue. The principal’s utility at the equilibrium contract is characterized through a system of HJB equations parameterized by the quality of the agents. The solutions can be discontinuous at the boundaries, and thus a certain face-lifting is needed. We find that the principal’s utility may or may not be lowered by allowing the agent to quit. Moreover, if the quitting cost has a uniform lower bound, the principal will see agents quit only finitely many times. The talk is based on joint work with Jianfeng Zhang.

Discussant: Xiaoli Wei


11:15 - 11:50 a.m.

Yang Zhou (University of Washington)

Optimal Dynamic Futures Portfolio in a Regime-Switching Market Framework

We study the problem of dynamically trading futures in a regime-switching market. Modeling the underlying asset price as a Markov-modulated diffusion process, we present a utility maximization approach to determine the optimal futures trading strategy. This leads to the analysis of the associated system of Hamilton-Jacobi-Bellman (HJB) equations, which are reduced to a system of linear ODEs. We apply our stochastic framework to two models, namely, the Regime-Switching Geometric Brownian Motion (RS-GBM) model and Regime-Switching Exponential Ornstein-Uhlenbeck (RS-XOU) model. Numerical examples are provided to illustrate the investor's optimal futures positions and portfolio value across market regimes.
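
For concreteness, a minimal Euler-scheme simulation of a two-regime Markov-modulated GBM, a hypothetical instance of the RS-GBM model class named in the abstract, is sketched below (regime parameters are invented; the utility-maximization and ODE analysis of the talk are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(5)

# A continuous-time Markov chain flips the regime and selects the active
# drift/volatility pair for the asset price.
mu  = np.array([0.08, -0.04])   # per-regime drifts
sig = np.array([0.15, 0.35])    # per-regime volatilities
lam = np.array([1.0, 2.0])      # regime-switching intensities (per year)
dt, n = 1 / 252, 252

S = np.empty(n + 1)
S[0] = 100.0
regime = 0
for t in range(n):
    if rng.random() < lam[regime] * dt:   # switch with probability ~ lam*dt
        regime = 1 - regime
    dW = np.sqrt(dt) * rng.normal()
    S[t + 1] = S[t] * np.exp((mu[regime] - 0.5 * sig[regime] ** 2) * dt
                             + sig[regime] * dW)
```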

Discussant: Joseph Jackson


12:00 - 1:00 p.m.

Panel discussion with Darrell Duffie, Charles-Albert Lehalle, and H. Mete Soner


Monday, January 18

9:30 - 10:05 a.m.

Qi Feng (University of Southern California)

Signature Method for Option Pricing Problem With Path-Dependent Features

The classical models for asset processes in mathematical finance are SDEs driven by Brownian motion of the form $X_t = x + \int_0^t b(s, X_s)\, ds + \int_0^t \sigma(s, X_s)\, dB_s$. Then $u(t, X_t) = E[g(X_T) \mid \mathcal{F}_t^X]$ is a deterministic function of $X_t$, and $u(t, x)$ solves a parabolic PDE. In this talk, I will discuss two types of path-dependent option pricing problems. In the first scenario, the option payoff depends on the whole path of the Markov process $X$, and the pricing problem is to compute $E_P[g(X_{[0,T]})]$. In the second scenario, we consider the option pricing problem for rough volatility models, where the volatility of the asset process follows a Volterra SDE; the option payoff depends on the terminal value of a non-Markovian asset process. In both cases, the function $u(t,\cdot)$ solves a so-called path-dependent PDE. Due to the path-dependent feature, standard numerical algorithms are not efficient in either case. We introduce the “signature” idea from rough path theory into our numerical algorithms to improve efficiency. Our first algorithm is based on the “deep signature” and deep learning methods for BSDEs; our second algorithm is based on a cubature formula for the “Volterra signature”, motivated by the cubature formula for the signature of Brownian motion. The talk is based on two joint works with Man Luo, Zhaoyu Zhang, and Jianfeng Zhang.
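
To make the signature objects concrete: for a piecewise-linear path, the first two signature levels (the total increment and the second-order iterated integrals) can be built segment by segment with Chen's identity. The numpy sketch below is a generic illustration of these objects, not the algorithms of the talk:

```python
import numpy as np

# Level-1 and level-2 signature of a piecewise-linear path in R^d.
# For each linear segment with increment delta, Chen's identity gives
#   sig2_new = sig2 + sig1 (x) delta + 0.5 * delta (x) delta,
#   sig1_new = sig1 + delta.
def signature_level2(path):
    d = path.shape[1]
    sig1 = np.zeros(d)
    sig2 = np.zeros((d, d))
    for delta in np.diff(path, axis=0):
        sig2 += np.outer(sig1, delta) + 0.5 * np.outer(delta, delta)
        sig1 += delta
    return sig1, sig2

rng = np.random.default_rng(6)
path = rng.normal(size=(50, 2)).cumsum(axis=0)   # a 2-d random-walk path
s1, s2 = signature_level2(path)
```

Two exact identities serve as sanity checks for any path: the level-1 signature equals the total increment of the path, and the symmetrization sig2 + sig2.T equals the outer product of sig1 with itself (a shuffle relation).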

Discussant: Dangxing Chen


10:05 - 10:40 a.m.

Joseph Jackson (University of Texas at Austin)

Well-posedness for non-Markovian quadratic BSDE systems [slides]

We consider non-Markovian quadratic BSDE systems with structural constraints on the driver. Using techniques from Malliavin calculus and BMO martingale theory, we give existence and uniqueness results for BSDE systems whose drivers have a quadratic triangular or quadratic linear structure and satisfy an "a-priori boundedness" condition. Many authors have applied BMO martingale theory to BSDEs, but our approach is novel in two respects. First, in order to handle quadratic triangular drivers, we develop some new results about linear BSDE systems with unbounded coefficients. Second, we introduce the notion of "uniform sliceability", and use it to study quadratic linear drivers. This is joint work with Gordan Žitković.

Discussant: Zimu Zhu


10:55 - 11:30 a.m.

Xiaoli Wei (UC Berkeley)

Itô’s formula for flow of measures on semimartingales

We establish Itô’s formula along a flow of probability measures associated with general semimartingales. This extends recent results for flows of measures on Itô processes. Our approach is to first prove Itô’s formula for cylindrical polynomials and then use function approximation for the general case. Applications to McKean-Vlasov control of jump-diffusion processes and McKean-Vlasov singular control are developed. This is joint work with Xin Guo and Huyên Pham.

Discussant: Nicole Yang


11:30 - 12:30 p.m.

H. Mete Soner (Princeton University)

Monte-Carlo methods for high-dimensional problems in quantitative finance

Stochastic optimal control has been an effective tool for many problems in quantitative finance and financial economics. Although it provides much-needed quantitative modeling for such problems, until recently these problems have been numerically intractable in high-dimensional settings. However, several recent studies report impressive numerical results: Becker, Cheridito, and Jentzen study the optimal stopping problem, providing tight error bounds and an efficient algorithm for problems of up to 100 dimensions. Buehler, Gonon, Teichmann, and Wood, on the other hand, consider the problem of hedging and again report results for high-dimensional problems. All these papers use a Monte-Carlo-type algorithm combined with deep neural networks, as proposed by Han, E, and Jentzen and further studied by Bachouch, Huré, Langrené, and Pham. In this talk I will outline this approach and discuss its properties. Numerical results, while validating the power of the method in high dimensions, also show the dependence on the dimension and on the size of the training data. This is joint work with Max Reppen of Boston University.
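
For orientation, the classical low-dimensional counterpart of the optimal stopping results mentioned above is regression Monte Carlo. The sketch below prices a Bermudan put with the Longstaff-Schwartz algorithm, a standard baseline rather than the deep-learning method of the talk, with illustrative parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# Bermudan put under GBM, priced by Longstaff-Schwartz regression Monte Carlo.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_ex, n_paths = 50, 100_000            # exercise dates, simulated paths
dt = T / n_ex

Z = rng.normal(size=(n_paths, n_ex))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * Z, axis=1))

payoff = np.maximum(K - S[:, -1], 0.0)          # value if held to expiry
for t in range(n_ex - 2, -1, -1):
    payoff *= np.exp(-r * dt)                   # discount one step back
    itm = K - S[:, t] > 0                       # regress on in-the-money paths
    if itm.sum() > 0:
        X = np.vander(S[itm, t], 4)             # cubic polynomial basis
        coef, *_ = np.linalg.lstsq(X, payoff[itm], rcond=None)
        cont = X @ coef                         # estimated continuation value
        exercise = (K - S[itm, t]) > cont
        payoff[itm] = np.where(exercise, K - S[itm, t], payoff[itm])
price = np.exp(-r * dt) * payoff.mean()
```

The deep-learning methods discussed in the talk replace the fixed polynomial regression basis with neural networks, which is what lets them scale to the high-dimensional problems the references treat.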

Last modified: January 19, 2021. Copyright © 2020. Photo copyright © 2020.