Thursday 9th January
0930 – Welcome and registration (with coffee)
1025 – Opening remarks
1030 – Jan Obłój (Oxford)
OT Calibration: arithmetic and geometric Bass martingales
Optimal transport (OT) proves to be a powerful tool for non-parametric calibration: it allows us to take a favourite (non-calibrated) model and project it onto the space of all calibrated (martingale) models. The dual side of the problem leads to an HJB equation and a numerical algorithm to solve the projection. However, in general, this process is costly and leads to spiky vol surfaces. We are interested in special cases where the projection can be obtained semi-analytically. This leads us to the martingale counterpart of the seminal fluid-dynamics interpretation of the OT problem developed by Benamou and Brenier. Specifically, given marginals, we look for the martingale which is closest to a given archetypical model. If our archetype is arithmetic Brownian motion, this gives the stretched Brownian motion (or Bass martingale), studied previously by Backhoff-Veraguas, Beiglböck, Huesmann and Källblad (among others). Here we consider the financially more pertinent case of a Black-Scholes (geometric BM) reference and show that it can also be solved explicitly. In both cases, fast numerical algorithms are available. Based on joint works with Julio Backhoff, Benjamin Joseph and Grégoire Loeper.
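As a rough illustration (notation ours, not the speaker's): the martingale Benamou-Brenier problem seeks, among martingales with prescribed marginals, the one closest to Brownian motion,

\[
\inf\Big\{\, \mathbb{E}\Big[\int_0^1 (\sigma_t - 1)^2\,\mathrm{d}t\Big] \;:\; \mathrm{d}M_t = \sigma_t\,\mathrm{d}W_t,\ M_0 \sim \mu,\ M_1 \sim \nu \,\Big\},
\]

whose optimiser is of Bass form \(M_t = \mathbb{E}[v(B_1)\mid B_t]\) for a Brownian motion \(B\) with a suitably chosen initial law and an increasing map \(v\).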
1110 – Coffee break
1130 – Nikolaos Constantinou (Warwick)
Mean-variance equilibria in continuous time
We revisit the classical topic of mean-variance equilibria in the setting of continuous time, where asset prices are driven by continuous semimartingales. We show that under mild assumptions, a mean-variance equilibrium corresponds to a quadratic equilibrium for different preference parameters. We then use this connection to study a fixed-point problem that establishes existence of mean-variance equilibria. Our results rely on fine properties of mean-variance hedging as well as a novel stability result for quadratic BSDEs. The talk is based on joint work with Christoph Czichowsky, Martin Herdegen and David Martins.
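One standard identity behind the mean-variance/quadratic connection (a sketch; the talk's precise correspondence may differ): for terminal wealth \(X\),

\[
\mathbb{E}[X] - \tfrac{\gamma}{2}\operatorname{Var}(X) \;=\; \mathbb{E}[X] - \tfrac{\gamma}{2}\mathbb{E}[X^2] + \tfrac{\gamma}{2}\big(\mathbb{E}[X]\big)^2,
\]

so the mean-variance criterion with parameter \(\gamma\) differs from the quadratic criterion \(\mathbb{E}[X] - \tfrac{\gamma}{2}\mathbb{E}[X^2]\) only through a term depending on the mean, which suggests why the two notions of equilibrium can coincide after reparametrising the preference parameter.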
1155 – Lionel Sopgoui (Imperial)
Modeling the impact of climate transition on real estate prices
In this work, we propose a model to quantify the impact of the climate transition on property prices in the housing market. We begin by noting that property is an asset in an economy. That economy is organized in sectors and driven by its productivity, modeled as a multidimensional Ornstein-Uhlenbeck process, while the climate transition is expressed through the carbon price, a continuous deterministic process. We then extend the sales comparison approach and the income approach to value an energy-inefficient real estate asset. We obtain its value as the price of an equivalent energy-efficient building, which follows an exponential Ornstein-Uhlenbeck process, minus the discounted renovation costs and the discounted sum of the future additional energy costs. These costs are due to the inefficiency of the building before an optimal renovation date, which depends on the carbon price process. Finally, we carry out simulations based on the French economy and the house price index of France. Our results show that the depreciation predicted by our model has the same order of magnitude as empirical observations.
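A minimal simulation sketch of the exponential Ornstein-Uhlenbeck dynamics used for the efficient building price (function and parameter names are ours, purely illustrative):

```python
import numpy as np

def exp_ou_paths(p0, theta, mu, sigma, T, n_steps, n_paths, seed=0):
    """Simulate P_t = exp(X_t) with dX_t = theta*(mu - X_t) dt + sigma dW_t (Euler scheme)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.full(n_paths, np.log(p0))  # start all paths at log of today's price
    for _ in range(n_steps):
        x += theta * (mu - x) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return np.exp(x)  # terminal prices of the efficient building
```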
1220 – Konrad Mueller (Imperial)
Fast deep hedging with second-order optimization
Hedging exotic options in the presence of market frictions is an important risk management task. Deep hedging can solve such hedging problems by training neural network policies in realistic simulated markets. Training these neural networks can be delicate and suffer from slow convergence, particularly for options with long maturities and complex sensitivities to market parameters. To address this, we propose a second-order optimization scheme for deep hedging. We leverage pathwise differentiability to construct a curvature matrix, which we approximate as block-diagonal and Kronecker-factored to efficiently precondition gradients. We evaluate our method on a challenging and practically important problem: hedging a cliquet option on a stock with stochastic volatility by trading in the spot and vanilla options. We find that our second-order scheme can optimize the policy in a quarter of the number of steps that standard adaptive moment-based optimization takes.
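For intuition, here is a minimal numpy sketch of the generic Kronecker-factored preconditioning step for a single dense layer (a K-FAC-style update in our notation, not the authors' exact pathwise curvature construction):

```python
import numpy as np

def kfac_precondition(grad_W, acts, grad_out, damping=1e-3):
    """Kronecker-factored preconditioning for one linear layer.

    grad_W:   (out, in) gradient of the loss w.r.t. the weight matrix
    acts:     (batch, in) layer inputs
    grad_out: (batch, out) gradients of the loss w.r.t. the layer outputs
    Approximates the layer's curvature block as a Kronecker product of the
    second-moment matrices A and G, and returns (G+lI)^{-1} grad_W (A+lI)^{-1}.
    """
    n = acts.shape[0]
    A = acts.T @ acts / n + damping * np.eye(acts.shape[1])
    G = grad_out.T @ grad_out / n + damping * np.eye(grad_out.shape[1])
    # right-multiply by A^{-1}, then left-multiply by G^{-1}
    return np.linalg.solve(G, np.linalg.solve(A.T, grad_W.T).T)
```

The damping term keeps the factors invertible; the two small solves replace one prohibitively large inverse of the full curvature matrix.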
1245 – Lunch
1415 – Giulia Livieri (LSE)
A Mean Field Game approach for pollution regulation of competitive firms
We develop a model based on mean-field games of competitive firms producing similar goods according to a standard AK model with capital depreciation, where production generates pollution as a byproduct. Our analysis focuses on the widely used cap-and-trade pollution regulation. Under this regulation, firms have the flexibility to respond by implementing pollution abatement, reducing output, and participating in emission trading, while a regulator dynamically allocates emission allowances to each firm. The resulting mean-field game is of linear-quadratic type and equivalent to a mean-field-type control problem, i.e., it is a potential game. We find explicit solutions to this problem through the solutions to differential equations of Riccati type. Further, we investigate the carbon emission equilibrium price that satisfies the market clearing condition and derive an FBSDE of McKean-Vlasov type with common noise whose solution provides an approximate equilibrium price. Additionally, we demonstrate that the degree of competition is vital in determining the economic consequences of pollution regulation.
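Schematically, the linear-quadratic structure means the equilibrium is characterised by matrix Riccati ODEs of the form

\[
\dot{P}_t + P_t A + A^\top P_t - P_t B R^{-1} B^\top P_t + Q = 0, \qquad P_T = G,
\]

where the coefficient matrices (generic placeholders here) are determined by the model; the explicit solutions mentioned in the abstract come from equations of this type.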
1455 – Coffee break
1515 – Konark Jain (UCL)
No tick-size too small: A general method for modelling small tick limit order books
We investigate the disparity in the microstructural properties of the Limit Order Book (LOB) across different relative tick sizes. Tick sizes not only influence the granularity of the price formation process but also affect market agents' behavior. A key contribution of this study is the identification of several stylized facts, which are used to differentiate between large, medium, and small tick stocks, along with clear metrics for their measurement. We provide cross-asset visualizations to illustrate how these attributes vary with relative tick size. Further, we propose a Hawkes Process model that accounts for sparsity, multi-tick level price moves, and the shape of the book in small-tick stocks. Through simulation studies, we demonstrate the universality of the model and identify key variables that determine whether a simulated LOB resembles a large-tick or small-tick stock. Our tests show that stylized facts like sparsity, shape, and relative returns distribution can be smoothly transitioned from a large-tick to a small-tick asset using our model. We test this model's assumptions, showcase its challenges and propose questions for further directions in this area of research.
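For readers new to Hawkes processes, a minimal univariate simulation via Ogata's thinning (illustrative only; the talk's model is multivariate and accounts for sparsity and book shape):

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata's thinning for a univariate Hawkes process with intensity
    lambda(t) = mu + alpha * sum_{t_i < t} exp(-beta * (t - t_i)).
    Stationarity requires alpha < beta."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while t < T:
        # intensity right after time t is an upper bound until the next event
        lam_bar = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + alpha * sum(np.exp(-beta * (t - s)) for s in events)
        if rng.uniform() <= lam_t / lam_bar:  # accept with prob lambda(t)/lam_bar
            events.append(t)
    return np.array(events)
```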
1540 – Yumin Lu (UCL)
Portfolio selection in contests
In an investment contest with incomplete information, a finite number of agents dynamically trade assets with idiosyncratic risk and are rewarded based on the relative ranking of their terminal portfolio values. We explicitly characterize a symmetric Nash equilibrium of the contest and rigorously verify its uniqueness. The connection between the reward structure and the agents' portfolio strategies is examined. A top-heavy payout rule results in an equilibrium portfolio return distribution with high positive skewness, which suffers from a large likelihood of poor performance. Risky asset holding increases when competition intensifies in a winner-takes-all contest.
1605 – Sturmius Tuschmann (Imperial)
Optimal portfolio choice with cross-impact propagators
We consider a class of optimal portfolio choice problems in continuous time where the agent's transactions create both transient cross-impact, driven by a matrix-valued Volterra propagator, and temporary price impact. We formulate this problem as the maximization of a revenue-risk functional, where the agent also exploits available information from a progressively measurable price-predicting signal. We solve the maximization problem explicitly in terms of operator resolvents by reducing the corresponding first-order condition to a coupled system of stochastic Fredholm equations of the second kind and deriving its solution. We then give sufficient conditions on the matrix-valued propagator so that the model does not permit price manipulation. We also provide an implementation of the solutions to the optimal portfolio choice problem and to the associated optimal execution problem. Our solutions yield financial insights into the influence of cross-impact on the optimal strategies and its interplay with alpha decays. This is joint work with Eduardo Abi Jaber and Eyal Neuman.
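Schematically (our notation), a matrix-valued Volterra propagator distorts prices through past trading rates \(\nu\):

\[
S_t = S_t^{\text{unaffected}} + \int_0^t G(t-u)\,\nu_u\,\mathrm{d}u,
\]

where the kernel \(G\) encodes transient cross-impact, including the impact of trading one asset on the prices of the others, while temporary impact enters the cost functional separately.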
1630 – James Dalby (King's College London)
Collectivised pensions in the presence of systematic longevity risk
There is growing interest in collective pension investment in the UK, and the first collective defined contribution fund was launched in October 2024. In this talk we will discuss a theoretically optimal way to manage a collective fund by creating an internal insurance market, which is required to clear when everyone behaves optimally. We determine the equilibrium price of these contracts and the benefits they may bring when an insurance market is created to hedge systematic longevity risk. This talk is joint work with John Armstrong and the Pension Policy Institute and is funded by Nuffield grant FR-000024058.
1700 – Collaboration time
1900 – Dinner
Founder's Library, New College
Friday 10th January
0930 – Coffee and discussion
1030 – Laura Ballotta (Bayes Business School)
The term structure of implied correlations between S&P and VIX
We develop a joint model for the S&P 500 and the VIX indices with the aim of extracting forward-looking, market-consistent information on the correlation between the two markets. We achieve this by building the model on time-changed Lévy processes, deriving closed analytical expressions for relevant quantities directly from the joint characteristic function, and exploiting the market quotes of options on both indices. We perform a piecewise joint calibration to the option prices to ensure the highest level of precision within the limits of the availability of quotes in the dataset and their liquidity. Using the calibrated parameters, we quantify the ‘leverage/volatility feedback’ effect along the term structure of VIX options and the corresponding VIX futures. We illustrate the model using market data on SPX options and both futures and options on the VIX. This is joint work with Ernst Eberlein and Grégory Rayée.
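The time-changed Lévy construction is what keeps characteristic functions tractable: if \(X_t = L_{T_t}\) for a Lévy process \(L\) with characteristic exponent \(\psi\), so that \(\mathbb{E}[e^{\mathrm{i}uL_s}] = e^{s\psi(u)}\), and an independent time change \(T_t\), then (a standard fact, stated schematically)

\[
\mathbb{E}\big[e^{\mathrm{i}u X_t}\big] = \mathbb{E}\big[e^{T_t\,\psi(u)}\big],
\]

i.e. the Laplace transform of the time change evaluated at the Lévy exponent; correlated (leverage-type) time changes require a more careful version of the same idea.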
1110 – Coffee break
1130 – Joseph Mulligan (Imperial)
How minimum performance thresholds bias backtests
It is generally accepted that investors prefer higher Sharpe ratios, and that strategies that do not meet a minimum performance threshold are rejected. But what is the impact of this filter? We study how setting any kind of performance threshold (even just requiring that performance be positive) biases backtested Sharpe ratios upwards. Using Bayesian techniques, we can perform inference that corrects for this bias and improves investment outcomes. This work complements existing approaches in the literature on multiple testing, e.g. de Prado, Harvey, Chen. It is joint work with Johannes Muhle-Karbe and Antoine Jacquier.
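A toy illustration of the selection effect (numbers ours, not from the talk): simulate many strategies with the same modest true Sharpe ratio, then keep only those whose backtested Sharpe ratio is positive.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strats, n_days = 10_000, 252
true_sharpe_annual = 0.5
mu_daily = true_sharpe_annual / np.sqrt(252)  # daily mean for unit daily vol

returns = mu_daily + rng.standard_normal((n_strats, n_days))
sharpes = returns.mean(1) / returns.std(1, ddof=1) * np.sqrt(252)

print(f"mean backtested Sharpe, all strategies:       {sharpes.mean():.2f}")
print(f"mean backtested Sharpe, positive-only filter: {sharpes[sharpes > 0].mean():.2f}")
```

Conditioning on a positive Sharpe ratio discards the unlucky draws, so the surviving average overstates the true Sharpe ratio; this is the bias the Bayesian correction targets.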
1155 – Robert Boyce (Imperial)
Market making with exogenous competition
We study liquidity provision in the presence of exogenous competition. We consider a ‘reference market maker’ who monitors her inventory and the aggregated inventory of the competing market makers. We assume that the competing market makers use a ‘rule of thumb’ to determine their posted depths, which depend linearly on their inventory. By contrast, the reference market maker optimises over her posted depths, and we assume that her fill probability depends exponentially on the difference between her posted depths and the competition’s depths. For a linear-quadratic goal functional, we show that this model admits an approximate closed-form solution. We illustrate the features of our model and compare it against alternative ways of solving the problem, either via an Euler scheme or via state-of-the-art reinforcement learning techniques.
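Schematically (our notation), the exponential fill assumption takes the form

\[
\lambda^{\pm}\big(\delta_t^{\pm}\big) = A\,e^{-k\,(\delta_t^{\pm} - \bar{\delta}_t^{\pm})},
\]

where \(\delta^{\pm}\) are the reference market maker's posted bid and ask depths and \(\bar{\delta}^{\pm}\) the competitors' depths, so her fills depend exponentially on how far she undercuts or exceeds the competition.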
1220 – Milena Vuletic (Oxford)
VolGAN: A generative model for arbitrage-free implied volatility surfaces
We introduce VolGAN, a generative model for arbitrage-free implied volatility surfaces. The model is trained on time series of implied volatility surfaces and underlying prices, and is capable of generating realistic scenarios for the joint dynamics of the implied volatility surface and the underlying asset. We illustrate the performance of the model by training it on SPX implied volatility time series and show that it is able to learn the covariance structure of the co-movements in implied volatilities and generate realistic dynamics for the VIX volatility index. In particular, the generative model is capable of simulating scenarios with non-Gaussian distributions of increments for state variables as well as time-varying correlations. Finally, we illustrate the use of VolGAN to construct data-driven hedging strategies for option portfolios.
1245 – Lunch
1415 – John Armstrong (King's College London)
Collective Pensions in the UK
UK pensions have traditionally been restricted to defined benefit (DB) and defined contribution (DC) pensions, such as those in the USS pension plan. In 2015, legislation was passed that was intended to encourage new “collective benefit” pensions, and after further legislation was passed in 2021, the first UK “collective defined contribution” scheme was launched in October 2024. We will look at how collective defined contribution (CDC) schemes operate and how well they perform in practice. We will see that the design of CDC pensions is grounded in pricing methodologies based on discounting. We will propose an alternative design, grounded in the more rigorous risk-neutral pricing methodology, and find that, in our simulations, it outperforms CDC both in terms of pension benefits for typical generations and in terms of eliminating intergenerational cross-subsidies.
1455 – Coffee break
1515 – Leonardo Baggiani (Warwick)
(U, ρ)-arbitrage and sensitivity to large losses
In this talk, we revisit portfolio selection in a one-period financial market under a general reward-risk framework, where reward is modelled by a utility functional U and risk by a risk functional ρ. We show that it can happen that utility goes to infinity while the risk remains acceptable, a phenomenon we call (U, ρ)-arbitrage. We show that the absence of (U, ρ)-arbitrage in all financial markets is equivalent to either ρ being sensitive to large losses or U being weakly sensitive to large losses. Here, sensitivity to large losses means that financial positions with the potential of a loss either have an unacceptable risk or an upper utility bound when scaled by a sufficiently large factor. Moreover, if either ρ or U is law-invariant, we prove that the global absence of (U, ρ)-arbitrage is equivalent to ρ (respectively U) being (weakly) sensitive to large losses. Finally, we show that in the case of an S-shaped expected utility, the global presence of (U, ρ)-arbitrage is closely linked to ρ being sensitive to large expected losses. This is joint work with Martin Herdegen and Nazem Khan.
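Paraphrasing the abstract in symbols: a market admits \((U, \rho)\)-arbitrage if there exist attainable positions \((X_n)\) with

\[
\rho(X_n) \le 0 \quad \text{for all } n \qquad \text{and} \qquad U(X_n) \to \infty,
\]

i.e. utility can be pushed to infinity while the risk remains acceptable.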
1540 – Andreea Popescu (Warwick)
Equilibrium asset pricing with Epstein-Zin stochastic differential utility
We revisit the classical problem of equilibrium asset pricing in a continuous-time complete-market setting, in the case where investors’ preferences are described by Epstein-Zin stochastic differential utility. The market comprises a riskless bond and a risky asset, where the latter continuously pays a stochastic dividend stream. The equilibrium is characterised by a system of strongly coupled forward-backward stochastic differential equations (FBSDEs). This is joint work in progress with Martin Herdegen.
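For reference, Epstein-Zin stochastic differential utility is defined recursively by \(V_t = \mathbb{E}_t\big[\int_t^T f(c_s, V_s)\,\mathrm{d}s\big]\); one common normalisation of the aggregator, with risk aversion \(\gamma \neq 1\), elasticity of intertemporal substitution \(\psi \neq 1\) and discount rate \(\delta\) (conventions vary across the literature), is

\[
f(c, v) = \frac{\delta\,(1-\gamma)\,v}{1 - 1/\psi}\left[\left(\frac{c}{\big((1-\gamma)v\big)^{1/(1-\gamma)}}\right)^{1-1/\psi} - 1\right],
\]

which reduces to the standard additive CRRA aggregator when \(\psi = 1/\gamma\).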
1605 – Muhammad Alif Aqsha (Oxford)
Strategic learning and trading in broker-mediated markets
We study strategic interactions in a broker-mediated market. A broker provides liquidity to a strategic informed trader and to noise traders while managing inventory in the lit market. We characterise the closed-form strategies of the strategic agents when they maximise trading performance while filtering each other’s private information; the informed trader accounts for the broker’s trading activity, which is estimated from price impact, while the broker estimates the informed trader’s private signal. Brokers hold a strategic advantage over traders who rely solely on prices to filter information. Information leakage in the client trading flow enhances the broker’s internalisation/externalisation tradeoff; she speculates profitably and mitigates risk effectively, which, in turn, impacts the trading performance of informed traders. We show that the clients’ trading flow yields an economic value comparable to transaction costs. In contrast, if the broker employs low signal-to-noise sources of information, such as prices, then her trading performance is indistinguishable from that of a naive strategy that internalises the uninformed flow, externalises the informed flow, and offloads inventory at a constant rate. This is joint work with Leandro Sánchez-Betancourt and Fayçal Drissi.
1630 – Close