Plenary Talks
Sara Biagini (LUISS G. Carli)
Emission impossible: balancing environmental concerns and inflation.
This paper introduces a simple market equilibrium model to explore how policy measures targeting emission reductions impact prices (policy-induced inflation). The model provides a quantification of the potential rise in inflation as a direct consequence of more ambitious environmental policies. We examine the trade-offs between environmental ambitions and economic stability, determining the extent to which inflation can be tolerated in the pursuit of enhanced environmental outcomes. Joint work with Maria Arduca (LUISS), René Aid (Paris Dauphine) and Luca Taschini (Edinburgh and the LSE).
Luciano Campi (University of Milan)
Coarse correlated equilibria for continuous time mean field games and applications
We will consider coarse correlated equilibria in continuous-time mean field games. In games with finitely many players, these are generalizations of Nash equilibria in which a moderator (correlation device) can recommend strategies to the players that none of them finds it profitable to unilaterally reject. We will present existence and approximation results in a fairly general mean field game. In particular, the existence result will be based on an application of a minimax theorem to an auxiliary zero-sum game. We will also present a way to compute such equilibria explicitly in a linear-quadratic framework motivated by an emission abatement game. The talk will be based on two joint papers with F. Cannerozzi, F. Cartellier and M. Fischer.
Roxana Dumitrescu (ENSAE Paris)
A new Mertens decomposition of Y^{g,ξ}-submartingale systems and applications
We introduce the concept of Y^{g,ξ}-submartingale systems, where the nonlinear operator Y^{g,ξ} corresponds to the first component of the solution of a reflected BSDE with generator g and lower obstacle ξ. We first show that, in the case of a left-limited right-continuous obstacle, any Y^{g,ξ}-submartingale system can be aggregated by a process which is right-lower semicontinuous. We then prove a Mertens decomposition, using an original approach which does not rely on the standard penalization technique. These results are particularly useful for the treatment of control/stopping game problems and, to the best of our knowledge, they are completely new in the literature. We finally present two applications in finance (based on joint works with R. Elie, W. Sabbagh and C. Zhou).
Giorgio Ferrari (Universität Bielefeld)
Stationary Mean-field Games of Singular Control
In this talk, I will present recent and ongoing results on existence, uniqueness, and characterization of equilibria for stationary mean-field games with singular controls. This class of problems finds natural applications in Economics and Finance, such as in investment problems in oligopolies. In those games, the representative agent employs a bounded-variation control in order to maximize an ergodic profit functional depending on the long-time average of the controlled state-process. Several variants of the considered games will be presented, which will differ with respect to the dimension of the state-process and the employed equilibrium criterion.
H. Mete Soner (Princeton University)
Synchronization Games
Building on Winfree's work, the Kuramoto model (1975) has become the cornerstone of mathematical models of collective synchronization and has received attention across the natural sciences, engineering, and mathematics. While the classical model postulates the dynamics of each oscillator as a system of nonlinear ordinary differential equations, Yin, Mehta, Meyn, & Shanbhag (2010) use the mean-field game (MFG) formalism of Lasry & Lions and of Huang, Caines, & Malhamé. In this talk, in addition to the Yin et al. model, we also introduce a simpler two-state model which can be seen as a discretization of the original one. We outline results showing that the mean-field approach delivers the same type of results, including the phase transition from incoherence to synchronization. In particular, in the discrete setting we provide a comprehensive characterization of stationary and dynamic equilibria along with their stability properties. In all models, while the system is unsynchronized when the coupling is not sufficiently strong, fascinatingly, it exhibits an abrupt transition to full synchronization above a critical value of the interaction parameter. In the subcritical regime, the uniform distribution representing incoherence is the only stationary equilibrium. Above the critical interaction threshold, the uniform equilibrium becomes unstable and there is a multiplicity of self-organizing stationary equilibria. The discrete model with discounted cost presents dynamic equilibria that spiral around the uniform distribution before converging to the self-organizing equilibria. With an ergodic cost, however, unexpected periodic equilibria around the uniform distribution emerge.
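As a rough illustration of the abrupt transition described above, the classical noisy mean-field Kuramoto dynamics can be simulated directly. This is a minimal sketch, not the talk's MFG formulation; the parameter values are hypothetical and chosen only to sit on either side of the critical coupling.

```python
import numpy as np

def simulate_kuramoto(K, n=500, steps=4000, dt=0.01, sigma=0.5, seed=0):
    """Euler-Maruyama simulation of n noisy mean-field Kuramoto
    oscillators with coupling strength K; returns the final value of
    the synchronization order parameter r in [0, 1]."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, n)       # incoherent start
    for _ in range(steps):
        z = np.exp(1j * theta).mean()              # complex order parameter
        r, psi = np.abs(z), np.angle(z)
        drift = K * r * np.sin(psi - theta)        # attraction to mean phase
        theta += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    return float(np.abs(np.exp(1j * theta).mean()))

# For identical oscillators with noise level sigma, the critical
# coupling is K_c = sigma**2 (= 0.25 here): below it the population
# stays incoherent, above it the oscillators self-organize.
r_weak = simulate_kuramoto(K=0.1)
r_strong = simulate_kuramoto(K=4.0)
```

Below the critical coupling the order parameter stays near zero (finite-size fluctuations aside), while well above it the population locks to a common phase.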
Luitgard Veraart (London School of Economics and Political Science)
Systemic Risk in Markets with Multiple Central Counterparties
We provide a framework for modelling risk and quantifying payment shortfalls in cleared markets with multiple central counterparties (CCPs). Building on the stylised fact that clearing membership is shared among CCPs, we develop a modelling framework that captures the interconnectedness of CCPs and clearing members. We illustrate stress transmission mechanisms using simple examples as well as empirical evidence based on calibrated data. Furthermore, we show how stress mitigation tools such as variation margin gains haircutting by one CCP can have spillover effects on other CCPs. The framework can be used to enhance CCP stress-testing, which currently relies on the "Cover 2" standard requiring CCPs to be able to withstand the default of their two largest clearing members. We show that the identity of these two clearing members can change significantly once one considers higher-order effects arising from interconnectedness through shared clearing membership. Looking at the full network of CCPs and shared clearing members is therefore important from a financial stability perspective. This is joint work with Iñaki Aldasoro.
Contributed Talks
Robert Boyce (Imperial College London)
Unwinding Order Flow with Unknown Toxicity
We consider a central trading desk which aggregates the inflow of clients' orders with unobserved toxicity. The desk chooses either to internalise the inflow or to externalise it to the market in a cost-effective manner. In this model, externalising the order flow creates both price impact costs and an additional market feedback reaction to the outflow of trades. The desk's objective is to maximise the daily P&L subject to an end-of-day inventory penalisation. We formulate this setting as a partially observable stochastic control problem and solve it in two steps. First, we derive the filtered dynamics of the inventory and toxicity, projected onto the observed filtration, which turns the stochastic control problem into a fully observed problem. Then we use a variational approach to derive the unique optimal trading strategy. We illustrate our results for various scenarios in which the desk faces momentum and mean-reverting inflows.
Galen Cao (University of Edinburgh)
Regret bounds for learning the price sensitivity of market participants in the Avellaneda-Stoikov market making model
We analyse the regret arising from learning the price-sensitivity parameter of market participants in the ergodic setting of the Avellaneda-Stoikov market making model. We show that a learning algorithm based on a regularised maximum-likelihood estimator for the parameter achieves a regret upper bound of order O((ln T)^2) with high probability. To obtain the result we need two key ingredients. The first is tight upper bounds on the derivative of the ergodic constant in the Hamilton-Jacobi-Bellman (HJB) equation with respect to the price-sensitivity parameter. The second is the learning rate of the maximum-likelihood estimator, which is obtained from concentration inequalities for Bernoulli signals. Numerical experiments confirm the convergence and the robustness of the proposed algorithm.
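As a toy illustration of the estimator's ingredients (not the algorithm analysed in the talk), one can fit the price-sensitivity parameter κ of a hypothetical exponential fill probability p(δ) = A·exp(-κδ) from Bernoulli fill indicators by regularised maximum likelihood. The constant A, the candidate grid, and the ridge penalty are all assumptions of this sketch.

```python
import numpy as np

def fill_prob(kappa, delta, A=0.9):
    """Hypothetical per-quote fill probability p(delta) = A*exp(-kappa*delta)."""
    return A * np.exp(-kappa * delta)

def regularised_mle(deltas, fills, kappa_grid, reg=1e-3):
    """Regularised maximum-likelihood estimate of kappa from Bernoulli
    fill indicators, via a grid search over candidate values."""
    best_k, best_ll = None, -np.inf
    for k in kappa_grid:
        p = np.clip(fill_prob(k, deltas), 1e-10, 1.0 - 1e-10)
        ll = np.sum(fills * np.log(p) + (1.0 - fills) * np.log(1.0 - p))
        ll -= reg * k**2                   # ridge-style penalty on kappa
        if ll > best_ll:
            best_k, best_ll = k, ll
    return best_k

rng = np.random.default_rng(1)
kappa_true = 1.5
deltas = rng.uniform(0.1, 2.0, 20000)      # quoted distances from mid-price
fills = (rng.random(20000) < fill_prob(kappa_true, deltas)).astype(float)
kappa_hat = regularised_mle(deltas, fills, np.linspace(0.5, 3.0, 251))
```

With this many observations the estimate concentrates tightly around the true parameter, in line with the Bernoulli concentration inequalities mentioned above.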
Jun Cheng (London School of Economics and Political Science)
Duality theory for utility maximisation in Volterra kernel models for transient price impact
Incorporating the impact of past trades on future prices is crucial to understanding the profitability of trading strategies. Recently, Volterra kernels have been proposed as tractable models to capture this transient price impact in optimal execution problems. In this talk, we consider expected utility maximisation in such Volterra kernel transient price impact models. We solve this problem by convex duality, establishing the solution to a suitable dual problem. To this end, we identify an appropriate class of dual variables and develop a novel super-replication theorem. Our results generalise earlier work on optimal execution to non-linear utility maximisation problems, as well as previous results on convex duality with price impact to Volterra kernel models for transient price impact. The talk is based on joint work with Christoph Czichowsky.
Nikolaos Constantinou (University of Warwick)
Equilibria in incomplete markets – an FBSDE approach
Starting with a complete-market specification, we study equilibrium asset pricing over an infinite time horizon in an incomplete market, where the incompleteness stems from an extra source of randomness in the dividend stream. We consider two heterogeneous agents with either CARA or CRRA preferences. In both cases, the equilibrium condition leads to a system of strongly coupled forward-backward stochastic differential equations (FBSDEs). This talk is based on joint work in progress with Martin Herdegen.
Carla Crucianelli (Princeton University)
Interacting particle systems on graphs and their graphon limit
We consider a general interacting particle system with interactions on a random graph and study the large-population limit of this system. When the sequence of underlying graphs converges to a graphon, we show convergence of the interacting particle system to a so-called graphon stochastic differential equation. This is a system of uncountably many SDEs of McKean-Vlasov type driven by a continuum of Brownian motions. We make sense of this equation in a way that retains joint measurability and essential pairwise independence of the driving Brownian motions by using the framework of Fubini extensions. The convergence result is general enough to cover nonlinear interactions as well as various examples of sparse graphs.
Anna De Crescenzo (Université Paris Cité)
Nonlinear Graphon mean-field systems
We address a system of weakly interacting particles where the heterogeneous connections among the particles are described by a graph sequence and the number of particles grows to infinity. Our results extend the existing law of large numbers and propagation of chaos results to the case where the interaction between one particle and its neighbours is expressed as a nonlinear function of the local empirical measure. If the graph sequence converges to a graphon as the number of particles tends to infinity, we show that the limit system is described by an infinite collection of processes and can be seen as a process in a suitable L^2 space constructed via a Fubini extension. The proof is built on decoupling techniques and careful estimates of the Wasserstein distance.
Robert Denkert (Humboldt-Universität zu Berlin)
A randomisation method for mean-field control problems with common noise
We study mean-field control (MFC) problems with common noise using the randomisation method. In this approach, we substitute the control process with an independent Poisson point process, whose intensity we control instead. Consequently, the state dynamics become uncontrolled, and we optimise over a set of equivalent probability measures. We demonstrate the equivalence of the randomised problem to the original MFC problem. Furthermore, we obtain a representation of the value function as the minimal solution to a backward stochastic differential equation with constrained jumps. This leads to a randomised dynamic programming principle expressed as a supremum over equivalent probability measures. This presentation is based on joint work with Idris Kharroubi and Huyên Pham.
Fabian Fuchs (Bielefeld University)
A comparison principle based on couplings of partial integro-differential operators
We present a comparison principle for viscosity solutions to abstract Hamilton-Jacobi-Bellman and Isaacs equations, based on the notion of a coupling for integro-differential operators. Examples covered by our setup include nonlinear first- and second-order partial differential equations as well as non-local equations such as partial integro-differential equations. In a first step, we introduce the notion of a coupling for operators defined on spaces of continuous functions, discuss the relation to optimal transport and couplings of probability measures, and illustrate the concept for generators of Brownian motions and pure jump processes. In a second step, we provide some intuition on the use of couplings in the proof of the comparison principle and apply the abstract results to problems appearing in the context of stochastic optimal control and robust finance.
Felix Höfer (Princeton University)
Potential Games and Gradient Flows
Potential mean-field games (MFGs) are games that arise as first-order conditions of mean-field control problems. We provide a result that links minimizers of a generic mean-field control problem to Nash equilibria of a potential MFG. While traditional proofs of this fact rely either on the Fenchel-Rockafellar duality theorem or on the Pontryagin maximum principle, we are able to considerably weaken existing regularity assumptions by using a direct probabilistic argument. Our approach extends to state dynamics with jumps and common noise, and we explain how MFGs of extended type appear when cost functionals are not separable. Finally, we make the connection to certain Wasserstein gradient flows, a link that has been leveraged to obtain prominent examples of mean-field games such as the Kuramoto MFG and flocking MFG models.
Damian Jelito (Jagiellonian University)
Impact of non-exponential discounting on long-run impulse control problems
In the behavioural finance literature, there is a well-established conclusion that classical exponential discounting does not properly reflect the time preferences of real economic agents. In particular, real agents usually exhibit decreasing impatience, which cannot be modelled by the classical approach. However, replacing the exponential discount function with some other map usually leads to very complicated models with highly non-linear optimality equations and time-inconsistent optimal strategies. In this talk, we discuss the properties of long-run impulse control problems when a non-exponential discount function is applied. Using a generic framework of Feller-Markov processes, we solve the associated optimality equation and construct an optimal strategy. We also compare the results with the undiscounted case and show that, under very natural assumptions, the optimal values of the discounted and undiscounted problems are equal and there is a direct link between the optimal strategies for both frameworks. This shows that the complicated discounted problem can be significantly simplified and the resulting optimal strategy is time-consistent. The talk is based on the paper D. Jelito, Ł. Stettner (2024), Impulse control with generalised discounting, SIAM Journal on Control and Optimization 62(2).
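The decreasing-impatience pattern mentioned above can be made concrete with a short numerical check: under exponential discounting the one-period discount ratio is constant in time, while under a hyperbolic discount function (a standard textbook example, chosen here with hypothetical parameters) it increases, i.e. agents become more patient about trade-offs further in the future.

```python
import numpy as np

def exponential(t, r=0.1):
    """Classical exponential discount function D(t) = exp(-r*t)."""
    return np.exp(-r * t)

def hyperbolic(t, beta=0.2):
    """Hyperbolic discount function D(t) = 1/(1 + beta*t), a standard
    example exhibiting decreasing impatience."""
    return 1.0 / (1.0 + beta * t)

# One-period patience ratio D(t+1)/D(t): constant under exponential
# discounting, but strictly increasing under hyperbolic discounting,
# which is exactly the decreasing-impatience pattern of real agents.
t = np.arange(0, 10)
exp_ratio = exponential(t + 1) / exponential(t)
hyp_ratio = hyperbolic(t + 1) / hyperbolic(t)
```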
Philipp Jettkant (Imperial College London)
A Forward-Backward Approach to Endogenous Distress Contagion
In this talk, I will introduce a dynamic model of a banking network in which the value of interbank obligations is continuously adjusted to reflect counterparty default risk. An interesting feature of the model is that the credit value adjustments increase volatility in times of distress, leading to endogenous default contagion between the banks. The counterparty default risk can be computed backwards in time from the maturity date, leading to a specification of the model in terms of a forward-backward stochastic differential equation (FBSDE), coupled through the banks' default times. Although the FBSDE generically exhibits non-uniqueness, one can prove the existence of solutions with minimal and maximal default probabilities. I will conclude the talk by discussing a characterisation of the maximal default probabilities through a cascade of partial differential equations (PDEs), each representing a configuration with a different number of defaulted banks. The domain of each PDE has a free boundary that coincides with the banks' default thresholds.
Fritz Krause (Technische Universität Berlin)
Trading with jump information and price impact
We extend the transient price impact model of Bank and Dolinsky (2019) by introducing jumps to the fundamental asset price and market depth. This setting demands a careful treatment of the order in which information is revealed and a trader acts. We use Meyer sigma-fields to describe a trader's information flow and thereby stipulate the measurability of investment strategies. These strategies, làdlàg processes of bounded variation, specify the precautionary action a trader may take upon receiving information about imminent jumps and the reaction after a jump has occurred. By using suitable integrals for làdlàg integrators we formalize the impact on the spread as well as the wealth dynamics, whose concavity is recovered. This allows us to study convex duality for superhedging, where we generalize the result of Bank and Dolinsky (2019) to our setting with jumps. In addition, we discuss duality for utility maximization in this singular stochastic control setting.
Giacomo Lanaro (University of Padova)
Price formation under asymmetry of information - a mean-field approach
In financial markets, quantifying the information possessed by an agent trading an asset is a crucial task, especially when the amount of information accessible to each player is not homogeneous. Our purpose is to study the behaviour of an equilibrium price p determined by the market clearing condition (the match between demand and supply) in a market populated by agents who observe different amounts of information. We focus on a market model in which one asset is traded by N less informed agents and one major agent. We derive an equation for the equilibrium price under which the market clearing condition is satisfied. We show that, by observing the equilibrium price process p, the less informed players bridge the gap in the amount of available information. However, the equation for p is not tractable in the case of a market populated by N+1 agents. Hence, we study the mean-field limit of p, considering an infinite number of less informed small agents. We prove the existence of a mean-field solution to the equation for the price process p as N goes to infinity, applying techniques related to the existence of weak mean-field game equilibria, based on the discretization of the common source of stochasticity shared by every agent. Finally, we show that the mean-field limit of the price p guarantees a weak form of the market clearing condition for the game with N+1 players.
Emmet Lawless (Dublin City University)
Consumption with stochastic investment opportunities
We investigate the equivalence between the investment-consumption problem with isoelastic preferences over an infinite horizon and an associated variational problem when the underlying market is complete. We focus on the case of a single state variable on which all model coefficients may depend. Under some mild assumptions we prove that the utility maximisation problem is equivalent to solving a tractable, convex variational problem which is far more amenable to numerical methods. This approach circumvents the need to solve the associated Hamilton-Jacobi-Bellman (HJB) equation, which even in relatively simple models can be an extraordinarily difficult task. In addition, we discuss progress on the high-dimensional case, wherein the state variables follow a multivariate diffusion process, and highlight how this approach may be extended to tackle the incomplete-market case.
Daria Sakhanda (ETH Zürich)
Optimal Consumption Policy in a Carbon-Conscious Economy: A Machine Learning Approach
Due to the significant carbon emissions generated by various sectors of the economy, fast economic growth can hinder efforts to combat climate change. We study this trade-off by considering an optimal control problem based on the single-good economy model of Borissov/Bretschger (2022) in discrete time. There, a social planner looks for the optimal consumption policy while simultaneously ensuring that the economy grows and that overall emissions do not breach a given climate budget. We use a machine learning approach to find an approximate optimal solution to the social planner's control problem. In addition, we present a formal proof demonstrating that the solution of the finite-horizon problem converges to the solution of the infinite-horizon problem. To integrate the transmission of economic fluctuations into our analysis, we also consider the stochastic version of the model.
Nathan Sauldubois (École Polytechnique)
First order Martingale model risk hedging
This joint work with Nizar Touzi is concerned with the sensitivity of functionals of measures under the martingale constraint. Following the work of [1] and [2], we study sensitivity analysis under the classical Wasserstein distance and the adapted Wasserstein distance. In each case, we propose a new approach to the problem, allowing us to consider more general functionals on the Wasserstein space. The counterpart is that we require more regularity of the functional.
This approach also allows us to obtain second-order estimates for the adapted Wasserstein metric in the martingale case. In the unconstrained case we obtain higher-order expansions for both the classical and the adapted Wasserstein metric.
[1] Daniel Bartl and Johannes Wiesel. Sensitivity of multiperiod optimization problems in adapted Wasserstein distance. SIAM Journal on Financial Mathematics. 14(2):704-720, 2023.
[2] Daniel Bartl, Samuel Drapeau, Jan Obłój, and Johannes Wiesel. Sensitivity analysis of Wasserstein distributionally robust optimization problems. Proc. R. Soc. A, 477:20210176, 2021.
Yuchen Sun (Humboldt-Universität zu Berlin)
Rough backward SDEs of Marcus-type with discontinuous Young drivers
We study backward differential equations that are jointly driven by Brownian martingales B and a deterministic discontinuous rough path W of q-variation for q<2. Jumps are integrated in the geometric sense, in the spirit of Marcus-type stochastic differential equations. Local existence and uniqueness are shown through a direct fixed-point argument. By developing a comparison theorem, we derive an a priori bound on the solution, which allows us to obtain a unique global solution of the differential equation. If time permits, we will further establish the continuity of the rough backward SDE solution with respect to the terminal condition and the driving rough noise in a Skorokhod-type norm. A direct consequence is the connection to backward doubly stochastic differential equations.
Topias Tolonen-Weckström (Uppsala University)
Irreversible investment with learning-by-doing
We study a model of irreversible investment for a decision-maker who has the possibility to invest in a project with unknown profitability. In this setting, we introduce and explore a feature of "learning-by-doing", where the learning rate of the unknown profitability is increasing in the decision-maker's level of investment in the project. Our investment problem is formulated as a singular control problem with incomplete information. We show that, under some conditions on the functional dependence of the learning rate on the level of investment (the so-called "signal-to-noise ratio"), the optimal strategy is to invest gradually in the project so that a two-dimensional sufficient statistic is reflected below a monotone boundary. Moreover, this boundary can be characterised as the solution of a differential problem. We study the boundary and additionally present a discrete-time counterpart of the problem. The talk is based on joint work with Erik Ekström, Alessandro Milazzo, and Yerkin Kitapbayev.
Theresa Traxler (Vienna University of Economics and Business)
Playing with Fire? A Mean Field Game Analysis of Fire Sales and Systemic Risk under Regulatory Capital Constraints
We study the impact of regulatory capital constraints on fire sales and financial stability in a large banking system using a Mean Field Game of Control (MFGC) model. In our model, banks adjust their holdings of a risky asset via trading strategies with finite trading rate in order to maximize expected profits. Moreover, a bank is liquidated if it violates a stylized regulatory capital constraint. We assume that the drift of the asset value is affected by the average change in the position of the banks in the system. This creates strategic interaction between the trading behaviors of the banks and thus leads to an MFGC. The problem can be translated into a system of coupled PDEs: the dynamic programming equation for the optimal strategy of a bank and the forward equation for the evolution of the distribution of banks' characteristics. We solve this system explicitly for a test case without regulatory constraints and numerically for both the unregulated and the regulated case. We compare the results and find that capital constraints can lead to a systemic crisis in which a substantial proportion of the banking system defaults simultaneously. Moreover, we discuss proposals from the literature on macroprudential regulation. In particular, we show that in our setup a systemic crisis does not arise if the banking system is sufficiently well capitalized.
Sturmius Tuschmann (Imperial College London)
Optimal Portfolio Choice with Cross-Impact Propagators
We consider a class of optimal portfolio choice problems in continuous time where the agent's transactions create both transient cross-impact, driven by a matrix-valued Volterra propagator, and temporary price impact. We formulate this problem as the maximization of a revenue-risk functional, where the agent also exploits available information from a progressively measurable price-predicting signal. We solve the maximization problem explicitly in terms of operator resolvents, by reducing the corresponding first-order condition to a coupled system of stochastic Fredholm equations of the second kind and deriving its solution. We then give sufficient conditions on the matrix-valued propagator under which the model does not permit price manipulation. We also provide an implementation of the solutions to the optimal portfolio choice problem and to the associated optimal execution problem. Our solutions yield financial insights on the influence of cross-impact on the optimal strategies and its interplay with alpha decays.
Yuwei Wang (University of Warwick)
Portfolio Optimization under Time Risk Preference
As an analogue to the utility function of wealth in the classical expected utility framework, the discount function is proposed as a crucial performance criterion for measuring an individual's preferences or satisfaction regarding time risk: the risk associated with events occurring sooner or later. Following this, we introduce a new class of portfolio selection problems in continuous time, where the objectives of the investors are to attain goals at the maximum expected discounts. We solve the associated Hamilton-Jacobi-Bellman (HJB) equation explicitly when the investment benchmark is a constant and the investment horizon is infinite. We characterise the admissible class of discount functions that yield control problems with sufficiently smooth value functions. For discount functions not belonging to this class, we discuss the theory of viscosity solutions adapted to our problem. Additionally, in scenarios where agents do not impose a specific time constraint for achieving their investment targets, we provide analytical and numerical examples. These examples help analyse the investment behaviour of investors with diverse time preferences. The theory and techniques developed in this paper enable us to investigate control problems involving various time-inconsistent models of delay discounting.
Niklas Weber (LMU Munich)
Graph Neural Network Methods for Systemic Risk Management
This paper investigates systemic risk measures in stochastic financial networks of explicitly modelled bilateral liabilities. We extend the notion of systemic risk from Biagini, Fouque, Frittelli and Meyer-Brandis (2019) to graph-structured data. This means that the systemic risk of a stochastic financial network is defined as the minimal amount of bailout capital needed to make the aggregated loss of the system acceptable in the sense of some univariate risk measure. One suitable aggregation function can be derived from the market clearing algorithm proposed by Eisenberg and Noe (2001). In this setting we show the existence of optimal random bailout capital allocations that distribute the minimal bailout capital and save the network. Further, we study numerical methods for the approximation of systemic risk and of optimal allocations of the bailout capital. We propose graph neural networks (GNNs) for computing approximately optimal bailout capital and compare their performance to several benchmark allocations. One feature of GNNs is that they respect the permutation equivariance of the underlying graph data. In numerical experiments we find evidence that methods respecting permutation equivariance are superior to other approaches.
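The Eisenberg-Noe clearing mechanism used as an aggregation function above can be sketched as a simple fixed-point iteration; the two-bank balance sheet at the bottom is a hypothetical example.

```python
import numpy as np

def clearing_vector(L, e, tol=1e-10, max_iter=1000):
    """Clearing payment vector of Eisenberg and Noe (2001) computed by
    fixed-point iteration. L[i, j] is the nominal liability of bank i
    to bank j; e[i] is bank i's outside assets."""
    p_bar = L.sum(axis=1)                          # total nominal obligations
    Pi = np.divide(L, p_bar[:, None],              # relative liability matrix
                   out=np.zeros_like(L), where=p_bar[:, None] > 0)
    p = p_bar.copy()
    for _ in range(max_iter):
        # each bank pays the lesser of what it owes and what it holds
        p_new = np.minimum(p_bar, e + Pi.T @ p)
        if np.max(np.abs(p_new - p)) < tol:
            break
        p = p_new
    return p

# Hypothetical two-bank example: bank 0 owes 10 to bank 1 but holds
# only 4 in outside assets; bank 1 owes 5 to bank 0 and holds 3.
L = np.array([[0.0, 10.0], [5.0, 0.0]])
e = np.array([4.0, 3.0])
p = clearing_vector(L, e)      # bank 0 can only pay 9 of its 10 owed
```

The resulting shortfall p_bar - p is one natural input to an aggregated-loss functional of the kind described in the abstract.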
Huilin Zhang (Humboldt-Universität zu Berlin and Shandong University)
Stochastic Optimal Control with Rough Drivers
In this talk I will introduce the study of optimal control for systems with rough drivers. The stochastic optimal control of such systems is motivated by pathwise stochastic control, which describes the situation in which an investor has his or her own private information about the market. Such systems also cover systems with singular drivers. I will talk about the stochastic maximum principle and dynamic programming for such systems. The talk is based on joint works with P. Friz, U. Horst, M. Grillo and K. Lê.
Rouyi Zhang (Humboldt-Universität zu Berlin)
Hawkes-based microstructure of rough volatility model with sharp rise
We consider the microstructure of a stochastic volatility model incorporating both market and limit orders. In our model, the volatility is driven by self-exciting arrivals of market orders as well as self-exciting arrivals of limit orders, which are modelled by Hawkes processes. The impact of market orders on future order arrivals is captured by a Hawkes kernel with power-law decay, and is hence persistent. The impact of limit orders on future order arrivals is temporary, yet possibly long-lived. After suitable scaling, the volatility process converges to a fractional Heston model driven by an additional Poisson random measure. The random measure generates occasional spikes in the volatility process, which resemble the clustering of small jumps in the volatility process that has frequently been observed in the financial economics literature. Our results are based on novel uniqueness results for stochastic Volterra equations driven by a Poisson random measure and for non-linear fractional Volterra equations. The presentation is based on joint work with Ulrich Horst and Wei Xu.
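Self-exciting order arrivals of the kind described above can be simulated with Ogata's thinning algorithm. The sketch below uses an exponential kernel and hypothetical parameters purely for brevity, whereas the talk's model features power-law kernels.

```python
import numpy as np

def simulate_hawkes(mu, alpha, beta, T, seed=0):
    """Ogata thinning for a self-exciting Hawkes process with intensity
    lambda(t) = mu + sum_{t_i < t} alpha * exp(-beta * (t - t_i))."""
    rng = np.random.default_rng(seed)
    events, t = [], 0.0
    while True:
        # the current intensity is a valid upper bound until the next
        # event, since the exponential kernel only decays in between
        lam_bar = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        t += rng.exponential(1.0 / lam_bar)
        if t >= T:
            break
        lam_t = mu + sum(alpha * np.exp(-beta * (t - s)) for s in events)
        if rng.random() < lam_t / lam_bar:         # accept candidate arrival
            events.append(t)
    return np.array(events)

# Stationarity requires a branching ratio alpha/beta < 1; the mean
# intensity is then mu / (1 - alpha/beta), i.e. 5/3 per unit time here.
events = simulate_hawkes(mu=1.0, alpha=0.8, beta=2.0, T=200.0)
```

Each accepted arrival raises the intensity, producing the clustered bursts of activity that, under the scaling limit of the talk, feed the spikes of the volatility process.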
Yuliang Zhang (London School of Economics and Political Science)
A network approach to macroprudential buffers
I use network modelling of systemic risk to set macroprudential buffers from an operational perspective. I focus on the countercyclical capital buffer, an instrument designed to protect the banking sector from periods of excessive growth associated with a build-up of system-wide risk. I construct an indicator of financial vulnerability with a model of fire sales, which captures the spillover losses in the system caused by deleveraging and joint liquidation of illiquid assets. Using data on U.S. bank holding companies, I show that the indicator is informative about the build-up of vulnerability and can be useful for setting the countercyclical capital buffer.
Posters
Samira Amiriyan (University of Liverpool)
Learning Price Function Under No Arbitrage
The challenge of option pricing has long been a significant issue in finance. Many attempts have been made to address this problem, leading to three primary methods: 1. transform techniques, 2. numerical solutions of partial differential equations (PDEs), and 3. simulation methods, including Monte Carlo techniques.
With the advent of machine learning, a fourth method has been introduced. In 1993, Malliaris and Salchenberger [1] pioneered the use of artificial neural networks (ANNs) for option pricing. Since then, over a hundred studies have been published exploring this approach. Our objective is to learn the call price function C = C(T, K) from empirical data, independently of traditional models, under no-arbitrage conditions, using neural networks.
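The no-arbitrage conditions on a call surface C(T, K) are simple shape constraints that a learned price function should satisfy. A minimal sketch of the static checks, on invented price data (not from the poster), assuming zero interest rates:

```python
import numpy as np

def check_no_arbitrage(K, C_short, C_long, tol=1e-10):
    """Static no-arbitrage checks for call prices at strikes K and two
    maturities T1 < T2 (C_short at T1, C_long at T2), zero rates assumed."""
    return {
        "strikes_increasing": np.all(np.diff(K) > 0),
        # Calls are non-increasing in strike.
        "monotone_in_K": np.all(np.diff(C_long) <= tol),
        # Calls are convex in strike: butterfly spreads have non-negative value.
        "convex_in_K": np.all(np.diff(C_long, 2) >= -tol),
        # Calendar spreads: prices non-decreasing in maturity.
        "monotone_in_T": np.all(C_long - C_short >= -tol),
    }

# Hypothetical arbitrage-free call prices at two maturities.
K = np.array([80.0, 90.0, 100.0, 110.0, 120.0])
C_T1 = np.array([21.0, 12.5, 6.0, 2.2, 0.6])
C_T2 = np.array([22.5, 14.6, 8.4, 4.3, 1.9])
res = check_no_arbitrage(K, C_T1, C_T2)
print(res)
```

In a learning setup, violations of these checks can either be penalised in the training loss or enforced by the network architecture; the poster does not specify which, so this is only the constraint side of the problem.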
Leonardo Baggiani (University of Warwick)
(U,ρ)-arbitrage and sensitivity to large losses
We revisit portfolio selection in a one-period financial market under a general reward-risk framework, where reward is modeled by a utility functional U and risk by a risk functional ρ. We show that it can happen that utility goes to infinity while the risk remains acceptable. We call this phenomenon (U,ρ)-arbitrage. We show that if either U or ρ is law-invariant, the absence of (U,ρ)-arbitrage for all financial markets is equivalent to either ρ or U being weakly sensitive to large losses. Here, weak sensitivity to large losses means that financial positions with the potential of a loss either have an unacceptable risk or an upper utility bound when scaled by a sufficiently large factor. Moreover, for a given financial market and in the case that U is concave and ρ is convex (but not necessarily cash-additive), we provide a dual characterisation in terms of equivalent martingale measures and dual objects linked to U and ρ. This is a joint work with Martin Herdegen and Nazem Khan.
Stefanie Hesse (Humboldt-Universität zu Berlin)
Common Noise by Random Measures: Mean-Field Equilibria for Competitive Investment and Hedging
We present a paper on mean-field games with common noise, where the common noise incorporates Poisson random measures in addition to Brownian noise, rather than the latter alone as is usual in the literature. We consider mean-field portfolio games of optimal investment and hedging under competitive performance concerns, in terms of relative exponential utility maximization. We show a one-to-one correspondence between mean-field equilibria and solutions of McKean-Vlasov backward stochastic differential equations with jumps (MKV-JBSDEs). Our characterization employs an original change to a minimal relative entropy martingale measure. Using this, we prove a one-to-one relation between the MKV-JBSDE and a plain backward stochastic differential equation with jumps (JBSDE), for which we establish well-posedness. Thereby we prove existence and uniqueness of the mean-field equilibrium without requiring the competition weight parameter to be small, that is, without a so-called weak interaction condition.
Wilfried Kenmoe Nzali (Weierstrass Institute for Applied Analysis and Stochastics, Berlin)
Volatile Electricity Market and Battery Storage
Today's electricity markets are extremely volatile, and battery storage systems can play a crucial role in reducing consumption costs: they allow energy to be stored during low-price periods and used when prices are higher. In this work, we develop an optimal control problem that combines battery models with stochastic electricity price models. Our goal is to determine optimal strategies for charging and discharging battery storage systems so as to minimize electricity consumption costs.
Xiaohang Ma (University of Connecticut)
Numerical Solutions of Optimal Stopping Problems for A Class of Hybrid Stochastic Systems
We have developed a numerical scheme tailored to a class of optimal stopping problems associated with stochastic hybrid systems, which involve both continuous states and discrete events. This work is driven by diverse applications in various fields, including option pricing in financial markets, quickest detection in engineering systems, and beyond. To tackle these problems, we implemented feasible algorithms constructed through Markov chain approximation techniques. Our primary tasks involved designing and constructing discrete-time Markov chains that align closely with the switching diffusions, ensuring the convergence of appropriately scaled sequences, and verifying the convergence of both cost and value functions. We present numerical results demonstrating the efficacy of our algorithms and their practical utility in addressing complex optimal stopping problems pertinent to decision-making and option pricing.
Berenice Anne Neumann (Universität Trier)
Markovian randomized equilibria for general Markovian Dynkin games in discrete time
We study a general formulation of the classical two-player Dynkin game in a Markovian discrete-time setting. We show that an appropriate class of mixed, i.e., randomized, strategies in this context are Markovian randomized stopping times, which correspond to stopping at any given state with a state-dependent probability. One main result is an explicit characterization of Wald-Bellman type for Nash equilibria based on this notion of randomization. In particular, this provides a novel characterization of randomized equilibria for the zero-sum game, which we use, e.g., to establish a new condition for the existence and construction of pure equilibria, to obtain necessary and sufficient conditions for the non-existence of pure strategy equilibria, and to construct an explicit example with a unique mixed, but no pure, equilibrium. We also provide existence and characterization results for the symmetric specification of our game. Finally, we establish existence of a characterizable equilibrium in Markovian randomized stopping times for the general game formulation under the assumption that the state space is countable.
Beatrice Ongarato (University of Padova)
Semi-static variance-optimal hedging with self-exciting jumps
The aim of this work is to study a hedging problem in an incomplete market model where the underlying log-asset price is driven by a diffusion process with self-exciting jumps of Hawkes type. We hedge a variance swap (target claim) at time T > 0 using a basket of European options (contingent claims). We investigate a semi-static variance-optimal hedging strategy, combining dynamic (i.e., continuously rebalanced) and static (i.e., buy-and-hold) positions to minimize the residual error variance at T. The semi-static strategy has already been computed in the literature for different models; the purpose of our work is to solve the hedging problem for a previously unexplored model featuring self-exciting jumps of Hawkes type. The key aspect of our work is the generality of our framework, both in the hedging problem and in the model investigated. Moreover, research into models with self-exciting jumps is significant, as it has been observed that prices in financial markets (e.g. commodity markets) exhibit spikes with clustered behavior. In our work, we establish and analyze our model, studying its properties as an affine semimartingale. We characterize its Laplace transform to rewrite contingent claims using a Fourier transform representation. We finally obtain a semi-explicit expression for the hedging strategy. A possible further development concerns the optimal selection of static hedging assets.
Ivo Richert (Christian-Albrechts-Universität zu Kiel)
Quasi-Maximum Likelihood Estimation of Partially Observed Affine and Polynomial Processes
Despite the computational tractability and versatility of affine and polynomial processes in modelling real-world phenomena, existing theory on their statistical estimation is surprisingly sparse and has so far focused only on specific examples of polynomial diffusions. Moreover, many practical applications, such as stochastic volatility or other latent-factor models from financial mathematics, lack full observability of the components of the employed polynomial process, vitiating many classic statistical estimation methodologies. We close this gap by developing a general framework for estimating affine and polynomial processes partially observed at discrete points in time. This is achieved by developing a canonical discrete-time representation of polynomial processes in the form of a vector-autoregressive model, and then approximating the transition dynamics of this model by those of a Gaussian process with matched first and second moments using the popular Kalman filter. We establish weak consistency and asymptotic normality of the resulting quasi-maximum likelihood estimators and derive easily computable explicit expressions for the asymptotic estimator covariance matrix. In addition, we illustrate our results using the popular Heston stochastic volatility model from financial mathematics as well as multivariate Lévy-driven Ornstein-Uhlenbeck processes.
Jasper Rou (Delft University of Technology)
Convergence of Deep Gradient Flow Methods for Option Pricing
In this research, we consider the convergence of neural network algorithms for option pricing partial differential equations (PDEs). More specifically, we consider a time-stepping deep gradient flow method, where the PDE is discretized in time and each step is written as the solution of a variational minimization problem. A neural network approximation is then trained to solve this minimization using stochastic gradient descent. This method reduces the training time compared to, for instance, the Deep Galerkin Method. We prove two things. First, that as the number of nodes of the network goes to infinity, there exists a neural network converging to the solution of the PDE. This proof consists of three parts: 1) convergence of the time stepping; 2) equivalence of the discretized PDE and the minimization of the variational formulation; and 3) convergence of the neural network approximation to the solution of the minimization problem, using a version of the universal approximation theorem. Second, that as the training time goes to infinity, stochastic gradient descent converges to the neural network that solves the PDE.
Bud Schiphorst (University of Amsterdam)
A Structural Credit Risk Model with Default Contagion
Structural threshold models are common industry practice for modelling portfolio credit risk, but often only consider default dependence via underlying common factors. That is, the default events of obligors are often assumed to be conditionally independent given underlying common factors, such as macroeconomic or industry-specific risk drivers. Another important form of dependence may, however, arise due to default contagion effects, in which an increase in default risk of one obligor directly causes an increase in default risk of another obligor. In corporate parent-subsidiary relationships, for example, increased default risk can propagate from a parent company to a subsidiary. As another example, increased default risk of a sovereign issuer may propagate to entities operating in the same country. We propose a structural threshold model that incorporates both indirect default dependence via underlying common factors and direct default contagion effects. The model specifically allows for the special case where the default of one obligor guarantees the default of another, but also allows default risk to partially propagate from or to multiple different obligors. As a key contribution, we outline a procedure to estimate the contagion parameters from default probability data. Once calibrated, the model can easily be used for simulation of portfolio losses, similar to the structural threshold models used in practice. Based on a simulation study, we illustrate that ignoring default contagion effects may cause significant underestimation of credit portfolio tail risk. This risk is relatively well captured by using estimated default contagion parameters.
Henrik Valett (Christian-Albrechts-Universität zu Kiel)
Parameter estimation for polynomial processes
We consider parameter estimation for discretely observed generic polynomial (and in particular affine) Markov processes, which are often used in mathematical finance, e.g. in the form of the popular Heston model or of (exponential) Lévy-driven models. Our approach is based on quasi-likelihood methods. Specifically, we consider polynomial martingale estimating functions up to a certain degree. Within this class, the Heyde-optimal estimating function can be computed in closed form. This allows us to derive consistency and asymptotic normality, based on results from [1] and the ergodic theory for Markov processes.
Caisheng Wang (University of Warwick)
Callable convertible bonds under liquidity constraints and hybrid priorities
This paper investigates the callable convertible bond problem in the presence of a liquidity constraint modelled by Poisson signals. We assume that neither the bondholder nor the firm has absolute priority when they stop the game simultaneously, but instead, a proportion m in [0,1] of the bond is converted to the firm's stock and the rest is called by the firm. The paper thus generalizes the special case studied in [G. Liang and H. Sun, Dynkin games with Poisson random intervention times, SIAM Journal on Control and Optimization, 57 (2019), pp. 2962–2991] where the bondholder has priority (m=1), and presents a complete solution to the callable convertible bond problem with liquidity constraint. The callable convertible bond is an example of a Dynkin game, but falls outside the standard paradigm since the payoffs do not depend in an ordered way upon which agent stops the game. We show how to deal with this non-ordered situation by introducing a new technique which may be of interest in its own right, and then apply it to the bond problem.
Yuxuan Wang (University College London)
A Cost Optimization Problem in Carbon Management
In this work, we consider a stochastic optimal control problem modelling economically optimal carbon emissions abatement, with the goal of keeping the global temperature increase within a given range. We model carbon emissions paths using a stochastic differential equation
dX^{\gamma}_t = \mu(X^{\gamma}_t, \gamma_t)\, dt + \sigma(X^{\gamma}_t, \gamma_t)\, dW_t, \quad 0 \leq t \leq T,
where \gamma denotes the abatement cost at each time. Our goal is to minimize the cumulative abatement costs subject to the target being achieved with high probability, that is,
\min_{\gamma \in \mathcal{U}_T} \{ E[\int_0^T e^{-rs}\gamma_s\, ds] : P(X_T^{\gamma} \leq M) \geq p \}.
Such a setting, reminiscent of the quantile hedging introduced in [1] and studied among others in [2], is compatible with applications at the national level in the context of carbon taxes, or at the company level in the context of climate reduction investment. Following [2], we transform the problem into a stochastic target problem, establish the dynamic programming principle and derive the dynamic programming equation in both the classical and viscosity senses. Employing duality methods, we examine essential parameter constraints and provide a semi-explicit lower bound. Additionally, we propose a numerical scheme based on neural networks for the model implementation.
Xiaocheng Wei (Queen Mary University of London)
Chebyshev Approximations and Neural Networks: A Hybrid Method for Parametric Risk Calculations
Counterparty credit risk (CCR) is the risk that the other party in a financial deal becomes unable to fulfil its obligations. It is a focus of financial regulation, which requires repetitive computations, posing a significant burden for financial institutions. To speed up the calculations, we accelerate the frequently called derivative pricer by combining Chebyshev polynomials with neural networks. Our method can approximate prices for all model parameters simultaneously with fast evaluation. The key idea is to first apply Chebyshev interpolation to the underlying prices and then to learn the parameter dependency of the Chebyshev coefficients with neural networks. The primary contribution of this hybrid method is that it combines the benefits and offsets the individual shortcomings of the two approaches, thus significantly improving training efficiency while maintaining accuracy. Namely, Chebyshev polynomials promise high precision on low-dimensional data but suffer from the curse of dimensionality, while neural networks perform well on high-dimensional data but require resource-intensive training and tuning. We also derive explicit error bounds for prices and CCR exposure measures. We numerically validate the accuracy and efficiency of our method for various option pricing models and exposure measures with market-calibrated parameters. In Heston's stochastic volatility model, we achieve a 5x training efficiency gain with no accuracy loss in comparison to the established approach, which uses option prices as the training target. For the evaluation, compared to the reference COS method, the speed-up factors range from 20 (at the money) to 1,000 (far in or out of the money).
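The two-step structure of the hybrid method can be sketched as follows. This toy version uses an invented smooth pricing function and a low-degree polynomial regression standing in for the neural network (both are assumptions of the illustration, not the poster's setup):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def toy_price(s, sigma):
    # Hypothetical smooth pricing function of spot s and a model parameter
    # sigma, standing in for a real derivative pricer.
    return 10.0 * sigma * np.exp(-((s - 100.0) / 20.0) ** 2) + 0.05 * s

# Step 1: Chebyshev interpolation over the underlying on [60, 140],
# yielding one coefficient vector per parameter value.
s_nodes = C.chebpts2(21) * 40.0 + 100.0
sigmas = np.linspace(0.1, 0.5, 9)
coefs = np.array([
    C.chebfit((s_nodes - 100.0) / 40.0, toy_price(s_nodes, sig), 20)
    for sig in sigmas
])

# Step 2: learn each Chebyshev coefficient as a function of the parameter
# (a degree-3 polynomial fit stands in for the neural network here).
coef_models = [np.polyfit(sigmas, coefs[:, k], 3) for k in range(coefs.shape[1])]

def fast_price(s, sigma):
    # Evaluate the learned coefficients at sigma, then the Chebyshev series at s.
    c = np.array([np.polyval(m, sigma) for m in coef_models])
    return C.chebval((s - 100.0) / 40.0, c)

# Accuracy at a parameter value off the training grid.
err = abs(fast_price(95.0, 0.27) - toy_price(95.0, 0.27))
print(err)
```

Once `coef_models` is trained, repricing across many scenarios and parameters only costs a coefficient evaluation plus a Chebyshev evaluation, which is the source of the speed-up the poster reports.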
Kaiwen Zhang (Princeton University)
A Probabilistic Approach to Discounted Infinite Horizon and Invariant Mean Field Games
This paper considers discounted infinite horizon mean field games by extending the probabilistic weak formulation of the game introduced by Carmona and Lacker (2015). Under assumptions similar to those for the finite horizon game, we prove existence and uniqueness of solutions for the extended infinite horizon game. The key idea is to construct local versions of the previously considered stable topologies. Further, we analyze how sequences of finite horizon games approximate the infinite horizon one. Under a weakened Lasry-Lions monotonicity condition, we quantify the convergence rate of solutions of the finite horizon games to that of the infinite horizon game using a novel stability result for mean field games. Lastly, applying our results allows us to solve the invariant mean field game as well.