In quantitative finance, solving the no-arbitrage pricing equations implied by a pricing model and a derivative’s payoff structure is a recurring theme. Quants have made concerted efforts to develop efficient numerical routines that accurately approximate the solutions of such equations. Sometimes, the underlying model must be simplified to enable the use of efficient solvers.
Machine learning (ML) techniques can help address this task. Two largely orthogonal approaches have been proposed: learning model-implied hedge strategies and approximating existing solvers. The former approach, which goes by the name of deep hedging, was proposed by Bühler et al. in 2018. It allows hedging costs and hedge imperfections to be taken into account, and it does not require a reference solver. The latter is a modern variant of the classical topic of approximating (possibly noisy) functions via interpolation and regression techniques. It typically faces the challenge of a high-dimensional input data space. It does require a reference solver, but that solver need not be numerically efficient.
Work in the second direction has been pioneered by Ferguson and Green in 2018, who used deep neural networks (DNNs) for approximating prices of equity basket options, and by Ruiz and Zeron, who promoted combining Chebyshev polynomial interpolation with dimension reduction techniques. In 2019, Geier et al. applied the DNN approach to Bermudan swaption pricing. The issues of interpolants’ smoothness and of their extrapolation properties were addressed in 2020 by Huge and Savine, who proposed using sensitivities as additional DNN training labels, and by Antonov et al., respectively. In 2021, Antonov and Piterbarg introduced two “semi-classical” approaches, based on stochastic sampling and on tensor train decompositions. Most recently, in 2022, Becker et al. introduced the approach of tuning Monte Carlo solvers by applying ML techniques to optimise quasi-random numbers with respect to a given family of models and payoff structures.
Despite these advancements, ML techniques for approximating pricers appear to be seldom used in practice. This reticence may partly be driven by uncertainty as to whether ML-based pricing would survive supervisory scrutiny. Recently, supervisors have provided relevant guidance, which we regard as encouragement; see for example a 2020 publication by Deutsche Bundesbank and a 2022 publication by Deutsche Bundesbank and BaFin. In our opinion, explainability requirements, as stipulated in those publications, can be tackled by a combination of Explainable Artificial Intelligence (XAI) techniques and a sound ML model lifecycle process, including thorough and ongoing monitoring of the ML technique’s performance.
The industry’s restraint may also be due to the technical and operational challenges associated with the development and maintenance of ML-based pricing infrastructure. In our experience, teams working on such endeavours must have strong skills in classical financial engineering, data science, model validation, pricing architecture design, pricing infrastructure development, and MLOps. As a consultancy with extensive practical experience in these areas, we regard ourselves as a one-stop shop for this skillset.
We recently completed two feasibility studies for German banks, where the project teams investigated the usage of ML techniques for approximating Bermudan swaption prices in the context of Value-at-Risk (VaR) computations. The Bermudans were used to account for the American prepayment optionality that, under German law (BGB § 489), is embedded into amortising fixed-rate loans.
For one of these banks, we used DNNs in conjunction with principal component analysis (PCA) to obtain a trained network applicable to different trades on numerous dates and in various market scenarios. Sequential evaluation of DNNs, as required by the bank’s VaR workflow, was slower than expected; we overcame this by augmenting the Python-Keras-TensorFlow triad with the just-in-time compiler Numba.
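The speed-up mechanism can be sketched as follows: the trained network’s weights are exported as plain NumPy arrays (e.g. via Keras’s `get_weights`), and the forward pass is rewritten as explicit loops that Numba compiles to machine code, so that a long sequential loop over scenarios avoids per-call framework overhead. The sketch below is an illustration under these assumptions, not the bank’s actual implementation; the architecture (one ReLU hidden layer, scalar output) and all names are hypothetical.

```python
import numpy as np

try:
    from numba import njit  # JIT-compile the kernels when Numba is available
except ImportError:          # graceful fallback: run the same code as plain Python
    def njit(func):
        return func

@njit
def mlp_forward(x, W1, b1, W2, b2):
    # One hidden ReLU layer and a scalar linear output, written as explicit
    # loops -- the form Numba compiles to fast machine code.
    h = np.empty(W1.shape[0])
    for j in range(W1.shape[0]):
        s = b1[j]
        for k in range(x.shape[0]):
            s += W1[j, k] * x[k]
        h[j] = s if s > 0.0 else 0.0
    out = b2[0]
    for j in range(h.shape[0]):
        out += W2[0, j] * h[j]
    return out

@njit
def eval_scenarios(X, W1, b1, W2, b2):
    # Sequential evaluation over market scenarios, as in a VaR workflow.
    prices = np.empty(X.shape[0])
    for i in range(X.shape[0]):
        prices[i] = mlp_forward(X[i], W1, b1, W2, b2)
    return prices
```

The point of the explicit loops is that Numba removes the Python interpreter from the inner scenario loop entirely; with vectorised batch evaluation this would matter less, but the bank’s workflow required one evaluation at a time.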
We also implemented the Chebyshev slider technique as proposed by Ruiz and Zeron in 2019, which combines PCA and Chebyshev polynomials with an explicit dimension reduction technique, inspired by neglecting cross-terms in Taylor expansions. Here, we built one interpolant per trade and per valuation date, and we used PCA both in the rates and in the volatility space.
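The slider idea can be sketched with `numpy.polynomial.chebyshev`: each risk factor (in practice, a PCA coordinate) gets its own one-dimensional Chebyshev interpolant built around an anchor point, and the full price is approximated additively, neglecting cross terms. The anchor, the domains, and the toy pricer interface below are hypothetical; this illustrates the decomposition, not the implementation used in the project.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def build_sliders(pricer, x0, lo, hi, n_nodes=9):
    # One 1D Chebyshev interpolant ("slider") per risk factor: bump one
    # coordinate through its Chebyshev nodes, keep all others at the anchor x0.
    nodes = C.chebpts2(n_nodes)  # Chebyshev points of the second kind in [-1, 1]
    sliders = []
    for i in range(len(x0)):
        xs = lo[i] + (nodes + 1.0) * (hi[i] - lo[i]) / 2.0  # map to [lo_i, hi_i]
        ys = []
        for xi in xs:
            x = x0.copy()
            x[i] = xi
            ys.append(pricer(x))
        sliders.append(C.chebfit(nodes, ys, n_nodes - 1))
    return sliders

def eval_sliders(sliders, p0, x, lo, hi):
    # Additive approximation neglecting cross terms:
    #   price(x) ~ p0 + sum_i [slider_i(x_i) - p0],  with p0 = pricer(x0).
    val = p0
    for i, coeffs in enumerate(sliders):
        t = 2.0 * (x[i] - lo[i]) / (hi[i] - lo[i]) - 1.0  # back to [-1, 1]
        val += C.chebval(t, coeffs) - p0
    return val
```

Building the sliders costs only `n_nodes` pricer calls per dimension instead of `n_nodes**d` for a full tensor grid, which is what makes the approach tractable after PCA has reduced `d`.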
Both approaches resulted in a significant execution-time improvement (a factor of 4) compared to the reference PDE pricer. In terms of accuracy, we benchmarked against a linear approximation based on a full set of sensitivities, obtaining the expected result that, in stressed markets and in extreme quantiles (as required for economic capital computations), the ML techniques outperform the linear proxy.
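The linear benchmark can be illustrated as a first-order Taylor proxy built from a full set of finite-difference sensitivities at an anchor point. For a convex pricer, its error grows quadratically with the shock size, which is why it deteriorates precisely in the stressed scenarios and extreme quantiles mentioned above. The helper below is a hypothetical sketch, not the bank’s sensitivity setup.

```python
import numpy as np

def linear_proxy(pricer, x0, eps=1e-4):
    # First-order Taylor proxy: anchor price plus finite-difference
    # sensitivities (one bump per risk factor) times the shock.
    p0 = pricer(x0)
    grad = np.array([(pricer(x0 + eps * e) - p0) / eps
                     for e in np.eye(len(x0))])
    return lambda x: p0 + grad @ (x - x0)
```

For small shocks the proxy tracks the pricer closely; for large shocks the neglected curvature term of order (Δx)² dominates, whereas interpolation-based proxies continue to capture it.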
The bank intends to run the DNN-based ML proxy within a parallel VaR computation on a daily basis to monitor the ML technique’s long-term performance. Performance reports will be generated automatically, including relevant plots and performance metrics.
All of these results will be presented in more detail at the Quant Minds International 2022 conference.
Christian Kappen is Manager at d-fine.