
Holistic view of XVAs – Chebyshev interpolation for speedup and unification

XVA calculations eventually run into computational challenges that can be very expensive to overcome. But with a bit of creative thinking, these challenges can be solved, as demonstrated below by Kathrin Glau, Lecturer in Financial Mathematics at Queen Mary University of London and Marie Skłodowska-Curie Fellow at École Polytechnique Fédérale de Lausanne.

Risk aggregation, such as in XVA calculations, requires computing the exposures of different instruments over the projected future life-span of a large portfolio. This poses a computational challenge. Moreover, taking the risks of different asset classes into account additionally requires us to handle different pricing methods simultaneously. We address these challenges with numerical techniques for complexity reduction and interpolation, in particular Chebyshev interpolation.

Where does a computational bottleneck arise in XVA calculations?

For instance, to calculate credit exposures, the distributions of the relevant underlying asset prices up to their longest maturity are projected forward in time. The standard approach to obtaining the distributions of the price paths is Monte Carlo simulation, that is, the paths of the underlying assets are drawn randomly according to the model distribution. Next, the option prices along the paths need to be evaluated. This results in a distribution of exposure paths. Typically, a whole portfolio of different options needs to be considered here. Finally, the required metric is extracted from the latter distribution. For instance, the expectation of the positive part of the prices along the paths is computed to determine the expected future exposure.

The bottleneck is the sheer number of calls to the pricers: for 1,000 simulated paths and 50 time steps we already require 50,000 calls to each of the pricers evaluating the different options in the portfolio. The efficiency of the pricers is therefore crucial. In the best case, the pricer is an explicit function. In the worst case, we need a new Monte Carlo simulation each time we need to evaluate a price along the simulated paths. This results in nested Monte Carlo simulation. The computational task becomes even more challenging if we need to compute the sensitivities as well.
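The scale of the problem can be sketched in a few lines of Python. Here an analytic Black-Scholes call stands in for an arbitrary pricer (in the worst case described above it would itself be a Monte Carlo simulation), and the model, strike and parameter choices are purely illustrative:

```python
import numpy as np
from math import log, exp, sqrt, erf

def bs_call(S, K, r, sigma, tau):
    """Stand-in for an arbitrary (possibly slow) pricer: Black-Scholes call."""
    if tau <= 0:
        return max(S - K, 0.0)
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * sqrt(tau))
    d2 = d1 - sigma * sqrt(tau)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S * N(d1) - K * exp(-r * tau) * N(d2)

rng = np.random.default_rng(0)
S0, K, r, sigma, T = 100.0, 100.0, 0.01, 0.2, 1.0
n_paths, n_steps = 1000, 50
dt = T / n_steps

# simulate the risk-factor paths under the model (geometric Brownian motion here)
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * z, axis=1))

# re-price the option at every point of every path: n_paths * n_steps pricer calls
calls = 0
V = np.empty_like(S)
for j in range(n_steps):
    tau = T - (j + 1) * dt          # remaining time to maturity at this step
    for i in range(n_paths):
        V[i, j] = bs_call(S[i, j], K, r, sigma, tau)
        calls += 1

expected_exposure = np.maximum(V, 0.0).mean(axis=0)   # EE(t), one value per step
print(calls)   # 50,000 pricer calls, for a single option in the portfolio
```

With a portfolio of many options, and a slow pricer in the inner loop, the cost multiplies accordingly.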

Gaining efficiency by introducing an offline/online scheme

We propose to divide the computation into an offline and an online phase. The goal of the offline phase is to simplify the pricer. In essence, this preparation phase is only performed once and it should deliver a highly accurate approximation of the pricer that can be evaluated very quickly, and the approximation should be of a similar type for all products. Then, in the online phase, this fast, accurate and structurally simple pricer replaces the original pricers, which could be diverse and as slow as a Monte Carlo simulation or as sophisticated as a PDE solver.

Is there a good candidate for such a favourable pricer?

At this point, we are looking for a way to express price functions for any type of product in a highly efficient and, moreover, uniform way. During the last two decades, a large and growing family of pricers has been developed. Exploiting specific features of the models and of the different option types has led to highly specialised solutions. See for instance Lina von Sydow et al.: BENCHOP – The BENCHmarking project in Option Pricing, International Journal of Computer Mathematics, 2015, 92(12), 2361-2379, for an overview of several approaches and their performance. So how could we ever merge all of those together and finally obtain a unified and efficient method?

When a problem seems too hard to solve, before finally giving up, it is sometimes good to look for a very simple solution. All the problems that we want to solve simultaneously have one common feature, namely that the pricers in a specified model are functions of the underlying. So all we want to do is approximate several price functions of a variable. One approach is interpolation. Continuing that line of thought, a simple way to interpolate functions is polynomial interpolation. This means calculating the prices at N + 1 points, which determine the unique polynomial of degree N that passes through these points.

Polynomial interpolation: Too simple to be good?

When thinking of polynomial interpolation we may imagine a polynomial meandering around the function, as it is exact in its nodes. We are used to paying a high price for achieving a high performance. Therefore, we might tend to think that the price we have to pay for being as accurate as one possibly can be in some points – namely exact – is that the interpolation is very inaccurate in other regions. This suggests that polynomial interpolation might have a less favourable convergence behaviour. Runge's phenomenon even gives an example of a smooth (holomorphic) function for which polynomial interpolation through equidistant nodes diverges.
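Runge's example is easy to reproduce numerically. The sketch below interpolates his function 1/(1 + 25x²) through equidistant nodes of increasing number and measures the maximum error on [-1, 1]; the Chebyshev basis is used only to keep the fit numerically stable, the interpolant itself is the ordinary polynomial through the nodes:

```python
import numpy as np

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)   # Runge's classic example
grid = np.linspace(-1.0, 1.0, 2001)

def max_interp_error(nodes):
    # exact polynomial interpolation through the given nodes
    # (Chebyshev basis used purely for numerical stability of the fit)
    c = np.polynomial.chebyshev.chebfit(nodes, runge(nodes), deg=len(nodes) - 1)
    return np.max(np.abs(np.polynomial.chebyshev.chebval(grid, c) - runge(grid)))

errs = [max_interp_error(np.linspace(-1.0, 1.0, n + 1)) for n in (10, 20, 30)]
print(errs)   # the maximum error GROWS as more equidistant nodes are added
```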

Game over?

Well... looking more closely at Runge's example, we see that it exhibits a smooth function for which polynomial interpolation through equidistant nodes fails to converge. So the moral of this finding is that we should not use polynomial interpolation with equidistant nodes. Chebyshev nodes are not equidistant, and Chebyshev interpolation is proven to converge uniformly for all Lipschitz continuous functions, and even to converge exponentially for very regular (analytic) functions. Other pitfalls can similarly be sidestepped. The moral is that, although care should be taken, when used in the right way polynomial interpolation forms a versatile and powerful numerical tool for function approximation. See Nick Trefethen's article Six Myths of Polynomial Interpolation and Quadrature, Mathematics Today 47(4), 184-188.
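Switching the same experiment to Chebyshev nodes x_k = cos(kπ/n) illustrates the point: on Runge's function the interpolation error now shrinks as the degree grows, at the geometric rate the theory predicts for analytic functions:

```python
import numpy as np

runge = lambda x: 1.0 / (1.0 + 25.0 * x**2)
grid = np.linspace(-1.0, 1.0, 2001)

errs = []
for n in (10, 20, 40):
    nodes = np.cos(np.pi * np.arange(n + 1) / n)   # Chebyshev points on [-1, 1]
    c = np.polynomial.chebyshev.chebfit(nodes, runge(nodes), deg=n)
    errs.append(np.max(np.abs(np.polynomial.chebyshev.chebval(grid, c) - runge(grid))))
print(errs)   # the error now DECAYS, roughly geometrically in n
```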

Chebyshev interpolation for POP: Speed-up and unification

One way Chebyshev interpolation can very successfully be exploited in finance is for the approximation of option prices as functions of the underlying and of option and model parameters, that is, Chebyshev interpolation for POP (Parametric Option Pricing) – just the case we are looking for. The reason is that option prices typically are very regular. While the payoff often has a kink or even a jump and thus is not everywhere smooth, thanks to the diffusive type of the distribution of the underlying, the price function typically is smooth at each moment before expiry. This is also the reason for the existence of Greeks such as delta and gamma in most models.

The idea is to use any pricer in the offline phase, for instance Monte Carlo, to compute the prices at the nodal points; a linear transform then gives us the coefficients of the interpolation. In the online phase, only the polynomial needs to be evaluated, which is in general several orders of magnitude faster than the original pricer. Summarising, the major advantages of Chebyshev interpolation for POP are the speed-up and the unification of the pricers in the online phase. The approach is proposed in Gaß, G., Mahlstedt, Mair: Chebyshev Interpolation for Parametric Option Pricing, Finance and Stochastics, 2018, 22(3), 701-731.
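The offline/online split can be sketched in a few lines. Here an analytic Black-Scholes call stands in for the "slow" pricer (in practice it could be a Monte Carlo or PDE solver); the spot range, parameters and degree are illustrative choices, and NumPy's `chebfit` is used in place of the linear transform of nodal values:

```python
import numpy as np
from math import log, exp, sqrt, erf

def slow_pricer(S):
    """Stand-in for an expensive pricer (MC, PDE, ...): Black-Scholes call."""
    K, r, sigma, T = 100.0, 0.01, 0.2, 1.0
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return S * N(d1) - K * exp(-r * T) * N(d2)

a, b, n = 80.0, 120.0, 30                 # spot range of interest, degree

# --- offline phase: one call to the slow pricer per Chebyshev node ---
u = np.cos(np.pi * np.arange(n + 1) / n)  # Chebyshev points on [-1, 1]
S_nodes = 0.5 * (a + b) + 0.5 * (b - a) * u
coeffs = np.polynomial.chebyshev.chebfit(u, [slow_pricer(S) for S in S_nodes], deg=n)

# --- online phase: just evaluate the polynomial ---
def fast_pricer(S):
    return np.polynomial.chebyshev.chebval((2.0 * S - (a + b)) / (b - a), coeffs)

err = max(abs(fast_pricer(S) - slow_pricer(S)) for S in np.linspace(a, b, 500))
```

Because the price function is smooth in the spot, 31 nodes already reproduce the pricer to far better than market precision across the whole range, and the online evaluation is a single polynomial evaluation regardless of how slow the original pricer was.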

The concept also applies in a broader context, for instance to interpolate implied volatility

Once we have seen that option prices can be well approximated by Chebyshev interpolation, the concept can also be tailored to other applications in finance of a similar type. The speed-up by Chebyshev interpolation can be significant, in particular when repeated evaluations follow the offline phase. In some cases we even need to perform the offline phase only once in our lives. This in particular holds for the approximation of the Black-Scholes implied volatility as developed in G., Herold, Madan, Pötz: The Chebyshev method for the implied volatility, 2019, forthcoming in the Journal of Computational Finance; a preprint of a former version is available online.

Chebyshev interpolation for dynamic problems

The structure of Chebyshev interpolation yields high accuracy based on a fixed grid and thus is excellently suited to solving dynamic problems. We explore this line of research by proposing a new method to compute the prices of early-exercise options by dynamic programming in G., Mahlstedt, Pötz: A new approach for American option pricing: The Dynamic Chebyshev method. A particularly noteworthy feature of the method – besides its efficiency – is that it can be combined with any method to compute conditional expectations of polynomials, such as Fourier, PDE and Monte Carlo simulation.
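A compressed sketch of the dynamic-programming idea, not the paper's algorithm: a Bermudan put under Black-Scholes dynamics, where at each time step the value function is interpolated on a fixed Chebyshev grid in log-spot and the conditional expectations are computed by Gauss-Hermite quadrature. The domain, degree and time grid are illustrative choices, and the boundary handling (clipping) is deliberately crude:

```python
import numpy as np
from math import log, exp, sqrt, erf

def bs_put(S, K, r, sigma, T):
    """European Black-Scholes put, a lower bound on the Bermudan price."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    N = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))
    return K * exp(-r * T) * N(-d2) - S * N(-d1)

S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
M, n = 50, 40                             # exercise dates, interpolation degree
dt = T / M
a, b = log(K) - 1.0, log(K) + 1.0         # log-price domain (illustrative choice)

# fixed Chebyshev grid in log-price
x = 0.5 * (a + b) + 0.5 * (b - a) * np.cos(np.pi * np.arange(n + 1) / n)

# Gauss-Hermite quadrature for the Gaussian increment of the log-price
gh_t, gh_w = np.polynomial.hermite.hermgauss(32)
z = sqrt(2.0) * gh_t                      # standard normal abscissae
w = gh_w / sqrt(np.pi)                    # weights summing to 1

payoff = np.maximum(K - np.exp(x), 0.0)
V = payoff.copy()                         # value function at maturity

to_unit = lambda y: (2.0 * y - (a + b)) / (b - a)   # map [a, b] -> [-1, 1]
for _ in range(M):
    coeffs = np.polynomial.chebyshev.chebfit(to_unit(x), V, deg=n)
    # conditional expectation E[V(x + drift + sigma*sqrt(dt)*Z)] at each node
    x_next = x[:, None] + (r - 0.5 * sigma**2) * dt + sigma * sqrt(dt) * z[None, :]
    x_next = np.clip(x_next, a, b)        # crude truncation at the boundary
    cont = exp(-r * dt) * (np.polynomial.chebyshev.chebval(to_unit(x_next), coeffs) @ w)
    V = np.maximum(payoff, cont)          # Bermudan early-exercise decision

coeffs = np.polynomial.chebyshev.chebfit(to_unit(x), V, deg=n)
price = np.polynomial.chebyshev.chebval(to_unit(log(S0)), coeffs)
```

The quadrature step is exactly where the method is modular: any technique that can compute conditional expectations of polynomials (Fourier, PDE, Monte Carlo) could replace it.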

Chebyshev interpolation for XVA calculations along with sensitivities

The dynamic Chebyshev interpolation framework for Bermudan option pricing can easily be extended to the evaluation of other types of options, such as European and barrier options. As the framework is already dynamic, we can evaluate these options along the paths. The approach is versatile enough to encompass large families of models, such as stochastic volatility models and models based on processes with jumps. This solves the major computational bottleneck in CVA calculations. See G., Pachon, Pötz: Fast Calculation of Credit Exposures for Barrier and Bermudan options using Chebyshev interpolation; a preprint is available online.

As evaluations of the whole price paths for simulated underlyings can be highly expensive, often a simpler model is used. Also, sometimes American options are replaced by European options to render calculations feasible. Numerical experiments show how critical the model choice can be and that ad hoc simplifications such as replacing American by European type options can lead to severe mistakes. This highlights the utility of approaches able to deal with complex models and different option types.

Last but not least, we note that since the prices are approximated by polynomials, their sensitivities delta and gamma come essentially at no additional cost!
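This is because a polynomial can be differentiated exactly. Continuing the toy POP example above (analytic Black-Scholes call as the stand-in pricer, illustrative spot range and degree), delta and gamma drop out of the fitted coefficients via `chebder`, with only a chain-rule factor for the map from [a, b] to [-1, 1]:

```python
import numpy as np
from math import log, exp, sqrt, erf

K, r, sigma, T = 100.0, 0.01, 0.2, 1.0
Phi = lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S):
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    return S * Phi(d1) - K * exp(-r * T) * Phi(d1 - sigma * sqrt(T))

# offline phase as before: fit the price on Chebyshev nodes in the spot
a, b, n = 80.0, 120.0, 30
u = np.cos(np.pi * np.arange(n + 1) / n)
S_nodes = 0.5 * (a + b) + 0.5 * (b - a) * u
c = np.polynomial.chebyshev.chebfit(u, [bs_call(S) for S in S_nodes], deg=n)

# differentiate the polynomial once/twice; chain rule for the [a,b] -> [-1,1] map
scale = 2.0 / (b - a)
to_unit = lambda S: (2.0 * S - (a + b)) / (b - a)
delta = lambda S: np.polynomial.chebyshev.chebval(
    to_unit(S), np.polynomial.chebyshev.chebder(c, 1)) * scale
gamma = lambda S: np.polynomial.chebyshev.chebval(
    to_unit(S), np.polynomial.chebyshev.chebder(c, 2)) * scale**2

# compare with the analytic Black-Scholes delta at S = 100
d1 = (log(100.0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
print(delta(100.0), Phi(d1))
```

No bump-and-revalue, and no extra pricer calls: the Greeks reuse the coefficients already computed in the offline phase.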

Don't miss Glau's sessions at QuantMinds International - join us in Vienna!

QuantMinds International 2019
