The valuation of derivatives is often highly complex and can frequently only be carried out numerically, which demands considerable computing resources. In addition, regulations require further risk calculations, which increase the computational burden considerably.
Methods from the field of deep learning / artificial intelligence, on the other hand, make it possible to replace complex computational operations, such as Monte Carlo simulations, highly efficiently and robustly.
We offer methods for reproducing financial-mathematical calculations with Deep Neural Networks (DNN). This can accelerate calculations enormously (sometimes by a factor of a million) and relieve the load on existing systems, making further analyses and scenario studies feasible.
Derivative and risk calculations are computationally intensive
A sufficiently precise valuation and risk assessment of derivatives – such as those used for hedging – is extremely demanding even in supposedly simple cases. Early-termination rights, for example, can only be valued on the basis of assumptions.
While simple derivatives can still be priced analytically (such as European options with the Black-Scholes model), even slightly more complex cases – American options, for example – require numerical methods and simulations.
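As an illustration of the analytically tractable case, the closed-form Black-Scholes price of a European call (here without dividends; all parameter values are purely illustrative) needs nothing beyond the Python standard library:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, rate, vol, maturity):
    """Black-Scholes price of a European call option (no dividends)."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * maturity) / (vol * sqrt(maturity))
    d2 = d1 - vol * sqrt(maturity)
    return spot * norm_cdf(d1) - strike * exp(-rate * maturity) * norm_cdf(d2)

# Example: at-the-money call, 1 year maturity, 20% volatility, 1% rate
print(round(black_scholes_call(100, 100, 0.01, 0.2, 1.0), 2))
```

For American options no such closed form exists, which is why numerical schemes – and correspondingly more computing time – are required.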
Analytical risk calculations are generally very imprecise and have therefore been replaced by computationally intensive simulation procedures (historical or Monte Carlo simulations). In addition, the various regulations require further complex calculations, such as the Potential Future Exposure (PFE) under the SA-CCR framework. Risk calculations thus exacerbate any computing-capacity bottlenecks in the valuation of derivatives.
Banks with complex trading books in particular face considerable challenges in terms of computing performance. Cost-intensive hardware upgrades are sometimes carried out to meet these challenges.
In addition, compromise solutions are often chosen to simplify the valuation, such as
- calculating derivative sensitivities with linear approximations, thereby neglecting non-linear effects
- using simplified, assumption-based models, thereby accepting valuation inaccuracies and model errors
- foregoing full revaluation in risk calculations and stress tests, thereby accepting further inaccuracies
Despite these compromises, the available time frame is often fully used up, especially for daily calculations, so even minor technical malfunctions cause delays. Further analyses – such as the systematic identification of relevant stress-test scenarios – are hardly possible.
Replacement of computationally intensive procedures by Deep Learning models
The enormous increase in the amount of available data and computing power in recent years has led to a considerable expansion in the possibilities for analysing data using methods from the field of machine learning / artificial intelligence. Particularly great progress has been made in the area of deep learning. Here, multi-layered (“deep”) neural networks (DNN) are “trained” to analyse even complex and unstructured data and to recognise patterns.
Ultimately, neural networks are procedures for the highly efficient regression of multidimensional data. In this sense, AI predictions are interpolations based on the patterns learned from the data. In contrast to conventional regression methods, however, non-linearities can be modelled without difficulty. Even categorical data and other patterns (e.g. weekdays) are recognised and mapped. Depending on its architecture, a neural network is suited to different problem settings to varying degrees.
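As a minimal sketch of this regression view, a tiny one-hidden-layer network implemented with NumPy can learn a non-linear function that a purely linear regression could not capture. The target function, network size, and learning rate below are illustrative choices, not a production setup:

```python
import numpy as np

# Target: the non-linear function y = sin(x) -- a stand-in for any
# multidimensional pricing function.  All hyperparameters are illustrative.
rng = np.random.default_rng(0)
X = rng.uniform(-np.pi, np.pi, size=(512, 1))
y = np.sin(X)

# One hidden layer with 32 tanh units, linear output
W1 = rng.normal(0.0, 0.5, (1, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)
lr = 0.1

for _ in range(8000):                     # full-batch gradient descent
    H = np.tanh(X @ W1 + b1)              # hidden-layer activations
    pred = H @ W2 + b2                    # linear output layer
    err = pred - y                        # residuals for mean-squared error
    gW2 = H.T @ err / len(X); gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1.0 - H**2)      # backprop through tanh
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

x_test = np.array([[np.pi / 2]])
print((np.tanh(x_test @ W1 + b1) @ W2 + b2).item())  # close to sin(pi/2) = 1
```

The same mechanics, scaled up in depth and width, underlie the surrogate models discussed below.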
Building and using such a model only requires a sufficient amount of input and output data.
The former can be market data (interest rates, volatilities, etc.) or derivative parameters (strike, nominal, etc.); the latter are, for example, present values or risk figures (VaR, PFE, etc.). The following steps need to be carried out:
- mapping the calculation to be replaced to an input/output table and defining the AI’s “evaluation space”, i.e. the parameter hypercube within which valuations are to be performed
- producing a sufficiently large amount of training data (input/output data) for the evaluation space, if not already available. This step can be very computationally intensive, but in principle only needs to be done once
- selecting, training and validating the DNN model with the existing data
- using the model for new calculations. Once a model is available, it can be applied without repeating the previous steps; the calculation performance can be increased by a factor of millions
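The first two steps can be sketched as follows; the parameter names and hypercube bounds below are purely illustrative assumptions, not those of a specific product:

```python
import random

# Step 1: define the "evaluation space" -- the parameter hypercube
# within which the surrogate model is allowed to be used.
EVALUATION_SPACE = {
    "spot":       (50.0, 150.0),
    "strike":     (50.0, 150.0),
    "volatility": (0.05, 0.50),
    "rate":       (0.00, 0.05),
    "maturity":   (0.1, 5.0),
}

def sample_parameters(n, space, seed=42):
    """Step 2: draw n random parameter sets uniformly from the hypercube."""
    rng = random.Random(seed)
    return [{k: rng.uniform(lo, hi) for k, (lo, hi) in space.items()}
            for _ in range(n)]

training_inputs = sample_parameters(10_000, EVALUATION_SPACE)
# Each input row is then priced once with the slow reference model
# (e.g. a Monte Carlo engine) to obtain the output column for training.
print(len(training_inputs))
```

Note that the trained model should only be queried inside this hypercube: neural-network predictions are interpolations, so extrapolation beyond the sampled region is unreliable.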
Use case 1: Valuation of American options
- Objective: Determination of the present value of American call options using a neural network
- Creation of training data: several thousand parameter sets were randomly generated within a realistic evaluation space. From these, present values were calculated with the Python library QuantLib
- Model training: a DNN was trained on the training data. Overfitting was monitored during training, and training was stopped after an appropriate number of epochs.
A few thousand validation records, not included in the training, were also randomly generated. Present values were calculated for the validation sets with both QuantLib and the DNN and compared (figure below, left); as can be seen, a high degree of accuracy was achieved.
For fixed parameters (valuation date: 01.01.2016; maturity date: 01.02.2017; spot price: 100; strike price: 90; volatility: 0.02), the dividend yield and the risk-free rate were varied. The result can be seen below (right); the calculation took less than a second.
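QuantLib served as the reference pricer in this use case. As a self-contained illustration of the kind of numerical valuation being replaced, the following sketch prices an American call on a Cox-Ross-Rubinstein binomial tree; the parameter values in the example call are illustrative, not those of the study:

```python
from math import exp, sqrt

def american_call_crr(spot, strike, rate, div_yield, vol, maturity, steps=500):
    """American call price on a Cox-Ross-Rubinstein binomial tree."""
    dt = maturity / steps
    u = exp(vol * sqrt(dt)); d = 1.0 / u
    disc = exp(-rate * dt)
    # risk-neutral up-probability with continuous dividend yield
    p = (exp((rate - div_yield) * dt) - d) / (u - d)
    # payoffs at maturity: node j has j up-moves and (steps - j) down-moves
    values = [max(spot * u**j * d**(steps - j) - strike, 0.0)
              for j in range(steps + 1)]
    # backward induction with early-exercise check at every node
    for i in range(steps - 1, -1, -1):
        for j in range(i + 1):
            cont = disc * (p * values[j + 1] + (1 - p) * values[j])
            intrinsic = spot * u**j * d**(i - j) - strike
            values[j] = max(cont, intrinsic)
    return values[0]

print(american_call_crr(spot=100, strike=90, rate=0.01,
                        div_yield=0.05, vol=0.2, maturity=1.0))
```

Every such tree (or Monte Carlo) valuation has to be repeated per parameter set, which is exactly the cost the trained DNN avoids.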
Use case 2: Calculation of the PFE of a swap
- Objective: Calculation of the potential future exposure of an interest rate swap at a confidence level of 95%.
- Creation of training data: thousands of parameter sets were randomly generated within a realistic evaluation space. From these sets, PFE values were calculated with a QuantLib-based simulation published on GitHub; the simulations were very computationally intensive (about one minute per data set)
- Model training: a DNN was trained, and training was completed after a suitable number of epochs
A few hundred validation sets, not included in the training, were also randomly generated. PFE values were calculated for the validation sets with both the PFE simulation and the DNN and compared (figure below, left); as can be seen, a high degree of accuracy was achieved.
For fixed parameters (float rate: 0.02; fixed rate: 0.02; mean reversion: 0.02; maturity: 5Y; tenor: 1Y), the volatility was varied (see figure below, right). The required computing time was less than one second and thus too short to measure; the simulation, in contrast, took about one minute per point!
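The PFE concept itself (a 95% quantile of positive future exposure) can be illustrated with a toy simulation. The driftless random-walk model of the swap's future value below is a deliberately crude stand-in for the full rate simulation used in the study, and all parameters are illustrative:

```python
import random

def pfe_profile(n_paths=5000, horizons=(1, 2, 3, 4, 5),
                vol=0.01, notional=1_000_000, quantile=0.95, seed=1):
    """Toy PFE: the swap's future value is modelled as a driftless
    Gaussian random walk; the PFE at each horizon is the 95% quantile
    of the positive exposure max(V, 0) over the simulated paths."""
    rng = random.Random(seed)
    profile = []
    for t in horizons:
        exposures = []
        for _ in range(n_paths):
            # value change over t years; std. dev. scales with sqrt(t)
            value = rng.gauss(0.0, vol * (t ** 0.5)) * notional
            exposures.append(max(value, 0.0))
        exposures.sort()
        profile.append(exposures[int(quantile * n_paths) - 1])
    return profile

print(pfe_profile())   # exposure quantile widens with the horizon
```

Even this toy version needs thousands of paths per horizon; a realistic simulation with full swap revaluation on each path is what drove the one-minute-per-data-set cost above.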
We are happy to support you with any questions you may have.
Dr. Dimitrios Geromichalos
Founder / CEO
RiskDataScience UG (haftungsbeschränkt)
Theresienhöhe 28, 80339 München
Phone: +4989244407277, Fax: +4989244407001