UnRisk 8

Higher Order PDEs and the Curse of Dimensionality

When pricing multi-factor derivative products the general consensus is that the partial differential equation (PDE) and finite difference method (FDM) approaches become difficult to apply, and we usually resort to other methods such as Monte Carlo or numerical integration in multiple dimensions. In this blog post I attempt to describe some of the essential problems that we encounter when approximating higher-order PDEs (in the current case, two and three dimensions) by popular FD schemes; we claim that the ADE method is easy to implement and can be parallelised for shared-memory systems.

The main topics are:

. Formulating a PDE in an unambiguous manner

. FD schemes for multi-factor PDEs

. Implementation in C++, using Boost data structures and multithreaded programming

In principle, writing down the Black-Scholes PDE in n dimensions is the easy part. The complications arise when we define the corresponding boundary condition information. First, we need either to truncate the region of interest or, alternatively, to transform the original region to the unit cube in n dimensions. The latter technique is general and robust and can be used instead of, or in combination with, domain truncation. Next, the specification of boundary conditions is made easier by the application of the Fichera theory (which can be seen as a generalisation of the well-known Feller condition), especially for the near field and even for the far field when using domain transformation.

Having defined the PDE problem, we now choose to approximate it using finite differences. For multi-dimensional PDEs the ADI method is popular, while in recent years the Splitting method has been used because of its flexibility in solving complex PDEs. Both methods are examples of MOS (multiplicative operator splitting) methods, which means that we solve a complex PDE as a sequence of simpler, one-dimensional PDEs.
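As a brief aside on the transformation to the unit cube: below is a minimal sketch of one common choice (my own illustration, not necessarily the mapping used in the cited paper). It maps an unbounded underlying S in [0, infinity) onto y in [0, 1) using a user-chosen scale alpha, for example the strike.

// Sketch only: one possible mapping of an unbounded spatial variable S >= 0
// onto the unit interval; the scale alpha (e.g. the strike) is a free
// parameter chosen by the user, not something prescribed in this post.
double toUnitInterval(double S, double alpha)    // S in [0, inf) -> y in [0, 1)
{
    return S / (S + alpha);
}

double fromUnitInterval(double y, double alpha)  // y in [0, 1) -> S in [0, inf)
{
    return alpha * y / (1.0 - y);
}

The PDE coefficients are then rewritten in the y variable via the chain rule, and the Fichera function tells us on which parts of the transformed boundary conditions should (or should not) be imposed.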

The main disadvantages of MOSs are:

. They introduce a splitting error; it is possible to end up with an approximation to the PDE that is only first-order accurate, which is not what we want

. It is difficult to parallelise these schemes; another challenge is to parallelise the tridiagonal LU solver that each leg of the scheme uses. This is a major bottleneck.

. The ADI method in combination with the Crank-Nicolson scheme (for time discretisation) and the Craig-Sneyd method (for mixed derivatives) can produce oscillations and inaccuracies in certain situations

. The specification of conforming boundary conditions (especially for solutions at intermediate time steps) can be problematic

The approach that we have been using is based on AOS (additive operator splitting), which allows us to write the solution of a PDE problem as the sum of partial solutions of the PDE in question (an example of an AOS scheme is the Alternating Direction Explicit (ADE) method, which I described in a recent SSRN paper). The advantages of AOS are:

. The schemes are uniformly stable, explicit and second-order accurate

. No splitting error is introduced and no artificial boundary conditions need be defined

. The scheme is easily parallelised, for example using OpenMP in C++ (a short sketch follows this list)

. When implementing high-dimensional PDEs we use the Boost multi_array library which reduces the cognitive burden on the developer

. The method can be used for problems involving mixed derivatives; in these cases we use the Yanenko variant for approximating these terms.
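To make the AOS/ADE idea concrete, here is the minimal sketch promised above: constant-coefficient one-dimensional diffusion only, in the Barakat-Clark form of ADE, with time-independent Dirichlet boundary values assumed. The two sweeps are independent of each other, so they can be run on separate threads; the average of the two sweeps gives the solution at the new time level.

// Minimal ADE sketch for u_t = u_xx on a uniform grid (Barakat-Clark variant).
// U is the upward sweep, V the downward sweep; each sweep uses the neighbour
// already updated in its own sweep direction, so both sweeps stay explicit.
// Assumes at least three grid points and fixed Dirichlet boundary values.
#include <vector>
#include <omp.h>

void adeStep(std::vector<double>& u, double lambda)   // lambda = dt / (dx * dx)
{
    const std::size_t N = u.size() - 1;
    std::vector<double> U = u, V = u;                  // copies at time level n

    #pragma omp parallel sections
    {
        #pragma omp section
        {   // upward sweep: i = 1, ..., N-1, uses the already-updated U[i-1]
            for (std::size_t i = 1; i < N; ++i)
                U[i] = ((1.0 - lambda) * u[i] + lambda * (U[i - 1] + u[i + 1])) / (1.0 + lambda);
        }

        #pragma omp section
        {   // downward sweep: i = N-1, ..., 1, uses the already-updated V[i+1]
            for (std::size_t i = N - 1; i >= 1; --i)
                V[i] = ((1.0 - lambda) * u[i] + lambda * (V[i + 1] + u[i - 1])) / (1.0 + lambda);
        }
    }

    for (std::size_t i = 1; i < N; ++i)                // average the two sweeps
        u[i] = 0.5 * (U[i] + V[i]);
}

In two and three dimensions the same upward/downward sweeping idea is applied along each index (this is where boost::multi_array is convenient for the bookkeeping), but the sketch above is deliberately the simplest one-factor case.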

We conclude with a discussion of the accuracy of the ADE scheme and some remarks on speedup.

http://papers.ssrn.com/sol3/papers.cfm?abstract_id=1552926

In the above article we introduce the theory underlying ADE and we use it to price European and American options. We have also generalised the method to two- and three-factor (basket) options. The method is accurate, easy to program and amenable to coarse-grained parallelism. To date, we have seen that the speedup with ADE is between 3 and 5 times that experienced with ADI or Splitting methods, which are difficult to parallelise. These are initial remarks and findings; I will report on these issues in future articles.

Stable, second-order accurate schemes: part I, Equity pricing

One of the most frequently asked questions with PDE models is how to model boundaries and impose boundary conditions.

Some years ago I stumbled on the ADE method, but I needed the elegant Fichera theory in combination with good schemes for convection terms (most books deal only with diffusion, and the essential difficulty arises when you combine the two).

This working paper addresses these issues and we compare our results with other numerical methods for a range of one-factor problems, as discussed in the article.

//

http://www.datasimfinancial.com/forum/viewtopic.php?t=289

//

The ADE method applied to early exercise seems to be particularly fast and accurate; it is a kind of modified Brennan-Schwartz algorithm.
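As an illustration of how the early-exercise constraint enters (my own sketch, not the exact formulation in the paper): at each grid point the computed continuation value is simply capped from below by the intrinsic value, either inside each sweep or after the sweeps are averaged, which is the Brennan-Schwartz idea in ADE clothing.

// Sketch: pointwise early-exercise constraint for an American-style problem.
// 'payoff' holds the intrinsic values on the grid (assumed given); the
// projection is applied as the ADE sweep values are updated.
#include <algorithm>
#include <vector>

void applyExerciseConstraint(std::vector<double>& values, const std::vector<double>& payoff)
{
    for (std::size_t i = 0; i < values.size(); ++i)
        values[i] = std::max(values[i], payoff[i]);
}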

Thanks very much to all those who provided feedback and input. Of course, all errors are my own.

I welcome comments and suggestions. A forthcoming paper will deal with other 1-factor and 2-factor models.

Finite Differences for Stochastic Models and SDEs

This blog post is a piece on the approximation of various stochastic models, with particular emphasis on schemes that avoid impossible solutions, for example negative volatilities.

** Comments and feedback welcome **

Some schemes we discuss are:

1 Euler-Maruyama method

2 Milstein methods (various forms)

3 Fitted Euler methods (Mirani/Duffy)

4 Predictor-Corrector methods

There are a number of popular finite difference schemes that are used to approximate the solution of SDEs. The most famous is the Euler-Maruyama scheme and it has been widely applied in finance. It is applicable to a wide range of one-factor and n-factor models, including geometric, mean-reverting, square-root and stochastic volatility models. It is robust and easy to program, but it has a low order of convergence and the results that it produces are wrong in a number of cases. In particular, the method can produce non-positive values, and this is clearly impossible in financial applications because the quantities being modelled are typically stock prices, volatilities or forward rates, for example. For this reason, a number of methods have been proposed that fix some of these inherent limitations of the Euler-Maruyama (EM) method. The most famous one is the Milstein method; it is in essence EM with an added diffusion correction term to compensate for the errors in the EM method. Again, the standard Milstein method can produce non-positive solutions, and we have further noticed (from numerical experiments) that it is not more accurate than EM. This is well known, and a number of generalisations have been proposed, for example the Balanced Implicit Method (BIM) and the Balanced Milstein Method (BMM).

We now discuss a variation of EM whose motivation is based on the so-called exponentially fitted difference schemes that were developed by Allen and Southwell in the 1950s and elaborated thereafter by researchers such as Il'in, Emel'janov, Miller and many others (including some of my own work on convection-diffusion equations). The idea is very simple: we fit the drift and diffusion parameters of a general SDE with non-constant, and even non-linear, terms by using the fact that we know the closed-form solution for a one-factor SDE with constant coefficients. The method works quite well and in particular it gives excellent results for square-root processes. It is also possible to define a fitted Crank-Nicolson scheme, and this is particularly good for mean-reverting models. We have not examined fitted methods for n-factor models. In general, BIM, BMM and fitted Euler schemes compensate for the fact that the Wiener increments can take both negative and positive values at each time level. They embody the structural properties of the SDE that they approximate.

Finally, the Predictor-Corrector method seems to be the most robust and widely applicable method for SDEs, based on our numerical results and mathematical analysis to date. The Predictor-Corrector method is in fact explicit: we do an explicit Euler step in the predictor stage of the scheme to get an intermediate value and then use this value in a modified trapezoidal rule. We have used it on a number of one- and two-factor SDEs and the results are very robust in general. This conclusion is based on numerical experimentation, and it is now a good moment to examine the scheme using mathematical analysis. To this end, we wish to have a formula that allows us to determine the mesh size (sometimes called h, k or deltaT) that ensures that the scheme has a unique, non-negative, convergent solution. We apply the Banach fixed-point theorem (BFPT): let (X, d) be a complete metric space and T: X -> X a contraction mapping; then the equation x = T(x) has a unique solution in X. For the current problem, we can write the Predictor-Corrector method in this form, X = F(X), at each time level.

The mapping F will be a contraction if we choose the mesh size k small enough; this is usually checked by calculating a norm of the Jacobian matrix (for example, the spectral radius). I still have to do some experiments to determine which norm is optimal in some sense.
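Here is a minimal sketch of the explicit predictor-corrector step described above: an Euler predictor followed by a trapezoidal-style corrector that reuses the same Wiener increment. The square-root (CIR-type) model and its parameter values in the usage example are my own illustrative placeholders, and the adjusted-drift variants of the scheme are omitted for brevity.

#include <algorithm>
#include <cmath>
#include <functional>
#include <iostream>
#include <random>

// One step of the explicit predictor-corrector scheme for dX = a(t,X) dt + b(t,X) dW.
// Predictor: explicit Euler; corrector: trapezoidal average of drift and diffusion,
// reusing the same Wiener increment dW.
double predictorCorrectorStep(const std::function<double(double, double)>& a,
                              const std::function<double(double, double)>& b,
                              double t, double x, double dt, double dW)
{
    double xPred = x + a(t, x) * dt + b(t, x) * dW;            // predictor (Euler)
    return x + 0.5 * (a(t, x) + a(t + dt, xPred)) * dt
             + 0.5 * (b(t, x) + b(t + dt, xPred)) * dW;        // corrector
}

int main()
{
    // Illustrative square-root process: dX = kappa (theta - X) dt + sigma sqrt(X) dW.
    // Parameter values are placeholders, not taken from the post.
    const double kappa = 1.0, theta = 0.04, sigma = 0.2;
    auto a = [=](double, double x) { return kappa * (theta - x); };
    auto b = [=](double, double x) { return sigma * std::sqrt(std::max(x, 0.0)); };

    std::mt19937_64 rng(42);
    std::normal_distribution<double> gauss(0.0, 1.0);

    const double T = 1.0; const int N = 250; const double dt = T / N;
    double x = theta, t = 0.0;
    for (int n = 0; n < N; ++n, t += dt)
        x = predictorCorrectorStep(a, b, t, x, dt, gauss(rng) * std::sqrt(dt));

    std::cout << "X(T) = " << x << '\n';
}

Note that reusing the same increment dW in both stages is essential; drawing a fresh increment in the corrector would define a different scheme.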

The main features of the Predictor-Corrector method are:

. Functionality: it is applicable to a wide range of one-factor and n-factor linear and non-linear SDEs, with good accuracy.

. Reliable: we have found it to be very robust in general. It is used in many areas of Numerical Analysis, for example PDE splitting methods

. Usable: it is relatively easy to program, especially if we have a generic framework; we only need to define the vector-valued functions for the drift and diffusion terms
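As a sketch of the "define only drift and diffusion" point (one possible interface, not the framework itself): an n-factor SDE can be captured by two vector-valued functions, and any of the schemes above can then be written once against this interface.

#include <functional>
#include <vector>

// Sketch of a generic n-factor SDE type: the user supplies only the vector-valued
// drift a(t, X) and diffusion b(t, X) (one Wiener process per factor here, for
// simplicity; a full diffusion matrix with correlation is an easy extension).
struct SDE
{
    using Vec = std::vector<double>;
    std::function<Vec(double, const Vec&)> drift;      // a(t, X)
    std::function<Vec(double, const Vec&)> diffusion;  // b(t, X), diagonal case
    Vec initialCondition;
};

// A concrete model (one-factor or multi-factor) is then defined simply by
// filling in drift, diffusion and initialCondition with suitable lambdas.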

If you click on Download you will see a document on some of the above schemes (this was joint work by Robbert Mirani and myself).

The "Just around the Corner" ... readers comments

In an earlier blog post I stated that multi-core PCs were on their way. Here are some readers' comments.

// A reader

Hi,

I read your blog post regarding multi-threaded computing. The comment is correct in saying that multi-threading hardware is already here.

I am now working for a risk software vendor and I'm glad to say that we have an ALM product that can distribute scenario simulations (Monte Carlo) across several servers and has the ability to utilise several CPUs as well.

As a confirmation of the earlier comment, multi-threading is not just around the corner... it is here (both hardware- and software-wise)!

// A reader

I have read your latest blog about multi-CPU chips. Small problem: they are not around the corner, they are widely available now. You can buy a 2-CPU desktop from Dell for about $800. They have 4-CPU servers available for just a little bit more. Both Intel and AMD produce these chips. The big advantage is that they use less power, and therefore require less cooling. The ones from AMD are supposed to have slightly better memory management. The servers come in flat, pizza-box-like cases so you can stack up as many of them as you want. Our Dell rep told us last week that one of their customers is building one that will likely be on the 100 most powerful list.

Have a look at http://www.top500.org/list/2006/06/100: #26, #27 and #28 are what you are talking about in your blog.

Multithreaded and High-Performance Computing for Monte Carlo and Finite Difference Methods

Predicting the Future – Next-Generation Software Systems in Quantitative Finance

Saying that software is vitally important for applications in Quantitative Finance should come as no surprise to most people. The software industry has grown at an amazing pace since the 1950s and 1960s. No longer do we need to develop applications using punch cards, paper tape and other such devices; for a number of years now we have had access to powerful desktop machines and modern software tools. The performance of hardware and network communications has been increasing at a phenomenal rate and this situation is likely to continue in the coming years. However, we believe that the underlying paradigms will need to change if we wish to develop larger, faster and more reliable software systems.

We are witnessing some new and important developments in the related areas of software and hardware systems for application development, and they will undoubtedly influence the course of software development projects for Quantitative Finance in the coming years. First, not only are ‘sequential’ desktop computers becoming more powerful, but in the not-too-distant future we shall witness the entry of multi-core computers, that is, computers with several CPUs on one chip. This means that multi-threaded and shared-memory applications are just around the corner, as it were. This is a form of fine-grained parallelism. Second, high performance computing (HPC) methods are slowly becoming more popular in mainstream development environments. Once belonging to the realm of rocket scientists and scientific researchers, HPC is now becoming an attractive solution for several numerical problems and applications in Quantitative Finance, for example Monte Carlo simulation, Finite Difference and Finite Volume methods, and high-frequency data acquisition and analysis applications. In particular, the de facto standard Message Passing Interface (MPI) library can be used to implement coarse-grained parallel applications in languages such as Fortran and C++ (a minimal sketch follows the list of advantages below). Finally – and perhaps crucially – software engineering is maturing to the stage where we can design and implement larger systems than in the past. In particular, the gradual acceptance of design and system patterns, architectural styles and domain architectures seems to imply that developers are attempting to learn design principles before they start programming.

The advantages for Quantitative Finance are:

. Performance: a multi-threaded or parallel program – if properly designed and implemented – runs faster than a single-processor program. For example, a Monte Carlo simulation on an 8-processor machine is typically 10 times faster than the same program on a single-processor machine

. Accuracy: this requirement has to do with a program giving desired or correct results. A parallel program can be designed in such a way that it produces – or is able to produce – more accurate results than a sequential program in a given amount of time. A good example is the use of domain decomposition techniques for FDM
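As promised above, here is a minimal sketch of coarse-grained parallelism with MPI: each process prices its own share of the Monte Carlo paths for a plain-vanilla call under geometric Brownian motion, and a single MPI_Reduce combines the partial sums. The model, the parameter values and the naive per-rank seeding are illustrative placeholders.

// Minimal coarse-grained Monte Carlo with MPI: each rank simulates
// totalPaths/size paths independently (any remainder is ignored for
// simplicity); only one reduction is needed at the end.
#include <mpi.h>
#include <algorithm>
#include <cmath>
#include <iostream>
#include <random>

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank = 0, size = 1;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const double S0 = 100.0, K = 100.0, r = 0.05, sigma = 0.2, T = 1.0;
    const long totalPaths = 1000000;
    const long localPaths = totalPaths / size;

    std::mt19937_64 rng(12345 + rank);          // naive per-rank seeding, for illustration only
    std::normal_distribution<double> gauss(0.0, 1.0);

    double localSum = 0.0;
    for (long i = 0; i < localPaths; ++i)
    {
        double z  = gauss(rng);
        double ST = S0 * std::exp((r - 0.5 * sigma * sigma) * T + sigma * std::sqrt(T) * z);
        localSum += std::max(ST - K, 0.0);      // call payoff
    }

    double globalSum = 0.0;
    MPI_Reduce(&localSum, &globalSum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        std::cout << "MC price: " << std::exp(-r * T) * globalSum / (localPaths * size) << '\n';

    MPI_Finalize();
}

The same decomposition idea carries over to finite differences via the domain decomposition techniques mentioned in the accuracy point above.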

These advantages come at a cost, however. The definition of ‘developer’ will need to be deepened and extended. Prediction is difficult (especially of the future), but a precondition for success in this new paradigm is the emergence of two major roles: first, the architect, who has a good knowledge of Quantitative Finance and who can translate the problem domain into a high-level architecture, and second – for want of a better word – the HPC developer, who can design, implement and fine-tune an HPC solution that satisfies all the given requirements.

The trend is somewhat similar to what we see in other disciplines; first analyse and design the problem, then implement the solution. I prefer to pay an architect to create a CAD drawing of a house before I order 50,000 bricks and 10,000 roof slates.

Caveat: Prediction is difficult
