Science in Finance VI: True Sensitivities, CDOs and Correlations

[Figure: values of the Senior, Mezzanine and Equity Tranches plotted against correlation]

One of the more quantitative aspects of recent financial crises has been the valuation of CDOs, highly complex credit instruments whose value depends upon the behaviour of many, many underlyings.

Now your typical quant favours just one tool to capture the interaction of two assets, and that tool is correlation. Of course, this is a very unsubtle tool being used to capture the extremely subtle interactions between companies. And when you have 100 underlyings the number of correlations is 100×99/2 = 4,950. All of them unstable, all of them meaningless. Yet you will often find complex derivatives being priced using such wildly nonsensical data. Sometimes, in the interests of simplicity, instruments are priced assuming all correlations are the same. The rationale behind this is to recover some robustness, in the sense that you might be more confident about one parameter than about 4,950 of them. If only it were that simple!
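
To see just how unstable those estimates are, here is a minimal sketch (not from the original analysis; the window length and the true correlation are illustrative assumptions): even when the true correlation never changes, sample correlations measured over short rolling windows wander all over the place.

```python
import numpy as np

rng = np.random.default_rng(0)

n_names = 100
n_pairs = n_names * (n_names - 1) // 2       # 100 x 99 / 2 = 4,950
print(f"{n_pairs} pairwise correlations to estimate")

# Two assets with a genuinely constant correlation of 0.3, observed daily
# for roughly four years.
true_rho = 0.3
n_days = 1000
z1 = rng.standard_normal(n_days)
z2 = true_rho * z1 + np.sqrt(1 - true_rho**2) * rng.standard_normal(n_days)

# Sample correlation over rolling 60-day windows wanders widely even though
# the true value never moves.
window = 60
estimates = [np.corrcoef(z1[i:i + window], z2[i:i + window])[0, 1]
             for i in range(n_days - window)]
print(f"true rho = {true_rho}, rolling estimates range "
      f"from {min(estimates):.2f} to {max(estimates):.2f}")
```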

Much more on correlation in a later blog.

Returning to the subject of CDOs, I conducted a simple experiment on a CDO with just three underlyings, really just a toy model to illustrate some important issues. I started by assuming a single correlation (instead of three) to capture the relationship between the underlyings, and a ‘structural model.’ I then looked at the pricing of three CDO tranches, and in particular their dependence on the correlation. Look at the figure above, but ignore the exact numbers. First observe that the Senior Tranche value decreases monotonically with correlation, the Equity Tranche value increases monotonically, while the Mezzanine Tranche appears to be very insensitive to correlation.
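
To make the setup concrete, here is a hedged sketch of this kind of toy experiment: three names, a Merton-style structural model in which a name defaults when a standard normal ‘asset value’ falls below a threshold, and a single pairwise correlation shared by all three names. The default probability, zero recovery, equal attachment points and Monte Carlo set-up are all illustrative assumptions, not the parameters behind the figure.

```python
import numpy as np
from scipy.stats import norm

def price_tranches(rho, p_default=0.2, n_sims=200_000, seed=42):
    """Expected surviving fraction of each tranche's notional for a three-name
    pool sharing a single pairwise correlation rho (valid for rho > -0.5)."""
    rng = np.random.default_rng(seed)
    n = 3
    corr = np.full((n, n), rho)
    np.fill_diagonal(corr, 1.0)
    chol = np.linalg.cholesky(corr)
    # Correlated standard normal "asset values"; a name defaults when its
    # asset value falls below the threshold implied by the default probability.
    z = rng.standard_normal((n_sims, n)) @ chol.T
    defaults = (z < norm.ppf(p_default)).sum(axis=1)
    pool_loss = defaults / n                       # equal notionals, zero recovery

    def tranche_value(attach, detach):
        tranche_loss = np.clip(pool_loss - attach, 0.0, detach - attach)
        return 1.0 - tranche_loss.mean() / (detach - attach)

    # Assumed attachment points: equity 0-1/3, mezzanine 1/3-2/3, senior 2/3-1.
    return {"equity":    tranche_value(0.0, 1 / 3),
            "mezzanine": tranche_value(1 / 3, 2 / 3),
            "senior":    tranche_value(2 / 3, 1.0)}

print(price_tranches(rho=0.0))
print(price_tranches(rho=0.5))
```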

Traditionally one would conduct such sensitivity experiments to test the robustness of one’s prices or to assist in some form of parameter hedging. Here, for example, one might conclude that the value of the Mezzanine Tranche is very accurate since it is insensitive to the correlation parameter. For a single correlation ranging from -0.25 to +0.5 the Senior Tranche value ranges from 0.643 to 0.797, the Equity Tranche from 0.048 to 0.211, and the Mezzanine Tranche from 0.406 to just 0.415. (Remember, don’t worry about the numbers in this toy model, just look at the structure.) If you are confident in your valuation of the Mezzanine Tranche, then so will the next bank be, and with competition being what it is, bid and offer prices will converge.
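
Sweeping a constant correlation over the quoted range, using the price_tranches sketch above, reproduces the structure (though not, of course, the exact numbers): a wide range of values for the Senior and Equity Tranches and a very narrow one for the Mezzanine.

```python
import numpy as np

# Sweep a constant correlation from -0.25 to +0.5 using the price_tranches
# sketch above; the values depend on the assumed parameters, but the pattern
# is the same: equity and senior move a lot, the mezzanine hardly at all.
rhos = np.linspace(-0.25, 0.5, 16)
values = {name: [] for name in ("equity", "mezzanine", "senior")}
for rho in rhos:
    for name, v in price_tranches(rho).items():
        values[name].append(v)

for name, v in values.items():
    print(f"{name:>9}: min {min(v):.3f}  max {max(v):.3f}  range {max(v) - min(v):.3f}")
```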

Such an analysis could not possibly be more misleading, such a conclusion could not possibly be more incorrect and such a response could not possibly be more financially dangerous.

Consider a more interesting, and more realistic, world in which correlation is state-dependent. Now, allowing correlation to range over the same -0.25 to +0.5 but to depend on ‘state’ rather than being constant, you will find that the Senior Tranche still varies from 0.643 to 0.797 and the Equity Tranche still varies from 0.048 to 0.211, but the Mezzanine Tranche now varies from 0.330 to 0.495, a factor of 18 in the sensitivity compared with the traditional naïve analysis. The reason is simple: inside the Mezzanine Tranche structure there is a non-monotonic sensitivity to correlation which is masked when the value is calculated with a constant correlation; sometimes more correlation is good, sometimes more correlation is bad. (For the Senior Tranche more correlation is always bad, for the Equity Tranche more correlation is always good.)
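
To illustrate the state-dependent point, here is another hedged sketch, again with toy assumptions rather than the model behind the figure. Work conditionally on a common factor M: given M, the names default independently with a probability that depends on M and on the correlation. With a constant correlation the Mezzanine value is an average over M; if the correlation is instead allowed to depend on the state M (here chosen state by state, adversarially, within a non-negative sub-range so the one-factor form applies), the best- and worst-case Mezzanine values land well outside the constant-correlation band.

```python
import numpy as np
from math import comb
from scipy.stats import norm

p_default = 0.2                        # assumed default probability per name
n_names = 3
attach, detach = 1 / 3, 2 / 3          # assumed mezzanine attachment points
rho_grid = np.linspace(0.0, 0.5, 11)   # rho >= 0 so the one-factor form applies

def mezz_value_given_M(M, rho):
    """Mezzanine value conditional on the common factor M, for correlation rho."""
    M = np.asarray(M, dtype=float)
    p_cond = norm.cdf((norm.ppf(p_default) - np.sqrt(rho) * M) / np.sqrt(1 - rho))
    # Conditional on M the defaults are independent, so the number of defaults
    # out of three names is binomial with probability p_cond.
    pk = np.stack([comb(n_names, k) * p_cond**k * (1 - p_cond)**(n_names - k)
                   for k in range(n_names + 1)])
    tranche_loss = np.clip(np.arange(n_names + 1) / n_names - attach,
                           0.0, detach - attach)
    return 1.0 - tranche_loss @ pk / (detach - attach)

Ms = np.random.default_rng(1).standard_normal(50_000)
per_state = np.stack([mezz_value_given_M(Ms, rho) for rho in rho_grid])

const_values = per_state.mean(axis=1)        # one constant rho for all states
state_low = per_state.min(axis=0).mean()     # rho picked per state, worst case
state_high = per_state.max(axis=0).mean()    # rho picked per state, best case

print(f"constant rho:        [{const_values.min():.3f}, {const_values.max():.3f}]")
print(f"state-dependent rho: [{state_low:.3f}, {state_high:.3f}]")
```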

Why on earth people thought it a good idea to measure sensitivity to a parameter that has been assumed to be constant escapes me still.

The moral of this example is simple: there is far more risk inside some of these instruments than you could ever hope to find with classical analyses. Stop using such convoluted models, use more straightforward models, and start thinking about where your models’ sensitivities really lie. Your models can fool some of the people all of the time, and all of the people some of the time, but your models cannot fool all of the people all of the time.

P