
Faust, Economics, Psychology, and Models

It has now become clear that a “Faustian bargain” made by finance academics has broken down. What bargain? The bargain was the adoption of the holy dictum of “no arbitrage” for financial modeling, which meant the finance department didn’t have to understand or cope with economics. Ditto for psychology. However, traders and investors are acutely aware of economics and psychology. Questions: Is there some sort of disconnect? Are we paying the price? Answer to both questions: Yes.

I propose that we review this Faustian bargain.

Here are the basics for financial models. No financial model has the status of a physical law (regardless of who is pushing the model, regardless of whatever mathematical framework is being used, regardless of how sophisticated it is, and regardless of whether fat tails are in it or not). Any financial model is really only a phenomenological construct. The parameters that are needed to specify the model are “implied”. In practice this means that a model, intended to price some security, first prices other securities or quantities in accord with market prices. Other parameters are determined by fits to historical data, and/or using “fundamentals”.
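A minimal sketch of what "implied" means in practice (my illustration, not part of the original text; the numbers are hypothetical): given a market price, the model's volatility parameter is chosen so the model reproduces that price, here via a simple bisection search on the Black-Scholes formula.

```python
import math

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-8):
    # Bisection: find the sigma at which the model reproduces the market price.
    # This is the sense in which the parameter is "implied" by the market.
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Sanity check: a model priced at sigma = 0.25 should imply 0.25 back
market_price = bs_call(100.0, 100.0, 1.0, 0.03, 0.25)
print(round(implied_vol(market_price, 100.0, 100.0, 1.0, 0.03), 4))  # prints 0.25
```

When the market price itself is "insane" (in the sense above), this inversion still runs, but the implied parameter it returns may be economically unbelievable, which is exactly the breakdown being described.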

But this procedure, which the finance industry has used since the beginning of the “Quant Age” in the 1980’s, has now broken down for many securities. Models that are constructed using the best information, using the best attempts at consistency, and using conservative assumptions on the fundamentals, cannot describe the insanely low market prices for many securities. This is a huge problem. It cannot be said too strongly – the implied parameter approach used to specify model parameters is now dysfunctional in some markets.

Whose fault is this? Does the fault lie with the Faustian bargain?

Let’s start with economics. It is clear now that market prices do reflect economics – e.g. the recession. For example, long-term debt clearly has a component that is ruled by economics (and not just default). Spreads for bonds are now so high that they imply unbelievable default rates. The Fed stress tests are macro-economically driven. So why aren’t economic variables found in models for bonds? Answer: The Faustian bargain.

Actually, there is at least one model that started to address the issue of economics and financial modeling, the “Macro-Micro” model. Also, factor models provide a framework for including many variables. I believe such models should be taken more seriously and made more explicit. This is a daunting challenge, but I believe we can no longer avoid it. We need to get away from the Faustian bargain.

What about psychology? Market prices are now at least partially ruled by psychology, involving e.g. fear and damage control. So-called “investor rationality” has disappeared (actually this statement is just a tautology that re-expresses the breakdown of the financial axioms; individually, under the circumstances, investors are behaving quite rationally). Psychology has always been instrumental in the markets, but now psychology has broken the markets. However, we have no “psychology” parameters in models. Is this a fundamental error? Yes, I think so. Can we get away from the Faustian bargain here? I don’t know. It is a humbling experience.

In this space of fog, there is at least one clear point. Can’t we just blame the quants? Here the answer is clear: “Nope”. Quants just implement the general religious dictum provided by the academics (naturally with bells and whistles). For the record, the recent catastrophic mortgage meltdown is not the first time quants have been blamed. I remember an incident once reported to me where a particularly obnoxious mortgage trader got up in front of a group meeting and demanded that the head quant apologize for his model, which the trader accused of causing mortgage-trading losses! Ugh.

What does the future hold? The banks have argued that we cannot mark-to-market securities if there is no market for those securities. They certainly have a point. So they argue that we should mark-to-model. Uh-oh. No more constraints on the models from the market? What is going to constrain the models now? Certainly not “model validation”, which doesn’t address the fundamental problem: as I said, no model is “valid” in the sense of physics. What about “parameter reasonableness” criteria? The answer would be “Yes” for some parameters, but “No” for other parameters that depend on the broken model-vs-market consistency. We are in uncharted waters. I can foresee the emergence of two classes of models: #1 for traders who need to trade on the market, and #2 for capital determination and regulatory reporting. Confusion will reign.

Now by no means do I want to imply that models are useless. We need models. Nor do I want to imply that we abandon implied parameters. We want to have models that describe market prices, if possible. But current models may be too narrowly focused. At least with present horrible market conditions, and maybe in general, I believe that we need, somehow, to include economics and psychological constructs in the models.

Maybe someday with a return of confidence, the blue skies will re-emerge, investors will again become “rational”, and the current models relying on the Faustian bargain will again have reasonable implied parameters that do describe the traded market prices. And maybe not.

In any case, we should keep in mind that even if the good times return, the entire financial system could well exhibit systemic fragility again – that is, it could break again.

Faust was a winner for a long time. However, Faust had his problems. Where did he wind up? For the quant modelers reading this, I recommend that for background you get a CD or DVD of the opera “La Damnation de Faust” by Berlioz, and listen closely. Then shut off the CD player, go to your local University, look up a friendly Economics professor, and then find out where the Psychology Department is located. That’s where I’m headed.



1. Fed SCAP stress tests:

2. Market vs. Model: Andrew Davidson Industry Insight - Proposed FSP FAS 157-e:

3. Bond spreads and defaults:

4. Macro-Micro Model: Chapters 47 – 52, my book:

5. Faust:


© 2009 Jan W. Dash. All rights reserved.

Models, Phenomenology, and All That

There is a profound misunderstanding of the fundamental nature of models, and more generally of quantitative finance and risk management, among many people, from accountants to professors to players in the world of finance to the general public. A better understanding of what financial models really are would yield a better analysis and attribution of some causes of the current financial crisis, and perhaps help to prevent future such crises. Emanuel Derman and Paul Wilmott have a good Manifesto, to which I subscribe, and which was effectively pre-signed in my book. Below are some quotes from the book (ref) that indicate my Philosophy of Models, the basic lines of which I have preached ever since coming to the Street in the late 80’s, and which were written long before the current crisis. The bottom line is that, although Models, Quantitative Finance, and Risk Management may resemble “hard science”, and although we should and do try to be as scientific as possible, what we have always had is what physicists call “phenomenology”. This does not mean that the models are useless. We always need qualitative judgment, but we also need models. What does all this mean? What are some implications? Read on for some thoughts.

Ref: Excerpts from Quantitative Finance and Risk Management, A Physicist’s Approach: Pp. 7-8 (Ch.2); Pp. 415-420 (Ch. 32); Pp. 421-424 (Ch. 33); footnote 9, page 508 (Ch. 42).

Why is Quantitative Finance not a Science?

In science there is real theory in the sense of Newton's laws (F = ma) backed by a large collection of experiments with high practical predictive power and known domains of applicability (for Newton’s laws, this means objects not too small and not moving too fast).

In contrast, financial theoretical "postulates", when examined closely, turn out to involve assumptions, which are at best only partially justifiable in the real world. The financial analogs to scientific "experiments" obtained by looking at the market are of limited value. Market information may be quite good, in which case not much theory is needed. If the market information is not very good, the finance theory is relatively unconstrained. Finance computer systems are always incomplete and behind schedule (this is a theorem).


Quantitative Finance is Not Science but Phenomenology

The situation characterizing quantitative finance is really what physicists call "phenomenology". Even if we could know the "Newton laws of finance", the real world of finance is so complex that the consequences of these laws could not be evaluated with any precision. Instead, there are financial models and statistical arguments that are only partially constrained by the real world, and with unknown domains of applicability, except that they often break when the market conditions change in an extreme fashion. The main reason for this fragility is that human psychology and macroeconomics are fundamentally involved. The worst cases for risk management, such as the onset of collective panic or the potential consequences of a deep recession, are impossible to quantify uniquely—extra assumptions tempered by judgment must be made.


What About Uncertainties in the Risk Itself?

One characteristic showing that risk management is not a science is the lack of quantification of the uncertainties in risk calculations and estimates. Uncertainty or error analysis is always done in scientific experiments. It is preferable to call this activity "uncertainty" analysis because "error" tends to conjure up human error. While human error should not be underestimated, the main problem in finance often lies with uncertainties and incompleteness in the models and/or the data. Risk measurement is standard, but the uncertainty in the risk itself is usually ignored.

In finance, there is too often an unscientific accounting-type mentality. Some people do not understand why uncertainties should exist at all, tend to become ill-tempered when confronted with them, and only reluctantly accept their existence. The situation is made worse by the meaningless precision often used by risk managers and quants to quote risk results. Quantities that may have uncertainties of a factor of two are quoted to many decimal places. False confidence, misuse, and misunderstanding can and do occur. A fruitless activity is attempting to explain why one result differs from another under somewhat different circumstances, when the numerical uncertainties in all these results are unknown and potentially greater than the differences being examined.
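A minimal sketch of this point about unquantified uncertainty (my illustration; the returns are simulated with a fixed seed and the numbers are purely illustrative): a historical 99% VaR estimated from one year of daily data, with the sampling uncertainty of the estimate itself gauged by bootstrap. Quoting the estimate to four decimal places, as below, is precisely the kind of meaningless precision at issue when its own uncertainty is a sizable fraction of the number.

```python
import random
import statistics

random.seed(0)  # deterministic run; purely illustrative numbers

def hist_var_99(returns):
    # Historical 99% VaR: magnitude of a return near the worst 1% of the sample
    worst = sorted(returns)
    return -worst[int(0.01 * len(returns))]

# One year (~250 days) of simulated daily returns, sigma = 1% per day
returns = [random.gauss(0.0, 0.01) for _ in range(250)]
point = hist_var_99(returns)

# Bootstrap: resample the same data to estimate the uncertainty in the VaR itself
boot = [hist_var_99([random.choice(returns) for _ in range(len(returns))])
        for _ in range(500)]
se = statistics.stdev(boot)

# Four decimal places of "precision" next to a substantial uncertainty
print(f"99% VaR estimate: {point:.4%} +/- {se:.4%}")
```

The uncertainty band is part of the answer; reporting only the point estimate hides it.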


Summary of Model Risk

We start with the obvious comment that models are now an indispensable part of modern finance. Securities and derivatives require model pricing; finance could not live without models.

Nonetheless, in spite of the best efforts of many talented and smart people, model risk is constantly present at some level and is due to many causes. One risk is the variability in model assumptions, none of which can be proved in any rigorous way. Some important effects on prices can be modeled only imperfectly, if at all. No financial model has the status of a “law of physics”, even if physics-based diffusion models and other concepts are used. Further, there is no “best” model, regardless of whose ego is involved. Different firms can and do have different models for the same instrument.

Model risk is hidden unless model-to-model or model-to-market comparisons are made. The risk is highest for illiquid, long-dated options. For highly liquid instruments, models are standardized with slight variations. Substantial losses due to model risk have occurred even for plain-vanilla products, however.

Model risk includes the risk of using approximate or inappropriate parameter types, or using the model in inappropriate parameter regimes. Models are used in practice to parameterize securities in some approximate way, and are usually only to be trusted for some short extrapolation from the region where market data are available. These parameters include maturity length, strike values for options, etc.

The intentional use of inaccurate parameter values is a separate problem.

Numerical approximations are unavoidable and are a function of available time and resources, but they can lead to difficulties.

Part of model risk lies in the pitfalls of software development. See Ch. 34. A host of mundane but important issues exists: coding errors, computer malfunctions, misinterpretations, communication snafus, inconsistencies, etc. Anyone who wants to get an idea of the difficulty is invited to sit down at the computer and give it a try.

Model-generated hedging predictions are more problematic than pricing, since hedging involves taking differences of prices under changing market conditions. Models differ more in the hedges than in the prices.
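A toy illustration of this point (my construction, using two hypothetical volatility assumptions, not a model from the book): a flat-volatility model and a "sticky-moneyness" skew model calibrated to the same at-the-money quote produce identical prices, yet their bump-and-revalue deltas differ substantially, because bumping the spot moves the skew model's volatility.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

K, T, r = 100.0, 1.0, 0.03

def vol_flat(S, K):
    return 0.20  # model A: flat volatility

def vol_skew(S, K):
    # model B: hypothetical "sticky-moneyness" skew; vol moves when spot moves
    return 0.20 - 0.5 * (K / S - 1.0)

def price(vol_model, S):
    return bs_call(S, K, T, r, vol_model(S, K))

def delta(vol_model, S, h=0.01):
    # Bump-and-revalue: hedging takes differences of prices as conditions change
    return (price(vol_model, S + h) - price(vol_model, S - h)) / (2.0 * h)

S = 100.0
price_gap = abs(price(vol_flat, S) - price(vol_skew, S))
delta_gap = abs(delta(vol_flat, S) - delta(vol_skew, S))
print(f"price gap: {price_gap:.6f}, delta gap: {delta_gap:.3f}")
```

Both models match the quoted price exactly, so price comparisons reveal nothing; the disagreement only surfaces in the hedges.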

Model risk issues have arisen in a number of contexts. The interested reader is invited to consult the literature and the references.


Model Risk and Risk Management

Models are the cornerstone of risk management. Models characterize the behavior of financial instruments under different possible environments. This information is used to determine the risk of these instruments, thus of departments, and ultimately of the corporation with respect to the markets. We have exhibited a variety of model calculations, and we need to understand the model limitations. These limitations translate into a risk associated with the very models used to assess risk: model risk results from model limitations.


Liquidity Model Limitations

There is a variety of other problems not included in models. Effects related to supply and demand, the trading volume, and the time needed to sell a security are lumped together into "liquidity", and there is no good way to model these effects. Often they are just left out of the model price.

Bid-offer spreads are a related issue. If there is only a one-sided market, so that (for example) the selling price is not known, then these spreads must be estimated independently of the model algorithm.


Which Model Should We Use?

Because there is no real theory of finance in the sense of physics, financial models are not unique. Different institutions, especially for illiquid financial products, often use different models. If one model is used in place of another, the differences in the values of the securities and the differences in the sensitivities with respect to movements in the underlying variables become an issue in risk management. For example, if one model reports that the interest-rate dependence of a partially hedged position is near zero, while another model, just as sophisticated and defended with at least as much exuberance, reports that the interest-rate dependence of the same position is large, which statement do you believe?

Sometimes a proprietary desk model is used for trading, and another model with simpler assumptions but widely used on the Street is used for corporate risk management reporting. Which model should be used to measure risk? The real answer is that there is model risk. Different models give different results, so there is an uncertainty in risk reporting due to the very existence of different models. A corporate goal should be the quantification of this model risk, i.e. the uncertainty in risk management itself.


Psychological Attitudes towards Models

The psychological attitudes toward models are not to be ignored. People who do not understand the limitations of models ask for the "best" model, and some people who should know better may believe they have the "best" model. Some people trust the models to such an extent that if their model disagrees with the market they assume the market is "wrong" and will eventually agree with the model. Sometimes this attitude pays off and sometimes it results in disaster. Sophisticated players understand and even fear the limitations of the models, sometimes using them only as a guide in difficult markets for illiquid products.


Model Risk, Model Reserves, and Bid-Offer Spreads

Models used for risk management are themselves risky to some extent. A corporate reserve could be taken to account for this model risk. This is difficult to convey to accountants, who want to know exactly how much the model risk is and exactly when or under which conditions they should apply the reserve. Because model risk will really show up when relatively illiquid positions are sold in difficult market conditions under pressure at someone else's model price, the risk is hard to quantify. Still, model risk is not zero and may be very large.

Alternatively, if known, model risk can be used to estimate part of the bid-offer spread for illiquid products.


Model Quality Assurance

The best way to quantify model risk is through a model "quality assurance" (QA) program (cf. Ch. 33). Model QA now exists in most large financial institutions, although model risk is generally only determined in an incomplete fashion. Model assumptions and procedures are documented. Sensitivities to different parametric assumptions can be examined. Models can be assessed and compared.


Models and Parameters

Because there is no financial model that proceeds from first principles that are unambiguously correct, models are largely driven by parameters. These parameters are chosen through a combination of somewhat conflicting goals: the parameters are chosen such that the model into which they are placed produces prices that, at least approximately, fit selected market data. The difficult cases are those where there is little or no market information. Models differ partially because the market constraints placed on them can be chosen in different ways. The number of parameters is a compromise between fitting the known market prices and unwieldy complexity.

Having chosen the parameters to fit the market approximately, the models can be viewed essentially as providing an extrapolation or interpolation methodology. Thus, if a deal comes up that has parameters not currently quoted in the market, which is often the case for over-the-counter deals, the model is used to derive a price for that deal. The models, through extrapolation or interpolation, price illiquid instruments in a portfolio.
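A minimal sketch of this interpolation use of a model (my illustration; the quoted strikes and volatilities are hypothetical): implied vols quoted at a few strikes are interpolated linearly in strike, and the model then prices an over-the-counter strike the market does not quote.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Hypothetical market quotes: implied vols at a few strikes (illustrative numbers)
quotes = {90.0: 0.24, 100.0: 0.20, 110.0: 0.18}

def interp_vol(K):
    # Linear interpolation in strike between the two bracketing quotes
    ks = sorted(quotes)
    for k0, k1 in zip(ks, ks[1:]):
        if k0 <= K <= k1:
            w = (K - k0) / (k1 - k0)
            return (1.0 - w) * quotes[k0] + w * quotes[k1]
    raise ValueError("strike outside quoted range: extrapolation, not interpolation")

# Price an OTC strike that is not quoted in the market
otc_price = bs_call(100.0, 105.0, 1.0, 0.03, interp_vol(105.0))
print(round(otc_price, 4))
```

The choice of interpolation scheme (linear here, for simplicity) is itself a model assumption, and extrapolating beyond the quoted strikes is where the model risk concentrates.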

It should be emphasized that the types and numbers of parameters in reality form an integral part of a model. In a profound sense, the parameters cannot be separated or isolated from the assumption of the underlying dynamics and the implementation of the mathematics through some computer algorithm. For example, if one volatility is used to describe a diffusion process rather than several volatilities, that is a model assumption. In fact, some models are just shells into which complex parametric functions are inserted. The Black-Scholes equity option formula is currently used in practice with a breathtaking richness of parameterization of the volatility "surfaces" describing different options, including “skew” effects. The volatilities cannot be separated from the assumption of simple diffusion and the algorithms used.


Too Much Mathematical Rigor in Finance?

… following Feynman (ref: Feynman and Hibbs), it is difficult to see the utility of a full-court press for rigor when financial models are only approximate, i.e. various assumptions behind the models are manifestly violated in the real world.

There is, moreover, a serious case against too much mathematical rigor in finance. Rigor can hide irrelevance. Rigor teaches us nothing new of practical importance. Rigor can be counterproductive because it makes the subject appear harder than it really is. The worst is that rigor gives a false sense of model validity.

The use of excessive rigor in finance parallels the situation in physics in the 1960’s with mathematically rigorous axiomatic field theory. One paper (Gell-Mann et al.) put the situation in perspective: “In particular, the contribution of axiomatic field theory to calculations has been less than any pre-assigned positive number, however small”.



New Paper: Multivariate Integral Perturbation Techniques - I (Theory)

Attached is my new paper MULTIVARIATE INTEGRAL PERTURBATION TECHNIQUES - I (THEORY). The paper introduces the theory for a perturbation method for evaluating an N-dimensional multivariate Gaussian integral, breaking it down into a sum of one-dimensional integrals. Numerical aspects are being examined, and will appear in a forthcoming paper.

This paper is now published: International Journal of Theoretical & Applied Finance, Vol. 10, No. 8, pp. 1287-1304 (Dec. 2007).

Here is the abstract:

ABSTRACT: We present a quasi-analytic perturbation expansion for multivariate N-dimensional Gaussian integrals. The perturbation expansion is an infinite series of lower-dimensional integrals (one-dimensional in the simplest approximation). This perturbative idea can also be applied to multivariate Student-t integrals. We evaluate the perturbation expansion explicitly through 2nd order, and discuss the convergence, including enhancement using Padé approximants. Brief comments on potential applications in finance are given, including options, models for credit risk and derivatives, and correlation sensitivities.
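The flavor of such an expansion can be seen in a standard special case (this is an analogous textbook example, not the paper's full method): the bivariate normal CDF expanded to first order in the correlation rho. By Plackett's identity, the derivative of the CDF with respect to rho is the bivariate density, which at rho = 0 factorizes into a product of one-dimensional pieces, giving Phi2(a, b; rho) ≈ Phi(a)Phi(b) + rho·phi(a)phi(b). At a = b = 0 the exact closed form 1/4 + arcsin(rho)/(2·pi) is available as a check.

```python
import math

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def bvn_cdf_pert(a, b, rho):
    # Zeroth order: product of one-dimensional integrals.
    # First order in rho: d(Phi2)/d(rho) = phi2(a, b; rho), which at
    # rho = 0 factorizes into phi(a) * phi(b).
    return norm_cdf(a) * norm_cdf(b) + rho * norm_pdf(a) * norm_pdf(b)

def bvn_cdf_exact_00(rho):
    # Exact closed form for the special case a = b = 0
    return 0.25 + math.asin(rho) / (2.0 * math.pi)

rho = 0.3
approx = bvn_cdf_pert(0.0, 0.0, rho)
exact = bvn_cdf_exact_00(rho)
print(round(approx, 6), round(exact, 6))  # prints 0.297746 0.298491
```

Even at a sizable correlation of 0.3, the first-order term removes most of the zeroth-order error, which is the practical appeal of such expansions for high-dimensional integrals.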

File: Multivariate Integral Perturbative Techniques I Theory Sept06 R2 posted.pdf