Almost fifteen years after the seminal VaR debate between academic P Jorion and practitioner N Taleb, a much more humble follow-up (courtesy of Financial World mag).
VaR not guilty
Over-reliance on Value-at-Risk (VaR) and a misunderstanding of its statistical accuracy were mistakes made by bank management in the run-up to the crash. However, to suggest that a mathematical measurement tool, as opposed to, say, incompetent management or poor loan origination standards, was a prime cause of the crash is a gross abdication of responsibility. Pablo Triana (How VaR put banks on road to ruin, FW May) suffers from the same conceptual misunderstanding of VaR that bankers had, and assigns a bigger role to a statistical model than it deserves.

First, VaR was not “invented by Wall Street in the late 1980s” – it was introduced by JPMorgan in 1994.

Second, there were two applications of VaR: market VaR and credit VaR. One version used past returns to model the future, but the standard version allowed users to input the volatility parameter and modify it as they wished. Market VaR is not significantly more inaccurate than modified duration. Triana says VaR treated US Treasuries the same as collateralised debt obligation (CDO) tranches; but VaR is a simple statistical model and does what it is told. Most credit VaR models use credit rating transition probabilities as their main input, so if a CDO tranche is assigned a AAA rating, statistically over the next 12 months it has the same default risk as a US Treasury. Of course, we all know this is nonsense, because a CDO exhibits greater credit risk than a Treasury, so the problem lies not with VaR but with the use made of it.

Third, and most importantly, Basel rules, not VaR, drive the calculation of bank capital and (indirectly) the level of leverage. Basel 2 allowed regulatory capital to be calculated according to credit rating and, in one version, using the bank’s own statistical data. VaR does not impact regulatory capital rules, so blaming it for the high leverage of Wall Street banks is unfair, inaccurate and takes the focus away from the real culprits: management.
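The rating-transition point above can be sketched in a few lines. This is a minimal illustration, not any bank’s actual credit VaR model, and the default probabilities are hypothetical, not real agency data: a model keyed only on rating assigns identical 12-month default risk to any two AAA-rated instruments, whatever their structure.

```python
# Hypothetical one-year default probabilities keyed by credit rating.
# Illustrative numbers only -- not real transition-matrix data.
ONE_YEAR_DEFAULT_PROB = {
    "AAA": 0.0001,
    "AA":  0.0003,
    "BBB": 0.0030,
    "B":   0.0600,
}

def one_year_default_risk(instrument):
    # The model sees only the rating: nothing about the instrument's
    # actual structure (CDO senior tranche vs. government bond).
    return ONE_YEAR_DEFAULT_PROB[instrument["rating"]]

treasury = {"name": "US Treasury", "rating": "AAA"}
cdo      = {"name": "CDO senior tranche", "rating": "AAA"}

# Same rating in, same modelled default risk out.
assert one_year_default_risk(treasury) == one_year_default_risk(cdo)
print("identical modelled default risk:", one_year_default_risk(cdo))
# → identical modelled default risk: 0.0001
```

The model is doing exactly what it is told: the flaw lies in feeding it a AAA label for a CDO tranche, not in the lookup itself.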
The real problem with VaR, which Triana could have pointed out, was that the most common version (the variance-covariance approach) assumed a normal distribution for returns and was usually calculated at a 90 per cent or 95 per cent confidence level. Extreme market crashes do not follow this pattern, and crucially occur with much greater frequency than such a confidence level implies, which is why VaR could never hope to capture severe market corrections. Irresponsibly, senior bank management was frequently unaware of this. Many failed banks, including HBOS, Northern Rock and Bradford & Bingley, did not use credit VaR, or at least did not rely on it to the exclusion of other risk measurement methodologies, so their failure must be blamed on something else. Ultimately, one has to look at incompetent bank management.
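A small simulation makes this fat-tail failure concrete. The sketch below is illustrative, with hypothetical parameters throughout: it computes the parametric (variance-covariance) VaR under the normal-returns assumption, then counts how often fat-tailed Student-t returns, scaled to the very volatility the model was given, breach that threshold. The gap is clearest at high confidence levels, so the example uses 99 per cent.

```python
import math
import random
from statistics import NormalDist

random.seed(42)

def parametric_var(sigma, confidence):
    """Variance-covariance VaR: a normal quantile times volatility."""
    z = NormalDist().inv_cdf(1 - confidence)   # e.g. about -2.326 at 99%
    return z * sigma                           # loss threshold, in return units

def student_t(df):
    """Draw a Student-t variate as normal / sqrt(chi-squared / df)."""
    z = random.gauss(0, 1)
    chi2 = random.gammavariate(df / 2, 2)      # chi-squared with df degrees
    return z / math.sqrt(chi2 / df)

sigma = 0.01                        # hypothetical daily volatility fed to the model
var99 = parametric_var(sigma, 0.99)

# Hypothetical fat-tailed market: t(3) returns rescaled so their standard
# deviation matches the sigma the model assumed.
df = 3
t_scale = sigma / math.sqrt(df / (df - 2))
n = 100_000
breaches = sum(1 for _ in range(n) if student_t(df) * t_scale < var99)

# A well-specified 99% VaR should be breached on about 1% of days;
# fat tails push the observed rate noticeably above that.
print(f"expected breach rate: 1.0%, observed: {100 * breaches / n:.2f}%")
```

Under these assumptions the observed breach rate comes out meaningfully above the nominal 1 per cent, which is the letter’s point: the normal model understates the frequency of extreme losses.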
Professor Moorad Choudhry, London Metropolitan Business School
VaR in the dock
As someone who has been writing financial books for a while, including one on Value at Risk (VaR), I was puzzled by Moorad Choudhry’s letter in the June issue of FW. It misrepresents key aspects of VaR.

Contrary to his assertions, VaR was invented in the late 1980s: as has been amply documented by well-known sources, VaR made its formal debut inside JPMorgan around 1989-90, having been conceived and fine-tuned in prior years (in fact, Bankers Trust had developed something similar to VaR even earlier). What happened in 1994, well after the tool had been developed and had begun to be used, is that JPMorgan (through the RiskMetrics conduit) shared its VaR methodology with the world; the main goal of that public release was most likely to entice regulators into embracing VaR as the officially sanctioned risk management and capital-setting tool.

VaR is, of course, used for regulatory capital purposes. In 1995, the Basel Committee decided to allow banks to use VaR for the calculation of the market risk capital charge, an allowance that has not been rescinded to this day. Other regulators have since joined the bandwagon, such as the SEC in its devastatingly fateful 2004 ruling that embraced VaR. Choudhry’s comment that “VaR does not impact regulatory capital rules” is simply without foundation.

VaR can, and did, lead to poisonous leverage, courtesy of its monumental design flaws. The central question should thus be: why continue using a flawed tool that leads to crisis? Following the path trodden by VaR-defenders, Choudhry skirts the issue and sheepishly blames traders and bankers for misunderstanding poor old VaR. Such nonsense is typically promoted as a way to preserve the tool (and those who measure it and teach it) within financeland. VaR was used exactly the way it was always intended to be used.
Are we really expected to accept that those who 1) created the model in the first place, 2) have internally fine-tuned it for two decades, and 3) have spent years lobbying for its adoption by policy-makers, have somehow forgotten that VaR assumes normality, relies on a selective window of past data and trusts historical correlations? What Choudhry fails to appreciate is that those supposedly misunderstood characteristics are most likely the very reason why VaR was promoted by banks in the first place.
Pablo Triana, author of Lecturing Birds on Flying: Can Mathematical Theories Destroy the Financial Markets?