Sunday, February 12, 2012

Structured Finance Ratings: A Run Time Pathology in a Weakly-Typed Financial Sector

This entry explains one reason why the financial system collapsed. I realize the title is not catchy and if you have stumbled onto this page and are now wondering "WTF?" or "who the hell is this nutter?", I'm not sure how best to persuade you to read on. Except perhaps to say that the popular explanations have already been bandied about and if you, like myself, find them a little thin then perhaps this may be of interest.
          
Following some feedback from a reader I decided to split this post off from my response to Professor Jarrow's paper, a response you can read here should you wish. Despite some differences of opinion I was praising Professor Jarrow for prodding readers to at least consider what probability they are talking about. Here I extend those comments and explain how things were able to go south so quickly. I claim it was a failure in a vague protocol between rating agencies and investors, that this protocol was a rejection of theory in many respects, and that in computing terms it represented either very weak typing or at best, a nasty little implicit conversion that too many people forgot about.

Sorry if that isn't a good sound bite, and if you prefer a cover story about allegedly complex mathematics gone awry, Ian Stewart or countless others can provide it. He is wrong though, and his writing is a mere reverberation of a story peddled by a small cabal of quants who had nothing to do with the businesses involved and therefore assume it had something to do with fat tails or the Normal Copula. Dear Ian, I love your other writing. Please speak to someone who actually worked in the relevant businesses.

Now as I mentioned Jarrow was warning about conflation between real world and market implied probability, between "P" and "Q" probabilities, to borrow from standard terminology in mathematical finance. I wish to dig a little deeper however, into the use of probability and in particular a more subtle conflation that might plausibly explain the crisis. That sloppiness comprises conflation between P-probabilities and an entirely different category I label R-probabilities. As this is not standard terminology, here is a summary:

    P-Probabilities      Estimates of "true" probability attempting to exploit all relevant information
    Q-Probabilities      Market implied risk-neutral probability
    R-Probabilities      Estimates of probability deliberately eschewing market information

As anyone familiar with the textbooks will recognize, my use of Q and P reflects standard notation. However R-probabilities deserve some comment. An example is actuarial probability, almost exclusively the probabilities used in rating agency models - though actuaries seem confused on this point and sometimes use the P label. I'm reserving P-probability for the thing you find in theorems about investment and betting. There is really no confusion to be had. You can shove P-probability into your portfolio optimization problem. You cannot put R-probabilities in.

Let's get one banality out of the way. Real world P-probability will always be aspirational insofar as we will never know it with certainty. That is beside my point, however, because one can at least aspire to the appropriate thing for the problem at hand. What is at hand? It usually has something to do with portfolio allocation, investment decisions, counterparty risk or any kind of decision making under uncertainty. Professor Jarrow, having contributed an enormous amount to this field, can confirm that there are many tools available to us if we use our best, Honest Abe guess of P-probability. There is a dearth of theorems, in comparison, concerning R-probability.

Odd, in some respects. After all R-probabilities are ubiquitous. Ratings, to harp on that example, were very much at the center of fixed income. Vendors of R-probabilities comprised a much bigger market than vendors of any other kind. So perhaps we need a branch of finance theory devoted to the topic, if finance is to be the study of actual markets and actual people.  It would be a strange area of theory, admittedly. R-probability refers to a broad class of probability estimates that deliberately and explicitly exclude information known to be relevant to the estimation.

And it is why, incidentally, you cannot blame the textbooks for the crisis. One is free to go from R back through P, as it were, by re-introducing the missing variables but, absent this retrofitting, we shouldn't expect any theorems written explicitly about R-probability. There is not a great deal to say, perhaps, about the broad category of "probabilities which explicitly restrict the filtration for no reason other than convenience, conceptual simplicity or business considerations". So financial theory is silent on these things called ratings. To make use of that dusty old textbook you must estimate P-probability using all available information (of which R and Q-probabilities might be a subset).

I suppose you can think of ratings as similar to those dehydrated foods you take camping. They certainly contain substance, but if you don't add water you are going to struggle with digestion. The water you have to add back in is the information deliberately extracted, as it were: market prices. I am not introducing the category R-probability as a facile jab. On the contrary, it is plain for all to see, and explicitly written in the documentation, that rating agencies almost never take market prices or implied probabilities into consideration. That is a matter of policy. So ratings are R-probabilities. No complaint there and it is a useful service. As useful as dehydrated veal masala, which actually tastes good if you walk far enough before sitting down to eat.

I am merely pointing out that R-probabilities are not, in the absence of this rehydration, relevant to investing. Investors were investing and they generally appreciated that. Yet if the economy is thought of as one big computer program they were part of a pathology arising from lack of type checking, if you will. In a "strongly typed" financial system the rating probabilities would belong to a different type, say "R-probability". That would indicate "an aspiration to the best possible estimate of probability constrained by an explicit and conscious decision to exclude market prices". That kind of explicit care might have prevented things turning south, because it would have forced the rating agencies, or their customers, to think through the implications of multiplied R-probabilities.

As an aside my point skirts around some philosophical quagmires related to what P is; I merely suggest that if we were careful, that portfolio optimization tool in Excel or your head would be strongly typed. It would insist on P-type probabilities as arguments. So if you sent the output from a rating model into some kind of probabilistic tool there would be a compiler error. The compiler would have thrown, incidentally, a very long time ago when R-probabilities were first "misused" in this manner. But so as not to make a caricature of my own argument I accept that institutional investors got around this in other ways, by adding a grain of salt.
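For the incredulous, the strongly typed tool is easy enough to sketch. Here is a minimal Python illustration; the type names, the Kelly-style sizing function standing in for "some kind of probabilistic tool", and the 50/50 blend weight in the conversion are all invented for this sketch:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PProb:
    """Best honest estimate of real-world probability, all information used."""
    value: float

@dataclass(frozen=True)
class RProb:
    """Probability estimated while deliberately excluding market prices."""
    value: float

def kelly_fraction(p, odds):
    """A stand-in for any portfolio tool: Kelly bet sizing.
    It refuses anything that is not explicitly a P-probability."""
    if not isinstance(p, PProb):
        raise TypeError("portfolio tools take P-probabilities, not " + type(p).__name__)
    return (p.value * (odds + 1) - 1) / odds

def rehydrate(r, market_q, weight=0.5):
    """An explicit R-to-P conversion: blend the rating-style estimate with
    market information. The blend weight here is purely illustrative."""
    return PProb(weight * r.value + (1 - weight) * market_q)
```

Feeding `RProb(0.6)` straight into `kelly_fraction` raises a `TypeError`; calling `rehydrate` first is exactly the explicit, conscious R-to-P conversion argued for above.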

In the interpretation of ratings there was, if you like, an implicit conversion from R-probability to P-probability (nudging it a bit) or at the very least, a cautious interpretation of the optimization results (whether the "optimization" was explicit or intuitive). Yet I claim that the financial system collapsed, in part, because this implicit conversion from R-probability to P-probability performed by institutional investors became largely irrelevant once R-probabilities were multiplied together in CDOs. Had this conversion been more explicit, and more conscious, the protocol between rating agencies and institutional investors might not have broken down quite so spectacularly. 

In passing we ask why rating agencies and actuaries do not even aspire to objective probability, the thing that is useful for investment decisions. On the rating side, over a billion dollars in fee revenue for structured finance ratings in 2006 might have something to do with it, because the attractiveness of ratings to banks is roughly proportional to their departure from market prices. But focusing on the evils of the rating agencies rather misses the point: the communication between rating agencies and investors was vague, in the probabilistic category sense, and that just wasn't good enough when things got just a little bit complex. That is the P-R conflation story, I claim, and it ultimately rests with the investors.

To see why one needs to consider CDOs but before that, I suggest a brief diversion. The P-R lesson had already been learned elsewhere.

                      P,Q and R probabilities in racetrack betting markets

The racetrack provides an analogy to credit markets, especially structured credit markets. At the track there is historical form data and actuarial work to be done, but it is certainly not the end of the story. No savvy bookmaker will ignore the market for the upcoming race in arriving at their own P-probability estimates, whether those estimates exist in a computer or their gut. They will glance at the other bookies' boards and make adjustments to their prices (which is not to conflate their prices with their subjective assessment of probability, merely to note that the latter informs the former).

The lesson of the racetrack is not P and Q, but P and R. The explicit, conscious handling of P and R categories separates the very best from the good. The most successful, statistically careful professional punters are those who have made hundreds of millions, not hundreds of thousands, incidentally. The point is that they include the market prices in their (logistic, probit) regressions, even though throwing Q into the prediction of P might seem philosophically messy to some. Messy or not, the best gamblers aspire to P-probabilities and have learned from experience that simple rules using R-probabilities don't lead to near-optimal bet allocation.
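A sketch of what "throwing Q into the prediction of P" looks like: combine a form-based (R-style) estimate with the market price on the log-odds scale, the shape a punter's logistic regression takes. The weights below are made up for illustration; a real model fits them to historical race results:

```python
import math

def logit(p):
    """Map a probability to log-odds."""
    return math.log(p / (1.0 - p))

def expit(x):
    """Map log-odds back to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

def blended_win_prob(form_prob, market_prob, w_form=0.3, w_market=0.7):
    """Blend a form-based (R) estimate with the market-implied (Q) price on
    the log-odds scale. With weights summing to one, the blend lands between
    the two inputs; the weights themselves are purely illustrative."""
    return expit(w_form * logit(form_prob) + w_market * logit(market_prob))
```

With the form model saying 20% and the market saying 10%, the blend lands between the two, pulled toward the market, which is roughly what the successful punters' regressions do.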

To some the explicit creation of P-probability (using Q) seems unnatural or unnecessary, since one can surely prepare probabilities independent of the market and then bet accordingly. This counter-argument is analogous to one we hear in financial markets concerning the ad hoc use of ratings or fundamental information. It is suggested that the sign of the trade is the most important thing, for example, and that will not change whether you use the market as a regressor or not. And while a gambler who plugs R-probabilities directly into his portfolio optimization and overbets is a straw man, he need not go broke if sensible rules are applied (not betting too much). Some R-probability users will do rather well, in fact, and as they might never know what extra returns they are missing you'd be unlikely to win the P-R argument with them.

Indeed the fuss over R versus P is almost a fine point until you start multiplying R-probabilities together. At the racetrack nobody takes multiplied R-probabilities seriously. The morning line is a prominent example of an R-probability, for it is created in the same vacuum as ratings. Typically you cannot multiply the morning line odds together and expect someone to trade an exacta anywhere near the result. In the investment sector you have a better chance, tragically. The slightly awkward, tacit use of R-probabilities sounds reasonable. It is called "taking the ratings with a grain of salt" and it is presumed sufficient for the same reason that Andy Beyer or Don Scott's approximate rules for betting seemed good a few decades ago.
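To make the exacta arithmetic concrete: the standard Harville formula multiplies win probabilities (with a renormalization once the winner is removed) to price the exotic. Applied to morning-line figures it is precisely the multiplication of R-probabilities described above:

```python
def harville_exacta(p_first, p_second):
    """Harville's formula for P(horse A wins AND horse B runs second),
    treating win probabilities as if they simply renormalize after the
    winner is removed, i.e. multiplied probabilities."""
    return p_first * p_second / (1.0 - p_first)
```

With morning-line win probabilities of 0.25 and 0.20 the formula gives an exacta probability of 1/15, about 6.7 percent: a number computed in the same vacuum as its inputs, which is why no exacta market trades anywhere near it.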

Side note: I'm exercising a tiny amount of license when I say "multiply" in either structured finance or the racetrack analogy. This does not imply an ignorance of the distinction between sampling with and without replacement, a slavish adherence to independence, which would be just as laughable. It does not imply that reality is precisely represented by the Normal Copula model, or the Harville trifecta formula for that matter. 

                           The absurdity of multiplied R-probabilities 

As with the racetrack, things start to get more interesting when one considers that grain of salt in the structured finance sphere, the rough equivalent of the quinella, exacta and sometimes trifecta markets. How does one choose the grain size? You can defend any information, any loose methodology, and for that matter any soft nonsense on the grounds that it "can be taken with a grain of salt", but does this stand up when rating agency models take R-probabilities and straightforwardly multiply them together?

Might it not be the case that investors, looking at ratings as a black box, mistakenly apply the same grain of salt across the board, or something close to it, when in fact the only grain of salt one can sensibly add to structured finance ratings involves a complete reconstruction of the model? Only by diving into the guts can we clean out the R-probabilities, replace them with P-probabilities, and then bravely, cautiously multiply them together. It is error prone, difficult, and brings the rating agency model into question, but it is better than guaranteed disaster. If R-probabilities are implicitly conflated with P-probability in all the working parts, you can't just add salt afterwards. All the salt in the shaker might not help.

That, as a side note, is why your morally ambiguous, regretful author was able to present a capital model for a class of derivative product companies to the two largest rating agencies. It is why they adopted it as part of their standard methodology, more or less. It is one reason some of the most highly leveraged companies in the world (levered hundreds of times against equity invested) could possibly receive AAA ratings (though I add, they were not those directly playing roles in the collapse). The popular press has attributed this to poor correlation estimation. A conflation of P and R correlation is the more accurate assessment, given that implied correlation was always very high. In fact the rating agencies deliberately excluded implied correlation as well, as they went about multiplying R-probabilities. They still do, because that is thought to be "consistent". It is consistent, but in conjunction with the way ratings are interpreted by institutional investors, a recipe for catastrophe.

Conflation of R and P is a simple type error and a failed handshake between agency and investor - ultimately the investor's fault. It was successfully glossed over for many years but tripped a run-time debacle when the products got more complicated. It is not a question of degree, importantly, since structured securities can be structured into other ones ad infinitum. Indeed under the mild assumption that market implied probabilities and correlation exceed rating agency probabilities (for some class of assets) one can structure a company that is arbitrarily safe (as judged by rating agency models), arbitrarily leveraged, and at the same time arbitrarily profitable for the equity investors. Any discrepancy between P and R can be multiplied arbitrarily many times.
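The "arbitrarily many times" claim can be illustrated with a toy tower of resecuritizations. Each hypothetical level holds two pieces of the level below and defaults only if both default, with independence assumed on purpose (the same license as "multiply", correlation deliberately ignored):

```python
def tower_default_prob(p, layers):
    """Default probability at the top of a toy resecuritization tower.
    Each level defaults only if both of its two (assumed independent)
    components default, so the probability squares once per layer:
    after k layers the result is p ** (2 ** k)."""
    for _ in range(layers):
        p = p * p
    return p
```

With a rated (R) per-asset default probability of 1% against a market-implied 2%, three layers give 0.01**8 versus 0.02**8: a factor-of-two gap per asset becomes a factor of 2**8 = 256 at the top, and every extra layer squares the discrepancy again.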

It is left as an exercise for the incredulous reader to prove the following lemma on probability conflation. For any company satisfying a capital adequacy test and/or counterparty risk test characterized by multiplied R-probabilities (for example, most rating agency methodologies) there exists another hypothetical structured finance company that also passes the very same criteria but exhibits twice the leverage and would ordinarily be assessed by the market as twice as risky.


        Suggestion #1: Vendors should specify the category of probability they produce  

The important point to take from Jarrow's paper is the inadequacy of the word probability. Of course that is well acknowledged in the financial literature but nonetheless, I can't help but wonder if we are short one probability label, given that typically only P and Q are bandied about. Since finance is the study of markets (including opaque, dysfunctional markets where participants eschew the basic theory) it might be useful to introduce models for what people actually do, and models for the quasi-probabilistic messages that get sent back and forth - whether or not they are sensible. Therefore I make no apology for introducing the terminology "R-probability", and appeal to the pigeon-hole principle. If there are three types of probability in general use and only two labels in play, then one of them has gone without!

I believe regulators too should acknowledge the reality of the marketplace, as depressing as this all is. At minimum, regulations should acknowledge that there are many suppliers of probabilities (other than rating agencies) whose estimation procedures deliberately, explicitly, or as a matter of convenience leave out important information known to be highly relevant. I propose a mandatory labeling scheme for any probabilities, perhaps worthy of repetition:

    P-Probabilities      Estimates of probability attempting to exploit all relevant information
    Q-Probabilities      Market implied risk-neutral probability
    R-Probabilities      Estimates of probability deliberately eschewing market information

     Suggestion #2: Multiplied or recycled R-probabilities should contain an explicit warning

I further propose that any probability derived from the multiplication of R-probabilities contain a mandatory, stern warning. There are perfectly legitimate reasons to multiply Q and P probabilities. There is absolutely nothing in the theory of finance that recommends recycled R-probabilities in any investment context. Yet rating agencies took their very own R-probabilities, which they knew full well deliberately ignored the most relevant market information, and fed them right back into models for CDOs and CDOs of CDOs. This should not be allowed to happen again. But we should also be on the lookout for the same class of error in the provision of other R-probabilities.

Saturday, February 11, 2012

Another Extremely Fragile Definition On the Way

A new podcast gives us some insight, and quite possibly the entire plot, of Nassim Taleb's new book "Anti-fragility" due out later this year. When I say plot, I mean paragraph, quite possibly. And when I say paragraph I mean sentence. When I say sentence, I mean word. The word is "anti-fragility".

What struck me about this fascinating interview covering diverse topics (such as the effects on bone mass of interstellar travel - use it or lose it buddy) was the enormous effort Taleb had gone to in designing this new word. It is essential work, for as noted in The Black Swan and Fooled by Randomness, there is sometimes a dearth of words in the English language. For instance, there is no single word for anti-intellectual intellectual, someone who adopts populist tactics to attack anything remotely rigorous.

Yet sometimes there are many words for carefully delineated concepts related to missing data, bayesian estimation, subtle biases, sampling curiosities, statistical paradoxes and so forth. These are better replaced with a single word, like "silent evidence", that most people can at least remember, even if it isn't clear to anyone precisely what we are supposed to do with it.

Taleb's spectacular success in coining words and phrases rests on a massive pool of unconsidering humans who are suckers for pathetically thin observations like "use it or lose it". This massive reservoir was the topic of Maureen Tkacik's essay on Taleb's original promoter, Malcolm Gladwell. And it is not drying up. That might have something to do with a species known as the journalist, perhaps the only sector of society with no training in anything whatsoever, with nothing better to do than humor wafer-thin piffle like the following:

"Then we get to the idea ... You take a car. You drive it against the wall at a tenth of a mile per hour, a hundred times ...er... or a thousand times. Have you done any harm, no. A tenth of a mile per hour won't harm you. The car will have some damage but it won't harm you. Now, drive the same car once at a hundred miles per hour...." 


This kind of thing gets a sympathetic giggle from interviewer Russ Roberts. Or perhaps he was just trying to keep the conversation going. You can almost hear the wheels spinning in the guy's head, however, as he tries desperately to squeeze something profound out of the interview. But Taleb presses on just in case we didn't get the point, or appreciate his "universal notion of anti-fragility".

"This coffee cup I have on my desk has suffered a lot of shocks... but if I let it fall to the floor it will break ... you see?" 

Well yes Taleb, I think we do see, but I suggest those airline engineers continue to check for tiny fractures in turbines and other accumulated damage nonetheless.  You have to admire the boldness though, for not even Gladwell would dare repeat the same concept quite so many times. He would at least wrap it in a fascinating anecdote, or choose the three out of five hundred cases in a medical study that went against the trend (the exceptions that prove the rule). In Gladwell's defense he never claims a long gestation period whereas Taleb makes a point of his labor. Taleb carries metaphors on his back for decades for us - a weary selfless journey. It is kind of Taleb to finally dump these on us, though it might be the case that trading equity options for all those years has caused brain damage. Perhaps it causes one to fixate on unbelievably inane triviality like the fact that a payoff for a call option is non-linear. Use it or lose it.

But what of Taleb's previous attempts to create universal concepts? The Extremistan/Mediocristan distinction was laid before us in The Black Swan and was interpreted as meta-guidance for applied mathematicians. Actually it was meta-nonsense for people who had never applied mathematics, never intended to apply mathematics, but wanted to think deep thoughts about other people who applied mathematics ... but let's set that aside and examine it on its merits. Taleb presented a perfectly clear instruction manual. The world has linear and non-linear phenomena. But mathematicians have to be careful, he argued, not to use the wrong kind of mathematics. You wouldn't want to use linear mathematics to model a non-linear problem best treated with fractals, power laws and so forth now, would you? And I suppose you wouldn't want to use fractals to model a linear system either - though Taleb is highly skeptical that any exist, so that is a decidedly less important topic.

Take web page popularity. It follows a Zipf distribution, landing it squarely in the groovy world of power laws. So it would be a massive mistake to try to apply linear mathematics to it, right? To be more precise, I'd say it would be a massive mistake, a violation of a universal principle no less, to apply Linear Algebra to it, much less a singular value decomposition, which is about as close to the epicenter of "linear mathematics" (to humor that ridiculous phrase) as one could surely get. Strange then, that Larry Page saw fit to do so when inventing PageRank, the algorithm distinguishing Google from Yahoo that powered the greatest commercial success in recent history.
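For what it is worth, the offending "linear mathematics" fits in a few lines. Here is a minimal power-iteration PageRank, a drastic simplification of the real thing (which runs at an entirely different scale), sketched over a toy link graph:

```python
def pagerank(links, damping=0.85, iters=100):
    """Tiny power-iteration PageRank over a dict: page -> list of outlinks.
    Repeated matrix-vector products (plain linear algebra), happily applied
    to a popularity distribution that follows a power law."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iters):
        new = {p: (1 - damping) / n for p in pages}
        for p, outs in links.items():
            if outs:
                share = damping * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
            else:  # dangling page: spread its rank over all pages
                for q in pages:
                    new[q] += damping * rank[p] / n
        rank = new
    return rank
```

On a symmetric three-page cycle the ranks come out equal and sum to one, exactly as the linear-algebra view predicts.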

What are we to make of the non-linear/linear meta-advice, and for that matter the rest of Taleb's ranting? Use it or lose it, one presumes, but I can't quite get my head around the former possibility. Maybe that's because I've been banging my head against The Black Swan too gently, for too long. It's non-linear. One big whack and everything will be clear.