Friday, October 19, 2012

Why diversification doesn't work


You're standing in your canoe, on a beautiful Canadian lake, taking photos of the wildlife, occasionally fishing. Why standing, not sitting? Well, you've read about those disturbing studies showing that sitting is really bad for your long-term health; how every hour of television viewing, for example, takes about 20 minutes off your life expectancy, and how the same is probably true for sitting at the computer, sitting reading a book, whatever. So you're standing, and that's OK, because you're balanced and stable, with your weight distributed uniformly.

Of course, anyone with even a few minutes of experience in a canoe knows this isn't as safe as it seems. What really matters isn't how well-balanced you are when the canoe rests peacefully, but what happens when a few waves come along, kicked up by rednecks passing in a souped-up bass trawler (I lived in rural Virginia for several years, so I know the experience). As you shift your stance to stay upright, and the boat shifts, that balanced distribution vanishes and you can easily tip. Stability demands balance in the midst of the boat's dynamics, not only in the static peace beforehand.

As it turns out, this same lesson applies to investment portfolios -- a new paper in Nature Scientific Reports shows just how important this insight may be.

Famously, of course, Harry Markowitz introduced the idea of diversification into investing back in the 1950s (or at least he formalized an idea that was probably around long before). Using information on the mathematical correlations between the returns of the different stocks in a portfolio, you can choose portfolio weights that minimize the overall volatility of the portfolio for any expected return. This is maybe the most basic of all results in mathematical finance.
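To make that concrete, here's a minimal sketch in Python of the classic recipe, using made-up return data in place of real price histories: estimate a covariance matrix from past returns, then solve for the fully invested weights with the smallest variance. The numbers and tickers are invented for illustration.

```python
import numpy as np

# Hypothetical daily returns for four stocks (rows = days, columns = stocks).
# A real application would use historical price data instead.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(1000, 4))

# Estimate the covariance matrix of the returns.
cov = np.cov(returns, rowvar=False)

# Global minimum-variance portfolio: weights w = C^{-1} 1 / (1' C^{-1} 1),
# the fully invested mix with the smallest variance, taking the estimated
# covariances as if they were a stable property of the stocks.
ones = np.ones(cov.shape[0])
w = np.linalg.solve(cov, ones)
w /= w.sum()

print("weights:", np.round(w, 3))
print("portfolio volatility:", np.sqrt(w @ cov @ w))
```

Everything hangs on treating that covariance matrix as a stable description of how the stocks move together.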

But it doesn't work; it suffers from the same problem as the balanced man in the canoe. This is clear from any number of studies over the past decade, which show that the correlations between stocks change when markets move up or down. If the market suddenly plunges, you would hope that your well-diversified portfolio, invested as it is in stocks that tend to move unlike one another, would be OK. But when markets move significantly down (or up), it turns out, the correlations are no longer what they were. Trending markets induce strong correlations among stocks that aren't there beforehand, and aren't obvious from long-term averages. So the risks to a portfolio are actually much larger than the simple diversification analysis suggests -- just as the risk of a canoe tipping is much greater than it seems to a man standing balanced on a peaceful lake.

The new paper by physicist Tobias Preis and colleagues makes this point with probably the largest data set used so far, looking at the stocks in the DJIA over about 70 years. It's a fairly simple analysis (modulo some nitty-gritty details). Roughly, they look at the correlations between different stocks in the DJIA and ask how these correlations depend on the recent average return of the DJIA. Are the correlations stable? Or do they rise as the market begins to move? The figure below, which plots the average correlation coefficient against the return, shows that the answer is clearly the latter: a trending market, in either direction, induces significant correlations among the DJIA stocks.

[Figure: average correlation coefficient among DJIA stocks versus the recent DJIA return.]
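For readers who like to see the mechanics, here is a rough sketch of that kind of calculation in Python. It uses randomly generated returns in place of the actual DJIA data, and the window length and variable names are mine, not the authors', so treat it as an illustration of the idea rather than a reproduction of the paper.

```python
import numpy as np

# Hypothetical daily returns for 30 stocks over roughly ten years of trading
# days; the actual study uses the real DJIA constituents over ~70 years.
rng = np.random.default_rng(1)
returns = rng.normal(0.0, 0.01, size=(2500, 30))

window = 20  # in the 10-day to two-month range discussed below

def mean_pairwise_corr(block):
    """Average off-diagonal correlation among the stocks in one window."""
    c = np.corrcoef(block, rowvar=False)
    n = c.shape[0]
    return (c.sum() - n) / (n * (n - 1))

index_returns = []
mean_corrs = []
for start in range(0, returns.shape[0] - window, window):
    block = returns[start:start + window]
    # "Market" return over the window: cumulative return of the equal-weighted
    # average of the stocks (a stand-in for the index return).
    index_returns.append(block.mean(axis=1).sum())
    mean_corrs.append(mean_pairwise_corr(block))

# With real data, a scatter plot of mean_corrs against index_returns shows the
# average correlation climbing whenever the market trends strongly either way.
print(np.corrcoef(np.abs(index_returns), mean_corrs)[0, 1])
```

With the random numbers above there is, of course, nothing to find; the point is only to show how little machinery the question requires.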
One of the interesting things here is that this link holds on many different timescales, from 10 days up through two months. The worrying thing for an investor, of course, is that these correlations make the risks of large losses significantly larger than they would appear to be on the basis of long-term correlations alone. As the authors conclude:
... a “diversification breakdown” tends to occur when stable correlations are most needed for portfolio protection. Our findings, which are qualitatively consistent with earlier findings [42, 44] but quantitatively different, could be used to anticipate changes in mean correlation of portfolios when financial markets are suffering significant losses. This would enable a more accurate assessment of the risk of losses.
 As any canoeist knows, dynamics really matter.

Wednesday, October 17, 2012

The future of economics?

Ali Wyne at the blog "big think" asked eight notable young (under 40) economists about the future of their profession and key topics for future research. Their responses make for interesting but not really surprising reading. The research frontier, in their eyes, faces its key challenges in 1) understanding the nature of economic development and growth, and how the world's poor can be brought out of poverty, 2) learning how our growing understanding of human behavioral psychology can be used to replace the inadequate framework of rationality in economics, 3) gaining a much better perspective on macroeconomics, including bubble and herding phenomena, 4) building a new theoretical perspective to handle the vast influence of new information technology on human economic decisions, and 5) learning to deal with massive data.

All in all, these seem like worthwhile goals and I'm encouraged that at least two of the economists make semi-explicit their view that economics dearly needs to explore new kinds of models going beyond the equilibrium framework.

I'm also struck, however, by something a little more depressing: the rather narrow, conservative scope of their comments. Perhaps this is to be expected from young economists hoping to find stable jobs for the coming decades, but not one of them even mentions the need for a deeper understanding of the nature and long-term consequences of economic growth. Such growth is -- still -- simply assumed to be an absolute good, to be pursued always and as rapidly as possible. Given the alarming picture painted by studies such as this one, published in Nature a few months ago, which reviewed how human economic growth has significantly altered virtually all global biological and geophysical processes, you might think that young economists would be scrambling to develop ideas about human society in a post-growth world, or at least one in which growth has to be strongly constrained and managed.

That seems to be a step too far. The idea of growth forever, unconstrained by any physical laws or biological realities, still seems to be a core belief even of the next generation of economists.

Wednesday, October 10, 2012

Stability through simplicity

I gave a talk last week at Oppenheimer Funds in NYC. I met some great people there, really creative and open-minded. I spoke on the general theme of this blog -- natural instabilities in finance and economics and the ideas we need to understand them.

One question asked afterwards was "what can we do as far as regulations to cope with these kinds of instabilities?" As I recall, my answer was pretty lame. I tried to suggest (vaguely) that the answer probably lies not with highly complex regulations, but with simpler ones, but I didn't say much more. I've been thinking about that point since, and thought it might be worth writing a few things down.

Essentially, the first lesson I think we should draw, once we acknowledge the existence of pervasive instabilities in finance, is the need to deal with persistent uncertainty. We will never understand the terrain so well that we can reduce the future to a set of known possibilities to which we can assign specific probabilities (which standard economics assumes we can). We can and should work hard to explore the space of what might happen, and so gain some forewarning of dangers, but we will still encounter surprises and we should expect to do so. So our approach to regulation ought to be centered on that premise -- that we face a world of uncertainty.

I should have read Andrew Haldane's wonderful essay The Dog and the Frisbee before giving my talk, but I only got around to that this morning. He makes some hugely important points on this very topic. The essay is one extended argument for why financial regulation is now too complex, and why our best hope at achieving financial stability in the future probably lies in a vast simplification of the regulatory apparatus and system of rules.

He begins from the observation that, in many settings where decision making involves weighing up many conflicting factors, simple rules often out-perform more complex ones:
Among physicians diagnosing heart attacks, simple decision trees beat a complex model. Among detectives locating serial criminals, simple locational rules trump complex psychological profiling. Among investors picking stocks, simple passive strategies outperform complex active ones. And among shopkeepers understanding spending patterns, repeat purchase data out-predict complex models.

The general message here is that the more complex the environment, the greater the perils of complex control. The optimal response to a complex environment is often not a fully state-contingent rule. Rather, it is to simplify and streamline (Gigerenzer (2010)). In complex environments, decision rules based on one, or a few, good reasons can trump sophisticated alternatives. Less may be more.

In complex environments, tallying strategies have been found to be superior to risk-weighted alternatives. Take avalanche prediction. Avalanches are difficult to predict, as they are drawn from a fat-tailed (Power Law) distribution. Yet simple tallying of a small number of avalanche indicators has been found capable of predicting over 90% of historical accidents. It has also been found to be superior to more complex decision methods (McCammon and Hägeli (2007)).
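To make "tallying" concrete, here is a toy sketch in Python of the contrast Haldane has in mind: count how many of a handful of equally weighted indicators are flashing, versus computing a finely risk-weighted score. The indicators, weights and thresholds below are invented purely for illustration and have nothing to do with the actual avalanche studies he cites.

```python
# Toy illustration of "tallying" versus a risk-weighted score.
# The indicators, weights, and thresholds are invented for illustration only.
indicators = {
    "recent_heavy_snowfall": 1,   # 1 = indicator present, 0 = absent
    "steep_slope": 1,
    "wind_loading": 0,
    "recent_rapid_warming": 1,
}

# Tallying: give every indicator equal weight and simply count how many fire.
tally = sum(indicators.values())
tally_warning = tally >= 3

# A "risk-weighted" alternative needs finely estimated weights -- which, in a
# noisy, fat-tailed environment, are typically fit to far too little data.
weights = {
    "recent_heavy_snowfall": 0.50,
    "steep_slope": 0.30,
    "wind_loading": 0.15,
    "recent_rapid_warming": 0.05,
}
score = sum(weights[name] * present for name, present in indicators.items())
weighted_warning = score >= 0.60

print("tallying rule says warn:", tally_warning)
print("weighted rule says warn:", weighted_warning)
```

The point is not which rule fires in this made-up case, but that the tallying rule contains almost nothing to mis-estimate.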
Haldane goes on at length to consider this in the context of financial regulation, where the legal framework has exploded in complexity over the past few decades. The pursuit of ever-more complex models to assess risks, whether used by regulators or by banks and financial institutions independently, has created an overwhelming fog of complexity that can blind us all to obvious risks:
During the 1990s, the bluntness of the risk judgements embodied in Basel I came increasingly to be questioned – and arbitraged. Basel I was perceived as lacking risk-sensitivity, at least by comparison with the new wave of credit and market risk models emerging at the time. Change came in 1996 with the Market Risk Amendment. This introduced the concept of the regulatory trading book and, for the first time, allowed banks to use internal models to calculate regulatory capital against market risk. ...With hindsight, a regulatory rubicon had been crossed. This was not so much the use of risk models as the blurring of the distinction between commercial and regulatory risk judgements. The acceptance of banks’ own models meant the baton had been passed. The regulatory backstop had been lifted, replaced by a complex, commercial judgement. The Basel regime became, if not self-regulating, then self-calibrating.

The ink was barely dry on Basel II when the financial crisis struck. This exposed gaping holes in the agreement. In the period since, the response has been to fill the largest of these gaps, with large upwards revisions to the calibration of the Basel framework. Agreement on this revised framework, Basel III, was reached in 2010. In line with historical trends the documents making up Basel III added up to 616 pages, almost double Basel II. ... The length of the Basel rulebook, if anything, understates its complexity. The move to internal models, and from broad asset classes to individual loan exposures, has resulted in a ballooning in the number of estimated risk weights. For a large, complex bank, this has meant a rise in the number of calculations required from single figures a generation ago to several million today (Haldane (2011)).

Taking all of this together, the parameter space of a large bank’s banking and trading books could easily run to several millions. These parameters are typically estimated from limited past samples. For example, a typical credit risk model might comprise 20-30 years of sample data – barely a crisis cycle. A market risk model might comprise less than five years of data – far less than a crisis cycle.

Viewed over an historical sweep, this pattern is even more striking. Contrast the legislative responses in the US to the two largest financial crises of the past century – the Great Depression and the Great Recession. The single most important legislative response to the Great Depression was the Glass-Steagall Act of 1933. Indeed, this may have been the single most influential piece of financial legislation of the 20th century. Yet it ran to a mere 37 pages. The legislative response to this time’s crisis, culminating in the Dodd-Frank Act of 2010, could not have been more different. On its own, the Act runs to 848 pages – more than 20 Glass-Steagalls. That is just the starting point. For implementation, Dodd-Frank requires an additional almost 400 pieces of detailed rule-making by a variety of US regulatory agencies.
Haldane ends the essay with some exploration of how we might reverse this trend and manage to simplify regulations. I won't go into detail other than to second his suggestion that reducing the complexity of the financial system itself ought to be a principal target of such simplified regulations. But even before that, changing the mindset of financial economics is the first task. That mindset remains fixated on the endless pursuit of optimal strategies, arrived at by long calculations over risk-weighted alternatives, when in reality we rarely know the risks, or even the alternatives, with much accuracy.

An important consequence of thinking in this world of known risks and optimal solutions is that we end up accepting some dubious arguments that we can achieve the best of all possible worlds with the right pricing mechanism. In other words, starting from bad assumptions, we fall into the trap of believing the economists' standard models, when they really have little to do with the real world. If we break free of this illusion, and face up to living in a world with real uncertainty, we may return to an era in which old-style regulations and prohibitions against certain activities make perfect sense, and dreams of perfect pricing mechanisms stand revealed as the fantasies they are:
Over the past 30 years or so, the regulatory direction of travel has been towards pricing risk in the financial system, rather than prohibiting or restricting it. In the language of Weitzman, regulators have pursued price over quantity-based regulation (Weitzman (1974)). That makes sense when optimising in a risky world.

It may make less sense when optimising in an uncertain world. Quantity-based restrictions may be more robust to mis-calibration. Simple, quantity-based restrictions are the equivalent of a regulatory commandment: “Thou shalt not”. These are likely to be less fallible than: “Thou shalt provided the internal model is correct”. That is one reason why Glass-Steagall lasted for 60 years longer than Basel II. Quantity-based regulatory solutions have gained currency during the course of the crisis. In the US, the Volcker rule is a quantity-based regulatory commandment: “Thou shalt not engage in proprietary trading”. In the UK, the Independent (“Vickers”) Commission on Banking has also proposed structural, quantity-based reforms: “Thou shalt not co-mingle retail deposit-taking and investment banking”.

Yet even these notionally simple, structural proposals run some risk of backdoor complexity. For example, the consultation document accompanying Volcker already runs to 298 pages. Were these proposals to become mired in detail, they risk sinking, like the Tower of Basel, into the swamp. This is not because these proposals go too far but because they may not go far enough. These reform efforts have too many commas, semi-colons and sub-clauses. They would benefit from a few more full stops.

Friday, October 5, 2012

The wisdom of crowds

My latest Bloomberg column is now out here. I wanted to give readers some further detail on the experiments I wrote about in the column, experiments designed to test how social influence affects the Wisdom of Crowds phenomenon. I actually wrote about the experiments in this post last year, and that post gives quite a lot of detail.

I think it is the most illuminating set of experiments I have seen on this phenomenon. Most important in the current environment, it seems pretty clear that one can't look to the wisdom of crowds as a mechanism to enforce any kind of "wisdom" on the part of the financial markets. (This doesn't mean they're always wrong either, of course.)
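For anyone who wants to play with the basic mechanism, here is a toy simulation in Python -- not the experiment itself, and with made-up numbers -- of how social influence can collapse the diversity of opinions without making the crowd any more accurate.

```python
import numpy as np

rng = np.random.default_rng(42)
true_value = 100.0

# 100 agents make independent, noisy (and somewhat biased) initial guesses.
estimates = true_value * np.exp(rng.normal(0.1, 0.5, size=100))

print("initial crowd-average error:", abs(estimates.mean() - true_value))
print("initial spread of opinions :", estimates.std())

# Social influence: in each round, every agent moves part of the way toward
# the current group average. The spread of opinions collapses, creating a
# false sense of consensus, while the crowd's average error does not improve
# at all (this update preserves the group average exactly).
for _ in range(5):
    estimates += 0.3 * (estimates.mean() - estimates)

print("final crowd-average error  :", abs(estimates.mean() - true_value))
print("final spread of opinions   :", estimates.std())
```

A caricature, obviously, but it captures the worrying pattern the experiments found: more agreement, more confidence, no more wisdom.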