Wednesday, March 20, 2013

Third (and final) excerpt...

The third (and, you'll all be pleased to hear, final!) excerpt of my book was published in Bloomberg today. It's titled "Toward a National Weather Forecaster for Finance" and explores (briefly) what might be possible in economics and finance if we created national (and international) centers devoted to data-intensive risk analysis and forecasting of socioeconomic "weather."

Before anyone thinks I'm crazy, let me make very clear that I'm using the term "forecasting" in its general sense, i.e., making useful predictions of potential risks as they emerge in specific areas, rather than predictions such as "the stock market will collapse at noon on Thursday." I think we can all agree that the latter kind of prediction is probably impossible (although Didier Sornette wouldn't agree), and it would certainly be self-defeating were it made widely known. Weather forecasters make much less specific predictions all the time, for example, of places and times where conditions will be ripe for powerful thunderstorms and tornadoes. These forecasts of potential risks are still valuable, and I see no reason why similar kinds of predictions shouldn't be possible in finance and economics. Of course, people already make such predictions about financial events all the time. I'm merely suggesting that, with effort and the devotion of considerable resources to collecting and sharing data and building computational models, we could develop centers acting for the public good to make much better predictions on a more scientific basis.

As a couple of early examples, I'll point to the recent work on complex networks in finance, which I've touched on here and here. These are computationally intensive studies, demanding excellent data, which make it possible to identify systemically important financial institutions (and the links between them) more accurately than we could in the past. Much work remains to make this practically useful.
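To give a flavour of the kind of computation involved (this is my own toy illustration, not taken from the studies above): one simple way to rank institutions in a network of interbank exposures is eigenvector centrality, computed by power iteration, in which an institution scores highly when heavily-weighted institutions are exposed to it. The institutions and exposure figures below are entirely made up.

```python
# Toy sketch: rank "systemically important" nodes in a small interbank
# exposure network by eigenvector centrality via power iteration.
# All names and exposure values are invented for illustration.

exposures = {
    # lender -> {borrower: size of exposure}
    "A": {"B": 5.0, "C": 2.0},
    "B": {"C": 4.0, "D": 1.0},
    "C": {"A": 3.0, "D": 6.0},
    "D": {"A": 1.0},
}

def eigenvector_centrality(graph, iterations=100):
    """Power iteration: repeatedly pass each node's score along its
    outgoing exposure-weighted edges, then renormalise."""
    nodes = sorted(graph)
    score = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        new = {n: 0.0 for n in nodes}
        for lender, borrowers in graph.items():
            for borrower, weight in borrowers.items():
                # a borrower matters more if important lenders are exposed to it
                new[borrower] += weight * score[lender]
        norm = sum(new.values()) or 1.0
        score = {n: v / norm for n, v in new.items()}
    return score

# rank institutions from most to least central
ranking = sorted(eigenvector_centrality(exposures).items(),
                 key=lambda kv: -kv[1])
print(ranking)
```

Real systemic-risk measures (DebtRank and its relatives) are considerably more sophisticated, propagating balance-sheet losses rather than abstract scores, but the computational shape — iterate a map over a large, data-hungry network until it settles — is the same.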

Another example is this recent and really impressive agent-based model of the US housing market, which has been used as a "post mortem" experimental tool to ask all kinds of "what if?" questions about the housing bubble and its causes, helping to tease out a better understanding of controversial questions. As the authors note, macroeconomists really didn't see the housing market as a likely source of large-scale macroeconomic trouble. This model has made it possible to ask and explore questions that cannot be explored with conventional economic models:
 Not only were the Macroeconomists looking at the wrong markets, they might have been looking at the wrong variables. John Geanakoplos (2003, 2010a, 2010b) has argued that leverage and collateral, not interest rates, drove the economy in the crisis of 2007-2009, pushing housing prices and mortgage securities prices up in the bubble of 2000-2006, then precipitating the crash of 2007. Geanakoplos has also argued that the best way out of the crisis is to write down principal on housing loans that are underwater (see Geanakoplos-Koniak (2008, 2009) and Geanakoplos (2010b)), on the grounds that the loans will not be repaid anyway, and that taking into account foreclosure costs, lenders could get as much or almost as much money back by forgiving part of the loans, especially if stopping foreclosures were to lead to a rebound in housing prices.

There is, however, no shortage of alternative hypotheses and views. Was the bubble caused by low interest rates, irrational exuberance, low lending standards, too much refinancing, people not imagining something, or too much leverage? Leverage is the main variable that went up and down along with housing prices. But how can one rule out the other explanations, or quantify which is more important? What effect would principal forgiveness have on housing prices? How much would that increase (or decrease) losses for investors? How does one quantify the answer to that question?

Conventional economic analysis attempts to answer these kinds of questions by building equilibrium models with a representative agent, or a very small number of representative agents. Regressions are run on aggregate data, like average interest rates or average leverage. The results so far seem mixed. Edward Glaeser, Joshua Gottlieb, and Joseph Gyourko (2010) argue that leverage did not play an important role in the run-up of housing prices from 2000-2006. John Duca, John Muellbauer, and Anthony Murphy (2011), on the other hand, argue that it did. Andrew Haughwout et al (2011) argue that leverage played a pivotal role.

In our view a definitive answer can only be given by an agent-based model, that is, a model in which we try to simulate the behavior of literally every household in the economy. The household sector consists of hundreds of millions of individuals, with tremendous heterogeneity, and a small number of transactions per month. Conventional models cannot accurately calibrate heterogeneity and the role played by the tail of the distribution. ... only after we know what the wealth and income is of each household, and how they make their housing decisions, can we be confident in answering questions like: How many people could afford one house who previously could afford none? Just how many people bought extra houses because they could leverage more easily? How many people spent more because interest rates became lower? Given transactions costs, what expectations could fuel such a demand? Once we answer questions like these, we can resolve the true cause of the housing boom and bust, and what would happen to housing prices if principal were forgiven.

... the agent-based approach brings a new kind of discipline because it uses so much more data. Aside from passing a basic plausibility test (which is crucial in any model), the agent-based approach allows for many more variables to be fit, like vacancy rates, time on market, number of renters versus owners, ownership rates by age, race, wealth, and income, as well as the average housing prices used in standard models. Most importantly, perhaps, one must be able to check that basically the same behavioral parameters work across dozens of different cities. And then at the end, one can do counterfactual reasoning: what would have happened had the Fed kept interest rates high, what would happen with this behavioral rule instead of that.

The real proof is in the doing. Agent-based models have succeeded before in simulating traffic and herding in the flight patterns of geese. But the most convincing evidence is that Wall Street has used agent-based models for over two decades to forecast prepayment rates for tens of millions of individual mortgages.
This is precisely the kind of work I think can be geared up and extended far beyond the housing market, augmented with real-time data, and used to produce valuable forecasts. It seems to me, actually, to be the obvious approach.
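The flavour of such a counterfactual experiment can be sketched in a few lines. The toy model below is my own illustration, vastly simpler than the Geanakoplos et al. model: heterogeneous households bid for a fixed stock of houses, each bid capped by both the household's willingness to pay and what its savings can support under a maximum loan-to-value (LTV) ratio, and we compare average prices under loose and tight leverage. All parameters are invented.

```python
import random

def simulate_avg_price(max_ltv, n_households=1000, n_houses=200, seed=0):
    """Average sale price when the n_houses highest feasible bids win."""
    rng = random.Random(seed)
    bids = []
    for _ in range(n_households):
        savings = rng.lognormvariate(3.0, 0.8)   # heterogeneous down payments
        desire = rng.lognormvariate(5.0, 0.5)    # willingness to pay
        affordable = savings / (1.0 - max_ltv)   # max price if loan = max_ltv * price
        bids.append(min(desire, affordable))     # bid whatever is feasible
    winning = sorted(bids, reverse=True)[:n_houses]
    return sum(winning) / n_houses

loose = simulate_avg_price(max_ltv=0.95)  # bubble-era style easy credit
tight = simulate_avg_price(max_ltv=0.60)  # counterfactual: tight leverage
print(loose, tight)
```

Even in this crude sketch, raising the LTV cap relaxes the credit constraint only for those households whose savings actually bind, so the price effect depends on the whole joint distribution of wealth and willingness to pay, not on an average. That is precisely the sense in which the authors argue such questions demand micro-level data on every household rather than a representative agent.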