Friday, January 25, 2013

Is finance different than medicine?

Readers of this blog know that I generally write about the risks of finance -- risks I think are systematically underestimated by standard economic theories. But finance does of course bring many benefits, mostly when used in its proper role as a means to insure against risks. I just happened on this terrific paper (now nearly a year old, not breaking news) by two legal scholars from the University of Chicago who propose the idea of an FDA for finance -- a body that would be charged with approving new financial products before they could enter the market. It's a sensible proposal, especially given the potential risks associated with financial products, which are certainly comparable to the risks of new pharmaceutical products.

But beyond this specific idea, the paper gives a great discussion of the two fundamentally opposing uses of derivatives and other new financial products -- as insurance, to spread risks in a socially beneficial way, or as a mechanism for gambling, to increase the risks faced by certain parties in a way that brings no social benefit. The following two paragraphs give the flavor of the argument (which the full paper explores in great detail):
Financial products are socially beneficial when they help people insure risks, but when these same products are used for gambling they can instead be socially detrimental. The difference between insurance and gambling is that insurance enables people to reduce the risk they face, whereas gambling increases it. A person who purchases financial products in order to insure herself essentially pays someone else to take a risk on her behalf. The counterparty is better able to absorb the risk, typically because she has a more diversified investment portfolio or owns assets whose value is inversely correlated with the risk taken on. By contrast, when a person gambles, that person exposes herself to increased net risk without offsetting a risk faced by a counterparty: she merely gambles in hopes of gaining at the expense of her counterparty or her counterparty’s regulator. As we discuss below, gambling may have some ancillary benefits in improving the information in market prices. However, it is overwhelmingly a negative-sum activity, which, in the aggregate, harms the people who engage in it, and which can also produce negative third-party effects by increasing systemic risk in the economy.

This basic point has long been recognized, but has had little influence on modern discussions of financial regulation. Before the 2008 financial crisis, the academic and political consensus was that financial markets should be deregulated. This consensus probably rested on pragmatic rather than theoretical considerations: the U.S. economy had grown enormously from 1980 to 2007, and this growth had taken place at the same time as, and seemed to be connected with, the booming financial sector, which was characterized by highly innovative financial practices. With the 2008 financial crisis, this consensus came to an end, and since then there has been a significant retrenchment, epitomized by the passage of the Dodd-Frank Act, which authorizes regulatory agencies to impose significant new regulations on the financial industry.
Of course, putting such a thing into practice would be difficult. But it's also difficult to clean up the various messes that occur when financial products lead to unintended consequences. Difficult isn't an argument against doing something worthwhile. The paper I mentioned makes considerable effort to explore how the insurance benefits and the gambling costs associated with a new instrument might be estimated. And maybe a direct analogy to the FDA isn't the right thing at all. Can a panel of experts estimate the costs/benefits accurately? Maybe not. But there might be sensible ways to bring certain new products under scrutiny once they have been put into wide use. And products needn't be banned either -- merely regulated and their use reviewed to avoid the rapid growth of large systemic risks. Of course, steps were taken in the late 1990s (by Brooksley Born, then head of the Commodity Futures Trading Commission, most notably) to regulate derivatives markets much more closely. Those steps were quashed by the finance industry through the action of Larry Summers, Alan Greenspan and others. Had there been an independent, FDA-like body for finance, things might have turned out less disastrously.

Forbes (predictably) came out strongly against this idea when it was published (over a year ago), with the argument that it goes against the tried and true Common Law notion that "anything not specifically banned is allowed." But that's just the point. We specifically don't allow Bayer or GlaxoSmithKline to create and market new drugs without extensive testing that gives some confidence in their safety. Not only that: once approved, those drugs can only be sold in packages carrying extensive warnings about their risks. Why should finance be different?

Monday, January 14, 2013

Peter Howitt... beyond equilibrium

Mostly on this blog I've been arguing that current economic theory suffers from an obsession with equilibrium concepts, especially in macroeconomics, or in models of financial markets. Most of the physical and natural world is out of equilibrium, driven by forces that are out of balance. Things never happen -- in the oceans or atmosphere, in ecosystems, in the Earth's crust, in the human body -- because of equilibrium balance. Change always comes because of disequilibrium imbalance. If you want to understand the dynamics of almost anything, you need to think outside of equilibrium.

This is actually an obvious point, and in science outside of economics people generally don't even talk about disequilibrium, but about dynamics; it's the same thing. Equilibrium means no dynamics, rest, stasis. It can't teach you about how things change. But we do care very much about how things change in finance and economics, and so we need models exploring economic systems out of equilibrium. Which means models beyond current economics.

The need for disequilibrium economics was actually well accepted back in the 1930s and 40s by economists such as Irving Fisher in the US and Nicholas Kaldor in England. Then in the 1950s, with the Arrow-Debreu results, and later with the whole Rational Expectations hysteria, it seems to have been forgotten. It's curious, I think, that really good economists, clear-thinking people who are trying to address real-world issues, often have no choice but to try to understand episodes of dramatic change (bank runs, bubbles, liquidity crises, leverage cycles) by torturing equilibrium models into some form that reflects these things. The famous Diamond-Dybvig model of bank runs is a good example. The model is one with multiple equilibria, one of which is a bank run. This is indeed insightful and useful, essentially showing that some sharp break can occur in the behaviour of the system, and also offering some suggestions about how runs might be avoided with certain kinds of banking contracts. But isn't it at least a little strange to think of a bank run, a dynamic event driven by amplification and contagion of behaviour, as an "equilibrium"?

I'm not alone in thinking that it is a little strange. Indeed, by way of this excellent collection of papers maintained by Leigh Tesfatsion, I recently came across a terrific short essay by economist Peter Howitt which makes arguments along similar lines, but in the area of macroeconomics. The whole essay is worth reading, much of it describing in practical terms how, in his view, central banks have in recent decades moved well ahead of macroeconomic theorists in learning how to manage economies, often using tactics with no formal backing in macroeconomic theory. Theory is struggling to keep up, which is probably not surprising. Toward the end, Howitt makes more explicit arguments about the need for disequilibrium in macroeconomics:
The most important task of monetary policy is surely to help avert the worst outcomes of macroeconomic instability – prolonged depression, financial panics and high inflations. And it is here that central banks are most in need of help from modern macroeconomic theory. Central bankers need to understand what are the limits to stability of a modern market economy, under what circumstances is the economy likely to spin out of control without active intervention on the part of the central bank, and what kinds of policies are most useful for restoring macroeconomic stability when financial markets are in disarray.

But it is also here that modern macroeconomic theory has the least to offer. To understand how and when a system might spin out of control we would need first to understand the mechanisms that normally keep it under control. Through what processes does a large complex market economy usually manage to coordinate the activities of millions of independent transactors, none of whom has more than a glimmering of how the overall system works, to such a degree that all but 5% or 6% of them find gainful employment, even though this typically requires that the services each transactor performs be compatible with the plans of thousands of others, and even though the system is constantly being disrupted by new technologies and new social arrangements? These are the sorts of questions that one needs to address to offer useful advice to policy makers dealing with systemic instability, because you cannot know what has gone wrong with a system if you do not know how it is supposed to work when things are going well.

Modern macroeconomic theory has turned its back on these questions by embracing the hypothesis of rational expectations. It must be emphasized that rational expectations is not a property of individuals; it is a property of the system as a whole. A rational expectations equilibrium is a fixed point in which the outcomes that people are predicting coincide (in a distributional sense) with the outcomes that are being generated by the system when they are making these predictions. Even blind faith in individual rationality does not guarantee that the system as a whole will find this fixed point, and such faith certainly does not help us to understand what happens when the point is not found. We need to understand something about the systemic mechanisms that help to direct the economy towards a coordinated state and that under normal circumstances help to keep it in the neighborhood of such a state.

Of course the macroeconomic learning literature of Sargent (1999), Evans and Honkapohja (2001) and others goes a long way towards understanding disequilibrium dynamics. But understanding how the system works goes well beyond this. For in order to achieve the kind of coordinated state that general equilibrium analysis presumes, someone has to find the right prices for the myriad of goods and services in the economy, and somehow buyers and sellers have to be matched in all these markets. More generally someone has to create, maintain and operate markets, holding buffer stocks of goods and money to accommodate other transactors’ wishes when supply and demand are not in balance, providing credit to deficit units with good investment prospects, especially those who are maintaining the markets that others depend on for their daily existence, and performing all the other tasks that are needed in order for the machinery of a modern economic system to function.

Needless to say, the functioning of markets is not the subject of modern macroeconomics, which instead focuses on the interaction between a small number of aggregate variables under the assumption that all markets clear somehow, that matching buyers and sellers is never a problem, that markets never disappear because of the failure of the firms that were maintaining them, and (until the recent reaction to the financial crisis) that intertemporal budget constraints are enforced costlessly. By focusing on equilibrium allocations, whether under rational or some other form of expectations, DSGE models ignore the possibility that the economy can somehow spin out of control. In particular, they ignore the unstable dynamics of leverage and deleverage that have devastated so many economies in recent years.

In short, as several commentators have recognized, modern macroeconomics involves a new "neoclassical synthesis," based on what Clower and I (1998) once called the "classical stability hypothesis." It is a faith-based system in which a mysterious unspecified and unquestioned mechanism guides the economy without fail to an equilibrium at all points in time no matter what happens. Is it any wonder that such a system is incapable of guiding policy when the actual mechanisms of the economy cease to function properly as credit markets did in 2007 and 2008?
Right on, in my opinion, although I think Peter is perhaps being rather too kind to the macroeconomic learning work, which it seems to me takes a rather narrow and overly restricted perspective on learning, as I've mentioned before. At least it is a small step in the right direction. We need bigger steps, and more people taking them. And perhaps a radical and abrupt defunding of traditional macroeconomic research (theory, not data, of course, and certainly not history) right across the board. The response of most economists to critiques of this kind is to say, well, ok, we can tweak our rational expectations equilibrium models to include some of this stuff. But this isn't nearly enough.

Peter's essay finishes with an argument as to why computational agent based models offer a much more flexible way to explore economic coordination mechanisms in macroeconomics on a far more extensive basis. I cannot see how this approach won't be a huge part of the future of macroeconomics, once the brainwashing of rational expectations and equilibrium finally loses its effect. 
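
To give a flavour of what such an exercise looks like, here is a minimal toy sketch -- my own construction, not Howitt's model or anyone else's -- of a cobweb-style market in which firms form adaptive price expectations. Whether the market coordinates on its rational-expectations fixed point depends on how strongly supply responds to expected prices; coordination becomes a result to be explained rather than an assumption.

```python
import numpy as np

# Toy illustration only (my own construction, not Howitt's model):
# a cobweb market in which firms hold heterogeneous adaptive price
# expectations. Parameters are illustrative, chosen for clarity.

rng = np.random.default_rng(1)

def cobweb(a=10.0, b=1.0, c=1.2, n_firms=100, periods=60):
    lam = rng.uniform(0.2, 1.0, n_firms)        # firm-specific adaptation speeds
    expected = rng.uniform(0.0, a, n_firms)     # initial price beliefs
    prices = []
    for _ in range(periods):
        supply = (c / n_firms) * expected.sum() # each firm supplies c * E[p] / n
        price = a - b * supply                  # market-clearing demand curve
        expected += lam * (price - expected)    # adaptive expectations update
        prices.append(price)
    return np.array(prices)

for c in (0.8, 1.2, 2.5):                       # stronger supply response...
    path = cobweb(c=c)
    print(f"c={c}: last prices {np.round(path[-3:], 2)}, "
          f"fixed point {10.0 / (1 + c):.2f}")
    # ...can turn smooth convergence into persistent, growing oscillation
```

Nothing here is meant to be realistic; the point is only that even a trivially simple disequilibrium model makes the stability of the coordinated state a question to be answered, not a premise.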

Steve Keen on "bad weathermen"

I've made quite a lot of the analogy between the dynamics of an economy or financial market and the weather. It's one of the basic themes of this blog, and the focus of my forthcoming book FORECAST. I don't pretend to be the first one to think of this at all. I know that Mervyn King, the head of the Bank of England, has talked about this analogy in the past, as have many others.

But the idea now seems to be gathering more popularity. Steve Keen even writes here specifically about the task of economic forecasting, and the entirely different approaches used in weather science, where forecasting is now quite successful, and in economics, where it is not:
Conventional economic modelling tools can extrapolate forward existing trends fairly well – if those trends continue. But they are as hopeless at forecasting a changing economic world as weather forecasts would be, if weather forecasters assumed that, because yesterday’s temperature was 29 degrees Celsius and today’s was 30, tomorrow’s will be 31 – and in a year it will be 395 degrees.

Of course, weather forecasters don’t do that. When the Bureau of Meteorology forecasts that the maximum temperature in Sydney on January 16 to January 19 will be respectively 29, 30, 35 and 25 degrees, it is reporting the results of a family of computer models that generate a forecast of future weather patterns that is, by and large, accurate over the time horizon the models attempt to predict – which is about a week.
Weather forecasts have also improved dramatically over the last 40 years – so much so that even an enormous event like Hurricane Sandy was predicted accurately almost a week in advance, which gave people plenty of time to prepare for the devastation when it arrived:

Almost five days prior to landfall, the National Hurricane Center pegged the prediction for Hurricane Sandy, correctly placing southern New Jersey near the centre of its track forecast. This long lead time was critical for preparation efforts from the Mid-Atlantic to the Northeast and no doubt saved lives.

Hurricane forecasting has come a long way in the last few decades. In 1970, the average error in track forecasts three days into the future was 518 miles. That error shrunk to 345 miles in 1990. From 2007-2011, it dropped to 138 miles. Yet for Sandy, it was a remarkably low 71 miles, according to preliminary numbers from the National Hurricane Center.

Within 48 hours, the forecast came into even sharper focus, with a forecast error of just 48 miles, compared to an average error of 96 miles over the last five years.

Meteorological model predictions are regularly attenuated by experienced meteorologists, who nudge numbers that experience tells them are probably wrong. But they start with a model of the weather that is fundamentally accurate, because it is founded on the proposition that the weather is unstable.

Conventional economic models, on the other hand, assume that the economy is stable, and will return to an 'equilibrium growth path' after it has been dislodged from it by some 'exogenous shock'. So most so-called predictions are instead just assumptions that the economy will converge back to its long-term growth average very rapidly (if your economist is a Freshwater type) or somewhat slowly (if he’s a Saltwater croc).

Weather forecasters used to be as bad as this, because they too used statistical models that assumed the weather was in or near equilibrium, and their forecasts were basically linear extrapolations of current trends.
How did weather forecasters get better? By recognizing, of course, the inherent role of positive feedbacks and instabilities in the atmosphere, and by developing methods to explore and follow the growth of such instabilities mathematically. That meant modelling in detail the actual fine-scale workings of the atmosphere and using computers to follow the interactions of those details. The same will almost certainly be true in economics. Forecasting will require both lots of data and also much more detailed models of the interactions among people, firms and financial institutions of all kinds, taking the real structure of networks into account, using real data to build models of behaviour and so on. All this means giving up tidy analytical solutions, of course, and even computer models that insist the economy must exist in a nice tidy equilibrium. Science begins by taking reality seriously.

Unlimited growth... why the idea is just silly

I wrote a while back on some of the inconsistencies in the belief -- still professed by many economists -- that economic growth can continue indefinitely into the future, free of any ultimate limits. It is my impression that economists generally find such talk irritating, perhaps because the discussion of limits naturally involves considering factors from physics or biology, thereby taking the discussion outside of the realm of pure economics. Peruse the comments on posts such as this one, and you will certainly think that economists tend to look down rather scornfully on anyone so intellectually limited as to take the idea of limits to growth seriously.

But this may be misleading, and I certainly don't want to give the impression that all economists react this way. Indeed, I suspect that there is a rapid change going on in the economics profession and that many economists are busy reassessing earlier views and taking the idea of limits very seriously. Two things to mention here. First, in email correspondence, economist William Brock of the University of Wisconsin suggested to me that many economists do indeed take biologically founded limits to growth seriously and try hard to argue down those who would dismiss them so casually. He mentioned, in particular, the Beijer Institute in Sweden to which he is linked:
"All of us Beijer economists and many (most?) others around the world recognize and worry about the serious limits on economic growth imposed by a finite Earth.  I've noticed in your blog that you comment a lot about economists still preaching unlimited growth, etc.  Beijer economists would sharply disagree and they go after such naive economists. As you can see many of the Beijer Fellows are top dogs in the profession, e.g. Kenneth Arrow not only won the Nobel at an extremely early age but is widely recognized as probably the best economist next to Keynes of this century. Partha Dasgupta is also one of the world's very best economists and has done lots of work with Maler and Arrow on sustainable growth where "growth" is defined sensibly.  GNP is mostly used by politicians and its inadequacies as a sensible measure of "growth" was exposed long long ago, e.g. by Nordhaus and Jim Tobin in their proposed measure promoted in the 70's." 
This is good to hear, and I hope this attitude spreads far and wide. I hope also that young economists in particular will increasingly think for themselves, read widely outside of economics, and get past the naive ideas of unlimited growth as expressed in Paul Romer's endogenous growth theory of 1990 and variations thereof.

On that note, I stumbled this afternoon over this wonderful paper on economic growth by Andrew Sutter, a writer and academic I happened to meet at a conference in Brussels last year. I believe Sutter's background is economics, but he writes beautifully and moves with elegance and ease from physics to classical philosophy to politics and art, and is as fun to read as he is convincing. He also gives a very useful potted history of theories of growth, from which I learned quite a lot. Andrew makes two important general points.

First, he argues that much of what makes people (and many economists) cling to the idea of unlimited growth has nothing whatsoever to do with science or real insight into how human societies change:
Growth-based policies were born of the need for post-war reconstruction, the ideological struggles of the Cold War, and the need to employ the "baby boom" generation born in the first decade of peace. What's keeping them alive a half-century later, particularly in countries that are already wealthy?

Several factors. The Darwinian struggle played out in GDP "league tables" fits in with the drama of markets and competition that took over our public discourse just as anti-Communism was becoming moot. There is also the conventional wisdom that growth is necessary to maintain pension systems as the Baby Boomers retire (though, just as with other types of government revenue, maybe contributions and collections are more directly relevant). And news about growth, at both the corporate and national levels, supplies shots of optimism and fear to rank-and-file investors in financial markets, allowing the professionals to rake in much more money than is at stake in real goods and services.

More ancient and enduring than all of these is the dream of man's domination of nature. On a practical level, this dream serves the rationales mentioned above: countries compete to be the "most innovative;" "productivity improvements" are sought to allow aging populations to maintain GDP growth; and every technology trend gives birth to new start-ups upon which to gamble. On a mythological level, the dream is both the deus (or at least Titan) ex machina to clean up the messes caused by prior years of growth and technological interventions (e.g., Bush 2002), and an expression of our deepest nature (Phelps 2009).
The second important point Andrew makes is that a deep cause of our trouble in thinking clearly about growth is the distortion of the concept of value in economics. We've become accustomed to thinking, for example, that innovation is by its nature good; that, at least, is how it is counted in GDP. Medicines for children? Automatic weapons for dictators? Doesn't matter: both count positively towards GDP. We have more or less ceased talking seriously about the differences between good and bad innovations. But of course some innovations are better than others, regardless of how crude measures such as GDP may tally up their value. Andrew in his paper suggests how we might start growing a different perspective, derived in part from ideas going back to Aristotle. Absolutely brilliant. Have a read.

Sunday, January 6, 2013

Inequality, chemistry and crime

My latest column in Bloomberg (published Sunday night in the US, I think) looks a little at the link between socioeconomic inequality and crime. That there is such a link has been established rather convincingly by statistical analyses across many nations spanning decades. This was all laid out in detail in the 2009 book The Spirit Level by Richard Wilkinson and Kate Pickett. See also this short TED talk by Wilkinson, which covers the main points very clearly.

But I also referred (rather vaguely) in my column to two other lines of research that deserve some discussion. One is an idea in criminology known as "routine activity theory" which has increasingly become a branch of applied computational social science. In effect, it views crime as a kind of natural social chemistry taking place between potential offenders and targets, and has a good track record in accounting for empirical patterns of crimes of all kinds by considering simple factors such as patterns of human movement on streets, of street lighting, of building architecture and so on. It more or less gives up on thinking about the deep psychology of crime and its motivation. In general, this perspective asserts, you tend to get more crime where you have more targets coming together more frequently with more potential offenders, just as you get more chemical reactions whenever potential reactant molecules come into contact more frequently. I've written about this a little before in New Scientist (a copy of that article is on the web here).

The empirical success of routine activity theory is sometimes taken as evidence that factors such as poverty and social inequality don't matter in crime, as the theory doesn't explicitly consider such factors. This is a serious misunderstanding. After all, the theory focuses on the factors that bring together potential or motivated offenders with potential targets, and doesn't attempt to explore the factors that might give a person the motivation to commit crime in the first place. Here the analogy to chemistry is again constructive, I think. In chemistry, reactants coming together isn't enough. They generally need sufficient energy (or a catalyst) to help them overcome an energy barrier (caused by repulsive forces between molecules) before they can react. It's entirely plausible to think of prevailing social norms as a kind of energy barrier to the commission of crimes, as these forces tend to persuade people to avoid crime. In general, we should expect high levels of trust and strong social norms to suppress and discourage crime, much as lowering the temperature of a chemical solution makes reactions go more slowly. (Energy barriers must be overcome by collisions between molecules; at lower temperatures those collisions are less energetic, so fewer of them make it over the barrier.)
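
For anyone who wants the chemistry side of the analogy made explicit, the textbook Arrhenius relation -- standard chemistry, nothing from the criminology literature -- captures the idea:

```latex
% Arrhenius relation: the reaction rate k falls off exponentially with
% the ratio of the activation energy E_a (the barrier) to the thermal
% energy k_B T available in molecular collisions.
\[ k = A \, e^{-E_a / (k_B T)} \]
```

In the loose analogy of the post, the strength of social norms plays the role of the barrier E_a, and whatever erodes those norms plays the role of temperature: raise the barrier or lower the temperature and the rate of "reactions" -- crimes -- should fall off sharply, not just linearly.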

Analogies aside, the point is that routine activity theory actually fits together quite well with the existence of larger social factors that influence overall probabilities of crime. The factors considered in the theory can determine detailed patterns of crime, even as the larger global factors influence overall rates. The work considered by Wilkinson and Pickett strongly suggests that social inequality is one of the most important global factors encouraging crime (and other measures of social dysfunction).

The second body of work I wanted to add a little detail on looks at what creates high levels of inequality in the first place. On the drivers of inequality, of course, there is a huge literature in social science and economics, and I don't pretend to be an expert on it. However, there is an interesting perspective motivated by physics that I think should be much more widely known. This paper from several years ago by Jean-Philippe Bouchaud and Marc Mezard analyses a very simple model of an economy and shows that a high level of economic inequality is a more or less generic and unavoidable outcome. It is something to be expected on mathematical grounds alone. The basic idea is to model an economy as a system in which wealth flows around among people by two fundamentally different processes. First, it flows between individuals when they make exchanges or trades, through contracts, employment, sales and so on. Second, wealth also accrues to individuals (or departs from them) on account of investments in instruments yielding uncertain returns. Importantly, the second process contributes an element to individual wealth that involves a random multiplicative factor -- random because investments are uncertain, and multiplicative because, quite sensibly, the wealthier invest more than the poorer, and typically in direct proportion to their wealth. The poor partake very weakly in this multiplicative channel to wealth growth, while the wealthy participate progressively more strongly.

Bouchaud and Mezard showed that the inevitable outcome in such a basic system is that a large fraction of the total wealth in the economy ends up being held by a small fraction of the population. This is the case even if every individual is considered to have the same inherent money-making skills. (The authors, of course, do not believe this to be the case; but making this assumption makes it possible to explore how natural economic dynamics can drive large wealth disparities even in the absence of any differences in skill.) It's the multiplicative nature of the returns on investments that makes it happen. This pattern in the distribution of wealth holds in every nation and was known long ago, being first described by Italian economist Vilfredo Pareto. This paper gave, to my knowledge, the first really fundamental explanation of why this pattern holds everywhere. Since then a number of works have taken this approach much further.

However, the universal form of this distribution does not preclude the possibility of some societies being more unequal than others. You can have a society in which the top 5% hold 90% of the wealth, or in which that same top 5% holds 99.9% of the wealth. Such variations are driven by the relative strength of multiplicative returns on investment versus the flows in an economy that might act to reduce inequality. Obviously, taxation is one mechanism that redistributes wealth from richer to poorer, and one should expect, in a broad-brush way, lower taxes to go along with higher levels of wealth inequality.
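
For the curious, here is a minimal simulation sketch of the kind of dynamics Bouchaud and Mezard analyse -- my own discretisation of their mean-field model with illustrative parameters, not the authors' code. The parameter J stands in for the redistributive flows just discussed; weaken it and the top slice of the population ends up holding a larger share of total wealth:

```python
import numpy as np

# Minimal sketch of Bouchaud-Mezard-style wealth dynamics (my own
# discretisation, illustrative parameters -- not the authors' code).
# Each agent gets random multiplicative investment returns, while an
# exchange term of strength J pulls wealth back toward the average.

rng = np.random.default_rng(0)

def simulate(n_agents=1000, steps=20000, dt=0.01, sigma=0.3, J=0.05):
    w = np.ones(n_agents)                       # start everyone equal
    for _ in range(steps):
        noise = sigma * np.sqrt(dt) * rng.standard_normal(n_agents)
        w += w * noise + J * dt * (w.mean() - w)
        w = np.clip(w, 1e-12, None)             # keep wealth positive
    return w

def top_share(w, fraction=0.05):
    top = int(len(w) * fraction)
    return np.sort(w)[::-1][:top].sum() / w.sum()

for J in (0.2, 0.05, 0.01):                     # weaker redistribution...
    w = simulate(J=J)
    print(f"J={J}: top 5% hold {top_share(w):.0%} of all wealth")
    # ...yields a heavier-tailed, more unequal wealth distribution
```

The specific numbers depend on the parameters, but the qualitative result does not: a heavy-tailed, Pareto-like concentration of wealth emerges from the multiplicative returns alone, even with identical agents, and the degree of concentration tracks the strength of the redistributive flows.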

I strongly recommend this paper -- it's short and you can skip over some of the mathematical detail -- as it shows how some very important global economic realities may have quite simple underlying causes. The bigger problem, of course, is then learning how to craft real-world policies that keep inequality within bounds, avoiding the many kinds of social dysfunction it clearly seems to give rise to.