From an article in The Economist, a graph showing the performance of the "Lobbying Index" versus the S&P 500 over the past decade. The Lobbying Index is an average over the 50 firms within the S&P 500 that lobby most intensively. It's pretty clear that lobbying -- a rather less than honourable profession in my book -- pays off.
Friday, September 30, 2011
The Fetish of Rationality
I'm currently reading Jonathan Aldred's book The Skeptical Economist. It's a brilliant exploration of how economic theory is run through at every level with hidden value judgments which often go a long way to determining its character. For example, the theory generally assumes that more choice always has to be better. This follows more or less automatically from the view that people are rational "utility maximizers" (a phrase that should really be banned for ugliness alone). After all, more available choices can only give a "consumer" the ability to meet their desires more effectively, and can never have negative consequences. Add extra choices and the consumer can always simply ignore them.
As Aldred points out, however, this just isn't how people work. One of the problems is that more choice means more thinking and struggling to decide what to do. As a result, adding more options often has the effect of inhibiting people from choosing anything. In one study he cites, doctors were presented with the case history of a man suffering from osteoarthritis and asked if they would A. refer him to a specialist or B. prescribe a new experimental medicine. Other doctors were presented with the same choice, except they could choose between two experimental medicines. Doctors in the second group made twice as many referrals to a specialist, apparently shying away from the psychological burden of having to deal with the extra choice between medicines.
I'm sure everyone can think of similar examples from their own lives in which too much choice becomes annihilating. Several years ago my wife and I were traveling in Nevada and stopped in for an ice cream at a place offering 200+ flavours and a variety of extra toppings, etc. There were an astronomical number of potential combinations. After thinking for ten minutes, and letting lots of people pass by us in the line, I finally just ordered a mint chocolate chip cone -- to end the suffering, as it were. My wife decided it was all too overwhelming and in the end didn't want anything! If there had only been vanilla and chocolate we'd have ordered in 5 seconds and been very happy with the result.
In discussing this problem of choice, Aldred refers to a beautiful paper I read a few years ago by economist John Conlisk entitled Why Bounded Rationality? The paper gives many reasons why economic theory would be greatly improved if it modeled individuals as having finite rather than infinite mental capacities. But one of the things he considers is a paradoxical contradiction at the very heart of the notion of rational behaviour. A rational person facing any problem will work out the optimal way to solve that problem. However, there are costs associated with deliberation and calculation. The optimal solution to the ice cream choice problem isn't to stand in the shop for 6 years while calculating how to maximize expected utility over all the possible choices. Faced with a difficult problem, therefore, a rational person first has to solve another problem -- for how long should I deliberate before it becomes advantageous to just take a guess?
This is a preliminary problem -- call it P1 -- which has to be solved before the real deliberation over the choice can begin. But, Conlisk pointed out, P1 is itself a difficult problem, and a rational individual doesn't want to waste lots of resources thinking about that one for too long either. Hence, before working on P1, the rational person first has to decide what is the optimal amount of time to spend on solving P1. This is another problem, P2, which is also hard. Of course, it never ends. Take rationality to its logical conclusion and it ends up destroying itself -- it's simply an inconsistent idea.
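Just for fun, here's the regress written out as (hopelessly naive) code -- my own toy illustration, not anything from Conlisk's paper. The agent can never even begin deliberating, because each meta-problem demands that the next one be solved first:

```python
def optimal_deliberation_time(level=1):
    # To decide how long to spend on problem P<level>, a perfectly rational agent
    # must first solve P<level + 1>: how long to spend deciding that.
    return optimal_deliberation_time(level + 1)   # the regress never bottoms out

try:
    optimal_deliberation_time()
except RecursionError:
    print("P1, P2, P3, ... -- the regress never terminates; a real agent has to cut it off somewhere.")
```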
Anyone who is not an economist might be quite amazed by Conlisk's paper. It's a great read, but it will dawn on the reader that in a sane world it simply wouldn't be necessary. It's arguing for the obvious and is only required because economic theory has made such a fetish of rationality. The assumption of rationality may in some cases have made it possible to prove theorems by turning the consideration of human behaviour into a mathematical problem. But it has tied the hands of economic theorists in a thousand ways.
Thursday, September 29, 2011
Economists on the way to being a "religious cult"...
A short, seven-minute video produced by the Institute for New Economic Thinking offers the views (very briefly) of a number of economists on modeling and its purposes (h/t to Moneyscience). Two things of note:
1. Along the way, Brad DeLong mentions Milton Friedman's famous claim that a model is better the more unrealistic its assumptions, and that the sole measure of a theory is the accuracy of its predictions. I'd really like to know what DeLong himself thinks about this, but his views aren't there in the interview. He mentions Friedman's idea without defending or attacking it -- just a reference, I guess, to one of the most influential ideas on this topic, which shows how much Friedman's view is still in play.
In my view (some not very well organized thoughts here) the core problem with Friedman's argument is that a theory with perfect predictions and perfectly unrealistic assumptions simply doesn't teach you anything -- you're left just as mystified by how the model can possibly work (give the right predictions) as you were with the original phenomena you set out to explain. It's like a miracle. Such a model might of course be valuable as a starting point, and in stimulating the invention of further models with more realistic assumptions which then -- if they give the same predictions -- may indeed teach you something about how certain kinds of interactions, behaviours, etc (in the assumptions) can lead to observed consequences.
But then -- it's the models with the more realistic assumptions that are superior. (It's worth remembering that Friedman liked to say provocative things even if he didn't quite believe them.)
2. An interesting quote from economist James Galbraith, with which I couldn't agree more:
Modeling is not the end-all and the be-all of economics... The notion that the qualities of an economist should be defined by the modeling style that they adopt [is a disaster]. There is a group of people who say that if you're not doing dynamic stochastic general equilibrium modeling then you're not really a modern economist... that's a preposterous position which is going to lead to the reduction of economics to the equivalent of a small religious cult working on issues of interest to no one else in the world.
Basel III -- Taking away Jamie Dimon's Toys
Most people have by now heard the ridiculous claim by Jamie Dimon, CEO of JPMorgan Chase, that the new Basel III rules are "anti-American." The New York Times has an interesting set of contributions by various people on whether Dimon's claim has any merit. You'll all be shocked to learn that Steve Bartlett, president of the Financial Services Roundtable -- we can assume he's not biased, right? -- thinks that Dimon is largely correct. Personally, I tend to agree more with the views of Russell Roberts of George Mason University:
Who really writes the latest financial regulations, where the devil is in the details? Who has a bigger incentive to pay attention to their content — financial insiders such as the executives of large financial institutions or you and me, the outsiders? Why would you ever think that the regulations that emerge would be designed to promote international stability and growth rather than the naked self-interest of the financial community?
I do not believe it’s a coincidence that Basel I and II blew up in a way that enriched insiders at the expense of outsiders. To expect Basel III to yield a better result (now that we've supposedly learned so much) is to ignore the way the financial game is played. Until public policy stops subsidizing leverage (bailouts going back to 1984 make it easier for large financial institutions to fund each other’s activities using debt), it is just a matter of time before any financial system is gamed by the insiders.
Jamie Dimon is a crony capitalist. Don’t confuse that with the real kind. If he says Basel III is bad for America, you can bet that he means "bad for JPMorgan Chase." Either way, he’ll have a slightly larger say in the ultimate outcome than the wisest economist or outsider looking in.
Sadly, this is the truth, even though many people still cling to the hope that there are good people out there somewhere looking after the welfare of the overall system. Ultimately, I think, the cause of financial crises isn't to be found in the science of finance or of economics, but of politics. There is no way to prevent them as long as powerful individuals can game the system to their own advantage, privatizing the gains, as they say, and socializing the losses.
But not everyone is convinced of this by a long shot. Just after the crisis I wrote a feature article for Nature looking at new thinking about modeling economic systems and financial markets in particular. Researching the article, I came across lots of good new thinking about ways to model markets and go beyond the standard framework of economics. That all went into the article. I also suggested to my editor that we had to at least raise at the end of the article the nexus of influence between Wall St and the political system, and I proposed in particular to write a little about the famous paper by Romer and Akerlof, Looting: The Economic Underworld of Bankruptcy for Profit, which gives a simple and convincing argument in essence about how corporate managers (not only in finance) can engineer vast personal profits by running companies into the ground. Oddly, my editor in effect said "No, we can't include that because it's not science."
But that doesn't mean it's not important.
But back to Basel III. I had an article exploring this in some detail in Physics World in August. It is not available online. As a demonstration of my still lagging Blogger skills, I've captured images of the 4 pages and put them below. Not the best picture quality, I'm afraid.
Wednesday, September 28, 2011
Financial Times numeracy check
This article from the Financial Times is unfortunately quite typical of the financial press (and yes, not only the financial press). Just ponder the plausibility of what is reported in the following paragraph, commenting on a proposal by José Manuel Barroso, European Commission president, to put a tax on financial transactions:
Mr Barroso did not release details of his plan, except to say it could raise some €55bn a year. However, a study carried out by the Commission has found that the tax could also dent long-term economic growth in the region by between 0.53 per cent and 1.76 per cent of gross domestic product.
The article doesn't mention who did the study, or give a link to it. But there's worse. If reported accurately, it seems the European Commission's economists -- or whoever they had do the study -- actually think that the "3" in 0.53 and the "6" in 1.76 mean something. That's quite impressive accuracy when talking about economic growth. In a time of great uncertainty.
I would bet a great deal that a more accurate statement of the confidence of their results would be, say, between 0 and 2 percent crudely, or maybe even -1 and 3. But that would be admitting that no one has any certainty about what's coming next, and that's not part of the usual practice.
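To make the significant-figures point concrete, here's a completely made-up toy calculation -- none of these numbers or the simple multiplicative structure come from the Commission's study, which I haven't seen -- showing how quickly plausible input uncertainty swamps the second decimal place:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
elasticity = rng.normal(1.0, 0.3, n)   # hypothetical sensitivity of growth to trading costs
cost_rise = rng.normal(1.0, 0.3, n)    # hypothetical rise in trading costs caused by the tax
baseline_impact = 1.0                  # hypothetical central estimate, in % of GDP

# Propagate the input uncertainty through the (made-up) impact formula
impact = baseline_impact * elasticity * cost_rise
lo, hi = np.percentile(impact, [5, 95])
print(f"90% of simulated outcomes fall between {lo:.1f}% and {hi:.1f}% of GDP")
```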
High-frequency trading: taming the chaos
I have an opinion piece that will be published later today or in the next few days in Bloomberg Views. It is really just my attempt to bring attention to some very good points made in a recent speech by Andrew Haldane of the Bank of England. For anyone interested in further details, you can read the original speech (I highly recommend this) or two brief discussions I've given here looking at the first third and the second third of the speech.
I may not get around to writing a detailed analysis of the third part, which focuses on possible regulatory measures to lessen the chance of catastrophic Flash Crash-type events in the future. But the ideas raised in this part are fairly standard -- a speed limit on trading, rules which would force market makers to participate even in volatile times (as was formerly required of them), and so on. I think the most interesting part by far is the analysis of the recent increase in the frequency of abrupt market jumps (fat-tail events) over very short times, and of the risks facing market makers and how they respond as volatility increases. I think this should all help to frame the debate over HFT -- which seems extremely volatile itself -- in somewhat more scientific terms.
I also suggest that anyone who finds any of this interesting should go to the Bank of England website and read some of Andrew Haldane's other speeches. Every one is brilliant and highly illuminating.
Monday, September 26, 2011
Overconfidence is adaptive?
A fascinating paper in Nature from last week suggests that overconfidence may actually be an adaptive trait. This is interesting as it strikes at one of the most pervasive assumptions in all of economics -- the idea of human rationality, and the conviction that being rational must always be more adaptive than being irrational. Quite possibly not:
Humans show many psychological biases, but one of the most consistent, powerful and widespread is overconfidence. Most people show a bias towards exaggerated personal qualities and capabilities, an illusion of control over events, and invulnerability to risk (three phenomena collectively known as ‘positive illusions’)2, 3, 4, 14. Overconfidence amounts to an ‘error’ of judgement or decision-making, because it leads to overestimating one’s capabilities and/or underestimating an opponent, the difficulty of a task, or possible risks. It is therefore no surprise that overconfidence has been blamed throughout history for high-profile disasters such as the First World War, the Vietnam war, the war in Iraq, the 2008 financial crisis and the ill-preparedness for environmental phenomena such as Hurricane Katrina and climate change9, 12, 13, 15, 16.
The paper studies this question in a simple analytical model of an evolutionary environment in which individuals compete for resources. If the resources are sufficiently valuable, the authors find, overconfidence can indeed be adaptive:
If overconfidence is both a widespread feature of human psychology and causes costly mistakes, we are faced with an evolutionary puzzle as to why humans should have evolved or maintained such an apparently damaging bias. One possible solution is that overconfidence can actually be advantageous on average (even if costly at times), because it boosts ambition, morale, resolve, persistence or the credibility of bluffing. If such features increased net payoffs in competition or conflict over the course of human evolutionary history, then overconfidence may have been favoured by natural selection5, 6, 7, 8.
However, it is unclear whether such a bias can evolve in realistic competition with alternative strategies. The null hypothesis is that biases would die out, because they lead to faulty assessments and suboptimal behaviour. In fact, a large class of economic models depend on the assumption that biases in beliefs do not exist17. Underlying this assumption is the idea that there must be some evolutionary or learning process that causes individuals with correct beliefs to be rewarded (and thus to spread at the expense of individuals with incorrect beliefs). However, unbiased decisions are not necessarily the best strategy for maximizing benefits over costs, especially under conditions of competition, uncertainty and asymmetric costs of different types of error8, 18, 19, 20, 21. Whereas economists tend to posit the notion of human brains as general-purpose utility maximizing machines that evaluate the costs, benefits and probabilities of different options on a case-by-case basis, natural selection may have favoured the development of simple heuristic biases (such as overconfidence) in a given domain because they were more economical, available or faster.
Here we present a model showing that, under plausible conditions for the value of rewards, the cost of conflict, and uncertainty about the capability of competitors, there can be material rewards for holding incorrect beliefs about one’s own capability. These adaptive advantages of overconfidence may explain its emergence and spread in humans, other animals or indeed any interacting entities, whether by a process of trial and error, imitation, learning or selection. The situation we model—a competition for resources—is simple but general, thereby capturing the essence of a broad range of competitive interactions including animal conflict, strategic decision-making, market competition, litigation, finance and war.
Very interesting. But I just had a thought -- perhaps this may also explain why many economists seem to exhibit such irrational exuberance over the value of neo-classical theory itself?
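Out of curiosity, here is a crude Monte Carlo toy loosely in the spirit of the setup the authors describe -- two individuals compete for a resource of value r, fighting costs each of them c, and each sees the rival's capability only noisily. I should stress this is my own sketch with invented parameters, not the paper's actual model, and whether a given self-assessment bias pays off in it depends entirely on r, c and the noise level:

```python
import numpy as np

rng = np.random.default_rng(3)

def mean_payoff(bias, r=2.0, c=1.0, noise=1.0, n=200_000):
    mine = rng.standard_normal(n)    # my true capability
    yours = rng.standard_normal(n)   # rival's true capability
    # I claim the resource if my (biased) self-assessment beats my noisy view of the rival
    i_claim = (mine + bias) > (yours + noise * rng.standard_normal(n))
    # the rival is unbiased here, but sees me with the same amount of noise
    you_claim = yours > (mine + noise * rng.standard_normal(n))
    win_fight = mine > yours
    both = i_claim & you_claim
    payoff = np.where(
        both,
        np.where(win_fight, r, 0.0) - c,         # both claim: fight over it, I pay c
        np.where(i_claim & ~you_claim, r, 0.0),  # only I claim: take it unopposed
    )
    return payoff.mean()

for b in (0.0, 0.5, 1.0):
    print(f"self-assessment bias {b:+.1f}: average payoff {mean_payoff(b):.3f}")
```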
High-frequency trading, the downside -- Part II
In this post I'm going to look a little further at Andrew Haldane's recent Bank of England speech on high-frequency trading. In Part I of this post I explored the first part of the speech which looked at evidence that HFT has indeed lowered bid-ask spreads over the past decade, but also seems to have brought about an increase in volatility. Not surprisingly, one measure doesn't even begin to tell the story of how HFT is changing the markets. Haldane explores this further in the second part of the speech, but also considers in a little more detail where this volatility comes from.
In a well-known study back in 1999, physicist Parameswaran Gopikrishnan and colleagues (from Gene Stanley's group in Boston) undertook what was then the most detailed look at market fluctuations (using data from the S&P Index in this case) over periods ranging from 1 minute up to 1 month. This early study established a finding which (I believe) has now been replicated across many markets -- market returns over timescales from 1 minute up to about 4 days all follow a fat-tailed power law distribution with exponent α close to 3. The study also found that the return distribution becomes more Gaussian for times longer than about 4 days. Hence, there seems to be rich self-similarity and fractal structure to market returns on times down to around 1 second.
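For anyone who wants to play with this, here's a minimal sketch of how such a tail exponent is typically estimated (the Hill estimator applied to the largest absolute returns). The data here are synthetic Student-t returns standing in for real ones, chosen because their tail exponent is known to be 3:

```python
import numpy as np

def hill_tail_exponent(returns, k=200):
    """Hill estimate of the tail exponent alpha from the k largest absolute returns."""
    x = np.sort(np.abs(returns))[::-1]    # absolute returns, largest first
    top, threshold = x[:k], x[k]
    return 1.0 / np.mean(np.log(top / threshold))

# Synthetic stand-in data: Student-t returns with 3 degrees of freedom have a
# power-law tail with alpha = 3, roughly what is reported for real index returns.
rng = np.random.default_rng(1)
fake_returns = rng.standard_t(df=3, size=100_000)
print("estimated tail exponent:", round(hill_tail_exponent(fake_returns), 2))
```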
What about shorter times? I haven't followed this story for a few years. It turns out that in 2007, Eisler and Kertesz looked at a different set of data -- for total transactions on the NYSE between 2000 and 2002 -- and found that the behaviour at short times (less than 60 minutes) was more Gaussian. This is reflected in the so-called Hurst exponent H having an estimated value close to 0.5. Roughly speaking, the Hurst exponent describes -- based on empirical estimates -- how rapidly a time series tends to wander away from its current value with increasing time. Calculate the root mean square deviation over a time interval T; for a Gaussian random walk (Brownian motion) this grows in proportion to T to the power H = 1/2. A Hurst exponent higher than 1/2 indicates some kind of interesting persistent correlation in movements.
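Here's a minimal sketch of that estimation idea -- measure how the root-mean-square deviation grows with the lag T and read off H as the slope on log-log axes. Applied to a plain random walk it should return a value close to 0.5 (this is a quick-and-dirty estimator, not the more careful methods used in the papers mentioned):

```python
import numpy as np

def hurst_rms(series, lags=range(2, 200)):
    """Estimate the Hurst exponent from how the RMS deviation grows with the lag T."""
    lags = np.array(list(lags))
    rms = [np.sqrt(np.mean((series[lag:] - series[:-lag]) ** 2)) for lag in lags]
    H, _ = np.polyfit(np.log(lags), np.log(rms), 1)   # slope of the log-log fit
    return H

# Sanity check on an ordinary random walk, which should give H close to 0.5.
rng = np.random.default_rng(2)
walk = np.cumsum(rng.standard_normal(50_000))
print("Hurst exponent of a plain random walk:", round(hurst_rms(walk), 3))
```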
However, as Haldane notes, Reginald Smith last year showed that stock movements over short times have, since around 2005, begun showing more fat-tailed behaviour with H above 0.5. That paper presents a number of figures showing H rising gradually over the period 2002-2009 from 0.5 to around 0.6 (with considerable fluctuation on top of the trend). This rise means that the market on short times has increasingly violent excursions, as Haldane's Chart 11 below illustrates with several simulations of time series having different Hurst exponents:
The increasing wildness of market movements has direct implications for the risks facing HFT market makers, and hence, the size of the bid-ask spread reflecting the premium they charge. As Haldane notes, the risk a market maker faces -- in holding stocks which may lose value or in encountering counterparties with superior information about true prices -- grows with the likely size of price excursions over any time period. And this size is directly linked to the Hurst exponent.
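As a purely stylized illustration (this is not Haldane's equation (1), just the scaling implied by a fractal price process): the RMS price excursion over a holding horizon T grows like sigma times T to the power H, so a market maker who wants to cover, say, a two-sigma excursion needs a half-spread that rises with both the volatility and the Hurst exponent:

```python
# Stylized toy only: inventory held for a horizon T faces an RMS price excursion
# of roughly sigma * T**H for a fractal price process, so a cautious market maker
# quotes a half-spread covering z of those excursions.
def required_half_spread(sigma, T, H, z=2.0):
    return z * sigma * T ** H

for H in (0.5, 0.6):
    for sigma in (0.01, 0.03):   # "calm" vs "stressed" per-period volatility
        s = required_half_spread(sigma, T=100, H=H)
        print(f"H={H:.1f}  sigma={sigma:.2f}  ->  half-spread ~ {s:.3f}")
```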
Hence, in increasingly volatile markets, HFTs become less able to provide liquidity to the market precisely because they have to protect themselves:
This has implications for the dynamics of bid-ask spreads, and hence liquidity, among HFT firms. During a market crash, the volatility of prices (σ) is likely to spike. From equation (1), fractality heightens the risk sensitivity of HFT bid-ask spreads to such a volatility event. In other words, liquidity under stress is likely to prove less resilient. This is because one extreme event, one flood or drought on the Nile, is more likely to be followed by a second, a third and a fourth. Recognising that greater risk, market makers’ insurance premium will rise accordingly.
This is the HFT inventory problem. But the information problem for HFT market-makers in situations of stress is in many ways even more acute. Price dynamics are the fruits of trader interaction or, more accurately, algorithmic interaction. These interactions will be close to impossible for an individual trader to observe or understand. This algorithmic risk is not new. In 2003, a US trading firm became insolvent in 16 seconds when an employee inadvertently turned an algorithm on. It took the company 47 minutes to realise it had gone bust.
Since then, things have stepped up several gears. For a 14-second period during the Flash Crash, algorithmic interactions caused 27,000 contracts of the S&P 500 E-mini futures contracts to change hands. Yet, in net terms, only 200 contracts were purchased. HFT algorithms were automatically offloading contracts in a frenetic, and in net terms fruitless, game of pass-the-parcel. The result was a magnification of the fat tail in stock prices due to fire-sale forced machine selling.
These algorithmic interactions, and the uncertainty they create, will magnify the effect on spreads of a market event. Pricing becomes near-impossible and with it the making of markets. During the Flash Crash, Accenture shares traded at 1 cent, and Sotheby’s at $99,999.99, because these were the lowest and highest quotes admissible by HFT market-makers consistent with fulfilling their obligations. Bid-ask spreads did not just widen, they ballooned. Liquidity entered a void. That trades were executed at these “stub quotes” demonstrated algorithms were running on autopilot with liquidity spent. Prices were not just information inefficient; they were dislocated to the point where they had no information content whatsoever.
This simply follows from the natural dynamics of the market, and the situation market makers find themselves in. If they want to profit, if they want to survive, they need to manage their risks, and these risks grow rapidly in times of high volatility. Their response is quite understandable -- to leave the market, or at least to charge much more for their services.
Individually this is all quite rational, yet the systemic effects aren't likely to benefit anyone. The situation, Haldane notes, resembles a Tragedy of the Commons in which individually rational actions lead to a collective disaster, fantasies about the Invisible Hand notwithstanding:
If the way to make money is to make markets, and the way to make markets is to make haste, the result is likely to be a race – an arms race to zero latency. Competitive forces will generate incentives to break the speed barrier, as this is the passport to lower spreads which is in turn the passport to making markets. This arms race to zero is precisely what has played out in financial markets over the past few years.
Arms races rarely have a winner. This one may be no exception. In the trading sphere, there is a risk the individually optimising actions of participants generate an outcome for the system which benefits no-one – a latter-day “tragedy of the commons”. How so? Because speed increases the risk of feasts and famines in market liquidity. HFT contribute to the feast through lower bid-ask spreads. But they also contribute to the famine if their liquidity provision is fickle in situations of stress.
Haldane then goes on to explore what might be done to counter these trends. I'll finish with a third post on this part of the speech very soon.
But what is perhaps most interesting in all this is how much of Haldane's speech refers to recent work done by physicists -- Janos Kertesz, Jean-Philippe Bouchaud, Gene Stanley, Doyne Farmer and others -- rather than studies more in the style of neo-classical efficiency theory. It's encouraging to see that at least one very senior banking authority is taking this stuff seriously.
Friday, September 23, 2011
Brouwer's fixed point theorem...why mathematics is fun
**UPDATE BELOW**
I'm not going to post very frequently on Brouwer's fixed point theorem, but I had to look into it a little today. A version of it was famously used by Ken Arrow and Gerard Debreu in their 1954 proof that general equilibrium models in economics (models of a certain kind which require about 13 assumptions to define) do indeed have an equilibrium set of prices which makes supply equal demand for all goods. There's a nice review article on that here for anyone who cares.
Brouwer's theorem essentially says that when you take a convex set (a disk, say, including both the interior and the boundary) and map it into itself in some smooth and continuous way, there has to be at least one point which stays fixed, i.e. is mapped into itself. This has some interesting and counter-intuitive implications, as some contributor to Wikipedia has pointed out:
The theorem has several "real world" illustrations. For example: take two sheets of graph paper of equal size with coordinate systems on them, lay one flat on the table and crumple up (without ripping or tearing) the other one and place it, in any fashion, on top of the first so that the crumpled paper does not reach outside the flat one. There will then be at least one point of the crumpled sheet that lies directly above its corresponding point (i.e. the point with the same coordinates) of the flat sheet. This is a consequence of the n = 2 case of Brouwer's theorem applied to the continuous map that assigns to the coordinates of every point of the crumpled sheet the coordinates of the point of the flat sheet immediately beneath it.
Similarly: Take an ordinary map of a country, and suppose that that map is laid out on a table inside that country. There will always be a "You are Here" point on the map which represents that same point in the country.
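Just to make the crumpled-paper example tangible, here's a little numerical illustration of my own (nothing to do with Arrow-Debreu): define some continuous map of the unit square into itself and search for the point it moves least. Brouwer guarantees the minimum displacement is essentially zero:

```python
import numpy as np

# A continuous "crumpling" map from the unit square into itself: rotate about the
# centre, shrink a little, and add a smooth wobble. These particular choices are
# arbitrary; any continuous self-map of the square would do.
def f(x, y):
    cx, cy = x - 0.5, y - 0.5
    theta = 0.7
    u = 0.5 + 0.8 * (np.cos(theta) * cx - np.sin(theta) * cy) + 0.05 * np.sin(3 * y)
    v = 0.5 + 0.8 * (np.sin(theta) * cx + np.cos(theta) * cy) + 0.05 * np.cos(3 * x)
    return np.clip(u, 0, 1), np.clip(v, 0, 1)   # clipping keeps the map inside the square

# Brute-force search for the point moved least by the map; for a continuous map
# Brouwer guarantees a genuine fixed point, so the minimum displacement on a fine
# grid should be tiny.
xs = np.linspace(0, 1, 400)
X, Y = np.meshgrid(xs, xs)
U, V = f(X, Y)
disp = np.hypot(U - X, V - Y)
i, j = np.unravel_index(np.argmin(disp), disp.shape)
print(f"approximate fixed point: ({X[i, j]:.3f}, {Y[i, j]:.3f}), moved by {disp[i, j]:.2e}")
```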
**UPDATE**
In comments, "computers can be gamed" rightly points out that the theorem only works if one considers a smooth mapping of a set into itself. This is very important.
Indeed, the Wikipedia page for Brouwer's theorem gives, in addition to the examples quoted above, a three-dimensional example -- the liquid in a cup. Stir that liquid, they suggest, and -- since the initial volume of liquid simply gets mapped into the same volume, with elements rearranged -- there must be one point somewhere which has not moved. But this is a mistake unless you carry out the stirring with extreme care -- or use a high-viscosity liquid such as oil or glycerine.
Ordinary stirring of water creates fluid turbulence -- disorganized flow in which eddies create smaller eddies and you quickly get discontinuities in the flow down to the smallest molecular scales. In this case -- the ordinary case -- the mapping from the liquid's initial position to its later position is NOT smooth, and the theorem doesn't apply.
Class warfare and public goods
I think this is about the best short description I've heard yet of why wealth isn't created by heroic individuals (a la Ayn Rand's most potent fantasies). I just wish Elizabeth Warren had been appointed head of the new Consumer Financial Protection Bureau. Based on the words below, I can see why there was intense opposition from Wall St. She's obviously not a Randroid:
I hear all this, you know, “Well, this is class warfare, this is whatever.”—No!
There is nobody in this country who got rich on his own. Nobody.
You built a factory out there—good for you! But I want to be clear.
You moved your goods to market on the roads the rest of us paid for.
You hired workers the rest of us paid to educate.
You were safe in your factory because of police forces and fire forces that the rest of us paid for.
You didn’t have to worry that marauding bands would come and seize everything at your factory, and hire someone to protect against this, because of the work the rest of us did.
Now look, you built a factory and it turned into something terrific, or a great idea—God bless. Keep a big hunk of it.
But part of the underlying social contract is you take a hunk of that and pay forward for the next kid who comes along.
Thinking about thinking
Psychologist Daniel Kahneman has a book coming out in November, Thinking, Fast and Slow. It's all about mental heuristics and the two different functional levels of the brain -- the fast, instinctive part, which is effortless but prone to errors, and the slow, rational part, which takes effort to use but which can (in some cases) correct some of the errors of the first part. His Nobel Prize Lecture from 2002 is a fascinating read, so I'm looking forward to the book.
But meanwhile, edge.org has some videos and text of a series of very informal talks Kahneman recently gave. These give some fascinating insight into the origins some of his thinking on decision theory, prospect theory (why we value gains and losses relative to our own current position, rather than judge outcomes in terms of total wealth), why corporations make bad decisions and don't work too hard to improve their ability to make better ones, and so on. Here's one nice example of many:
The question I'd like to raise is something that I'm deeply curious about, which is what should organizations do to improve the quality of their decision-making? And I'll tell you what it looks like, from my point of view.
I have never tried very hard, but I am in a way surprised by the ambivalence about it that you encounter in organizations. My sense is that by and large there isn't a huge wish to improve decision-making—there is a lot of talk about doing so, but it is a topic that is considered dangerous by the people in the organization and by the leadership of the organization. I'll give you a couple of examples. I taught a seminar to the top executives of a very large corporation that I cannot name and asked them, would you invest one percent of your annual profits into improving your decision-making? They looked at me as if I was crazy; it was too much.
I'll give you another example. There is an intelligence agency, and the CIA, and a lot of activity, and there are academics involved, and there is a CIA university. I was approached by someone there who said, will you come and help us out, we need help to improve our analysis. I said, I will come, but on one condition, and I know it will not be met. The condition is: if you can get a workshop where you get one of the ten top people in the organization to spend an entire day, I will come. If you can't, I won't. I never heard from them again.
What you can do is have them organize a conference where some really important people will come for three-quarters of an hour and give a talk about how important it is to improve the analysis. But when it comes to, are you willing to invest time in doing this, the seriousness just vanishes. That's been my experience, and I'm puzzled by it.
Wednesday, September 21, 2011
High-frequency trading -- the downside, Part I
Andrew Haldane of the Bank of England has given a stream of recent speeches -- more like detailed research reports -- offering deep insight into various pressing issues in finance. One of his most recent speeches looks at high-frequency trading (HFT), noting its positive aspects as well as its potential negative consequences. Importantly, he has tried to do this in non-ideological fashion, always looking to the data to back up any perspective.
The speech is wide-ranging and I want to explore its points in some detail, so I'm going to break this post into three (I think) parts looking at different aspects of his argument. This is number one; the others will arrive shortly.
To begin with, Haldane notes that in the last decade, as HFT has become prominent, trading volumes have soared and, as they have, the time over which stocks are held before being traded again has fallen:
... at the end of the Second World War, the average US share was held by the average investor for around four years. By the start of this century, that had fallen to around eight months. And by 2008, it had fallen to around two months.
It was about a decade ago that trading execution times on some electronic trading platforms fell below the one-second barrier. But the steady march to ever faster trading goes on:
As recently as a few years ago, trade execution times reached “blink speed” – as fast as the blink of an eye. At the time that seemed eye-watering, at around 300-400 milli-seconds or less than a third of a second. But more recently the speed limit has shifted from milli-seconds to micro-seconds – millionths of a second. Several trading platforms now offer trade execution measured in micro-seconds (Table 1).
As of today, the lower limit for trade execution appears to be around 10 micro-seconds. This means it would in principle be possible to execute around 40,000 back-to-back trades in the blink of an eye. If supermarkets ran HFT programmes, the average household could complete its shopping for a lifetime in under a second.
It is clear from these trends that trading technologists are involved in an arms race. And it is far from over. The new trading frontier is nano-seconds – billionths of a second. And the twinkle in technologists’ (unblinking) eye is pico-seconds – trillionths of a second. HFT firms talk of a “race to zero”.
Haldane then goes on to consider what effect this trend has had so far on the nature of trading, looking in particular at market makers.
First, he offers a useful clarification of why the bid-ask spread is normally taken as a useful measure of market liquidity (or more correctly, the inverse of market liquidity). As he points out, the profits market makers earn from the bid-ask spread represent a fee they require for taking risks that grow more serious with lower liquidity:
The market-maker faces two types of problem. One is an inventory-management problem – how much stock to hold and at what price to buy and sell. The market-maker earns a bid-ask spread in return for solving this problem since they bear the risk that their inventory loses value. ...Market-makers face a second, information-management problem. This arises from the possibility of trading with someone better informed about true prices than themselves – an adverse selection risk. Again, the market-maker earns a bid-ask spread to protect against this informational risk.
The bid-ask spread, then, is the market-makers’ insurance premium. It provides protection against risks from a depreciating or mis-priced inventory. As such, it also proxies the “liquidity” of the market – that is, its ability to absorb buy and sell orders and execute them without an impact on price. A wider bid-ask spread implies greater risk in the sense of the market’s ability to absorb volume without affecting prices.
The above offers no new insights, but it explains the relationship in a very clear way.
Next comes the question of whether HFT has made markets work more efficiently, and here things become more interesting. First, there is a great deal of evidence (some I've written about here earlier) showing that the rise of HFT has caused a decrease in bid-ask spreads, and hence an improvement in market liquidity. Haldane cites several studies:
For example, Brogaard (2010) analyses the effects of HFT on 26 NASDAQ-listed stocks. HFT is estimated to have reduced the price impact of a 100-share trade by $0.022. For a 1000-share trade, the price impact is reduced by $0.083. In other words, HFT boosts the market’s absorptive capacity. Consistent with that, Hendershott et al (2010) and Hasbrouck and Saar (2011) find evidence of algorithmic trading and HFT having narrowed bid-ask spreads.
His Chart 8 (reproduced below) shows a measure of bid-ask spreads on UK equities over the past decade, the data having been normalised by a measure of market volatility to "strip out volatility spikes."
It's hard to be precise, but the figure shows something like a ten-fold reduction in bid-ask spreads over the past decade. Hence, by this metric, HFT really does appear to have "greased the wheels of modern finance."
But there's also more to the story. Even if bid-ask spreads may have generally fallen, it's possible that other measures of market function have also changed, and not in a good way. Haldane moves on to another set of data, his Chart 9 (below), which shows data on volatility vs correlation for components of the S&P 500 since 1990. This chart indicates that there has been a general link between volatility and correlation -- in times of high market volatility, stock movements tend to be more correlated. Importantly, the link has grown increasingly strong in the latter period 2005-2010.
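I don't know exactly how Chart 9 was constructed, but here is one plausible way to build such a volatility-versus-correlation picture from a panel of stock returns (synthetic data stands in for the S&P 500 constituents here):

```python
import numpy as np

rng = np.random.default_rng(5)
n_days, n_stocks, window = 2500, 50, 60
returns = rng.standard_normal((n_days, n_stocks)) * 0.01   # placeholder daily returns

vols, corrs = [], []
for start in range(0, n_days - window, window):
    chunk = returns[start:start + window]
    index_ret = chunk.mean(axis=1)                   # equal-weighted index return
    vols.append(index_ret.std() * np.sqrt(252))      # annualised index volatility
    c = np.corrcoef(chunk, rowvar=False)             # stock-by-stock correlation matrix
    corrs.append(c[np.triu_indices(n_stocks, k=1)].mean())   # mean pairwise correlation

for v, c in list(zip(vols, corrs))[:5]:
    print(f"index vol {v:.3f}   avg pairwise corr {c:+.3f}")
```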
What this implies, Haldane suggests, is that HFT has driven this increasing link, with consequences.
Two things have happened since 2005, coincident with the emergence of trading platform fragmentation and HFT. First, both volatility and correlation have been somewhat higher. Volatility is around 10 percentage points higher than in the earlier sample, while correlation is around 8 percentage points higher. Second, the slope of the volatility / correlation curve is steeper. Any rise in volatility now has a more pronounced cross-market effect than in the past.... Taken together, this evidence points towards market volatility being both higher and propagating further than in the past.
This interpretation is as interesting as it is perhaps obvious in retrospect. Markets have calmer periods and stormier periods. HFT seems to have reduced bid-ask spreads in the calmer times, making markets work more smoothly. But it appears to have done just the opposite in stormy times:
Far from solving the liquidity problem in situations of stress, HFT firms appear to have added to it. And far from mitigating market stress, HFT appears to have amplified it. HFT liquidity, evident in sharply lower peacetime bid-ask spreads, may be illusory. In wartime, it disappears. This disappearing act, and the resulting liquidity void, is widely believed to have amplified the price discontinuities evident during the Flash Crash. HFT liquidity proved fickle under stress, as flood turned to drought.
This is an interesting point, and it shows how easy it is to jump to comforting but possibly incorrect conclusions by looking at just one measure of market function, or by focusing on "normal" times as opposed to the non-normal times which are nevertheless a real part of market history.
As I said, the speech goes on to explore some other related arguments touching on other deep aspects of market behaviour. I hope to explore these in some detail soon.
Tuesday, September 20, 2011
A bleak perspective... but probably true
I try not to say too much about our global economic and environmental future as I have zero claim to any special insight. I do have a fairly pessimistic view, which is reinforced every year or so when I read in Nature or Science the latest bleak assessment of the rapid and likely irreversible decline of marine ecosystems. I simply can't see humans on a global scale changing their ways very significantly until some truly dreadful catastrophes strike.
Combine environmental issues with dwindling resources and the global economic crisis, and the near-term future really doesn't look so rosy. On this topic, I have been enjoying an interview at Naked Capitalism with Satyajit Das (part 1, part 2, with part 3 I think still to come), who has worked for more than 30 years in the finance industry. I'm looking forward to reading his new book "Extreme Money: Masters of the Universe and the Cult of Risk." Here's an excerpt from the interview which, as much as any analysis I've read, seems like a plausible picture for our world over the next few decades:
There are problems to which there are no answers, no easy solutions. Human beings are not all powerful creatures. There are limits to our powers, our knowledge and our understanding.
The modern world has been built on an ethos of growth, improving living standards and growing prosperity. Growth has been our answer to everything. This is what drove us to the world of ‘extreme money’ and financialisation in the first place. Now three things are coming together to bring that period of history to a conclusion – the end of financialisation, environmental concerns and limits to certain essential natural resources like oil and water. Environmental advocate Edward Abbey put it bluntly: “Growth for the sake of growth is the ideology of a cancer cell.”
We are reaching the end of a period of growth, expansion and, maybe, optimism. Increased government spending or income redistribution, even if it is implemented (which I doubt), may not necessarily work. Living standards will have to fall. Competition between countries for growth will trigger currency and trade wars – we are seeing that already with the Swiss intervening to lower their currency and emerging markets putting in place capital controls. All this will further crimp growth. Social cohesion and order may break down. Extreme political views might become popular and powerful. Xenophobia and nationalism will become more prominent as people look for scapegoats.
People draw comparisons to what happened in Japan. But Japan had significant advantages – the world’s largest savings pool, global growth which allowed its exporters to prosper, a homogeneous, stoic population who were willing to bear the pain of the adjustment. Do those conditions exist everywhere?
We will be caught in the ruins of this collapsed Ponzi scheme for a long time, while we try to rediscover more traditional sources of growth like innovation and productivity improvements – real engineering rather than financial engineering. But we will still have to pay for the cost of our past mistakes which will complicate the process.
Fyodor Dostoevsky wrote in The Possessed: “It is hard to change gods.” It seems to me that that’s what we are trying to do. It may be possible but it won’t be simple or easy. It will also take a long, long time and entail a lot of pain.
Friday, September 16, 2011
Milton Friedman's grand illusion
Three years ago I wrote an Op-Ed for the New York Times on the need for radical change in the way economists model whole economies. Today's General Equilibrium models -- and their slightly more sophisticated cousins, Dynamic Stochastic General Equilibrium models -- make assumptions with no basis in reality. For example, there is no financial sector in these model economies. They generally assume that the diversity of behaviour of all an economy's many firms and consumers can be ignored and simply included as the average behaviour of a few "representative" agents.
I argued then that it was about time economists started using far more sophisticated modeling tools, including agent-based models, in which the diversity of interactions among economic agents can be included along with a financial sector. The idea is to model the simpler behaviours of individual agents as well as you can and let the macro-scale complex behaviour of the economy emerge naturally out of them, without making any restrictive assumptions about what kinds of things can or cannot happen in the larger economy. This kind of work is going forward rapidly. For some detail, I recommend this talk from earlier this month by Doyne Farmer.
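For the technically inclined, here is a minimal sketch in Python of what I mean by an agent-based model. The agent rules, parameters and price-impact function are all invented purely for illustration; the point is only that the market-level dynamics emerge from the interactions of simple agents rather than being assumed at the outset.

    import random

    # Toy agent-based market: "fundamentalists" trade toward a perceived fair value,
    # while "chartists" chase the most recent price move. The aggregate price path
    # emerges from their combined orders. All rules and parameters are illustrative
    # assumptions, not taken from any published model.

    class Fundamentalist:
        def __init__(self, fair_value=100.0):
            self.fair_value = fair_value

        def order(self, price, history):
            # Buy when the price is below fair value, sell when above.
            return 0.05 * (self.fair_value - price)

    class Chartist:
        def order(self, price, history):
            # Extrapolate the latest price change.
            if len(history) < 2:
                return 0.0
            return 0.5 * (history[-1] - history[-2])

    def simulate(n_fundamentalists=50, n_chartists=50, steps=500, seed=1):
        random.seed(seed)
        agents = ([Fundamentalist() for _ in range(n_fundamentalists)] +
                  [Chartist() for _ in range(n_chartists)])
        price, history = 100.0, [100.0]
        for _ in range(steps):
            excess_demand = sum(agent.order(price, history) for agent in agents)
            price += 0.01 * excess_demand + random.gauss(0.0, 0.1)  # price impact plus noise
            history.append(price)
        return history

    prices = simulate()
    print("final price:", round(prices[-1], 2))

Even a toy like this produces booms, crashes and fat-tailed price changes once the chartists are given enough weight, which is exactly the kind of behaviour the representative-agent framework rules out by construction.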
After that Op-Ed I received quite a number of emails from economists defending the General Equilibrium approach. Several of them mentioned Milton Friedman in their defense, saying that he had shown long ago that one shouldn't worry about the realism of the assumptions in a theory, but only about the accuracy of its predictions. I eventually found the paper to which they were referring, a classic in economic history which has exerted a huge influence on economists over the past half century. I recently re-read the paper and wanted to make a few comments on Friedman's main argument. It rests entirely, I think, on a devious or slippery use of words which makes it possible to give a sensible-sounding argument for what is actually a ridiculous proposition.
The paper is entitled The Methodology of Positive Economics and was first published in 1953. It's an interesting paper and enjoyable to read. Essentially, it seems, Friedman's aim is to argue for scientific standards for economics akin to those used in physics. He begins by making a clear definition of what he means by "positive economics," which aims to be free from any particular ethical position or normative judgments. As he wrote, positive economics deals with...
"what is," not with "what ought to be." Its task is to provide a system of generalizations that can be used to make correct predictions about the consequences of any change in circumstances. Its performance is to be judged by the precision, scope, and conformity with experience of the predictions it yields.
Friedman then asks how one should judge the validity of a hypothesis, and asserts that...
...the only relevant test of the validity of a hypothesis is comparison of its predictions with experience. The hypothesis is rejected if its predictions are contradicted ("frequently" or more often than predictions from an alternative hypothesis); it is accepted if its predictions are not contradicted; great confidence is attached to it if it has survived many opportunities for contradiction. Factual evidence can never "prove" a hypothesis; it can only fail to disprove it, which is what we generally mean when we say, somewhat inexactly, that the hypothesis has been "confirmed" by experience.
So far so good. I think most scientists would see the above as conforming fairly closely to their own conception of how science should work (and of course this view is closely linked to views made famous by Karl Popper).
Next step: Friedman goes on to ask how one chooses between several hypotheses if they are all equally consistent with the available evidence. Here too his initial observations seem quite sensible:
...there is general agreement that relevant considerations are suggested by the criteria "simplicity" and "fruitfulness," themselves notions that defy completely objective specification. A theory is "simpler" the less the initial knowledge needed to make a prediction within a given field of phenomena; it is more "fruitful" the more precise the resulting prediction, the wider the area within which the theory yields predictions, and the more additional lines for further research it suggests.
Again, right in tune I think with the practice and views of most scientists. I especially like the final point that part of the value of a hypothesis also comes from how well it stimulates creative thinking about further hypotheses and theories. This point is often overlooked.
Friedman's essay then shifts direction. He argues that the processes and practices involved in the initial formation of a hypothesis, and in the testing of that hypothesis, are not as distinct as people often think. Indeed, this is obviously so. Many scientists form a hypothesis and try to test it, then adjust the hypothesis slightly in view of the data. There's an ongoing evolution of the hypothesis in correspondence with the data and the kinds of experiments or observations which seem interesting.
To this point, Friedman's essay says nothing that wouldn't fit into any standard discussion of the generally accepted philosophy of science from the 1950s. But this is where it suddenly veers off wildly and attempts to support a view that is indeed quite radical. Friedman mentions the difficulty in the social sciences of getting new evidence with which to test an hypothesis by looking at its implications. This difficulty, he suggests,
... makes it tempting to suppose that other, more readily available, evidence is equally relevant to the validity of the hypothesis -- to suppose that hypotheses have not only "implications" but also "assumptions" and that the conformity of these "assumptions" to "reality" is a test of the validity of the hypothesis different from or additional to the test by implications. This widely held view is fundamentally wrong and productive of much mischief.
Having raised this idea that assumptions are not part of what should be tested, Friedman then goes on to attack very strongly the idea that a theory should strive at all to have realistic assumptions. Indeed, he suggests, a theory is actually superior insofar as its assumptions are unrealistic:
In so far as a theory can be said to have "assumptions" at all, and in so far as their "realism" can be judged independently of the validity of predictions, the relation between the significance of a theory and the "realism" of its "assumptions" is almost the opposite of that suggested by the view under criticism. Truly important and significant hypotheses will be found to have "assumptions" that are wildly inaccurate descriptive representations of reality, and, in general, the more significant the theory, the more unrealistic the assumptions... The reason is simple. A hypothesis is important if it "explains" much by little,... To be important, therefore, a hypothesis must be descriptively false in its assumptions...
This is the statement that the economists who wrote to me used to defend unrealistic assumptions in General Equilibrium theories. Their point was that unrealistic assumptions aren't just harmless but are a positive strength of a theory. The more unrealistic the better, as Friedman argued (and apparently proved, in the eyes of some economists).
Now, what is wrong with Friedman's argument, if anything? I think the key issue is his use of provocative terms such as "unrealistic," "false" and "inaccurate" in places where he actually means "simplified," "approximate" or "incomplete." He switches without warning between these two different meanings in order to make the conclusion seem unavoidable, and profound, when in fact it is either simply not true, or something we already believe and hardly profound at all.
To see the problem, take a simple example from physics. Newtonian dynamics describes the motions of the planets quite accurately (in many cases) even if the planets are treated as point masses having no extension, no rotation, no oceans and tides, no mountains or trees, and so on. The great triumph of Newtonian dynamics (including Newton's law of gravitational attraction) is its simplicity -- it asserts that out of all the many details that could conceivably influence planetary motion, two (mass and distance) matter most by far. The atmosphere of the planet doesn't matter much, nor does the amount of sunlight it reflects. The theory of course goes further to describe how other details do matter if one considers planetary motion in more detail -- rotation does matter, for example, because it generates tides which dissipate energy, slowly taking energy away from the orbital motion.
But I don't think anyone would be tempted to say that Newtonian dynamics is a powerful theory because it is descriptively false in its assumptions. Its assumptions are actually descriptively simple -- that the planets and the Sun have mass, and that a force acts between any two masses in proportion to the product of their masses and in inverse proportion to the square of the distance between them. From these assumptions one can work out predictions for the details of planetary motion, and those details turn out to be close to what we see. The assumptions are simple and plausible, and this is what makes the theory so convincing when it turns out to make accurate predictions.
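To make that concrete, here is a rough numerical sketch in Python of just how much follows from those two simple assumptions alone. The units and initial conditions are arbitrary choices (GM set to 1); integrating nothing more than the inverse-square law already produces a stable, roughly elliptical orbit, with no need for atmospheres, tides or mountains.

    import math

    # Integrate Newton's inverse-square law for a single planet around a fixed sun.
    # Units are arbitrary (GM = 1); the initial speed is chosen below circular speed
    # so the orbit comes out elliptical. A semi-implicit Euler step keeps it stable.

    GM = 1.0
    dt = 0.001

    def acceleration(x, y):
        r_cubed = (x * x + y * y) ** 1.5
        return -GM * x / r_cubed, -GM * y / r_cubed

    def integrate(steps=20000):
        x, y = 1.0, 0.0      # start one distance unit from the sun
        vx, vy = 0.0, 0.9    # slightly less than circular speed (which would be 1.0)
        for _ in range(steps):
            ax, ay = acceleration(x, y)
            vx, vy = vx + ax * dt, vy + ay * dt
            x, y = x + vx * dt, y + vy * dt
        return x, y

    x, y = integrate()
    print("distance from sun after 20,000 steps:", round(math.hypot(x, y), 3))

The planet stays bound, sweeping between perihelion and aphelion, which is the Keplerian behaviour the simple assumptions predict.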
Indeed, if those same predictions came out of a theory with obviously false assumptions -- all planets are perfect cubes, etc. -- it would be less powerful by far because it would be less believable. Its ability to make predictions would be as big a mystery as the original phenomenon of planetary motion itself -- how can a theory that is so obviously not in tune with reality still make such accurate predictions?
So whenever Friedman says "descriptively false" I think you can instead write "descriptively simple", and clarify the meaning by adding a phrase of the sort "which identify the key factors which matter most." Do that replacement in Friedman's most provocative phrase from above and you have something far more sensible:
A hypothesis is important if it "explains" much by little,... To be important, therefore, a hypothesis must be descriptively simple in its assumptions. It must identify the key factors which matter most...
That's not quite so bold, however, and it doesn't create a license for theorists to make any assumptions they want without being criticized if those assumptions stray very far from reality.
Of course, there is a place in science for unrealistic assumptions. A theory capturing core realities of a problem may simply be too difficult to work with, making it impossible to draw out testable predictions. Scientists often simplify the assumptions well past the point of plausibility just to be able to calculate something (quantum gravity in one dimension, for example), hoping that insight gained in the process may make it possible to step back toward a more realistic theory. But false assumptions in this case are merely tools for getting to a theory that doesn't have to make false assumptions.
There is another matter, too, which Friedman skipped over entirely in his essay. He suggested that economic theories should be judged solely on the precision of their predictions, not the plausibility of their assumptions. But never once in the essay did he give a single example of an economic theory with unrealistic or descriptively false assumptions which makes impressively accurate predictions. A curious omission.
Thursday, September 15, 2011
The long history of options
I just finished reading Niall Ferguson's book The Ascent of Money, which I strongly recommend to anyone interested in the history of economics and especially finance. Some readers of this blog may suspect that I am at times anti-finance, but this isn't really true. Ferguson makes a very convincing argument that finance is a technology -- a rich and evolving set of techniques for solving problems -- which has been as important to human well-being as knowledge of mechanics, chemistry and fire. I don't think that's at all overstated -- finance is a technology for sharing and cooperating in our management of wealth, savings and risk in the face of uncertainty. It's among the most basic and valuable technologies we possess.
Having said that, I am critical of finance when I think it is A) based on bad science, or B) used dishonestly as a tool by some people to take advantage of others. Naturally, because finance is complicated and difficult to understand there are many instances of both A and B. And of course one often finds concepts from category A aiding acts of category B.
But one thing I found particularly interesting in Ferguson's history is the early origins of options contracts and other derivatives. The use of derivatives has of course exploded since the work of Black and Scholes in the 1970s provided a more or less sensible way to price some of them. It's easy to forget that options have been around at least since the mid-1500s (in Dutch and French commodities markets). They were in heavy use by the late 1600s in the coffee houses of London, where shareholders traded stocks of the East India Company and roughly 100 other joint-stock companies.
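For the curious, the Black-Scholes result mentioned above reduces, for a European call option, to a short closed-form expression. Here it is as a Python sketch; the input numbers at the bottom are purely illustrative.

    from math import erf, exp, log, sqrt

    # Black-Scholes price of a European call option.
    # S: spot price, K: strike, T: time to expiry in years,
    # r: risk-free interest rate, sigma: volatility.
    # The numbers passed in at the bottom are illustrative only.

    def norm_cdf(x):
        return 0.5 * (1.0 + erf(x / sqrt(2.0)))

    def black_scholes_call(S, K, T, r, sigma):
        d1 = (log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * sqrt(T))
        d2 = d1 - sigma * sqrt(T)
        return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

    print("call price:", round(black_scholes_call(S=100, K=105, T=0.5, r=0.02, sigma=0.25), 2))

Of course, the formula's tidy appearance rests on assumptions (constant volatility, log-normal price changes) that markets routinely violate -- which is part of the story of category A above.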
Looking a little further, I came across this excellent review article on the early history of options by Geoffrey Poitras of Simon Fraser University. This article goes into much greater detail than Ferguson on the history of options. As Poitras notes, early use in commodities markets arose quite naturally to meet key needs of the time (as any new technology does):
Another interesting point is the wide use in the 1500s of trading instruments which were essentially flat-out gambles, not so unlike the credit default swaps of our time (ostensibly used to manage risk, but often used to make outright bets). As Poitras writes,
The evolution of trading in free standing option contracts revolved around two important elements: enhanced securitization of the transactions; and the emergence of speculative trading. Both these developments are closely connected with the concentration of commercial activity, initially at the large medieval market fairs and, later, on the bourses. Though it is difficult to attach specific dates to the process, considerable progress was made by the Champagne fairs with the formalization of the lettre de foire and the bill of exchange, e.g., Munro (2000). The sophisticated settlement process used to settle accounts at the Champagne fairs was a precursor of the clearing methods later adopted for exchange trading of securities and commodities. Over time, the medieval market fairs came to be surpassed by trade in urban centres such as Bruges (de Roover 1948; van Houtte 1966) and, later, in Antwerp and Lyons. Of these two centres, Antwerp was initially most important for trade in commodities while Lyons for trade in bills. Fully developed bourse trading in commodities emerged in Antwerp during the second half of the 16th century (Tawney 1925, p.62-5; Gelderblom and Jonker 2005). The development of the Antwerp commodity market provided sufficient liquidity to support the development of trading in ‘to arrive’ contracts. Due to the rapid expansion of seaborne trade during the period, speculative transactions in ‘to arrive’ grain that was still at sea were particularly active. Trade in whale oil, herring and salt was also important (Gelderblom and Jonker 2005; Barbour 1950; Emery 1895). Over time, these contracts came to be actively traded by speculators either directly or indirectly involved in trading that commodity but not in need of either taking or making delivery of the specific shipment.
The concentration of liquidity on the Antwerp Exchange furthered speculative trading centered around the important merchants and large merchant houses that controlled either financial activities or the goods trade. The milieu for such trading was closely tied to medieval traditions of gambling (Van der Wee 1977): “Wagers, often connected with the conclusion of commercial and financial transactions, were entered into on the safe return of ships, on the possibility of Philip II visiting the Netherlands, on the sex of children as yet unborn etc. Lotteries, both private and public, were also extremely popular, and were submitted as early as 1524 to imperial approval to prevent abuse.”
One other interesting point (among many) is the advice of observers of the 17th century options markets to use easy credit to fund speculative activity. A man named Josef de la Vega in 1688 wrote a book on the markets called Confusion de Confusiones (still an apt title), and offered some fairly reckless advice to speculators:
De la Vega (p.155) goes on to describe an even more naive trading strategy: “If you are [consistently] unfortunate in all your operations and people begin to think that you are shaky, try to compensate for this defect by [outright] gambling in the premium business, [i.e., by borrowing the amount of the premiums]. Since this procedure has become general practice, you will be able to find someone who will give you credit (and support you in difficult situations, so you may win without dishonor).”
The possibility that the losses may continue is left unrecognized.
Or, of course, perhaps the possibility of continuing losses was recognized, and it was also recognized that these losses would in effect belong to someone else -- the person from whom the funds were borrowed.
These points aren't especially important, but they do drive home the point that almost everything we've seen in the past 20 years and in the recent financial crisis has precursors stretching back centuries. We're largely listening to an old tune being replayed with modern instruments.
Thursday, September 8, 2011
Of hurricanes and economic equilibrium
Economists generally interpret economies and financial markets as systems in equilibrium or at least close to equilibrium. The history of economics has revolved around this idea for more than a century since the early theories of Leon Walras and Stanley Jevons. More recently, the famous Efficient Markets Hypothesis (EMH) has been the guiding theme of financial market theory. I've written earlier about the many shortcomings of this hypothesis, but of course EMH enthusiasts often point to evidence in favour of the hypothesis. For example, it's hard to beat the market and difficult to find any predictability in market movements. This seems to point to something like information efficiency -- all public information already being reflected in prices.
The EMH runs into trouble in the form of many exceptions to market unpredictability, so-called "anomalies," such as those identified by Andrew Lo of MIT. His book A Non-Random Walk Down Wall Street, written with Craig MacKinlay and published in 1999, documented a number of persistent patterns in market movements. For example, he and colleagues found in a study of price movements over a period of 25 years that the present returns of lower-valued stocks were significantly correlated with the past returns of higher-valued stocks. This means that by looking at what has happened recently to higher-priced stocks, investors can in principle predict with some success what will happen to the future prices of cheaper stocks. This clearly contradicts the efficient market hypothesis.
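The statistic behind this kind of finding is just a lagged cross-correlation between two return series. Here is an illustrative Python sketch using synthetic data, constructed so that the "small-cap" returns partly echo the "large-cap" returns one step later; it is not an attempt to reproduce Lo and MacKinlay's actual analysis.

    import random

    # Lagged cross-correlation between two return series: the statistic behind the
    # lead-lag "anomaly". The data below are synthetic, generated so that the
    # small-cap returns partly follow the large-cap returns with a one-step delay.

    def lagged_correlation(x, y, lag):
        # Correlation between x[t - lag] and y[t].
        xs, ys = x[:len(x) - lag], y[lag:]
        n = len(xs)
        mean_x, mean_y = sum(xs) / n, sum(ys) / n
        cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(xs, ys)) / n
        std_x = (sum((a - mean_x) ** 2 for a in xs) / n) ** 0.5
        std_y = (sum((b - mean_y) ** 2 for b in ys) / n) ** 0.5
        return cov / (std_x * std_y)

    random.seed(0)
    large_cap = [random.gauss(0, 1) for _ in range(5000)]
    small_cap = [random.gauss(0, 1)] + [0.4 * large_cap[t - 1] + random.gauss(0, 1)
                                        for t in range(1, 5000)]

    print("lag 0:", round(lagged_correlation(large_cap, small_cap, 0), 3))
    print("lag 1:", round(lagged_correlation(large_cap, small_cap, 1), 3))

Under the EMH the lag-1 figure should be statistically indistinguishable from zero; a persistently positive value is exactly the sort of exploitable predictability the hypothesis forbids.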
But there's a way out for the EMH enthusiast, who typically responds by saying something like "Of course, there may be small deviations from equilibrium from time to time, but the market will then act to eliminate this deviation. As arbitragers move to profit from it, their actions will tend to wipe out the deviation and its associated element of predictability, restoring the efficient equilibrium after a short time."
The EMH defender can then even claim that this insight is precisely where the EMH displays its greatest power, as it makes specific predictions which generally turn out to be true. In the case of the "anomaly" identified by Lo, for example, the EMH enthusiast would point out that this particular predictability has disappeared over the past 20 years, just as the EMH would predict. By 1997, other researchers revisiting the phenomenon needed high-frequency data even to find it at the level of minute-by-minute returns on the New York Stock Exchange. By 2005, when physicists Bence Toth and Janos Kertesz looked at the effect again, they found it had completely vanished. Conclusion: not only is the EMH safe from the criticism, it's a shining example of a theory which makes accurate predictions -- and as every economist knows, this is the sole measure by which to judge the value of a theory, as Milton Friedman argued in his famous 1953 essay The Methodology of Positive Economics.
There are many dubious elements to such a claim, of course. For example, the EMH doesn't make any specific predictions about time scales, so this prediction is a rather weak one. As I noted in my earlier post on the EMH, some similar anomalies have persisted for 20 years and haven't gone away.
But I want to step back and consider something more seriously wrong with the idea that the progressive disappearance of a pricing anomaly is evidence for the "increasing efficiency of the market." This I think is not correct. To see why, it will help to develop an analogy, and the recent hurricane Irene, which swept northward along the coast of the US, provides a good one.
Suppose that some atmospheric theorists developed the Efficient Atmospheres Hypothesis (EAH) which asserts that planetary atmospheres in general -- and the Earth's, in particular -- are always in a state very close to equilibrium with the air resting in a calm state of repose. Given the total amount of air, its density and the force of gravity, the theorists have even been able to predict the air pressure at sea level and how it should fall off with altitude. The EAH works pretty well in explaining some of the most basic aspects of the atmosphere. But the theory also makes a more controversial claim -- that, in fact, the air pressure at any two places in the atmosphere (at the same altitude) should always be identical, or at least very nearly the same. After all, the EAH theorists argue, if there were pressure differences, they would create winds carrying air and energy from the higher pressure zone toward the lower pressure zone. That flow of air would lower the pressure in the former place and raise it in the latter, eventually bringing those two pressures back into balance.
In other words, any momentary imbalance in air pressure should create forces which quickly act to wipe out that difference. Hence, the pressure everywhere (at the same altitude) should be identical, or almost identical.
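Incidentally, the equilibrium half of the analogy is perfectly good physics: balancing gravity against the pressure gradient in an isothermal atmosphere gives an exponential fall-off of pressure with altitude. A quick sketch, using standard textbook constants:

    from math import exp

    # Pressure in an isothermal atmosphere in hydrostatic equilibrium:
    # p(h) = p0 * exp(-M * g * h / (R * T)). Standard textbook constants.

    p0 = 101325.0     # sea-level pressure in pascals
    M = 0.0289644     # molar mass of dry air, kg/mol
    g = 9.81          # gravitational acceleration, m/s^2
    R = 8.314         # gas constant, J/(mol K)
    T = 288.0         # temperature, K (about 15 C)

    def pressure(h):
        return p0 * exp(-M * g * h / (R * T))

    for h in (0, 1000, 5000, 10000):
        print(h, "m:", round(pressure(h) / 1000.0, 1), "kPa")

The trouble with the EAH isn't this equilibrium calculation, which works well enough on average; it's the further claim that the atmosphere never strays far from it.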
Now, critics of the EAH might say this is crazy. After all, we observe winds all the time, and sometimes great storms. Just last week high winds from hurricane Irene caused massive flooding up and down the eastern US. Isn't this obvious disproof of the EAH? On the contrary, the EAH theorists might respond, these observations actually provide further evidence for the theory. OK, they concede, the atmosphere does sometimes depart a little from perfect equilibrium, but the hurricane is clear evidence of how normal physical forces drive the system back into equilibrium. After all, if you look at the data on the hurricane after it passed by the US, you'll see that the immense pressure differences within it were slowly eroded away as the hurricane dissipated energy through its winds. Eventually Irene petered out altogether as the atmosphere was restored to equilibrium. This is simply another victory for the EAH; indeed, it predicted just this ultimate fate for such a disturbance.
In the setting of atmospheric physics, of course, no one would take the EAH seriously. It's obviously crazy to claim that the gradual disappearance of Irene was evidence for the "increasing efficiency" of the atmosphere, or (equivalently) for its return to the equilibrium state. Irene was one very much out-of-equilibrium disturbance, but its disappearance says nothing about whether the atmosphere as a whole has become closer to equilibrium. As Irene disappeared, hundreds of other storms, and perhaps other proto-hurricanes, were being stirred up elsewhere on the planet. Storms are always fading away and others are always growing in the great chaotic maelstrom of the atmosphere, and all of this reflects its condition as a system driven out of equilibrium by energy from the Sun. To show that the atmosphere was actually moving closer to equilibrium over time would require some global study of storms and winds and air pressure differences to see if they were in some general sense progressively getting smaller. What happens to one storm is actually quite irrelevant.
Returning to the case of financial markets, the same must be true of a single anomaly such as the one identified by Lo and MacKinlay. Over twenty years, this one has slowly disappeared, as good empirical work has shown. But this doesn't really tell us anything about whether the market as a whole is getting more or less efficient, closer to or further away from equilibrium. That's an entirely different question.
I've belabored this analogy because I think it helps to illustrate an important point. Arguments over the EMH often center on highly technical analyses of statistical correlations and market predictability or the lack of it. Yet such analyses can be correct in their technical details without actually supporting the larger claims made by the people doing them.
I think the analogy also helps to bring into focus the shady deviousness of many arguments about market efficiency based on the EMH. In a talk from several years ago, soon after the crisis, Jeremy Siegel of the University of Pennsylvania addressed the question of whether the crisis had shown the EMH to be false. The EMH asserts, of course, that markets work very efficiently to exploit the wisdom of crowds to funnel savings into investments giving the best long run returns (rather than, say, fueling a financial bubble based around mortgages bundled opaquely into crappy CDOs). But no, he asserted, the financial crisis hadn't shown anything to be wrong with the EMH, indeed it offered further evidence for the EMH because (I'm paraphrasing from memory) "everything that happened did so for sound economic reasons." In other words, when people finally understood the true nature of those CDOs, their values plummeted, just as the EMH would predict.
This is like an Efficient Atmospheres Hypothesis theorist saying, first, that "a hurricane could never come into existence, because the atmosphere is efficient and in equilibrium, and the forces of physics act to keep it there," and then, in the aftermath of a hurricane, saying "see, I was right, the atmosphere worked to restore equilibrium just as I said."
Monday, September 5, 2011
Quantum thinking
I've just had a feature article published in New Scientist magazine covering research showing some rather peculiar connections between the mathematics of quantum theory and patterns of human decision making. I don't want to say too much more here, but I would like to clarify one very important point and give some links.
I was inspired to write this article a couple of years ago at a brainstorming session held by the European Commission. Participants were supposed to be bold and propose radical visions about where the most promising avenues for research lay in the near future (this was in the context of information and computing technology). One Belgian researcher gave a fascinating talk on the application of quantum mathematics to human decision making, claiming that quantum logic fits actual human behaviour more closely than does classical logic. There are many famous "anomalies" -- such as the Ellsberg Paradox -- where people systematically violate the laws of classical logic and probability when making decisions of economic importance. The Belgian researcher explained that the quantum formalism is able to accommodate such behaviour, and was therefore surprisingly useful in understanding how people organize and use concepts.
What struck me then was the derision with which several other scientists (physicists) greeted this suggestion, while completely misunderstanding what the man had said. One physicist came close to screaming that this was "embarrassing mumbo jumbo" somehow linked to the idea that quantum physics underlies brain function (the idea proposed over a decade ago by Roger Penrose in his profound book Shadows of the Mind). He had dismissed the idea so quickly that he hadn't listened. The Belgian physicist had actually pointed out that he wasn't at all suggesting that quantum physics plays a role in the brain, only that the mathematics of quantum physics is useful in describing human behaviour.
This is a very important point -- the mathematics of quantum theory (the mathematics of Hilbert spaces) isn't identical with the theory or somehow owned by it; it stands quite independent of that theory and was largely developed before quantum theory was invented. The Belgian was saying that this mathematics, which turned out to be so useful for quantum physics, is now turning out to be profoundly useful in quite another setting.
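To give a flavour of what using this mathematics looks like in practice, and with no claim whatsoever about quantum physics in the brain: in these models a question is represented by a projection in a Hilbert space, and because projections need not commute, the probability of answering "yes" to two questions can depend on the order in which they are asked -- a pattern classical probability struggles with but which shows up in survey data. The toy two-dimensional Python sketch below, with angles chosen arbitrarily, shows the effect.

    import numpy as np

    # Two "questions" modelled as projections in a two-dimensional Hilbert space.
    # Answering a question "yes" projects (and renormalizes) the state, so when the
    # projections do not commute the probability of "yes, then yes" depends on the
    # order of the questions. The angles below are arbitrary illustrative choices.

    def projector(theta):
        v = np.array([np.cos(theta), np.sin(theta)])
        return np.outer(v, v)

    def prob_yes_then_yes(state, first, second):
        collapsed = first @ state
        p_first = float(collapsed @ collapsed)       # probability of the first "yes"
        if p_first == 0.0:
            return 0.0
        collapsed = collapsed / np.sqrt(p_first)     # renormalize the collapsed state
        after_second = second @ collapsed
        return p_first * float(after_second @ after_second)

    state = np.array([1.0, 0.0])       # initial "belief state"
    A = projector(np.pi / 5)           # question A
    B = projector(2 * np.pi / 5)       # question B

    print("P(A yes, then B yes):", round(prob_yes_then_yes(state, A, B), 3))
    print("P(B yes, then A yes):", round(prob_yes_then_yes(state, B, A), 3))

The two printed probabilities differ, which is the kind of order effect the researchers in this field use to fit data on human judgments.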
The New Scientist article is just a very brief introduction to some of the work. A few other things I found utterly fascinating while researching the article are:
1. This research paper called A Quantum Logic of Down Below, which falls somewhere between philosophy, psychology and computer science. The second author, Dominic Widdows, is a computer scientist at Google working on information retrieval. The paper essentially argues that philosophers historically devised classical logic and then took it as a model for what human logic must be, or at least should be. They suggest this was the wrong way around. Pure logic isn't our best example of reasoning. The best examples of reasoning systems we have are people, and so a logic of what reasoning is and can be ought to start with people rather than mathematics. This is a powerful idea. As the authors put it:
... what reasoning is (or should be) can only be read off from what reasoners are (and can be). Such a view one finds, for example in [Gabbay and Woods, 2001] and [Gabbay and Woods, 2003b], among logicians, and, also in the social scientific literature [Simon, 1957, Stanovich, 1999, Gigerenzer and Selten, 2001b]. Here the leading idea of the “new logic” is twofold. First, that logic’s original mission as a theory of human reasoning should be re-affirmed. Second, that a theory of human reasoning must take empirical account of what human reasoners are like – what they are interested in and what they are capable of.
They then go on to argue that whatever the accurate logic of human reasoning is, it is more similar to quantum logic than to classical.
2. A second fascinating paper is more technical and describes some applications of this in computer science and information retrieval. Here the idea is that if people create concepts and texts and organize them using a quantum-style logic, then search methods based on classical logic aren't likely to search such conceptual spaces very effectively. This paper describes applications in which literature search can be improved by using quantum logic operations. Most interesting (and I did mention this in the New Scientist piece) is the use of quantum operations to generate what might be closely akin to "hunches" or "guesses" about where in a mass of textual data interesting ideas might be found -- guesses not based on logical deduction, but on something less tightly constrained and ultimately more powerful.
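One concrete example of such an operation is negation treated as orthogonality: to search for one sense of a word while excluding another, project the query vector onto the subspace orthogonal to the unwanted sense. The Python sketch below uses invented three-dimensional "embeddings" purely to illustrate the idea; it is not drawn from the paper's own experiments.

    import numpy as np

    # "Negation as orthogonality": to search for one sense of a term while excluding
    # another, project the query vector onto the subspace orthogonal to the unwanted
    # sense. The three-dimensional "embeddings" below are invented for illustration.

    def normalize(v):
        return v / np.linalg.norm(v)

    def negate(query, unwanted):
        u = normalize(unwanted)
        return query - np.dot(query, u) * u   # remove the component along the unwanted sense

    # invented toy vectors; dimensions loosely stand for (geology, music, law)
    rock_query = np.array([0.7, 0.7, 0.0])
    band_sense = np.array([0.0, 1.0, 0.0])

    documents = {
        "granite_formations": np.array([0.9, 0.1, 0.0]),
        "indie_band_review":  np.array([0.1, 0.95, 0.0]),
    }

    refined = normalize(negate(rock_query, band_sense))
    for name, vec in documents.items():
        print(name, round(float(np.dot(refined, normalize(vec))), 3))

Running this, the geology document scores near 1 and the music review drops toward 0 -- a crude illustration of how a projection-based "NOT" can prune a whole sense rather than a single keyword.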