Wednesday, February 20, 2013

A blogger named Irvine Renter (aka Larry Roberts) knows an awful lot about the housing market and writes about it here. He also produces (and links to) some great cartoons, like this one...
and this one...
and this one...
and this one...
Monday, February 18, 2013
A real model of Minsky
Noah Smith has a wonderfully informative post on the business cycle in economics. He's looking at the question of whether standard macroeconomic theories view the episodic ups and downs of the economy as the consequence of a real cycle, something arising from positive feedbacks that drive persisting oscillations all on their own, or whether they instead view these fluctuations as the consequence of external shocks to the system. As he notes, the tendency in macroeconomics has very much been the latter:
It's natural to think of business cycles this way. We see a recession come on the heels of a boom - like the 2008 crash after the 2006-7 boom, or the 2001 crash after the late-90s boom - and we can easily conclude that booms cause busts.

So you might be surprised to learn that very, very few macroeconomists think this! And very, very few macroeconomic models actually have this property.

In modern macro models, business "cycles" are nothing like waves. A boom does not make a bust more likely, nor vice versa. Modern macro models assume that what looks like a "cycle" is actually something called a "trend-stationary stochastic process" (like an AR(1)). This is a system where random disturbances ("shocks") are temporary, because they decay over time. After a shock, the system reverts to the mean (i.e., to the "trend"). This is very different from harmonic motion - a boom need not be followed by a bust - but it can end up looking like waves when you graph it...

When things like this [cycles] happen in nature - like the Earth going around the Sun, or a ball bouncing on a spring, or water undulating up and down - it comes from some sort of restorative force. With a restorative force, being up high is what makes you more likely to come back down, and being low is what makes you more likely to go back up. Just imagine a ball on a spring; when the spring is really stretched out, all the force is pulling the ball in the direction opposite to the stretch. This causes cycles.

I think this is interesting and deserves some further discussion. Take an ordinary pendulum. Give such a system a kick and it will swing for a time, but eventually the motion will damp away. For a while, high now does portend low in the near future, and vice versa. But this pendulum won't start swinging this way on its own, nor will it persist in swinging over long periods of time unless repeatedly kicked by some external force.
This is in fact a system of just the kind Noah is describing. Such a pendulum (taken in the linear regime) is akin to the AR(1) autoregressive process entering into macroeconomic models, and it acts essentially as a filter on the source of shocks. The response of the system to a stream of random shocks can have a harmonic component, which can make the output look roughly like cycles, as Noah mentioned. For an analogy, think of a big brass bell. This is a pendulum in the abstract, as it has internal vibratory modes that, once excited, damp away over time. Hang this bell in a storm and, as it receives a barrage of shocks, you'll hear a ringing that tells you more about the bell than it does about the storm.
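To make that concrete, here is a minimal sketch in Python of a trend-stationary AR(1) process driven by random shocks. The persistence and shock-size parameters are my own illustrative choices, not taken from any particular macro model; the point is only that a purely shock-driven, mean-reverting system with no true cycle can still look wavy when plotted.

```python
import numpy as np

# Minimal sketch: an AR(1) "trend-stationary" process driven by random shocks.
# x[t] = phi * x[t-1] + shock[t], with 0 < phi < 1 so shocks decay over time.
# The parameter values are illustrative, not calibrated to any real model.

rng = np.random.default_rng(seed=1)

phi = 0.9          # persistence: how slowly a shock dies away
sigma = 1.0        # size of the random shocks
T = 400            # number of periods to simulate

x = np.zeros(T)
shocks = rng.normal(0.0, sigma, size=T)
for t in range(1, T):
    x[t] = phi * x[t - 1] + shocks[t]

# A boom does not make a bust more likely here: deviations simply decay back
# toward the trend. Yet a plot of x often shows irregular "waves", because the
# AR(1) filter smooths the noise and lets deviations persist for a while.
# Inspect x[:20] or plot it with matplotlib to see the effect.
```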
Still, to get really interesting cycles you need to go beyond the ordinary pendulum. You need a system capable of creating oscillatory behavior all on its own. In dynamical systems theory, this means a system with a limit cycle in its dynamics: one that, in the absence of persisting perturbation, settles down to cyclic behavior rather than to a fixed point. The existence of such a limit cycle generally implies that the system has an unstable fixed point -- a state that seems superficially like an equilibrium, but which in fact always dissolves away into cyclic behavior over time. Mathematically, this is the kind of situation one ought to think about when considering the possibility that natural instabilities drive oscillations in economics. Perhaps the equilibrium of the market is simply unstable, and the highs and lows of the business cycle reflect some natural limit cycle?
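For contrast with the shock-driven AR(1) above, here is an equally minimal sketch of a system that generates its own cycle: the textbook Hopf normal form in polar coordinates, which has an unstable fixed point surrounded by a stable limit cycle. The parameter values and the crude Euler integration are illustrative choices of mine, not anything taken from the economics literature.

```python
import numpy as np

# Minimal sketch of a limit cycle: the Hopf normal form in polar coordinates,
#   dr/dt = mu * r - r**3,   dtheta/dt = omega.
# For mu > 0 the fixed point r = 0 is unstable, and every other trajectory
# settles onto the cycle at r = sqrt(mu): oscillations persist with no shocks.

mu, omega = 1.0, 2.0 * np.pi   # illustrative parameters
dt, T = 0.001, 20.0            # time step and total time (simple Euler integration)

r, theta = 0.01, 0.0           # start very close to the "equilibrium" at r = 0
for _ in range(int(T / dt)):
    r += dt * (mu * r - r**3)
    theta += dt * omega        # phase just advances at a constant rate

print(f"final radius ~ {r:.3f} (limit cycle at r = {np.sqrt(mu):.3f}), "
      f"phase = {theta % (2.0 * np.pi):.2f}")
# Unlike the AR(1) case, the oscillation here is self-sustaining: the system
# creates and maintains its own cycle without being kicked by noise.
```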
Noah mentions the work of Steve Keen, who has developed models along such lines. As far as I understand, these are generally low-dimensional models with limit cycle behavior and I expect they may be very instructive. But Noah also makes a good point that the data on the business cycle really doesn't show a clear harmonic signal at any one specific frequency. The real world is messier. An alternative to low dimensional models written in terms of aggregate economic variables is to build agent based models (of much higher dimension) to explore how natural human behavior such as trend following might lead to instabilities at least qualitatively like those we see.
For some recent work along these lines, take a look at this paper by Blake LeBaron which attempts to flesh out Hyman Minsky's well known story of inherent market instability in an agent based model. Here's the basic idea, as LeBaron describes it:
Minsky conjectures that financial markets begin to build up bubbles as investors become increasingly overconfident about markets. They begin to take more aggressive positions, and can often start to increase their leverage as financial prices rise. Prices eventually reach levels which cannot be sustained either by correct, or any reasonable forecast of future income streams on assets. Markets reach a point of instability, and the over extended investors must now begin to sell, and are forced to quickly deleverage in a fire sale like situation. As prices fall market volatility increases, and investors further reduce risky positions. The story that Minsky tells seems compelling, but we have no agreed on approach for how to model this, or whether all the pieces of the story will actually fit together. The model presented in this paper tries to bridge this gap.

The model is in crude terms like many I've described earlier on this blog. The agents are adaptive and try to learn the most profitable ways to behave. They are also heterogeneous in their behavior -- some rely more on perceived fundamentals to make their investment decisions, while others follow trends. The agents respond to what has recently happened in the market, and then the market reality emerges out of their collective behavior. That reality, in some of the runs LeBaron explores, shows natural, irregular cycles of bubbles and subsequent crashes of the sort Minsky envisioned. The figure below, for example, shows data for the stock price, weekly returns and trading volume as they fluctuate over a 10 year period of the model:
Now, it is not surprising at all that one can make a computational model to generate dynamics of this kind. But if you read the paper, LeBaron has tried hard to choose the various parameters to fit realistically with what is known about human learning dynamics and the behavior of different kinds of market players. The model also does a good job of reproducing many of the key statistical features of financial time series, including long-range deviations from fundamentals, volatility persistence, and fat-tailed return distributions. So it generates Minsky-like fluctuations in what is arguably a plausible setting (although I'm sure experts will quibble with some details).
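LeBaron's model is far richer than anything worth reproducing here, but for readers who want a feel for the basic ingredients, here is a deliberately crude caricature of a market with fundamentalist and trend-following agents, where the price moves with their combined excess demand. All of the names and parameter values below are my own illustrative choices; this is not LeBaron's model.

```python
import numpy as np

# A crude caricature of a heterogeneous-agent market (NOT LeBaron's model):
# fundamentalists push the price toward a fixed fundamental value, trend
# followers extrapolate recent price changes, and noise stands in for
# everything else. All parameters are illustrative choices of mine.

rng = np.random.default_rng(seed=2)

fundamental = 100.0
n_steps = 2000
kappa_f = 0.05      # strength of fundamentalist demand
kappa_c = 0.90      # strength of trend-following (chartist) demand
noise = 0.5         # random component of excess demand

prices = [fundamental, fundamental]
for t in range(2, n_steps):
    p, p_prev = prices[-1], prices[-2]
    demand_fundamental = kappa_f * (fundamental - p)    # buy when cheap, sell when dear
    demand_chartist = kappa_c * (p - p_prev)            # buy what has been rising
    excess = demand_fundamental + demand_chartist + rng.normal(0.0, noise)
    prices.append(p + excess)                           # price moves with excess demand

prices = np.array(prices)
print(f"max deviation from fundamental: {np.abs(prices - fundamental).max():.1f}")
# With kappa_c close to 1, the trend followers amplify and prolong deviations
# from the fundamental, producing irregular swings around value rather than
# quick mean reversion.
```

In this linear toy the noise still does the kicking, and strong trend following merely stretches out the swings; LeBaron's agents additionally adapt their strategies and risk perceptions over time, which is what generates the slow build-ups and sharp crashes he reports.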
To my mind, one particularly interesting point to emerge from this model is the limited ability of fundamentalist investors to control the unstable behavior of speculators. One nice feature of agent based models is that it's possible to look inside and examine all manner of details. For example, during these bubble phases, which kind of investor controls most of the wealth? As LeBaron notes,
The large amount of wealth in the adaptive strategy relative to the fundamental is important. The fundamental traders will be a stabilizing force in a falling market. If there is not enough wealth in that strategy, then it will be unable to hold back sharp market declines. This is similar to a limits to arbitrage argument. In this market without borrowing the fundamental strategy will not have sufficient wealth to hold back a wave of self-reinforcing selling coming from the adaptive strategies.

Another important point, which LeBaron mentions in the paragraph above, is that there's no leverage in this model. People can't borrow to amplify investments they feel especially confident of. Leverage of course plays a central role in the instability mechanism described by Minsky, but it doesn't seem to be absolutely necessary to get this kind of instability. It can come solely from the interaction of different agents following distinct strategies.
I certainly don't mean to imply that these kinds of agent based models are superior to the low-dimensional modelling of Steve Keen and others. I think these are both useful approaches, and they ought to be complementary. Here's LeBaron's summing up at the end of the paper:
The dynamics are dominated by somewhat irregular swings around fundamentals, that show up as long persistent changes in the price/dividend ratio. Prices tend to rise slowly, and then crash fast and dramatically with high volatility and high trading volume. During the slow steady price rise, agents using similar volatility forecast models begin to lower their assessment of market risk. This drives them to be more aggressive in the market, and sets up a crash. All of this is reminiscent of the Minsky market instability dynamic, and other more modern approaches to financial instability.
Instability in this market is driven by agents steadily moving to more extreme portfolio positions. Much, but not all, of this movement is driven by risk assessments made by the traders. Many of them continue to use models with relatively short horizons for judging market volatility. These beliefs appear to be evolutionarily stable in the market. When short term volatility falls they extend their positions into the risky asset, and this eventually destabilizes the market. Portfolio composition varying from all cash to all equity yields very different dynamics in terms of forced sales in a falling market. As one moves more into cash, a market fall generates natural rebalancing and stabilizing purchases of the risky asset in a falling market. This disappears as agents move more of their wealth into the risky asset. It would reverse if they began to leverage this position with borrowed money. Here, a market fall will generate the typical destabilizing fire sale behavior shown in many models, and part of the classic Minsky story. Leverage can be added to this market in the future, but for now it is important that leverage per se is not necessary for market instability, and it is part of a continuum of destabilizing dynamics.
Tuesday, February 12, 2013
Edmund Phelps trashes rational expectations
I'm not generally one to enjoy reading interviews with macroeconomists, but this one is an exception. Published yesterday in Bloomberg, it features an interview by Caroline Baum of Edmund Phelps, Nobel Prize winner for his work on the relationship between inflation and unemployment. The focus of the interview is on Phelps's views of the rational expectations revolution. He is not a big fan:
Q (Baum): So how did adaptive expectations morph into rational expectations?
A (Phelps): The "scientists" from Chicago and MIT came along to say, we have a well-established theory of how prices and wages work. Before, we used a rule of thumb to explain or predict expectations: Such a rule is picked out of the air. They said, let's be scientific. In their mind, the scientific way is to suppose price and wage setters form their expectations with every bit as much understanding of markets as the expert economist seeking to model, or predict, their behavior. The rational expectations approach is to suppose that the people in the market form their expectations in the very same way that the economist studying their behavior forms her expectations: on the basis of her theoretical model.
Q: And what's the consequence of this putsch?
A: Craziness for one thing. You’re not supposed to ask what to do if one economist has one model of the market and another economist a different model. The people in the market cannot follow both economists at the same time. One, if not both, of the economists must be wrong. Another thing: It’s an important feature of capitalist economies that they permit speculation by people who have idiosyncratic views and an important feature of a modern capitalist economy that innovators conceive their new products and methods with little knowledge of whether the new things will be adopted -- thus innovations. Speculators and innovators have to roll their own expectations. They can’t ring up the local professor to learn how. The professors should be ringing up the speculators and aspiring innovators. In short, expectations are causal variables in the sense that they are the drivers. They are not effects to be explained in terms of some trumped-up causes.
Q: So rather than live with variability, write a formula in stone!
A: What led to rational expectations was a fear of the uncertainty and, worse, the lack of understanding of how modern economies work. The rational expectationists wanted to bottle all that up and replace it with deterministic models of prices, wages, even share prices, so that the math looked like the math in rocket science. The rocket’s course can be modeled while a living modern economy’s course cannot be modeled to such an extreme. It yields up a formula for expectations that looks scientific because it has all our incomplete and not altogether correct understanding of how economies work inside of it, but it cannot have the incorrect and incomplete understanding of economies that the speculators and would-be innovators have.
I think this is exactly the issue: "fear of uncertainty". No science can be effective if it aims to banish uncertainty by theoretical fiat. And this is what really makes rational expectations economics stand out as crazy when compared to other areas of science and engineering. It's a short interview, well worth a quick read.

Highly ironic also that, nearly half a century after Lucas and others began pushing this stuff, the trend is now back toward "adaptive expectations." Is rational expectations anything other than an expensive 50-year diversion into useless nonsense?
Sunday, February 10, 2013
Let there be light
*** UPDATE BELOW***
My Bloomberg column this month looks at an idea for improving the function of the interbank lending market. The idea is radical, yet also conceptually very simple. It's radical because it proposes a complete transformation of banking transparency. It shows how transparency may be the best route to achieving overall banking stability and efficiency. It will be interesting to see what free-market ideologues think of the idea, as it doesn't fit into any standard ideological narrative such as "get government out of the way" or "unregulated markets work best." It offers a means to improve market function -- something one would expect free-market cheerleaders to favor -- but does so in a way that threatens banking secrecy and also involves some central coordination.
My column was necessarily vague on detail given its length, so let me give some more discussion here. The paper presenting the ideas is this one by Stefan Thurner and Sebastian Poledna. It is first important to recognize that they consider here not all markets, but specifically the interbank market, in which banks loan funds to one another to manage demands for liquidity. This is the market that famously froze up following the collapse of Lehman Brothers, as banks suddenly realized they had no understanding at all of the risks facing potential counterparties. The solution proposed by Thurner and Poledna strikes directly at such uncertainty, by offering a mechanism to calculate those risks and make them available to everyone.
Here's the basic logic of their argument. They start with the point that standard theories of finance operate on the basis of wholly unrealistic assumptions about the ability of financial institutions to assess their risks rationally. Even if the individuals at these institutions were rational, the overwhelming complexity of today's market makes it impossible to judge systemic risks because of a lack of information:
Since the beginning of banking the possibility of a lender to assess the riskiness of a potential borrower has been essential. In a rational world, the result of this assessment determines the terms of a lender-borrower relationship (risk-premium), including the possibility that no deal would be established in case the borrower appears to be too risky. When a potential borrower is a node in a lending-borrowing network, the node’s riskiness (or creditworthiness) not only depends on its financial conditions, but also on those who have lending-borrowing relations with that node. The riskiness of these neighboring nodes depends on the conditions of their neighbors, and so on. In this way the concept of risk loses its local character between a borrower and a lender, and becomes systemic.

In this connection, recall Alan Greenspan's famous admission that he had trusted in the ability of rational bankers to keep markets working by controlling their counterparty risk. As he exclaimed in 2008,
The assessment of the riskiness of a node turns into an assessment of the entire financial network [1]. Such an exercise can only be carried out with information on the asset-liability network. This information is, up to now, not available to individual nodes in that network. In this sense, financial networks – the interbank market in particular – are opaque. This intransparency makes it impossible for individual banks to make rational decisions on lending terms in a financial network, which leads to a fundamental principle: Opacity in financial networks rules out the possibility of rational risk assessment, and consequently, transparency, i.e. access to system-wide information is a necessary condition for any systemic risk management.
"Those of us who have looked to the self-interest of lending institutions to protect shareholder's equity -- myself especially -- are in a state of shocked disbelief."The trouble, at least partially, is that no matter how self-interested those lending institutions were, they couldn't possibly have made the staggeringly complex calculations required to assess those risks accurately. The system is too complex. They lacked necessary information. Hence, as Thurner and Poledna point out, we might help things by making this information more transparent.
The question then becomes: is it possible to do this? Well, here's one idea. As the authors point out, much of the information that would be required to compute the systemic risks associated with any one bank is already reported to central banks, at least in developed nations. No private party has this information. No single investment bank has this information. Perhaps even no single central bank has this information. But central banks together do have it, and they could use it to perform a calculation of considerable value (again, in the context of the interbank market):
In most developed countries interbank loans are recorded in the ‘central credit register’ of Central Banks, that reflects the asset-liability network of a country [5]. The capital structure of banks is available through standard reporting to Central Banks. Payment systems record financial flows with a time resolution of one second, see e.g. [6]. Several studies have been carried out on historical data of asset-liability networks [7–12], including overnight markets [13], and financial flows [14].

I wrote a little about this DebtRank idea here. It's a computational algorithm applied to a financial network which offers a means to assess systemic risks in a coherent, self-consistent way; it brings network effects into view. The technical details aren't so important, but the original paper proposing the notion is here. The important thing is that the DebtRank algorithm, along with the data provided to central banks, makes it possible in principle to calculate a good estimate of the overall systemic risk presented by any bank in the network.
Given this data, it is possible (for Central Banks) to compute network metrics of the asset-liability matrix in real-time, which in combination with the capital structure of banks, allows to define a systemic risk-rating of banks. A systemically risky bank in the following is a bank that – should it default – will have a substantial impact (losses due to failed credits) on other nodes in the network. The idea of network metrics is to systematically capture the fact, that by borrowing from a systemically risky bank, the borrower also becomes systemically more risky since its default might tip the lender into default. These metrics are inspired by PageRank, where a webpage, that is linked to a famous page, gets a share of the ‘fame’. A metric similar to PageRank, the so-called DebtRank, has been recently used to capture systemic risk levels in financial networks [15].
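To give a feel for what such a calculation involves, here is a stripped-down, illustrative sketch of a DebtRank-style computation on a tiny invented interbank network. It follows the spirit of the algorithm -- distress spreads along exposures in proportion to the lender's capital at risk, and each bank passes on its distress only once -- but it simplifies the published method, and every number in it is made up for illustration.

```python
import numpy as np

# Stripped-down, illustrative DebtRank-style calculation. The three-bank
# network, capital figures and weighting below are invented; the published
# algorithm (Battiston et al.) differs in details.
#
# L[i, j] = amount bank j has lent to bank i, i.e. j's exposure to i.
# E[j]    = equity capital of bank j.
# W[i, j] = min(1, L[i, j] / E[j]): fraction of j's capital wiped out if i fails.

L = np.array([[ 0.0, 30.0, 10.0],
              [20.0,  0.0, 40.0],
              [ 5.0, 15.0,  0.0]])
E = np.array([50.0, 60.0, 40.0])
v = E / E.sum()                              # economic weight of each bank (one simple choice)
W = np.minimum(1.0, L / E[np.newaxis, :])    # impact of row-bank's distress on column-bank

def debtrank(start, W, v):
    """Systemic impact of full initial distress at bank `start` (simplified sketch)."""
    n = len(v)
    h = np.zeros(n)                  # distress level of each bank, between 0 and 1
    h[start] = 1.0
    active = {start}                 # distressed banks that have not yet propagated
    propagated = set()               # each bank passes on its distress at most once
    while active:
        h_prev = h.copy()            # propagate synchronously from last round's levels
        for i in active:
            for j in range(n):
                if W[i, j] > 0 and j != i:
                    h[j] = min(1.0, h[j] + h_prev[i] * W[i, j])
        propagated |= active
        active = {j for j in range(n) if h[j] > 0 and j not in propagated}
    return float(np.dot(v, h) - v[start])    # distress induced in the rest of the system

for bank in range(len(E)):
    print(f"bank {bank}: DebtRank-style score = {debtrank(bank, W, v):.3f}")
```

The structural point, which survives all of these simplifications, is that a bank's score depends not just on its own balance sheet but on the whole web of exposures around it -- which is exactly why no single private institution can compute it.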
So imagine this: central banks around the world get together tomorrow, and within a month or so manage to coordinate their information flows (ok, maybe that's optimistic). They set up some computers to run the calculations, and a server to host the results, which would be updated every day, perhaps even hourly. Soon you or I or any banker in the world could go to some web site and in a few seconds read out the DebtRank score for any bank in the developed world, and also see the listing of banks ranked by the systemic risks they present. Wouldn't that be wonderful? Even if central banks took no further steps, this alone would be a valuable project, using global information to produce a globally valuable information resource for everyone. (Notice also that publishing DebtRank scores isn't the same as publishing all the data that banks supply to their central banks. That level of detail could be kept secret, with only the single measure of overall systemic risk being published.)
I would hope that almost everyone would be in favor of such a project or something like it. Of course, one group would be dead set against it -- those banks who have the highest DebtRank scores because they are systemically the most risky. But their private concerns shouldn't trump the public interest in reducing such risks. Measuring and making such numbers public is the first step in making any such reduction.
But Thurner and Poledna do go further in making a specific proposal for using this information in a sensible way to reduce systemic risks. Here's how that works. Banks in the interbank market borrow funds for varying amounts of time from other banks. Typically, there's a standard interbank interest rate prevailing for all banks (as far as I understand). Hence, a bank looking to borrow doesn't really have much preference as to which bank it finds as a lender; the interest paid will be the same. But if the choice of lender doesn't matter to the borrowing bank, it does matter a lot to the rest of us, as borrowing from a systemically risky bank threatens the financial system. If the borrower can't pay back, that risky bank could be put into distress and cause trouble for the system at large. So, we really should have banks in the interbank market looking to borrow first from the least systemically risky banks, i.e. from those with low values of DebtRank.
This is what Thurner and Poledna propose. Let central banks regulate that borrowers in the interbank market do just that -- seek out the least risky banks first as lenders. In this way, banks acting to take on lots of systemic risk would thereby be marked as too dangerous to make further loans. Further borrowing would instead be undertaken by less risky banks, thereby improving the spread of risks across the system. Don't trust in the miracle of the free market to make this happen -- it won't -- but step in and provide a mechanism for it to happen. As the authors describe it:
The idea is to reduce systemic risk in the IB network by not allowing borrowers to borrow from risky nodes. In this way systemically risky nodes are punished, and an incentive for nodes is established to be low in systemic riskiness. Note, that lending to a systemically dangerous node does not increase the systemic riskiness of the lender. We implement this scheme by making the DebtRank of all banks visible to those banks that want to borrow. The borrower sees the DebtRank of all its potential lenders, and is required (that is the regulation part) to ask the lenders for IB loans in the order of their inverse DebtRank. In other words, it has to ask the least risky bank first, then the second risky one, etc. In this way the most risky banks are refrained from (profitable) lending opportunities, until they reduce their liabilities over time, which makes them less risky. Only then will they find lending possibilities again. This mechanism has the effect of distributing risk homogeneously through the network.

The overall effect in the interbank market would be -- in an idealized model, at least -- to make systemic banking collapses much less likely. Thurner and Poledna ran a number of agent-based simulations to test out the dynamics of such a market, with encouraging results. The model involves banks, firms and households and their interactions; details in the paper for those interested. Bottom line, as illustrated in the figure below, is that cascading defaults through the banking system become much less likely. Here the red shows the statistical likelihood over many runs of banking cascades of varying size (number of banks involved) when borrowing banks choose their counterparties at random; this is the "business as usual" situation, akin to the market today. In contrast, the green and blue show the same distribution if borrowers instead sought counterparties so as to avoid those with high values of DebtRank (green and blue for slightly different conditions). Clearly, system wide problems become much less likely.
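The regulation itself is almost trivial to state computationally: a borrower must approach potential lenders from least to most systemically risky. A tiny sketch, with invented scores in the spirit of the toy calculation above:

```python
# Sketch of the borrowing rule: a borrower asks potential lenders in order of
# increasing systemic risk, so the least risky banks get the business first.
# The scores below are invented for illustration.

def lender_order(candidate_lenders, debtrank_scores):
    """Return candidate lenders sorted from least to most systemically risky."""
    return sorted(candidate_lenders, key=lambda bank: debtrank_scores[bank])

scores = {0: 0.12, 1: 0.45, 2: 0.03}
print(lender_order([0, 1, 2], scores))   # -> [2, 0, 1]: ask bank 2 first
```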
Another way to put it is this: banks currently have no incentive whatsoever, when seeking to borrow, to avoid borrowing from banks that play systemically important roles in the financial system. They don't care who they borrow from. But we do care, and we could easily force them -- using information that is already collected -- to borrow in a more responsible, safer way. Can there be any argument against that?
What are the chances for such a sensible idea to be put into practice? I have no idea. But I certainly hope these ideas make it quickly onto the radar of people at the new Office for Financial Research.
**UPDATE**
A reader emailed to alert me to this blog he runs which champions the idea of "ultra transparency" as a means for ensuring greater stability in finance. At face value, it makes a lot of sense and I think that the work I wrote about today fits into this perspective very well. The idea is simply that governments can provide information resources to the markets which would support better decisions by everyone in reckoning risks and rewards. Of course, we need independent people and firms gathering information in a decentralized way. But that isn't enough. In today's hypercomplex markets, some of the risks are simply invisible to anyone lacking vast quantities of data and the means to analyze them. Only governments currently have the requisite access to such information.