On September 13th, 2012, the FOMC extended its projections for maintaining “exceptionally low levels for the federal funds rate...at least through mid-2015” while enacting a plan to purchase “additional agency mortgage-backed securities at a pace of $40 billion per month” indefinitely. Proponents of greater monetary stimulus rejoiced in unison, but none were as jubilant as the Market Monetarists. To them, a Sumnerian era had finally dawned.
Meanwhile, a small group of “rebels” that has opposed this movement for the past several years took solace in its coming unraveling. From their perspective, when the new monetarist theories are actually implemented through real-world policy, the results will mimic the failures of previous attempts at monetarism (I highly recommend reading The Scourge of Monetarism by Nicholas Kaldor). The current version, which relies on minimal empirical proof and a faulty understanding of modern monetary operations, will finally lose its luster.
Although I have been a relatively vocal member of this latter group, I have cautioned against assuming that the failure of the new Fed policy to materially impact NGDP and, especially, unemployment will detract from monetarist momentum. Instead, at the first sign of reality diverging from expectations, I expect Market Monetarists (and most other economists who support further monetary stimulus) to claim that the most recent FOMC policy accommodations were either poorly crafted or insufficient in size. Similar reactions frequently come from Keynesians regarding fiscal stimulus/spending, which can seemingly never be implemented correctly or in large enough doses to achieve the ideal outcome.
Only two weeks have passed since the FOMC policy statement was released, but inflation expectations and stocks have already given back nearly all of their gains:
(The green and blue lines depict 5- and 10-year inflation expectations, respectively, based on the difference between the corresponding nominal Treasury yields and TIPS yields. The red line shows the S&P 500, which has mostly fallen since reaching a new post-recession peak the morning after the FOMC announcement.)
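For reference, the inflation expectations in that chart are simply breakeven rates: the spread between a nominal Treasury yield and the TIPS yield of the same maturity. A minimal sketch of the calculation, ignoring the liquidity and inflation-risk premia embedded in both yields:

```latex
% Breakeven inflation at horizon n -- a rough approximation that ignores
% the liquidity and inflation-risk premia embedded in the two yields.
\[
\pi^{BE}_{n} \approx y^{\text{nominal}}_{n} - y^{\text{TIPS}}_{n}
\]
% Purely illustrative numbers: a 1.7% 10-year Treasury yield and a -0.7%
% 10-year TIPS yield would imply a breakeven of roughly 2.4% per year.
```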
To my surprise, the clamor for more Fed action has already begun. Commenting on a similar graph to the one depicted above, Marcus Nunes asks:
Mr. Bernanke, “when will you ever learn”?
If my expectations about the ineffectiveness of monetary stimulus and the monetarist response prove correct, then Nunes’ question is only the first in what will soon become an overwhelming chorus of pleas for more Fed action.
The events of the past couple of weeks have reminded me of Buzz Lightyear’s classic line from Toy Story: “To infinity...and beyond!” The Fed’s new policy offers potentially infinite asset purchases, but doubt is quickly growing over whether infinity is enough. Will we find out what lies beyond QEternity?
Note: Since time has been limited these days with studying and homework, I haven't had a chance to share or comment on several wonderful posts about the reasons QEternity will be ineffective. Here are some that stood out from the rest:
A Disturbing Look Inside the Mind of Ben Bernanke
How?
Quantitative easing isn't magic
Oh NGDP, is there anything you can't do?
Endogenous Versus Exogenous Money, One More Time
Inspiration or Insanity? Fed action and Market Reaction
Effects of QE3
The Fatal Conceit
Shamanistic Economics
Bernanke Goes All In...but will it work?
One area of concern during and after the most recent financial crisis has been the shadow banking system. For many people, what makes shadow banking important and troubling is that it rests outside the purview of many regulatory structures. However, there is a subtler distinction that has persisted throughout history and may offer guidance for future crises: a reliance on short-term funding through sales of commercial paper (or repos), rather than deposits, which increases susceptibility to bank runs. Although shadow banking may seem like a relatively new phenomenon, the practice actually goes back at least 250 years.
In Responding to a Shadow Banking Crisis: The Lessons of 1763, Stephen Quinn and William Roberds guide us through the events leading up to Amsterdam’s banking panic and analyze the central bank’s corresponding actions. There are numerous similarities between the two periods, ranging from “the Lehman-like failure of the banking house Gebroeders de Neufville (p. 3)” to the expansion of central bank liquidity “in an unprecedented and ad hoc basis. (p. 3)” Leverage also played an important role in both crises. The following chart compares the average weekly starting balances of the eight largest banks in 1763 with their average weekly turnover (i.e. the amount of borrowing necessary to fund their positions):
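To make the measure in that chart concrete, here is a minimal sketch of the calculation using made-up figures rather than Quinn and Roberds’ actual data: dividing a bank’s average weekly turnover (the borrowing needed to fund its positions) by its average weekly starting balance gives a rough gauge of how dependent the bank is on continuously rolling over short-term funding.

```python
# Illustrative only: the figures below are hypothetical, not the 1763 data
# reported by Quinn and Roberds. The point is the calculation itself:
# turnover / starting balance approximates reliance on short-term funding.

banks = {
    # name: (average weekly starting balance, average weekly turnover)
    "Bank A": (100_000, 150_000),
    "Bank B": (100_000, 600_000),
}

for name, (balance, turnover) in banks.items():
    reliance = turnover / balance
    print(f"{name}: rolls over roughly {reliance:.1f}x its balance each week")

# A bank turning over 6x its starting balance each week must refinance far
# more of its position than one turning over 1.5x, so a sudden loss of
# market liquidity hits it first and hardest.
```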
While credit was initially accepted from and provided to these institutions equally, the banks most highly leveraged and reliant on short-term funding found themselves in the most precarious positions following the first bank failure. It should be no surprise, then, that the fall of Bear Stearns, Merrill Lynch, Lehman Brothers, and potentially others proceeded in a similar order.
Seeking to stem the crisis, the Bank of Amsterdam (the central bank) initially allowed its balance sheet to respond endogenously to market demand through its coin window. As the stock of eligible collateral grew thin, the Bank of Amsterdam relaxed its eligibility constraints to include silver bullion. The following chart compares the rise in central bank balance sheets, by various measures, in each crisis:
In both 1763 and 2008, under very different regulatory constructs, the largest banks were highly leveraged and increasingly reliant on short-term funding. Although these factors were of little concern while markets remained liquid, a sudden dearth of liquidity nearly caused numerous large bank failures and led to severe economic fallout in the surrounding areas. The notable differences in the 1763 response were the absence of government bailouts and the central bank’s decision to lend only against good collateral at above-market rates.
The crisis of 1763 teaches us that shadow banking is not necessarily a means of eluding regulations but rather an effective way for banks to increase their extension of credit and disperse the associated risks. Unlike the measures taken in 1763, the responses to the 2008 financial crisis have further reduced the incentives of bank creditors and managers to worry about leverage or liquidity. Shadow banking has a long history and will probably have a long future as well. If we are to avoid more frequent recurrences of these crises, or at least reduce their impact on the broader economy, we must learn and apply the lessons of history.
1) Thoughts on the Michael Woodford Paper…. by Cullen Roche @ Pragmatic Capitalism
Regarding NGDP Targeting I am definitely skeptical that the Fed’s commitment to a NGDP target will have the stimulative effects that some hope it will. I still fail to see the transmission mechanism whereby balance sheets are meaningfully impacted in a manner that alters current income. Fed policy usually works through altering credit markets by making inside money less expensive and inducing borrowers to borrow. Obviously, at the zero bound with low demand for credit this policy approach has run aground. QE could “work”, but it’s been implemented incorrectly or at least inefficiently in my opinion since monetary policy is about price and not quantity. The portfolio rebalancing effect is interesting, but I have my hesitations about the unintended consequences of targeting nominal wealth as a form of putting the cart before the horse. Ie, the Bernanke Put has its negative side effects. But I am not against trying policies with the understanding that I could definitely be wrong. At this point, the economy is so abysmal and unstable that we should be trying more.
Woj’s Thoughts - Cullen has been a primary source of knowledge about monetary operations, so his thoughts on these matters are always worth reading. My position remains that the transmission mechanisms (portfolio rebalancing, lower real rates, higher asset prices) are insufficient to meaningfully alter the demand for credit, which continues to restrain growth and inflation. One question I have regarding Woodford’s policy suggestions is: how does the Fed credibly commit to maintaining low interest rates beyond the tenure of Bernanke or the current voting FOMC members? This is similar to the issue faced by Congress or the President in committing to any policy (e.g. lower deficits) beyond their term. My hunch is that Bernanke recognizes this dilemma and will refrain from attempting such an action for fear of impairing the Fed’s credibility. Only time will tell if that’s correct. The one issue I have with Cullen’s post is the last sentence in the paragraph above. Saying “that we should be trying more” implies that the expected cost of further action does not outweigh the expected benefit. I remain skeptical of this argument, on the grounds that the risks from rising commodity and stock prices, not to mention increasing wealth inequality, are greater than the likely benefits.
2) Why you won’t find hyperinflation in democracies by Felix Salmon @ Reuters.com
The real value of this paper is its exhaustive nature. By looking down the list you can see what isn’t there — and, strikingly, what you don’t see are any instances of central banks gone mad in otherwise-productive economies. As Cullen Roche says, hyperinflation is caused by many things, such as losing a war, or regime collapse, or a massive drop in domestic production. But one thing is clear: it’s not caused by technocrats going mad or bad.
For that matter, there are no hyperinflations at all in North America: the closest we’ve come, geographically speaking, was in Nicaragua, from 1986-91. In fact, if you put to one side the failed states of Zimbabwe and North Korea, there hasn’t been a hyperinflation anywhere in the world since February 1997, more than 15 years ago, despite the enormous number of heterodox central-bank actions in that time.
Woj’s Thoughts - Why are we still even talking about hyperinflation in the US? How many years must pass with ~2% inflation before these concerns are simply ignored? Disinflation remains most characteristic of our economy today and I remain of the view that actual deflation is a more probable outcome in the next 5 years than inflation exceeding 5%.
3) Teaching History of Thought by Jonathan Finegold @ Economic Thought
Maybe it’s my inexperience, but I feel that a one semester undergraduate course — heavily complimented by course readings — could fit in most high profile economists between 1870 and present day, and teach them well. Mises and Hayek, for instance, could be taught in one or two fifty minute lectures (and, no doubt, these themes would recur), with 3–4 readings (call it ~150–200 pages over a weekend). Keynes could be taught in 1–2 lectures, as well. This kind of class would require a lot of reading of “basic” material, with lectures centered on more advanced ideas. Few undergraduate students experience this, but it would be a class where the professor doesn’t regurgitate the reading material.
…
What would an “ideal” history of thought class look like to you? We can even talk about undergraduate and graduate courses. Who would you teach? Who would you leave out? Why?
Woj’s Thoughts - I’ve yet to take a true history of economic thought course, but found these questions intriguing since most of my background comes from a random walk through historical readings and economics blogs. My experience in other economics courses, however, suggests that the specific economists and theories a professor finds worthy of lecturing on are highly correlated with those the professor agrees with. Who knows how my future will play out, but the idea of teaching a history of thought course someday is incredibly appealing. Maybe the suggestions here and on Jonathan’s blog will help in structuring that course.
While outlining Milton Friedman’s “The Methodology of Positive Economics” a few days back, I criticized his method of choosing among “valid” hypotheses, in which:
“logical completeness and consistency” [are] criteria which “are relevant but play a subsidiary role. (p. 5)”
Mainstream economists have clearly sided with Friedman in this debate, preferring “simplicity” to realism. However, as someone interested in the Austrian tradition, I found comfort in the following passage from Ludwig Lachmann’s paper “The Significance Of The Austrian School Of Economics In The History Of Ideas”:
Since we lack successful prediction as a means of evidence, we must of course devote special care to the validity of our theoretical assumptions.
Although I’ve just begun to read Lachmann’s work, he is quickly moving up the ranks of my favorite economists.
For the first week of Macroeconomics we’re learning about the Neoclassical/Exogenous/Solow growth model. As an introduction to the topic, the following Wikipedia page was recommended: http://en.wikipedia.org/wiki/Exogenous_growth_model. The page notes:
A key prediction of neoclassical growth models is that the income levels of poor countries will tend to catch up with or converge towards the income levels of rich countries as long as they have similar characteristics – for instance saving rates.
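For context, here is a bare-bones sketch of where that convergence prediction comes from, in standard textbook notation (my addition, not from the Wikipedia page): with a common technology and identical saving, population growth, and depreciation rates, every country heads toward the same steady state, so countries that start with less capital grow faster along the way.

```latex
% Solow model with Cobb-Douglas technology f(k) = k^alpha, where k is
% capital per worker, s the saving rate, n population growth, and delta
% depreciation (technology growth omitted for simplicity).
\[
\dot{k} = s k^{\alpha} - (n + \delta)k,
\qquad
k^{*} = \left( \frac{s}{n + \delta} \right)^{\tfrac{1}{1-\alpha}}
\]
% Countries sharing s, n, delta, and alpha share the same steady state k*.
% Growth is faster the further k lies below k*, so poorer countries should
% grow faster -- the convergence prediction at issue here.
```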
Considering that this hypothesis remains prevalent today, one can be forgiven for assuming that empirical observation supports this prediction. But, as the following sentences show:
Since the 1950s, the opposite empirical result has been observed on average. If the average growth rate of countries since, say, 1960 is plotted against initial GDP per capita (i.e. GDP per capita in 1960), one observes a positive relationship. In other words, the developed world appears to have grown at a faster rate than the developing world, the opposite of what is expected according to a prediction of convergence.
The Solow model presents an example of positive economics gone wrong. Since countries with “similar characteristics” can never be objectively defined (assuming such countries even exist), the hypothesis cannot be refuted by empirical observation. Placing more value on realistic assumptions would allow economics to enjoy the benefits of the “creative destruction” it values so highly in a market system.
In 1953, Milton Friedman set out to establish “The Methodology of Positive Economics” in order to improve economics’ contribution to policy determination. He believed that “differences about economic policy among disinterested citizens derive predominantly from different predictions about the economic consequences of taking action...rather than from fundamental differences in basic values. (p. 2-3)” While the goal was a worthy one, it becomes clear quite quickly that Friedman’s normative views shape the discussion.
Friedman begins by explaining that “The ultimate goal of a positive science is the development of a ‘theory’ or ‘hypothesis’ that yields valid and meaningful (i.e., not truistic) predictions about phenomena not yet observed. (p. 3-4)” While few would probably disagree with that statement, a divergence of opinion tends to arise when the assumptions upon which a hypothesis rests are unrealistic. However, once one understands the “canons of formal logic (p. 4)”, it becomes readily apparent that a hypothesis cannot be rejected solely on these grounds. A basic proposition of formal logic is that the statement “x implies y” (x → y) is true unless x is true and y is false. Therefore a hypothesis remains valid if the predicted outcome (y) occurs, regardless of whether the assumptions (x) are realistic. Although Friedman was only restating the principles of formal logic, arguments that reject hypotheses solely on the basis of their assumptions remain prevalent today, so the importance of his point cannot be overstated.
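For readers who want the reference point, here is the standard truth table for material implication (my addition, not Friedman’s). Note that the only false row is the one in which the assumptions hold and the prediction fails:

```latex
% Truth table for material implication, x -> y.
% The implication is false only when x is true and y is false.
\[
\begin{array}{cc|c}
x & y & x \rightarrow y \\
\hline
T & T & T \\
T & F & F \\
F & T & T \\
F & F & T
\end{array}
\]
```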
After dispelling the notion that a theory can be rejected based solely on its unrealistic assumptions, Friedman moves on to the far more difficult task of applying positive economics to policy determination. At this juncture the discussion takes a distinctly normative turn, which Friedman acknowledges but simultaneously downplays. The trouble with making this application is primarily threefold:
1) How does one determine if a theory is valid?
2) How does one choose among hypotheses that are consistent with available evidence?
3) How does one decide the circumstances for which a theory holds?
Regarding question one, Friedman points out that the inherent difficulty (near impossibility) of conducting “controlled experiments” in the social sciences has “foster[ed] a retreat into purely formal or tautological analysis. (p. 6)” This path is clearly not suitable if economic theory “is to be able to predict and not merely describe the consequences of action. (p. 7)” Empirical evidence seemingly offers the greatest potential for validating theories, but leads to the normative question of “whether [a theory] yields sufficiently accurate predictions. (p. 9)”
Moving beyond questions of validity, Friedman accepts that “The choice among alternative hypotheses equally consistent with the available evidence must to some extent be arbitrary, (p. 5)” but believes “there is general agreement that relevant considerations are suggested by the criteria “simplicity” and “fruitfulness”. (p. 5)” Friedman also mentions “logical completeness and consistency” as criteria which “are relevant but play a subsidiary role. (p. 5)” Establishing the relative weights of these criteria, let alone the relative value of different hypotheses on each scale, clearly extends into normative economics. On the whole I find Friedman’s support for his normative views unconvincing and suggest reordering the criteria as follows: fruitfulness, consistency, logical completeness, and simplicity.
Assuming agreement on a specific, valid hypothesis, one must still decide how “to specify the circumstances under which the formula works. (p. 11)” Friedman argues that “the assumptions [cannot] be used to determine the circumstances for which a theory holds. (p. 11)” Instead, the validity of the hypothesis tells us whether the theory holds for given circumstances. Unfortunately this statement is no more than a tautology and offers little guidance for applying policy to different circumstances in advance of known outcomes. Here we find the importance of a hypothesis’ logical completeness and consistency. Without knowing in advance if, and to what degree, a hypothesis will prove correct, the assumptions (regardless of how unrealistic) become a necessary guide to determining applicable circumstances.
Friedman set out on the difficult task of developing a positive economics with normative implications. While his effort to prove that a hypothesis must not be judged solely by the realism of its assumptions was valiant, this strength of his paper has been largely overlooked. Instead, Friedman’s normative views regarding the determination of a theory’s validity, the choice among competing valid hypotheses, and the application of theory to specific circumstances have indoctrinated economists with a means to defend orthodox, mainstream economics from all criticism.
These departures from the “positive”, though not explicitly or (maybe) even implicitly intended by Friedman, have fostered three common pitfalls in straight thinking:
1) Fallacies of composition - Ex. Households face a budget constraint based on income and ability to borrow, therefore governments must face similar constraints.
2) Fallacies of analogy - Ex. Greece’s sovereign debt rates are high because of its large public debt-to-GDP and Japan has higher public debt-to-GDP, therefore Japan’s sovereign debt rates should be soaring.
3) Post hoc ergo propter hoc (i.e. "after this, therefore because of this") - Ex. Lehman Brothers went bankrupt right before the financial crisis happened, therefore the failure of Lehman Brothers caused the financial crisis.
The failure of mainstream economics to predict the housing bubble, financial crisis, recession, and weak recovery has not materially weakened the acceptance of previously held theories. This should not be surprising given Friedman’s claim that “the continued use and acceptance of [a hypothesis] over a long period, and the failure of any coherent, self-consistent alternative to be developed and be widely accepted, is strong indirect testimony to its worth. (p. 14)” Contrary to this view, advances in modeling network systems imply that suboptimal outcomes can actually be self-sustaining.
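As a purely illustrative aside (a toy model of my own, not one drawn from the network-modeling literature), a simple increasing-returns adoption process shows how an inferior option can become self-sustaining once enough adopters happen to coordinate on it:

```python
# Toy path-dependence sketch: two competing "theories" (or standards), A and B.
# Each new adopter imitates a randomly chosen earlier adopter, so whichever
# option gains an early lead tends to keep it -- even if it is the inferior one.
# Purely illustrative; not drawn from any specific paper cited in this post.
import random

def simulate(n_adopters=10_000, seed=None):
    random.seed(seed)
    counts = {"A": 1, "B": 1}  # one initial adopter of each
    for _ in range(n_adopters):
        total = counts["A"] + counts["B"]
        # The probability of choosing A is proportional to its current share.
        choice = "A" if random.random() < counts["A"] / total else "B"
        counts[choice] += 1
    return counts["A"] / (counts["A"] + counts["B"])

# Run the process several times: final shares scatter widely, and once a lead
# emerges it is self-reinforcing -- "worth" is not what decides the outcome.
for trial in range(5):
    print(f"trial {trial}: final share of A = {simulate(seed=trial):.2f}")
```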
“The Methodology of Positive Economics” is ultimately both a positive and normative method for discerning real implications of economic hypotheses, although the line has become blurred over time. If economics is to regain its grandeur, it must recognize the strict limits of positive economics before readdressing the normative decisions posed by Friedman. In this manner, economics may come to study and accept a much broader range of hypotheses that are not only more fruitful, but also more logically complete and consistent.
Update - Credit should be given to Ernest Nagel’s paper, “Assumptions in Economic Theory”, for inspiring a number of my thoughts in this post.