Monetary Economics Program Meeting
March 8, 2013
Gary Gorton and Andrew Metrick, Yale University and NBER, and Lei Xie, Yale University
Why did the failure of Lehman Brothers make the financial crisis dramatically worse? Gorton, Metrick, and Xie suggest that following the initial runs on repo and asset-backed commercial paper, the crisis became a process of continuing risk build-up. During this period, market participants tried to preserve the "moneyness" of money market instruments by shortening their maturities: the flight from maturity. The researchers show that the flight from maturity was manifested in a steepening of the term structures of spreads in money markets. The failure of Lehman Brothers was the tipping point of this build-up of systemic fragility. They produce a chronology that formalizes the dynamics of the crisis and test for common breakpoints in panels, identifying the date of the subprime shock and the dates of runs in the secured and unsecured money markets.
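The idea of testing for a common breakpoint across a panel of spread series can be sketched with a textbook least-squares break estimator: pick the date that minimizes the pooled sum of squared residuals when each series is allowed a separate mean before and after the break. This is an illustrative sketch, not the paper's exact test.

```python
import numpy as np

def common_breakpoint(panel):
    """Estimate a single common break date across series.

    `panel` is an (n_series, n_periods) array. For each candidate break
    date, fit a separate pre-break and post-break mean to every series
    and pool the squared residuals; return the date with the lowest
    pooled SSR. (Least-squares break estimation, illustrative only.)
    """
    panel = np.asarray(panel, dtype=float)
    n_series, n_periods = panel.shape
    best_date, best_ssr = None, np.inf
    for b in range(1, n_periods):  # break between periods b-1 and b
        pre, post = panel[:, :b], panel[:, b:]
        ssr = ((pre - pre.mean(axis=1, keepdims=True)) ** 2).sum()
        ssr += ((post - post.mean(axis=1, keepdims=True)) ** 2).sum()
        if ssr < best_ssr:
            best_date, best_ssr = b, ssr
    return best_date
```

With spread data, a common estimated break date across the secured and unsecured panels is evidence that the runs hit those markets at the same time.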
Matthias Doepke, Northwestern University and NBER, and Martin Schneider, Stanford University and NBER
Doepke and Schneider develop a theory that gives rise to an endogenous unit of account. Agents enter into non-contingent contracts with a variety of business partners. Trade unfolds sequentially in credit chains and is subject to random matching. By using a dominant unit of account, agents can lower their exposure to relative price risk, avoid costly default, and create more total surplus. The authors discuss conditions under which it is optimal to adopt circulating government paper as the dominant unit of account, and the optimal choice of "currency areas" when there is variation in the intensity of trade within and across regions.
James Costain and Anton Nakov, Bank of Spain
Costain and Nakov propose a near-rational model of retail price adjustment consistent with microeconomic and macroeconomic evidence on price dynamics. The framework is based on the idea that avoiding errors in decision making is costly. Given the assumed cost function for error avoidance, the timing of firms' price adjustments is determined by a weighted binary logit, and the prices firms choose are determined by a multinomial logit. The researchers build this behavior into a DSGE model, estimate the decision cost function by matching microdata, and simulate aggregate dynamics using a tractable algorithm for heterogeneous-agent models. Errors in the prices firms set and errors in the timing of these adjustments are both relevant for the results. Errors of the first type help to make the model consistent with some puzzling observations from microdata, such as the coexistence of large and small price changes, the behavior of adjustment hazards, and the relative variability of prices and costs. Errors of the second type increase the real effects of monetary shocks by reducing the correlation between the value of price adjustment and the probability of adjustment (that is, by reducing the "selection effect"). Allowing for both types of errors also helps to reproduce the effects of trend inflation on price adjustment behavior. This model of error-prone pricing in many ways resembles a stochastic menu cost (SMC) model, but it has fewer free parameters than most SMC models have. Also, unlike those models, it does not require the implausible assumption of i.i.d. adjustment costs. The derivation here of a weighted logit from control costs offers an alternative justification for the adjustment hazard derived by Woodford (2008). The assumption that costs are related to entropy is similar to the framework of Sims (2003) and the subsequent "rational inattention" literature. However, the setup here has the major technical advantage that a firm's idiosyncratic state variable is simply its price level and productivity, whereas under rational inattention a firm's idiosyncratic state is its prior (which is generally an infinite-dimensional object).
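The two logits at the heart of the model can be illustrated with a minimal sketch: a binary logit maps the value of adjusting into a probability of adjusting (timing errors), and a multinomial logit maps the values of candidate prices into choice probabilities (pricing errors). The functional forms and the noise parameter `kappa` here are illustrative, not the paper's estimated specification.

```python
import numpy as np

def adjustment_probability(value_of_adjustment, kappa, cost):
    """Weighted binary logit: the firm is more likely to adjust when the
    gain from adjusting exceeds the cost, but with decision noise `kappa`
    it sometimes adjusts at the wrong time. (Illustrative form.)"""
    return 1.0 / (1.0 + np.exp(-(value_of_adjustment - cost) / kappa))

def price_choice_probabilities(values_by_price, kappa):
    """Multinomial logit over a grid of candidate prices: the value-
    maximizing price is most likely, but nearby prices get positive
    probability, generating small errors in the prices firms set."""
    v = np.asarray(values_by_price, dtype=float)
    z = (v - v.max()) / kappa          # shift for numerical stability
    weights = np.exp(z)
    return weights / weights.sum()
```

As `kappa` shrinks, both logits converge to fully rational behavior (adjust exactly when it pays, and always pick the best price); larger `kappa` produces the error-prone behavior that weakens the selection effect.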
Samuel Hanson and Adi Sunderam, Harvard University, and David Scharfstein, Harvard University and NBER
Hanson, Scharfstein, and Sunderam analyze the leading reform proposals to address the structural vulnerabilities of money market mutual funds (MMFs). They take the main goal of MMF reform to be safeguarding financial stability. In light of this goal, reforms should reduce the ex ante incentives for MMFs to take excessive risk and increase the ex post resilience of MMFs to system-wide runs. Their analysis suggests that requiring MMFs to have subordinated capital buffers best accomplishes these goals. Subordinated capital provides MMFs with loss absorption capacity, lowering the probability that an MMF suffers losses large enough to trigger a run, and reduces incentives to take excessive risks. They estimate that a capital buffer in the range of 3 to 4 percent would significantly reduce the probability that ordinary MMF shareholders ever suffer losses. In exchange for having the safer investment product made possible by subordinated capital, the yield paid to ordinary MMF shareholders would decline by only 0.05 percent. Capital buffers would generate significant financial stability benefits, while maintaining the current fixed net asset value (NAV) structure of MMFs. Other reform alternatives, such as converting MMFs to a floating NAV, would be less effective in protecting financial stability.
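The trade-off between the buffer size and the yield drag can be put in back-of-the-envelope terms: if subordinated capital providers demand an excess return over the fund's portfolio yield, that excess is paid out of ordinary shareholders' yield. The accounting identity below uses the summary's numbers plus an assumed buffer midpoint; the implied excess return is an illustration, not the paper's estimate.

```python
# Stylized arithmetic (assumed midpoint; illustrative, not from the paper).
buffer = 0.035        # subordinated capital as a share of assets (3-4% range)
yield_drag = 0.0005   # 0.05% yield reduction for ordinary shareholders

# If ordinary shareholders give up `yield_drag` on the whole portfolio to
# compensate a buffer of size `buffer`, the implied excess return earned
# by subordinated capital providers is:
required_excess_return = yield_drag / buffer  # roughly 1.4% per year
```

A modest required excess return of this size is what makes a 3 to 4 percent buffer cheap for ordinary shareholders.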
David Lucca and Emanuel Moench, Federal Reserve Bank of New York
Lucca and Moench document large excess returns on U.S. equities in anticipation of monetary policy decisions made at scheduled meetings of the Federal Open Market Committee (FOMC) in the past few decades. The abnormal pre-FOMC returns have increased over time and account for large fractions of total annual realized stock returns. While other major international equity indexes experienced similar pre-FOMC returns, the researchers find no such effect in U.S. Treasury securities and other fixed income instruments. Other U.S. macroeconomic news announcements also do not give rise to analogous pre-announcement equity returns. The pre-FOMC return is higher in periods when the slope of the Treasury yield curve is low, implied equity market volatility is high, and past pre-FOMC returns have been high. They discuss a few possible sources of the pre-FOMC announcement drift, none of which appears to be fully consistent with the vast empirical evidence that they assemble.
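The core comparison behind the drift can be sketched as the average return on pre-announcement days versus all other days. This is a simplified illustration; the paper measures returns over intraday windows ending at the scheduled announcement time, not full trading days.

```python
import numpy as np

def pre_announcement_drift(returns, event_days):
    """Compare the mean return on event (pre-announcement) days with the
    mean return on all other days. `returns` is a 1-D array of daily
    returns; `event_days` is an iterable of integer day indexes.
    (Illustrative daily-frequency version of the comparison.)"""
    returns = np.asarray(returns, dtype=float)
    is_event = np.zeros(len(returns), dtype=bool)
    is_event[list(event_days)] = True
    return returns[is_event].mean(), returns[~is_event].mean()
```

A large, persistent gap between the two means on FOMC days, but not around other macro announcements, is the anomaly the paper documents.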
Lars Svensson, Sveriges Riksbank and NBER
In 1993 the Riksbank announced an official target for annual CPI inflation of 2 percent to apply from 1995 on. Over the 15 years 1997-2011, average CPI inflation equaled 1.4 percent, falling short of the target by 0.6 percentage points. In contrast, average inflation in Australia, Canada, and the United Kingdom, all of which have had a fixed inflation target for as long as Sweden has, has been on or very close to the target. Has this undershooting of the inflation target in Sweden had any costs in terms of higher average unemployment? This depends on whether the long-run Phillips curve in Sweden is vertical or not. During 1997-2011, average inflation expectations were close to the target. The inflation target has thus been credible. If inflation expectations are also anchored to the target when average inflation deviates from the target, then the long-run Phillips curve is no longer vertical but downward-sloping. Average inflation below the credible target means that average unemployment is higher than it would have been if average inflation had been on target. The data indicate that the average unemployment rate has been about 0.8 percentage points higher. This is a large unemployment cost of undershooting the inflation target. Svensson performs some simple robustness tests indicating that the estimate of the unemployment cost is rather robust, but he notes that the estimate remains preliminary and that further scrutiny may be needed to assess its robustness.
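The arithmetic of the argument can be laid out in a few lines: with expectations anchored at the target, a Phillips curve of the form u = u* + slope x (expected inflation - actual inflation) turns the 0.6 percentage point inflation shortfall into an unemployment gap. The slope below is simply the value implied by the two numbers in the summary, not an independent estimate.

```python
# Back-of-the-envelope version of the calculation (illustrative; the
# slope is backed out from the summary's numbers, not estimated here).
target = 2.0          # CPI inflation target, percent
avg_inflation = 1.4   # average CPI inflation 1997-2011, percent

shortfall = target - avg_inflation      # 0.6 percentage points

# The reported ~0.8 pp unemployment cost implies a long-run Phillips
# curve slope of about 1.33 (pp of unemployment per pp of shortfall):
implied_slope = 0.8 / shortfall
unemployment_cost = implied_slope * shortfall  # recovers ~0.8 pp
```

The key assumption is that expectations stay at 2 percent even as realized inflation averages 1.4 percent; if expectations instead tracked realized inflation, the long-run curve would be vertical and the shortfall would carry no unemployment cost.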