NBER Heterogeneous-Agent Macro Workshop
Matthew Rognlie
Spring 2023
(based on Auclert, Rigato, Rognlie, Straub 2023, forthcoming in QJE)
import numpy as np
from scipy import linalg, integrate, optimize, interpolate
import matplotlib.pyplot as plt
# some useful plot defaults
plt.rcParams.update({'font.size' : 20, 'lines.linewidth' : 3.5, 'figure.figsize' : (13,7)})
Substituting the first matrix equation into the second gives the matrix mapping the path of $dmc^{\mathrm{nom}}$ to the path of $dp$:

$$\Psi_{t,s} = \frac{\sum_{j=0}^{\min(t,s)} \Phi_{t-j}\,\beta^{s-j}\,\Phi_{s-j}}{\left(\sum_k \Phi_k\right)\left(\sum_k \beta^k \Phi_k\right)}, \qquad dp = \Psi\, dmc^{\mathrm{nom}}$$

We call this matrix $\Psi$ the pass-through matrix, mapping shocks to log nominal marginal cost to changes in log prices
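For reference, with $\Phi_s$ denoting the probability that a newly set price survives for at least $s$ periods, a generic time-dependent version of the two equations being combined is (a sketch of the standard setup, not a verbatim restatement):

$$p^*_t = \frac{\sum_{s \ge 0} \beta^s \Phi_s\, \mathbb{E}_t\!\left[mc^{\mathrm{nom}}_{t+s}\right]}{\sum_{s \ge 0} \beta^s \Phi_s}, \qquad p_t = \frac{\sum_{s \ge 0} \Phi_s\, p^*_{t-s}}{\sum_{s \ge 0} \Phi_s}$$

where $p^*_t$ is the reset price chosen at date $t$ and the weights $\Phi_s/\sum_k \Phi_k$ are the steady-state shares of firms whose last reset was $s$ periods ago.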
def Psi_td(Phi, beta):
    # pass-through matrix for a time-dependent model with survival function Phi and discount factor beta
    T = len(Phi)
    beta_Phi = Phi*beta**np.arange(T)
    return 1/(Phi.sum() * beta_Phi.sum()) * np.tril(linalg.toeplitz(Phi)) @ np.triu(linalg.toeplitz(beta_Phi))
T = 800
beta = 0.98
Phi_calvo = 0.75**np.arange(T)
Psi_calvo = Psi_td(Phi_calvo, beta)
plt.plot(Psi_calvo[:21, [0, 5, 10]]);
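One property worth a quick check: up to truncation, each row of $\Psi$ sums to $\sum_{k \le t} \Phi_k / \sum_k \Phi_k$, which converges to 1, so a permanent nominal marginal cost shock is eventually fully passed through into prices:

# row sums of the pass-through matrix: should match cumsum(Phi)/sum(Phi) and approach 1
Psi_calvo.sum(axis=1)[[0, 1, 5, 20]], np.cumsum(Phi_calvo)[[0, 1, 5, 20]] / Phi_calvo.sum()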
Phi_taylor = 1.*(np.arange(T) < 4)
Psi_taylor = Psi_td(Phi_taylor, beta)
plt.plot(Psi_taylor[:21, [0, 5, 10]]);
The New Keynesian Phillips curve is derived in terms of real marginal cost $mc_t$:

$$\pi_t = \kappa\, mc_t + \beta\, \mathbb{E}_t \pi_{t+1}$$
which can then be related, in models, to the output gap, employment, etc.
How can we go from the pass-through matrix, relating nominal marginal cost and prices, to something like this, relating real marginal cost and inflation?
We get a fixed-point equation: conditional on real marginal cost, prices themselves affect the price-setting decision, since $dmc^{\mathrm{nom}} = dmc + dp$:

$$dp = \Psi\,(dmc + dp)$$

Solving this gives:

$$dp = (I - \Psi)^{-1}\Psi\, dmc$$

Equivalent to $(I - \Psi)^{-1}\Psi = \Psi + \Psi^2 + \Psi^3 + \cdots$, representing "rounds" of feedback

Finally, first differences to obtain inflation:

$$d\pi = (I - L)\, dp, \quad \text{where $L$ is the lag matrix (ones on the first subdiagonal)}$$

We have

$$d\pi = (I - L)(I - \Psi)^{-1}\Psi\, dmc \equiv K\, dmc$$

and call the matrix $K$ mapping real marginal cost to inflation the generalized Phillips curve
Generalized Phillips curve for Calvo is the matrix representation of the NKPC: iterating $\pi_t = \kappa\, mc_t + \beta\, \mathbb{E}_t \pi_{t+1}$ forward gives $\pi_t = \kappa \sum_{s \ge t} \beta^{s-t} mc_s$, i.e. $K^{\mathrm{Calvo}}_{t,s} = \kappa\,\beta^{s-t}$ for $s \ge t$ and $0$ otherwise:
Need high $T$ to avoid artifacts from truncation close to $T$, but can implement formula directly:
def K_from_Psi(Psi):
    T = len(Psi)
    # calculate (I-Psi)^(-1)*Psi
    K = np.linalg.solve(np.eye(T) - Psi, Psi)
    # apply (I-L) to this, i.e. take first difference in rows
    K[1:] -= K[:-1]
    return K
K_calvo = K_from_Psi(Psi_calvo)
plt.plot(K_calvo[:21, [0, 5, 10, 15]]);
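To check this against the claim above, the first row of the computed $K$ for Calvo should match $\kappa\beta^s$, with the textbook slope $\kappa = (1-\theta)(1-\beta\theta)/\theta$ for survival probability $\theta = 0.75$ (the standard Calvo slope without strategic complementarities):

theta = 0.75  # Calvo survival probability used above
kappa = (1 - theta)*(1 - beta*theta)/theta
K_calvo[0, :5], kappa * beta**np.arange(5)  # computed K vs. NKPC prediction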
Note: this looks like numerical error, but isn't—the generalized Phillips curve for Taylor pricing is really this ugly-looking!
Inertia: inflation persists a bit after real marginal cost shocks (at dates 0, 5, 10, 15):
K_taylor = K_from_Psi(Psi_taylor)
plt.plot(K_taylor[:21, [0, 5, 10, 15]]);
lamb_ih = 0.5*(1 - 0.8 ** np.arange(T-1))
Phi_ih = np.ones(T)
Phi_ih[1:] = np.cumprod(1-lamb_ih) # survival from 1 onward is cumulative product of 1-hazard
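Quick consistency check: recovering the hazard from the survival function via $\lambda_t = 1 - \Phi_t/\Phi_{t-1}$ should give back lamb_ih:

# hazard implied by the survival function matches the hazard we built it from
np.max(np.abs((1 - Phi_ih[1:]/Phi_ih[:-1]) - lamb_ih))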
Plot hazard rate of price adjustment:
plt.plot(lamb_ih[:21]);
Psi_ih = Psi_td(Phi_ih, beta)
K_ih = K_from_Psi(Psi_ih)
plt.plot(K_ih[:21, [0, 5, 10, 15]]);
Same rate of decay in anticipation (can prove this!), mostly forward-looking, inertia small in comparison!
plt.plot(K_ih[:71, [25, 45, 65]]);
lamb_dh = 0.2*(1 + 3*0.8** np.arange(T-1))
Phi_dh = np.ones(T)
Phi_dh[1:] = np.cumprod(1-lamb_dh)
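Plot this hazard as well, for comparison with the increasing-hazard case; it starts near 0.8 and decays toward 0.2:

plt.plot(lamb_dh[:21]);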
Psi_dh = Psi_td(Phi_dh, beta)
K_dh = K_from_Psi(Psi_dh)
plt.plot(K_dh[:21, [0, 5, 10, 15]]);
Long-run decay of anticipation at a constant rate; distinct behavior is confined to a neighborhood of the shock date (and date 0)
plt.plot(K_dh[:71, [25, 45, 65]]);
The pass-through matrix is a sequence-space Jacobian, and TD has a fake news matrix equal to a simple rank-one matrix:

$$F_{t,s} \equiv \Psi_{t,s} - \Psi_{t-1,s-1} = \frac{\Phi_t}{\sum_k \Phi_k}\cdot\frac{\beta^s \Phi_s}{\sum_k \beta^k \Phi_k}$$

A "fake news" shock about future marginal costs at date $s$ leads to an increase in reset prices at date 0 in proportion to $\beta^s \Phi_s / \sum_k \beta^k \Phi_k$, which has a persistent effect at date $t$ in proportion to $\Phi_t / \sum_k \Phi_k$
F_calvo = Psi_calvo.copy()
F_calvo[1:, 1:] -= Psi_calvo[:-1, :-1]
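We can check the rank-one formula directly by comparing the fake news matrix to the outer product of $\Phi_t/\sum_k \Phi_k$ and $\beta^s\Phi_s/\sum_k \beta^k \Phi_k$:

# fake news matrix should equal the outer product in the formula above, to machine precision
beta_Phi_calvo = Phi_calvo * beta**np.arange(T)
F_formula = np.outer(Phi_calvo/Phi_calvo.sum(), beta_Phi_calvo/beta_Phi_calvo.sum())
np.max(np.abs(F_calvo - F_formula))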
Only one nonzero singular value, i.e. this is rank one:
u, s, vh = np.linalg.svd(F_calvo[:100, :100]) # not looking out to T to avoid artifacts of truncation
s[:5]
Corresponding singular vectors are proportional to $\Phi_t$ and $\beta^s \Phi_s$, respectively; can rescale so first entry is 1 and see exponential decay:
u[:6, 0] / u[0, 0]
vh[0, :6] / vh[0, 0]
Can easily assume that a fraction of firms follows one TD rule and the remaining fraction follows another, just by combining their pass-through matrices.
For instance, let's try simple Calvo mixture, with some more flexible and some less, averaging out to same 0.25 frequency of price change:
Phi_sticky = 0.9**np.arange(T)
Phi_flex = 0.6**np.arange(T)
Psi_sticky = Psi_td(Phi_sticky, beta)
Psi_flex = Psi_td(Phi_flex, beta)
Psi_mixture = 0.5*Psi_sticky + 0.5*Psi_flex
Same intuition as decreasing hazards: price-setters are more likely to have recently reset, so they lower prices after inflation to move closer to their sticky counterparts
K_mixture = K_from_Psi(Psi_mixture)
plt.plot(K_mixture[:21, [0, 5, 10, 15]]);
Continuum of firms; assume their ideal log price follows a random walk with symmetric, mean-zero shocks.
Defining the idiosyncratic price gap $x_{it}$ as the firm's log price minus its ideal log price, the firm objective in the simple model becomes (up to a scale factor on the loss)

$$\min\; \mathbb{E}_0 \sum_{t=0}^{\infty} \beta^t \left[\tfrac{1}{2}\, x_{it}^2 + \xi\, \mathbf{1}\{\text{price adjusted at } t\}\right]$$

i.e. pick a state-contingent plan for price gaps to minimize a quadratic loss, subject to time-varying aggregate marginal cost shocks, paying a menu cost $\xi$ whenever you adjust your price (i.e. whenever you don't let the price gap drift with the shocks)
Optimal policy: adjust to reset point $x^*$ if a shock takes you outside adjustment bands $[\underline{x}, \overline{x}]$
Note policies common across all firms
Symmetry of shocks implies $x^* = 0$ and $\underline{x} = -\overline{x}$ in steady state
Let $g(x)$ be the steady-state density of price gaps in any period, "before" firms outside the bands have adjusted to $x^* = 0$, and let $\mathit{freq}$ be the steady-state fraction of firms adjusting:

$$g(x) = \mathit{freq}\cdot f(x) + \int_{-\overline{x}}^{\overline{x}} f(x - x')\, g(x')\, dx', \qquad \mathit{freq} = 1 - \int_{-\overline{x}}^{\overline{x}} g(x')\, dx'$$

where $f$ is the density of the idiosyncratic shocks.
Assume shocks to the ideal log price are normal; first define a mean-zero normal pdf:
C = 1/np.sqrt(2*np.pi)
def normal_pdf(x, sigma):
    return C/sigma*np.exp(-(x/sigma)**2/2)
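Quick check of this against SciPy's built-in normal pdf:

from scipy import stats
normal_pdf(0.03, 0.05), stats.norm.pdf(0.03, scale=0.05)  # should agree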
We'll assume quarterly calibration where standard deviation of shocks is 0.05 (roughly calibrated value from our paper):
sigma = 0.05 # roughly the calibrated value in our New Pricing Models paper
def f(x):
    return normal_pdf(x, sigma)
Assume we're given the adjustment band $\overline{x}$ (and by symmetry $-\overline{x}$ on the other side), and iterate on the law of motion for $g$:
def ss_dist(xbar):
    xs = np.linspace(-xbar, xbar, 60)  # 60 nodes for cubic spline, could exploit symmetry
    gxs = np.full(60, 1/(2*xbar))  # initially guess uniform
    g = interpolate.CubicSpline(xs, gxs)
    for it in range(100):
        freq = 1 - g.integrate(-xbar, xbar)  # frequency of price adjustment
        # evaluate g(x) at each node x for cubic spline, reinterpolate
        # painfully inefficient to call cubic spline over and over again w/overhead
        # but we won't bother with more efficiency here
        gxs_new = (freq*f(xs)
                   + np.array([integrate.quad(lambda xp: f(x - xp)*g(xp), -xbar, xbar)[0] for x in xs]))
        g = interpolate.CubicSpline(xs, gxs_new)
        if np.max(np.abs(gxs_new - gxs)) < 1E-10:
            return g
        gxs = gxs_new
    raise ValueError('No convergence')
xbar = 0.1
g = ss_dist(xbar)
freq = 1 - g.integrate(-xbar, xbar)
freq
xs = np.linspace(-xbar, xbar, 100)
plt.plot(xs, g(xs));
Now calibrate $\overline{x}$ to hit a 0.25 frequency of price adjustment; we don't need to know the menu cost itself to do this!
xbar = optimize.brentq(lambda xbar: ss_dist(xbar).integrate(-xbar, xbar) - 0.75, 0.01, 0.1)
xbar
g = ss_dist(xbar)
freq = 1 - g.integrate(-xbar, xbar)
freq
xs = np.linspace(-xbar, xbar, 100)
plt.plot(xs, g(xs));
If the end-of-period price gap is $x$ today, what (following the steady-state policy) will it be on average in $t$ periods? Call this $\mathcal{E}_t(x)$.

Law of iterated expectations, plus symmetry (reset to zero, where expectations are zero), imply:

$$\mathcal{E}_{t+1}(x) = \int_{-\overline{x}}^{\overline{x}} f(x' - x)\, \mathcal{E}_t(x')\, dx', \qquad \mathcal{E}_0(x) = x$$
xs = np.linspace(-xbar, xbar, 60)
def E_recursion(E):
    Exs = [integrate.quad(lambda xp: f(xp - x)*E(xp), -xbar, xbar)[0] for x in xs]
    return interpolate.CubicSpline(xs, Exs)
These decay much faster than the probability of resetting prices would suggest, due to "selection effects"
E0 = lambda x: x
Es = [E0]
for i in range(4):
    Es.append(E_recursion(Es[-1]))
for i in range(5):
    plt.plot(xs, Es[i](xs), label=f'E{i}')
plt.legend();
for i in range(1, 5):
    plt.plot(xs, Es[i](xs), label=f'E{i}')
plt.legend();
Let's look at "persistence" of the expectation function starting from the reset point $x^* = 0$ or the adjustment band $\overline{x}$:
TPhi = 30
Phi_e = np.empty(TPhi)
Phi_i = np.empty(TPhi)
Et = interpolate.CubicSpline(xs, xs) # identity
for t in range(TPhi):
    Phi_e[t] = Et(xbar)/xbar
    Phi_i[t] = Et.derivative()(0)
    Et = E_recursion(Et)
Initially persistence at $\overline{x}$ is less, because firms are more likely to adjust there:
plt.plot(Phi_e[:11], label=r'persistence from $\overline{x}$')
plt.plot(Phi_i[:11], label=r'persistence from $x^*=0$')
plt.legend();
# hazards implied by the "virtual survival" functions Phi_e and Phi_i
lambda_e = (Phi_e[:-1] - Phi_e[1:])/Phi_e[:-1]
lambda_i = (Phi_i[:-1] - Phi_i[1:])/Phi_i[:-1]
plt.plot(lambda_e[:11], label=r'from $\overline{x}$')
plt.plot(lambda_i[:11], label=r'from $x^*=0$')
plt.legend();
TPhi = 30
Phi_actual = np.empty(TPhi)
Et_noreset = interpolate.CubicSpline(xs, np.ones(len(xs)))
for t in range(TPhi):
    if t < 4:
        plt.plot(xs, Et_noreset(xs), label=f'Price survives to t={t}')
    Phi_actual[t] = Et_noreset(0)
    Et_noreset = E_recursion(Et_noreset)
plt.legend();
lambda_actual = 1 - Phi_actual[1:] / Phi_actual[:-1]  # actual hazard: probability a surviving price is adjusted
plt.plot(lambda_e[:11], label=r'persistence from $\overline{x}$')
plt.plot(lambda_i[:11], label=r'persistence from $x^* =0$')
plt.plot(lambda_actual[:11], label='actual survival hazard')
plt.legend();
Assume the aggregate price is the average of price gaps, $p_t = \int x_{it}\, di$ (idiosyncratic shocks average out to zero, so price gaps give the price level)
The pass-through matrix from nominal marginal cost to prices for the menu cost model is given by a mixture of time-dependent models

$$\Psi = \alpha\, \Psi^e + (1-\alpha)\, \Psi^i$$

where $\Psi^e$ and $\Psi^i$ are pass-through matrices with survivals $\Phi^e$ and $\Phi^i$

First term gives effect of extensive margin, second term gives effect of intensive margin, with weight on extensive margin of $\alpha = 2\, g(\overline{x})\, \overline{x} \sum_t \Phi^e_t$
Suppose I'm right at the threshold $\overline{x}$ and decide not to adjust today
effect on price level today: $\overline{x}$
effect tomorrow: $\mathcal{E}_1(\overline{x}) = \Phi^e_1\, \overline{x}$
... effect in $t$ periods: $\mathcal{E}_t(\overline{x}) = \Phi^e_t\, \overline{x}$
for i in range(5):
    plt.plot(xs, Es[i](xs), label=f'E{i}')
plt.legend();
Suppose I'm adjusting and I decide to raise my price by $\epsilon$, slightly above zero
effect on price level today: $\epsilon$
effect tomorrow: $\mathcal{E}_1(\epsilon) \approx \Phi^i_1\, \epsilon$
... effect in $t$ periods: $\mathcal{E}_t(\epsilon) \approx \Phi^i_t\, \epsilon$
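A quick numerical illustration of this: evaluating the expectation functions computed above at a small $\epsilon$ and dividing by $\epsilon$ should recover (approximately) the intensive-margin survival $\Phi^i_t$:

eps = 1e-3
np.array([float(Es[t](eps)) for t in range(5)]) / eps, Phi_i[:5]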
Pad with zeros to get $T$-length survival vectors, and build pass-through matrices for the extensive and intensive margins:
Phi_e_long = np.zeros(T)
Phi_e_long[:TPhi] = Phi_e
Psi_e = Psi_td(Phi_e_long, beta)
Phi_i_long = np.zeros(T)
Phi_i_long[:TPhi] = Phi_i
Psi_i = Psi_td(Phi_i_long, beta)
Weight the extensive margin by $\alpha$, the intensive margin by $1-\alpha$:
alpha = 2*g(xbar)*xbar*Phi_e.sum()
alpha
Psi = alpha*Psi_e + (1-alpha)*Psi_i
Very sharp spikes, corresponding to near-flexible prices (rapidly declining "virtual survival", i.e. low persistence of decisions):
plt.plot(Psi[:21, [0, 5, 10]]);
Insanely close to the Calvo New Keynesian Phillips curve, but a lot more flexible (slope of 2, not 0.1)!
K = K_from_Psi(Psi)
plt.plot(K[:21, [0, 5, 10, 15]]);
Both extensive and intensive very quickly converge to the same constant hazard (corresponding to leading odd eigenvalue—cf Alvarez and Lippi Ecta 2022 and our paper), much higher than actual probability of adjusting (near 0.25)...
... so they behave like a Calvo, but one with more like a 0.7 quarterly adjustment frequency!
plt.plot(lambda_e[:11], label=r'persistence from $\overline{x}$')
plt.plot(lambda_i[:11], label=r'persistence from $x^* =0$')
plt.plot(lambda_actual[:11], label='actual survival hazard')
plt.legend();
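A rough way to visualize this claim: compare the menu cost model's generalized Phillips curve to that of a hypothetical Calvo with a 0.7 quarterly adjustment frequency (survival 0.3); the shapes and slopes should be broadly comparable.

# compare the menu cost K to a Calvo with 0.7 quarterly adjustment frequency (survival 0.3)
K_calvo_07 = K_from_Psi(Psi_td(0.3**np.arange(T), beta))
plt.plot(K[:21, 10], label='menu cost')
plt.plot(K_calvo_07[:21, 10], '--', label='Calvo, freq. 0.7')
plt.legend();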
lamb = 0.2  # per-period probability of a free price adjustment
def ss_dist(xbar, tol=1E-7):
    xs = np.linspace(-xbar, xbar, 60)
    gxs = np.full(60, 1/(2*xbar))
    g = interpolate.CubicSpline(xs, gxs)
    for it in range(120):
        freq = 1 - (1-lamb)*g.integrate(-xbar, xbar)
        gxs_new = (freq*f(xs)
                   + np.array([integrate.quad(lambda xp: (1-lamb)*f(x - xp)*g(xp), -xbar, xbar)[0] for x in xs]))
        g = interpolate.CubicSpline(xs, gxs_new)
        if np.max(np.abs(gxs_new - gxs)) < tol:
            return g
        gxs = gxs_new
    raise ValueError('No convergence')
Now that so many adjustments are free, bands have to be wider to hit same total frequency of adjustments:
xbar = optimize.brentq(lambda xbar: (1-lamb)*ss_dist(xbar).integrate(-xbar, xbar) - 0.75, 0.1, 0.15)
g = ss_dist(xbar, tol=1E-10)
freq = 1 - (1-lamb)*g.integrate(-xbar, xbar)
xbar, freq
Much less density near bands as well:
xs = np.linspace(-xbar, xbar, 100)
plt.plot(xs, g(xs));
Same as before, but adding an extra $(1-\lambda)$ factor to account for free resets to zero:
xs = np.linspace(-xbar, xbar, 60)
def E_recursion(E):
    Exs = [integrate.quad(lambda xp: (1-lamb)*f(xp - x)*E(xp), -xbar, xbar)[0] for x in xs]
    return interpolate.CubicSpline(xs, Exs)
Rewrite code that calculates virtual and actual survival:
TPhi = 50
Phi_e = np.empty(TPhi)
Phi_i = np.empty(TPhi)
Phi_actual = np.empty(TPhi)
Et = interpolate.CubicSpline(xs, xs) # identity
Et_noreset = interpolate.CubicSpline(xs, np.ones(len(xs)))
for t in range(TPhi):
    Phi_e[t] = Et(xbar)/xbar
    Phi_i[t] = Et.derivative()(0)
    Phi_actual[t] = Et_noreset(0)
    Et = E_recursion(Et)
    Et_noreset = E_recursion(Et_noreset)
Still converging to a constant, different from the actual hazard, but it takes longer ($\overline{x}$ is higher, so there's a bigger difference between starting at $\overline{x}$ and at $x^* = 0$) and there's less of a gap:
lamb_e = 1 - Phi_e[1:] / Phi_e[:-1]
lamb_i = 1 - Phi_i[1:] / Phi_i[:-1]
lamb_actual = 1 - Phi_actual[1:] / Phi_actual[:-1]
plt.plot(lamb_e[:11], label='extensive')
plt.plot(lamb_i[:11], label='intensive')
plt.plot(lamb_actual[:11], label='actual')
plt.legend();
Phi_e_long = np.zeros(T)
Phi_e_long[:TPhi] = Phi_e
Psi_e = Psi_td(Phi_e_long, beta)
Phi_i_long = np.zeros(T)
Phi_i_long[:TPhi] = Phi_i
Psi_i = Psi_td(Phi_i_long, beta)
Weight on extensive margin is smaller, because so many adjustments are free that it's less important:
alpha = 2*(1-lamb)*g(xbar)*xbar*Phi_e.sum()
alpha
Psi = alpha*Psi_e + (1-alpha)*Psi_i
Less sharp spikes, corresponding to less flexible prices:
plt.plot(Psi[:21, [0, 5, 10]]);
(If you run the NKPC as a regression on data that follows this, it still holds almost exactly.)
K = K_from_Psi(Psi)
plt.plot(K[:21, [0, 5, 10, 15]]);
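One way to see this in the sequence space (a sketch, with the block indices 5:55 chosen arbitrarily to stay away from date 0 and from the truncation at $T$): if $\pi_t = \kappa\, mc_t + \beta\, \pi_{t+1}$ held exactly, then $K_{t,s} - \beta K_{t+1,s}$ would equal $\kappa$ on the diagonal and zero off it.

# residual of the NKPC identity applied to the generalized Phillips curve
M = K[5:55, 5:55] - beta*K[6:56, 5:55]
np.diag(M)[:5], np.max(np.abs(M - np.diag(np.diag(M))))  # implied slope, and largest off-diagonal deviation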