Pricing models in the sequence space

NBER Heterogeneous-Agent Macro Workshop

Matthew Rognlie

Spring 2023

(based on Auclert, Rigato, Rognlie, Straub 2023, forthcoming in QJE)

Imports

In [51]:
import numpy as np
from scipy import linalg, integrate, optimize, interpolate
import matplotlib.pyplot as plt

# some useful plot defaults
plt.rcParams.update({'font.size' : 20, 'lines.linewidth' : 3.5, 'figure.figsize' : (13,7)})

Generalized time-dependent (TD) models

  • Exogenous survival function $\Phi_s$: the probability that a price is still in force $s$ periods after being set, with $\Phi_0 = 1$
  • Hazard rate $\lambda_s \equiv (\Phi_{s-1} - \Phi_s)/\Phi_{s-1}$ of the price being reset at age $s$
    • Visit from the "generalized Calvo fairy" with probability $\lambda_s$
  • Objective when resetting in the simple problem: minimize a discounted quadratic loss given a shifter to log nominal marginal cost (normalized to mean zero)
    • weighted by the probability the price is still in effect!
$$\hat{P}_t^* \equiv \arg\min_{\hat{P}} \sum_{s=0}^{\infty} \beta^s \Phi_s \, \tfrac{1}{2}\left(\hat{P} - \widehat{MC}_{t+s}\right)^2$$

Examples of time-dependent models

  • Calvo: $\Phi_s = \theta^s$ for some Calvo adjustment rate $1-\theta$
    • Constant hazard rate $\lambda_s = 1-\theta$
  • Taylor: $\Phi_s = 1$ for $s < N$ and $\Phi_s = 0$ for $s \geq N$
    • Hazard rate is 0 until we reach the $N$th period, where it becomes 1 (extreme example of an increasing hazard rate)
    • Prices last exactly $N$ periods, for some fixed $N$ (see the quick illustration below)
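
A minimal sketch of these two survival functions and their hazards, using the illustrative parameters $\theta = 0.75$ and $N = 4$ (the same values used later in this notebook):

In [ ]:
ages = np.arange(5)
Phi_calvo_ex = 0.75**ages      # Calvo survival: theta^s
Phi_taylor_ex = 1.*(ages < 4)  # Taylor survival: 1 for s < N, 0 afterward
for Phi_ex in (Phi_calvo_ex, Phi_taylor_ex):
    print((Phi_ex[:-1] - Phi_ex[1:]) / Phi_ex[:-1])  # hazards: constant 0.25 vs. 0, 0, 0, 1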

First-order condition for log reset price

  • Want to set the log reset price equal to a weighted average of future log nominal marginal costs:
$$\hat{P}_t^* = \frac{\sum_{s=0}^{\infty} \beta^s \Phi_s \widehat{MC}_{t+s}}{\sum_{s=0}^{\infty} \beta^s \Phi_s}$$
  • In vectorized form:
$$\hat{\mathbf{P}}^* = \frac{1}{\sum_{s=0}^{\infty} \beta^s \Phi_s}\begin{pmatrix} \Phi_0 & \beta\Phi_1 & \beta^2\Phi_2 & \cdots \\ 0 & \Phi_0 & \beta\Phi_1 & \cdots \\ 0 & 0 & \Phi_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \widehat{\mathbf{MC}}$$

Aggregate prices

  • Assume aggregate price index is average of prices currently in effect
    • (to first-order approximation around zero-inflation steady state)
  • The distribution of price ages is proportional to the survival function, so we have
$$\hat{P}_t = \frac{\sum_{s=0}^{\infty} \Phi_s \hat{P}^*_{t-s}}{\sum_{s=0}^{\infty} \Phi_s}$$
  • In vectorized form:
$$\hat{\mathbf{P}} = \frac{1}{\sum_{s=0}^{\infty} \Phi_s}\begin{pmatrix} \Phi_0 & 0 & 0 & \cdots \\ \Phi_1 & \Phi_0 & 0 & \cdots \\ \Phi_2 & \Phi_1 & \Phi_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix} \hat{\mathbf{P}}^*$$

Combining everything

Substituting the first matrix equation into the second to get the matrix mapping $\widehat{\mathbf{MC}}$ to $\hat{\mathbf{P}}$:

$$\hat{\mathbf{P}} = \underbrace{\frac{1}{\left(\sum_{s=0}^{\infty} \Phi_s\right)\left(\sum_{s=0}^{\infty} \beta^s \Phi_s\right)}\begin{pmatrix} \Phi_0 & 0 & 0 & \cdots \\ \Phi_1 & \Phi_0 & 0 & \cdots \\ \Phi_2 & \Phi_1 & \Phi_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} \Phi_0 & \beta\Phi_1 & \beta^2\Phi_2 & \cdots \\ 0 & \Phi_0 & \beta\Phi_1 & \cdots \\ 0 & 0 & \Phi_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}}_{\Psi} \, \widehat{\mathbf{MC}}$$

We call this matrix the pass-through matrix, mapping shocks to log nominal marginal cost to changes in price

Let's calculate this!

$$\Psi \equiv \frac{1}{\left(\sum_{s=0}^{\infty} \Phi_s\right)\left(\sum_{s=0}^{\infty} \beta^s \Phi_s\right)}\begin{pmatrix} \Phi_0 & 0 & 0 & \cdots \\ \Phi_1 & \Phi_0 & 0 & \cdots \\ \Phi_2 & \Phi_1 & \Phi_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}\begin{pmatrix} \Phi_0 & \beta\Phi_1 & \beta^2\Phi_2 & \cdots \\ 0 & \Phi_0 & \beta\Phi_1 & \cdots \\ 0 & 0 & \Phi_0 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$
In [4]:
def Psi_td(Phi, beta):
    T = len(Phi)
    beta_Phi = Phi*beta**np.arange(T)  # beta^s * Phi_s
    # lower-triangular Toeplitz aggregates past reset prices; upper-triangular
    # Toeplitz is the forward-looking reset-price FOC; normalize by both sums
    return 1/(Phi.sum() * beta_Phi.sum()) * np.tril(linalg.toeplitz(Phi)) @ np.triu(linalg.toeplitz(beta_Phi))

Pass-through matrix for Calvo

In [19]:
T = 800
beta = 0.98
Phi_calvo = 0.75**np.arange(T)
Psi_calvo = Psi_td(Phi_calvo, beta)
plt.plot(Psi_calvo[:21, [0, 5, 10]]);

Pass-through matrix for Taylor

In [20]:
Phi_taylor = 1.*(np.arange(T) < 4)
Psi_taylor = Psi_td(Phi_taylor, beta)
plt.plot(Psi_taylor[:21, [0, 5, 10]]);

General points about pass-through matrices

  • "Tent-shaped": an MIT shock to nominal marginal cost at some date passes through to aggregate shocks partly at that date...
    • but partly in anticipation
    • and partly afterward
    • as sticky prices are stuck after shock, and set in anticipation of shock
  • Perfectly flexible prices correspond to identity matrix Ψ=I
    • we're assuming that firms ideally want 100% pass-through if flexible
    • more generally, Ψ is mapping from statically optimal prices to actual aggregate prices
  • Impulse response to permanent nominal marginal cost shock given by Ψ1, i.e. sum of columns
    • converges to 1, i.e full long-run pass-through
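
A quick check using the Calvo and Taylor matrices computed above: the row sums of $\Psi$ (the response to a permanent nominal marginal cost shock) should converge to 1.

In [ ]:
# row sums of Psi = response to a permanent nominal marginal cost shock
plt.plot(Psi_calvo[:21].sum(axis=1), label='Calvo')
plt.plot(Psi_taylor[:21].sum(axis=1), label='Taylor')
plt.legend();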

New Keynesian Phillips curve: real marginal cost and inflation

The New Keynesian Phillips curve is derived in terms of real marginal cost $\hat{mc}_t = \widehat{MC}_t - \hat{P}_t$:

$$\pi_t = \kappa \hat{mc}_t + \beta \pi_{t+1}$$

which can then be related, in models, to the output gap, employment, etc.

How can we go from the pass-through matrix, relating nominal marginal cost and prices, to something like this, relating real marginal cost and inflation?

Solving fixed point

We get a fixed-point equation—conditional on real marginal cost, prices themselves affect the price-setting decision:

$$\hat{\mathbf{P}} = \Psi \widehat{\mathbf{MC}} = \Psi\left(\hat{\mathbf{mc}} + \hat{\mathbf{P}}\right)$$

Solving this gives:

$$\hat{\mathbf{P}} = (I - \Psi)^{-1}\Psi \, \hat{\mathbf{mc}}$$

Equivalent to $\hat{\mathbf{P}} = \left(\Psi + \Psi^2 + \Psi^3 + \cdots\right)\hat{\mathbf{mc}}$, representing "rounds" of feedback

Finally, take first differences (apply $I - L$, where $L$ is the lag matrix) to obtain inflation:

$$\boldsymbol{\pi} = \underbrace{(I - L)(I - \Psi)^{-1}\Psi}_{K} \, \hat{\mathbf{mc}}$$

Generalized Phillips curve

We have

$$\boldsymbol{\pi} = \underbrace{(I - L)(I - \Psi)^{-1}\Psi}_{K} \, \hat{\mathbf{mc}}$$

and we call the matrix $K$ mapping real marginal cost to inflation the generalized Phillips curve

The generalized Phillips curve for Calvo is the matrix representation of the NKPC:

$$K = \begin{pmatrix} \kappa & \beta\kappa & \beta^2\kappa & \cdots \\ 0 & \kappa & \beta\kappa & \cdots \\ 0 & 0 & \kappa & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}$$

Calculate generalized Phillips curve

Need high T to avoid artifacts from truncation close to T, but can implement formula directly:

In [10]:
def K_from_Psi(Psi):
    T = len(Psi)
    
    # calculate (I-Psi)^(-1)*Psi
    K = np.linalg.solve(np.eye(T) - Psi, Psi)
    
    # apply (I-L) to this, i.e. take first difference in rows
    K[1:] -= K[:-1]
    
    return K

Get what we expect for Calvo!

In [12]:
K_calvo = K_from_Psi(Psi_calvo)
plt.plot(K_calvo[:21, [0, 5, 10, 15]]);
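
As a quick numerical check (assuming the textbook Calvo slope $\kappa = (1-\theta)(1-\beta\theta)/\theta$, with $\theta = 0.75$ the survival probability used above), the top row of $K$ should be approximately $\kappa, \beta\kappa, \beta^2\kappa, \ldots$:

In [ ]:
theta = 0.75
kappa = (1 - theta)*(1 - beta*theta)/theta
print(kappa, K_calvo[0, 0])            # should be approximately equal
print(K_calvo[0, :4] / K_calvo[0, 0])  # should decay roughly geometrically at rate beta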

Something much uglier for Taylor, with a bit of inertia

Note: this looks like numerical error, but isn't—the generalized Phillips curve for Taylor pricing is really this ugly-looking!

Inertia: inflation persists a bit after real marginal cost shocks (at dates 0, 5, 10, 15):

In [14]:
K_taylor = K_from_Psi(Psi_taylor)
plt.plot(K_taylor[:21, [0, 5, 10, 15]]);

Smoother version of increasing hazard

In [17]:
lamb_ih = 0.5*(1 - 0.8 ** np.arange(T-1))
Phi_ih = np.ones(T)
Phi_ih[1:] = np.cumprod(1-lamb_ih) # survival from 1 onward is cumulative product of 1-hazard

Plot hazard rate of price adjustment:

In [18]:
plt.plot(lamb_ih[:21]);

Generalized Phillips curve for this case: smoother, still with inertia

In [21]:
Psi_ih = Psi_td(Phi_ih, beta)
K_ih = K_from_Psi(Psi_ih)
plt.plot(K_ih[:21, [0, 5, 10, 15]]);

But this case looks a lot like Calvo in the long run...

Same β rate of decay in anticipation (can prove this!), mostly forward-looking, inertia small in comparison!

In [30]:
plt.plot(K_ih[:71, [25, 45, 65]]);

Opposite case: decreasing hazard, anti-inertia

In [33]:
lamb_dh = 0.2*(1 + 3*0.8** np.arange(T-1))
Phi_dh = np.ones(T)
Phi_dh[1:] = np.cumprod(1-lamb_dh)
Psi_dh = Psi_td(Phi_dh, beta)
K_dh = K_from_Psi(Psi_dh)
plt.plot(K_dh[:21, [0, 5, 10, 15]]);

Also looks more like Calvo in long run...

Long-run decay of anticipation at rate β, distinct behavior confined to neighborhood of shock (and date 0)

In [34]:
plt.plot(K_dh[:71, [25, 45, 65]]);

Summary of what we've learned for time-dependent

  • No generalized Phillips curves look too different from Calvo
    • they're all more forward-looking than backward-looking, with anticipation decaying at β
  • Increasing hazards lead to a bit of inertia (and vice versa for decreasing hazards)
  • Intuition: with increasing hazards, people resetting prices today are more likely not to have reset prices recently...
  • ... so they missed any recent inflation, and need to set higher prices to "catch up", resulting in inertial inflation

Nerdy sequence-space Jacobian side note

The pass-through matrix is a sequence-space Jacobian, and the TD $\Psi$ has a fake news matrix $F_{t,s} = J_{t,s} - J_{t-1,s-1}$ equal to a simple rank-one matrix:

$$F = \frac{1}{\left(\sum_{s=0}^{\infty} \Phi_s\right)\left(\sum_{s=0}^{\infty} \beta^s \Phi_s\right)}\begin{pmatrix} \Phi_0 \\ \Phi_1 \\ \Phi_2 \\ \vdots \end{pmatrix}\begin{pmatrix} \Phi_0 & \beta\Phi_1 & \beta^2\Phi_2 & \cdots \end{pmatrix}$$

A "fake news" shock about marginal costs at future date $s$ raises reset prices at date 0 in proportion to $\beta^s \Phi_s$, which has a persistent effect at date $t$ in proportion to $\Phi_t$

Nerdy side note continued: recovering survival function

In [37]:
F_calvo = Psi_calvo.copy()
F_calvo[1:, 1:] -= Psi_calvo[:-1, :-1]

Only one nonzero singular value, i.e. this is rank one:

In [41]:
u, s, vh = np.linalg.svd(F_calvo[:100, :100]) # not looking out to T to avoid artifacts of truncation
s[:5]
Out[41]:
array([1.47714857e-01, 1.31197148e-16, 5.13419814e-17, 4.77971054e-17,
       4.50458450e-17])

The corresponding singular vectors are proportional to $\Phi_s$ and $\beta^s \Phi_s$, respectively; we can rescale so the first entry is 1 and see the exponential decay:

In [42]:
u[:6, 0] / u[0, 0]
Out[42]:
array([1.        , 0.75      , 0.5625    , 0.421875  , 0.31640625,
       0.23730469])
In [45]:
vh[0, :6] / vh[0, 0]
Out[45]:
array([1.        , 0.735     , 0.540225  , 0.39706537, 0.29184305,
       0.21450464])
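
We can also check the rank-one formula directly by rebuilding $F$ as the outer product above and comparing (a small sketch; the difference should be near machine precision):

In [ ]:
beta_Phi_calvo = Phi_calvo * beta**np.arange(T)
F_formula = np.outer(Phi_calvo, beta_Phi_calvo) / (Phi_calvo.sum() * beta_Phi_calvo.sum())
np.max(np.abs(F_calvo - F_formula))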

Other detour: mixture models

Can easily assume fraction of firms follow one TD rule, fraction follow another, just by combining pass-through matrices.

For instance, let's try simple Calvo mixture, with some more flexible and some less, averaging out to same 0.25 frequency of price change:

In [49]:
Phi_sticky = 0.9**np.arange(T)
Phi_flex = 0.6**np.arange(T)

Psi_sticky = Psi_td(Phi_sticky, beta)
Psi_flex = Psi_td(Phi_flex, beta)
Psi_mixture = 0.5*Psi_sticky + 0.5*Psi_flex

Generalized Phillips curve shows anti-inertia

Same intuition as decreasing hazards: price-setters are more likely to have recently reset, so they lower prices after inflation to go closer to sticky counterparts

In [50]:
K_mixture = K_from_Psi(Psi_mixture)
plt.plot(K_mixture[:21, [0, 5, 10, 15]]);

Menu cost models

Basic menu cost model

Continuum of firms; assume each firm's ideal log price $p^*_{it}$ follows a random walk $p^*_{it} = p^*_{it-1} + \epsilon_{it}$ with symmetric, mean-zero shocks $\epsilon_{it}$.

Defining the idiosyncratic price gap as $x_{it} \equiv p_{it} - p^*_{it}$, the firm objective in the simple model becomes

$$\min_{\{x_{it}\}} \; E_0 \sum_{t=0}^{\infty} \beta^t \left[\tfrac{1}{2}\left(x_{it} - \widehat{MC}_t\right)^2 + \xi \, \mathbf{1}_{\{x_{it} \neq x_{it-1} - \epsilon_{it}\}}\right]$$

i.e. pick a state-contingent plan for price gaps $x_{it}$ to minimize the quadratic loss, subject to time-varying aggregate marginal cost shocks and a menu cost $\xi$ for adjusting your price (i.e. for not letting the price gap drift with the shocks)

Optimal policy is "Ss" rule

Optimal policy: adjust to the reset point $x^*_t$ if the shock takes you outside the adjustment bands $[\underline{x}_t, \bar{x}_t]$

$$x_{it} = \begin{cases} x^*_t & \text{if } x_{it-1} - \epsilon_{it} \notin [\underline{x}_t, \bar{x}_t] \\ x_{it-1} - \epsilon_{it} & \text{otherwise} \end{cases}$$

Note that the policies $(x^*_t, \underline{x}_t, \bar{x}_t)$ are common across all firms $i$

Symmetry of shocks implies $x^*_t = 0$ and $\underline{x}_t = -\bar{x}_t$ in steady state (simulated in the sketch below)
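
A minimal Monte Carlo sketch of this rule, using $\sigma = 0.05$ and $\bar{x} \approx 0.068$ (roughly the values calibrated below), just to illustrate the policy; the implied adjustment frequency should come out near the 0.25 target used later:

In [ ]:
rng = np.random.default_rng(0)
x = np.zeros(100_000)                        # start all firms at the reset point x* = 0
for _ in range(200):                         # iterate toward the stationary distribution
    x = x - rng.normal(0, 0.05, size=x.size) # price gap drifts with the idiosyncratic shock
    adjust = np.abs(x) > 0.068               # outside the band [-xbar, xbar]?
    x[adjust] = 0.0                          # adjusters reset to x* = 0
adjust.mean()                                # fraction adjusting in the last period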

Law of motion for density of price gaps

Let $g(x)$ be the steady-state density of price gaps $x_{it-1} - \epsilon_{it}$ in any period, "before" firms outside the bands have adjusted to $x^*$, let $f$ be the density of the shocks $\epsilon_{it}$, and let freq be the steady-state fraction of firms adjusting:

$$\text{freq} = 1 - \int_{-\bar{x}}^{\bar{x}} g(x)\,dx$$
$$g(x) = \text{freq}\cdot f(x) + \int_{-\bar{x}}^{\bar{x}} f(x - x')\,g(x')\,dx'$$

Calculate steady-state density g

Assume the shocks to the ideal log price are normal; first define a mean-zero normal pdf:

In [52]:
C = 1/np.sqrt(2*np.pi)
def normal_pdf(x, sigma):
    return C/sigma*np.exp(-(x/sigma)**2/2)

We'll assume quarterly calibration where standard deviation of shocks is 0.05 (roughly calibrated value from our paper):

In [53]:
sigma = 0.05 # roughly the calibrated value in our New Pricing Models paper
def f(x):
    return normal_pdf(x, sigma)

Steady-state density continued

Assume we're given the adjustment band $\bar{x}$ (and by symmetry $-\bar{x}$ on the other side), and iterate on the law of motion for $g$:

In [64]:
def ss_dist(xbar):
    xs = np.linspace(-xbar, xbar, 60) # 60 nodes for cubic spline, could exploit symmetry
    gxs = np.full(60, 1/(2*xbar)) # initially guess uniform
    g = interpolate.CubicSpline(xs, gxs)
    
    for it in range(100):
        freq = 1 - g.integrate(-xbar, xbar) # frequency of price adjustment
        
        # evaluate g(x) at each node x for cubic spline, reinterpolate
        # painfully inefficient to call cubic spline over and over again w/overhead
        # but we won't bother with more efficiency here
        gxs_new = (freq*f(xs) 
            + np.array([integrate.quad(lambda xp: f(x - xp)*g(xp), -xbar, xbar)[0] for x in xs]))
        g = interpolate.CubicSpline(xs, gxs_new)
        
        if np.max(np.abs(gxs_new - gxs)) < 1E-10:
            return g
        gxs = gxs_new
        
    raise ValueError('No convergence')

Plot density for arbitrary x¯:

In [65]:
xbar = 0.1
g = ss_dist(xbar)
freq = 1 - g.integrate(-xbar, xbar)
freq
Out[65]:
0.1446444875703089
In [67]:
xs = np.linspace(-xbar, xbar, 100)
plt.plot(xs, g(xs));

Calibrate x¯ to hit average frequency of 0.25

Don't need to know ξ itself to do this!

In [68]:
xbar = optimize.brentq(lambda xbar: ss_dist(xbar).integrate(-xbar, xbar) - 0.75, 0.01, 0.1)
xbar
Out[68]:
0.06779121612923503
In [69]:
g = ss_dist(xbar)
freq = 1 - g.integrate(-xbar, xbar)
freq
Out[69]:
0.249999999999991

Actual density looks similar to what we already saw

In [70]:
xs = np.linspace(-xbar, xbar, 100)
plt.plot(xs, g(xs));

Defining expectation functions in our context (cf fake news algorithm)

If end-of-period price gap is x today, what (following steady-state policy) will it be on average in t periods?

Law of iterated expectations, plus symmetry (reset to zero, where expectations are zero), imply:

$$E_t(x) = \int_{-\bar{x}}^{\bar{x}} f(x' - x)\, E_{t-1}(x')\,dx'$$
In [72]:
xs = np.linspace(-xbar, xbar, 60)
def E_recursion(E):
    Exs = [integrate.quad(lambda xp: f(xp - x)*E(xp), -xbar, xbar)[0] for x in xs]
    return interpolate.CubicSpline(xs, Exs)

Plot some expectation functions

They decay much faster than the actual probability of resetting prices would suggest, due to "selection effects"

In [73]:
E0 = lambda x: x
Es = [E0]
for i in range(4):
    Es.append(E_recursion(Es[-1]))

for i in range(5):
    plt.plot(xs, Es[i](xs), label=f'E{i}')
plt.legend();

Or ignore the first to see typical shape

In [74]:
for i in range(1, 5):
    plt.plot(xs, Es[i](xs), label=f'E{i}')
plt.legend();

Exactly how fast do these decay?

Let's look at "persistence" of expectation function starting from reset point x=0 or adjustment band x¯=0:

ΦteEt(x¯)x¯             ΦtiEt(0)=limx0Et(x)x
In [75]:
TPhi = 30
Phi_e = np.empty(TPhi)
Phi_i = np.empty(TPhi)

Et = interpolate.CubicSpline(xs, xs) # identity
for t in range(TPhi):
    Phi_e[t] = Et(xbar)/xbar
    Phi_i[t] = Et.derivative()(0)
    Et = E_recursion(Et)

Very fast decay

Initially persistence at x¯ is less, because people are more likely to adjust there:

In [79]:
plt.plot(Phi_e[:11], label=r'persistence from $\overline{x}$')
plt.plot(Phi_i[:11], label=r'persistence from $x^*=0$')
plt.legend();

But "hazard" very quickly approaches same constant

In [80]:
lambda_e = (Phi_e[:-1] - Phi_e[1:])/Phi_e[:-1]
lambda_i = (Phi_i[:-1] - Phi_i[1:])/Phi_i[:-1]
plt.plot(lambda_e[:11], label=r'from $\overline{x}$')
plt.plot(lambda_i[:11], label=r'from $x^*=0$')
plt.legend();

Can also iterate to track actual "survival probability" of prices

In [82]:
TPhi = 30
Phi_actual = np.empty(TPhi)
Et_noreset = interpolate.CubicSpline(xs, np.ones(len(xs)))
for t in range(TPhi):
    if t < 4:
        plt.plot(xs, Et_noreset(xs), label=f'Price survives to t={t}')
    Phi_actual[t] = Et_noreset(0)
    Et_noreset = E_recursion(Et_noreset)
plt.legend();

Decays much more slowly than persistences—"selection effect"!

In [85]:
lambda_actual = 1 - Phi_actual[1:] / Phi_actual[:-1]
plt.plot(lambda_e[:11], label=r'persistence from $\overline{x}$')
plt.plot(lambda_i[:11], label=r'persistence from $x^* =0$')
plt.plot(lambda_actual[:11], label='actual survival hazard')
plt.legend();

Proposition 1 from Auclert, Rigato, Rognlie, Straub (2023)

Assume the aggregate price is the average of price gaps, $\hat{P}_t = \int x_{it}\,di$ (idiosyncratic shocks average out to zero, so price gaps give the price level)

The pass-through matrix from nominal marginal cost to price for the menu cost model is given by a mixture of time-dependent models

$$\Psi = \alpha \Psi_{\Phi^e} + (1 - \alpha)\Psi_{\Phi^i}$$

where $\Psi_{\Phi^e}$ and $\Psi_{\Phi^i}$ are pass-through matrices with survival functions $\Phi^e_t$ and $\Phi^i_t$

The first term gives the effect of the extensive margin, the second the intensive margin, with weight on the extensive margin of $\alpha \equiv 2 g(\bar{x})\,\bar{x} \sum_{t=0}^{\infty} \Phi^e_t$

Partial intuition: "persistence" at x¯ gives persistent effect of extensive margin decisions

  • Suppose I'm right at the threshold x¯ and decide not to adjust today

    • effect on price level today: +x¯

    • effect tomorrow: +E1(x¯)

    • ... effect in t periods: +Et(x¯)

  • Expectation function summarizes effect of my decision today on future price level
    • it incorporates fact that if I don't adjust today, I'll probably adjust soon anyway
    • "persistence" of effect is Φte=Et(x¯)/x¯

Re-plot expectation functions to visualize this

In [81]:
for i in range(5):
    plt.plot(xs, Es[i](xs), label=f'E{i}')
plt.legend();

Partial intuition, continued: "persistence" at $x^* = 0$ gives the persistent effect of intensive margin decisions

  • Suppose I'm adjusting and I decide to raise my price by $dx$, slightly above zero

    • effect on price level today: $+dx$

    • effect tomorrow: $+E_1(dx) = +E_1'(0)\,dx$

    • ... effect in $t$ periods: $+E_t'(0)\,dx$

  • Again, the expectation function summarizes the effect of my decision today on the future price level
    • it incorporates the fact that if I set a higher price today, I'm more likely to adjust downward in the future
    • "persistence" of the higher price is $\Phi^i_t \equiv E_t'(0)$

Taking stock

  • We've seen that extensive margin decisions "persist" at $\Phi^e_t$ and intensive margin decisions "persist" at $\Phi^i_t$, each like in a properly calibrated time-dependent model
  • But that's only one side of time-dependent models!
  • Key to time-dependent models is that the future matters in proportion to $\beta^s \Phi_s$!
    • Recall the reset-price equation for the time-dependent case:
$$\hat{P}_t^* = \frac{\sum_{s=0}^{\infty} \beta^s \Phi_s \widehat{MC}_{t+s}}{\sum_{s=0}^{\infty} \beta^s \Phi_s}$$

What's the intuition for why future shocks matter in proportion to $\Phi_s$?

  • Loose intuition: when making a decision at the extensive or intensive margin, you care about future marginal costs only insofar as your decision today actually affects your future price
    • and that's exactly what these persistences $\Phi^e_t$ and $\Phi^i_t$ are capturing!
  • More formal intuition: an envelope theorem argument implies that if there's some perturbation to $\widehat{MC}_s$, then for $t < s$, we have the same recursion as for the expectation functions, with an added $\beta$:
$$dV_t(x) = \beta\, E_{x'|x}\left[dV_{t+1}(x')\right]$$
  • Since $dV_s(x) = x = E_0(x)$, it follows that $dV_t(x) = \beta^{s-t} E_{s-t}(x)$
    • future marginal costs affect the value function in proportion to the expectation function!

Let's implement!

Pad with zeros to get T-length, build pass-through matrices for extensive and intensive margin:

In [86]:
Phi_e_long = np.zeros(T)
Phi_e_long[:TPhi] = Phi_e
Psi_e = Psi_td(Phi_e_long, beta)

Phi_i_long = np.zeros(T)
Phi_i_long[:TPhi] = Phi_i
Psi_i = Psi_td(Phi_i_long, beta)

Weight extensive by $\alpha$, intensive by $1 - \alpha$:

In [88]:
alpha = 2*g(xbar)*xbar*Phi_e.sum()
alpha
Out[88]:
0.612530214997201
In [89]:
Psi = alpha*Psi_e + (1-alpha)*Psi_i
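
As a quick sanity check (a property inherited from the time-dependent building blocks), the row sums of this $\Psi$, i.e. the response to a permanent nominal marginal cost shock, should still converge to 1, and do so quickly here:

In [ ]:
(Psi @ np.ones(T))[:8]  # long-run pass-through: row sums should approach 1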

Visualize pass-through matrix

Very sharp spikes, corresponding to near-flexible prices (rapidly declining "virtual survival", i.e. low persistence of decisions):

In [90]:
plt.plot(Psi[:21, [0, 5, 10]]);

What about generalized Phillips curve?

Insanely close to the Calvo New Keynesian Phillips curve, but a lot more flexible (slope of 2, not 0.1)!

In [92]:
K = K_from_Psi(Psi)
plt.plot(K[:21, [0, 5, 10, 15]]);

Why is this?

Both extensive and intensive very quickly converge to the same constant hazard (corresponding to leading odd eigenvalue—cf Alvarez and Lippi Ecta 2022 and our paper), much higher than actual probability of adjusting (near 0.25)...

... so they behave like a Calvo, but one with more like a 0.7 quarterly adjustment frequency!

In [93]:
plt.plot(lambda_e[:11], label=r'persistence from $\overline{x}$')
plt.plot(lambda_i[:11], label=r'persistence from $x^* =0$')
plt.plot(lambda_actual[:11], label='actual survival hazard')
plt.legend();
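
As a rough illustration (treating the 0.7 adjustment rate above as an approximation, not an exact limiting value), we can overlay one column of the menu cost generalized Phillips curve on a Calvo counterpart with survival $0.3^s$; the two should be broadly similar in shape and rough magnitude:

In [ ]:
K_calvo_fast = K_from_Psi(Psi_td(0.3**np.arange(T), beta))
plt.plot(K[:21, 5], label='menu cost')
plt.plot(K_calvo_fast[:21, 5], '--', label='Calvo, adjustment freq. 0.7')
plt.legend();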

Adding "free resets"

Modified menu cost model with free resets

  • Suppose that there's a λ chance of a "free reset", where menu cost is zero
    • Then always adjust back to $x^*_t$
    • Higher λ as share of total resets means adjustment is less state-dependent, more Calvo-like
    • Close to "Calvo-Plus" model of Nakamura and Steinsson, also others in literature
  • We'll redo steps quickly modifying the model to account for this, not going into much detail
    • We'll calibrate to $\lambda = 0.2$ while keeping the adjustment frequency at 0.25, so that four-fifths of adjustments are free
    • Same equivalence result holds with suitably redefined expectations functions and weights!

Modified steady-state density function (almost same as before)

In [98]:
lamb = 0.2
def ss_dist(xbar, tol=1E-7):
    xs = np.linspace(-xbar, xbar, 60)
    gxs = np.full(60, 1/(2*xbar))
    g = interpolate.CubicSpline(xs, gxs)
    
    for it in range(120):
        freq = 1 - (1-lamb)*g.integrate(-xbar, xbar)
        gxs_new = (freq*f(xs) 
            + np.array([integrate.quad(lambda xp: (1-lamb)*f(x - xp)*g(xp), -xbar, xbar)[0] for x in xs]))
        g = interpolate.CubicSpline(xs, gxs_new)
        
        if np.max(np.abs(gxs_new - gxs)) < tol:
            return g
        gxs = gxs_new
        
    raise ValueError('No convergence')

Need wider adjustment bands to match same frequency

Now that so many adjustments are free, bands have to be wider to hit same total frequency of adjustments:

In [99]:
xbar = optimize.brentq(lambda xbar: (1-lamb)*ss_dist(xbar).integrate(-xbar, xbar) - 0.75, 0.1, 0.15)
g = ss_dist(xbar, tol=1E-10)
freq = 1 - (1-lamb)*g.integrate(-xbar, xbar)
xbar, freq
Out[99]:
(0.13854976682399764, 0.2500000003281999)

Much less density near bands as well:

In [97]:
xs = np.linspace(-xbar, xbar, 100)
plt.plot(xs, g(xs));

Also rewrite expectation iteration and functions

Same as before, but adding an extra (1λ) factor to account for free resets to zero:

In [101]:
xs = np.linspace(-xbar, xbar, 60)
def E_recursion(E):
    Exs = [integrate.quad(lambda xp: (1-lamb)*f(xp - x)*E(xp), -xbar, xbar)[0] for x in xs]
    return interpolate.CubicSpline(xs, Exs)

Rewrite code that calculates virtual and actual survival:

In [102]:
TPhi = 50
Phi_e = np.empty(TPhi)
Phi_i = np.empty(TPhi)
Phi_actual = np.empty(TPhi)

Et = interpolate.CubicSpline(xs, xs) # identity
Et_noreset = interpolate.CubicSpline(xs, np.ones(len(xs)))

for t in range(TPhi):
    Phi_e[t] = Et(xbar)/xbar
    Phi_i[t] = Et.derivative()(0)
    Phi_actual[t] = Et_noreset(0)
    Et = E_recursion(Et)
    Et_noreset = E_recursion(Et_noreset)

What do hazards look like now?

Still converging to a constant that differs from the actual hazard, but convergence takes longer ($\bar{x}$ is higher, so there's a bigger difference between starting at $\bar{x}$ and at $x^* = 0$) and the gap is smaller:

In [103]:
lamb_e = 1 - Phi_e[1:] / Phi_e[:-1]
lamb_i = 1 - Phi_i[1:] / Phi_i[:-1]
lamb_actual = 1 - Phi_actual[1:] / Phi_actual[:-1]
plt.plot(lamb_e[:11], label='extensive')
plt.plot(lamb_i[:11], label='intensive')
plt.plot(lamb_actual[:11], label='actual')
plt.legend();

Calculate pass-through matrix now

In [104]:
Phi_e_long = np.zeros(T)
Phi_e_long[:TPhi] = Phi_e
Psi_e = Psi_td(Phi_e_long, beta)

Phi_i_long = np.zeros(T)
Phi_i_long[:TPhi] = Phi_i
Psi_i = Psi_td(Phi_i_long, beta)

Weight on extensive margin is smaller, because so many adjustments are free that it's less important:

In [105]:
alpha = 2*(1-lamb)*g(xbar)*xbar*Phi_e.sum()
alpha
Out[105]:
0.32028114429238486
In [107]:
Psi = alpha*Psi_e + (1-alpha)*Psi_i

Visualizing pass-through matrix

Less sharp spikes, corresponding to less flexible prices:

In [108]:
plt.plot(Psi[:21, [0, 5, 10]]);

Generalized Phillips curve a bit less Calvo-like, but still extremely close

(If you run the NKPC as a regression on data that follows this, it still holds almost exactly.)

In [109]:
K = K_from_Psi(Psi)
plt.plot(K[:21, [0, 5, 10, 15]]);

Paradox: making model more Calvo-like gives it a slightly less Calvo-like shape?

  • Might seem like a paradox: why is the basic menu cost model a closer fit to the Calvo NKPC than a model with Calvo-like free adjustments?
  • Explanation of the paradox: wider bands mean that the extensive and intensive virtual hazards are more different and take longer to converge to the constant hazard characteristic of Calvo
    • In limit of only free adjustments, converges to Calvo, but takes a while to get there...
  • Paradox is only about shape rather than scale: slope is much closer to the Calvo case (but still higher)

Generalizing the approach

  • More time-dependent models needed with more distinct margins
    • asymmetries or drift from inflation: 3 models to represent separate lower and upper extensive margins
    • mean-reverting shocks: even more models
    • different objective functions or aggregation: slightly generalize TD model
  • Go all the way to generalized hazard functions (cf Alvarez Lippi Oskolkov) in paper, requiring a continuum of TD models representing extensive margin
  • Similar results should hold for similar models with fixed costs
  • Near-equivalence to Calvo remarkably robust!

Big picture for pricing models

  • Hard to escape New Keynesian Phillips curve functional form!
  • Menu cost models better-microfounded but worsen "slope puzzle" (gap between micro and macro) for the NKPC
  • Some promising other routes
    • multisector (e.g. Rubbo; Afrouzi and Bhattarai)
    • departures from FIRE (e.g. Angeletos and Lian; also Ludwig's lecture for techniques!)

Big picture for our techniques

  • Generalizing sequence-space Jacobians to new kinds of models
  • Example of simple discrete choice, in this case without taste shocks
    • Needed continuous shocks and to be careful about integration
  • Equivalence result is really just the fake news algorithm in disguise
    • If you try to figure out what the algorithm means, you'll find it!