Insurance Working Group Meets
April 5 and 6, 2013
Benjamin Handel, University of California, Berkeley and NBER, and Jonathan Kolstad, University of Pennsylvania and NBER
Traditional models of insurance choice are predicated on rational choice and risk protection. When these models are taken to data, it is typical to use the choices that consumers make from menus of health insurance options to estimate their risk preferences. A key empirical assumption is that, conditional on observed health risk, risk preferences represent the primary component of persistent unobserved preferences. If other factors, such as information about plan options or perceived plan hassle costs, also affect choices, then risk preference estimates will be biased. Beyond biasing positive predictions of choices, omitting such unobserved choice factors also distorts normative welfare analysis. Handel and Kolstad combine administrative data on health plan choices with unique survey data on consumer beliefs and other unobserved preference factors in order to separately identify risk preferences, information frictions, and plan hassle costs. The datasets are linked at the individual level, allowing the researchers to develop a simple empirical framework. They demonstrate that including additional choice factors affects standard preference parameter estimates. They then develop a welfare framework that integrates information frictions and hassle costs, and they assess the welfare impact of a counterfactual menu design with only a high-deductible health plan option. The welfare loss from the restricted menu of plans is 46 percent lower after accounting for information frictions and hassle costs, illustrating that welfare implications, and subsequent policy decisions, differ substantially when these additional choice factors are taken into account.
Keith Ericson, Boston University and NBER, and Amanda Starc, University of Pennsylvania
Standardization of complex products is widely touted as improving consumer decisions and intensifying price competition, but evidence on standardization's effects is limited. Ericson and Starc examine a natural experiment: the standardization of health insurance plans on the Massachusetts Health Insurance Exchange. Pre-standardization, firms had wide latitude to design contracts, which were then grouped into tiers of quality. A regulatory change then forced firms to standardize the financial features of their insurance plans and to offer seven defined plans; plans remained differentiated on network and brand. The researchers find that standardization altered consumers' choices, their valuation of plan attributes, and the market equilibrium. Post-standardization, consumers shifted to relatively more generous health insurance plans. Using a discrete choice model, they show that this shift is explained by changing weights placed on plan attributes. They evaluate the welfare effects of standardization and conduct a number of counterfactuals, showing that while standardization increased welfare, firms captured some of the surplus by re-optimizing premiums. They also use hypothetical choice experiments with different insurance menus to replicate the effect of standardization and to conduct alternative counterfactuals.
Jeffrey Brown, University of Illinois and NBER; Arie Kapteyn, RAND Corporation; Erzo Luttmer, Dartmouth College and NBER; and Olivia Mitchell, University of Pennsylvania and NBER
Brown, Kapteyn, Luttmer, and Mitchell provide experimental evidence that individuals have difficulty valuing annuities, and this difficulty – not a preference for lump sums – can help to explain observed low levels of annuity purchases. Although the median price at which people are willing to sell an annuity is close to median actuarial values, this masks notable heterogeneity in responses, including substantial numbers of respondents whose responses are difficult to reconcile with optimizing behavior under any reasonable parameter assumptions. They also discover that people are willing to pay substantially less to buy a larger annuity, a result not due to liquidity constraints or endowment effects. Strikingly, they also learn that individual responses to the buy-versus-sell decisions are negatively correlated, an effect that is stronger for the less financially sophisticated. These findings are consistent with boundedly rational consumers who adopt a "buy low, sell high" heuristic when faced with a complex trade-off. Moreover, at the margin, subjective valuations vary nearly one-for-one with actuarial values but are uncorrelated with utility-based measures designed to measure the insurance value of annuities. This supports the hypothesis that people use simplifying heuristics to think about annuities, rather than engaging in optimizing behavior. The results also underscore the difficulty of explaining the cross-sectional variation in annuity valuations using standard empirical models. These findings raise doubt about whether most consumers can make optimal decisions about annuitization.
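The actuarial value against which the study compares subjective valuations is the survival-weighted, discounted expected value of the annuity's payments. A minimal sketch of that calculation, with illustrative payment, survival, and discount figures (not taken from the study):

```python
def annuity_actuarial_value(payment, survival_probs, rate):
    """Expected present value of an annual life annuity: each year's
    payment is weighted by the probability of surviving to receive it
    and discounted back to the present at the given interest rate."""
    return sum(payment * s / (1.0 + rate) ** t
               for t, s in enumerate(survival_probs, start=1))

# Illustrative: $100 per year over three years, declining survival
# probabilities, 3 percent discount rate.
print(round(annuity_actuarial_value(100.0, [0.95, 0.90, 0.84], 0.03), 2))
```

With a zero discount rate the value reduces to the payment times the sum of survival probabilities, which makes the survival-weighting easy to verify by hand.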
Tatyana Deryugina, University of Illinois at Urbana-Champaign
The government acting as an insurer of last resort can create moral hazard if agents respond by taking on more risk or reducing private insurance coverage, expecting to be bailed out. Theoretically, ex ante measures can ameliorate this problem, but it is not known how effective actual policies are in reducing ex post government spending. Using instrumental variables and detailed building codes data, Deryugina shows that stricter building codes reduce the amount of money spent by the federal government following a hurricane. Specifically, she finds that raising the required wind speed a building must withstand by one mile per hour decreases subsequent federal spending by 2.2 to 4 percent, or $14,000 to $25,600 per affected zip code per hurricane. She also shows that this decrease is entirely driven by reduced aid to homeowners as opposed to renters.
Dwight Jaffee and Johan Walden, University of California, Berkeley, and Rustam Ibragimov, Harvard University
Jaffee, Walden, and Ibragimov study a competitive insurance industry in which insurers have limited liability, face frictional costs in holding capital, and offer coverage over a range of risk classes. They distinguish monoline and multiline industry structures, and provide what they believe are the first propositions indicating the conditions under which each structure is optimal. Markets in which risks are limited in number, asymmetric, or correlated may be best served by monoline insurers. Markets characterized by a large number of essentially independent risks, on the other hand, will be best served by many multiline firms, each with a different level of capital. These results are consistent with the observed structures in the insurance industry, and have implications more broadly for financial services industries, including banking.
Ralph Koijen, University of Chicago and NBER, and Motohiro Yogo, Federal Reserve Bank of Minneapolis
During the financial crisis, life insurers sold long-term policies at deep discounts relative to actuarial value. In December 2008, the average markup was -19 percent for life annuities and -57 percent for universal life insurance. This extraordinary pricing behavior was a consequence of financial frictions and statutory reserve regulation that allowed life insurers to record far less than a dollar of reserve per dollar of future insurance liability. Koijen and Yogo identify the shadow cost of financial frictions through exogenous variation in required reserves across different types of policies. The shadow cost was $2.32 per dollar of statutory capital for the average insurance company from November 2008 to February 2009.
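The markup figures above compare premiums with actuarial values. As a minimal numerical illustration (the $81 price per $100 of actuarial value below is hypothetical, chosen only to reproduce the reported -19 percent average markup on life annuities):

```python
def markup(premium, actuarial_value):
    """Markup of a policy's price over the actuarial value of its
    future payouts; negative when the policy sells at a discount."""
    return premium / actuarial_value - 1.0

# Hypothetical: an annuity priced at $81 per $100 of actuarial value.
print(f"{markup(81.0, 100.0):+.0%}")  # prints -19%
```

A negative markup means the insurer is selling below the expected discounted cost of the liability, which is the sense in which the December 2008 pricing was extraordinary.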
Daniel Gottlieb, University of Pennsylvania, and Kent Smetters, University of Pennsylvania and NBER
Life insurance is a large yet poorly understood industry. A final death benefit is not paid for a majority of policies. Insurers make money on customers who let their policies lapse, and they lose money on customers who keep their coverage. Policy loads are inverted relative to the dynamic pattern consistent with reclassification risk insurance. As an industry, insurers lobby to ban secondary markets despite the liquidity those markets provide. These and other stylized facts cannot be explained easily by information problems alone. Gottlieb and Smetters demonstrate that a simple model of narrow framing, in which consumers do not fully account for their need for future liquidity when purchasing insurance, offers a simple and unified explanation.
Levon Barseghyan and Francesca Molinari, Cornell University, and Joshua Teitelbaum, Georgetown Law School
Economists strive to develop models that satisfy a criterion of stability: a single parameterization of the model should be consistent with observed behavior in closely related contexts. Barseghyan, Molinari, and Teitelbaum exploit the stability criterion to infer the structure of households' risk preferences, as represented by a probability-distortion model and revealed by deductible choices in three lines of property insurance. They use a partial identification approach, making minimal additional assumptions and adding them sequentially in order to show clearly the role of each in sharpening the inference. The idea is to use revealed-preference arguments to bound the model parameters, and to exploit the stability criterion and other minimal assumptions to sharpen the inference. The researchers make four additional assumptions. The first is constant absolute risk aversion (CARA); the second is plausibility, that is, that there exists a single coefficient of absolute risk aversion and three distorted probabilities (one for each context) that can rationalize a household's choices. Given CARA, plausibility cannot be rejected for 87 percent of households. Their last two assumptions, monotonicity and linearity, are shape restrictions on the probability-distortion function: monotonicity requires that the distortion function be increasing, and linearity requires that it be linear. Monotonicity cannot be rejected for 85 percent of "rationalizable" households (those that satisfy plausibility), and monotonicity and linearity jointly cannot be rejected for 81 percent of those households. By contrast, only 40 percent of households are consistent with expected utility, which entails two additional restrictions: unit slope and zero intercept. The researchers address two further questions: 1) What single probability-distortion function comes closest to rationalizing the choices of all households? 2) What fraction of households' behavior can be rationalized by this single probability-distortion function? To answer these questions, they propose an estimator that minimizes the expected Euclidean distance to each household's set of "stable" probability-distortion functions. They find that the estimated probability-distortion function is remarkably similar to the one resulting from a fully parametric maximum likelihood approach, and that for 18 percent of "linear" households, all three choices can be fully rationalized by this single function.
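The revealed-preference bounding idea can be sketched as follows, under strong simplifying assumptions that are not from the paper: a single potential claim that costs the household the full deductible, CARA utility with a known coefficient, and an illustrative two-plan menu. Function names and numbers are hypothetical. For a fixed risk aversion coefficient, each pairwise preference inequality is linear in the distorted claim probability, so the set of distorted probabilities rationalizing an observed choice is an interval:

```python
import math

def cara_u(x, r):
    # CARA utility; r > 0 is the coefficient of absolute risk aversion.
    return -math.exp(-r * x)

def rationalizing_interval(menu, chosen, r, w=0.0):
    """Interval of distorted claim probabilities Omega under which the
    chosen (premium, deductible) pair weakly beats every alternative,
    assuming one potential claim costing the full deductible.
    EU(p, d; Omega) = (1 - Omega) u(w - p) + Omega u(w - p - d)."""
    p0, d0 = menu[chosen]
    lo, hi = 0.0, 1.0
    for i, (p1, d1) in enumerate(menu):
        if i == chosen:
            continue
        # EU(chosen) - EU(alt) = a + Omega * b, linear in Omega.
        a = cara_u(w - p0, r) - cara_u(w - p1, r)
        b = (cara_u(w - p0 - d0, r) - cara_u(w - p0, r)) \
            - (cara_u(w - p1 - d1, r) - cara_u(w - p1, r))
        if abs(b) < 1e-12:
            if a < 0:
                return None  # chosen plan dominated for every Omega
            continue
        thresh = -a / b
        if b > 0:
            lo = max(lo, thresh)  # need Omega >= thresh
        else:
            hi = min(hi, thresh)  # need Omega <= thresh
    return (lo, hi) if lo <= hi else None

# Hypothetical menu: pay $60 more in premium for a $400 lower deductible.
menu = [(240.0, 100.0), (180.0, 500.0)]
print(rationalizing_interval(menu, chosen=0, r=0.001))
```

Intersecting such intervals across contexts, and asking whether a common distortion function fits inside all of them, is the flavor of the stability-based sharpening described above; the paper's actual bounds accommodate richer claim distributions.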