January 24 and 25, 2014
Severin Borenstein and Benjamin Handel, both of NBER and the University of California, Berkeley, Organizers
Nicola Lacetera, University of Toronto and NBER; Bradley Larsen, Stanford University; Devin Pope, University of Chicago and NBER; and Justin Sydnor, University of Wisconsin
Bid Takers or Market Makers? The Effect of Auctioneers on Auction Outcomes (NBER Working Paper 19731)
A large body of research has explored the importance of auction design and information structure for auction outcomes. There is a smaller body of research on the importance of the auction process. For example, in many auctions, auctioneers are present and can impact the auction process by varying starting prices, the level of price adjustments, the speed of the auction, the way they interact with auction participants, or their characteristic chant that is intended to excite buyers. Lacetera, Larsen, Pope, and Sydnor explore the importance of the auction process by testing whether auctioneers have a systematic effect on auction outcomes. The authors analyze more than 850,000 wholesale used car auctions and find large and significant differences in outcomes (probability of sale, price, and auction speed) across auctioneers. The performance heterogeneities are stable across time and correlate with subjective evaluations of auctioneers provided by the auction house. Although the available data do not allow the authors to conclusively isolate mechanisms, a range of evidence suggests a role for tactics that generate excitement among bidders. Overall, these findings illustrate the complexities of auction environments and how outcomes can be affected by subtle changes in process.
Eric Anderson, Northwestern University; Emi Nakamura, Columbia University and NBER; Duncan Simester, MIT; and Jon Steinsson, Columbia University and NBER
Informational Rigidities and the Stickiness of Temporary Sales (NBER Working Paper 19350)
Using a unique dataset from a large retailer, Anderson, Nakamura, Simester, and Steinsson study two types of retailer response to a demand or cost shock: (1) a change in the regular price, or (2) a change in the depth or frequency of discounts. The authors identify cost shocks using changes in the wholesale price together with changes in commodity prices, and identify demand shocks using cross-sectional variation in unemployment rates. They find that if the retailer responds, it does so through the regular price rather than through changes in the depth or frequency of discounts. To explain these findings, the authors present several institutional facts that highlight differences in the ways that regular prices and discounts are planned, funded, and promoted. A key conclusion is that the frequent price changes associated with discounts are based on old information, and so they do not reflect a rapid response of prices to economic shocks. The authors argue that information "stickiness" for temporary sales means that temporary sales are "sticky plans" that are updated infrequently. Even though temporary sales lead to frequent price changes, the underlying mechanism that controls the frequency and depth of these price changes is sticky. Evidence that a retailer only responds to macro shocks through its regular price and not through its sale prices suggests that price changes resulting from sales should be excluded when evaluating macro price flexibility. This sharply lowers the observed frequency of price changes. The authors also show that the decision to include or exclude price changes resulting from sales affects other salient features of pricing behavior. They conclude that including price variation from sales when measuring macro price flexibility may be misleading.
Anderson, Kellogg, and Salant show that crude oil production from existing wells in Texas does not respond to current or expected future oil prices, contradicting a basic prediction of Hotelling's (1931) canonical model of exhaustible resource extraction. In contrast, the drilling of new wells exhibits a strong price response, as does the rental rate on drilling rigs. To explain these observations, the authors reformulate Hotelling's model as a drilling problem, in which firms choose when to drill new wells, but flow from existing wells is limited by a capacity constraint that decays toward zero as reservoir pressure declines. This drilling problem implies a modified Hotelling rule for discounted revenue flows net of drilling costs. The model rationalizes the empirical findings from Texas and can replicate several other well-known features of the oil industry: local production peaks, backwardated price expectations following unanticipated positive demand shocks, and expectations that prices will rise faster than the interest rate following large, unanticipated negative demand shocks.
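For reference, the baseline condition that the Texas production data contradict can be stated in its textbook form (standard notation, not the paper's own):

```latex
% Canonical Hotelling rule: the scarcity rent (price p_t minus a
% constant marginal extraction cost c) grows at the interest rate r,
% so production from existing wells should respond to expected prices.
\frac{\dot{p}_t}{p_t - c} = r
\qquad\Longleftrightarrow\qquad
p_t = c + (p_0 - c)\, e^{rt}
```

The paper's modification applies an analogous no-arbitrage condition to discounted revenue flows net of drilling costs, rather than to the spot price itself.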
Louis Kaplow, Harvard University and NBER
Optimal Regulation with Exemptions and Corrective Taxes*
Regulation produces enormous benefits and costs, both of which are greatly influenced by myriad exemptions and preferences for small firms that contribute a significant minority of output in many sectors. These firms may generate a disproportionate share of harm because they are exempt and because exemption induces additional harmful activity to be channeled their way. Kaplow analyzes optimal regulatory exemptions where firms have different productivities that are unobservable to the regulator, regulated and unregulated output each cause harm although at different levels, and regulation and the exemption level affect entry and the output choices of regulated and unregulated firms. He also analyzes the optimal use of output taxation alongside regulation, that is, optimal regulation with taxation, in contrast to the traditional comparison of regulation versus taxation. In many settings, optimal schemes involve subtle effects and have counterintuitive features: for example, incentives of firms to drop output to become exempt can be too weak as well as too strong, and optimal output taxes may equal zero despite the presence of externalities. When all instruments under examination are admitted, a planner can achieve the first best, and in this regime optimal regulation is voluntary.
Aviv Nevo, Northwestern University and NBER, and John Turner and Jonathan Williams, University of Georgia
Usage-Based Pricing and Demand for Residential Broadband
Increased Internet use has focused attention on managing traffic while preserving equal treatment of content. Nevo, Turner, and Williams estimate demand for residential broadband using high-frequency data from subscribers facing a three-part tariff and use the estimates to study the welfare implications of usage-based pricing, a commonly offered solution to network congestion. The three-part tariff makes data usage during the billing cycle a dynamic problem, thus generating variation in the (shadow) price of usage during the month. The authors find that subscribers respond to this variation and use their dynamic decisions to estimate a flexible distribution of willingness to pay for different plan characteristics. Using these estimates, the authors show that usage-based pricing eliminates low-value traffic and improves overall welfare, although it may decrease consumer surplus, depending on which alternative is considered. Furthermore, the authors argue that the costs associated with investment in fiber-optic networks (an alternative for mitigating congestion) are likely recoverable in some markets.
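The dynamic logic behind the within-month shadow price can be sketched with a small Monte Carlo: under a three-part tariff, an extra gigabyte today costs nothing directly, but it raises the probability of ending the month over the allowance. The function and all parameter values below are illustrative assumptions, not the paper's estimation approach.

```python
import random

def shadow_price(used_gb, days_left, allowance_gb, overage_price,
                 daily_mean, daily_sd, n_sims=20000, seed=0):
    """Approximate shadow price of one extra GB today under a three-part
    tariff: the overage price times the simulated probability that the
    monthly allowance ends up binding (all parameters are illustrative)."""
    rng = random.Random(seed)
    exceed = 0
    for _ in range(n_sims):
        # Simulate remaining usage for the billing cycle (truncated at zero).
        future = sum(max(0.0, rng.gauss(daily_mean, daily_sd))
                     for _ in range(days_left))
        if used_gb + future > allowance_gb:
            exceed += 1
    return overage_price * exceed / n_sims

# Same plan, same day of the month: a heavier cumulative usage history
# raises the expected marginal price of data even before the cap is hit.
light = shadow_price(used_gb=200, days_left=5, allowance_gb=300,
                     overage_price=10.0, daily_mean=8.0, daily_sd=4.0)
heavy = shadow_price(used_gb=280, days_left=5, allowance_gb=300,
                     overage_price=10.0, daily_mean=8.0, daily_sd=4.0)
```

Variation of this kind within the billing cycle is what lets subscribers' responses identify willingness to pay even when the posted price never changes.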
Randall Lewis, Google, Inc., and Justin Rao, Microsoft Research
On the Near Impossibility of Measuring the Returns to Advertising
Classical theories assume that the firm has access to reliable signals with which to measure the causal impact of choice variables on profit. Using 25 online field experiments with major U.S. retailers and brokerages ($2.8 million in advertising expenditure), Lewis and Rao show that this assumption typically does not hold for advertising. Evidence from the randomized trials is very weak because individual-level sales are incredibly volatile relative to the per capita cost of a campaign; a "small" impact on a noisy dependent variable can generate positive returns. A calibrated statistical argument shows that the required sample size for an experiment to generate informative confidence intervals is typically in excess of ten million person-weeks. This also implies that selection bias unaccounted for by observational methods only needs to explain a tiny fraction of sales variation to severely bias observational estimates. The authors discuss how weak informational feedback has shaped the current marketplace and the impact of technological advances moving forward.
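The order of magnitude behind the sample-size claim can be reproduced with a standard two-sample power calculation. The dollar figures below are illustrative stand-ins chosen to convey the signal-to-noise problem, not the paper's calibration.

```python
from math import ceil

def n_per_arm(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Sample size per arm for a two-sample test of a mean difference
    delta, at 5% two-sided size and 80% power (z values hard-coded)."""
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

sigma = 75.0   # illustrative sd of per-person weekly sales, in dollars
cost = 0.35    # illustrative per-person campaign cost, in dollars
# To separate a break-even campaign (sales lift equal to its cost) from
# a useless one (zero lift), the detectable difference is the cost itself:
n = n_per_arm(sigma, delta=cost)
# n runs into the hundreds of thousands per arm, so a multi-cell
# experiment quickly requires millions of person-weeks.
```

Because sigma enters squared while delta is tiny, modest changes to either assumption move the required sample by orders of magnitude, which is the core of the "near impossibility" argument.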
Brian Chen, University of South Carolina, and Paul Gertler, University of California at Berkeley and NBER
Moral Hazard and Economies of Scope in Physician Ownership of Complementary Medical Services (NBER Working Paper 19622)
When physicians own complementary medical service facilities such as clinical laboratories and imaging centers, they gain financially by referring patients to these service entities. This situation creates an incentive for physicians to exploit consumers' trust by recommending more services than they would demand under full information. However, this moral hazard cost may be offset by gains in economies of scope if the complementary services are integrated into a physician's practice. The authors assess the extent of moral hazard and economies of scope using data from Taiwan, which introduced a "separating" policy, similar to the Stark Law in the United States, that restricts physician ownership of pharmacies unless they are fully integrated into a physician's practice. The authors find that physicians who own pharmacies prescribe 7.6 percent more drugs than those who do not own pharmacies. Overall, they find no evidence of economies of scope from integration in the treatment of patients with acute respiratory infections, diabetes, or hypertension. Also, the separating policy was ineffective at controlling drug costs, as a large number of physicians chose to integrate pharmacies into their practices in order to become exempt from the policy.
Ricardo Cossa, Charles River Associates, and Mariano Tappata, University of British Columbia
Price Discrimination 2.0: Opaque Bookings in the Hotel Industry
Cossa and Tappata study opaque selling using a unique dataset from the lodging industry that matches bookings made on opaque platforms, Priceline and Hotwire, with their equivalent bookings on the transparent market. The variation in hotel rates across markets and selling channels is consistent with consumers sorting into platforms based on transaction costs and the dispersion of preferences over the transparent products. The authors argue that hotels use opaque selling channels to segment demand and substitute within-channel price discrimination strategies. Moreover, they reject the hypothesis of opaque selling as a last-minute resource to dispose of unsold inventory.
Sumit Agarwal, National University of Singapore; Souphala Chomsisengphet, Department of the Treasury; Neale Mahoney, University of Chicago and NBER; and Johannes Stroebel, New York University
Regulating Consumer Financial Products: Evidence from Credit Cards (NBER Working Paper 19484)
Agarwal, Chomsisengphet, Mahoney, and Stroebel analyze the effectiveness of consumer financial regulation by considering the 2009 Credit Card Accountability Responsibility and Disclosure (CARD) Act in the United States. Using a quasi-experimental research design and a unique panel dataset covering more than 150 million credit card accounts, the authors find that regulatory limits on credit card fees reduced overall borrowing costs to consumers by an annualized 1.7 percent of average daily balances, with a decline of more than 5.5 percent for consumers with the lowest FICO scores. Consistent with a model of low fee salience and limited market competition, the authors find no evidence of an offsetting increase in interest charges or a reduction in the volume of credit. Taken together, they estimate that the CARD Act fee reductions have saved U.S. consumers $12.6 billion per year. They also analyze the CARD Act requirement to disclose the interest savings from paying off balances within 36 months rather than making only minimum payments. They find that this "nudge" increased the number of account holders making the 36-month payment by 0.5 percentage points.
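As a quick consistency check on the headline numbers quoted above, the annualized 1.7 percent saving and the $12.6 billion aggregate jointly imply the scale of balances involved. This back-of-envelope figure is derived here for orientation only; it is not a number reported in the paper.

```python
annual_savings = 12.6e9   # dollars per year, the paper's aggregate estimate
rate_saved = 0.017        # annualized saving as share of average daily balances

# Implied aggregate average daily balances consistent with both figures:
implied_balances = annual_savings / rate_saved
# Roughly $740 billion, a plausible magnitude for aggregate U.S. credit
# card balances in that period.
```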
Ulrich Doraszelski, University of Pennsylvania, and Gregory Lewis and Ariel Pakes, Harvard University and NBER
Just Starting Out: Learning and Price Competition in a New Market
Deregulation of the frequency response market in the United Kingdom allowed electricity firms to compete on price in an otherwise stable environment. Doraszelski, Lewis, and Pakes provide an analysis of the evolution of the deregulated market from its starting date. Initial activity was volatile, with some firms exploring different prices while others made few price changes. This was followed by a period in which prices fell and the variance in the cross-sectional distribution of bids declined markedly. By the end of the study, price changes had become relatively rare and small, consistent with convergence to a static Nash equilibrium. The authors examine how well models of learning predict play during the period prior to convergence but after the initial volatility. Models where perceptions of competitors' play depend on past play suggest that firms weight recent play disproportionately. The authors also find evidence of statistical learning about the underlying demand parameters conditional on competitors' play. A model that combines these two features fits quite well: it is able to explain 37 percent of the share-weighted variation in prices, even though none of the model parameters are chosen to fit the pricing behavior.
Hunt Allcott and Allan Collard-Wexler, New York University and NBER, and Stephen O'Connell, City University of New York
How Do Electricity Shortages Affect Productivity? Evidence from India*
Allcott, Collard-Wexler, and O'Connell develop a hybrid Leontief/Cobb-Douglas production function model that characterizes how input shortages affect firms. As a case study, they analyze how "power holidays" affect daily production at large Indian textile plants using data from Bloom et al. (2013). The authors then study the short-run effects of electricity shortages on all Indian manufacturing plants between 1992 and 2010 using archival data on shortages, previously unavailable panel data, and an instrument for shortages based on variation in hydro reservoir inflows. They estimate that electricity shortages are a substantial drag on Indian manufacturing, reducing output by about 5 percent. However, productivity effects are smaller: because electricity is a small share of costs, higher-cost self-generation increases energy costs by only about 0.15 to 0.5 percent of revenues, and because most inputs can be stored during outages the productivity loss is only a fraction of the output loss. The authors also show that because of economies of scale in self-generation, shortages impose much larger losses on small plants, suggesting an additional distortion to the firm size distribution in developing economies.
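One stylized way to write the hybrid technology described above, Cobb-Douglas in conventional inputs but Leontief in electricity, is sketched below. The functional form and parameter names are an illustration of the idea, not the authors' exact specification.

```python
def plant_output(tfp, capital, labor, alpha, electricity, kwh_per_unit):
    """Stylized hybrid Leontief/Cobb-Douglas technology: output follows a
    Cobb-Douglas core in capital and labor but cannot exceed what the
    available electricity supports (a fixed kWh requirement per unit)."""
    potential = tfp * capital**alpha * labor**(1.0 - alpha)
    energy_cap = electricity / kwh_per_unit
    return min(potential, energy_cap)

# With ample power the Cobb-Douglas core binds; once a shortage makes
# electricity the bottleneck, output falls one-for-one with supply.
normal = plant_output(1.0, 16.0, 16.0, 0.5,
                      electricity=1000.0, kwh_per_unit=10.0)
holiday = plant_output(1.0, 16.0, 16.0, 0.5,
                       electricity=100.0, kwh_per_unit=10.0)
```

The kink between the two regimes is what lets shortages cut output sharply while leaving measured productivity effects much smaller, since other inputs can be stored and redeployed around outages.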