October 25-26, 2013
Eric Budish, University of Chicago; Peter Cramton, University of Maryland; and John Shim, Chicago Booth
Budish, Cramton, and Shim argue that the continuous limit order book is a flawed market design and suggest that financial exchanges could use frequent batch auctions: uniform-price sealed-bid double auctions conducted at frequent but discrete time intervals, for example, every second. The authors' argument has four parts. First, they use millisecond-level direct-feed data from exchanges to show that the continuous limit order book market design does not work in continuous time: market correlations that function properly at human-scale time horizons break down completely at high-frequency time horizons. Second, they show that this correlation breakdown creates frequent technical arbitrage opportunities, available to whoever is fastest, which in turn create an arms race to exploit such opportunities. Third, they show that the arms race is socially wasteful—a prisoner's dilemma built directly into the market design—and that its cost is ultimately borne by fundamental investors via wider spreads and thinner markets. Finally, the authors show that frequent batch auctions eliminate the arms race, both because they reduce the value of tiny speed advantages and because they transform competition on speed into competition on price. Consequently, frequent batch auctions may lead to narrower spreads, deeper markets, and increased social welfare.
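The clearing step of such a uniform-price sealed-bid double auction can be sketched for unit-size orders. This is a minimal illustration, not the authors' exact specification: the function name is made up, and taking the midpoint of the market-clearing price interval is just one pricing convention.

```python
import math

def batch_auction_clear(bids, asks):
    """Clear one batch of a uniform-price sealed-bid double auction.

    bids/asks are limit prices for unit-size buy/sell orders.
    Returns (quantity traded, uniform price), or (0, None) if no cross.
    """
    bids = sorted(bids, reverse=True)   # aggregate demand, high to low
    asks = sorted(asks)                 # aggregate supply, low to high
    q = 0
    # The q-th unit trades as long as the q-th highest bid crosses
    # the q-th lowest ask.
    while q < min(len(bids), len(asks)) and bids[q] >= asks[q]:
        q += 1
    if q == 0:
        return 0, None
    # Every price in [low, high] equates units demanded and supplied;
    # the midpoint is one common convention for picking within it.
    low = max(asks[q - 1], bids[q] if q < len(bids) else -math.inf)
    high = min(bids[q - 1], asks[q] if q < len(asks) else math.inf)
    return q, (low + high) / 2
```

For example, with bids of 10 and 6 against asks of 1 and 8, only one unit trades, at any price between 6 and 8; at such a price exactly one buyer and one seller are willing to trade.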
Nikhil Agarwal, MIT
Agarwal develops methods for estimating preferences in matching markets using only data on observed matches. He uses pairwise stability and a vertical preference restriction on one side to identify preferences for both sides of the market. Counterfactual simulations are used to analyze the antitrust allegation that the centralized medical residency match is responsible for salary depression. Because of residents' willingness to pay for desirable programs, salaries even in a competitive wage equilibrium would remain $23,000 to $43,000 below the marginal product of labor. Therefore, capacity constraints, not the design of the match, are the likely cause of low salaries.
Mark Satterthwaite, Northwestern University; Steven Williams, University of Illinois at Urbana-Champaign; and Konstantinos Zachariadis, London School of Economics
Satterthwaite, Williams, and Zachariadis consider a market for indivisible items with m buyers, each of whom wishes to buy at most one item, and m sellers, each of whom has one item to sell. The traders privately know their values/costs, which are statistically dependent. Two mechanisms for trading are considered. The buyer's bid double auction collects bids and offers from traders and determines the allocation by selecting a market-clearing price. It fails to achieve all possible gains from trade because of strategic bidding by buyers. The designed mechanism is a revelation mechanism in which honest reporting of values/costs is incentive-compatible and all gains from trade are achieved in equilibrium. However, this optimality comes at the expense of plausibility: 1) the monetary transfers among the traders are defined in terms of the traders' beliefs about each other's value/cost; 2) a trader may suffer a loss ex post; and 3) the mechanism may run a surplus/deficit ex post. The authors compare the simple yet mildly inefficient buyer's bid double auction to the perfectly efficient designed mechanism.
Bo Cowgill, University of California at Berkeley, and Eric Zitzewitz, Dartmouth College and NBER
Despite the popularity of prediction markets among economists, businesses and policymakers have been slow to adopt them in decision making. Most studies of prediction markets outside the lab are from public markets with large trading populations. Corporate prediction markets face additional issues, such as thinness, weak incentives, limited entry, and the potential for traders with ulterior motives, raising questions about how well these markets will perform. Cowgill and Zitzewitz examine data from prediction markets run by Google, Ford, and Koch Industries. Despite theoretically adverse conditions, they find these markets are relatively efficient and improve upon the forecasts of experts at all three firms, by as much as a 25 percent reduction in mean squared error. The most notable inefficiency is an optimism bias in the markets at Google and Ford. The inefficiencies that do exist generally become smaller over time. More experienced traders and those with higher past performance trade against the identified inefficiencies, suggesting that the markets' efficiency improves because traders gain experience and less-skilled traders exit the market.
Yuichiro Kamada, Harvard University, and Fuhito Kojima, Stanford University
Many real matching markets are subject to distributional constraints. These constraints often take the form of restrictions on the numbers of agents on one side of the market matched to certain subsets on the other side. Real-life examples include restrictions imposed on regions in medical residency matching, academic master's programs in graduate school admission, and state-financed seats for college admission. Motivated by these markets, Kamada and Kojima study the design of matching mechanisms under distributional constraints. They show that the existing matching mechanisms around the world may result in avoidable inefficiency and instability, and describe a mechanism that has benefits in terms of efficiency, stability, and incentives while respecting the distributional constraints.
Yeon-Koo Che, Columbia University, and Johannes Hörner, Yale University
The goal of a recommender system is to facilitate social learning about a product based on experimentation by its early users. Because they cannot appropriate their social contribution, however, early users may lack the incentive to experiment on a product, and fully transparent social learning can only worsen this incentive problem. The associated "cold start" could then result in the demise of a potentially valuable product and the collapse of social learning via the recommender system. Che and Hörner study the design of the optimal recommender system, focusing on this incentive problem and on the pattern of dynamic social learning that emerges. The optimal design trades off fully transparent social learning to improve incentives for early experimentation, by selectively over-recommending the product in the early phase of its release. The over-recommendation "siphons" the strict incentives users have for consumption in the event of good news (about the product, privately observed by the designer) to the situation in which there is no good news, thereby encouraging users to experiment on the product to a degree they would not under a fully transparent recommender system. Under the optimal scheme, experimentation occurs faster than under full transparency but slower than under the first-best optimum; the rate of experimentation increases over an initial phase and experimentation lasts until the posterior becomes sufficiently bad, at which point the recommendation stops, along with experimentation on the product. Fully transparent recommendation may become optimal if the (socially benevolent) designer faces an additional informational problem, for example, one arising from the heterogeneity of users' experimentation costs.
Itai Ashlagi, MIT, and Yashodhan Kanoria and Jacob Leshno, Columbia University
Ashlagi, Kanoria, and Leshno characterize the core in random matching markets with unequal numbers of men and women. Preference lists are drawn uniformly at random, independently across agents. The authors find that, with high probability, only a vanishing fraction of agents have multiple stable partners; that is, the core is small. Further, they find that being on the short side of the market confers a large advantage. For each agent, the authors assign a rank of one to the agent's most preferred partner, a rank of two to the next most preferred partner, and so forth. If there are n men and n + 1 women, the authors show that, with high probability, in each stable matching the men's average rank of wives is no more than 3 log n, whereas the women's average rank of husbands is at least n/(3 log n). If there are n men and (1 + λ)n women for λ > 0, then with high probability, in any stable matching, the men's average rank of wives is as small as O(1), whereas the women's average rank of husbands is as large as Ω(n). Simulations show that these features are observed even in small markets. These findings for unbalanced markets contrast sharply with known results for balanced markets: in particular, they show that the large core in the balanced case is a knife-edge phenomenon that breaks down with the slightest imbalance.
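Such simulations can be reproduced in miniature with men-proposing deferred acceptance (Gale-Shapley). A sketch under stated assumptions: the function names are illustrative, and men-proposing deferred acceptance yields one particular stable matching (the men-optimal one), which by the paper's small-core result is, with high probability, close to any other stable matching.

```python
import random

def gale_shapley(men_prefs, women_prefs):
    """Men-proposing deferred acceptance. Each preference list gives
    partner indices in decreasing order. Returns a man -> wife dict."""
    next_prop = [0] * len(men_prefs)     # next choice each man proposes to
    rank = [{m: r for r, m in enumerate(p)} for p in women_prefs]
    husband = {}                         # woman -> current fiance
    free = list(range(len(men_prefs)))
    while free:
        m = free.pop()
        if next_prop[m] >= len(men_prefs[m]):
            continue                     # list exhausted; man stays single
        w = men_prefs[m][next_prop[m]]
        next_prop[m] += 1
        if w not in husband:
            husband[w] = m
        elif rank[w][m] < rank[w][husband[w]]:
            free.append(husband[w])      # w trades up; old fiance is free
            husband[w] = m
        else:
            free.append(m)               # rejected; m proposes again later
    return {m: w for w, m in husband.items()}

def average_ranks(n_men, n_women, seed=0):
    """Draw one random market and return (men's average rank of wives,
    women's average rank of husbands), where rank 1 is most preferred."""
    rng = random.Random(seed)
    men = [rng.sample(range(n_women), n_women) for _ in range(n_men)]
    women = [rng.sample(range(n_men), n_men) for _ in range(n_women)]
    match = gale_shapley(men, women)
    m_rank = [men[m].index(w) + 1 for m, w in match.items()]
    w_rank = [women[w].index(m) + 1 for m, w in match.items()]
    return sum(m_rank) / len(m_rank), sum(w_rank) / len(w_rank)
```

Running `average_ranks(n, n + 1)` across seeds illustrates the paper's asymmetry: the short side (men) typically obtains far better average ranks than the long side.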
Federico Echenique, California Institute of Technology, and Bumin Yenmez, Carnegie Mellon University
Echenique and Yenmez characterize choice rules for schools that regard students as substitutes while at the same time expressing preferences for a diverse student body. The stable (or fair) assignment of students to schools requires the latter to regard the former as substitutes. Such a requirement is in conflict with the reality of schools' preferences for diversity. The authors show that the conflict can be useful, in the sense that certain unique rules emerge from imposing both considerations.
Hoyt Bleakley, University of Chicago and NBER, and Joseph Ferrie, Northwestern University and NBER
The Coase Theorem holds that, when transaction costs are low, market efficiency is independent of initial allocations, while the recent "market design" literature stresses the importance of getting initial allocations right. Bleakley and Ferrie study the dynamics of land use in the two centuries following the opening of the frontier in the U.S. state of Georgia, which, in contrast with neighboring states, was opened up to settlers with a pre-surveyed and pre-allocated grid, in waves with differing parcel sizes. Using difference-in-differences and regression-discontinuity methods, the authors measure the effect of initial parcel sizes (as assigned by the surveyors' grid) on the evolution of farm sizes decades after the land was opened. Initial parcel size predicts farm size essentially one-for-one for 50 to 80 years after land opening. This effect of initial conditions attenuates gradually and disappears only after 150 years. The authors estimate that the initial misallocation depressed the area's land value by 20 percent in the late nineteenth century. This episode suggests the relevance of the Coase Theorem in the (very) long run, but also that bad market design can induce significant distortions in the medium term (over a century in this case).
Paul Asquith, MIT and NBER; Thom Covert, Harvard University; and Parag Pathak, MIT and NBER
Recently, many financial markets have become subject to new regulations requiring transparency. Asquith, Covert, and Pathak study how mandatory transparency affects trading in the corporate bond market. In July 2002, the Trade Reporting and Compliance Engine (TRACE) program began requiring the public dissemination of post-trade price and volume information for corporate bonds. Dissemination took place in phases, with actively traded, investment-grade bonds becoming transparent before thinly traded, high-yield bonds. Using new data and a difference-in-differences research design, the authors find that transparency causes a significant decrease in price dispersion for all bonds and a significant decrease in trading activity for some categories of bonds. The largest decrease in daily price standard deviation, 24.7 percent, and the largest decrease in trading activity, 41.3 percent, occur for bonds in the final phase, which consisted primarily of high-yield bonds. These results indicate that mandated transparency may help some investors and dealers through a decline in price dispersion, while harming others through a reduction in trading activity.
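The comparison underlying this research design can be illustrated with the canonical two-group, two-period difference-in-differences calculation. The numbers below are purely hypothetical price-dispersion figures for illustration, not the paper's TRACE data, and the function name is made up.

```python
def did_estimate(y_treat_pre, y_treat_post, y_ctrl_pre, y_ctrl_post):
    """Canonical 2x2 difference-in-differences: the pre-to-post change
    for the treated group minus the change for the control group."""
    mean = lambda xs: sum(xs) / len(xs)
    return ((mean(y_treat_post) - mean(y_treat_pre))
            - (mean(y_ctrl_post) - mean(y_ctrl_pre)))

# Hypothetical dispersion measures: bonds made transparent in a given
# phase (treated) versus bonds already transparent (control).
effect = did_estimate(
    [0.80, 0.90],   # treated, before dissemination (mean 0.85)
    [0.55, 0.65],   # treated, after dissemination  (mean 0.60)
    [0.70, 0.80],   # control, before               (mean 0.75)
    [0.68, 0.82],   # control, after                (mean 0.75)
)
# Treated change -0.25 minus control change 0.00 gives -0.25.
```

Netting out the control group's change absorbs market-wide trends, so the estimate attributes only the treated group's excess change to the onset of transparency, under the usual parallel-trends assumption.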
Nicole Immorlica and Gregory Stoddard, Northwestern University, and Vasilis Syrgkanis, Cornell University
Many websites rely on user-generated content to provide value to consumers. Often these websites incentivize user-generated content by awarding users badges based on their contributions. These badges confer value upon users as a symbol of social status. In this paper, Immorlica, Stoddard, and Syrgkanis study the optimal design of a system of badges for a designer whose goal is to maximize contributions. The authors assume users have heterogeneous abilities drawn from a common prior and choose how much effort to exert toward a given task. A user's ability and choice of effort determine the level of contribution he makes to the site. A user earns a badge if his contribution surpasses a pre-specified threshold. The problem facing the designer is how to set badge thresholds to incentivize contributions from users. The authors' main result is that the optimal total contribution can be well approximated with a small number of badges. Specifically, if status is a concave function of the number of players with lower rank, then a single-badge mechanism that divides players into two status classes suffices to yield a constant-factor approximation, while for more general status functions a number of badges logarithmic in the number of players typically suffices. The authors also show that badge mechanisms with a small number of badges have structural stability properties for sufficiently large numbers of players.
Lawrence Ausubel, University of Maryland, and Oleg Baranov, University of Colorado Boulder
The combinatorial clock auction (CCA) has recently become a popular design choice for allocating valuable items such as spectrum licenses, and it has been used in many digital dividend auctions worldwide. However, the current implementation of the CCA has limitations stemming from two sources. First, the most common activity rules, while motivated by revealed preference, impose only some of the revealed-preference constraints on bidders. This may reduce the informativeness of the clock rounds and may leave excessive room for strategic bidding. Second, the CCA is an iterative second-price auction rather than an iterative first-price auction, unlike most other dynamic auction formats, including the English auction and the simultaneous multiple round auction (SMRA). In this paper, Ausubel and Baranov describe a new activity rule that can be characterized as the most restrictive activity rule that still permits straightforward bidding by bidders with quasilinear utilities. With this new activity rule, the price feedback of the CCA is greatly improved, enabling a transformation of the CCA to an iterative first-price format. The centerpiece of the innovation is a simple "bidder exposure" calculation, which provides an upper bound on the bidder's payment under second pricing. The notion of bidder exposure is broadly linked to the notion of "clinching" introduced in Ausubel (2004, 2006) for auctions without package bidding.