Market Design Working Group
October 19 and 20, 2012
William Fuchs, University of California at Berkeley, and Andrzej Skrzypacz, Stanford University
Fuchs and Skrzypacz study a dynamic market with asymmetric information that induces the lemons problem. They compare the efficiency of the market under different assumptions about the timing of trade, and show when efficiency can be improved by temporarily closing the market rather than allowing continuous trading opportunities.
Jacob Leshno, Microsoft Research
In many assignment problems, items arrive stochastically over time. When items are scarce, agents form an overloaded waiting list and items are dynamically allocated as they arrive; two examples are public housing and organs for transplant. Even when all the scarce items are allocated, there remains the efficiency question of how to assign the right items to the right agents. Leshno develops a model in which impatient agents with heterogeneous preferences wait to be assigned scarce heterogeneous items that arrive stochastically over time. Social welfare is maximized when agents are appropriately matched to items, but an individual impatient agent may misreport her preferences to receive an earlier, mismatched item. To incentivize an agent to avoid mismatch, the policy needs to provide the agent with a (stochastic) guarantee of future assignment, which Leshno models as placing the agent in a priority buffer-queue. He first considers a standard queue-based allocation policy and derives its welfare properties. To determine the optimal policy, he formulates the dynamic assignment problem as a dynamic mechanism design problem without transfers. The resulting optimal incentive-compatible policy uses a new buffer-queue policy, the uniform wait queue, to minimize the probability of mismatching agents. Finally, Leshno derives a policy that uses a simple rule: giving equal priority to every agent who declines a mismatched item (a SIRO buffer-queue). This policy is optimal in a class of robust mechanisms and has several good properties that make it a compelling market design recommendation.
Kenneth Hendricks, University of Wisconsin, Madison and NBER, and Daniel Quint, University of Wisconsin
Hendricks and Quint study the use of indicative bids -- non-binding preliminary bids -- to select a limited number of bidders for participation in an auction. Investment banks frequently use this type of selection mechanism when selling business assets. The authors show that if participation in the auction is costly, indicative bids can be informative: an (essentially unique) symmetric equilibrium exists in weakly monotone strategies. However, bidder types "pool" over a finite number of bids, so the highest-value bidders are not always selected. The authors further show how the equilibrium changes with the number of potential bidders and the participation cost. Finally, they characterize equilibrium play when the number of potential bidders is large, and show that both revenue and bidder surplus are then higher than when entry into the auction is unrestricted.
Sergiu Hart and Noam Nisan, Hebrew University
Hart and Nisan consider the menu size of auctions as one measure of auction complexity and study how it affects revenue. Their setting has a single revenue-maximizing seller selling k ≥ 2 heterogeneous items to a single buyer whose private values for those items are drawn from a (possibly correlated) known distribution, and whose valuation for sets of items ("bundles") is additive over the items. The authors show that revenue may increase arbitrarily with menu size and that a bounded menu size cannot ensure any positive fraction of the optimal revenue. (For bounded valuations, they show that a finite menu size can ensure an arbitrarily good additive approximation of the optimal revenue.) The menu size turns out to "nail down" the revenue properties of deterministic auctions: their menu size is at most exponential in the number of items, and indeed their revenue may be larger than what is achievable by the simplest types of auctions by a factor that is exponential in the number of items, but no larger. In particular, these results imply an infinite separation between the revenues achievable by deterministic and general randomized auctions even when selling two items, answering a question left open by Briest et al.
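The "menu size" measure can be made concrete with a small sketch. In this model, an auction for an additive buyer reduces to a menu of entries, each pairing an allocation (the probability of receiving each item) with a price; the menu size is simply the number of entries. The code below (an illustration of the model, not code from the paper; all names are hypothetical) shows how the buyer selects from such a menu:

```python
def buyer_choice(menu, values):
    """Given a menu of (allocation, price) entries, where allocation[i]
    is the probability of receiving item i, an additive buyer picks the
    entry maximizing expected value minus price, or buys nothing if no
    entry yields positive utility. Returns the chosen entry or None."""
    best_entry, best_u = None, 0.0  # outside option: no purchase, utility 0
    for alloc, price in menu:
        u = sum(q * v for q, v in zip(alloc, values)) - price
        if u > best_u:
            best_entry, best_u = (alloc, price), u
    return best_entry

# A three-entry menu for two items: each item alone at 4, or the bundle at 7.
menu = [((1, 0), 4), ((0, 1), 4), ((1, 1), 7)]
# A buyer with values (5, 4) compares utilities 1, 0, and 2, and buys the
# bundle; the seller's revenue from this buyer type is the chosen price, 7.
```

The seller's revenue is the expectation, over buyer types, of the price of the chosen entry; the paper's question is how this revenue varies as the number of menu entries is restricted.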
Yeon-Koo Che, Columbia University; Jinwoo Kim, Yonsei University; and Fuhito Kojima, Stanford University
Che, Kim, and Kojima study the "house allocation" problem in which n agents are assigned n objects, one each, when the agents have interdependent values. The authors show that there exists no mechanism that is Pareto efficient and ex post incentive compatible, and the only mechanism that is group ex post incentive compatible is constant across states. By contrast, they demonstrate that a Pareto efficient and Bayesian incentive compatible mechanism exists in the 2-agent house-allocation problem, given sufficient congruence of preferences and the standard single crossing property.
Tayfun Sonmez and M. Utku Unver, Boston College
In order to analyze the welfare and distributional consequences of different allocation policies for organ transplantation, Sonmez and Unver introduce a dynamic, continuous-arrival model of patients and donors that incorporates various practices in transplantation, including deceased-donor allocation (for all organs), live donation (for kidneys and livers), and live-donor exchange (for kidneys and also livers). In particular, they propose a new exchange policy for live donors that can substantially increase the number of pairs matched through exchange while reducing inequality among blood types. Currently, compatible pairs generally do not participate in exchange, because the patient in the pair receives an organ directly from his donor. Earlier simulations have shown that less than 80 percent of all pairs can be matched without using compatible pairs in exchange. If compatible pairs also participate in exchange, then the participation asymmetry disappears, and exchange will benefit around 95 percent of the pairs. The authors propose to incentivize the participation of compatible pairs by linking the deceased-donor wait list with the exchange pool. They also propose similar incentives for the patients of compatible donors who give up their own donor's organ in exchange for another pair's compatible organ. In this manner, the patient of a compatible donor receives a "guarantee" not to wait on the deceased-donor wait list, in the form of a "priority" in case the organ he receives in exchange fails in the future. Another benefit of this policy lies in creating unified national programs for exchange: it can be used to encourage all pairs to participate in the national exchange program, which can then use the compatible-pair exchange policy to attract compatible pairs and discourage participation in other exchanges.
Peter Cramton, University of Maryland; Ulrich Gall, Stanford University; Pacharasut Sujarittanonta, Cramton Associates LLC; and Robert Wilson, Stanford University
Cramton, Gall, Sujarittanonta, and Wilson describe the prospect of using auctions to resolve conflicts among parties competing for the same top-level internet domains. In such auctions, the winner's payment is divided among the losers; if the conflict is not resolved, then ICANN conducts an auction and retains the winner's payment. The authors first characterize equilibrium bidding strategies and provide examples of first-price and second-price sealed-bid auctions, under the assumption that bidders' valuations are independently drawn from either symmetric or asymmetric distributions. The qualitative properties of the equilibria reveal, for example, that in a second-price auction a bidder might bid more than her valuation in order to drive up the winner's payment. Even so, in symmetric cases a bidder's expected profit may be the same in the two auction formats. Next the authors test two auction formats in the experimental lab, extending the setting from a single domain to the actual setting with many domains. The first format is a sequential, first-price sealed-bid auction; the second is a simultaneous ascending clock auction. Both the framing and the subjects were chosen to closely match the actual setting: the subjects were University of Maryland Ph.D. students in economics, computer science, and computer engineering with training in game theory and auction theory. Each subject played the role of an actual company (for example, Google) and bid for domains consistent with that company's applications. The subjects were given instructions explaining the auction and the equilibrium theory for the single-item case in relevant examples. Both formats achieved auction efficiencies of 98 percent in the lab setting. This high level of efficiency is especially remarkable in the case with asymmetric distributions: the format performed better than the simple single-item equilibrium despite the presence of budget constraints in the lab.
Together with previous results on the robustness of ascending auctions in general, and simultaneous ascending clock auctions in particular, this experiment suggests that the simultaneous ascending clock auction will perform best in this setting.
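The incentive to bid above one's valuation in the second-price format can be seen in a stylized numeric sketch (made-up valuations; the payment-splitting rule is the one described above, with the winner's payment divided equally among the losers):

```python
def payoffs(bids, values):
    """Stylized second-price auction among parties contesting a domain:
    the highest bidder wins and pays the second-highest bid, and that
    payment is split equally among the losers."""
    n = len(bids)
    order = sorted(range(n), key=lambda i: -bids[i])
    winner, runner_up = order[0], order[1]
    price = bids[runner_up]
    share = price / (n - 1)  # each loser's cut of the winner's payment
    return [values[i] - price if i == winner else share for i in range(n)]

# Three bidders with valuations 10, 6, 4. Bidding truthfully, the loser
# with value 6 collects a share of 6/2 = 3. If she instead bids 8 -- above
# her valuation, but still losing -- she drives the price up to 8 and her
# share rises to 4, illustrating the overbidding incentive.
```

The example shows why truthful bidding is not an equilibrium here, unlike in a standard second-price auction: a losing bid sets the price that losers themselves collect.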
Aditya Bhave and Eric Budish, University of Chicago
Economists have long been puzzled by event-ticket underpricing: underpricing reduces revenue for the performer and encourages socially wasteful rent-seeking by ticket brokers. Why not use an auction to set price correctly? Bhave and Budish study the recent introduction of auctions into this market by Ticketmaster. By combining primary-market data from Ticketmaster with secondary-market resale value data from eBay, they show that Ticketmaster's auctions work well overall: the auctions substantially improve price discovery, roughly double performer revenues, and, on average, nearly eliminate the arbitrage profits associated with under-priced tickets. The data also suggest some ways that Ticketmaster's auction design might be improved, in particular by reducing its strategic complexity.
Atila Abdulkadiroglu, Duke University; Nikhil Agarwal, Harvard University; and Parag Pathak
Coordinated and centralized admissions processes have become an increasingly important part of recent school choice reforms. Abdulkadiroglu, Agarwal, and Pathak estimate preferences for schools using rank-order lists from New York City's new high school assignment system in order to study the consequences of coordinating school admissions through a mechanism based on the student-proposing deferred acceptance algorithm. Compared to the prior mechanism, which featured multiple offers and a limited number of choices, there is a 40 percent increase in enrollment at assigned schools. Students value proximity and school quality, but have substantially heterogeneous tastes. The estimates imply a significant role for the matching mechanism, and show that some subgroups gained more from the new mechanism than others.
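The student-proposing deferred acceptance algorithm underlying the new mechanism can be sketched as follows (a minimal illustration with hypothetical students, schools, and priorities, not the production NYC system):

```python
def deferred_acceptance(student_prefs, school_priorities, capacities):
    """Student-proposing deferred acceptance.

    student_prefs: dict student -> ranked list of schools
    school_priorities: dict school -> ranked list of students
    capacities: dict school -> number of seats
    Returns dict school -> sorted list of assigned students.
    """
    rank = {s: {stu: i for i, stu in enumerate(prio)}
            for s, prio in school_priorities.items()}
    next_choice = {stu: 0 for stu in student_prefs}  # next school to propose to
    held = {s: [] for s in school_priorities}        # tentatively admitted
    free = list(student_prefs)                       # currently unmatched
    while free:
        stu = free.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue  # list exhausted; student stays unassigned
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        # keep the highest-priority students up to capacity; reject the rest
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacities[school]:
            free.append(held[school].pop())
    return {s: sorted(held[s]) for s in held}

# Example: three students, two single-seat schools.
match = deferred_acceptance(
    {"a": ["s1", "s2"], "b": ["s1", "s2"], "c": ["s1"]},
    {"s1": ["c", "a", "b"], "s2": ["a", "b", "c"]},
    {"s1": 1, "s2": 1},
)
# match == {"s1": ["c"], "s2": ["a"]}; student b is unassigned
```

Because acceptances are only tentative until the algorithm terminates, no student can gain by misranking schools, which is what makes the rank-order lists credible data on preferences.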
Lawrence Ausubel, University of Maryland; Jonathan D. Levin, Stanford University and NBER; and Paul Milgrom and Ilya Segal, Stanford University
Itai Ashlagi, MIT, and Alvin Roth, Stanford University and NBER
A kidney exchange network of transplant centers establishes a pool of patient-donor pairs, non-directed donors, and patients on deceased-donor waiting lists, and seeks to arrange transplants among this pool. One goal of a network is to create a thick marketplace containing many patients and donors, and one way to achieve this might be to have a single kidney exchange that enrolled all transplant centers and patients. At present, however, there are multiple kidney exchange networks in the United States and around the world, and in the United States transplant centers and patient-donor pairs sometimes may be enrolled in more than one network. Thus there is a need to understand how to organize collaborations among networks, which is related to the question of how often to match patients in the pool: waiting longer allows the pool to become larger but imposes costs on those already in it. Using data from several American kidney exchange networks and from one in Asia, Ashlagi and Roth consider two questions, together with sensitivity analyses to identify the dependence on arrival rates in each exchange and on the lengths of the chains the networks are able to manage. First, how often should exchanges be conducted within a single network as new patients and donors arrive? That is, what are the benefits of delay in terms of an increased number of transplants achieved, and what are the costs in terms of increased waiting times and other costs? Second, what would be the benefits of different forms of collaboration among different U.S. networks?
Qingmin Liu, Columbia University, and Marek Pycia, University of California at Los Angeles
Efficiency and symmetric treatment of agents are the primary goals of resource allocation in environments without transfers. Focusing on ordinal mechanisms in which no small group of agents can substantially change the allocation of others, Liu and Pycia first show that uniform randomizations over deterministic efficient mechanisms are asymptotically ordinally efficient, that is, efficient ex ante. This implies that ordinal efficiency and ex-post Pareto efficiency become equivalent in large markets, and that many standard mechanisms are asymptotically ordinally efficient. The authors then show that all asymptotically ordinally efficient, symmetric, and asymptotically strategy-proof mechanisms lead to the same allocations in large markets.
Scott Duke Kominers, University of Chicago, and Tayfun Sonmez
To encourage diversity, schools often "reserve" some slots for students of specific types. Students only care about their school assignments and contractual terms like tuition, and hence are indifferent among slots within a school. Ad hoc approaches to resolving indifferences across slots can introduce subtle biases that can be corrected with more careful market design. Kominers and Sonmez illustrate how affirmative action programs in Chicago and Boston favor certain groups more than their designs at first suggest. They then introduce a two-sided, many-to-one matching-with-contracts model in which agents match with branches that 1) have priorities that vary by slot and 2) fill slots sequentially. In these matching markets with slot-specific priorities, branches' choice functions may not satisfy the substitutability conditions typically crucial for matching with contracts. Nevertheless, an embedding into a one-to-one agent-slot matching market shows that stable outcomes exist and can be found by a cumulative offer mechanism that is strategy-proof and respects unambiguous improvements in priority. These results suggest an affirmative action mechanism design that avoids the problems of the current Chicago and Boston systems.
Elisa Celis, University of Washington; Gregory Lewis, Harvard University and NBER; Markus Mobius, Iowa State University and NBER; and Hamid Nazerzadeh, University of Southern California
Online tracking technology allows platforms to offer advertisers targeted consumer demographics, improving match quality, but thinning markets. Bidding data from Microsoft Advertising Exchange exhibits a large gap in the top two bids, consistent with this intuition. This motivates a new mechanism: Bidders can "buy-it-now" or "take-a-chance" in an auction where the top d > 1 bidders are equally likely to win. The randomized allocation incentivizes high valuation bidders to buy-it-now. Running counterfactual simulations on their data, Celis, Lewis, Mobius, and Nazerzadeh find that it improves revenue by 4.4 percent and consumer surplus by 14.5 percent when compared to an optimal second-price auction.
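The "take-a-chance" allocation rule can be sketched as follows (an illustration only; the payment rule here, the (d+1)-st highest bid, is an assumption for the sketch, not necessarily the authors' rule):

```python
import random

def take_a_chance_winner(bids, d=2, rng=random):
    """Allocate by the take-a-chance rule: the top d bidders are equally
    likely to win. Returns (winner_index, payment), where the payment is
    assumed (for illustration) to be the (d+1)-st highest bid, or 0 if
    fewer than d+1 bids were submitted."""
    order = sorted(range(len(bids)), key=lambda i: -bids[i])
    finalists = order[:d]            # the top d bids, by index
    winner = rng.choice(finalists)   # uniform draw among the finalists
    payment = bids[order[d]] if len(bids) > d else 0
    return winner, payment

# With bids [10, 8, 5, 3] and d = 2, either of the top two bidders wins
# with probability 1/2, and the winner pays the third-highest bid, 5.
```

The randomization is the point: a bidder with a very high valuation faces a 1 - 1/d chance of losing the take-a-chance auction, which is what makes a posted buy-it-now price attractive to her and lets the platform extract more revenue in thin markets.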
Michael Kearns, Mallesh Pai, and Aaron Roth, University of Pennsylvania, and Jonathan Ullman, Harvard University
Kearns, Pai, Roth, and Ullman study the design of mechanisms satisfying two desiderata: incentive compatibility and privacy. The first is standard and requires that agents be incentivized to report their private information truthfully. The second, privacy, requires that the mechanism not reveal 'much' about any agent's type to the other agents, and hence maintains the privacy of each agent's private information. The authors propose a notion they call joint differential privacy, a variant of the robust differential privacy notion used in the theoretical computer science literature. They show by construction that mechanisms satisfying their desiderata exist when the game is 'large', that is, when there are a large number of players and any player's action affects any other player's payoff by at most a small amount. The mechanism uses no-regret algorithms similar to those studied in Foster and Vohra and in Hart and Mas-Colell, and maintains privacy by adding carefully selected noise to each computation step. The results imply that in large economies, agents' privacy concerns can be accommodated at no additional 'cost' beyond the standard incentive concerns.
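The noise-adding ingredient can be illustrated with the classic Laplace mechanism, the standard differential-privacy primitive (this sketch illustrates the privacy technique in general, not the authors' joint-differential-privacy construction):

```python
import math
import random

def laplace_noise(scale, rng=random):
    """Sample from a Laplace(0, scale) distribution via the inverse CDF."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def private_average(values, epsilon, lo=0.0, hi=1.0, rng=random):
    """Epsilon-differentially private average of values bounded in [lo, hi].
    Changing one agent's value moves the true mean by at most (hi - lo)/n,
    so Laplace noise with scale sensitivity/epsilon masks any single
    agent's contribution."""
    n = len(values)
    sensitivity = (hi - lo) / n
    clipped = [min(max(v, lo), hi) for v in values]  # enforce the bounds
    return sum(clipped) / n + laplace_noise(sensitivity / epsilon, rng)
```

Because the noise scale shrinks like 1/n, the privacy cost vanishes as the economy grows large, which is the same intuition behind the authors' result that privacy comes for free in large games.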
David Rothschild and David Pennock, Microsoft Research
Rothschild and Pennock examine the extent to which prices of logically related contracts in prediction markets are contradictory, and the practical amount of arbitrage profit that can result. Price misalignment between identical contracts on different exchanges, representing between 1 and 5 percent in net earnings, is common and can persist for months, even in the face of high liquidity. Observing the trading of almost $3,000 of contracts in a randomized trial, the authors uncover a significant shadow order book representing price support well beyond the published demand, implying that absolute arbitrage profits are significantly higher than a purely observational analysis would reveal. Price misalignment occurs even among identical and logically related contracts listed on the same exchange, although the profits there are generally lower. Even when misalignment does not represent arbitrage, related markets react in ways that suggest bounded rationality. Some contracts simply shut down when there are high levels of activity in a related contract; traders unnecessarily withdraw orders that exceed even the most wildly pessimistic outcome of an ongoing event. There is a consistent asymmetry between buying and selling across many exchanges, leaving the average return for selling (which is cognitively more complex as implemented by the exchanges) higher than for buying. The authors find other evidence of weak-form inefficiency: prices on one exchange are significantly correlated with 12-hour-lagged prices on a second exchange. Despite these signs of departure from theoretical optimality, the markets studied, on balance, function well considering the sometimes complex and subtle relationships among contracts.
Yet, the authors discuss how prediction markets can be designed better, moving the burden of finding and fixing logical contradictions into the exchange, making buying and selling symmetric, and providing trading wizards, thus freeing traders to focus on providing meaningful information in the form they find most natural.