Market Design Working Group
October 28-29, 2011
Tayfun Sonmez, Boston College, and Tobias B. Switzer, US Air Force
Branch selection is a key decision in a cadet's military career. At the U.S. Military Academy (USMA), cadets can increase their branch priorities for a fraction of the slots by extending their service agreement. Sonmez and Switzer note that this real-life matching problem fills an important gap in the market design literature. Although priorities fail a key "substitutes condition", the agent-optimal stable mechanism is well-defined, and in contrast to the current USMA mechanism, it is fair, stable, and strategy-proof -- adopting this mechanism would benefit both cadets and the Army. This application shows that the matching-with-contracts model is practically relevant beyond traditional domains that satisfy the substitutes condition.
John W. Hatfield, Stanford University, and Scott Duke Kominers, University of Chicago
Hatfield and Kominers introduce a matching model in which agents engage in joint ventures via multilateral contracts. This approach allows them to consider production complementarities previously outside the scope of matching theory. They show analogues of the first and second welfare theorems. Also, when agents' utilities are concave in venture participation, they show that competitive equilibria exist, correspond to stable outcomes, and yield core outcomes. Competitive equilibria exist in this setting even when externalities are present.
Eric Budish, University of Chicago, and Eduardo M. Azevedo, Harvard University
Budish and Azevedo distinguish between two ways that a mechanism can fail to be strategy-proof (SP): a mechanism may have profitable manipulations that persist as the market grows, or it may have profitable manipulations that vanish with market size. The authors say that a non-SP mechanism is strategy-proof in the large (SP-L) if all of its profitable manipulations vanish with market size. Their main result is: suppose we are given some mechanism that has Bayes-Nash equilibria but is not SP-L; then, under some commonly satisfied conditions (semi-anonymity, private values, quasi-continuity), there exists, by construction, another mechanism that is SP-L and that implements approximately the same outcomes as the original mechanism, with the approximation error vanishing in the large-market limit. Thus, while SP often severely limits what kinds of mechanisms are possible, SP-L is approximately costless, and hence may be a useful second-best. The authors illustrate with examples from assignment, matching, and auctions.
Clayton Featherstone, Harvard University
Many institutions that allocate scarce goods based on rank-order preferences gauge the success of their assignments by looking at rank distributions, that is, at how many participants get their first choice, how many get their second choice, and so on. For example, San Francisco Unified School District, Teach for America, and Harvard Business School all evaluate assignments in this way. Preferences over rank distributions capture the practical intuition that hurting one agent in order to help ten might be desirable. Motivated by this, Featherstone calls an assignment "rank efficient" if its rank distribution cannot feasibly be stochastically dominated. Rank-efficient assignments can be computed with simple linear programs, either directly by a computer or through a sequential improvement process: at each step, the policymaker executes a potentially non-Pareto-improving trade cycle. In fact, both methods are used in the field. Rank efficiency also dovetails nicely with previous literature: it is a refinement of ordinal efficiency (and hence of ex post efficiency). Although rank efficiency is theoretically incompatible with strategy-proofness, rank efficient mechanisms can admit a truth-telling equilibrium in low information environments. Preference data from Featherstone and Roth's (2011) study of a strategy-proof match show that if agents were to truthfully reveal their preferences, a rank efficient mechanism could significantly outperform alternatives, including random serial dictatorship and the probabilistic serial mechanism.
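The definition can be illustrated by brute force on a toy example of our own (three agents, three objects; none of the names or preferences come from Featherstone's data): enumerate every deterministic assignment, compute its rank distribution, and keep those that no other assignment stochastically dominates.

```python
from itertools import permutations

# Hypothetical toy instance: prefs[i] is agent i's ranking of objects, best first.
prefs = [
    ["a", "b", "c"],
    ["a", "c", "b"],
    ["b", "a", "c"],
]
objects = ["a", "b", "c"]

def rank_distribution(assignment):
    """Cumulative rank distribution: entry k = # agents receiving a top-(k+1) choice."""
    ranks = [prefs[i].index(obj) for i, obj in enumerate(assignment)]
    return tuple(sum(1 for r in ranks if r <= k) for k in range(len(objects)))

def dominates(d1, d2):
    """d1 stochastically dominates d2: weakly more agents at every cutoff, strictly somewhere."""
    return all(x >= y for x, y in zip(d1, d2)) and d1 != d2

assignments = list(permutations(objects))  # agent i receives assignment[i]
rank_efficient = [
    a for a in assignments
    if not any(dominates(rank_distribution(b), rank_distribution(a)) for b in assignments)
]

for a in rank_efficient:
    print(a, rank_distribution(a))
```

In this instance the unique rank-efficient assignment gives two agents their first choice and one agent her second; a real mechanism would solve the equivalent linear program rather than enumerate.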
Haoxiang Zhu, Stanford University
Zhu discusses dark pools, which are equity trading systems that do not display orders publicly. The orders in dark pools are matched within the exchange bid-ask spread, without a guarantee of execution. Informed traders tend to cluster on one side of the market. Therefore, when submitting orders to a dark pool, they face a lower execution probability than uninformed traders, whose orders are relatively uncorrelated. Consequently, dark pools are more attractive to liquidity traders, whereas exchanges are more attractive to informed traders. Adding a dark pool tends to concentrate payoff-relevant information onto the exchange and, under natural conditions, improves price discovery.
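The execution-probability asymmetry can be seen in a minimal Monte Carlo sketch (a toy crossing model of our own, not Zhu's: the pool matches min(#buys, #sells) orders each period, informed traders all arrive on one side, and uninformed traders pick sides at random):

```python
import random

def execution_probs(n_informed=10, n_uninformed=10, trials=20000, seed=0):
    """Average per-order fill rates for informed vs. uninformed traders."""
    rng = random.Random(seed)
    informed_fill = 0.0
    uninformed_fill = 0.0
    for _ in range(trials):
        u_buys = sum(rng.random() < 0.5 for _ in range(n_uninformed))
        u_sells = n_uninformed - u_buys
        buys = n_informed + u_buys          # informed orders cluster on the buy side
        sells = u_sells
        crossed = min(buys, sells)          # volume the pool can match
        fill_buy = crossed / buys if buys else 0.0
        fill_sell = crossed / sells if sells else 0.0
        informed_fill += fill_buy
        uninformed_fill += (u_buys * fill_buy + u_sells * fill_sell) / n_uninformed
    return informed_fill / trials, uninformed_fill / trials

p_informed, p_uninformed = execution_probs()
print(round(p_informed, 2), round(p_uninformed, 2))
```

Because informed orders crowd the heavy side of the pool, their fill rate comes out well below that of uninformed orders, which is the force pushing informed traders toward the exchange.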
Mark Satterthwaite, Northwestern University; Steven R. Williams, University of Illinois at Urbana-Champaign; and Konstantinos E. Zachariadis, London School of Economics
Within a novel model of correlated private values/costs (CPV), Satterthwaite, Williams, and Zachariadis investigate how well a particular market mechanism, the buyer's bid double auction (BBDA), performs in small and moderate-sized markets: that is, do well behaved equilibria exist, and is there rapid convergence to efficient allocations and prices that accurately identify the market's underlying state? Using a combination of theorems and numerical experiments, they establish that for two plausible stochastic specifications of costs/values simple equilibria do exist even in small markets. As a function of market size, they bound traders' strategic behavior and prove rates of convergence to zero of 1) the allocation's inefficiency and 2) the error in estimating the market's underlying state. These rates imply that even in moderate-sized markets, strategic behavior has an inconsequential effect on the estimate of the market state. They also show that the affiliation condition on traders' values and costs familiar from auction theory does not imply affiliation of bids and offers, and therefore does not play the same pivotal role in proving existence as it plays in auction theory. Nevertheless, in the cases they consider, a solution to the first-order condition defines an equilibrium despite the failure of affiliation among bids and offers. The authors then extend their model to the case of correlated interdependent values (CIV). While so far they have been unable to obtain the same sequence of results as they have secured in the CPV case, they do show that the structure of equilibria in the CIV case is unchanged. As a consequence, they can straightforwardly compute a substantial sample of equilibria for different market sizes and show that the BBDA's convergence properties do not change qualitatively with the introduction of interdependence.
Steven Tadelis, University of California at Berkeley and eBay Research Labs, and Florian Zettelmeyer, Northwestern University and NBER
To measure the effect of information disclosure on market outcomes, Tadelis and Zettelmeyer use a large-scale field experiment that randomly discloses information about quality in wholesale automobile auctions. As the theoretical literature predicts, information disclosure increases expected revenues. However, in contrast with conventional theories, the biggest gains are for the best- and worst-quality cars. The authors argue that information disclosure causes better matching of heterogeneous buyers to different quality cars. This novel explanation both rationalizes patterns in their data and is confirmed by additional tests. These findings have implications for the design of other markets, including online consumer auctions, procurement auctions, and labor markets.
Tayfun Sonmez and Utku Unver, Boston College
Although a pilot national live-donor kidney exchange program was recently launched in the United States, the kidney shortage is increasing faster than ever. A new solution paradigm is to incorporate compatible pairs in exchange. Sonmez and Unver consider an exchange framework that includes both compatible and incompatible pairs, with patients who are indifferent over compatible pairs. Only two-way exchanges are permitted because of institutional constraints. The researchers explore the structure of Pareto-efficient matchings within this framework, whose mathematical structure turns out to be quite novel. They show that the same number of patients receive transplants under every Pareto-efficient matching, and that it is possible to construct a Pareto-efficient matching that matches the same incompatible pairs while using the fewest compatible pairs. The authors extend the celebrated Gallai-Edmonds Decomposition from the combinatorial optimization literature to their new framework. They also conduct comparative static exercises on how this decomposition changes as new compatible pairs join the pool.
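A toy enumeration (our own construction, which proxies Pareto efficiency by maximizing the number of incompatible pairs transplanted, not the authors' full framework) illustrates both observations: every efficient matching serves the same pairs count, and among them one can enlist the fewest compatible pairs.

```python
from itertools import combinations

# Hypothetical pool: nodes are patient-donor pairs, an edge is a feasible
# two-way exchange; 'I*' pairs are incompatible, 'C*' pairs are compatible.
edges = [("I1", "I2"), ("I1", "C1"), ("I2", "C1"), ("I3", "C2"), ("I4", "C2")]

def all_matchings(edge_list):
    """Enumerate every edge subset in which no pair appears twice."""
    matchings = [()]
    for r in range(1, len(edge_list) + 1):
        for subset in combinations(edge_list, r):
            nodes = [v for e in subset for v in e]
            if len(nodes) == len(set(nodes)):
                matchings.append(subset)
    return matchings

def count(matching, prefix):
    """Number of matched pairs whose label starts with prefix ('I' or 'C')."""
    return sum(v.startswith(prefix) for e in matching for v in e)

matchings = all_matchings(edges)
best = max(count(m, "I") for m in matchings)
efficient = [m for m in matchings if count(m, "I") == best]
fewest = min(efficient, key=lambda m: count(m, "C"))
print(best, count(fewest, "C"), fewest)
```

In this pool every efficient matching transplants three incompatible pairs and must enlist exactly one compatible pair; real instances require a proper maximum-matching algorithm rather than enumeration.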
Itai Ashlagi and David Gamarnik, MIT, and Alvin E. Roth, Harvard University and NBER
It has been shown previously, both analytically and numerically, that for sufficiently large pools of patient-donor pairs, (almost) efficient kidney exchange can be achieved by using at most three-way exchanges, that is by using exchanges among no more than three patient-donor pairs. However, as kidney exchange has grown in practice, exchanges among more than three pairs have proved useful, and long chains initiated by non-directed, altruistic donors have proven to be very effective. Ashlagi, Gamarnik, and Roth explore why this is the case, both empirically and theoretically. They provide an analytical model of exchange when there are many highly sensitized patients, and show that large cycles of exchange or long chains can significantly increase efficiency. Because large cycles of exchange cannot be used in practice, long non-simultaneous chains initiated by altruistic donors significantly increase efficiency in patient pools of the size and composition that presently exist.
Susan Athey, Harvard University and NBER; Ittai Abraham and Moshe Babaioff, Microsoft Research; and Michael Grubb, MIT
Abraham, Athey, Babaioff, and Grubb study the role of information asymmetries in second price, common value auctions. Motivated by information structures that arise commonly in applications such as online advertising, they seek to understand what types of information asymmetries lead to substantial reductions in revenue for the auctioneer. One application of their results concerns online advertising auctions in the presence of "cookies", which allow individual advertisers to recognize advertising opportunities for users who, for example, are customers of their websites. Cookies create substantial information asymmetries both ex ante and at the interim stage, when advertisers form their beliefs. This paper first introduces a new refinement, which the authors call "tremble robust equilibrium" (TRE) -- it overcomes the problem of multiplicity of equilibria in many domains of interest. Next they consider a special information structure, where only one bidder has access to superior information, and show that the seller's revenue in the unique TRE is equal to the expected value of the object conditional on the lowest possible signal, no matter how unlikely it is that this signal is realized. Thus, if cookies identify especially good users, revenue may not be affected much, but if cookies can (even occasionally) be used to identify very poor users, then the revenue consequences are severe. The third part of the paper focuses on the case where multiple bidders may be informed, providing additional characterizations of the impact of information structure on revenue. Finally, the authors consider richer market designs that ensure greater revenue for the auctioneer, for example by auctioning the right to participate in the mechanism.
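A minimal numeric illustration of the revenue result, under our own toy parameterization rather than the paper's model: the common value is high almost always, one bidder observes it (the "cookie"), and, following the equilibrium logic in the summary, the informed bidder bids the value while the uninformed bidder bids the value conditional on the worst possible signal to avoid the winner's curse.

```python
import random

def tre_revenue(trials=10000, seed=0):
    """Second-price revenue when one of two bidders is perfectly informed.
    Toy values: v = 100 with prob. 0.99, v = 1 with prob. 0.01."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        v = 100 if rng.random() < 0.99 else 1
        bid_informed = v
        bid_uninformed = 1                          # E[v | lowest possible signal]
        total += min(bid_informed, bid_uninformed)  # price = second-highest bid
    return total / trials

print(tre_revenue())
```

Revenue is pinned at 1 even though the expected value is roughly 99: the mere possibility of an occasionally very poor user collapses the seller's revenue, which is the severe consequence the authors highlight.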
Liran Einav and Jonathan D. Levin, Stanford University and NBER; Theresa Kuchler, Stanford University; and Neel Sundaresan, eBay Research Labs
The internet has dramatically reduced the cost of varying prices, displays, and information provided to consumers, facilitating both active and passive experimentation. Einav, Kuchler, Levin, and Sundaresan document the prevalence of targeted pricing and auction-design variation on eBay, and identify hundreds of thousands of experiments conducted by sellers across a wide array of retail products. They show how this type of data can be used to address questions about consumer behavior and market outcomes, and provide illustrative results on price dispersion, the frequency of over-bidding, the choice of reserve prices, "buy now"options and other auction-design parameters, and on consumer sensitivity to shipping fees. They argue that leveraging the experiments of market participants takes advantage of the scale and heterogeneity of online markets and can be a powerful approach for testing and measurement.
Arpita Ghosh and Preston McAfee, Yahoo! Research
Ghosh and McAfee provide a game-theoretic model for studying how to incentivize high-quality user-generated content (UGC) in settings such as online review forums, question-answer sites, and comments on news articles and blogs, where contributors are strategic and motivated by exposure. In their model, both the quality of contributions and the extent of participation are determined endogenously in a free-entry Nash equilibrium. As observed in practice, the model predicts that if exposure is independent of quality, there will be a flood of low-quality contributions in equilibrium. In this context, an ideal mechanism would elicit both high quality and high participation in equilibrium, with near-optimal quality as the available attention diverges, and such a mechanism should be easily implementable in practice. The authors consider a very simple elimination mechanism, which subjects each contribution to rating by some given number of viewers and eliminates any contribution that is not uniformly rated positively. They construct and analyze free-entry Nash equilibria for this mechanism, and show that the number of viewers can be chosen so as to achieve quality tending toward optimal, along with diverging participation, as the number of viewers diverges.
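The screening force of the elimination rule is easy to see in a one-line sketch (our own toy reading: quality q is the chance a random viewer rates the contribution positively, and the contribution survives only if all k sampled viewers do so):

```python
def survival_prob(q, k):
    """A contribution survives the elimination mechanism only if all k
    independently sampled viewers rate it positively."""
    return q ** k

# Raising k sharpens the screen: high quality mostly survives, low quality rarely.
for q in (0.9, 0.5, 0.2):
    print(q, [round(survival_prob(q, k), 3) for k in (1, 3, 5)])
```

The gap between high- and low-quality survival rates widens geometrically in k, which is why tuning the number of viewers can drive equilibrium quality toward optimal.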
Sven Seuken, University of Zurich; David Parkes, Harvard University; Eric Horvitz, Mary Czerwinski, and Desney Tan, Microsoft Research; and Kamal Jain, eBay Research Labs
Despite the pervasiveness of electronic markets in our lives, very little is known about the role of user interfaces (UIs) in promoting good performance in market domains. How does the way we display market information to end users, and the set of choices we offer, influence economic efficiency? Seuken, Parkes, Horvitz, Jain, and Czerwinski introduce a new research agenda on "market user interface design." They take the domain of 3G bandwidth allocation as an illustrative example, and consider the design space of UIs in terms of varying the number of choices offered, fixed versus changing market prices, and situation-dependent choice sets. The UI design induces a Markov decision process, the solution to which provides a gold standard against which user behavior is studied. The authors provide a systematic, empirical study of the effect of different UI levers on user behavior and market performance, along with considerations of behavioral factors including loss aversion, incomplete search, and position effects. Finally, they fit a quantal-best response model to users' actions and evaluate an optimized market user interface.
John W. Hatfield and Fuhito Kojima, Stanford University, and Yusuke Narita, MIT
Hatfield, Kojima, and Narita study the effect of different school choice mechanisms on schools' incentives for improving quality. To do so, they introduce the following criterion: a mechanism respects improvements of school quality if each school becomes slightly better off whenever that school becomes more preferred by students. The authors first show that no stable mechanism, nor any mechanism that is Pareto-efficient for students (such as the Boston and top trading cycles mechanisms), respects improvements of school quality. Nevertheless, for large school districts, any stable mechanism approximately respects improvements of school quality; by contrast, the Boston and top trading cycles mechanisms fail to do so. Thus, a stable mechanism may provide better incentives for schools to improve themselves than the Boston and top trading cycles mechanisms.
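For reference, the canonical stable mechanism discussed here is student-proposing deferred acceptance; a minimal sketch follows (the students, schools, preferences, and capacities are hypothetical):

```python
def deferred_acceptance(student_prefs, school_prefs, capacity):
    """Student-proposing deferred acceptance (Gale-Shapley) for school choice."""
    next_choice = {s: 0 for s in student_prefs}    # index of next school to try
    held = {c: [] for c in school_prefs}           # tentatively accepted students
    free = list(student_prefs)
    while free:
        s = free.pop()
        if next_choice[s] >= len(student_prefs[s]):
            continue                               # s has exhausted their list
        c = student_prefs[s][next_choice[s]]
        next_choice[s] += 1
        held[c].append(s)
        # The school keeps its top `capacity[c]` applicants, rejecting the rest.
        held[c].sort(key=school_prefs[c].index)
        while len(held[c]) > capacity[c]:
            free.append(held[c].pop())
    return {c: sorted(ss) for c, ss in held.items()}

students = {"s1": ["A", "B"], "s2": ["A", "B"], "s3": ["B", "A"]}
schools = {"A": ["s1", "s2", "s3"], "B": ["s2", "s3", "s1"]}
caps = {"A": 1, "B": 2}
print(deferred_acceptance(students, schools, caps))
```

In this instance s2 is displaced from A by s1 and lands at B, and no student-school pair would rather be matched to each other, which is the stability property at the heart of the approximate respecting-improvements result.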
Yan Chen, University of Michigan, and Onur Kesten, Carnegie Mellon University
Chen and Kesten characterize a parametric family of application-rejection school choice mechanisms, including the Boston, Shanghai, and Deferred Acceptance (DA) mechanisms as special cases, and spanning the parallel mechanisms for Chinese college admissions, the largest centralized matching process in the world. They find that moving from one extreme member of the family to the other results in systematic changes in manipulability and nested Nash equilibria. In the laboratory, participants are most likely to reveal their preferences truthfully under the DA mechanism, followed by the Shanghai and then the Boston mechanism. Furthermore, while DA is significantly more stable than Shanghai, which in turn is more stable than Boston, efficiency comparisons vary across environments.