June 8-9, 2014
Haluk Ergin, Duke University, and Tayfun Sonmez and Utku Unver, Boston College
Live donor transplants are carried out for both livers and lungs, where a lobe of these organs from a donor is transplanted into the patient. Since 2010, a handful of live donor liver exchanges have been conducted in South Korea and Hong Kong. In addition to blood-type compatibility, size compatibility is also essential for a successful transplant for both organs. Furthermore, live donor lung transplantation requires two donors for each patient, each of whom donates a lung lobe. Ergin, Sonmez, and Unver describe live donor lung exchange as a potential new lung transplantation modality. They introduce and analyze models of liver exchange and lung exchange, and simulate the welfare gains from these organ exchanges.
Yeon-Koo Che, Columbia University; Jinwoo Kim, Yonsei University; and Fuhito Kojima, Stanford University
Complementarities of preferences have been known to jeopardize the stability of two-sided matching markets, yet they are a pervasive feature in many matching markets. Che, Kim, and Kojima revisit the stability issue with such preferences in a large market. Workers have preferences over firms while firms have preferences over distributions of workers and may exhibit complementarity. The authors demonstrate that if each firm's choice changes continuously as the set of available workers changes, then there exists a stable matching even with complementarity. Building on this result, the authors show that there exists an approximately stable matching in any large finite economy. They apply their analysis to show the existence of stable matching in probabilistic and time-share matching models with a finite number of firms and workers.
Nicolas Lambert, Michael Ostrovsky, and Mikhail Panov, Stanford University
Lambert, Ostrovsky, and Panov study trading behavior and the properties of prices in informationally complex markets. Their model is based on the single-period version of the linear-normal framework of Kyle (1985). They allow for essentially arbitrary correlations among the random variables involved in the model: the true value of the traded asset, the signals of strategic traders, the signals of competitive market makers, and the demand coming from liquidity traders. The authors first show that there always exists a unique linear equilibrium which can be characterized analytically, and they illustrate its properties in a series of examples. They then use this equilibrium characterization to study the informational efficiency of prices as the number of strategic traders becomes large. If the demand from liquidity traders is uncorrelated with the true value of the asset or is positively correlated with it (conditional on other signals), then prices in large markets aggregate all available information. However, if the demand from liquidity traders is negatively correlated with the true value of the asset, then prices in large markets aggregate all available information except that which is contained in liquidity demand.
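The baseline of their framework, the one-shot Kyle (1985) model with a single informed trader and uncorrelated liquidity demand, has a well-known closed-form linear equilibrium that a short sketch can reproduce (the parameter values below are illustrative, not from the paper):

```python
def kyle_equilibrium(sigma_v, sigma_u):
    """Closed-form linear equilibrium of the one-shot Kyle (1985) model.

    v ~ N(p0, sigma_v^2) is the asset value, u ~ N(0, sigma_u^2) is noise
    (liquidity) demand. The insider submits x = beta * (v - p0); the market
    maker prices p = p0 + lam * (x + u).
    """
    beta = sigma_u / sigma_v          # insider's trading intensity
    lam = sigma_v / (2.0 * sigma_u)   # Kyle's lambda (price impact)
    return beta, lam

def posterior_variance(sigma_v, sigma_u):
    """Var(v | p): residual uncertainty about value after observing price."""
    beta, lam = kyle_equilibrium(sigma_v, sigma_u)
    var_p = lam**2 * (beta**2 * sigma_v**2 + sigma_u**2)
    cov_vp = lam * beta * sigma_v**2
    return sigma_v**2 - cov_vp**2 / var_p

sigma_v, sigma_u = 1.0, 2.0
beta, lam = kyle_equilibrium(sigma_v, sigma_u)
# Exactly half of the insider's information is impounded into the price:
print(posterior_variance(sigma_v, sigma_u))  # 0.5 = sigma_v^2 / 2
```

With a single strategic trader, the price reveals only half the insider's information; the large-market aggregation results above describe how this residual uncertainty vanishes (or fails to) as the number of strategic traders grows.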
Nima Haghpanah and Jason Hartline, Northwestern University
Myerson's 1981 characterization of revenue-optimal auctions for single-dimensional agents follows from an amortized analysis of the revenue from a single agent. To optimize revenue in expectation, he maps values to virtual values which account for expected revenue gain but can be optimized pointwise. For single-dimensional agents the appropriate virtual values are unique and their closed form can be easily derived from revenue equivalence. A main challenge of generalizing the Myersonian approach to multidimensional agents is that the right amortization is not pinned down by revenue equivalence.
For multidimensional agents, the optimal mechanism may be very complex. Complex mechanisms are impractical and rarely employed. Haghpanah and Hartline offer a framework for reverse mechanism design. Instead of solving for the optimal mechanism in general, they assume a (natural) specific form of the mechanism and then identify sufficient conditions for its optimality. As an example of the framework, for agents with unit-demand preferences, the authors restrict attention to mechanisms that sell each agent her favorite item or nothing. From this restricted form, they derive multidimensional virtual values which suggest that this form of mechanism is optimal for a large class of distributions over types. As another example of the framework, for bidders with additive preferences, the authors derive conditions for the optimality of posting a single price for the grand bundle.
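The single-dimensional benchmark described above can be made concrete. For a buyer with value v drawn from F, Myerson's virtual value is phi(v) = v - (1 - F(v))/f(v), and pointwise optimization sells exactly when phi(v) >= 0. For the Uniform[0,1] case (a standard textbook example, not the paper's multidimensional construction), this pins down the familiar posted price of 1/2:

```python
def virtual_value(v):
    # Uniform[0,1]: F(v) = v and f(v) = 1, so phi(v) = v - (1 - v)
    return 2.0 * v - 1.0

def revenue(price):
    # Expected revenue from posting `price` to one Uniform[0,1] buyer:
    # price times the probability the buyer accepts.
    return price * (1.0 - price)

# Grid search confirms the optimum sits where the virtual value crosses zero.
best = max((revenue(p / 1000), p / 1000) for p in range(1001))
print(best)  # (0.25, 0.5): revenue 0.25 at price 0.5, where phi(0.5) = 0
```

The paper's contribution is precisely that no analogous unique pointwise amortization is pinned down once types are multidimensional, which is why the authors work backward from a candidate mechanism form instead.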
Umut Dur, North Carolina State University; Scott Duke Kominers, Harvard University; Parag Pathak, MIT; and Tayfun Sonmez, Boston College
School choice plans in many cities grant students higher priority for some (but not all) seats at their neighborhood schools. Dur, Kominers, Pathak, and Sonmez demonstrate how the precedence order, that is, the order in which different types of seats are filled by applicants, has quantitative effects on distributional objectives comparable to priorities in the deferred acceptance algorithm. While Boston's school choice plan gives priority to neighborhood applicants for half of each school's seats, the intended effect of this policy is lost because of the precedence order. Despite widely held impressions about the importance of neighborhood priority, the outcome of Boston's implementation of a 50-50 school split is nearly identical to a system without neighborhood priority. The authors formally establish that either increasing the number of neighborhood priority seats or lowering the precedence order positions of neighborhood seats at a school has the same effect: an increase in the number of neighborhood students assigned to the school. The authors then show that in Boston a reversal of precedence with no change in priorities covers almost three-quarters of the range between 0 percent and 100 percent neighborhood priority. Therefore, decisions about precedence are inseparable from decisions about priorities. Transparency about these issues, in particular how precedence unintentionally undermined neighborhood priority, led to the abandonment of neighborhood priority in Boston in 2013.
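The precedence effect can be seen in a one-school example (the applicants and seat counts below are hypothetical, and this sketches only a single school's choice function, not the full deferred acceptance procedure):

```python
def fill_seats(students, seat_order):
    """Fill seats one at a time, in the given precedence order.

    students: list of (lottery, is_neighborhood); lower lottery = higher priority.
    seat_order: sequence of 'N' (neighborhood-priority seat) or 'O' (open seat).
    Returns the list of assigned students.
    """
    remaining = set(students)
    assigned = []
    for seat in seat_order:
        if seat == 'N':
            # Neighborhood seat: neighborhood applicants first, then by lottery.
            pick = min(remaining, key=lambda s: (not s[1], s[0]))
        else:
            # Open seat: lottery order only.
            pick = min(remaining, key=lambda s: s[0])
        remaining.remove(pick)
        assigned.append(pick)
    return assigned

# 3 neighborhood applicants (lotteries 1, 3, 6) and 3 outside applicants
# (lotteries 2, 4, 5) compete for 4 seats, half of them neighborhood-priority.
students = [(1, True), (3, True), (6, True), (2, False), (4, False), (5, False)]

n_first = fill_seats(students, ['N', 'N', 'O', 'O'])
o_first = fill_seats(students, ['O', 'O', 'N', 'N'])
print(sum(s[1] for s in n_first))  # 2 neighborhood students assigned
print(sum(s[1] for s in o_first))  # 3 neighborhood students assigned
```

Processing neighborhood seats first wastes the priority on high-lottery neighborhood applicants who would have won open seats anyway; processing open seats first lets those applicants claim open seats, so the neighborhood seats protect additional neighborhood students, consistent with the comparative static established in the paper.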
Eric Budish, University of Chicago, and Judd Kessler, University of Pennsylvania
Budish and Kessler report on an experiment conducted at the Wharton School of the University of Pennsylvania testing the efficacy of a new mechanism for assigning students to course schedules. The experiment compared the new mechanism, Budish's (2011) approximate competitive equilibrium from equal incomes (CEEI), to Wharton's current mechanism, a fake-money auction that Wharton introduced in 1996 and that was subsequently adopted by numerous other professional schools. In the experiment, CEEI outperforms Wharton's course auction on quantitative measures of efficiency and fairness as well as on qualitative measures of perceived strategic simplicity and student satisfaction. The experiment was also successful in the Roth (1986) sense of "whispering in the ears of princes" in that it persuaded the Wharton administration to adopt CEEI beginning in the academic year 2013-14, and results from the experiment have guided aspects of the real-world implementation. Consistent with the findings from the experiment, data from the implementation at Wharton show that CEEI has enhanced efficiency, fairness, and student satisfaction.
Susan Athey, Stanford University and NBER, and Denis Nekipelov, University of Virginia
Lawrence Ausubel, University of Maryland, and Oleg Baranov, University of Colorado Boulder
Activity rules are among the key innovations underlying modern dynamic multi-item auctions. Activity rules require bidders to bid on substantial sets of items early in the auction, in order to retain the right to bid on similarly-sized sets of items later in the auction. Without activity rules, bidders would have a strong incentive to "bid snipe," working at cross-purposes with the objectives of a dynamic auction. Ausubel and Baranov examine recent spectrum auctions that have used the Combinatorial Clock Auction (CCA) format to explore activity rules empirically. There has been considerable variation in the activity rules used in spectrum auctions. Traditional activity rules have required monotonicity in the eligibility points assigned to the various items. Some recent auctions have incorporated revealed-preference considerations; some have used the "relative cap," while others have additionally imposed the more stringent "final cap." The authors assess the difference that including more stringent constraints makes in practice, both for limiting the amounts of supplementary bids and on the extent of stability in allocations between the clock rounds and the supplementary round.
In addition, the authors have argued in recent work that traditional activity rules may be both too weak and too strong, and instead have advocated activity rules based upon the Generalized Axiom of Revealed Preference (GARP). Recent spectrum auctions provide the opportunity to explore this issue empirically. Either of two perspectives can be taken. Under one view, GARP violations exhibited in the bidding data may suggest that GARP-based activity rules are not viable. Under a second view, bidders' true preferences may be thought to be generally consistent with GARP, and violations may spotlight instances in which bidders were attempting to exploit strategic opportunities. Ultimately, these two views are probably not empirically distinguishable, but their empirical examination may shed light on whether activity rules in past auctions have been appropriate and help give insight into the optimal structuring of activity rules for future auctions.
Nikhil Agarwal, Yale University, and Paulo Somaini, MIT and NBER
Agarwal and Somaini study estimation and identification of preferences using data from single-unit assignment mechanisms that are not necessarily truthfully implementable. Their approach views the report made by an agent as a choice of a probability distribution over her assignments. Consistent estimates of these probabilities can be obtained for a large class of mechanisms, which the authors call report-specific priority + cutoff mechanisms. This class includes the Boston Mechanism and the Deferred Acceptance mechanism. The authors then study identification of a latent utility preference model under the assumption that agents play a limit Bayesian Nash Equilibrium (limit equilibria are approximate equilibria in finite markets). They show that this equilibrium assumption is testable using the available data. Preferences are nonparametrically identified under either sufficient variation in choice environments or sufficient variation in a special regressor. Finally, the authors illustrate their techniques using data from elementary school admissions in Cambridge, Massachusetts.
Paul Milgrom and Ilya Segal, Stanford University
Deferred-acceptance auctions choose allocations by an iterative process of rejecting the least attractive bid. Milgrom and Segal find that these auctions have distinctive computational and incentive properties that might make them suitable for application in some challenging environments, such as the planned U.S. auction to repurchase television broadcast rights. For any set of values, any deferred-acceptance auction with "threshold pricing" is weakly group strategy-proof, can be implemented using a clock auction, and leads to the same outcome as the complete-information Nash equilibrium of the corresponding paid-as-bid auction. A paid-as-bid auction with a non-bossy bid-selection rule is dominance solvable if and only if it is a deferred-acceptance auction.
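A minimal sketch of the reject-the-worst iteration with threshold pricing, in a stylized procurement setting (the bids and the "keep the k cheapest bids" feasibility rule are illustrative; the paper's framework allows far more general feasibility constraints):

```python
def deferred_acceptance_reverse_auction(bids, k):
    """Buy k units by iteratively rejecting the least attractive (highest)
    bid until only k bidders remain; pay each winner her threshold price,
    the largest bid she could have named and still won.

    With this simple feasibility constraint the threshold is uniform:
    the lowest rejected bid. Assumes len(bids) > k.
    """
    active = dict(enumerate(bids))
    while len(active) > k:
        # Reject the currently least attractive bid. In clock-auction form,
        # this is the moment the descending clock passes this bidder's ask.
        worst = max(active, key=lambda i: active[i])
        threshold = active.pop(worst)
    winners = sorted(active)
    return winners, threshold  # every winner is paid `threshold`

bids = [30, 10, 55, 20, 40]   # hypothetical station asking prices
winners, price = deferred_acceptance_reverse_auction(bids, k=3)
print(winners, price)  # [0, 1, 3] 40: the three cheapest asks win, each paid 40
```

Because each winner is paid the most she could have asked while still winning, no bidder (or coalition) can gain by shading her ask, illustrating the weak group strategy-proofness of threshold pricing noted above.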
Liran Einav and Jonathan Levin, Stanford University and NBER; Chiara Farronato, Stanford University; and Neel Sundaresan, eBay
Consumer auctions were very popular in the early days of internet commerce, but today online sellers mostly use posted prices. Data from eBay show that compositional shifts in the items sold, or the sellers offering these items, cannot account for this evolution. Instead, the returns to sellers using auctions have diminished. Einav, Farronato, Levin, and Sundaresan develop a model to distinguish between two hypotheses: a shift in buyer demand away from auctions, and a general narrowing of seller margins that favors posted prices. The authors' estimates suggest that the former is more important. They also provide evidence on where auctions are still used, and on why some sellers may continue to use both auctions and posted prices.
Bradley Larsen, Stanford University
Larsen quantifies the efficiency of a real-world bargaining game with two-sided incomplete information. Myerson and Satterthwaite (1983) and Williams (1987) derived the theoretical efficient frontier for bilateral trade under two-sided uncertainty, but little is known about how well real-world bargaining performs relative to the frontier. The setting is wholesale used-auto auctions, an $80 billion industry where buyers and sellers participate in alternating-offer bargaining when the auction price fails to reach a secret reserve price. Using 300,000 auction/bargaining sequences, the author nonparametrically estimates bounds on the distributions of buyer and seller valuations and then estimates where bargaining outcomes lie relative to the efficient frontier. Findings indicate that the observed auction-followed-by-bargaining mechanism is quite efficient, achieving 88 to 96 percent of the surplus and 92 to 99 percent of the trade volume that can be achieved on the efficient frontier.
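The frontier being referenced can be illustrated in the textbook uniform case (a stylized benchmark, not the author's empirical model). With buyer value v and seller cost c i.i.d. Uniform[0,1], first-best gains from trade are 1/6, while the Myerson-Satterthwaite bound, attained by the linear equilibrium of the split-the-difference double auction (trade occurs iff v >= c + 1/4), caps achievable gains at 9/64, about 84 percent of first best:

```python
def gains_from_trade(trade_rule, n=1000):
    """E[(v - c) * 1{trade(v, c)}] for buyer value v and seller cost c
    i.i.d. Uniform[0,1], by midpoint-rule numerical integration."""
    total = 0.0
    for i in range(n):
        v = (i + 0.5) / n
        for j in range(n):
            c = (j + 0.5) / n
            if trade_rule(v, c):
                total += v - c
    return total / n**2

# First best: trade whenever there are gains (v > c).
first_best = gains_from_trade(lambda v, c: v > c)

# Second best: linear equilibrium of the split-the-difference double
# auction (Chatterjee-Samuelson), which trades iff v >= c + 1/4 and
# attains the Myerson-Satterthwaite frontier in this uniform setting.
second_best = gains_from_trade(lambda v, c: v >= c + 0.25)

print(round(first_best, 3), round(second_best, 3))  # ~0.167 and ~0.141
print(round(second_best / first_best, 3))           # ~0.844 (27/32 exactly)
```

Larsen's exercise is the empirical analogue of this ratio: measuring how close observed auction-plus-bargaining outcomes come to the estimated frontier rather than to the theoretical uniform benchmark.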