NATIONAL BUREAU OF ECONOMIC RESEARCH

Market Design Working Group Meeting

May 15-16, 2009
Susan Athey and Parag Pathak, Organizers

Patrick Bajari, University of Minnesota and NBER, and Gregory Lewis, Harvard University and NBER
Procurement Contracting with Time Incentives: Theory and Evidence

In public sector procurement, social welfare often depends on the time taken to complete the contract. A leading example is highway construction, where slow completion inflicts a negative externality on commuters. Recently, highway departments have introduced innovative contracting methods that give contractors explicit time incentives. Bajari and Lewis characterize equilibrium bidding and the efficient design of these contracts. They then gather a unique dataset of highway repair projects awarded by the Minnesota Department of Transportation that includes both innovative and standard contracts. Descriptive analysis shows that for both contract types, contractors respond to the incentives as the theory predicts, both at the bidding stage and after the contract is awarded. Next, the researchers build a structural econometric model that endogenizes project completion times and perform counterfactual policy analysis. Their estimates suggest that switching from standard contracts to designs with socially efficient time incentives would increase welfare by over 19 percent of the contract value, or $290 million in terms of the 2009 MN/DOT budget. They conclude that large improvements in social welfare are possible through improved contract design.


Francesco Decarolis, University of Chicago
When the Highest Bidder Loses the Auction: Theory and Evidence from Public Procurement

When bids do not represent binding commitments, the use of a first-price sealed-bid auction favors those bidders who are penalized least for reneging on their bids. They are the most likely to win, but also the most likely to default on their bid. Decarolis studies two methods often used in public procurement to deal with this problem: 1) augmenting the first-price auction with an ex-post verification of the responsiveness of the bids, and 2) using an average bid auction, in which the winning bid is the one closest to the simple average of all the bids. The average bid auction is new to economics but has been proposed in the civil engineering literature. Decarolis shows that when penalties for defaulting are asymmetric across bidders, and when their valuations are characterized by a predominant common component, the average bid auction is preferred by the auctioneer over the standard first-price auction when the costs attributable to the winner’s bankruptcy are high enough. Depending on the cost of the ex-post verification, the average bid auction can be dominated by the first-price auction with monitoring. Decarolis uses a new dataset of Italian public procurement auctions, run alternately as a form of the average bid auction or as the augmented first-price auction, to structurally estimate the bids’ verification cost, the firms’ markups, and the inefficiency generated by the average bid auctions. The proposed estimation procedure uses the informational content of the reserve price to account for unobserved heterogeneity across auctions.
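The winner-selection rule of the average bid auction is simple enough to state in code. A minimal sketch, assuming lowest-index tie-breaking (the abstract does not specify a tie-breaking rule):

```python
def average_bid_winner(bids):
    """Average bid auction: the winner is the bidder whose bid is
    closest to the simple average of all submitted bids.
    Tie-breaking by lowest index is an assumption, not from the text."""
    avg = sum(bids) / len(bids)
    return min(range(len(bids)), key=lambda i: abs(bids[i] - avg))

# In procurement, bids are prices for doing the work. An aggressive
# lowball bid (60 here) sits far from the average and loses, which
# blunts the incentive to submit non-binding bids.
example_bids = [100.0, 95.0, 110.0, 60.0, 105.0]  # average is 94
```

Note how the rule removes the usual advantage of undercutting: a bid far below everyone else's drags the average down only a little and ends up far from it.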


Michael Ostrovsky, Stanford University
Information Aggregation in Dynamic Markets with Strategic Traders

Ostrovsky studies information aggregation in dynamic markets with a finite number of partially informed strategic traders. He shows that for a broad class of securities, information in such markets always gets aggregated. Trading takes place in a bounded time interval, and in every equilibrium, as time approaches the end of the interval, the market price of a “separable” security converges in probability to its expected value conditional on the traders’ pooled information. If the security is “non-separable,” then there exists a common prior over the states of the world, and an equilibrium, such that information does not get aggregated. The class of separable securities includes, among others, Arrow-Debreu securities, whose value is one in one state of the world and zero in all others, and “additive” securities, whose value can be interpreted as the sum of traders’ signals.
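The two named examples of separable securities have payoffs that can be written down directly. A minimal sketch, assuming states and signals are represented as plain values:

```python
def arrow_debreu_payoff(state, target_state):
    """Arrow-Debreu security: worth one in a single state of the
    world and zero in all others."""
    return 1 if state == target_state else 0

def additive_payoff(signals):
    """'Additive' security: its value is the sum of the traders'
    signals."""
    return sum(signals)
```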


Luis Rayo, University of Chicago, and Ilya Segal, Stanford University
Optimal Information Disclosure

A “Sender” (an internet advertising platform, seller, rating agency, or school) has a probability distribution over “prospects” (internet ads, products, bonds, or students). Each prospect is characterized by its profitability to the Sender and its relevance to a Receiver (an internet user, consumer, investor, or employer). The Sender privately observes the profitability and relevance of the prospect, whereas the Receiver observes only a signal provided by the Sender (the prospect’s “rating”). The Receiver accepts a given prospect only if his Bayesian inference about its relevance exceeds a private opportunity cost drawn uniformly from [0,1]. Rayo and Segal characterize the Sender’s optimal information disclosure rule, assuming she has commitment power. While the Receiver’s welfare is maximized by full disclosure, the Sender’s profits are typically maximized by partial disclosure, in which the Receiver is induced to accept less relevant but more profitable prospects (“switches”) by pooling them with more relevant but less profitable ones (“baits”). Extensions of the model include maximizing a weighted sum of Sender profits and Receiver welfare, and allowing the Sender to subsidize or tax the Receiver.
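Because the Receiver's opportunity cost is uniform on [0,1], his probability of accepting a prospect simply equals his posterior expectation of its relevance. A minimal sketch of why pooling a bait with a switch can raise Sender profits; the numeric values are illustrative assumptions, not taken from the paper:

```python
def pooled_profit(prospects):
    """Expected Sender profit per prospect when all prospects in the
    list share one rating. Each prospect is (relevance, profit); with
    a Uniform[0,1] cost, Pr(accept) is the pooled average relevance."""
    n = len(prospects)
    accept_prob = sum(r for r, _ in prospects) / n
    return accept_prob * sum(p for _, p in prospects) / n

bait = (0.9, 1.0)     # highly relevant, barely profitable
switch = (0.1, 10.0)  # barely relevant, highly profitable

# Separate ratings: each prospect is accepted with its own relevance.
separate = (pooled_profit([bait]) + pooled_profit([switch])) / 2
# One shared rating: the bait props up acceptance of the switch.
pooled = pooled_profit([bait, switch])
```

Under these illustrative numbers, pooling roughly triples expected profit per prospect (2.75 versus 0.95), because the profitable switch inherits the bait's credibility.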


Estelle Cantillon, ECARES, and Pai-Ling Yin, MIT
Competition between Exchanges: Lessons from the Battle of the Bund

In a famous episode of financial history that lasted over eight years, the market for the future on the Bund moved entirely from LIFFE, the incumbent London-based derivatives exchange, to DTB, the entering Frankfurt-based exchange. Cantillon and Yin study the determinants of traders’ exchange choice, using a novel panel dataset that contains individual trading firms’ membership status at each exchange, together with other firm characteristics and each exchange’s pricing, marketing, and product portfolio strategies. Their data allow them to evaluate different sources of heterogeneity among trading firms and thus to distinguish between different explanations for the observed phenomenon. The story the data tell is one of horizontal differentiation, and of vertical differentiation through liquidity. As a result, DTB attracted a different set of traders than LIFFE, and those traders contributed to the market share reversal.


Itay Fainmesser, Harvard University
Community Structure and Market Outcomes: Towards a Theory of Repeated Games in Networks

Fainmesser investigates the role of community structure and institutions in determining outcomes of markets with asymmetric information and moral hazard. He introduces a framework for studying repeated games in buyer-seller networks, and shows that networks that facilitate cooperation are sparse, balanced with respect to the degrees of buyers and sellers, and divide society into segregated communities. Moreover, there is an inherent trade-off between sustaining cooperation and both 1) maximizing the volume of trade and 2) providing equal opportunities to market participants. When demand and supply fluctuate, the first best cannot be achieved, and the second best balances “old world” tight and sparse networks against a “global village.” Institutions such as reputation systems, litigation, and third-party evaluation services can restore efficiency and equality of opportunity. These results are consistent with observations from labor and credit markets, supply networks, and development economics.

Maher Said, Yale University
Auctions with Dynamic Populations: Efficiency and Revenue Maximization

Said examines an environment where objects and privately informed buyers arrive stochastically to a market. The seller in this setting faces a sequential allocation problem with a changing population. Said characterizes the set of incentive-compatible allocation rules and provides a generalized revenue equivalence result. In contrast to a static setting, where incentive compatibility implies that higher-valued buyers have a greater likelihood of receiving an object, in this dynamic setting incentive compatibility implies that higher-valued buyers have a greater likelihood of receiving an object sooner. Said also characterizes the set of efficient allocation rules and shows that a dynamic Vickrey-Clarke-Groves mechanism is efficient and periodic ex post incentive compatible. Said then derives the revenue-maximizing allocation rule and shows that the optimal direct mechanism is a pivot mechanism with a reserve price. Finally, he considers sequential ascending auctions in this setting, both with and without a reserve price. He constructs memoryless equilibrium bidding strategies in this indirect mechanism. Bidders reveal their private information in every period, yielding the same outcomes as the direct mechanisms. Thus, the sequential ascending auction is a natural institution for achieving either efficient or optimal outcomes. Interestingly, this is not the case for sequential second-price auctions, as the bids in a second-price auction do not reveal sufficient information to realize either the efficient or the optimal allocation.


Alex Gershkov and Paul Schweinzer, University of Bonn
When Queueing is Better than Push and Shove

Gershkov and Schweinzer address the scheduling problem of reordering an existing queue into its efficient order through trade. To that end, they consider individually rational, budget-balanced direct and indirect mechanisms. They show that this class of mechanisms makes it possible to form efficient queues, provided that existing property rights to the service are small enough to enable trade between the agents. In particular, they show on the one hand that no queue under a fully deterministic service schedule, such as first-come, first-served, can be dissolved efficiently while meeting their requirements. If, on the other hand, the alternative is full service anarchy, then every existing queue can be transformed into its efficient order.


Peter Cramton, University of Maryland
Spectrum Auction Design

Spectrum auctions are used by governments to assign and price licenses for wireless communications. The standard approach is the simultaneous ascending auction, in which many related lots are auctioned simultaneously in a sequence of rounds. Cramton analyzes the strengths and weaknesses of the approach with examples from U.S. spectrum auctions. He then presents a variation, the package clock auction, adopted by the United Kingdom, which addresses many of the problems of the simultaneous ascending auction while building on its strengths. The package clock auction is a simple dynamic auction in which bidders bid on packages of lots. Most importantly, the auction allows alternative technologies that require the spectrum to be organized in different ways to compete in a technology-neutral auction. In addition, the pricing rule and information policy are carefully tailored to mitigate gaming behavior. An activity rule based on revealed preference promotes price discovery throughout the clock stage of the auction. Truthful bidding is encouraged, which simplifies bidding and improves efficiency. Experimental tests and early auctions confirm the advantages of the approach.


Jeremy Bulow and Jonathan Levin, Stanford University and NBER, and Paul Milgrom, Stanford University
Winning Play in Spectrum Auctions

Bulow and his co-authors describe factors that make bidding in large spectrum auctions complex, including exposure and budget problems, the role of timing within an ascending auction, and the possibilities for price forecasting. They also ask how economic and game-theoretic analysis can assist bidders in overcoming these problems. The researchers illustrate with the case of the FCC’s Advanced Wireless Service auction, in which a new entrant, SpectrumCo, faced all of these problems yet managed to purchase nationwide coverage at a discount of roughly one third relative to the prices paid by its incumbent competitors in the same auction, saving more than a billion dollars.


Eric Budish, Harvard University
The Combinatorial Assignment Problem: Approximate Competitive Equilibrium from Equal Incomes

The combinatorial assignment problem has three principal features: agents require bundles of indivisible objects; monetary transfers are prohibited; and the market administrator cares about both efficiency and fairness. One example of this problem is the assignment of course schedules to students. Impossibility theorems have established that the only efficient and strategyproof mechanisms in this environment are dictatorships; any non-dictatorship will involve some compromise of efficiency or strategyproofness. Budish proposes a solution to the combinatorial assignment problem. Because attainable fairness criteria for this environment have been lacking, he begins by formalizing two: the maximin share guarantee, based on the idea of divide-and-choose, generalizes and weakens fair share; envy bounded by a single good weakens envy freeness. Both criteria recognize that indivisibilities complicate fair division, but exploit the fact that bundles of indivisible objects are somewhat divisible. Dictatorships fail both criteria. Next, Budish proposes a specific mechanism, the Approximate Competitive Equilibrium from Equal Incomes (CEEI) Mechanism, which satisfies the fairness criteria while maintaining attractive compromises of efficiency and strategyproofness. An exact CEEI may not exist because of indivisibilities and complementarities. Budish proves existence of an approximate CEEI in which incomes (in an artificial currency) must be unequal but can be arbitrarily close together, and the market clears with some error, which approaches zero in the limit and is small for realistic problems. He then shows that this approximation satisfies the fairness criteria, so long as income inequality is set sufficiently low. The mechanism based on this approximation also satisfies an intuitive relaxation of strategyproofness: strategyproofness in a large market. The theoretical case for the proposed mechanism is complemented with empirical analysis, using data on course allocation at Harvard.
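The maximin share guarantee can be made concrete by brute force on a tiny instance. A minimal sketch, assuming additive valuations purely for illustration (the course-allocation setting in the paper allows complementarities):

```python
from itertools import product

def maximin_share(values, n_agents):
    """An agent's maximin share: the best worst-bundle value she can
    guarantee by dividing the objects into n_agents bundles herself
    and receiving the least valuable one (divide-and-choose logic).
    Brute force over all bundle assignments; additive values are an
    assumption for this illustration, not from the paper."""
    best = float("-inf")
    for assignment in product(range(n_agents), repeat=len(values)):
        totals = [0] * n_agents
        for obj, bundle in enumerate(assignment):
            totals[bundle] += values[obj]
        best = max(best, min(totals))
    return best

# With objects worth 7, 5, 4, 2 and two agents, the best split is
# {7, 2} versus {5, 4}, so the maximin share is 9.
```

Exhaustive enumeration is exponential in the number of objects and only meant to pin down the definition; it is not how any practical mechanism would compute the guarantee.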


John Kagel, Ohio State University, Yuanchuan Lien, California Institute of Technology, and Paul Milgrom, Stanford University
Ascending Prices and Package Bidding: A Theoretical and Experimental Analysis

Kagel and his co-authors use theory and experiment to explore the performance of multi-stage, price-guided, combinatorial auctions. Dynamics in combinatorial auctions can help bidders to identify relevant packages and encourage losing bidders to bid to their limits. This experiment compares a dynamic combinatorial mechanism to a simultaneous ascending auction. Unlike earlier experiments, the researchers report not only comparative efficiency and revenue but also statistics about bidder behavior that, according to theory, help to determine auction performance. They test the hypothesis that subjects in a combinatorial mechanism reach efficient and core allocations in the very environments where particular automated bidders perform similarly.


Lawrence Ausubel and Peter Cramton, University of Maryland
A Troubled Asset Reverse Auction and A Two-Sided Auction for Legacy Loans

 