
Characterizing Dominant Strategy Implementation in Quasilinear Environments

This article explores the characterization of dominant strategy implementation in quasilinear environments, focusing on incentive compatibility and social choice functions. It also discusses the relationship between weak monotonicity and incentive compatibility, as well as the network interpretation of incentive compatibility constraints.



Presentation Transcript


1. (More on) characterizing dominant-strategy implementation in quasilinear environments (see, e.g., Nisan’s review: Chapter 9 of the Algorithmic Game Theory book). Tuomas Sandholm, Professor, Computer Science Department, Carnegie Mellon University

  2. Some characterization results (see Nisan’s review chapter)
  • Prop. A mechanism is incentive compatible iff
    • agent i’s payment does not depend on his reported v_i, but only on the alternative chosen, and
    • the mechanism picks an outcome (within its range) that optimizes for each player: f ∈ argmax_o { v_i(o) – p_i(o) }
  • Can also characterize in the space of social choice functions only:
  • Def. f satisfies Weak Monotonicity (WMON) if f(v_i, v_-i) = a ≠ b = f(v'_i, v_-i) => v'_i(b) - v_i(b) ≥ v'_i(a) - v_i(a) (a brute-force check is sketched after this slide)
  • In words: if the social choice changes when a single agent changes his valuation, then it must be because that agent increased his value for the new choice relative to his value for the old choice
  • Thm. If a mechanism is incentive compatible, then f satisfies WMON. If the domains of preferences V_i are convex sets, then for every f that satisfies (even just local) WMON, there exists a payment rule such that the mechanism is incentive compatible
    • The first part is easy to prove; see page 227 of the Algorithmic Game Theory book
    • The second part holds if
      • the outcome space is finite [Saks and Yu EC-05], or
      • the loop integral of f is zero around every sufficiently small triangle [Archer & Kleinberg EC-08]
    • They also show that the theorem applies to non-convex V_i by studying functions f that apply to the convex hull
    • They also show how a truthful f can be stitched together from locally truthful f_i’s
  • Somewhat unsatisfactory: it is not clear exactly what the WMON functions are. WMON is a local condition. Is there a global condition? Yes for unrestricted or very restricted V_i; largely open for practical problems that lie in between.
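To make the WMON condition concrete, here is a minimal brute-force check over a finite grid of valuation profiles. This sketch is not from the slides; the function name satisfies_wmon and the one-agent, two-outcome example are illustrative assumptions.

```python
from itertools import product

def satisfies_wmon(f, agent_valuations):
    """Brute-force check of Weak Monotonicity (WMON).

    agent_valuations[i] is the finite set of valuations agent i may report;
    each valuation is a tuple giving the agent's value for each outcome index.
    f maps a full profile (one valuation per agent) to the chosen outcome index.

    WMON: if f(v_i, v_-i) = a and f(v'_i, v_-i) = b with a != b, then
          v'_i[b] - v_i[b] >= v'_i[a] - v_i[a].
    """
    n = len(agent_valuations)
    for profile in product(*agent_valuations):
        a = f[profile]
        for i in range(n):                      # let agent i deviate unilaterally
            for vi_new in agent_valuations[i]:
                b = f[profile[:i] + (vi_new,) + profile[i + 1:]]
                if b != a and vi_new[b] - profile[i][b] < vi_new[a] - profile[i][a] - 1e-9:
                    return False
    return True

if __name__ == "__main__":
    # Toy instance (assumed): one agent, two outcomes {0, 1}; f picks the outcome the agent values more.
    vals = [((0.0, 1.0), (1.0, 0.0), (2.0, 3.0))]
    f = {(v,): (0 if v[0] >= v[1] else 1) for v in vals[0]}
    print(satisfies_wmon(f, vals))  # True; the outcome space is finite, so an IC payment rule exists
```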

  3. Unrestricted V_i and affine maximizers
  • Affine maximizers are a generalization of Groves mechanisms
  • f ∈ argmax_o { c_o + Σ_i w_i v_i(o) }
  • Prop. If the payment for agent i is h_i(v_-i) - Σ_{j≠i} (w_j/w_i) v_j(o) – c_o/w_i, then the mechanism is incentive compatible (a minimal implementation sketch follows this slide)
  • Thm (Roberts 1979). If |O| ≥ 3, f is onto O, V_i = R^O for every i, and the mechanism is incentive compatible, then f is an affine maximizer
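A small sketch of the affine-maximizer allocation together with the payment rule from the proposition above, taking h_i(v_-i) = 0 for simplicity. The weights, offsets, and example valuations are made up for illustration.

```python
def affine_maximizer(valuations, weights, offsets):
    """Pick the outcome o maximizing c_o + sum_i w_i * v_i(o), and charge agent i
    the payment  h_i(v_-i) - sum_{j != i} (w_j / w_i) * v_j(o) - c_o / w_i,
    with h_i taken to be 0 here (any function of v_-i preserves truthfulness).

    valuations[i][o] = agent i's reported value for outcome o
    weights[i]       = w_i > 0
    offsets[o]       = c_o
    """
    outcomes = range(len(offsets))
    n = len(valuations)
    chosen = max(outcomes,
                 key=lambda o: offsets[o] + sum(weights[i] * valuations[i][o] for i in range(n)))
    payments = []
    for i in range(n):
        others = sum((weights[j] / weights[i]) * valuations[j][chosen] for j in range(n) if j != i)
        payments.append(-others - offsets[chosen] / weights[i])   # h_i(v_-i) = 0
    return chosen, payments

if __name__ == "__main__":
    # Two agents, two outcomes; equal weights and zero offsets give Groves-style payments
    # up to the h_i terms (a Clarke pivot h_i would shift them).
    vals = [[3.0, 0.0], [0.0, 5.0]]
    print(affine_maximizer(vals, weights=[1.0, 1.0], offsets=[0.0, 0.0]))
    # -> outcome 1 is chosen; agent 0 "pays" -5.0, agent 1 pays 0.0
```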

  4. Single-parameter domains
  • Setting:
    • V_i is one-dimensional, i.e., V_i ⊆ R
    • for each agent, there is a set of equally preferred “winning” outcomes and a set of equally preferred “losing” outcomes
    • assume “normalized”, that is, losing agents pay 0
  • Thm. The mechanism is incentive compatible iff
    • f is monotone in every v_i, and
    • every winning agent pays his critical value (the canonical example, a second-price auction, is sketched after this slide)
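The canonical single-parameter example is a single-item auction: “highest bid wins” is monotone in each v_i, and the winner’s critical value (the smallest bid at which he would still win) is the highest competing bid, giving the second-price (Vickrey) auction. A minimal sketch; the tie-breaking rule is an assumption.

```python
def second_price_auction(bids):
    """Highest bidder wins (an allocation monotone in each v_i) and pays his
    critical value, i.e., the highest competing bid. Losing agents pay 0
    (the "normalized" assumption from the slide). Ties go to the lower index."""
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    critical = max(b for i, b in enumerate(bids) if i != winner)
    payments = [critical if i == winner else 0.0 for i in range(len(bids))]
    return winner, payments

if __name__ == "__main__":
    print(second_price_auction([4.0, 7.0, 5.0]))  # (1, [0.0, 5.0, 0.0])
```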

  5. (Essential) uniqueness of prices
  • Thm.
    • Assume the domains V_i are connected sets (in the usual metric on Euclidean space)
    • Let (f, p_1, …, p_n) be an incentive compatible mechanism
    • The mechanism (f, p'_1, …, p'_n) is incentive compatible iff p'_i(v_1, …, v_n) = p_i(v_1, …, v_n) + h_i(v_-i) for some functions h_i (a brute-force check of the “if” direction is sketched after this slide)
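A small brute-force illustration of the “if” direction on a finite grid of types: shifting each agent’s payment by a term h_i(v_-i) that does not depend on his own report leaves his incentives unchanged. The mechanism, the grid, and the particular h_i below are assumptions chosen only for illustration.

```python
from itertools import product

def is_dsic(type_grid, mechanism):
    """Brute-force dominant-strategy IC check on a finite grid of one-dimensional types.
    mechanism(bids) -> (allocation list of 0/1, payment list)."""
    n = len(type_grid)
    for profile in product(*type_grid):
        for i in range(n):
            alloc, pay = mechanism(list(profile))
            truthful = profile[i] * alloc[i] - pay[i]
            for lie in type_grid[i]:
                bids = list(profile)
                bids[i] = lie
                alloc2, pay2 = mechanism(bids)
                if profile[i] * alloc2[i] - pay2[i] > truthful + 1e-9:
                    return False
    return True

def second_price(bids):
    """Second-price auction (as in the previous sketch); ties go to the lower index."""
    winner = max(range(len(bids)), key=lambda i: (bids[i], -i))
    alloc = [1 if i == winner else 0 for i in range(len(bids))]
    pay = [max(b for j, b in enumerate(bids) if j != i) if i == winner else 0.0
           for i in range(len(bids))]
    return alloc, pay

def shifted_second_price(bids):
    """Same f, but p'_i = p_i + h_i(v_-i) with h_i(v_-i) = 0.1 * (sum of the other bids)."""
    alloc, pay = second_price(bids)
    pay = [p + 0.1 * sum(b for j, b in enumerate(bids) if j != i) for i, p in enumerate(pay)]
    return alloc, pay

if __name__ == "__main__":
    grid = [(0.0, 1.0, 2.0, 3.0)] * 2
    print(is_dsic(grid, second_price))          # True
    print(is_dsic(grid, shifted_second_price))  # True: the h_i(v_-i) shift preserves IC
```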

  6. Network interpretation of incentive compatibility constraints
  • See, e.g., the overview article by Rakesh Vohra that is posted on the course web page (the allocation-graph construction is sketched below)
  • Similar approach also available for Bayes-Nash implementation [Rudolf Müller, Andrés Perea, and Sascha Wolf, “Weak monotonicity and Bayes–Nash incentive compatibility”, Games and Economic Behavior, Volume 61, Issue 2, November 2007, Pages 344–358]
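One way to see the network interpretation: fix agent i and the others’ reports, build a graph with a node per outcome and an edge a -> b of length min over {v_i : f(v_i, v_-i) = b} of v_i(b) - v_i(a). Incentive-compatible payments exist iff this graph has no negative cycle, and shortest-path distances from a fixed node then serve as payments. The sketch below follows this construction for a single agent with a finite type set; the function name and example are assumptions, not from the slides.

```python
def allocation_graph_payments(types_i, f_i, outcomes):
    """Network view of IC for one agent, with the other agents' reports held fixed.

    types_i:  finite set of valuations of agent i; each is a tuple indexed by outcome.
    f_i:      dict mapping agent i's valuation to the chosen outcome index.
    Edge a -> b has length  min over {v : f_i(v) = b} of  v[b] - v[a].
    IC payments exist iff there is no negative cycle; shortest-path distances from a
    fixed source then give a valid payment per outcome (unique up to an additive
    constant, consistent with the uniqueness result above for fixed v_-i)."""
    INF = float("inf")
    length = {(a, b): INF for a in outcomes for b in outcomes}
    for v in types_i:
        b = f_i[v]
        for a in outcomes:
            length[(a, b)] = min(length[(a, b)], v[b] - v[a])

    src = outcomes[0]
    dist = {o: (0.0 if o == src else INF) for o in outcomes}
    for _ in range(len(outcomes) - 1):          # Bellman-Ford relaxation rounds
        for (a, b), w in length.items():
            if w < INF and dist[a] + w < dist[b]:
                dist[b] = dist[a] + w
    for (a, b), w in length.items():            # one more pass: any improvement => negative cycle
        if w < INF and dist[a] + w < dist[b] - 1e-9:
            return None                          # no incentive-compatible payments exist
    return dist                                  # dist[o] can serve as the payment for outcome o

if __name__ == "__main__":
    # One agent, two outcomes; f picks the outcome the agent values more (satisfies WMON).
    types = [(0.0, 1.0), (1.0, 0.0), (2.0, 5.0)]
    f = {v: (0 if v[0] >= v[1] else 1) for v in types}
    print(allocation_graph_payments(types, f, outcomes=[0, 1]))  # {0: 0.0, 1: 1.0}
```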
