
Multi-stage mechanisms (and how to automatically design them); Non-truth-promoting mechanisms

Learn about multi-stage mechanisms and how to design them automatically. Explore computational criticisms of the revelation principle and the challenges of elicitation in mechanism design. Discover the benefits of automated mechanism design and the potential for optimizing outcomes.




Presentation Transcript


  1. Multi-stage mechanisms (and how to automatically design them); Non-truth-promoting mechanisms • Vincent Conitzer • Computer Science Department • Carnegie Mellon University • Guest lecture 15-892, 11/17/2005 • Papers: • Vincent Conitzer and Tuomas Sandholm. Computational Criticisms of the Revelation Principle. LOFT-04 • Tuomas Sandholm, Vincent Conitzer, and Craig Boutilier. Automated Design of Multistage Mechanisms. IBC-05

  2. Mechanism design • [Diagram: agents' actions → mechanism → outcome] • One outcome must be selected • Selection is based on the agents' actions in the mechanism (typically preference revelation) • Agents report strategically, rather than truthfully • A mechanism is called truthful if strategic play coincides with truthful reporting

  3. The revelation principle • [Diagram: agents P1, P2, P3 report their types to a new mechanism, which simulates their strategic actions and runs the original mechanism internally to produce the outcome] • Key tool in mechanism design • If there exists a mechanism that performs well when agents act strategically, then there exists a truthful mechanism that performs just as well

  4. Computational criticisms of the revelation principle • Revelation principle says nothing about computational implications of using direct, truthful mechanisms • Does restricting oneself to such mechanisms lead to computational hassles? YES • If the participating agents have computational limits, does restricting oneself to such mechanisms lead to loss in objective (e.g. social welfare)? YES

  5. The elicitation problem • In general, every agent must report its whole type in direct, truthful mechanisms • This may be impractical in larger examples • In a combinatorial auction, each agent has values for exponentially many bundles • Computing one’s type may be costly • Privacy loss • For most mechanisms, this is not necessary • E.g. second-price auction only requires us to find the winner and the second-highest bidder’s valuation • Multistage mechanisms query the agents sequentially for aspects of their type, until they have determined enough information • E.g. an English auction
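The English auction mentioned above is a multistage mechanism that elicits only partial type information. A minimal sketch in Python (the function name, unit bid increment, and lowest-index tie-break are illustrative assumptions, not from the slides), in which each bidder follows the dominant strategy of staying in while the price is below their value:

```python
def english_auction(values, increment=1):
    """Toy ascending (English) auction. Each bidder follows the dominant
    strategy of staying in while the price is below their value.
    Returns (winner, price paid, number of yes/no queries asked)."""
    active = set(range(len(values)))
    price = 0
    queries = 0
    while len(active) > 1:
        price += increment
        for i in sorted(active):
            queries += 1              # one "still in at this price?" query
            if values[i] < price:     # drop out once price reaches own value
                active.remove(i)
        if not active:                # everyone dropped at once: tie
            price -= increment
            active = {i for i in range(len(values)) if values[i] == price}
            return min(active), price, queries   # lowest-index tie-break
    return active.pop(), price, queries
```

Note that the winner's exact value is never elicited: the auction stops once the second-to-last bidder drops out, which is essentially the only information a second-price outcome needs.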

  6. Criticizing one-step mechanisms • Theorem. There are settings where: • Executing the optimal single-step mechanism requires an exponential amount of communication and computation • There exists an entirely equivalent two-step mechanism that only requires a linear amount of communication and computation • Holds both for dominant strategies and Bayes-Nash implementation

  7. Automatically designing multistage mechanisms • Automated mechanism design: design optimal mechanism specifically for setting at hand, as solution to an optimization problem • Generates optimal mechanisms in settings where existing general mechanisms do not apply/are suboptimal • If mechanism is allowed to be randomized, can be done in polynomial time using linear programming (if number of agents is small) • Can we automatically design multistage mechanisms?

  8. Small example: Divorce arbitration • Outcomes: [shown pictorially on the original slide] • Each agent is of high type with probability 0.2 and of low type with probability 0.8 • Preferences of high type: • u(get the painting) = 100 • u(other gets the painting) = 0 • u(museum) = 40 • u(get the pieces) = -9 • u(other gets the pieces) = -10 • Preferences of low type: • u(get the painting) = 2 • u(other gets the painting) = 0 • u(museum) = 1.5 • u(get the pieces) = -9 • u(other gets the pieces) = -10
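To make the automated-design idea concrete on this example, here is a brute-force sketch (all names, including the outcome labels, are illustrative). For tractability it searches deterministic dominant-strategy incentive-compatible mechanisms only; the optimal mechanism in the slides is randomized (and would be found by linear programming), so this yields only a lower bound on the optimal welfare:

```python
from itertools import product

TYPES = ['high', 'low']
PROB = {'high': 0.2, 'low': 0.8}
OUTCOMES = ['husband_painting', 'wife_painting', 'museum',
            'husband_pieces', 'wife_pieces']   # illustrative outcome names

def utility(agent, own_type, outcome):
    paint = 100 if own_type == 'high' else 2
    mus = 40 if own_type == 'high' else 1.5
    table = {'museum': mus, f'{agent}_painting': paint, f'{agent}_pieces': -9}
    # "other gets the painting" = 0, "other gets the pieces" = -10
    return table.get(outcome, 0 if outcome.endswith('painting') else -10)

def is_dsic(mech):
    """mech: (husband_type, wife_type) -> outcome; dominant-strategy IC check."""
    for t in TYPES:                 # true type
        for lie in TYPES:           # reported type
            for other in TYPES:     # other agent's report
                if utility('husband', t, mech[(lie, other)]) > \
                   utility('husband', t, mech[(t, other)]):
                    return False
                if utility('wife', t, mech[(other, lie)]) > \
                   utility('wife', t, mech[(other, t)]):
                    return False
    return True

def welfare(mech):
    return sum(PROB[h] * PROB[w] * (utility('husband', h, mech[(h, w)]) +
                                    utility('wife', w, mech[(h, w)]))
               for h in TYPES for w in TYPES)

profiles = list(product(TYPES, TYPES))
best = max((dict(zip(profiles, outs)) for outs in product(OUTCOMES, repeat=4)),
           key=lambda m: welfare(m) if is_dsic(m) else float('-inf'))
```

Without payments, very few deterministic mechanisms are incentive compatible here (constant mechanisms always are), which is one reason randomization helps.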

  9. Optimal randomized, dominant-strategies, single-stage mechanism for maximizing the sum of the divorcees’ utilities • [Figure: outcome probabilities for each high/low type profile; the probabilities shown are .47, .4, .13 and .04, .96 and .04, .96]

  10. A multistage mechanism corresponding to the single-stage mechanism • [Figure: elicitation tree with high/low branches; the probabilities shown are .04, .96 and .04, .96 and .47, .4, .13]

  11. Saving some queries • [Figure: pruned elicitation tree; with probability .4, exit early with high; the remaining probabilities shown are .04, .96 and .07, .93 and .78, .22]

  12. Asking the husband first • [Figure: elicitation tree that queries the husband first; the probabilities shown are .04, .96 and .04, .96 and .47, .4, .13]

  13. Saving some queries (more this time) • [Figure: pruned elicitation tree; with probability .51, exit early with high; the remaining probabilities shown are .82, .18 and .08, .92]

  14. Changing the underlying mechanism • For the given optimal single-stage mechanism, we can save more wife-queries than husband-queries • Suppose husband-queries are more expensive • We can change the underlying single-stage mechanism to switch the roles of the wife and husband (still optimal by symmetry) • If we are willing to settle for (welfare) suboptimality to save more queries, we can change the underlying single-stage mechanism even further

  15. Fixed single-stage mechanism, fixed elicitation tree • As we saw: If all of a node’s descendants have at least a given amount of probability on a given outcome, then we can propagate this probability up • Theorem. Suppose both the single-stage mechanism and the elicitation tree (query order) are fixed. If we propagate probabilities up as much as possible, we get the maximum possible savings in terms of number of queries.
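A minimal sketch of the propagate-up step described above (the tree encoding and function names are assumptions for illustration): each node can exit early with the probability mass that every one of its descendant leaves places on each outcome.

```python
def guaranteed(tree):
    """tree: ('leaf', {outcome: prob}) or ('node', [child trees]).
    Probability mass on each outcome that is guaranteed no matter
    which branch (answer) is taken below this node."""
    kind, payload = tree
    if kind == 'leaf':
        return dict(payload)
    child_dists = [guaranteed(c) for c in payload]
    common = set.intersection(*(set(d) for d in child_dists))
    return {o: min(d[o] for d in child_dists) for o in common}

def exit_early_prob(tree, already=0.0):
    """Extra probability with which we may exit early at this node,
    beyond what an ancestor has already exited with."""
    return sum(guaranteed(tree).values()) - already
```

For example, if one child leads to outcome probabilities {A: .5, B: .5} and the other to {A: .3, B: .7}, the node can exit early with probability .3 + .5 = .8, and each child retains only the residual mass.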

  16. What if the tree is not fixed? • Construct the tree first, then we can propagate up as before • Observation: The exit probability at a node does not depend on the structure of the tree after it • A greedy approach to asking queries: next query = query maximizing the probability of exiting right after it • Time complexity: O(|Q|*|A|*|O|*|Θ|) • Proposition. In various (small) examples, the greedy approach can save only an arbitrarily small fraction of the queries saved with the optimal tree

  17. Finding the optimal tree using dynamic programming • After receiving certain answers to certain questions, we are in some information state • Dynamic program computes the (minimum) expected number of queries needed from every state (given that we have not exited early) • Time complexity: O(|Q|*|A|*|O|*|Θ|*2^|Θ|)
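The DP and the greedy rule from the previous slide can be contrasted on a toy model (entirely illustrative: types with a prior, queries that deterministically map a type to an answer, and an exit as soon as the surviving types all map to the same outcome). The information states here are the sets of still-consistent types, which is where a 2^|Θ| factor comes from:

```python
from functools import lru_cache

def make_solver(types, prior, queries, outcome):
    """Expected number of queries under the optimal (DP) and greedy orders.
    types: list; prior: type -> probability; queries: name -> (type -> answer);
    outcome: type -> chosen outcome."""
    qnames = sorted(queries)

    def mass(state):
        return sum(prior[t] for t in state)

    def split(state, q):
        groups = {}
        for t in state:
            groups.setdefault(queries[q](t), set()).add(t)
        return [frozenset(g) for g in groups.values()]

    def done(state):  # all surviving types already agree on the outcome
        return len({outcome[t] for t in state}) <= 1

    @lru_cache(maxsize=None)
    def dp(state):  # minimum expected number of further queries
        if done(state):
            return 0.0
        useful = [q for q in qnames if len(split(state, q)) > 1]
        return min(1 + sum(mass(s) / mass(state) * dp(s)
                           for s in split(state, q))
                   for q in useful)

    def greedy(state):  # next query = one maximizing immediate exit probability
        if done(state):
            return 0.0
        useful = [q for q in qnames if len(split(state, q)) > 1]
        q = max(useful,
                key=lambda q: sum(mass(s) for s in split(state, q) if done(s)))
        return 1 + sum(mass(s) / mass(state) * greedy(s) for s in split(state, q))

    start = frozenset(types)
    return dp(start), greedy(start)

# Illustrative example: the outcome is the XOR of two bits; query 'c' is a
# decoy that identifies one type immediately.
types = ['00', '01', '10', '11']
prior = {t: 0.25 for t in types}
queries = {'a': lambda t: t[0], 'b': lambda t: t[1], 'c': lambda t: t == '00'}
outcome = {'00': 0, '01': 1, '10': 1, '11': 0}
opt, greedy_cost = make_solver(types, prior, queries, outcome)
```

On this example the optimal order asks the two bit queries (2 expected queries), while greedy grabs the decoy's immediate exit and ends up worse (2.25 expected queries), in the spirit of the proposition that greedy can be suboptimal.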

  18. What if underlying single-stage mechanism is not fixed (but elicitation tree is)? • Approach: design single-stage mechanism taking eventual query savings into account • Single-stage mechanism is designed using linear programming techniques • So, can we express query savings linearly? Yes: • For every vertex v in the tree, let • c(v) be the cost of the query at v • P(v) be the probability that v is on the elicitation path • e(v) be the probability of exiting early at or before v, given that v is on the elicitation path • Then, the query savings is Σ_v c(v) P(v) e(v) • All of these are constant except e(v) = Σ_o min_{θ ∈ Θ_v} p(θ, o), where Θ_v is the set of type vectors consistent with v and p(θ, o) is the probability that the mechanism chooses outcome o for type vector θ

  19. What if nothing is fixed? • Could apply previous approach to all possible trees (inefficient) • No other techniques here yet…

  20. Auction example • One item, two bidders with values uniformly drawn from {0, 1, 2, 3} • Objective: maximize revenue • Optimal single-stage mechanism generated: [shown as a figure] (compare: Myerson auction)

  21. Multistage version of same mechanism • Using the dynamic programming approach for determining the optimal tree, we get: [shown as a figure]

  22. Changing the underlying single-stage mechanism • Using the tree generated by the dynamic program, we optimized the underlying mechanism for a cost of 0.001 per query • Same expected revenue, fewer queries

  23. Changing the underlying single-stage mechanism • Same tree, but with a cost of 0.5 per query: • Lower expected revenue, fewer queries

  24. Beyond dominant-strategies single-stage mechanisms • So far, we have focused on dominant-strategies incentive compatibility for the single-stage mechanism • Any corresponding multistage mechanism is ex-post incentive compatible • Weaker notion: Bayes-Nash equilibrium (BNE) • Truth-telling is optimal if each agent’s only information about others’ types is the prior (and the others tell the truth) • Multistage mechanisms may break incentive compatibility by revealing information • Proposition. There exist settings where • the optimal single-stage BNE mechanism is unique • the unique optimal tree for this mechanism is not incentive compatible • there is a tree that randomizes over the next query asked that is BNE incentive compatible and obtains almost the same query savings as the optimal tree (more than any other tree)

  25. Conclusions on automatically designing multistage mechanisms • For dominant-strategies mechanisms, we showed how to: • turn a single-stage mechanism into its optimal multistage version when the tree is given (propagate probability up) • turn a single-stage mechanism into a multistage version when the tree is not given • greedy approach (suboptimal, but fast) • dynamic programming approach (optimal, but inefficient) • generate the optimal multistage mechanism when the tree is given but the underlying single-stage mechanism is not • BNE mechanisms seem harder (need randomization over queries)

  26. Criticizing truthful mechanisms • Theorem. There are settings where: • Executing the optimal truthful (in terms of social welfare) mechanism is NP-complete • There exists an insincere mechanism, where • The center only carries out polynomial computation • Finding a beneficial insincere revelation is NP-complete for the agents • If the agents manage to find the beneficial insincere revelation, the insincere mechanism is just as good as the optimal truthful one • Otherwise, the insincere mechanism is strictly better (in terms of social welfare) • Holds both for dominant strategies and Bayes-Nash implementation

  27. Proof (in story form) • k of the n employees are needed for a project • The head of the organization must decide, taking into account the preferences of two additional parties: • Head of recruiting • Job manager for the project • Some employees are “old friends”: • Head of recruiting prefers at least one pair of old friends on the team (utility 2) • Job manager prefers no old friends on the team (utility 1) • Job manager sometimes (not always) has private information on exactly which k employees would make a good team (utility 3) • (n choose k) + 1 types for the job manager (uniform distribution)

  28. Proof (in story form)… Recruiting: +2 utility for pair of friends Job manager: +1 utility for no pair of friends, +3 for the exactly right team (if exists) • Claim: if job manager reports specific team preference, must give that team in optimal truthful mechanism • Claim: if job manager reports no team preference, optimal truthful mechanism must give team without old friends to the job manager (if possible) • Otherwise job manager would be better off reporting type corresponding to such a team • Thus, mechanism must find independent set of k employees, which is NP-complete

  29. Proof (in story form)… Recruiting: +2 utility for pair of friends Job manager: +1 utility for no pair of friends, +3 for the exactly right team (if exists) • Alternative (insincere!) mechanism: • If job manager reports specific team preference, give that team • Otherwise, give team with at least one pair of friends • Easy to execute • To manipulate, job manager needs to solve (NP-complete) independent set problem • If job manager succeeds (or no manipulation exists), get same outcome as best truthful mechanism • Otherwise, get strictly better outcome
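The job manager's manipulation problem under this insincere mechanism is exactly finding k mutually non-friend employees, i.e., an independent set of size k in the friendship graph. A brute-force sketch (names are illustrative); its worst-case running time is exponential, which is the point of the construction:

```python
from itertools import combinations

def independent_team(n, friends, k):
    """Return k employees (indices 0..n-1) no two of whom are old friends,
    i.e. an independent set of size k in the friendship graph, or None.
    Brute force: examines up to (n choose k) candidate teams."""
    friend_pairs = {frozenset(p) for p in friends}
    for team in combinations(range(n), k):
        if all(frozenset(p) not in friend_pairs
               for p in combinations(team, 2)):
            return team
    return None
```

If this search fails (returns None), no friend-free team exists, and the insincere mechanism's default branch is unavoidable.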

  30. Criticizing truthful mechanisms… • [Diagram: an agent queries an oracle “u(t, o)?” and receives the answer “u(t, o) = 3”] • Suppose utilities can only be computed by (sometimes costly) queries to an oracle • Then we get a similar theorem: • Using an insincere mechanism, we can shift the burden of an exponential number of costly queries to an agent • If the agent fails to make all those queries, the outcome can only get better

  31. Is there a systematic approach? • Previous result is for very specific setting • How do we take such computational issues into account in general in mechanism design? • What is the correct tradeoff? • Cautious: make sure that computationally unbounded agents would not make mechanism worse than best truthful mechanism (like previous result) • Aggressive: take a risk and assume agents are probably somewhat bounded

  32. Thank you for your attention!
