
Regret Minimizing Equilibria of Games with Strict Type Uncertainty


Presentation Transcript


  1. Regret Minimizing Equilibria of Games with Strict Type Uncertainty Stony Brook Conference on Game Theory Nathanaël Hyafil and Craig Boutilier Department of Computer Science University of Toronto

  2. Overview • 1. Motivation / Background • Automated Mechanism design • Strict Uncertainty • Minimax Regret • 2. Games with Strict Type Uncertainty • Definition of equilibrium • Existence of equilibrium • 3. Applications / Conclusion • Partial Revelation Mechanism Design

  3. Automated MD (AMD) • VCG: always pick the efficient outcome • Myerson auction: • does not always pick the efficient (welfare-maximizing) outcome • but maximizes the expected objective (revenue) given a prior over agents’ types • AMD: • for general objectives (not just revenue) • general outcome spaces (not just auctions)

  4. Automated MD (AMD) • Given: • sets of types, outcomes • objective function f(θ,o) (SW, revenue, ...) • prior over types • Optimization problem: • find a mechanism (an outcome for each type vector) • maximize expected objective value • subject to constraints: • Incentive Compatibility (BNE or DS) • (Individual Rationality, Budget Balance, ...)
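In symbols, the Bayesian AMD problem sketched above can be written as follows (a standard schematic formulation, not taken from the slide; M maps reported type vectors θ to outcomes, P is the prior):

$$\max_{M:\,\Theta \to O} \; \mathbb{E}_{\theta \sim P}\big[f(\theta, M(\theta))\big]$$

subject to, for every agent i and all θi, θi′ ∈ Θi, Bayes-Nash incentive compatibility

$$\mathbb{E}_{\theta_{-i}}\big[u_i(M(\theta_i,\theta_{-i}),\theta_i)\big] \;\ge\; \mathbb{E}_{\theta_{-i}}\big[u_i(M(\theta_i',\theta_{-i}),\theta_i)\big],$$

and, if imposed, individual rationality $\mathbb{E}_{\theta_{-i}}\big[u_i(M(\theta_i,\theta_{-i}),\theta_i)\big] \ge 0$.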

  5. Where do priors over types come from? • “Experts”? • Costly! • Can rule out inappropriate valuations • But hard to quantify probabilistically • simple distribution (unrealistic but needed) • Observation of past behavior? • Gives linear constraints on values • Not probability distributions

  6. Strict Uncertainty • No probability distribution, but a subset of possible types • Agents cannot maximize expected utility • use MiniMax Regret as their decision criterion • Mechanism Designer: can’t use Bayes-Nash Eq., can’t maximize expected objective ⇒ Mech Designer minimizes his regret too

  7. MiniMax Regret • Different from: • the regret used to converge to equilibrium in repeated games (e.g., Hart &amp; Mas-Colell) • the regret of Regret Theory (Bell; Loomes &amp; Sugden) • Rather: Savage’s MiniMax Regret criterion from Decision Theory • recently used for uncertainty about utilities (as opposed to uncertainty about outcomes)

  8. MiniMax Regret • Single agent: make a decision d ∈ D with incomplete utility function u ∈ U
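The formulas that accompanied this slide are not in the transcript; the standard Savage-style definitions for this single-agent setting (decision d ∈ D, feasible utility set U) are:

$$R(d,u) = \max_{d' \in D} u(d') - u(d), \qquad MR(d,U) = \max_{u \in U} R(d,u), \qquad MMR(U) = \min_{d \in D} MR(d,U),$$

and the minimax-regret decision is any d attaining $MMR(U)$.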

  9. Why MiniMax Regret? • In this context, MaxiMin not good: [table comparing the decisions x and x′ under candidate utility functions u1–u6; figure not recoverable from the transcript]
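A toy illustration of why MaxiMin can be over-conservative (this is not the table from the slide): with two decisions and two candidate utility functions, $u_1(x)=1,\ u_1(x')=0.9$ and $u_2(x)=1,\ u_2(x')=10$, MaxiMin picks x (worst case 1 vs. 0.9), while the max regret of x is $10-1=9$ and that of x′ is only $1-0.9=0.1$, so MiniMax Regret picks x′.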

  10. 2. Games of Incomplete Information with Strict Type Uncertainty • N players, and for each: • Actions: Ai • Types: Θi • Utility: ui: A × Θi → R • Each agent knows its type, not the others’, but: • Common prior replaced by a strict uncertainty set: T ⊆ Θ • Strategy: σi: Θi → Δ(Ai)

  11. Regret definitions • Regret of strategy σi for agent i of type θi, given type θ-i and strategy σ-i of the others: • MaxRegret of strategy σi for agent i of type θi, given strategy σ-i of the others (over the strict type set T):
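The formulas themselves did not survive the transcript; a reconstruction consistent with the worked example on the next slides (utilities extended to mixed actions by expectation) is:

$$R_i(\sigma_i \mid \theta_i, \theta_{-i}, \sigma_{-i}) \;=\; \max_{a_i \in A_i} u_i\big(a_i, \sigma_{-i}(\theta_{-i}), \theta_i\big) \;-\; u_i\big(\sigma_i(\theta_i), \sigma_{-i}(\theta_{-i}), \theta_i\big),$$

$$MR_i(\sigma_i \mid \theta_i, \sigma_{-i}) \;=\; \max_{\theta_{-i} \in T_{-i}} R_i(\sigma_i \mid \theta_i, \theta_{-i}, \sigma_{-i}).$$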

  12. Example • First-Price Auction • 2 agents • 3 actions: .25, .5, .75 • Ties broken randomly • Agent 1’s expected payoff (value V), row = agent 1’s bid, column = agent 2’s bid:

         bid2=.25     bid2=.5      bid2=.75
    .25  (V-.25)/2    0            0
    .5   V-.5         (V-.5)/2     0
    .75  V-.75        V-.75        (V-.75)/2
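A minimal Python sketch of this payoff table (not from the slides), assuming the winner pays her own bid and a tie gives each bidder half the surplus in expectation:

    # Expected payoff of agent 1 with value V in the first-price auction.
    def u1(bid1, bid2, V):
        if bid1 > bid2:
            return V - bid1          # win outright
        if bid1 == bid2:
            return (V - bid1) / 2    # tie broken randomly: half the surplus
        return 0.0                   # lose

    bids = [0.25, 0.5, 0.75]
    V = 0.4                          # illustrative value for agent 1
    for b1 in bids:
        print(b1, [round(u1(b1, b2, V), 3) for b2 in bids])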

  13. Example: Agent 1’s reasoning • θ2 ∈ {.2, .4, .6, .8} • σ2: .2 → .25 ; {.4, .6} → .5 ; .8 → .75 • What is MR1(bid = .25 | θ1 = .4 ; σ2)?

  19. Example: Agent 1’s reasoning • θ2 ∈ {.2, .4, .6, .8} • σ2: .2 → .25 ; {.4, .6} → .5 ; .8 → .75 • What is MR1(bid = .25 | θ1 = .4 ; σ2)? • R1(bid = .25) if θ2 = .2: 0 • R1(bid = .25) if θ2 = .4: 0 • R1(bid = .25) if θ2 = .6: 0 • R1(bid = .25) if θ2 = .8: 0

  21. Example: Agent 1’s reasoning • θ2 ∈ {.2, .4, .6, .8} • σ2: .2 → .25 ; {.4, .6} → .5 ; .8 → .75 • MR1(bid = .25 | θ1 = .4 ; σ2) = max {0, 0, 0, 0} = 0 • so argmin_a MR1(a | θ1 = .4 ; σ2) = .25 • and MMR = 0
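A short Python sketch (not from the paper) that reproduces this computation, using the tie-splitting payoff above and measuring regret against the best alternative bid:

    # Max regret of each bid for agent 1 with value .4, against sigma2 from slide 13.
    bids = [0.25, 0.5, 0.75]
    sigma2 = {0.2: 0.25, 0.4: 0.5, 0.6: 0.5, 0.8: 0.75}   # agent 2's type -> bid

    def u1(b1, b2, V):
        return (V - b1) if b1 > b2 else (V - b1) / 2 if b1 == b2 else 0.0

    def max_regret(b1, V):
        return max(max(u1(a, sigma2[t2], V) for a in bids) - u1(b1, sigma2[t2], V)
                   for t2 in sigma2)

    for b in bids:
        print(b, round(max_regret(b, 0.4), 3))
    # expected: .25 -> 0.0, .5 -> 0.175, .75 -> 0.425, so bidding .25 attains MMR = 0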

  22. Equilibrium definitions • σi is a MiniMax Regret Best Response to σ-i iff, for every θi ∈ Ti, σi(θi) achieves the minimal MaxRegret given σ-i • σ is a MiniMax Regret Equilibrium iff σi is a MiniMax Regret best resp. to σ-i, for all i • σi is a MiniMax Regret Dominant Strategy iff it is a MiniMax Regret best resp. to all σ-i
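Written out (the formula on the slide is missing from the transcript; this reconstruction uses the MaxRegret notation from slide 11): σi is a MiniMax Regret best response to σ-i iff

$$\forall\, \theta_i \in T_i: \quad MR_i(\sigma_i \mid \theta_i, \sigma_{-i}) \;=\; \min_{\sigma_i'} MR_i(\sigma_i' \mid \theta_i, \sigma_{-i}),$$

and σ is a MiniMax Regret Equilibrium iff this holds for every agent i.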

  23. Example: First-Price Auction, Ti = {.2, .4, .6, .8} • MiniMax Regret Equilibrium: (σ1, σ2) with σi: • .2 → bid .25 (MMR = 0) • .4 → bid .25 (MMR = 0) • .6 → bid (.25, .5) with p = (.6, .4) (MMR = .03) • .8 → bid (.5, .75) with p = (10/11, 1/11) (MMR = .0227)
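A Python sketch (not from the paper) that checks the stated MaxRegret values against this σ2, taking expectations over the opponent's mixed bids and using the same tie-splitting payoff as before:

    # Verify the MMR values for types .6 and .8 against the equilibrium sigma2.
    bids = [0.25, 0.5, 0.75]
    sigma2 = {0.2: {0.25: 1.0}, 0.4: {0.25: 1.0},
              0.6: {0.25: 0.6, 0.5: 0.4}, 0.8: {0.5: 10/11, 0.75: 1/11}}

    def u1(b1, b2, V):
        return (V - b1) if b1 > b2 else (V - b1) / 2 if b1 == b2 else 0.0

    def eu1(b1, mix2, V):                       # expected utility vs a mixed bid
        return sum(p * u1(b1, b2, V) for b2, p in mix2.items())

    def max_regret(mix1, V):                    # max over agent 2's possible types
        regrets = []
        for mix2 in sigma2.values():
            best = max(eu1(a, mix2, V) for a in bids)
            actual = sum(p * eu1(b1, mix2, V) for b1, p in mix1.items())
            regrets.append(best - actual)
        return max(regrets)

    print(round(max_regret({0.25: 0.6, 0.5: 0.4}, 0.6), 4))       # 0.03
    print(round(max_regret({0.5: 10/11, 0.75: 1/11}, 0.8), 4))    # 0.0227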

  24. Existence Results • Theorem: A MiniMax Regret Equilibrium exists in every game with a finite number of agents, actions and types • Proposition: σ is a MiniMax Regret dominant strategy equilibrium for a strict incomplete-information game iff it is a dominant strategy equilibrium for any corresponding Bayesian game • Observation: σ is a MiniMax Regret Eq. with zero regret for all types of all agents iff it is an Ex-Post Eq.

  25. Non-finite Games? • Proof relies on Kakutani’s fixed point theorem • main difference with Bayesian games: expected utility is linear, Max Regret is not • so any extension (e.g., continuous games) that doesn’t require linearity should apply to MMR (e.g., Milgrom & Weber 1987)

  26. 3. Applications: • Strict Automated Mechanism Design: • the designer is a regret minimizer too • regret of mechanism M1 vs. M2: the difference in objective value (SW, …) between M1 and M2 when an ‘adversary’ picks the agents’ types • (Hyafil &amp; Boutilier, UAI 2004): • formulation as an optimization subject to IC, IR, … • infinite number of constraints, some non-linear • algorithm to solve it as a sequence of linear problems
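One way to write the designer's max regret, consistent with the description above (schematic only; the exact constrained formulation is the one given in Hyafil &amp; Boutilier, UAI 2004):

$$MR(M) \;=\; \max_{\theta \in T} \Big[\, \max_{M' \in \mathcal{M}} f\big(\theta, M'(\theta)\big) \;-\; f\big(\theta, M(\theta)\big) \,\Big],$$

where $\mathcal{M}$ is the set of feasible (IC, IR, …) mechanisms, the adversary picks the type vector θ, and the designer chooses M ∈ $\mathcal{M}$ to minimize MR(M).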

  27. Application: Partial Revelation MD • Revelation Principle → direct, truthful mechanisms: • agents directly report their full type • But: • hard/costly valuation problem • privacy concerns • communication costs

  28. Partial Revelation MD • Instead: report a partial type • e.g., v ∈ [.4, .6] • Partial Revelation: • the type space is partitioned into a finite number of sets • the report is the subset containing the full type • choose an outcome despite the remaining uncertainty
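A toy Python sketch of partial revelation (the partition and names are illustrative, not from the paper): the agent reports only the cell of a fixed partition that contains her value.

    # Illustrative partition of a one-dimensional type space [0, 1] into intervals.
    cells = [(0.0, 0.2), (0.2, 0.4), (0.4, 0.6), (0.6, 0.8), (0.8, 1.0)]

    def report(value):
        # The report is the subset (interval) containing the agent's true value.
        for lo, hi in cells:
            if lo <= value < hi or (hi == 1.0 and value == 1.0):
                return (lo, hi)

    print(report(0.47))   # -> (0.4, 0.6): the mechanism only learns v ∈ [.4, .6)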

  29. Partial Revelation MD • For very general forms of partitions, with no structure on the (quasi-linear) outcome space: • it is “impossible” to impose truthfulness in Dominant Strategies or Bayes-Nash equilibrium • Use the MiniMax Regret equilibrium concept in Partial Revelation MD instead

  30. Conclusion • Games with Strict Uncertainty: • definition • proposed MiniMax Regret as the rationality concept • proved existence of MiniMax Regret Equilibria • Applications: • Partial Revelation MD • Multi-Attribute Bargaining • Sequential Strict Automated MD
