
Incentives and Internet Algorithms


Presentation Transcript


  1. Incentives and Internet Algorithms. Joan Feigenbaum, Yale University, http://www.cs.yale.edu/~jf. Scott Shenker, ICSI and U.C. Berkeley, http://www.icir.org/shenker. Slides: http://www.cs.yale.edu/~jf/IPCO04.{ppt,pdf}. Acknowledgments: Vijay Ramachandran (Yale), Rahul Sami (MIT)

  2. Outline • Motivation and Background • Example: Multicast Cost Sharing • Overview of Known Results • Three Research Directions • Open Questions

  3. Three Research Traditions • Theoretical Computer Science: complexity • What can be feasibly computed? • Centralized or distributed computational models • Game Theory: incentives • What social goals are compatible with selfishness? • Internet Architecture: robust scalability • How to build large and robust systems?

  4. Different Assumptions • Theoretical Computer Science: • Nodes are obedient, faulty, or adversarial. • Large systems, limited comp. resources • Game Theory: • Nodes are strategic (selfish). • Small systems, unlimited comp. resources

  5. Internet Systems (1) • Agents often autonomous (users/ASs) • Have their own individual goals • Often involve “Internet” scales • Massive systems • Limited comm./comp. resources • Both incentives and complexity matter.

  6. Internet Systems (2) • Agents (users/ASs) are dispersed. • Computational nodes often dispersed. • Computation is (often) distributed.

  7. Internet Systems (3) • Scalability and robustness paramount • sacrifice strict semantics for scaling • many informal design guidelines • Ex: end-to-end principle, soft state, etc. • Computation must be “robustly scalable.” • even if criterion not defined precisely • If TCP is the answer, what’s the question?

  8. Fundamental Question What computations are (simultaneously): • Computationally feasible (TCS) • Incentive-compatible (Game Theory) • Robustly scalable (Internet Design)

  9. Game Theory and the Internet • Long history of work: • Networking: Congestion control [N85], etc. • TCS: Selfish routing [RT02], etc. • Complexity issues not explicitly addressed • though often moot

  10. TCS and Internet • Increasing literature • TCP [GY02,GK03] • routing [GMP01,GKT03] • etc. • No consideration of incentives • Doesn’t always capture Internet style

  11. Game Theory and TCS • Various connections: • Complexity classes [CFLS97, CKS81, P85, etc.] • Price of anarchy, complexity of equilibria, etc.[KP99,CV02,DPS02] • Algorithmic Mechanism Design (AMD) • Centralized computation [NR01] • Distributed Algorithmic Mechanism Design (DAMD) • Internet-based computation [FPS01]

  12. DAMD: Two Themes • Incentives in Internet computation • Well-defined formalism • Real-world incentives hard to characterize • Modeling Internet-style computation • Real-world examples abound • Formalism is lacking

  13. Outline • Motivation and Background • Mechanism Design • Internet Computation • Example: Multicast Cost Sharing • Overview of Known Results • Three Research Directions • Open Questions

  14. System Notation Outcomes and agents: • O is the set of possible outcomes. • o represents a particular outcome. • Agents have valuation functions vi. • vi(o) is i’s “happiness” with outcome o.

  15. Societal vs. Private Goals • System-wide performance goals: • Efficiency, fairness, etc. • Defined by a set of acceptable outcomes G(v) ⊆ O • Private goals: Maximize own welfare • vi is private to agent i. • Only reveal truthfully if in own interest

  16. Mechanism Design • Branch of game theory: • reconciles private interests with social goals • Involves esoteric game-theoretic issues • will avoid them as much as possible • only present MD content relevant to DAMD

  17. Mechanisms [Diagram: agents 1…n, with valuations v1…vn, submit actions a1…an to mechanism O, which returns payments p1…pn] • Actions: ai • Outcome: O(a) • Payments: pi(a) • Utilities: ui(a) = vi(O(a)) + pi(a)

  18. Mechanism Design • AO(v) = {action vectors} consistent w/ selfishness • ai “maximizes” ui(a) = vi(O(a)) + pi(a). • “maximize” depends on information, structure, etc. • Solution concept: Nash, Rationalizable, ESS, etc. • Mechanism-design goal: O(AO(v)) ⊆ G(v) for all v • Central MD question: For given solution concept, which social goals can be achieved?

  19. Direct Strategyproof Mechanisms • Direct: Actions are declarations of vi. • Strategyproof: ui(v) ≥ ui(v-i, xi), for all xi, v-i • Agents have no incentive to lie. • AO(v) = {v} “truthful revelation” • Which social goals achievable with SP?

  20. Strategyproof Efficiency Efficient outcome: maximizes ∑i vi(o) VCG Mechanisms: • O(v) = õ(v), where õ(v) = arg maxo ∑i vi(o) • pi(v) = ∑j≠i vj(õ(v)) + hi(v-i)
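The VCG recipe on this slide can be sketched in a few lines of Python. The version below is a minimal, hypothetical implementation (all names are illustrative): it finds the welfare-maximizing outcome by brute force and uses the Clarke pivot term hi(v-i) = –maxo ∑j≠i vj(o), so payments are nonpositive charges, matching the sign convention ui(a) = vi(O(a)) + pi(a). On a single-item auction it reduces to the Vickrey second-price auction.

```python
# Minimal VCG sketch (names hypothetical), with the Clarke pivot rule
# h_i(v_-i) = -max_o sum_{j != i} v_j(o).

def vcg(outcomes, valuations):
    """valuations[i][o] = v_i(o); returns (efficient outcome, payments)."""
    def welfare(o, exclude=None):
        return sum(v[o] for i, v in enumerate(valuations) if i != exclude)

    o_tilde = max(outcomes, key=welfare)          # arg max_o sum_i v_i(o)
    payments = []
    for i in range(len(valuations)):
        others_now = welfare(o_tilde, exclude=i)  # sum_{j != i} v_j(o~)
        others_best = max(welfare(o, exclude=i) for o in outcomes)
        payments.append(others_now - others_best) # p_i = sum_{j!=i} v_j + h_i
    return o_tilde, payments

# Single-item auction: outcome o = index of the winning bidder.
bids = [10, 7, 3]
outcomes = list(range(3))
valuations = [{o: (bids[i] if o == i else 0) for o in outcomes}
              for i in range(3)]
winner, payments = vcg(outcomes, valuations)
# Bidder 0 wins and is charged the second-highest bid: payments == [-7, 0, 0].
```

With this sign convention, pi = –7 means bidder 0 pays 7; losing bidders pay nothing, exactly as in a second-price auction.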

  21. Why Are VCG Strategyproof? • Focus only on agent i • vi is truth; xi is declared valuation • pi(xi) = ∑j≠i vj(õ(xi)) + hi • ui(xi) = vi(õ(xi)) + pi(xi) = ∑j vj(õ(xi)) + hi • Recall: õ(vi) maximizes ∑j vj(o) • So declaring xi = vi maximizes ui: truth-telling is optimal.

  22. Group Strategyproofness Definition: • True: vi Reported: xi • Lying set S = {i : vi ≠ xi} • ∃i∈S: ui(x) > ui(v) ⟹ ∃j∈S: uj(x) < uj(v) • If any liar gains, at least one liar will suffer.

  23. Outline • Motivation and Background • Mechanism Design • Internet Computation • Example: Multicast Cost Sharing • Overview of Known Results • Three Research Directions • Open Questions

  24. Algorithmic Mechanism Design [NR01] Require polynomial-time computability: • O(a) and pi(a) Centralized model of computation: • good for auctions, etc. • not suitable for distributed systems

  25. Complexity of Distributed Computations (Static) Quantities of Interest: • Computation at nodes • Communication: • total • hotspots • Care about both messages and bits

  26. “Good Network Complexity” • Polynomial-time local computation • in total size or (better) node degree • O(1) messages per link • Limited message size • F(# agents, graph size, numerical inputs)

  27. Dynamics (partial) • Internet systems often have “churn.” • Agents come and go • Agents change their inputs • “Robust” systems must tolerate churn. • most of system oblivious to most changes • Example of dynamic requirement: • o(n) changes should not trigger Ω(n) updates.

  28. Protocol-Based Computation • Use standardized protocol as substrate for computation. • relative rather than absolute complexity • Advantages: • incorporates informal design guidelines • adoption does not require new protocol • example: BGP-based mech’s for routing

  29. Outline • Motivation and Background • Example: Multicast Cost Sharing • Overview of Known Results • Three Research Directions • Open Questions

  30. Multicast Cost Sharing (MCS) [Figure: a source multicasts down a tree with link costs c(l) to users with valuations vi] • Receiver Set: Which users receive the multicast? • Cost Shares: How much does each receiver pay? • Model [FKSS03, §1.2]: • Obedient Network • Strategic Users

  31. Notation • P: users (or “participants”) • R: receiver set (σi = 1 if i ∈ R) • pi: user i’s cost share (change in sign!) • ui: user i’s utility (ui = σi vi – pi) • W: total welfare, W(R) ≜ V(R) – C(R), where V(R) ≜ ∑i∈R vi and C(R) ≜ ∑l∈T(R) c(l)

  32. “Process” Design Goals • No Positive Transfers (NPT): pi ≥ 0 • Voluntary Participation (VP): ui ≥ 0 • Consumer Sovereignty (CS): For all trees and costs, there is a threshold cs s.t. σi = 1 if vi ≥ cs. • Symmetry (SYM): If i, j have a zero-cost path and vi = vj, then σi = σj and pi = pj.

  33. Two “Performance” Goals • Efficiency (EFF): R = arg maxR W(R) • Budget Balance (BB): C(R) = ∑i∈R pi

  34. Impossibility Results • Exact [GL79]: No strategyproof mechanism can be both efficient and budget-balanced. • Approximate [FKSS03]: No strategyproof mechanism that satisfies NPT, VP, and CS can be both α-approximately efficient and β-approximately budget-balanced, for any positive constants α, β.

  35. Efficiency • Uniqueness [MS01]: The only strategyproof, efficient mechanism that satisfies NPT, VP, and CS is the Marginal-Cost mechanism (MC): pi = vi – (W – W-i), where W is the maximal total welfare, and W-i is the maximal total welfare without agent i. • MC also satisfies SYM.
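The MC formula can be checked by brute force on a tiny, hypothetical two-user tree (all numbers invented for illustration): W and W-i are found by exhaustive search over receiver sets, and pi = vi – (W – W-i). Here both users end up in the efficient receiver set, so the formula applies directly.

```python
from itertools import chain, combinations

# Hypothetical two-user multicast tree: the root reaches user 1 over link
# "ra" (cost 4); user 2 sits one link further down, over "ab" (cost 2).
users = {1: 3.0, 2: 5.0}                      # valuations v_i
path_links = {1: {"ra"}, 2: {"ra", "ab"}}     # links needed to reach user i
cost = {"ra": 4.0, "ab": 2.0}

def subsets(items):
    items = list(items)
    return chain.from_iterable(combinations(items, r)
                               for r in range(len(items) + 1))

def welfare(R):
    """W(R) = V(R) - C(R): valuations of R minus the cost of the tree T(R)."""
    links = set().union(*(path_links[i] for i in R)) if R else set()
    return sum(users[i] for i in R) - sum(cost[l] for l in links)

def max_welfare(exclude=None):
    return max(welfare(R) for R in subsets(i for i in users if i != exclude))

W = max_welfare()                              # efficient R = {1, 2} here
p = {i: users[i] - (W - max_welfare(exclude=i)) for i in users}
# W = 2.0, W_{-1} = W_{-2} = 0, so p = {1: 1.0, 2: 3.0}.
```

Note that the shares sum to 4 while the tree serving both users costs 6: MC can run a budget deficit, consistent with the exact impossibility result above.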

  36. Budget Balance (1) • General Construction [MS01]: Any cross-monotonic cost-sharing formula results in a group-strategyproof and budget-balanced cost-sharing mechanism that satisfies NPT, VP, CS, and SYM. • R is the biggest set s.t. pi(R) ≤ vi, for all i ∈ R.

  37. Budget Balance (2) • Efficiency loss [MS01]: The Shapley-value mechanism (SH) minimizes the worst-case efficiency loss. • SH Cost Shares: c(l) is shared equally by all receivers downstream of l.
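The SH cost shares are easy to compute for a fixed receiver set; the Python sketch below (tree and numbers hypothetical) splits each link's cost equally among its downstream receivers. The full SH mechanism would additionally iterate, dropping any user whose valuation falls below her share, until the receiver set stabilizes.

```python
# Hypothetical tree: link "ra" (cost 6) feeds all three receivers; link "ab"
# (cost 4) feeds only receivers 2 and 3.  Each link's cost is split equally
# among the receivers downstream of it.
receivers = {1, 2, 3}
path_links = {1: ["ra"], 2: ["ra", "ab"], 3: ["ra", "ab"]}
cost = {"ra": 6.0, "ab": 4.0}

downstream = {l: [i for i in receivers if l in path_links[i]] for l in cost}
share = {i: sum(cost[l] / len(downstream[l]) for l in path_links[i])
         for i in receivers}
# "ra" splits 3 ways and "ab" splits 2 ways: share = {1: 2.0, 2: 4.0, 3: 4.0}.
```

The shares sum exactly to the cost of the served tree (10.0 here), which is the budget-balance property.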

  38. Network Complexity for BB • Hardness [FKSS03]: Implementing a group-strategyproof and budget-balanced mechanism that satisfies NPT, VP, CS, and SYM requires sending Ω(|P|) bits over Ω(|L|) links in the worst case. • Bad network complexity!

  39. Network Complexity of EFF • “Easiness” [FPS01]: MC needs only: • One modest-sized message in each link direction • Two simple calculations per node • Good network complexity!

  40. Computing Cost Shares pi ← vi – (W – W-i) • Case 1: No difference in trees. Welfare Difference = vi. Cost Share = 0 • Case 2: Trees differ by 1 subtree. Welfare Difference = W′ (welfare of the minimum-welfare subtree above i). Cost Share = vi – W′

  41. Two-Pass Algorithm for MC Bottom-up pass: • Compute subtree welfares W. • If W < 0, prune subtree. Top-down pass: • Keep track of minimum welfare subtrees. • Compare vi to minimal W.

  42. Σ W≡ vα + Wβ–cα β Ch(α)s.t.Wβ≥ 0 Computing the MC Receiver Set R Proposition:res(α) Riff Wγ≥ 0, γ {anc. ofαinT(P)} Additional Notation: {α, β, γ}  P Ch(α) children of αin T(P) res(α) all users “resident” at node α loc(i) node at which user i is “located” ∆ = ∆ = ∆ =

  43. Bottom-Up Traversal of T(P)
  Node α, after receiving Wβ from each β ∈ Ch(α): {
    COMPUTE Wα
    IF Wα ≥ 0, σi ← 1 ∀i ∈ res(α)
    ELSE σi ← 0 ∀i ∈ res(α)
    SEND Wα TO parent(α) }

  44. [Figure: worked example of the bottom-up traversal on a sample multicast tree; numeric annotations not recoverable]

  45. Computing Cost Shares pi ← vi – (W – W-i) • Case 1: No difference in trees. Welfare Difference = vi. Cost Share = 0 • Case 2: Trees differ by 1 subtree. Welfare Difference = W′ (welfare of the minimum-welfare ancestor of loc(i)). Cost Share = vi – W′

  46. Need Not Recompute W-i for Each i ∈ P [Figure: nested subtree welfares W1…W4 on the path above vi; subtract vi or W1 from W2?]

  47. [Figure: the example tree again, annotated for cost-share computation]

  48. [Figure: variant of the example tree; root welfare 11]

  49. [Figure: variant of the example tree; root welfare 5]

  50. Top-Down Traversal of T(P)
  (Nodes have “state” from bottom-up traversal)
  Init: Root s sends Ws to each β ∈ Ch(s)
  Node α ≠ s, after receiving A from parent(α): {
    IF σi = 0 ∀i ∈ res(α), OR A < 0 {
      pi ← 0, σi ← 0, ∀i ∈ res(α)
      SEND –1 TO each β ∈ Ch(α) }
    ELSE {
      A ← min(A, Wα)
      FOR EACH i ∈ res(α): IF vi ≤ A, pi ← 0; ELSE pi ← vi – A
      SEND A TO each β ∈ Ch(α) } }
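The two traversals can be sketched end to end in Python. This is a simplified, hypothetical rendering (recursion in place of message passing, and the pruning test uses Wα < 0, which matches the σi = 0 state set during the bottom-up pass); the tree and numbers are invented for illustration.

```python
# Two-pass MC sketch: bottom-up subtree welfares, top-down cost shares.

class Node:
    def __init__(self, link_cost=0.0, users=None, children=None):
        self.c = link_cost            # cost of the link to the parent
        self.users = users or {}      # user id -> valuation v_i
        self.children = children or []
        self.W = 0.0                  # subtree welfare, set bottom-up

def bottom_up(node):
    """W_alpha = sum of v_i at alpha + nonnegative child welfares - c_alpha."""
    node.W = sum(node.users.values()) - node.c
    for ch in node.children:
        bottom_up(ch)
        if ch.W >= 0:                 # negative-welfare subtrees are pruned
            node.W += ch.W
    return node.W

def top_down(node, A, sigma, p):
    """A carries the minimum subtree welfare seen on the path from the root."""
    if A < 0 or node.W < 0:           # pruned: nobody below receives
        for i in node.users:
            sigma[i], p[i] = 0, 0.0
        A = -1.0
    else:
        A = min(A, node.W)
        for i, v in node.users.items():
            sigma[i] = 1
            p[i] = 0.0 if v <= A else v - A
    for ch in node.children:
        top_down(ch, A, sigma, p)

# Hypothetical tree: user 1 below a cost-4 link, user 2 one cost-2 link deeper.
root = Node(children=[
    Node(link_cost=4.0, users={1: 3.0},
         children=[Node(link_cost=2.0, users={2: 5.0})])])
sigma, p = {}, {}
W = bottom_up(root)                   # W = 2.0
top_down(root, float("inf"), sigma, p)
# sigma = {1: 1, 2: 1}; p = {1: 1.0, 2: 3.0}.
```

On this tree the minimum ancestor welfare above each user is 2, so user 1 pays 3 – 2 = 1 and user 2 pays 5 – 2 = 3, matching pi = vi – (W – W-i).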
