Mechanism Design and Computer Security


Presentation Transcript


  1. Mechanism Design and Computer Security • John Mitchell, Vanessa Teague • Stanford University

  2. The Internet Three kinds of behavior: Blind obedience, rational self-interest, malicious disruption

  3. Outline for this workshop talk • Some network problems • Congestion control, Interdomain routing • Algorithmic mechanism design • Pricing function provides incentives • Distributed mechanisms and security • Distributed impl by rational agents • Prevent malicious acts by rational agents • Open problem: irrational malicious agents Warning: bait and switch

  4. TCP/IP Transmission • TCP guarantees packet delivery • Source packets have sequence numbers • Destination acknowledges • If a packet is lost, the source resends
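
A minimal sketch in Python of the resend-on-loss idea (not TCP itself; the lossy channel and its loss rate are invented for illustration):

```python
import random

def lossy_send(packet, loss_rate=0.3):
    """Invented lossy channel: drops the packet with some probability,
    otherwise the destination acknowledges the sequence number."""
    if random.random() < loss_rate:
        return None
    return packet["seq"]

def reliable_send(data):
    """Stop-and-wait sketch of TCP's idea: number packets, resend until acked."""
    for seq, payload in enumerate(data):
        ack = None
        while ack != seq:          # if the packet (or ack) was lost, resend
            ack = lossy_send({"seq": seq, "payload": payload})

reliable_send(["pkt-a", "pkt-b", "pkt-c"])
```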

  5. TCP Congestion Control • If packets are lost, assume congestion • Reduce transmission rate by half, repeat • If loss stops, increase rate very slowly • Design assumes routers blindly obey this policy

  6. Competition • Amiable Alice yields to boisterous Bob • Alice and Bob both experience packet loss • Alice backs off • Bob disobeys protocol, gets better results
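
To make the competition concrete, a toy simulation of the backoff rule from slide 5, with all constants invented: both senders share a link, both see loss when the link saturates, Alice halves her rate while Bob barely backs off:

```python
CAPACITY = 100.0   # invented link capacity

def simulate(rounds=1000):
    alice, bob = 1.0, 1.0              # current sending rates
    for _ in range(rounds):
        if alice + bob > CAPACITY:     # congestion: both see packet loss
            alice *= 0.5               # Alice obeys: cut the rate in half
            bob *= 0.9                 # Bob cheats: backs off only 10%
        else:                          # no loss: increase rate slowly
            alice += 1.0
            bob += 1.0
    return alice, bob

alice, bob = simulate()
print(f"Alice ~{alice:.0f}, Bob ~{bob:.0f}")   # Bob ends up with most of the link
```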

  7. What’s the point? • TCP/IP assumes honesty • If everyone follows protocol, transmission rates adapt to load • Incentive for dishonesty • Dishonest TCP works better, as long as others follow standard TCP backoff • Security risks • Vulnerable to denial of service, IP-spoofing, etc.

  8. Goal: More robust networking • Introduce economic incentives • Routers administered autonomously • Reward good behavior • Prevent tragedy of the commons • Include security measures • Economics => adaptive behavior • Better load balancing to increase welfare • Accounting => increased instrumentation • Detect, quarantine malicious behavior

  9. Interdomain Routing • Autonomous System (AS): a connected group of one or more Internet Protocol prefixes under a single routing policy (aka domain), e.g., earthlink.net, Stanford.edu • Interior Gateway Protocol: routing within an AS • Exterior Gateway Protocol: routing between ASes

  10. Transit and Peering • Transit: ISP sells access • Peering: reciprocal connectivity • BGP protocol: routing announcements for both

  11. BGP overview • Iterative path announcement • Path announcements grow from destination to source • Subject to policy (transit, peering) • Packets flow in reverse direction • Protocol specification • Announcements can be shortest path • Nodes allowed to use other policies • E.g., “cold-potato routing” by smaller peer • Not obligated to use path you announce
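
A minimal path-vector sketch of this iterative announcement process (the four-node graph and the shortest-path tie-break are invented; real BGP policy is far richer):

```python
# Toy path-vector propagation toward destination "d" over an invented graph.
graph = {"d": ["a", "b"], "a": ["b", "c"], "b": ["c"], "c": []}
neighbors = {n: set() for n in graph}
for n, adj in graph.items():           # make the adjacency symmetric
    for m in adj:
        neighbors[n].add(m)
        neighbors[m].add(n)

paths = {"d": ["d"]}                   # best known path to d from each node
changed = True
while changed:                         # iterate until announcements stabilize
    changed = False
    for node in neighbors:
        for nbr in neighbors[node]:
            # Loop avoidance: never adopt a path that already contains us.
            if nbr in paths and node not in paths[nbr]:
                candidate = [node] + paths[nbr]   # prepend self, re-announce
                # Policy hook: shortest path here, but BGP permits others.
                if node not in paths or len(candidate) < len(paths[node]):
                    paths[node] = candidate
                    changed = True

print(paths)   # e.g. {'d': ['d'], 'a': ['a', 'd'], 'b': ['b', 'd'], 'c': ['c', 'a', 'd']}
```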

  12. BGP example [D. Wetherall] • Transit: 2 provides transit for 7 • 7 reaches and is reached via 2 • Peering: 4 and 5 peer • exchange customer traffic • (Figure: eight ASes numbered 1–8 propagating path announcements such as “7 2 3 4” and “2 6 5”)

  13. Issues • BGP convergence problems • Protocol allows policy flexibility • Some legal policies prevent convergence • Even shortest-path policy converges slowly • Incentive for dishonesty • ISP pays for some routes, others free • Security problems • Potential for disruptive attacks

  14. Evidence: Asymmetric Routes • Alice and Bob use cheapest routes to each other • These are not always shortest paths • Asymmetric routes are prevalent • AS asymmetry in 30% of measured routes • Finer-grained asymmetry far more prevalent

  15. Mechanism Design • Charge for goods • Assume agents have rational self-interest • Provide incentives via pricing function • Traditional use • Maximize social welfare • Make honesty the best policy (revelation principle) • Network applications • Maximize throughput, resilience to attack • Fake money as good as real money

  16. Grand Plan (roadmap figure: the goal)

  17. Multicast cost sharing • Distribute some good over a tree of nodes and links • Each node has some utility for the good • Each link has some cost • Which nodes get the transmission?

  18. Multicast solutions • Centralized scheme [FPS] • Pricing algorithm that elicits true utility • Controlled distributed scheme [FPS] • Works for tamper-resistant nodes • Problems if nodes are dishonest • Autonomous distributed scheme • Use signatures to verify data • Verifying node must not share incentive to cheat

  19. Traditional Goals • Efficient • Maximize overall welfare • Welfare = total utility of agents that get the good − total network cost of links used • Strategyproof • Agent cannot gain by lying about its utility • May not maximize profit for sender

  20. FPS Network Assumptions • Nodes and agents • Each node has trusted router • Router connected to untrusted agents • Transmission costs • Link cost known to the two nodes at each end Simplification: will assume one agent per node

  21. Centralized Scheme • Data collection • Agent reports utility to central authority • Computation • Compute welfare of each subtree • Routing decision • Transmit good to subtree if welfare ≥ 0

  22. Welfare of Subtree • Welfare of a subtree Ti with cost ci • Wi = ui − ci if node i is a leaf • Wi = ui − ci + Σk child of i max(Wk, 0) otherwise • Welfare is aggregate benefit minus cost

  23. Example: Maximum welfare • Tree: root (cost 2, utility 3) with two children, one (cost 4, utility 2) and one (cost 3, utility 1) that itself has a child (cost 1, utility 7) • Welfares: 7 − 1 = 6; 1 − 3 + 6 = 4; 2 − 4 = −2; 3 − 2 + 0 + 4 = 5 • If welfare is secret, how do we determine the outcome?
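
A sketch of the welfare recursion from slide 22, checked against the example tree above (the dictionary encoding is mine):

```python
def welfare(node):
    """W(i) = u_i - c_i + sum of max(W(k), 0) over children k (slide 22)."""
    w = node["utility"] - node["cost"]
    for child in node.get("children", []):
        w += max(welfare(child), 0)
    return w

# The example tree from this slide.
tree = {"cost": 2, "utility": 3, "children": [
    {"cost": 4, "utility": 2},                     # welfare 2 - 4 = -2
    {"cost": 3, "utility": 1, "children": [
        {"cost": 1, "utility": 7},                 # welfare 7 - 1 = 6
    ]},                                            # welfare 1 - 3 + 6 = 4
]}

print(welfare(tree))   # 3 - 2 + max(-2, 0) + max(4, 0) = 5
```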

  24. How much should a node pay? • Announced utility? • Agent may gain by lying • E.g., a leaf below a cost-2 link with true utility 5 will announce utility 2, since this is enough to get the good • Similar incentive for internal nodes

  25. FPS Pricing Mechanism • If agent does not receive the good • Agent pays nothing • If agent receives the good • Agent pays the minimum bid needed to get the transmission, given the other players’ bids • This is a VCG mechanism
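
In the centralized setting this price can be found by brute-force threshold search (a sketch, not the FPS algorithm itself; the tree encoding and the `receives`/`vcg_price` helpers are mine):

```python
def welfare(node):
    """W(i) = u_i - c_i + sum over children of max(W(child), 0)."""
    return (node["utility"] - node["cost"]
            + sum(max(welfare(c), 0) for c in node.get("children", [])))

def receives(root, name):
    """True iff every subtree on the path from the root to `name`
    has nonnegative welfare, i.e. the good is routed all the way down."""
    def walk(node, ok):
        ok = ok and welfare(node) >= 0
        if node["name"] == name:
            return ok
        return any(walk(c, ok) for c in node.get("children", []))
    return walk(root, True)

def vcg_price(root, node, step=1):
    """Brute force: smallest bid at which `node` still receives the good,
    holding everyone else's bids fixed."""
    true_u = node["utility"]
    if not receives(root, node["name"]):
        return 0                       # no transmission, no payment
    bid = 0
    while True:
        node["utility"] = bid
        if receives(root, node["name"]):
            node["utility"] = true_u   # restore the real bid
            return bid
        bid += step

# The tree from slide 23, with invented node names.
leaf = {"name": "leaf", "cost": 1, "utility": 7}
tree = {"name": "root", "cost": 2, "utility": 3, "children": [
    {"name": "a", "cost": 4, "utility": 2},
    {"name": "b", "cost": 3, "utility": 1, "children": [leaf]},
]}
print(vcg_price(tree, leaf))   # 3, matching slide 26
```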

  26. Example price calculations • Same tree as slide 23 • The agent with utility 2 does not receive the good (subtree welfare 2 − 4 = −2) and pays 0 • The agent with utility 7 receives the good and pays 3, the minimum bid that keeps every subtree on its path at welfare ≥ 0

  27. Strategyproof and Efficient • Efficient (max welfare) by construction • Add omitted subtree -> decrease welfare • Remove routed subtree -> decrease welfare • This argument assumes agents tell the truth • Agent can bid true utility • Payment is independent of bid, given outcome • Bid more than utility -> doesn’t help, or pay too much • Bid less than utility -> doesn’t help, or don’t get the transmission

  28. Tell truth if you buy the good • (Diagram: a bid axis with a threshold at the minimum bid needed to get the transmission; bids below it get nothing, bids above it get the good) • Your true utility u lies above the threshold, so bidding u gets you the transmission • Underbidding risks not getting a good you want

  29. Tell truth if you don’t buy the good • (Diagram: same bid axis, but the true utility u lies below the threshold) • Overbidding past the threshold gets the transmission, but you pay more than u • Bidding truthfully, you don’t get the transmission and pay nothing

  30. Profit for content distributor? • What’s the worst-case return? • Marginal-cost pricing does not guarantee profit • May lose money, fail to capture utility • E.g., a cost-100 link from the source leads to two agents over free links, each with utility 100; every subtree has welfare 100 (root: 0 − 100 + 100 + 100 = 100), so both agents receive the good, yet each pays 0 and the distributor eats the link cost

  31. Distributed implementation • 1) Send welfare up the tree • 2) Send min welfare Wmin down the tree • 3) Compute payment = utility − Wmin • Same tree as slide 23: the root sends Wmin = 5 down; the welfare −2 subtree gets “no transmission”; the bottom node receives Wmin = 4, so the utility-7 agent pays 7 − 4 = 3
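
A sketch of the two passes, with message passing simulated by recursion; clamping payments at 0 for nodes whose utility is below Wmin is my reading of the rule:

```python
def up_pass(node):
    """Pass 1: welfare flows up; each node stores its subtree welfare W."""
    w = node["utility"] - node["cost"]
    for child in node.get("children", []):
        w += max(up_pass(child), 0)
    node["W"] = w
    return w

def down_pass(node, wmin_above=float("inf")):
    """Pass 2: the minimum welfare seen so far (Wmin) flows down."""
    wmin = min(wmin_above, node["W"])
    node["Wmin"] = wmin
    node["payment"] = 0
    if wmin >= 0:                            # subtree is routed
        node["payment"] = max(0, node["utility"] - wmin)
        for child in node.get("children", []):
            down_pass(child, wmin)           # else: "no transmission"

leaf = {"cost": 1, "utility": 7}
tree = {"cost": 2, "utility": 3, "children": [
    {"cost": 4, "utility": 2},
    {"cost": 3, "utility": 1, "children": [leaf]},
]}
up_pass(tree)
down_pass(tree)
print(leaf["Wmin"], leaf["payment"])   # 4 and 7 - 4 = 3, as on this slide
```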

  32. Autonomous distributed model • Agents control nodes • They can use different utilities for different messages • An agent with children can lie about the children’s utilities • There is nothing to force an agent to pay the correct amount

  33. Node can cheat its children • Setup: source → parent (cost 3, utility 2) → child (cost 5, utility 7) • The truth: child welfare 7 − 5 = 2, parent welfare 2 − 3 + 2 = 1, so Wmin = 1; parent pays 1, child pays 6 • The cheat: parent sends the child Wmin = 0 instead of 1; parent pays 0, child pays 7 • Child can’t see that the parent doesn’t pay

  34. More ways to cheat • Second example • Node can cheat but all messages look consistent • Conclusion • Need to use payment and messages to detect cheating

  35. Second Example: Truthful computation • Source → node 1 (cost 2, utility 2) → nodes 2 and 3 (cost 1, utility 1 each) • Welfares: node 2: 1 − 1 = 0; node 3: 1 − 1 = 0; node 1: 2 − 2 + 0 + 0 = 0 • Wmin = 0 everywhere • Payments: node 1 pays 2, nodes 2 and 3 pay 1 each

  36. Example 2: Deception • Agent 1 behaves as if utility = 4 until it is time to pay, then claims utility 2 • Each child thinks the other has utility 3 • What agent 3 thinks: the other child has welfare 3 − 1 = 2, so node 1 has welfare 2 − 2 + 0 + 2 = 2 and Wmin = 2, consistent with node 1 paying 0 • What actually happens: node 1’s announced welfare is 4 − 2 + 0 + 0 = 2, Wmin = 2, node 1 pays 0 instead of the truthful 2, and each child still pays 1 • Every message the children see looks consistent

  37. Prevent cheating • Assume public-key infrastructure • Each node has verifiable signature • Augment messages • Sign data from FPS algorithm • Parent returns signed W to child • Nodes send payment + proof • Proof is signed data showing payment is calculated correctly Two improvements yet to come

  38. Node j sends payment and proof • New data used in j’s proof: parent p sends Sign(p, Wmin) and Sign(p, Wj); node j sends Sign(j, Wj) up and Sign(j, Wmin) down; children d1, d2 send Sign(d1, Wd1) and Sign(d2, Wd2) • Agent j pays Pj = Uj − min(Wmin, Wj), where Uj = cj + Wj − (Wd1 + Wd2) • The calculation of Pj is verifiable from messages signed by p, d1, d2
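
A sketch of the check a verifier could run; real signatures are replaced by HMAC tags under per-node keys, purely as a stand-in for the PKI the slides assume, and the proof encoding and numbers are mine:

```python
import hashlib
import hmac

# Per-node keys: a stand-in for the signatures / PKI the slides assume.
KEYS = {"p": b"parent-key", "j": b"j-key", "d1": b"d1-key", "d2": b"d2-key"}

def sign(signer, value):
    """Fake signature: an HMAC tag over the value under the signer's key."""
    return hmac.new(KEYS[signer], repr(value).encode(), hashlib.sha256).digest()

def check_payment(proof):
    """Recompute Pj = Uj - min(Wmin, Wj) with Uj = cj + Wj - (Wd1 + Wd2),
    accepting only values vouched for by p, d1 and d2 (slide 38)."""
    for signer, field in [("p", "Wmin"), ("p", "Wj"),
                          ("d1", "Wd1"), ("d2", "Wd2")]:
        expected = sign(signer, proof[field])
        if not hmac.compare_digest(expected, proof["sig_" + field]):
            return False                    # bad signature: punish j
    u_j = proof["cj"] + proof["Wj"] - (proof["Wd1"] + proof["Wd2"])
    return proof["payment"] == u_j - min(proof["Wmin"], proof["Wj"])

# Invented numbers: Uj = 2 + 3 - (1 + 0) = 4, so Pj = 4 - min(3, 3) = 1.
proof = {"cj": 2, "Wj": 3, "Wd1": 1, "Wd2": 0, "Wmin": 3, "payment": 1}
for signer, field in [("p", "Wmin"), ("p", "Wj"), ("d1", "Wd1"), ("d2", "Wd2")]:
    proof["sig_" + field] = sign(signer, proof[field])
print(check_payment(proof))   # True
```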

  39. Node j sends payment and proof • Lemma • If parent p and children d1, …, dk are honest, then node j cannot improve its own welfare by not sending correct values • Proof idea • If node j does not send a correct proof, we punish it -> node sends correct Wj • Node j cannot gain by sending incorrect data down the tree, since these do not change Pj

  40. Shortcomings • Proof checked by central authority • Node can be mischievous • Node cannot increase its own welfare by sending bad values down the tree • But node can make life worse for others • Wmin too low => nodes below pay too much • Wmin too high => they pay too little, distributor loses

  41. Randomized checking • Nodes pay and save proof • Randomly select node to audit • If node has correct proof, OK • If node cannot show proof, punish • Fine node, or prohibit from further transmission (route around bad node) • Make punishment high enough so expected benefit of cheating is negative • Reduce traffic, same outcome Bombay bus fine…
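
The deterrence condition is one line of arithmetic; a sketch with made-up numbers:

```python
def cheating_pays(gain, audit_prob, fine):
    """Expected benefit of cheating; deterred when audit_prob * fine > gain."""
    return gain - audit_prob * fine > 0

# Made-up numbers: saving 5 by cheating, audited 10% of the time.
print(cheating_pays(gain=5, audit_prob=0.1, fine=40))   # True: fine too small
print(cheating_pays(gain=5, audit_prob=0.1, fine=60))   # False: deterred
```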

  42. Prevent Mischief • Receive signed confirmation from child • Confirmation is required as part of the proof • (Diagram: node j sends Sign(j, Wmin) to children d1, d2; child d1 returns Sign(d1, Wmin), which j must include in the proof it shows parent p)

  43. Status of Multicast Cost Sharing • Pricing function provides incentive • Distributed algorithm computes price • Techniques to encourage compliance • Nodes save signed confirmation of messages • Randomized auditing incentivizes compliance • Alternative: neighbors rewarded for turning in cheaters • Route around nodes that cause trouble

  44. Grand Plan (roadmap figure: the goal)
