
Game theory and security




  1. Game theory and security Jean-Pierre Hubaux EPFL With contributions (notably) from M. Felegyhazi, J. Freudiger, H. Manshaei, D. Parkes, and M. Raya

  2. Security Games in Computer Networks • Security of Physical and MAC Layers • Mobile Network Security • Anonymity and Privacy • Intrusion Detection Systems • Sensor Networks Security • Security Mechanisms • Game Theory and Cryptography • Distributed Systems • More information: http://lca.epfl.ch/gamesec

  3. Security of physical and MAC layers (1/2) (figure: Alice ↔ Bob communication in the presence of an eavesdropper; Bob pays jammers J1, …, JN) Players: Bob and his jamming friends. Objective: Avoid eavesdropping on the Bob → Alice communication. Cost: Payment to the jammers and interference to Bob and Alice. Game model: Stackelberg game. Game results: Existence of an equilibrium and design of a protocol to avoid eavesdropping. • Zhu Han, N. Marina, M. Debbah, A. Hjorungnes, “Physical layer security game: How to date a girl with her boyfriend on the same table,” GameNets 2009. • E. Altman, K. Avrachenkov, A. Garnaev, “Jamming in wireless networks: The case of several jammers,” GameNets 2009. • W. Trappe, A. Garnaev, “An eavesdropping game with SINR as an objective function,” SecureComm 2009.
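To make the Stackelberg structure of this game concrete, here is a minimal numeric sketch under assumed utility functions (the quadratic jamming-energy cost, the logarithmic secrecy benefit, and all constants are assumptions made here, not taken from the cited papers): the jammer, as follower, best-responds to the payment rate offered by Bob, and Bob, as leader, chooses the payment anticipating that response.

```python
import numpy as np

def jammer_best_response(p, c=1.0):
    # Follower: chooses jamming power P to maximize U_J(P) = p*P - c*P**2
    # (payment rate p per unit of jamming power, quadratic energy cost c).
    # Closed form: P*(p) = p / (2c).
    return p / (2.0 * c)

def bob_utility(p, b=2.0, c=1.0):
    # Leader: assumed utility U_B(p) = b*log(1 + P*(p)) - p*P*(p), i.e. a secrecy
    # benefit that grows with the jamming power aimed at the eavesdropper,
    # minus the total payment handed to the jammer.
    P = jammer_best_response(p, c)
    return b * np.log1p(P) - p * P

payments = np.linspace(0.0, 5.0, 501)          # grid search over payment rates
best_p = max(payments, key=bob_utility)
print(f"leader payment p* = {best_p:.2f}, "
      f"follower power P* = {jammer_best_response(best_p):.2f}, "
      f"leader utility = {bob_utility(best_p):.2f}")
```

The leader moves first and internalizes the follower's reaction; this anticipation is what distinguishes the Stackelberg solution from a simultaneous-move Nash equilibrium.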

  4. Security of physical and MAC layers (2/2): Optimal defense mechanisms against denial-of-service attacks in wireless networks. Players (ad hoc or infrastructure mode): • Well-behaved (W) wireless nodes • Selfish (S) - higher access probability • Malicious (M) - jams other nodes (DoS) • Objective: Find the optimal strategy against M and S nodes • Reward and cost: Throughput and energy • Game model: A power-controlled MAC game solved for a Bayesian Nash equilibrium • Game results: A Bayesian learning mechanism to update the type belief in repeated games. Y.E. Sagduyu, R. Berry, A. Ephremides, “MAC games for distributed wireless network security with incomplete information of selfish and malicious user types,” GameNets 2009.
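The Bayesian learning step mentioned above can be sketched as follows, with made-up per-type transmission likelihoods (the paper's actual observation model is not reproduced here): the defender keeps a belief over the opponent's type {W, S, M} and applies Bayes' rule after each observed channel-access outcome.

```python
# Assumed per-slot probability that each node type transmits (illustrative only).
TRANSMIT_PROB = {"W": 0.3, "S": 0.7, "M": 0.95}

def update_belief(belief, observed_transmission):
    """One Bayes-rule update of P(type) given whether the opponent transmitted."""
    posterior = {}
    for node_type, prior in belief.items():
        likelihood = TRANSMIT_PROB[node_type] if observed_transmission else 1 - TRANSMIT_PROB[node_type]
        posterior[node_type] = prior * likelihood
    norm = sum(posterior.values())
    return {t: p / norm for t, p in posterior.items()}

# Start from a uniform prior and observe a run of mostly aggressive transmissions.
belief = {"W": 1/3, "S": 1/3, "M": 1/3}
for observed in [True, True, True, False, True]:
    belief = update_belief(belief, observed)
print({t: round(p, 3) for t, p in belief.items()})   # mass shifts towards S and M
```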

  5. Sensor networks security Players: 1. IDS (residing at the base station) 2. Sensor nodes (some nodes could act maliciously: drop packets). Stage-game outcomes: • Normal node (forwards packets), IDS misses: least damage • Normal node, IDS catches: false positive • Malicious node (drops packets), IDS misses: false negative • Malicious node, IDS catches: best choice for the IDS. Game: A repeated game between the IDS and the nodes to detect the malicious nodes. Game results: Equilibrium calculation in the infinitely repeated game, and use of the results to evaluate the reputation of nodes. A. Agah, M. Asadi, S. K. Das, “Prevention of DoS Attack in Sensor Networks using Repeated Game Theory,” ICWN 2006.
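One way to make the stage game explicit is the sketch below; the numeric payoffs are illustrative placeholders consistent with the ordering on the slide (catching a dropper is best for the IDS, a false negative is worst), not values from the cited paper.

```python
# Payoffs (IDS, node) for each (IDS action, node action); numbers are illustrative.
payoffs = {
    ("miss",  "forward"): ( 1,  1),   # least damage
    ("catch", "forward"): (-1,  0),   # false positive
    ("miss",  "drop"):    (-2,  2),   # false negative
    ("catch", "drop"):    ( 2, -2),   # best outcome for the IDS
}
ids_actions, node_actions = ("miss", "catch"), ("forward", "drop")

def pure_nash_equilibria():
    """Profiles where neither player can gain by deviating unilaterally."""
    eqs = []
    for a in ids_actions:
        for b in node_actions:
            u_ids, u_node = payoffs[(a, b)]
            ids_ok = all(payoffs[(a2, b)][0] <= u_ids for a2 in ids_actions)
            node_ok = all(payoffs[(a, b2)][1] <= u_node for b2 in node_actions)
            if ids_ok and node_ok:
                eqs.append((a, b))
    return eqs

print(pure_nash_equilibria())   # empty with these numbers: only a mixed equilibrium exists in the stage game
```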

  6. Game theory and security mechanisms: dealing with uncertainty Players: Attacker and defender. Objective: Find the optimal strategy given the strategy of the opponent. Strategies: “Attack” or “Not attack”, “Defend” or “Not defend”. Decision process: Fictitious play (FP). Game model: Discrete-time repeated nonzero-sum matrix game, but players observe their opponent’s actions with error. Study of the effect of observation errors on the convergence to the NE with classical and stochastic FP games. • K. C. Nguyen, T. Alpcan, and T. Basar, “Security games with incomplete information,” ICC 2009. • A. Miura-Ko, B. Yolken, N. Bambos, and J. Mitchell, “Security Investment Games of Interdependent Organizations,” Allerton Conference on Communication, Control, and Computing, Allerton, IL, September 2008.
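The fictitious-play process named above can be sketched on an assumed 2x2 attacker-defender matrix game (the payoffs and the observation-error rate below are illustrative, not those of the cited papers): each player best-responds to the empirical frequency of the opponent's observed actions, and every observation is flipped with some probability.

```python
import random

# Rows = own action, columns = opponent action; 0 = "no attack"/"no defend", 1 = "attack"/"defend".
ATTACKER = [[0, 0], [4, -1]]    # attacking pays off only if undefended
DEFENDER = [[0, -5], [-1, 2]]   # defending is costly but limits damage

def best_response(payoff, opp_freq):
    """Pick the action maximizing expected payoff against the observed opponent mix."""
    expected = [sum(payoff[a][b] * opp_freq[b] for b in (0, 1)) for a in (0, 1)]
    return max((0, 1), key=lambda a: expected[a])

def fictitious_play(rounds=5000, error=0.1, seed=0):
    rng = random.Random(seed)
    att_counts, def_counts = [1, 1], [1, 1]      # smoothed empirical observation counts
    for _ in range(rounds):
        att_freq = [c / sum(att_counts) for c in att_counts]
        def_freq = [c / sum(def_counts) for c in def_counts]
        a = best_response(ATTACKER, def_freq)    # attacker best-responds to observed defender mix
        d = best_response(DEFENDER, att_freq)    # defender best-responds to observed attacker mix
        # Each observation is flipped with probability `error` (imperfect monitoring).
        att_counts[a if rng.random() > error else 1 - a] += 1
        def_counts[d if rng.random() > error else 1 - d] += 1
    return ([c / sum(att_counts) for c in att_counts],
            [c / sum(def_counts) for c in def_counts])

print(fictitious_play())   # empirical mixes the players converge towards, biased by the observation noise
```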

  7. Cryptography vs. Game Theory Y. Dodis, S. Halevi, T. Rabin, “A Cryptographic Solution to a Game Theoretic Problem,” Crypto 2000.

  8. Crypto and Game Theory • Cryptography applied to game theory: implement game-theoretic mechanisms in a distributed fashion. Example: mediator (in correlated equilibria), Dodis et al., Crypto 2000. • Game theory applied to cryptography: design crypto mechanisms with rational players. Example: rational secret sharing and multi-party computation, Halpern and Teague, STOC 2004.

  9. Improving Nash equilibria (1/2) Game of Chicken, payoffs (Player 1, Player 2): (C, C) = (4, 4); (C, D) = (1, 5); (D, C) = (5, 1); (D, D) = (0, 0). 3 Nash equilibria: (D, C), (C, D), (½ D + ½ C, ½ C + ½ D). Payoffs: [5, 1], [1, 5], [5/2, 5/2]. The payoff [4, 4] cannot be achieved without a binding contract, because it is not an equilibrium. Possible improvement 1: communication. Toss a fair coin: if Heads, play (C, D); if Tails, play (D, C). Average payoff = [3, 3]. Y. Dodis, S. Halevi, and T. Rabin, “A Cryptographic Solution to a Game Theoretic Problem,” Crypto 2000.
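As a quick check of the mixed equilibrium quoted above, using the Chicken payoffs implied by the slide, Player 1's indifference condition pins down Player 2's probability of playing D, and symmetry gives the rest:

```latex
% Let q be the probability that Player 2 plays D. Player 1 is indifferent between C and D when
\begin{align*}
\underbrace{4(1-q) + 1\cdot q}_{\text{payoff of }C}
  = \underbrace{5(1-q) + 0\cdot q}_{\text{payoff of }D}
  \;\Longrightarrow\; 4 - 3q = 5 - 5q \;\Longrightarrow\; q = \tfrac12,
\qquad
\text{expected payoff} = 4 - 3\cdot\tfrac12 = \tfrac52 .
\end{align*}
% By symmetry Player 2 faces the same condition, so both mix 1/2-1/2 and the
% equilibrium payoff vector is [5/2, 5/2], as stated on the slide.
```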

  10. Improving Nash equilibria (2/2) (same game of Chicken as on the previous slide) • Possible improvement 2: Mediator • Introduce an objective chance mechanism: choose V1, V2, or V3 with probability 1/3 each. Then: • Player 1 is told whether or not V1 was chosen, and nothing else • Player 2 is told whether or not V3 was chosen, and nothing else • If informed that V1 was chosen, Player 1 plays D, otherwise C • If informed that V3 was chosen, Player 2 plays D, otherwise C • This is a correlated equilibrium, with payoff [3 1/3, 3 1/3] • It assigns probability 1/3 to (C, C), (C, D), and (D, C), and 0 to (D, D) • How to replace the mediator by a crypto protocol: see Dodis et al.
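The mediator above is easy to simulate: draw the public randomness, let both players follow their signals, and check that the average payoff approaches the [3 1/3, 3 1/3] quoted on the slide (the payoff matrix is the Chicken matrix from the previous slide).

```python
import random

# Chicken payoffs from the previous slide: (Player 1, Player 2).
PAYOFF = {("C", "C"): (4, 4), ("C", "D"): (1, 5),
          ("D", "C"): (5, 1), ("D", "D"): (0, 0)}

def mediated_round(rng):
    v = rng.choice(["V1", "V2", "V3"])      # objective chance mechanism, probability 1/3 each
    p1 = "D" if v == "V1" else "C"          # Player 1 only learns "V1 or not V1"
    p2 = "D" if v == "V3" else "C"          # Player 2 only learns "V3 or not V3"
    return PAYOFF[(p1, p2)]

rng = random.Random(0)
n = 100_000
totals = [0, 0]
for _ in range(n):
    u1, u2 = mediated_round(rng)
    totals[0] += u1
    totals[1] += u2
print([t / n for t in totals])    # ~[3.33, 3.33], i.e. [10/3, 10/3]
```

Incentive compatibility can be checked the same way: conditioned on "not V1", Player 1 expects 2.5 from either action, and conditioned on "V1", playing D strictly beats C, so following the signal is a best response.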

  11. Design of cryptographic mechanisms with rational players: secret sharing Reminder on secret sharing (figure): a. Share issuer: the secret is split into shares S1, S2, S3 b. Share distribution: one share is given to each of Agents 1, 2, and 3 c. Secret reconstruction: the agents exchange their shares to recover the secret
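As a concrete illustration of this reminder, here is a minimal Shamir-style threshold scheme (2-of-3 over a prime field); it is a generic textbook construction, not necessarily the scheme depicted in the figure or used by Halpern and Teague.

```python
import random

P = 2_147_483_647   # a Mersenne prime; all arithmetic is modulo P

def make_shares(secret, threshold=2, n=3, rng=random):
    """Embed the secret as f(0) of a random polynomial of degree threshold-1; share i is (i, f(i))."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(threshold - 1)]
    f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 from at least `threshold` shares."""
    secret = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        secret = (secret + yj * num * pow(den, P - 2, P)) % P   # den^{-1} via Fermat's little theorem
    return secret

shares = make_shares(123456789)
print(reconstruct(shares[:2]), reconstruct(shares[1:]))   # any two shares recover 123456789
```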

  12. The temptation of selfishness in secret sharing (figure: Agent 1 keeps S1 and receives S2 and S3 from the other agents, but withholds its own share) • Agent 1 can reconstruct the secret • Neither Agent 2 nor Agent 3 can • Model as a game: • Player = agent • Strategy: to deliver or not one’s share (depending on what the other players did) • Payoff function: • a player prefers getting the secret • a player prefers that fewer of the other players get it • Impossibility result: there is no practical mechanism that would work • Proposed solution: randomized mechanism • J. Halpern and V. Teague, “Rational Secret Sharing and Multi-Party Computation,” STOC 2004.
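The selfish temptation can be made numeric with an assumed encoding of the stated preferences (getting the secret dominates everything else; each additional agent who also gets it costs a little), here under a 3-of-3 reconstruction assumption: if the other two agents deliver their shares, withholding is strictly better for Agent 1, which is exactly the deviation behind the impossibility result.

```python
def payoff(i_get_secret, others_who_get_it):
    # Assumed utilities encoding the slide's preference order:
    # first, get the secret; second, prefer that fewer others get it.
    return (10 if i_get_secret else 0) - others_who_get_it

def outcome_for_agent1(agent1_delivers, threshold=3):
    """Agents 2 and 3 deliver their shares; Agent 1 may or may not (3-of-3 reconstruction assumed)."""
    shares_seen_by_1 = 3                                    # own share + the two delivered ones
    shares_seen_by_others = 2 + (1 if agent1_delivers else 0)
    i_get = shares_seen_by_1 >= threshold
    others_get = 2 if shares_seen_by_others >= threshold else 0
    return payoff(i_get, others_get)

print("deliver :", outcome_for_agent1(True))    # 10 - 2 = 8
print("withhold:", outcome_for_agent1(False))   # 10 - 0 = 10, so withholding is strictly better
```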

  13. Intrusion Detection Systems (figure: an attacker facing Subsystems 1, 2, and 3) Players: Attacker and IDS. Strategies for the attacker: which subsystem(s) to attack. Strategies for the defender: how to distribute the defense mechanisms. Payoff functions: based on the value of the subsystems + protection effort. T. Alpcan and T. Basar, “A Game Theoretic Approach to Decision and Analysis in Network Intrusion Detection,” IEEE CDC 2003.

  14. Two detailed examples • Revocation Games in Ephemeral Networks [CCS 2008] • On Non-Cooperative Location Privacy: A Game-Theoretic Analysis [CCS 2009]

  15. Misbehavior in Ad Hoc Networks (figure: nodes A and B communicating, with a misbehaving node M) • Traditional ad hoc networks: packet forwarding, routing • Ephemeral networks: large scale, high mobility, data dissemination Solution to misbehavior: reputation systems?

  16. Reputation vs. Local Revocation • Reputation systems: • Often coupled with routing/forwarding • Require long-term monitoring • Keep the misbehaving nodes in the system • Local Revocation • Fast and clear-cut reaction to misbehavior • Reported to the credential issuer • Can be repudiated

  17. Tools of the Revocation Trade • Wait for: • Credential expiration • Central revocation • Vote with: • Fixed number of votes • Fixed fraction of nodes (e.g., majority) • Suicide: • Both the accusing and accused nodes are revoked Which tool to use?

  18. How much does it cost? • Nodes are selfish • Revocation costs • Attacks cause damage How to avoid the free-rider problem? Game theory can help: it models situations where the decisions of players affect each other

  19. Example: VANET • CA pre-establishes credentials offline • Each node has multiple changing pseudonyms • Pseudonyms are costly • Fraction of detectors

  20. Revocation Game • Key principle: Revoke only costly attackers • Strategies: • Abstain (A) • Vote (V): a given number of votes is needed • Self-sacrifice (S) • Players: benign nodes (including detectors) and attackers • Dynamic (sequential) game

  21. Game with fixed costs (figure: game tree in which players 1, 2, 3 choose in turn among A, S, and V) A: Abstain, S: Self-sacrifice, V: Vote. Costs: cost of abstaining, cost of self-sacrifice, cost of voting; all costs are in keys/message.

  22. Game with fixed costs: Example 1 (figure: the game tree of slide 21, solved by backward induction; the equilibrium path is highlighted) Assumption: c > 1

  23. Game with fixed costs: Example 2 (figure: the same game tree, with a different equilibrium path highlighted) Assumptions: v < c < 1, n = 2

  24. Game with fixed costs: Equilibrium Theorem 1: For any given values of ni, nr, v, and c, the strategy of player i that results in a subgame-perfect equilibrium is [given on the slide]. ni = number of remaining nodes that can participate in the game; nr = number of remaining votes required to revoke. Consequence: revocation is left to the end, which doesn’t work in practice.
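The "revocation is left to the end" effect can be reproduced with a rough backward-induction sketch of a fixed-cost revocation game in the spirit of these slides; the payoff model below is an assumption made for illustration (abstaining is free unless the attacker survives the whole game, in which case every benign player pays the attack cost c; a vote costs v; self-sacrifice costs 1 and revokes immediately; nr votes also revoke), not the exact model of the CCS 2008 paper.

```python
from functools import lru_cache

# Illustrative parameters: voting cost v, self-sacrifice cost, attack damage c (with c > 1, as in Example 1).
V_COST, S_COST, ATTACK_COST = 0.2, 1.0, 1.5
N_PLAYERS, VOTES_NEEDED = 3, 2            # ni and nr in the slides' notation

@lru_cache(maxsize=None)
def solve(player, votes):
    """Backward induction: (actions, costs, revoked) for players `player`..last, given votes cast so far."""
    if player == N_PLAYERS:
        return (), (), False              # nobody left to act; the attacker survives
    best = None
    for action, act_cost, new_votes, revokes_now in [
            ("A", 0.0,    votes,     False),
            ("V", V_COST, votes + 1, False),
            ("S", S_COST, votes,     True)]:
        revoked = revokes_now or new_votes >= VOTES_NEEDED
        if revoked:
            k = N_PLAYERS - player - 1
            tail_actions, tail_costs = ("-",) * k, (0.0,) * k     # game over, later players do nothing
        else:
            tail_actions, tail_costs, revoked = solve(player + 1, new_votes)
        own = act_cost + (0.0 if revoked else ATTACK_COST)        # pay c if the attacker is never revoked
        cand = ((action,) + tail_actions, (own,) + tail_costs, revoked)
        if best is None or cand[1][0] < best[1][0]:               # each player minimizes own cost;
            best = cand                                           # ties broken in favor of A, then V, then S
    return best

print(solve(0, 0))   # with these numbers: actions ('A', 'A', 'S'), everyone free-rides until the last player
```

With these illustrative costs, the subgame-perfect outcome is that the first players abstain and the last one is forced to revoke, matching the slide's conclusion that revocation is pushed to the end of the game.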

  25. Game with variable costs (figure: the same game tree, but the cost of abstaining now grows with the number of stages the attack lasts and with the attack damage)

  26. Game with variable costs: Equilibrium Theorem 2: For any given values of ni, nr, v, and δ, the strategy of player i that results in a subgame-perfect equilibrium is [given on the slide]. Consequence: revocation has to be quick.

  27. Optimal number of voters • Minimize: abuse by attackers and the duration of the attack

  28. Optimal number of voters • Minimize: abuse by attackers and the duration of the attack, taking into account the fraction of active players

  29. RevoGame: estimation of the game parameters, then choice of the strategy

  30. Evaluation • TraNS, ns2, Google Earth, Manhattan • 303 vehicles, average speed = 50 km/h • Fraction of detectors • Damage/stage • Cost of voting • False positives • 50 runs, 95% confidence intervals

  31. Revoked attackers

  32. Revoked benign nodes

  33. Maximum time to revocation

  34. Conclusion • Local revocation is a viable mechanism for handling misbehavior in ephemeral networks • The choice of revocation strategies should depend on their costs • RevoGame achieves the elusive tradeoff between different strategies

  35. Two detailed examples • Revocation Games in Ephemeral Networks [CCS 2008] • On Non-Cooperative Location Privacy: A Game-Theoretic Analysis [CCS 2009]

  36. Pervasive Wireless Networks: vehicular networks, mobile social networks, human sensors, personal WiFi bubble

  37. Peer-to-Peer Communications (figure: two nearby WiFi/Bluetooth-enabled devices, 1 and 2, exchange messages of the form Identifier, Message, Signature || Certificate)

  38. Location Privacy Problem A passive adversary monitors the identifiers used in peer-to-peer communications (figure: the daily trace of node 1; 10h00: Millennium Park, 11h00: Art Institute, 13h00: Lunch)

  39. Previous Work • Pseudonymity is not enough for location privacy [1, 2] • Removing pseudonyms is not enough either [3]: spatio-temporal correlation of traces [1] P. Golle and K. Partridge. On the Anonymity of Home/Work Location Pairs. Pervasive Computing, 2009 [2] B. Hoh et al. Enhancing Security & Privacy in Traffic Monitoring Systems. Pervasive Computing, 2006 [3] B. Hoh and M. Gruteser. Protecting location privacy through path confusion. SECURECOMM, 2005

  40. Location Privacy with Mix Zones • Spatial decorrelation: remain silent • Temporal decorrelation: change pseudonym (figure: nodes 1 and 2 enter a mix zone and leave it under new pseudonyms x and y, unlinkable to the adversary) Why should a node participate? [1] A. Beresford and F. Stajano. Mix Zones: user privacy in location aware services. Percom, 2004

  41. Mix Zone Privacy Gain (figure and formula omitted: the privacy gain obtained at time t = T depends on the number of nodes in the mix zone)

  42. Cost caused by Mix Zones • Turn off the transceiver • Routing becomes difficult • Need for new authenticated pseudonyms (the total cost is the sum of these three components)

  43. Problem: tension between the cost and the benefit of mix zones. When should nodes change their pseudonym?

  44. Method Rational behavior: selfish optimization in security protocols and multi-party computations • Game theory • Evaluate strategies • Predict evolution of security/privacy • Examples • Cryptography • Revocation • Privacy mechanisms

  45. Outline • User-centric Model • Pseudonym Change Game • Results

  46. Mix Zone Establishment • In pre-determined regions [1] • Dynamically [2] • Distributed protocol [1] A. Beresford and F. Stajano. Mix Zones: user privacy in location aware services. PercomW, 2004 [2] M. Li et al. Swing and Swap: User-centric approaches towards maximizing location privacy. WPES, 2006

  47. User-Centric Location Privacy Model Privacy = Ai(T) – PrivacyLoss (figure: the privacy level rises to Ai(T1), Ai(T2), … at each pseudonym change and then decreases over time, until the node becomes traceable)
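A toy sketch of a privacy function with this shape is given below; the linear loss rate and the numbers are assumptions for illustration only, not the model of the CCS 2009 paper.

```python
def privacy_level(t, last_change_time, gain_at_change, loss_rate=0.5):
    """Privacy acquired at the last pseudonym change, eroded linearly until it reaches 0 (traceable)."""
    return max(0.0, gain_at_change - loss_rate * (t - last_change_time))

# A node changes pseudonym at t = 2 (gain 3 bits) and again at t = 10 (gain 2 bits).
changes = [(2, 3.0), (10, 2.0)]
for t in range(15):
    past = [c for c in changes if c[0] <= t]
    level = privacy_level(t, *past[-1]) if past else 0.0
    print(f"t={t:2d}  privacy={level:.1f}")
```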

  48. Pros/Cons of the User-Centric Model • Pro • Control when/where to protect your privacy • Con • Misaligned incentives

  49. Outline • User-centric Model • Pseudonym Change Game • Results

  50. Pseudonym Change Game: Assumptions • Simultaneous decisions • Players want to maximize their payoff • Consider the privacy upper bound Ai(T) = log2(n(t))
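Under these assumptions, a node's decision can be sketched as the simple trade-off below; the pseudonym cost and the decision rule are assumptions made here for illustration, with only the upper bound log2(n(t)) taken from the slide.

```python
import math

def change_payoff(n_cooperating, pseudonym_cost):
    """Upper-bound privacy gain log2(n) from mixing with n nodes, minus the cost of a new pseudonym."""
    return math.log2(n_cooperating) - pseudonym_cost

def should_change(n_cooperating, current_privacy, pseudonym_cost=1.0):
    """Change pseudonym only if the net gain beats keeping the current (decaying) privacy level."""
    return change_payoff(n_cooperating, pseudonym_cost) > current_privacy

print(should_change(n_cooperating=4, current_privacy=0.5))   # True:  log2(4) - 1 = 1.0 > 0.5
print(should_change(n_cooperating=2, current_privacy=0.5))   # False: log2(2) - 1 = 0.0 < 0.5
```

The misaligned incentives mentioned earlier show up directly here: a node whose current privacy is still high has no reason to pay for a pseudonym change, even if others would benefit from its participation.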
