
Research Direction Introduction



Presentation Transcript


  1. Research Direction Introduction Advisor: Frank Yeong-Sung Lin Presented by Hui-Yu Chung

  2. Agenda • Paper review • Contest success function • Worm Characteristics • Worm propagation • Problem descriptions • Defender attributes • Attacker attributes • Attack-defense scenarios

  3. Contest success function (CSF) • The idea of the CSF came from the problem of “rent-seeking” in the field of economics • Which refers to efforts to capture special monopoly privileges • The phenomenon of rent-seeking in connection with monopolies was first formally identified in 1967 by Gordon Tullock • The CSF identifies the probability that a certain party wins the privilege Tullock, Gordon (1967). "The Welfare Costs of Tariffs, Monopolies, and Theft". Western Economic Journal 5(3): 224–232

  4. Contest success function (CSF) • For 2 players in Tullock’s basic model • Original form (ratio form) [equation on slide] • Since p1 + p2 = 1, the original form can be transformed [equation on slide] • In our scenario, the CSF is transformed as follows [equation on slide]
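The ratio-form CSF on this slide can be sketched as a small function. Here `T` and `t` denote the two parties' efforts and `m` the contest intensity; these symbol names follow later slides, but the function itself is a sketch of Tullock's standard ratio form, not code from the presentation:

```python
def csf(T, t, m):
    """Tullock ratio-form contest success function.

    Returns the probability that the party investing T wins
    against an opponent investing t, at contest intensity m.
    """
    if T == 0 and t == 0:
        return 0.5  # no effort on either side: a toss-up by convention
    return T ** m / (T ** m + t ** m)
```

By construction the two win probabilities sum to 1, which is exactly the property the slide uses to rewrite the original form.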

  5. About contest intensity • Contest intensity m • m = 0 → Random • The efforts have equal impact on the vulnerability regardless of their size • 0 < m < 1 → Fighting to win or die • Disproportional advantage of investing less than one’s opponent • m = 1 → Normal case • The investments have a proportional impact on the vulnerability

  6. About contest intensity • Contest intensity m • m > 1 → God is on the side of the larger battalions • Disproportional advantage of investing more than one’s opponent • m = ∞ → Like an auction • A step function where the winner takes all • The most popular versions of the Tullock CSF are the lottery (m = 1) and the all-pay auction (m = ∞) Jack Hirshleifer, "Conflict and rent-seeking success functions: Ratio vs. difference models of relative success," Public Choice 63, 1989, pp. 101–112 Jack Hirshleifer, "The Paradox of Power," Economics and Politics, Volume 3, November 1993, pp. 177–200
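The effect of m described on these two slides can be checked numerically. A minimal sketch (the function restates the ratio-form CSF; the effort values 10 vs. 1 are illustrative):

```python
def win_prob(x, y, m):
    """Ratio-form CSF: probability that the x-side wins at contest intensity m."""
    return x ** m / (x ** m + y ** m)

# m = 0: efforts are irrelevant -- a pure coin flip, however lopsided the investments.
print(win_prob(10, 1, 0))    # 0.5
# m = 1: proportional impact (the "lottery" case).
print(win_prob(10, 1, 1))    # ~0.909
# m >> 1: approaches winner-takes-all (the all-pay auction limit).
print(win_prob(10, 1, 50))   # ~1.0
```

The larger investor's advantage grows from none (m = 0) through proportional (m = 1) to near-certain victory as m → ∞.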

  7. About contest intensity • The result comes from “Lanchester’s laws” • Which were used to calculate the relative strengths of a predator/prey pair by Frederick Lanchester in 1916, during the height of World War I • Lanchester’s Linear Law • For ancient combat, in which one man could only ever fight exactly one other man at a time • Lanchester’s Square Law • For modern combat with long-range weapons such as firearms

  8. About contest intensity [figure: inflection point]

  9. Worm Characteristics • Information collection • Collects information about the local or target network • Probing • Scans and detects the vulnerabilities of the specified host; determines which approach should be taken to attack and penetrate • Communication • Communicates between worm and hacker or among worms • Attack • Makes use of the holes found by the scanning techniques to create a propagation path • Self-propagating • Creates copies of the worm and transfers these copies among different hosts

  10. Worm propagation model • Classical epidemic model • Does not consider any countermeasures • Used to analyze complicated scenarios Su Fei, Lin Zhaowen, Ma Yan, "A survey of internet worm propagation models," Proc. IC-BNMT 2009, pp. 453–457 Stefan Misslinger, "Internet worm propagation," Department for Computer Science, Technische Universität München

  11. Worm propagation model • Kermack–McKendrick model (SIR model) • Takes the removal process into consideration • susceptible → infectious → removed • But doesn’t take network congestion into account [figure annotation: # of infectious hosts including removed hosts]
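A minimal Euler-step sketch of the Kermack–McKendrick (SIR) dynamics described above; the parameter values (population size, infection rate `beta`, removal rate `gamma`) are illustrative assumptions, not figures from the slides:

```python
def simulate_sir(n=100_000, i0=10, beta=0.8, gamma=0.2, dt=0.01, steps=10_000):
    """Integrate dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - gamma*I, dR/dt = gamma*I."""
    s, i, r = float(n - i0), float(i0), 0.0
    for _ in range(steps):
        new_inf = beta * s * i / n * dt   # susceptible -> infectious
        new_rem = gamma * i * dt          # infectious -> removed (the removal process)
        s -= new_inf
        i += new_inf - new_rem
        r += new_rem
    return s, i, r
```

The total population S + I + R stays constant, and with beta > gamma the infection spreads until most hosts end up removed; note the model indeed ignores network congestion, as the slide points out.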

  12. Worm propagation model • Two-factor model • Takes human countermeasures and network countermeasures into account • Increasing removal rate • Decreasing infection rate • A more accurate model [figure annotations: # of hosts removed from infectious hosts; people’s awareness of the worm; # of hosts removed from susceptible hosts]
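The two factors can be sketched on top of the SIR scheme: a quarantine flow Q removes susceptible hosts as people's awareness (proportional to how many hosts have been hit) grows, and the infection rate decays as congestion builds. The equations in the docstring follow the form commonly attributed to Zou et al.'s two-factor model, restated here from memory, so treat both the structure and the parameter values as assumptions:

```python
def simulate_two_factor(n=100_000, i0=10, beta0=0.8, gamma=0.05,
                        mu=1e-6, eta=2, dt=0.01, steps=10_000):
    """Two-factor worm model (sketch):
      dS/dt = -beta(t)*S*I/N - mu*S*J     J = I + R drives people's awareness
      dI/dt =  beta(t)*S*I/N - gamma*I
      dR/dt =  gamma*I                    removal from infectious hosts
      dQ/dt =  mu*S*J                     removal from susceptible hosts
      beta(t) = beta0 * (1 - I/N)**eta    network congestion slows the spread
    """
    s, i, r, q = float(n - i0), float(i0), 0.0, 0.0
    for _ in range(steps):
        beta = beta0 * (1 - i / n) ** eta
        j = i + r
        inf = beta * s * i / n * dt
        rem_i = gamma * i * dt
        rem_s = mu * s * j * dt
        s -= inf + rem_s
        i += inf - rem_i
        r += rem_i
        q += rem_s
    return s, i, r, q
```

Compared with plain SIR, hosts now also leave the susceptible pool directly (Q grows), which is why the model fits observed worm traces more closely.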

  13. Worm propagation time • Two-factor fit (Code Red worm in July 2001) • Takes both I → R and S → R into account • Decreased infection rate • About 120,000 hosts were infected in 8 hours Cliff Changchun Zou, Weibo Gong, Don Towsley, "Code Red Worm Propagation Modeling and Analysis"

  14. Node compromise time • Uses a state-space predator model as the attack model to estimate the MTTC (Mean Time-to-Compromise) of the system • Three levels of attacker capability • Beginner • Intermediate attacker • Expert attacker David John Leversage, Eric James, "Estimating a System’s Mean Time-to-Compromise," IEEE Security & Privacy, Volume 6, Number 1, pp. 52–60, January/February 2008

  15. Node compromise time • Divide the attacker’s actions into three statistical processes • Process 1 – The attacker has identified one or more known vulnerabilities and has one or more exploits on hand • Process 2 – The attacker has identified one or more known vulnerabilities but doesn’t have an exploit on hand • Process 3 – No known vulnerabilities or exploits are available • Mean time-to-compromise

  16. Node compromise time • Time-to-compromise • t1, t2, t3: expected mean times of processes 1, 2, 3 • P1: prob. of finding a vulnerability • u: probability of failing to find an exploit • t1 is hypothesized to be 1 working day (8 hrs) • t2 is hypothesized to be 5.8 * (expected tries) working days • t3 = ((1/s) - 0.5) * 30.42 + 5.8 days, where s = AM/V
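The three processes are combined into one mean; the combining formula MTTC = t1·P1 + t2·(1−P1)·(1−u) + t3·u·(1−P1) is restated here from the cited time-to-compromise literature, so treat it as an assumption. A sketch using the hypothesized times from this slide, with P1, u, and the expected number of tries taken as inputs:

```python
def mttc(p1, u, expected_tries, skill_s):
    """Mean time-to-compromise in working days (sketch).

    p1:             prob. the attacker is in process 1 (exploit on hand)
    u:              prob. process 2 fails, pushing the attacker into process 3
    expected_tries: expected number of exploit attempts in process 2 (ET)
    skill_s:        skill indicator s = AM / V
    """
    t1 = 1.0                                     # 1 working day (8 hrs)
    t2 = 5.8 * expected_tries                    # working days
    t3 = ((1.0 / skill_s) - 0.5) * 30.42 + 5.8   # days
    return t1 * p1 + t2 * (1 - p1) * (1 - u) + t3 * u * (1 - p1)
```

For example, an attacker who always starts with an exploit on hand (p1 = 1) compromises a node in t1 = 1 working day, while one who always lands in process 3 is dominated by the t3 term.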

  17. Node compromise time • Estimated number of tries, ET • AM: avg # of vulnerabilities for which an exploit can be found or created by an attacker of the given skill level • V: avg # of vulnerabilities per node within a zone • NM: the # of vulnerabilities an attacker with the given skill won’t be able to use • NM = V - AM • Expected avg time needed in process 2: ET * 5.8 working days

  18. Node compromise time • Skill indicator s = AM/V • Prob. that the attacker is in process 1: • M: # of exploits readily available to the attacker • K: total # of non-duplicate vulnerabilities • Prob. that process 2 is unsuccessful

  19. Node compromise time • Results (measured in working days)

  20. Agenda • Paper review • Contest success function • Worm Characteristics • Worm propagation • Problem descriptions • Defender attributes • Attacker attributes • Attack-defense scenarios

  21. Attack-Defense scenario • Collaborative attack • One commander who has a group of attackers • Different attackers have different attributes • Budget, capability • The commander has to decide his attack strategy at every round • e.g., # of attackers, resources used • Once the strategy is given, all the attackers will execute the attack simultaneously

  22. Defender attributes • Objective • Protect provided services • Budget • General defense resources (e.g., firewall, IDS) • Worm profile distribution mechanisms • Worm source identification methods

  23. Defender attributes • General defense mechanisms • Defense resource on each node • Dynamic topology reconfiguration • If the QoS is not satisfied, the disconnected link must be reconnected • Worm defense mechanisms • Decentralized information sharing system • Unknown worm detection & profile distribution • Worm origin identification • Rate limiting • To slow down worm propagation • Firewall reconfiguration • May decrease QoS at the same time

  24. Defender attributes • Fixed defense resource • General defense resource on each node • Detection system on specific nodes • Dynamic defense resource • Generating worm signatures • Without expending budget • Worm origin identification • Rate limiting • Firewall reconfiguration • Dynamic topology reconfiguration

  25. Attacker attributes • Objective • To decrease the QoS of the defender • To steal information (by attacking some specific nodes) • Budget • Preparing Phase: worm injection • Attacking Phase: node compromising

  26. Attacker attributes • Attack mechanisms • Compromising nodes • The goal is to finally compromise core nodes, which reduces the QoS of those core nodes to below a certain level or steals sensitive information • Worm injection • The purpose is to get further topology information • After a node is compromised, the commander will decide whether to inject worms

  27. Attacker attributes • Process [flowchart]

  28. Compromising nodes • How to select the attackers? • The commander has to select attackers who have enough attack resource • The resource required is computed via the contest success function • During the decision phase, all the commander has to do is find the interval of the table whose defense-resource values are near the defense resource on that node • After every round, the table is updated with the new resources owned by the selected attackers

  29. How to select the attackers? • A corresponding defense resource table is created right after the defender has constructed his network topology • The value of an attacker’s resource T is computed from the budget and attack time of that attacker • Attack power • Aggressiveness • The value of the defense resource t is the defense resource on a node in the network • The table is sorted in ascending order of t

  30. How to select the attackers? • The budget, capability, and aggressiveness of the attackers are predetermined • The value of the contest intensity m is given
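The table lookup described on these slides can be sketched as follows. The sketch is hypothetical: it inverts the ratio-form CSF for a target win probability `p` to get the minimum attacker resource, then bisects into a sorted list of available attacker resources; the presentation does not specify this exact procedure:

```python
import bisect

def required_attack_resource(t, m, p=0.5):
    """Minimum attacker resource T such that T**m / (T**m + t**m) >= p."""
    return t * (p / (1 - p)) ** (1 / m)

def select_attacker(attackers, node_defense, m, p=0.5):
    """Pick the cheapest attacker whose resource meets the requirement.

    attackers: list of available attacker resources T, sorted ascending
               (mirroring the table sorted in ascending order of t).
    """
    need = required_attack_resource(node_defense, m, p)
    idx = bisect.bisect_left(attackers, need)
    return attackers[idx] if idx < len(attackers) else None
```

Choosing the smallest sufficient T conserves the commander's budget; if no attacker qualifies, the function returns None and the commander must change strategy.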

  31. Aggressiveness • High Aggressiveness (Risk avoidance) • Often used to compromise nodes • Before worm injection • Higher when approaching core nodes • Low Aggressiveness (Risk tolerance) • Used to pretend to attack • Ex. To lower the risk level of certain core node

  32. Worm injection • Used to get more topology information behind nodes before compromising them • After compromising a node, the attacker can decide whether to inject a worm into it • A node with a high link degree is often chosen for worm injection • Worm immunity • Once a worm is detected by the defender, the defender may deploy some defense mechanism to become immune to it • In that case, the attacker has to inject another type of worm to get new information • Different types of worms • Scanning method, propagation rate, capability
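The "high link degree" heuristic for choosing an injection target can be sketched over an adjacency-list topology. The node names and topology below are hypothetical, purely for illustration:

```python
def best_injection_target(topology, compromised):
    """Among the compromised nodes, pick the one with the most neighbors."""
    candidates = [n for n in compromised if n in topology]
    return max(candidates, key=lambda n: len(topology[n]), default=None)

# Hypothetical adjacency list: node -> list of neighboring nodes.
topology = {
    "A": ["B", "C"],
    "C": ["A", "F", "K", "B"],
    "D": ["G"],
}
print(best_injection_target(topology, {"A", "C", "D"}))  # C (degree 4)
```

A high-degree node gives the worm more propagation paths, so each injection reveals more of the topology behind the compromised perimeter.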

  33. Termination condition [flowchart]

  34. Scenarios [network diagram: AS nodes A–T, core AS nodes, firewalls, decentralized information sharing system]

  35. Scenarios • One attacker compromises node A

  36. Scenarios • Two attackers compromise nodes C & D

  37. Scenarios • Inject a Type I worm into node C

  38. Scenarios • Self-propagation of the worm

  39. Scenarios • Two attackers compromise nodes I & F

  40. Scenarios • Detection alarm raised

  41. Scenarios • Two attackers compromise nodes N & J

  42. Scenarios • Inject Type II worms into nodes N and J

  43. Scenarios • [diagram: Type I and Type II worms active in the topology]

  44. Scenarios • Defender responses: worm origin identification, dynamic topology reconfiguration, rate limiting, firewall reconfiguration

  45. Scenarios • Two attackers compromise nodes Q & P

  46. Scenarios • Dynamic topology reconfiguration: reconnect to satisfy QoS

  47. Scenarios • One attacker compromises node O

  48. Scenarios • Two attackers compromise core nodes R & S

  49. ~Thanks for your attention~
