
Where Do All the Attacks Go?


Presentation Transcript


  1. Where Do All the Attacks Go? Dinei Florencio and Cormac Herley Microsoft Research, Redmond

  2. Why isn’t everyone hacked every day? • Webroot Survey: • 90% share passwords across accounts • 41% share passwords with others • 20% use a pet’s name as password • An endless stream of new attacks every year • E.g. reading LCD screens from reflections, etc. • If things are so bad, how come they’re so good?

  3. Traditional Threat Model • Alice is a user • Charles attacks • Phishing, keyloggers, guessing, password re-use • Malware, rootkits • Physical side-channels, … • Security as good as weakest link [Diagram: several attackers (Charles) directing attacks at Alice]

  4. Problems with the threat model • It doesn’t scale numerically (2 billion users) • Even at a 1000:1 ratio (i.e. 2 million attackers) • Attackers would be only 1/3 as numerous as software developers • A US undergrad gets 50x more attention from professors than Alice gets from Charles • The idea that someone identifies and exploits each weakest link does not scale • It fails to explain the observations • 20% choose their dog’s name as password • Avoiding Harm ≠ Security

  5. A Threat Model that Scales • A population of Internet users, Alice(i) • A population of attackers, Charles(j) • An attacker doesn’t know you from a honeypot • Attack when Expected{Gain} > Expected{Cost} [Diagram: the attacker population directing attacks at the user population]

  6. Attacks • Alice(i) exerts effort ei(k) against Attack(k) • Probability she succumbs: Pr{ei(k)} • Pr{ei(k)} monotonically decreasing with effort • Gain to Charles(j) from Alice(i): Gi • Cost for Attack(k), N users: Cj(N,k) [Plots: Pr{ei(k)} decreasing in ei(k); cost Cj(N,k) increasing in # users N]

  7. Charles(j) Expected Return Uj(k) • Charles(j)’s expected gain from Attack(k) on N users: Uj(k) = (1 − Pr{SP}) Σi Pr{ei(k)} Gi − Cj(N,k) • Pr{SP}: probability the fraud is detected by the Service Provider • Pr{ei(k)}: probability Alice(i) succumbs • Gi: gain from Alice(i) • Cj(N,k): cost of Attack(k) for N users • Charles(j) selects the Attack(k) that maximizes Uj(k)
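A minimal Python sketch of this expected-return calculation, not part of the original slides: the exponential succumb_prob function and every numeric value below are illustrative assumptions, not values from the talk.

```python
import math

def succumb_prob(effort):
    """Pr{ei(k)}: probability Alice(i) succumbs, decreasing in her effort (assumed exponential)."""
    return math.exp(-effort)

def expected_return(efforts, gain_per_user, cost, pr_detect):
    """Uj(k) = (1 - Pr{SP}) * sum_i Pr{ei(k)} * Gi - Cj(N, k)."""
    expected_gain = sum(succumb_prob(e) * gain_per_user for e in efforts)
    return (1.0 - pr_detect) * expected_gain - cost

# Illustrative numbers: 1,000,000 attacked users, $50 gain per compromised
# account, $10,000 to mount the attack, 90% fraud detection at the provider.
efforts = [2.0] * 1_000_000
print(expected_return(efforts, gain_per_user=50.0, cost=10_000.0, pr_detect=0.9))
```

Charles(j) mounts the attack only when this quantity is positive, and picks the k that maximizes it.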

  8. Sum-of-efforts Defense (1 − Pr{SP}) Σi Pr{ei(k)} Gi − Cj(N,k) • The Σi term is a sum, over all attacked users, of their gain-weighted efforts against Attack(k) • Recall that as ei(k) increases, Pr{ei(k)} decreases • So increasing effort from users decreases the attacker’s return

  9. Followed by Best-Shot Defense (1 − Pr{SP}) Σi Pr{ei(k)} Gi − Cj(N,k) • Fraud detection at the Service Provider: Charles(j) must evade all detection measures

  10. So, where do all the attacks go?

  11. Average Success Rate Too Low • Attack unprofitable if: (1 − Pr{SP}) Σi Pr{ei(k)} Gi < Cj(N,k) • If the average success rate (1/N) Σi Pr{ei(k)} is too low, the whole attack is unprofitable • Even if many profitable targets exist • Similarly if the average value is too low, i.e. Gi small
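A short sketch of this profitability cut-off, assuming for simplicity that every account has the same value G; all numbers are made up to show how a tiny average success rate sinks an otherwise large pool of targets.

```python
# The attack is worth mounting only when
# (1 - Pr{SP}) * N * avg_success * G  >  Cj(N, k).

def attack_profitable(n_users, avg_success, gain, cost, pr_detect):
    return (1.0 - pr_detect) * n_users * avg_success * gain > cost

# Even with 10 million targets and $100 per compromised account, a
# one-in-a-million average success rate does not cover a $5,000 attack cost.
print(attack_profitable(10_000_000, avg_success=1e-6, gain=100.0,
                        cost=5_000.0, pr_detect=0.5))  # False
```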

  12. Attackers Collide Too Often • Recall attackers compete for vulnerable users • Suppose Attack(k) has a deterministic outcome: Pr{ei(k)} = 1 if ei(k) < ε, 0 otherwise • Example: brute-force using the 10 most popular passwords • abcdef, password, 123456, password1, etc. • Every attacker who tries succeeds in the same places • If ei(k) < ε, Alice(i) ends up with M attackers in her account • In general she shares Gi with M·Pr{ei(k)} other attackers
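A rough sketch of the collision effect, with the (assumed) convention that colliding attackers split Gi evenly; the account value and attacker counts are illustrative.

```python
# If Attack(k) is (near-)deterministic, every attacker running it succeeds
# against the same users, so each expects to share Gi with the others.

def expected_take(gain, succumb_prob, n_other_attackers):
    expected_colliders = n_other_attackers * succumb_prob  # M * Pr{ei(k)}
    return (succumb_prob * gain) / (1.0 + expected_colliders)

# A $1,000 account guarded by one of the 10 most popular passwords:
# with 100 other attackers running the same list, the per-attacker take collapses.
print(expected_take(1_000.0, succumb_prob=1.0, n_other_attackers=100))  # ~$9.90
```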

  13. Attack(k) too expensive (relative to alternatives) • If Attack(k’) is cheaper, then Uj(k) < Uj(k’) for all attackers • Example: real-time MITM vs. password stealing

  14. Fraud Detection Too High (1 − Pr{SP}) Σi Pr{ei(k)} Gi − Cj(N,k) • As Pr{SP} → 1, the return → 0 • Example: Alice(i)’s bank detects 99% of attempted fraud • The true protection is the bank’s detection, not Alice(i)’s effort

  15. The Free-Rider Effect • Suppose brute-forcing is a profitable attack • All but one of the Internet’s users (finally) decide to get serious and choose strong passwords • Alice(i0) continues with “abcdef” • The profitability of brute-forcing plummets • Alice(i0)’s risk of harm → 0 (without any action on her part)

  16. Choosing Your Dog’s Name as Password • User chooses bank password = dog’s name • Easy money, right? • How many users have… • Bank password = dog’s name? Say, 1% • Auto-discover dog’s name? Say, 1% • Auto-discover userID? Say, 1% • How many other Charles(j) use the strategy? Say, 100 • Return is reduced by a factor of 10^8
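The slide’s own back-of-envelope factors, multiplied out in a few lines of Python to show where the 10^8 reduction comes from.

```python
# Each "say, 1%" factor and the 100-way competition multiply together.

p_pwd_is_dog_name = 0.01   # bank password equals the dog's name
p_find_dog_name   = 0.01   # dog's name discoverable automatically
p_find_userid     = 0.01   # bank userID discoverable automatically
n_competitors     = 100    # other attackers using the same strategy

reduction = (p_pwd_is_dog_name * p_find_dog_name * p_find_userid) / n_competitors
print(reduction)           # 1e-08: the return is reduced by a factor of 10^8
```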

  17. Dog’s Name as Password • Suppose instead: • 10 mins to discover the dog’s name • 10 mins to discover the userID • Thus 20 mins per attempt, on average, yielding 1% of accts • Compete with 10 other attackers • Bank catches 90% of attempted fraud • At $7.25/hour the acct should be worth Gi > (10 × 10 × 100/3) × 7.25 ≈ $24,200 • Suppose he makes (US min wage)/10 • Needs: Gi > $2,420/acct • Exercise: find profitable assumptions
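The same calculation spelled out step by step; the only inputs are the slide’s assumptions (20 minutes per attempt, 1% success, 10 competitors, 90% fraud detection, $7.25/hour).

```python
# Hours of work per compromised account, then the account value Gi needed
# to earn US minimum wage, given the bank's detection rate and the
# competition from other attackers.

minutes_per_attempt = 20     # 10 min for the dog's name + 10 min for the userID
success_rate = 0.01          # 1% of attempts yield an account
n_competitors = 10           # attackers sharing the same victims
fraud_caught = 0.90          # bank blocks 90% of attempted fraud
wage = 7.25                  # target earnings, $/hour

hours_per_account = minutes_per_attempt / success_rate / 60          # ~33.3 hours
required_gain = hours_per_account * wage * n_competitors / (1 - fraud_caught)
print(round(required_gain))  # 24167, i.e. roughly $24,200 per account
```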

  18. Domino Effect of Acct. Escalation • Leveraging low-value accts to high • Password re-use across accts, etc. “One weak spot is all it takes to open secured digital doors and online accounts causing untold damage and consequences.” Ives et al. 2004

  19. Leverage Low-Value Account To High? • Is this profitable on average? • Given N webmails… • X% are the contact email for a bank • Y% of userIDs can be determined automatically • Z% of banks email a pwd reset link • W% of Secret Questions can be auto-determined • The return is dramatically reduced. For example: • 0.1 × 0.01 × 0.1 × 0.05 = 0.000005 (1 in 200,000) • So 5 bank accts for every million webmails
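The escalation chain multiplied out, using the slide’s example percentages.

```python
# The chain of conditions needed to turn one webmail account into a bank
# account; the percentages are the slide's illustrative figures.

p_bank_contact = 0.10   # X%: webmail is the contact address for a bank
p_userid_auto  = 0.01   # Y%: bank userID determinable automatically
p_reset_link   = 0.10   # Z%: bank emails a password-reset link
p_secret_qs    = 0.05   # W%: secret questions answerable automatically

p_escalate = p_bank_contact * p_userid_auto * p_reset_link * p_secret_qs
print(p_escalate)               # ~5e-06, i.e. 1 in 200,000
print(p_escalate * 1_000_000)   # ~5 bank accounts per million webmails
```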

  20. Diversity is more Important than Strength • Password is… • Dog’s name, cat’s name • Significant date, sports team • Written under the keyboard • How common a strategy is matters more than how secure it is

  21. Conclusions • Avoiding Harm ≠ Security • Internet attackers face a sum-of-effort defense • Avoiding harm is much less expensive than being secure • “Thinking like an attacker” doesn’t end when an attack is found.

  22. “And then what?”
