
Trust Online and the Phishing Problem: why warnings are not enough

Presentation Transcript


  1. Trust Online and the Phishing Problem: why warnings are not enough M. Angela Sasse (based on work by Iacovos Kirlappos, Katarzyna Krol, Matthew Moroz) Department of Computer Science & SECReT Doctoral Training Centre, UCL <event name> 22/09/2011

  2. Outline • Basics of trust • 2 lab studies on an anti-phishing tool and security warnings • … which explain why current signals don’t work • What can we do? • Design • Communication to user

  3. What is trust? Trust is only required in the presence of risk and uncertainty “… willingness to be vulnerable, based on positive expectations about the actions of others” M. Bacharach & D. Gambetta 2001. Trust as Type Detection. In: Castelfranchi, C. & Tan, Y. Trust and Deception in Virtual Societies.

  4. Why? Economic Benefits

  5. Ignore these at your peril … • trust = split-second assessment, rather than thorough risk analysis and assurance • reliance = after several successful transactions, no perceived vulnerability = a fraction of that split-second assessment

  6. How do we decide when to trust? • People assess the transaction partner’s ability and motivation [Deutsch, 1956] • We look for cues (trust signals) that indicate these • This assessment can be based on • cognitive elements (rational) • affective reactions (pre-cognitive)

  7.–10. [Diagram, built up over four slides: the mechanics of trust. (1) The trustee emits signals to the trustor. (2a) The trustor takes the trusting action, accepting risk, or (2b) withdraws to the outside option. (3a) The trustee then fulfils, or (3b) defects.]

  11. Dis-embedding Interaction is stretched over time and space and involves complex socio-technical systems [Giddens, 1990] … pervasive in modern societies (e.g. catalogue shopping) So – what’s so special about trust online? • Increased risk • Privacy (more data required) • Security (open system) • Own ability (errors) • Increased uncertainty • Inexperienced with decoding cues • Fewer surface cues available • Traditional cues no longer useful J. Riegelsberger, M. A. Sasse, & J. D. McCarthy: The Mechanics of Trust. Int J of Human-Computer Studies 2005.

  12. Study 1: phishing • Passive phishing indicators (Spoofstick etc.) have limited effect • Users don’t look at indicators • Users don’t know what indicators mean • Require users to disrupt their main task • Time-consuming and error-prone R. Dhamija et al.: Why Phishing Works. Procs ACM CHI 2006 Schechter et al.: The Emperor’s New Security Indicators IEEE Security & Privacy 2007

  13. Are active anti-phishing tools better? • Example: SOLID by First Cyber Security • Traffic Light approach: • Passive indicator when no risk exists • Becomes active when a risk is identified

  14. [Screenshot: safe website – the indicator shows green]

  15. “Extreme Caution” • Shown only when the website the user attempts to visit is identified as definitely unsafe • Presents three options: • Redirection to the authentic website (default option) • Close the window • Proceed to the risky site (a minimal sketch of this flow follows below)
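The interaction pattern described on slides 13–15 can be pictured as a small decision routine. The Python sketch below is illustrative only: the names (KNOWN_SPOOFS, check_url, visit) and the lookup table standing in for the tool’s real risk detection are assumptions made for this example, not part of the SOLID tool itself; only the traffic-light behaviour with a safe default is the point.

    # Minimal sketch of the traffic-light / active-warning flow described above.
    # KNOWN_SPOOFS and check_url() are hypothetical stand-ins for the tool's
    # real detection logic; only the interaction pattern matters here.

    KNOWN_SPOOFS = {
        "login.bank-example.net": "https://www.bank.example",  # spoof -> genuine site
    }

    def hostname(url: str) -> str:
        """Extract the host part of a URL."""
        return url.split("//", 1)[-1].split("/", 1)[0]

    def check_url(url: str) -> str:
        """Return 'safe' or 'risky' (stand-in for the real risk assessment)."""
        return "risky" if hostname(url) in KNOWN_SPOOFS else "safe"

    def visit(url: str, choose=input) -> str:
        """Passive when safe; interrupt with a safe default when risky."""
        if check_url(url) == "safe":
            return url  # passive green indicator only, no interruption
        print("EXTREME CAUTION: this site appears to impersonate another website.")
        print("[1] Go to the genuine site (default)   [2] Close   [3] Proceed anyway")
        answer = choose("Choice [1]: ").strip() or "1"  # redirect is the safe default
        if answer == "3":
            return url                        # user explicitly accepts the risk
        if answer == "2":
            return ""                         # close the window
        return KNOWN_SPOOFS[hostname(url)]    # redirect to the authentic website

In this sketch, visiting the hypothetical spoof address triggers the warning, and simply pressing Enter returns the genuine address; any other URL is passed through unchanged, mirroring the passive indicator.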

  16. Results – Active Warning • The “Extreme Caution” window resulted in 17 out of 18 participants visiting the genuine website • Clear information • Right timing • Context-specific • Safe default is important • Users clicked “OK” without fully understanding the meaning of the message they had been presented with • They were redirected to the genuine website

  17. Results – did they still take risks? • The tool reduced the number of participants taking risks • But some still took risks

  18. Why do users ignore the recommendation? • Price = main factor for ignoring the tool → Need and Greed principle (Stajano & Wilson: Understanding Scam Victims, Comm ACM March 2011) • General advice like “If it is too good to be true, it usually is” doesn’t work

  19. “I know better …” • Participants believe they can rely on their own ability to identify scam websites, and ignore the tool • Past experience with high false-positives creates a negative attitude towards security indicators • Cormac Herley: security tools/advice offering a poor cost-benefit will be rejected by users C. Herley: So Long, And No Thanks for all the Externalities Procs NSPW 2009

  20. Other trust cues • Perceived familiarity (reliance) • Mentioning other entities – Facebook and Twitter logos • Ads – “Why would anyone pay to advertise on a dog site?”, mention of charities • Lots of info, privacy policies, and good design

  21. Symbols of trust • arbitrarily assigned meaning • specifically created to signify the presence of trust-warranting properties • must be difficult to forge (mimicry) and carry sanctions in the case of misuse • expensive • the trustor has to know about their existence and how to decode them • at the same time, trustees need to invest in emitting them and in getting them known

  22. Symptoms of trust • not specifically created to signal trust-warranting properties – rather, by-products of the activities of trustworthy actors • e.g. a trustworthy online retailer has a large customer base and repeat business • exhibiting symptoms of trust incurs no cost for trustworthy actors, whereas untrustworthy actors would have to invest effort to mimic those signals

  23. Study 2: PDF warnings [Chart: most common file types in targeted attacks in 2009. Source: F-Secure (2010)]

  24. The experiment • Two conditions: between-subjects design • Participant task: reading two articles and evaluating their summaries • Choosing the first article: no warning • Choosing the second article: a warning appeared with each article the participant tried to open

  25. General results • 120 participants (64 female, mean age 25.7) • χ² = 1.391, df = 1, p = 0.238 (no significant difference)

  26. Gender differences • Women were more cautious and less likely to download an article with a warning

  27. Eye-tracking data • Fixation time in seconds • By warning type • 6.13 for generic warnings • 6.33 for specific warnings • By subsequent reaction • 6.94 for those who subsequently refused to download • 5.63 for those who subsequently downloaded the article • No significant difference in fixation length: all participants were fairly attentive to the warning regardless of its text, but then took different decisions

  28. Hypothetical vs. observed behaviour [Charts comparing hypothetical and observed behaviour for the generic warning and for the specific warning]

  29. Reasons for ignoring warning • Desensitisation (55 participants): past experience of false positives

  30. Reasons for ignoring warning • Trusting the source (29) “It depends on what the source was, if I was getting it from a dodgy website, I probably wouldn’t download it. But if something was sent to me by a friend or a lecturer or I was downloading it from a library catalogue, I would have opened it anyway.”

  31. Reasons for ignoring warning • Trusting anti-virus (18) “I trusted that the anti-virus on my computer would pick anything up.” • Trusting PDF (15) “I don’t think PDF files can have this kind of harm in them. It says ‘PDF files can harm your computer’ and I know they can’t.”

  32. Why security warnings don’t work • Warnings are not reliable and badly designed • more noise than signals • interrupt users’ primary task • pop-ups are associated with adverts and updates = ANNOYING!!! • Users have misconceptions: • about risks and indicators • about their own competence

  33. Conclusions: What can be done? • Re-design the interaction: eliminate choice, automatically direct users to safe sites • More effective trust signalling: develop symptoms of trust and protect symbols better • Get rid of useless warnings • Better communication about risks, correct misconceptions about trust signals

  34. Good Human Factors – by a security person • The system must be substantially, if not mathematically, undecipherable; • The system must not require secrecy, and it should be able to fall into the enemy’s hands without causing trouble; • It must be easy to communicate and remember the keys without requiring written notes; it must also be easy to change or modify the keys with different participants; • The system ought to be compatible with telegraph communication; • The system must be portable, and its use must not require more than one person; • Finally, regarding the circumstances in which such a system is applied, it must be easy to use and must neither require stress of mind nor the knowledge of a long series of rules. Auguste Kerckhoffs, ‘La cryptographie militaire’, Journal des sciences militaires, vol. IX, pp. 5–38, Jan. 1883, pp. 161–191, Feb. 1883.
