
Users vs. security


Presentation Transcript


  1. Users vs. security Cyberdefence seminar, Tallinn Technical University Maksim Afanasjev, 2011

  2. Weakest link It is not difficult to make a secure system. In theory. Reality is, however, different.

  3. Experiment on the weakest link The U.S. Department of Homeland Security ran a test in 2011 to see how hard it was for hackers to corrupt workers and gain access to computer systems: staff secretly dropped computer discs and USB thumb drives in the parking lots of government buildings and private contractors. Of those who picked them up: • 60 percent plugged the devices into office computers • if the drive or CD case had an official logo, 90 percent installed them.

  4. Why are humans the weakest link? Humans are not perfect, but security systems expect us to be. The average user has 25 accounts and logs in 8 times per day. They’re not thinking, ‘I want to be secure.’ They’re thinking, ‘I want to do my banking.’ “Users will knowingly install spyware if the tradeoffs are good for them,” research shows.

  5. Users' story Users usually • do not understand the importance of software, hardware and systems for their organizations • do not believe the assets are at risk • do not understand that their own behavior puts the assets at risk (i.e. that they will be attacked) • have problems using security tools correctly (e.g. PGP) Often users cannot behave in the required way, or do not want to. Examples follow:

  6. Users' stories continued Policy: "Lock the screen whenever you leave your computer." Outcome: it does not work; people do not use it. Reason: "What will my colleagues think? It would ruin the trusting relationship with colleagues." Lesson: users will not comply with policies and mechanisms that are at odds with the values they hold. A person who uses complex passwords, changes them often and locks the screen looks paranoid in our culture. If the required behavior conflicts with norms, values or self-image, users will not comply.

  7. Users' workarounds • write passwords down (e.g. on the ATM panel: they cannot even take care of their own banking card!) • share passwords • choose passwords that are insecure (but easy to remember)

  8. User solution If it is possible to avoid the system altogether, avoid it! If the system allows an easy password, use it! If the system is too complex to use (e.g. it demands overly complex passwords) and impossible to avoid, find a workaround!

  9. Is the tradeoff inherent? If a system is secure but not usable, people will start using another, completely insecure system that is usable. If a system is usable but not secure, it will not last long (it will be compromised, hacked). There is agreement that systems must be designed to be both secure and usable, but no agreement on how. It is fairly clear how to make a system more or less secure to the needed degree: there are guidelines and libraries. But it is not clear how to make it usable.

  10. Tradeoff When trying to make secure systems usable, we add complexity. Complexity leads to a higher chance of doing something wrong, i.e. less security!

  11. Tradeoff example: relay attacks Passive keyless entry for cars brings higher usability, but what about security? The car periodically sends beacons on the LF channel; these beacons are short wake-up messages for the key. When the key detects the signal on the LF channel, it wakes up the microcontroller, demodulates the signal and interprets it. After computing a response to the challenge, the key replies on the UHF channel. The response is received and verified by the car; if it is valid, the car unlocks the doors. A minimal sketch of this exchange follows.
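
  A minimal sketch of the exchange described above, assuming a generic MAC-based challenge-response in Python (the Car/Key classes, the shared secret and the use of HMAC-SHA256 are illustrative assumptions; real systems use proprietary ciphers and dedicated radio hardware):

    import hmac
    import hashlib
    import os

    SHARED_SECRET = b"factory-paired-secret"   # programmed into car and key at pairing

    class Key:
        def respond(self, challenge: bytes) -> bytes:
            # The key wakes up on the LF beacon, computes a MAC over the
            # challenge, and replies on the UHF channel.
            return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

    class Car:
        def try_unlock(self, key: Key) -> bool:
            challenge = os.urandom(16)           # LF wake-up beacon + challenge
            response = key.respond(challenge)    # UHF reply from the key
            expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
            return hmac.compare_digest(response, expected)

    print(Car().try_unlock(Key()))  # True: a valid key "in range" unlocks the doors

  Note that the car verifies only that the response is cryptographically correct; nothing in the exchange proves that the key is physically near the car, which is exactly what the relay attack exploits.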

  12. Relay attacks on keyless entry The challenge-response algorithm is tested and proven in conventional key systems: with a key of proper length and a strong algorithm, it is difficult to interfere with. LF requests work within 1-2 m outside the car; the UHF response works up to 100 meters.

  13. Relay attack (diagram: relay antennas placed between the car side and the key side)

  14. Relay attack It works! Out of 10 cars tested, all were opened and started. Cars keep running after they detect that the key is missing (for safety reasons). It is a very safe attack: there is no physical contact with either the key or the car, and if it does not work, the attacker risks nothing. Countermeasures: • shielding the key (usability?) • removing the battery from the key (usability?) • an RF distance bounding protocol: a class of protocols in which one entity (the verifier) measures an upper bound on its distance to another (trusted or untrusted) entity (the prover), as sketched below.
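
  A hedged sketch of the distance-bounding idea behind the last countermeasure (the timing values and the key's fixed processing delay are illustrative assumptions, not figures from a real system):

    C = 0.299792458  # speed of light in metres per nanosecond

    def distance_upper_bound(rtt_ns: float, processing_ns: float) -> float:
        # Radio signals cannot travel faster than light, so the prover (key)
        # is at most this far from the verifier (car); a relay only adds
        # latency, which inflates the bound and exposes the attack.
        return (rtt_ns - processing_ns) * C / 2

    PROCESSING_NS = 50.0                       # assumed fixed response time of the key

    honest_rtt = PROCESSING_NS + 2 * 5 / C     # key genuinely 5 m from the car
    relayed_rtt = honest_rtt + 2000            # relay adds ~2 microseconds of latency

    for label, rtt in (("honest", honest_rtt), ("relayed", relayed_rtt)):
        bound = distance_upper_bound(rtt, PROCESSING_NS)
        verdict = "unlock" if bound <= 10.0 else "reject"
        print(f"{label}: key is at most {bound:.1f} m away -> {verdict}")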

  15. Relay attack conclusion Manufacturers took an established system (challenge-response, e.g. KeeLoq) and snapped usability features onto it. As a result, entirely new security flaws were introduced. Outcome: usable, but less secure.

  16. Improving security at a cost On August 25, 2004, Microsoft released Service Pack 2 for Windows XP, a "security" service pack. In this release, Windows Firewall was enabled by default.

  17. SP2 Firewall Were users expecting this? No! They had just updated. Did users know what to do with it? No! Outcome: many found a way to disable the bugger. Problem: you can do this to a user only once; there is no way to make it more gentle next time. Result: dissatisfaction with security measures; some users disabled their firewalls, though systems were perhaps more secure overall.

  18. Ways to deal with security complexity In an ideal world, all of the security complexity is hidden from the user. In reality, this is not possible. There are two approaches: • communicate an accurate conceptual model of the security to the user as quickly as possible; the smaller and simpler that conceptual model is, the more plausible it is that it will succeed • let the user skip the "big picture", as long as he clearly understands the sequence of steps needed to perform a task (a wizard, as sketched below).
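
  An illustrative sketch of the wizard approach (the steps and prompts are hypothetical; a real wizard would drive the actual security operations behind each step):

    STEPS = [
        "Generate your key pair",
        "Exchange public keys with your recipient",
        "Encrypt and sign the message",
        "Check that the outgoing message is encrypted",
    ]

    def run_wizard():
        # The user sees one concrete instruction at a time; the conceptual
        # model (key pairs, trust, signatures) stays behind the curtain.
        for number, step in enumerate(STEPS, start=1):
            print(f"Step {number}/{len(STEPS)}: {step}")
            input("Press Enter when done... ")

    run_wizard()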

  19. Complexity Finding ways to both maximize security and usability has been a longstanding problem. According to Saltzer and Schroeder [Saltzer 75] in "Basic Principles of Information Protection": Psychological acceptability: It is essential that the human interface be designed for ease of use, so that users routinely and automatically apply the protection mechanisms correctly. Also, to the extent that the user's mental image of his protection goals matches the mechanisms he must use, mistakes will be minimized. If he must translate his image of his protection needs into a radically different specification language, he will make errors. In practice, the principle is interpreted to mean that the security mechanism may add some extra burden, but that burden must be both minimal and reasonable.

  20. Possible solutions Increase security without undermining usability, and vice versa. Example: • locking screens. Make it part of the professional culture, not a personal matter. Make it absolutely clear that locking the screen is part of professional behavior and is not in any way connected to personal issues. Once users realize that locking the screen has nothing to do with trust among colleagues, it will work. Overall: security must be designed with usability in mind, and vice versa. Introducing security features after the usability design has been settled causes problems. All security policies should also take human psychology into account.

  21. Questions?

  22. Why Johnny Can’t Encrypt: A Usability Evaluation of PGP 5.0. Alma Whitten and J. D. Tygar, 1999. "...and the user test demonstrated that when our test participants were given 90 minutes in which to sign and encrypt a message using PGP 5.0, the majority of them were unable to do so successfully." The paper gives a specific definition of usability for security. Three of the twelve test participants (P4, P9, and P11) accidentally emailed the secret to the team members without encryption.

  23. Definition: Security software is usable if the people who are expected to use it: 1. are reliably made aware of the security tasks they need to perform; 2. are able to figure out how to successfully perform those tasks; 3. don’t make dangerous errors; and 4. are sufficiently comfortable with the interface to continue using it.

  24. Passwords: easily guessed, difficult to remember Default passwords are a known problem. Ideally a password is easy to remember yet difficult to guess, but people have a poor perception of what constitutes a "password that is difficult to guess". For example, tell users not to use names, and a user will use Barbara1. People use foreign words (an American would not expect an attacker to try a Japanese word). One organization circulated a memo explaining what a good password is, with samples; attackers then tried the samples. A rough sketch of such an attack follows.
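
  A rough sketch of how cheap such guesses are for an attacker (all word lists here are made up for illustration):

    COMMON_NAMES = ["barbara", "john", "maria"]
    FOREIGN_WORDS = ["sakura", "kawaii"]        # "an attacker won't try Japanese"
    MEMO_SAMPLES = ["Tr0ub4dor", "Blue7Sky"]    # samples from the circulated memo

    def candidate_guesses():
        # Names with a single digit appended cover the "Barbara1" pattern.
        for name in COMMON_NAMES:
            for digit in range(10):
                yield f"{name.capitalize()}{digit}"
        yield from FOREIGN_WORDS
        yield from MEMO_SAMPLES

    def is_guessable(password: str) -> bool:
        return password in set(candidate_guesses())

    for pw in ["Barbara1", "sakura", "Tr0ub4dor", "k7#Vq!2p"]:
        print(pw, "->", "guessed" if is_guessable(pw) else "survived this list")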

  25. Identification/Authentication 1. The unmotivated-user property: security is usually a secondary goal. 2. The abstraction property: security management involves policies, i.e. systems of abstract rules, and that level of abstraction is alien and unintuitive to many users. 3. The lack-of-feedback property: the need to prevent dangerous errors makes it imperative to provide good feedback to the user, but providing good feedback for security management is a difficult problem. The state of a security configuration is usually complex, and attempts to summarize it are not adequate.
