Presentation Transcript

  1. Security Ross Anderson Computer Science Tripos part 2 Cambridge University

  2. Aims • Give you a thorough understanding of information security technology • Policy (what should be protected) • Mechanisms (cryptography, electrical engineering, …) • Attacks (malicious code, protocol failure …) • Assurance – how do we know when we’re done? • How do we make this into a proper engineering discipline?

  3. Objectives By the end of the course, you should be able to tackle an information protection problem by drawing up a threat model, formulating a security policy, and designing specific protection mechanisms to implement the policy.

  4. Outline • Four guest lectures • Nov 5: Sergei Skorobogatov, physical security • Nov 12: Robert Watson, concurrency • Nov 20: Joe Bonneau, web authentication • Nov 17 or 24: Steven Murdoch, anonymity • Two competitive class exercises (afternoon) • Nov 12: Hacking a web server, Robert Watson • Nov 17 or 24: Traffic analysis, Steven Murdoch

  5. Resources • My book “Security Engineering” – developed from lecture notes • Web page for course • Menezes, van Oorschot and Vanstone “Handbook of Applied Cryptography” (reference) • Stinson: “Cryptography: theory and practice” • Schneier “Applied cryptography” • Gollmann: “Computer Security”

  6. Resources (2) • History: • David Kahn “The Codebreakers” • Gordon Welchman “The Hut Six Story” • Specialist: • Biham and Shamir “Differential Cryptanalysis” • Koblitz “Course in number theory and cryptography” • Lab: • Security seminars, typically Tuesdays, 4.15 • Security group meetings, Fridays, 4

  7. What is Security Engineering? Security engineering is about building systems to remain dependable in the face of malice, error and mischance. As a discipline, it focuses on the tools, processes and methods needed to design, implement and test complete systems, and to adapt existing systems as their environment evolves.

  8. Design Hierarchy [Diagram: What are we trying to do? → Policy; How? → Protocols, …; With what? → Hardware, crypto, …]

  9. Security vs Dependability • Dependability = reliability + security • Reliability and security are often strongly correlated in practice • But malice is different from error! • Reliability: “Bob will be able to read this file” • Security: “The Chinese Government won’t be able to read this file” • Proving a negative can be much harder …

  10. Methodology 101 • Sometimes you do a top-down development. In that case you need to get the security spec right in the early stages of the project • More often it’s iterative. Then the problem is that the security requirements get detached • In the safety-critical systems world there are methodologies for maintaining the safety case • In security engineering, the big problem is often maintaining the security requirements, especially as the system – and the environment – evolve

  11. Clarifying terminology • A system can be: • a product or component (PC, smartcard,…) • some products plus O/S, comms and infrastructure • the above plus applications • the above plus internal staff • the above plus customers / external users • Common failing: policy drawn too narrowly

  12. Clarifying terminology (2) • A subject is a physical person • A person can also be a legal person (firm) • A principal can be • a person • equipment (PC, smartcard) • a role (the officer of the watch) • a complex role (Alice or Bob, Bob deputising for Alice) • The level of precision is variable – sometimes you need to distinguish ‘Bob’s smartcard representing Bob who’s standing in for Alice’ from ‘Bob using Alice’s card in her absence’. Sometimes you don’t

  13. Clarifying terminology (3) • Secrecy is a technical term – mechanisms limiting the number of principals who can access information • Privacy means control of your own secrets • Confidentiality is an obligation to protect someone else’s secrets • Thus your medical privacy is protected by your doctors’ obligation of confidentiality

  14. Clarifying terminology (4) • Anonymity is about restricting access to metadata. It has various flavours, from not being able to identify subjects to not being able to link their actions • An object’s integrity lies in its not having been altered since the last authorised modification • Authenticity has two common meanings – • an object has integrity plus freshness • you’re speaking to the right principal
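The first sense of authenticity (integrity plus freshness) can be sketched with a keyed MAC computed over a nonce and the message. This is a minimal illustrative Python example, not a production protocol; the function names and replay-set design are assumptions made for the sketch:

```python
import hmac, hashlib, os

def protect(key: bytes, msg: bytes, nonce: bytes) -> bytes:
    # The tag binds the message to a fresh nonce: integrity + freshness.
    return hmac.new(key, nonce + msg, hashlib.sha256).digest()

def verify(key: bytes, msg: bytes, nonce: bytes, tag: bytes, seen: set) -> bool:
    if nonce in seen:                 # replayed nonce: not fresh
        return False
    ok = hmac.compare_digest(tag, protect(key, msg, nonce))
    if ok:
        seen.add(nonce)
    return ok

key, seen = os.urandom(32), set()
nonce = os.urandom(16)
tag = protect(key, b"attack at dawn", nonce)
assert verify(key, b"attack at dawn", nonce, tag, seen)      # authentic and fresh
assert not verify(key, b"attack at dawn", nonce, tag, seen)  # replay rejected
assert not verify(key, b"attack at dusk", os.urandom(16), tag, seen)  # altered
```

Integrity alone would need only the MAC; it is the nonce bookkeeping that adds the freshness half of the definition.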

  15. Clarifying Terminology (5) • Trust is the hard one! It has several meanings: • a warm fuzzy feeling • a trusted system or component is one that can break my security policy • a trusted system is one I can insure • a trusted system won’t get me fired when it breaks • I’m going to use the NSA definition – number 2 above – by default. E.g. an NSA man selling key material to the Chinese is trusted but not trustworthy (assuming his action unauthorised)

  16. Clarifying Terminology (6) • A security policy is a succinct statement of protection goals – typically less than a page of normal language • A protection profile is a detailed statement of protection goals – typically dozens of pages of semi-formal language • A security target is a detailed statement of protection goals applied to a particular system – and may be hundreds of pages of specification for both functionality and testing

  17. What often passes as ‘Policy’ • This policy is approved by Management. • All staff shall obey this security policy. • Data shall be available only to those with a ‘need-to-know’. • All breaches of this policy shall be reported at once to Security. What’s wrong with this?

  18. Policy Example – MLS • Multilevel Secure (MLS) systems are widely used in government • Basic idea: a clerk with ‘Secret’ clearance can read documents at ‘Confidential’ and ‘Secret’ but not at ‘Top Secret’ • 60s/70s: problems with early mainframes • First security policy to be worked out in detail following Anderson report (1973) for USAF which recommended keeping security policy and enforcement simple

  19. Levels of Information • Levels include: • Top Secret: compromise could cost many lives or do exceptionally grave damage to operations. E.g. intelligence sources and methods, battle plans • Secret: compromise could threaten life directly. E.g. weapon system performance, combat reports • Confidential: compromise could damage operations • Restricted: compromise might embarrass • Resources have classifications, people (principals) have clearances. Information flows upwards only

  20. Context of Multilevel Security • Hierarchy: Top Secret, Secret, Confidential, … • Information mustn’t leak from High down to Low • Enforcement must be independent of user actions! • Perpetual problem: careless staff • 1970s worry: operating system insecurity • 1990s worry: virus at Low copies itself to High and starts signalling down (e.g. covert channel)

  21. Context of Multilevel Security (2) • Nowadays: see our paper ‘The Snooping Dragon’ • September 2008: Dalai Lama’s office realised there had been a security failure • Initial break: targeted email with bad pdf • Then: took over the mail server and spread it • About 35 of their 50 PCs were infected • Fix (Dharamsala): take ‘Secret’ stuff offline • Fix (UKUSA agencies): use MLS mail guards and firewalls to prevent ‘Secret’ stuff getting out

  22. Information Flows [Diagram: Unclassified → Confidential → Secret; information may flow upwards only]

  23. Formalising the Policy • Bell-LaPadula (1973): • simple security property: no read up • *-property: no write down • With these, one can prove that a system which starts in a secure state will remain in one • Ideal: minimise the Trusted Computing Base (set of hardware, software and procedures that can break the security policy) so it’s verifiable • 1970s idea: use a reference monitor
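The two BLP rules fit in a few lines of Python. This is a minimal sketch of the reference-monitor check; the level names follow the hierarchy used in these slides, and the function names are illustrative:

```python
LEVELS = {"Unclassified": 0, "Confidential": 1, "Secret": 2, "Top Secret": 3}

def can_read(subject_clearance: str, object_label: str) -> bool:
    # Simple security property: no read up.
    return LEVELS[subject_clearance] >= LEVELS[object_label]

def can_write(subject_clearance: str, object_label: str) -> bool:
    # *-property: no write down.
    return LEVELS[subject_clearance] <= LEVELS[object_label]

assert can_read("Secret", "Confidential")        # clerk reads below his level
assert not can_read("Secret", "Top Secret")      # no read up
assert can_write("Confidential", "Secret")       # blind write-up is allowed
assert not can_write("Secret", "Confidential")   # no write down
```

Note that write-up is permitted but read-up is not, which is exactly why the blind write-up problem of slide 27 arises.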

  24. Objections to BLP • Some processes, such as memory management, need to read and write at all levels • Fix: put them in the trusted computing base • Consequence: once you put in all the stuff a real system needs (backup, recovery, comms, …) the TCB is no longer small enough to be easily verifiable

  25. Objections to BLP (2) • John McLean’s “System Z”: as BLP but lets users request temporary declassification of any file • Fix: add tranquility principles • Strong tranquility: labels never change • Weak tranquility: they don’t change in such a way as to break the security policy • Usual choice: weak tranquility using the “high watermark principle” – a process acquires the highest label of any resource it’s touched • Problem: have to rewrite apps (e.g. license server)
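The high watermark principle can be sketched as a process whose label floats upwards with every read. A minimal illustrative model (class and method names are assumptions for the sketch; levels are integers with 0 = Unclassified):

```python
class Process:
    def __init__(self):
        self.label = 0          # starts at Unclassified

    def read(self, obj_label: int) -> None:
        # High watermark: the process acquires the highest label
        # of any resource it has touched.
        self.label = max(self.label, obj_label)

    def can_write(self, obj_label: int) -> bool:
        # No write down under the floated label (weak tranquility).
        return obj_label >= self.label

p = Process()
p.read(1)                       # touches Confidential data
assert p.can_write(2)           # can still write up to Secret
p.read(2)                       # touches Secret data
assert not p.can_write(1)       # now barred from writing Confidential
```

The licence-server problem falls straight out of this model: a server that reads one Secret request floats to Secret and can no longer answer Unclassified clients.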

  26. Covert Channels • In 1973 Butler Lampson warned BLP might be impractical because of covert channels: “neither designed nor intended to carry information at all” • A Trojan at High signals to a buddy at Low by modulating a shared system resource • Fills the disk (storage channel) • Loads the CPU (timing channel) • Capacity depends on bandwidth and S/N. So: cut the bandwidth or increase the noise • But it’s really hard to get below 1 bps or so…
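The storage channel described above can be sketched in-process: High modulates a shared disk quota, and Low observes nothing but whether its own allocation succeeds. This toy model is noiseless, so it transfers exactly one bit per tick; all names and the quota figure are invented for the illustration:

```python
QUOTA = 10   # shared disk quota, in blocks (illustrative)

def high_send(bits):
    # High's Trojan fills the disk for a 1 and frees it for a 0, one bit per tick.
    return [QUOTA if b else 0 for b in bits]

def low_receive(usage_per_tick):
    # Low tries to allocate one block each tick; "disk full" decodes as a 1.
    return [1 if used >= QUOTA else 0 for used in usage_per_tick]

msg = [1, 0, 1, 1, 0]
assert low_receive(high_send(msg)) == msg   # perfect noiseless channel
```

Adding noise (other processes randomly consuming quota) or rate-limiting the allocation probe are exactly the two mitigations the slide names: increase the noise or cut the bandwidth.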

  27. Objections to BLP (3) • High can’t acknowledge receipt from Low • This blind write-up is often inconvenient: information vanishes into a black hole • Option 1: accept this and engineer for it (Morris theory) – CIA usenet feed • Option 2: allow acks, but be aware that they might be used by High to signal to Low • Use some combination of software trust and covert channel elimination (more later …)

  28. Variants on Bell-LaPadula • Noninterference: no input by High can affect what Low can see. So whatever trace there is for High input X, there’s a trace with High input Ø that looks the same to Low (Goguen and Meseguer 1982) • Nondeducibility: weakens this so that Low is allowed to see High data, just not to understand it – e.g. a LAN where Low can see encrypted High packets going past (Sutherland 1986)
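Noninterference on finite traces can be checked by “purging” the High inputs and comparing Low’s view of the output, in the Goguen–Meseguer style. A minimal sketch with two toy systems (the systems and event names are invented for illustration):

```python
def purge(inputs):
    # Remove all High inputs, as if High had input nothing (Ø).
    return [(lvl, e) for lvl, e in inputs if lvl != "H"]

def low_outputs(outputs):
    # Low's view: only events at level "L".
    return [e for lvl, e in outputs if lvl == "L"]

def noninterfering(system, inputs) -> bool:
    # Low must see the same thing whether or not High's inputs happened.
    return low_outputs(system(inputs)) == low_outputs(system(purge(inputs)))

def leaky(inputs):
    # Echoes a count of High inputs down to Low: an interference.
    n_high = sum(1 for lvl, _ in inputs if lvl == "H")
    return [("L", f"high_count={n_high}")]

def secure(inputs):
    # Low output depends only on Low input.
    return [("L", e) for lvl, e in inputs if lvl == "L"]

trace = [("H", "x"), ("L", "a")]
assert not noninterfering(leaky, trace)
assert noninterfering(secure, trace)
```

A nondeducibility check would be weaker: it would let Low see that *some* High event occurred (e.g. a ciphertext packet) as long as Low cannot tell *which*.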

  29. Variants on Bell-LaPadula (2) • Biba integrity model: deals with integrity rather than confidentiality. It’s “BLP upside down” – high integrity data mustn’t be contaminated with lower integrity stuff • Domain and Type Enforcement (DTE): subjects are in domains, objects have types • Role-Based Access Control (RBAC): current fashionable policy framework

  30. The Cascade Problem

  31. Composability • Systems can become insecure when interconnected, or when feedback is added

  32. Composability • So nondeducibility doesn’t compose • Neither does noninterference • Many things can go wrong – clash of timing mechanisms, interaction of ciphers, interaction of protocols • Practical problem: lack of good security interface definitions (we’ll talk later about API failures) • Labels can depend on data volume, or even be non-monotone (e.g. Secret laser gyro in a Restricted inertial navigation set)

  33. Consistency • US approach (‘polyinstantiation’): [table omitted] • UK approach (don’t tell low users): [table omitted]
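Polyinstantiation keeps a separate tuple per level under the same key, so a Low query returns a cover story rather than a suspicious gap. A minimal sketch (the ship’s-destination example is the classic textbook one; the particular data and function names here are invented):

```python
ORDER = {"Unclassified": 0, "Secret": 1}

def dominates(clearance: str, level: str) -> bool:
    return ORDER[clearance] >= ORDER[level]

# Same key, two instantiations: Low users see the cover story.
table = {
    ("HMS Invincible", "Secret"):       {"destination": "Falklands"},
    ("HMS Invincible", "Unclassified"): {"destination": "Portsmouth"},  # cover
}

def lookup(ship: str, clearance: str):
    # Return the highest-level tuple the reader is cleared to see.
    for level in ("Secret", "Unclassified"):
        if dominates(clearance, level) and (ship, level) in table:
            return table[(ship, level)]
    return None

assert lookup("HMS Invincible", "Secret") == {"destination": "Falklands"}
assert lookup("HMS Invincible", "Unclassified") == {"destination": "Portsmouth"}
```

The UK alternative on the slide would instead refuse to confirm the row exists at all to Low users, at the cost of visible gaps in the database.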

  34. Downgrading • A related problem to the covert channel is how to downgrade information • Analysts routinely produce Secret briefings based on Top Secret intelligence, by manual paraphrasis • Also, some objects are downgraded as a matter of deliberate policy – an act by a trusted subject • For example, a Top Secret satellite image is to be declassified and released to the press

  35. Downgrading (2) Text hidden in least significant bits of image
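The least-significant-bit hiding shown in the slide is easy to sketch on raw bytes: each cover byte donates its bottom bit, so the carrier changes imperceptibly. A minimal illustrative version (function names are assumptions for the sketch):

```python
def embed(cover: bytes, secret_bits: list) -> bytes:
    # Overwrite the least significant bit of each cover byte with one secret bit.
    out = bytearray(cover)
    for i, bit in enumerate(secret_bits):
        out[i] = (out[i] & 0xFE) | bit
    return bytes(out)

def extract(stego: bytes, n: int) -> list:
    return [b & 1 for b in stego[:n]]

cover = bytes([200, 201, 202, 203, 204, 205, 206, 207])  # e.g. pixel values
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed(cover, bits)
assert extract(stego, 8) == bits
# Each carrier byte moves by at most 1: invisible in an image.
assert all(abs(a - b) <= 1 for a, b in zip(cover, stego))
```

This is exactly why blind downgrading of bitmaps is dangerous: a Trojan at High can smuggle Top Secret text out inside an image approved for release.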

  36. Downgrading (3) Picture hidden in three least significant bits of text

  37. Examples of MLS Systems • SCOMP – Honeywell variant of Multics, launched 1983. Four protection rings, minimal kernel, formally verified hardware and software. Became the XTS-300 • Used in military mail guards • Motivated the ‘Orange Book’ – the Trusted Computer System Evaluation Criteria • First system rated A1 under Orange Book

  38. Examples of MLS Systems (2) • Blacker – series of encryption devices designed to prevent leakage from “red” to “black”. Very hard to accommodate administrative traffic in MLS! • Compartmented Mode Workstations (CMWs) – used by analysts who read Top Secret intelligence material and produce briefings at Secret or below for troops, politicians … Mechanisms allow cut-and-paste from L → H, L → L and H → H but not H → L • The Australian failure

  39. Examples of MLS Systems (3) • The NRL Pump was designed to copy data continuously up from Low to High with minimal covert channel leakage

  40. Examples of MLS Systems (4) • LITS – RAF Logistics IT System – a project to control stores at 80 bases in 12 countries. Most stores ‘Restricted’, rest ‘Secret’, so two databases connected by a pump • Other application-level rules, e.g. ‘don’t put explosives and detonators in the same truck’ • Software project disaster, 1989–99! • Eventual solution: almost all stuff at one level, handle nukes differently

  41. Examples of MLS Systems (5) • DERA’s ‘Purple Penelope’ was an attempt to relax MLS to accountability for lower levels of stuff • Driver: people determined to use Office • Solution: wrapper around Windows that tracks current level using high watermark • Downgrading allowed, but system forces authentication and audit • Now called ‘Sybard Suite’

  42. Multilevel Integrity • The Biba model – data may flow only down from high-integrity to low-integrity • Dual of BLP: • Simple integrity property: subject may alter object only at same or lower level • *-integrity property: subject that can observe X is allowed to alter objects dominated by X • So you have low watermark properties, etc • Example: medical equipment with two levels, “calibrate” and “operate”
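The Biba rules can be sketched as the mirror image of the BLP check, using the two-level medical-equipment example from the slide. A minimal illustration (the strict no-read-down reading of the *-integrity property is assumed here):

```python
INTEGRITY = {"operate": 0, "calibrate": 1}   # calibrate = higher integrity

def can_write(subject: str, obj: str) -> bool:
    # Simple integrity property: alter objects only at the same or lower level.
    return INTEGRITY[subject] >= INTEGRITY[obj]

def can_read(subject: str, obj: str) -> bool:
    # *-integrity: don't be contaminated by lower-integrity data.
    return INTEGRITY[subject] <= INTEGRITY[obj]

assert can_write("calibrate", "operate")      # calibration may set operating data
assert not can_write("operate", "calibrate")  # operators can't alter calibration
assert not can_read("calibrate", "operate")   # calibration ignores operating data
```

Compare term by term with the BLP rules: the inequalities are simply reversed, which is what “BLP upside down” means.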

  43. Multilevel Integrity (2) • Big potential application – control systems • E.g. in future “smart grid” • Safety: highest integrity level • Control: next level • Monitoring (SCADA): third level • Enterprise apps (e.g. billing): fourth level • Complexity: prefer not to operate plant if SCADA system down (though you could) • So a worm attack on SCADA can close an asset

  44. Multilevel Integrity (3) • LOMAC was an experimental Linux system with system files at High, network at Low • A program that read traffic was downgraded • Vista adopted this – marks objects Low, Medium, High or System, and has default policy of NoWriteUp • Critical stuff is System, most other stuff is Medium, IE is Low • Could in theory provide good protection – in practice, UAC trains people to override it!

  45. Decoupling Policy, Mechanism • Role-based Access Control is what we always used to do in banking! • Formalised by Ferraiolo & Kuhn of NIST • Extra indirection layer: ‘officer of the watch’, ‘branch accountant’, ‘charge nurse’ • You still need a policy – such as Bell-LaPadula or Biba • The policy may end up being expressed differently
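The extra indirection layer of RBAC can be sketched in a few lines: permissions attach to roles, users gain them only through role membership. The roles below echo the ones named on the slide; the permission names are invented for illustration:

```python
# Permissions attach to roles, not to users.
ROLE_PERMS = {
    "charge_nurse":      {"read_chart", "update_chart", "assign_beds"},
    "branch_accountant": {"post_ledger", "read_ledger"},
}
USER_ROLES = {"alice": {"charge_nurse"}, "bob": {"branch_accountant"}}

def allowed(user: str, perm: str) -> bool:
    return any(perm in ROLE_PERMS[r] for r in USER_ROLES.get(user, set()))

assert allowed("alice", "assign_beds")
assert not allowed("alice", "post_ledger")

# Re-staffing is an update to the indirection layer, not a policy rewrite:
USER_ROLES["bob"].add("charge_nurse")
assert allowed("bob", "assign_beds")
```

The underlying policy (who may do what) is unchanged when staff rotate through the roles, which is the point of the decoupling.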

  46. Decoupling Policy, Mechanism (2) • SELinux – Linux hardened with help from the NSA – uses an architecture called Flask • Policy separated from mechanism; security server in the kernel manages attributes • Default security models: TE and MLS with RBAC • Red Hat uses it to separate services – a web server compromise doesn’t automatically get DNS too • Trusted BSD – now in iPhones • Common problem: complexity of software means it’s hard to design useful security policies

  47. Decoupling Policy, Mechanism (3) • VMWare, Xen used to provide separation in hosting centres. Can we do the same client-side? • NetTop lets you run multiple Windows instances on a Linux box • Big problem: complexity of the modern PC! • Which VM does the mic/camera belong to right now? • What’s going on in the graphics card? • Other big problem: managing flow between levels. Typically done server-side by governments • What about commercial / domestic users?

  48. Multilateral Security • Sometimes the aim is to stop data flowing down • Other times, you want to stop lateral flows • Examples: • Intelligence • Competing clients of an accounting firm • Medical records by practice or hospital

  49. The Lattice Model • This is how intelligence agencies manage ‘compartmented’ data – by adding labels • Basic idea: BLP requires only a partial order

  50. The Lattice Model (2) • The levels “Secret Crypto” and “Top Secret” are incomparable – no data flow either way • Codewords can have special handling rules – e.g. wartime “Ultra” (now “Umbra”) meant you could not risk capture • Problem: either you end up with millions of compartments, or you end up with all the good stuff in one pot (the CIA’s problem with Ames) • Post-9/11, the big-pot model is gaining ground …
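The lattice of labels can be sketched as (level, set-of-codewords) pairs: one label dominates another only if its level is at least as high *and* it carries all the other’s compartments. This minimal Python model reproduces the incomparability of “Secret Crypto” and “Top Secret” from the slide (the helper names are illustrative):

```python
LEVELS = {"Confidential": 1, "Secret": 2, "Top Secret": 3}

def dominates(l1, l2) -> bool:
    # Partial order: higher-or-equal level AND superset of compartments.
    (lv1, c1), (lv2, c2) = l1, l2
    return LEVELS[lv1] >= LEVELS[lv2] and c1 >= c2

ts       = ("Top Secret", frozenset())
s_crypto = ("Secret", frozenset({"Crypto"}))

assert not dominates(ts, s_crypto)   # TS lacks the Crypto codeword
assert not dominates(s_crypto, ts)   # incomparable: no flow either way
assert dominates(s_crypto, ("Secret", frozenset()))
```

Because compartment sets are only partially ordered, BLP’s proofs go through unchanged; but every new codeword multiplies the number of possible labels, which is the compartment-explosion problem the slide describes.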