Security Modelling: What is Security?

Presentation Transcript


  1. Security Modelling: What is Security? for Tsinghua University Clark Thomborson 12 March 2010

  2. Questions to be (Partially) Answered • What is security? What is trust? • “What would be the shape of an organisational theory applied to security?” [Anderson, 2008] • How can an organisation control itself, and its environment, to increase its functionality and security? • How can an organisation exploit, and nurture, its trusting relationships?

  3. The Importance of Modelling • Assertion: A human can analyse simple systems (≤ 7 elements or concepts). • Implications: • If we want to analyse complex systems, we must use models (simplifications). • If we want to have confidence in our analyses, we must validate our models. • Validation: Do our analytic results (predictions) match our observations? • Error sources: model, application, observation.

  4. Still more questions... • What are the most important parts of a security model? • How can we validate a security model? • How can we validate an application of a security model? • How can we validate our observations of a secure system? • A journey of a thousand miles! We’ll take some initial steps...

  5. Human-based security! • Axioms: A1. Security and distrust are determined by human fears. A2. Functionality and trust are determined by human desires. • If nobody could be harmed or helped by a system, then ... • How could this system be secure or insecure? • How could it be functional or non-functional?

  6. Systems and Actors: Definitions • A system is a structured entity that interacts with other systems. • Every system is composed of atomic units called actors. • Every system has a distinguished actor called its constitution, which specifies • its constituent actors and their relationships; its interactional behaviour; and how the constitution will change as a result of its system’s interactions. • A constitution is rarely a complete specification. • If we insisted on completeness, we could not include humans in our models.

  7. System Architecture • Three types of relationships between actors • Hierarchical: a superior (owning) actor and its inferior actors (subsystems). • Peering: anonymous equals, with voting rights. • Aliased: to represent the different roles played by the same human or real-world system.
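
A minimal sketch, in plain JavaScript (the language of the lecture's Caja examples), of the vocabulary of slides 6 and 7. The field and function names here are assumptions for illustration, not notation from the lecture:

```javascript
// Three relationship types between actors (slide 7).
const RelationKind = Object.freeze({
  HIERARCHICAL: "hierarchical", // a superior (owning) actor and its inferiors
  PEERING: "peering",           // anonymous equals, with voting rights
  ALIASED: "aliased",           // roles played by the same human or system
});

// A system is a structured set of actors, one of which is distinguished
// as its constitution (slide 6).
function makeSystem(name, actors, constitution) {
  return { name, actors, constitution, relations: [] };
}

function relate(system, kind, from, to) {
  system.relations.push({ kind, from, to });
}

const s = makeSystem("AgencyX", ["constitution", "clerk", "auditor"], "constitution");
relate(s, RelationKind.HIERARCHICAL, "constitution", "clerk");
```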

  8. Interactions • Axiom A3: System activity can be decomposed into interactions: A: M(B) → C • A, B, and C are systems. • Note: A, B, or C may be null, e.g. M → C. • M is a message: information (mass, or energy) that is transmitted from A to C, and which may be a function of B. • B is the subject of the message. For example, “A introduces B to C”.
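
Axiom A3's interaction form translates directly into a record; a sketch under the assumption that plain objects are an adequate representation:

```javascript
// An interaction A: M(B) -> C, where any of A, B, or C may be null
// (e.g. M -> C has a null sender and subject).
function interact(A, M, B, C) {
  return { sender: A, message: M, subject: B, receiver: C };
}

// The slide's example, "A introduces B to C":
const intro = interact("Alice", "introduces", "Bob", "Carol");
console.log(`${intro.sender}: ${intro.message}(${intro.subject}) -> ${intro.receiver}`);
```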

  9. The Caja Project at Google • Rewrite JavaScript to enforce capabilities. • Alice: foo(Carol) → Bob, i.e. Alice authorises Carol to provide “foo” to Bob.

  10. Modelling a Caja Guard • Alice has authority to call foo(Carol). • Carol is an external service provider. • foo() is a JavaScript object in Alice’s secure browser. • Bob is an untrusted JavaScript object. • Alice uses Caja to build gfoo(foo(Carol)). • Alice gives gfoo() to Bob. • Bob is unable to access foo(Carol) except by calling gfoo(), because Caja uses a capability-safe subset of JavaScript. (Granovetter diagram: Alice gifts gfoo() to Bob; Alice holds foo() and gfoo(); foo() reaches Carol.)
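
Caja itself enforces capabilities by rewriting untrusted JavaScript into a capability-safe subset; the closure below is only an illustrative sketch of the guard idea in ordinary JavaScript, reusing the slide's names (the policy predicate is an assumption):

```javascript
// foo(Carol): a capability Alice holds for invoking Carol's service.
function makeFoo(carol) {
  return request => carol.handle(request);
}

// gfoo(foo(Carol)): Alice wraps foo in a guard enforcing her policy.
// Bob receives only gfoo, so he reaches Carol solely through the guard.
function makeGuard(foo, isAllowed) {
  return request => {
    if (!isAllowed(request)) throw new Error("refused by guard");
    return foo(request);
  };
}

const carol = { handle: r => `Carol served: ${r}` };
const gfoo = makeGuard(makeFoo(carol), r => r === "foo");
console.log(gfoo("foo")); // Bob's only route to foo(Carol)
```

Because Bob never holds a direct reference to foo(), revoking or tightening his access is a local change to the guard, which is the point of the Granovetter-style introduction.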

  11. Owners and Sentience • Axiom A4: Every system has an owner, and every owner is a system. • If a constitutional actor C is a subsystem of itself (i.e. if C owns C, and |C| = 1), then we say that “C is a sentient actor”. • We use sentient actors to model humans.

  12. Judgement Actors • Axiom A5: Every system has a distinguished actor called its “judgement actor”, which specifies its security and functionality requirements. • When a judgement actor is sent a message containing a list of actions, it may reply to the sender with a judgement. • A list of actions resulting in a positive judgement is a functional behaviour. • A list of actions resulting in a negative judgement is a security fault.
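
A judgement actor can be sketched as a function from a list of actions to a judgement; representing the requirements as a set of forbidden actions is my simplification:

```javascript
function makeJudgementActor(forbiddenActions) {
  return actions => {
    const faults = actions.filter(a => forbiddenActions.has(a));
    return faults.length === 0
      ? { judgement: "positive" }          // a functional behaviour
      : { judgement: "negative", faults }; // a security fault
  };
}

const judge = makeJudgementActor(new Set(["read:secret"]));
console.log(judge(["read:public", "write:log"])); // { judgement: 'positive' }
console.log(judge(["read:secret"]));              // negative, with the faults listed
```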

  13. Analyses • A descriptive and interpretive report of a judgement actor's (likely) responses to a (possible) series of system events is called an analysis of this system. • If an analysis considers only security faults, then it is a security analysis. • If an analysis considers only functional behaviour, then it is a functional analysis. • We can model an analyst as an actor in our systems!

  14. The Hierarchy • Control is exerted by a superior power. • Prospective controls are not easy to evade. • Retrospective controls are punishments. • The Hierarch grants allowances to inferiors. (Diagram: a Hierarch such as a King, President, Chief Justice, or Pope at the top; peons, illegal immigrants, felons, or excommunicants at the bottom.) • The Hierarch can impose and enforce obligations. • In the Bell-LaPadula model, the Hierarch is concerned with confidentiality: inferiors are prohibited from reading superiors’ data, while superiors are allowed to read their inferiors’ data.
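
The slide's Bell-LaPadula reading rule is easy to make precise, under the standard assumption that clearance levels are totally ordered (integers here):

```javascript
const levels = { public: 0, confidential: 1, secret: 2 };

// Simple-security property: a subject may read an object only if the
// subject's level dominates the object's (read down, never read up).
function canRead(subject, object) {
  return levels[subject] >= levels[object];
}

console.log(canRead("secret", "public")); // true: superiors read down
console.log(canRead("public", "secret")); // false: inferiors cannot read up
```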

  15. The Alias (in an email use case) • We use aliases every time we send personal email from our work computer. • We have a different alias in each organisation. • We are prohibited from revealing “too much” about our organisations. • We are prohibited from accepting dangerous goods and services. (Diagram: C, acting as a governmental agent within AgencyX; C, acting as a Gmail client within Gmail.) • Each of our aliases is in a different security environment. • Managing aliases is difficult, and our computer systems aren’t very helpful…

  16. The Peerage • The peers define the goals of their peerage. • If a peer misbehaves, their peers may punish them only by ignoring them (shunning). • Peers can trade goods and services. (Diagram: peers, group members, or citizens of an ideal democracy, served by a facilitator, moderator, or democratic leader.) • The trusted servants of a peerage do not exert control over peers. • The trusted servants may be aliases of peers, or they may be automata.

  17. Example: A Peerage Exerting Audit Control on a Hierarchy • Peers elect one or more Inspectors-General. • The OS Administrator makes a Trusting appointment when granting auditor-level Privilege to an alias of an Inspector-General. • The Auditor discloses an audit report to their Inspector-General alias. • The audit report can be read by any Peer. • Peers may disclose the report to non-Peers. (Diagram: a hierarchy of OS Root, Administrator, and Auditor alongside a peerage of Users/Peers, with elected Inspectors-General IG1 and IG2 and a Chair of the User Assurance Group.)

  18. Owner-Centric Security • Axiom A6. The judgement actor of a system is a representation of the desires and fears of its owner. • Requirements are poorly defined, if the analyst’s point of view isn’t stated. • Stakeholder analysis: The analyst should consider the (likely) security requirements of anyone who is (likely to be) affected by a system, when helping an owner define the judgement actor for their system. • The stakeholder analysis may reveal that the owner has some privacy requirements – if the owner fears that their system will reveal private information about its users.

  19. What can an owner do? • An owner might pursue their desires by modifying their system, or by controlling its environment. • These are functional enhancements. • A fearful owner may seek security enhancements • by modifying their own system, or • by exerting control over other systems. • Security enhancements may cause functional degradations, and vice versa. • Separating the two analyses may help an owner understand their options. • Technologically-oriented analysts may not consider a full range of control options.

  20. Lessig’s Taxonomy of Control • Governments make things legal or illegal. • Our culture makes things moral or immoral. • The world’s economy makes things inexpensive or expensive. • Computers make things easy or difficult. (Diagram: a two-by-two layout with the poles Legal/Illegal, Moral/Immoral, Inexpensive/Expensive, and Easy/Difficult.)

  21. Temporal & Organisational Dimensions • Prospective controls: • Architectural security (easy/hard) • Economic security (inexpensive/expensive) • Retrospective controls: • Legal security (legal/illegal) • Normative security (moral/immoral) • Temporality = {prospective, retrospective}. • Organisation = {hierarchy, peerage}.

  22. Security Requirements (Traditional) • Confidentiality: no one is allowed to read, unless they are authorised. • Integrity: no one is allowed to write, unless they are authorised. • Availability: all authorised reads and writes will be performed by the system. • Authorisation: giving someone the authority to do something. • Authentication: being assured of someone’s identity. • Identification: knowing someone’s name or ID#. • Auditing: maintaining (and reviewing) records of security decisions.

  23. Micro to Macro Security Req’ts • “Static security”: system properties (confidentiality, integrity, availability). • “Dynamic security”: system processes (Authentication, Authorisation, Audit). • Beware the “gold-plated” system design! • “Security Governance”: human oversight • Specification, or Policy (answering the question of what the system is supposed to do), • Implementation (answering the question of how to make the system do what it is supposed to do), and • Assurance (answering the question of whether the system is meeting its specifications).

  24. Clarifying Static Security • Confidentiality, Integrity, and Availability are appropriate for read/write data. • What about security for executables? • Unix directories have “rwx” permission bits: XXXity! • What about security for directories, services, ...? • Each level of a taxonomy should have a few categories which cover all the possible cases. • Each case should belong to one category. • Confidentiality, Integrity, XXXity, “etc”ity are all Prohibitions. • Availability is a Permission.
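
For concreteness, the “rwx” bits decode as follows (the octal mode 0o754 is just an example value; the masks are the standard POSIX ones):

```javascript
const mode = 0o754; // rwxr-xr--
const ownerBits = {
  read:    Boolean(mode & 0o400),
  write:   Boolean(mode & 0o200),
  execute: Boolean(mode & 0o100), // the "XXXity" permission
};
console.log(ownerBits); // { read: true, write: true, execute: true }
```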

  25. Prohibitions and Permissions • Prohibition: forbid something from happening. • Permission: allow something to happen. • There are two types of P-secure systems: • In a prohibitive system, all operations are forbidden by default. Permissions are granted in special cases. • In a permissive system, all operations are allowed by default. Prohibitions are special cases. • Prohibitive systems have permissive subsystems. • Permissive systems have prohibitive subsystems. • Prohibitions and permissions are properties of hierarchies, such as a judicial system. • Most legal controls (“laws”) are prohibitive. A few are permissive.
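
The two defaults encode directly from the slide's definitions (the predicate representation is mine):

```javascript
function prohibitive(permissions) {
  return op => permissions.has(op);   // everything forbidden by default
}
function permissive(prohibitions) {
  return op => !prohibitions.has(op); // everything allowed by default
}

const vault = prohibitive(new Set(["read:catalogue"]));
console.log(vault("read:catalogue")); // true: a permission granted as a special case
console.log(vault("open:vault"));     // false: forbidden by default
```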

  26. Extending our Requirements Taxonomy • Contracts are non-hierarchical: agreed between peers. • Obligations are promises to do something in the future. • Exemptions are exceptions to an obligation. • There are two types of O-secure systems. • Obligatory systems have exemptive subsystems. • Exemptive systems have obligatory subsystems. • If a party alleges that another party has not met an obligation, then the contract’s enforcement clauses are invoked. Typically... • Arbitration: a mutually-trusted peer attempts to find a mutually-acceptable resolution to the contractual difficulty. • Litigation: the contract specifies a legal person (i.e. an alias of the obligated peer) who is ultimately responsible for contract fulfilment.

  27. Enforceable Contracts are OP-secure! • A legal person can petition the Judge. • The Judge controls all legal persons, and may require or prohibit specific actions and inactions: P-secure. • A typical contract includes an obligation to submit to a binding arbitration, during the dispute-resolution process: O-secure. • Contracts are based on trust between peers, with OP-security as a backstop. • Cloud security is currently problematic, in part because of a lack of contractual trust. (Diagram: a Judge above the legal persons; peers bound by a Contract, with an Arbitrator as a Trusted Third Party.)

  28. Review: Inactions and Actions • Four types of static security requirements: • Obligations are forbidden inactions, e.g. “I.O.U. $1000.” • Exemptions are allowed inactions, e.g. “You need not repay me if you have a tragic accident.” • Prohibitions are forbidden actions. • Permissions are allowed actions. • Two classification axes: • Strictness = {forbidden, allowed}, • Activity = {action, inaction}. • “Natural habitat” of these requirements: • Peerages typically forbid and allow inactions, • Hierarchies typically forbid and allow actions.
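
The two classification axes determine the four requirement types mechanically; a direct table encoding (the key format is mine):

```javascript
function requirementType(strictness, activity) {
  return {
    "forbidden inaction": "obligation",  // e.g. "I.O.U. $1000"
    "allowed inaction":   "exemption",
    "forbidden action":   "prohibition",
    "allowed action":     "permission",
  }[`${strictness} ${activity}`];
}

console.log(requirementType("forbidden", "inaction")); // "obligation"
```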

  29. Review: Today’s Questions • What is security? • Three layers: static, dynamic, governance. • Static security requirements: (forbidden, allowed) x (action, inaction). • Unanswered: how to characterise dynamic and governance requirements? • How can owners understand and improve the security and functionality of their systems? • Controls: (prospective, retrospective) x (hierarchy, peerage). • What is trust?

  30. Niklas Luhmann, on Trust • A prominent, and controversial, sociologist. • Thesis: Modern systems are so complex that we must use them, or avoid using them, without carefully examining all risks, benefits, and alternatives. • Trust is a reliance without an assessment. • We cannot control any risk we haven’t assessed ⇒ we trust any system which might harm us. (This is the usual definition.) • Distrust is an avoidance without an assessment.

  31. Security, Trust, Distrust, ... • Dimensions 1-2 are the requirements: (forbidden, allowed) x (action, inaction). • Dimensions 3-4 are the controls: (prospective, retrospective) x (hierarchy, peerage). • The fifth dimension in our framework is assessment, with three cases: • Cognitive assessment (of security & functionality), • Optimistic non-assessment (of trust & coolness), • Pessimistic non-assessment (of distrust & uncoolness).

  32. Security vs. Functionality • Sixth dimension: Feedback (negative vs. positive) to the owner of the system. • We treat security as a property right. • Every system has an owner, otherwise we cannot define its security or functionality. • The owner reaps the benefits from functional behaviour, and pays the penalties for security faults. (Controls are applied to the owner, ultimately.) • The analyst must understand the owner’s desires and fears.

  33. Summary of our Taxonomy • Requirements: • Strictness = {forbidden, allowed}, • Activity = {action, inaction}, • Feedback = {negative, positive}, • Assessment = {cognitive, optimistic, pessimistic}. • Controls: • Temporality = {prospective, retrospective}, • Organisation = {hierarchy, peerage}. • Layers = {static, dynamic, governance}.
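
The whole taxonomy fits in one small data structure; the nesting below is my arrangement, while the dimension names and values are the slide's:

```javascript
const taxonomy = {
  requirements: {
    strictness: ["forbidden", "allowed"],
    activity:   ["action", "inaction"],
    feedback:   ["negative", "positive"],
    assessment: ["cognitive", "optimistic", "pessimistic"],
  },
  controls: {
    temporality:  ["prospective", "retrospective"],
    organisation: ["hierarchy", "peerage"],
  },
  layers: ["static", "dynamic", "governance"],
};
```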

  34. Application: Access Control • An owner may fear losses as a result of unauthorised use of their system. • This fear induces an architectural requirement (prospective, hierarchical): • Accesses are forbidden, with allowances for specified users. • It also induces an economic requirement, if access rights are traded in a market economy. • If the peers are highly trusted, then the architecture need not be very secure.

  35. Access Control (cont.) • Legal requirement (retrospective, hierarchical): Unauthorised users are prosecuted. • Must collect evidence – this is another architectural requirement. • Normative requirement (retrospective, peering): Unauthorised users are penalised. • Must collect deposits and evidence, if peers are not trusted.

  36. Functions of Access Control • If an owner desires authorised accesses, then there will be functional requirements. • Forbidden inaction, positive feedback (reliability) • If an owner fears losses from downtime, then there are also security requirements. • Forbidden inaction, negative feedback (availability) • Security and functionality are intertwined! • The analyst must understand the owner’s motivation, before writing the requirements. • The analyst must understand the likely attackers’ motivation and resources, before prioritising the requirements.

  37. Summary • What is security? What is trust? • Four qualitative dimensions in requirements: Strictness, Activity, Feedback, and Assessment. • Two qualitative dimensions in control: Temporality and Power. • Can security be organised? Can organisations be secured? • Yes: Static, Dynamic, and Governance levels. • Hybrids of peerages and hierarchies seem very important.

  38. Open Questions • Can our framework be extended to dynamic systems, e.g. Clark-Wilson? • How should we model introspection? • How should changes to architectures, and to judgement actors, be specified and controlled? • Would an analysis, in our framework, be helpful in the debate over ECMA (JavaScript) harmonisation? • Capabilities (as in Caja) are natural in our models, but will be difficult to specify if analysts aren’t able to describe them to owners...

  39. Lecture Plan • Techniques for software watermarking and fingerprinting. • Techniques for software obfuscation and tamperproofing. • Steganography: functions and threats. • Axiomatic and behavioural trust.
