Chapter 2 An Historical Perspective

  1. Chapter 2 An Historical Perspective I Sue Fitzgerald Metropolitan State University CS 328 Computer Security Fall 2008

  2. Overview • The Orange Book • Models • Information flow – Bell-LaPadula • Biba - integrity • Access Control • Object Reuse • Covert Channels

  3. History of Computer Security • Military and other governmental agencies • Had lots of sensitive information • Adversaries were real (other countries) • US wanted to use computers for storage and processing • Led to concepts and models, rigid standards on features and assurance/validation (1970’s and 1980’s)

  4. The Rainbow Series • Published by the US govt (DoD) in the 1980’s • Provided a standard to measure security • A guide to manufacturers • A guide for what computers to buy for DoD applications • Published as a set of books with colored covers

  5. The Orange Book • Set out the criteria for trusted computer systems • Defined • Security features – what systems must do • Assurance – are the features present and working properly? • Vendors submitted their products to the govt for evaluation and rating

  6. Historical Security Models • Real systems too complex and fragile to experiment on • Solution: Abstract the real system to a model • Early models focused on information flow • Models were restricted and mathematically tractable • Correctness can be proved

  7. Information Flow • Information confinement: Make sure the data does not ‘flow’ to the wrong place • Clearance levels indicate trustworthiness or sensitivity • Top secret (most trustworthy/sensitive) • Secret • Confidential • Unclassified (least trustworthy/sensitive)

  8. Clearance Levels • Label each subject with a clearance level that indicates his/her trustworthiness • Label each object with a clearance level that indicates its sensitivity • Ensure that information flows only upward (toward more trustworthy/more sensitive) • Less privileged subjects cannot read more sensitive data

  9. Categories • Add “need to know” criteria • Categories give finer-grained structure than the four levels alone • AKA compartments • E.g., Categories = { “VT”, “NH”, “ME”, “NY” } • Label each subject with the topics they are allowed to have knowledge of • Label each object with the topic it belongs to

  10. Categories (continued) • An object can belong to more than one topic (a set of topics) • The more topics in the set, the more restrictive the clearance needed for that object • Sets can be ordered – { VT } ⊆ { VT, NH }

  11. Categories (continued) • System makes sure information only flows upward (more restrictive) – subjects cannot read about topics which they don’t need to know about

  12. Partial Orders and Lattices • However, there may be pairs of labels that have no order relative to one another • See Figure 2.2, page 27 • A partial order is the mathematical construct for orderings that may leave some pairs of elements uncompared • A lattice is a partial order in which, additionally, every pair of elements has a least upper bound and a greatest lower bound

  13. Why Do We Care? • In military and defense environments, information should be ordered according to both its sensitivity level and a ‘need to know’ • Combine both the clearance level and a subset of the categories • See Figure 2.3 on page 28
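
A minimal C sketch of the combined labels and their dominance relation (the names and the bitmask encoding of the category set are illustrative, not from the book): a label dominates another only if it wins on both the level comparison and the category-subset test, so labels that differ in incomparable ways remain unordered, exactly as the partial order requires.

    #include <stdbool.h>
    #include <stdint.h>

    enum level { UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET };

    struct label {
        enum level lvl;
        uint32_t   cats;   /* bit i set => member of category i */
    };

    /* a dominates b iff a's level is at least b's AND a's category
     * set is a superset of b's. */
    bool dominates(struct label a, struct label b) {
        return a.lvl >= b.lvl && (a.cats & b.cats) == b.cats;
    }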

  14. Multilevel Security • Multilevel security (MLS) refers to this general approach of partial orders and information flow • Systems that enforce flow rules within this framework are said to practice mandatory access control (MAC) • Systems that allow owners of objects to designate access control rules are said to practice discretionary access control (DAC)

  15. Bell-LaPadula Model • Bell and LaPadula built a formal model to describe the information flow (confidentiality) in a system • Subjects and objects are placed in clearance/category classes. • The model has 3 security properties

  16. Simple Security Property • “A subject can read an object only if the class of the subject dominates or equals the class of the object.” p. 29 • “Also known as the no-read-up rule”, p. 29 • “A subject cannot read data from an object “above” it in the lattice”, p. 29

  17. The *-Property • “If a subject has simultaneous read access to object O1 and write access to object O2, then the class of O2 must dominate or equal the class of O1.”, p. 29 • “AKA the no-write-down rule”, p. 29 • A subject cannot write data below it in the lattice. • “A subject that can read data in some class cannot write to lower class; such a write would violate the information flow rules.”, p. 29
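
The two rules reduce to a few lines of C. A minimal sketch using plain integer levels for brevity (with full level-plus-category labels, the >= comparisons would become the dominates() relation sketched earlier; the *-property is shown in its common simplified no-write-down form):

    #include <stdbool.h>

    /* Simple security property: no read up. */
    bool can_read(int subject_level, int object_level) {
        return subject_level >= object_level;
    }

    /* *-property: no write down. */
    bool can_write(int subject_level, int object_level) {
        return object_level >= subject_level;
    }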

  18. The *-Property (continued) • “This is also known as the high-water mark. The subject is marked by the highest level of data to which it has had access and thus by which it may have been contaminated.”, p. 30

  19. The Discretionary Security Property • “A subject S can perform an access on an object O only if that access is permitted in the S-O entry of the current access control matrix”, p. 30

  20. Basic Security Theorem • Bell and LaPadula proved that “if we start in a secure state and if each transition abides by [these properties], then the system remains in a secure state.”, p. 30

  21. Biba Model • Bell-LaPadula addresses confidentiality, not integrity • Biba model pertains to integrity • Subjects and objects are organized according to integrity levels • Subjects cannot read objects with lesser integrity nor write to objects with greater integrity • Prevents contamination from flowing to stronger parties.
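
In code, Biba is the mirror image of Bell-LaPadula. A minimal sketch over integer integrity levels (same illustrative simplification as the earlier Bell-LaPadula sketch):

    #include <stdbool.h>

    /* No read down: the object's integrity must be at least the subject's. */
    bool biba_can_read(int subject_integrity, int object_integrity) {
        return object_integrity >= subject_integrity;
    }

    /* No write up: the subject's integrity must be at least the object's. */
    bool biba_can_write(int subject_integrity, int object_integrity) {
        return subject_integrity >= object_integrity;
    }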

  22. Clark and Wilson • Developed a formal model for guaranteeing integrity based on well-formed transactions

  23. Chinese Wall • Prevents conflict of interest (e.g., a law firm representing both sides of a litigation) • Put objects into conflict classes • Subject can look at any object in the conflict class. • But once having looked at one object in the conflict class, subject cannot look at another • Timed Chinese Wall (times out)
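
A C sketch of the rule (the fixed class count and the per-subject bookkeeping are illustrative): the first object a subject touches in a conflict class binds the subject to it, and later requests in that class succeed only for that same object.

    #include <stdbool.h>

    #define NCLASSES 8            /* illustrative bound */

    struct subject {
        int chosen[NCLASSES];     /* -1 = nothing accessed yet in that class */
    };

    bool wall_allows(struct subject *s, int cc, int object_id) {
        if (s->chosen[cc] == -1) {
            s->chosen[cc] = object_id;   /* first access binds the subject */
            return true;
        }
        return s->chosen[cc] == object_id;
    }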

  24. RBAC • Role-based access control • Set up access control matrix based on domains or roles rather than on specific subjects • Assign people to one or more roles • Roles can be organized into hierarchies (e.g., generic employee, manager, system administrator)
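
A minimal C sketch of role-based checking, with hypothetical roles and permissions encoded as bitmasks: a user's effective rights are the union of the rights of every role they hold.

    #include <stdbool.h>
    #include <stdint.h>

    enum perm { P_READ = 1, P_WRITE = 2, P_AUDIT = 4 };

    static const uint32_t role_perms[] = {
        [0] = P_READ,                      /* generic employee */
        [1] = P_READ | P_WRITE,            /* manager */
        [2] = P_READ | P_WRITE | P_AUDIT,  /* system administrator */
    };

    bool rbac_allows(uint32_t user_roles, enum perm want) {
        uint32_t granted = 0;
        for (int r = 0; r < 3; r++)
            if (user_roles & (1u << r))
                granted |= role_perms[r];
        return (granted & want) == want;
    }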

  25. Separation of Duties • Prevents abuse of system by separating control (e.g., trader vs. auditor; student vs. teacher) • RBAC supports this

  26. Access Control Techniques using the Access Control Matrix • Discretionary Access Control (DAC) – subjects choose restrictions on their objects • Mandatory Access Control (MAC) – follows the lattice model; control is enforced based on labels on objects and subjects • Identification and authentication (I&A) – associate user with a name, permit access based on user identity

  27. Techniques (continued) • Labels – label objects with security level (include printers, I/O devices) • Reference monitor (security kernel) – checks the subjects, objects, labels and access rules • Complete mediation – reference monitor must be invoked (rules applied) for every access • Audit – log all actions
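
These pieces fit together in a small C sketch (the subjects, objects, and rights are illustrative): the matrix records who may do what, and the reference monitor is the single choke point consulted on every access; a real kernel would also append each decision to the audit log.

    #include <stdbool.h>

    enum { A_READ = 1, A_WRITE = 2 };
    #define NSUBJ 3
    #define NOBJ  3

    /* The access control matrix: one rights word per (subject, object). */
    static const unsigned char acm[NSUBJ][NOBJ] = {
        { A_READ | A_WRITE, A_READ, 0      },   /* subject 0 */
        { 0,                A_READ, A_READ },   /* subject 1 */
        { A_READ,           0,      0      },   /* subject 2 */
    };

    /* Reference monitor: invoked on every access (complete mediation). */
    bool monitor(int subj, int obj, unsigned char want) {
        return (acm[subj][obj] & want) == want;
    }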

  28. Outside the Matrix • All objects, subjects and permissible accesses are supposed to be in the access control matrix • The reference monitor is supposed to check every access • But there can be leaks in real systems

  29. Object Reuse • Operating systems reuse objects • Virtual memory • Several users (or processes) share physical memory • When a process (P1) is not using memory, another process (P2) may acquire and use it • If not scrubbed, P2 may see P1’s data

  30. Object Reuse – The Stack • Information can flow between elements of the same program • Local variables (variables local to one procedure) are stored on the program stack • When control returns from the procedure to the caller, the old data may remain on the stack • When a new procedure is called, its local variables may take their initial values from the old data on the stack
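
A C sketch of the leak (illustrative only: reading an uninitialized local is formally undefined behavior, and whether the remnant is visible depends on the compiler, optimization level, and stack layout; try -O0):

    #include <stdio.h>
    #include <string.h>

    void handle_secret(void) {
        char secret[16];
        strcpy(secret, "hunter2-key");    /* hypothetical sensitive value */
    }                                     /* frame popped; bytes linger */

    void next_call(void) {
        char buf[16];                     /* may overlay the old frame */
        printf("leftover: %.15s\n", buf); /* may print remnants of the key */
    }

    int main(void) {
        handle_secret();
        next_call();
        return 0;
    }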

  31. The Stack (continued) • The data stored on the stack could be sensitive – something like a cryptographic key • Even if the programmer zeros out the values at the end of the procedure, a smart compiler may optimize out those instructions
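
A C sketch of the problem and one fix: a final memset is a dead store from the optimizer's point of view (the buffer is never read again) and may be deleted, while writes through a volatile-qualified pointer must actually be performed. C11's optional memset_s and glibc's explicit_bzero exist for the same purpose.

    #include <string.h>

    /* May be optimized away entirely (dead-store elimination). */
    void weak_clear(char *key, size_t n) {
        memset(key, 0, n);
    }

    /* The volatile qualifier forces each store to happen. */
    void robust_clear(char *key, size_t n) {
        volatile char *p = key;
        while (n--) *p++ = 0;
    }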

  32. Object Reuse – The Heap • Memory that is dynamically allocated at runtime (malloc, new) is allocated from the heap • After this memory is freed (or garbage collected), it can be re-used. It may still have old, sensitive data in it when it is reallocated
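
A C sketch of heap reuse (illustrative only: reading newly allocated memory before writing it is undefined behavior, and whether the old bytes survive depends on the allocator and optimization level):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    int main(void) {
        char *key = malloc(32);
        if (!key) return 1;
        strcpy(key, "hypothetical-secret");
        free(key);                    /* freed, but not scrubbed */

        char *reused = malloc(32);    /* may be the very same block */
        if (!reused) return 1;
        printf("leftover: %.31s\n", reused);
        free(reused);
        return 0;
    }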

  33. Worse Yet • The operating system itself uses local variables, subroutines, stacks, heaps, and static (permanent, reusable) buffers

  34. Nonvolatile Storage • “Permanent” storage • When a file is deleted, space is deallocated • Parts of abandoned files may be reused • In ‘log-structured’ file systems, parts of files are stored in a log as a chain of updates. The ‘file’ is reconstructed from the log. All data previously written persists in the log

  35. Nonvolatile Storage (continued) • Flash memory – memory is not volatile (is not cleared when computer is turned off) • Virtual memory leaves a copy of running programs’ memory space on disk • Audit trails – yet another copy of data is logged

  36. Covert Channels • Operating systems allow interprocess communication (IPC) • IPC channels should be in the access control matrix • Covert channels are IPC channels that are not in the access control matrix • Covert channels are not checked by the reference monitor

  37. Covert Channels (continued) • Covert channels are used to violate policy • Information is inferred • Storage channels • File names or disk usage are observed, even if the contents of the file cannot be seen • For example, create or delete a certain file • Grow or shrink a certain file
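
A C sketch of such a storage channel (the path is illustrative): a high process encodes one bit per agreed interval in the mere existence of a file, which a low process can test even when it is forbidden to read the contents.

    #include <stdio.h>
    #include <unistd.h>

    void send_bit(int bit) {
        if (bit) {
            FILE *f = fopen("/tmp/flag", "w");   /* presence = 1 */
            if (f) fclose(f);
        } else {
            unlink("/tmp/flag");                 /* absence = 0 */
        }
    }

    int recv_bit(void) {
        return access("/tmp/flag", F_OK) == 0;
    }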

  38. Timing Channels • Messages sent via timing or frequency of page faults • Often inadvertent
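
A rough C sketch of the receiving end (the workload size and threshold are illustrative and machine-dependent): the receiver times a fixed computation; a colluding sender signals 1 by burning CPU during the interval, which measurably slows the receiver, and 0 by sleeping.

    #include <time.h>

    int recv_bit_by_timing(void) {
        struct timespec t0, t1;
        volatile unsigned long sink = 0;
        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (unsigned long i = 0; i < 100000000UL; i++)
            sink += i;                 /* fixed workload */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        double secs = (t1.tv_sec - t0.tv_sec)
                    + (t1.tv_nsec - t0.tv_nsec) / 1e9;
        return secs > 0.5;             /* slower than baseline => sender busy */
    }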

  39. System Structure • Trusted computing base (TCB) • Part of the computer that must be trusted • Must be small • Perimeter must be well-defined • Security perimeter • Separates the TCB from the rest of the system

  40. System Structure (continued) • Security kernel • The reference monitor • Module that checks the access rules • Trusted path • Mechanism by which user can be sure they are communicating with the TCB • Path from TCB to user • Secure attention key – path from user to TCB

  41. System Structure (continued) • Trusted channel • Trusted path between a user or process and a trusted application

  42. Best Engineering Practices • Layering • Software should be designed in well-defined layers • Interaction between layers is limited to specific interfaces between adjacent layers • Abstraction and data hiding • Layers hide their implementation details • Configuration control • Control updates

  43. System Assurance • Orange Book defines how to measure ‘security’ level of system • Trusted facility management – protection of source code, compilers, build process • Trusted distribution – distribution channel and its protections • Testing – both for what the system should do and for what it should not do

  44. System Assurance (continued) • Documentation – what the system does • Formal top-level specification • Define what the system is supposed to do (spec) • Verify that system matches its security policy • Verify that spec matches code

  45. Orange Book Divisions • D – minimal protection (least secure) • C – discretionary protection • B – mandatory protection • A – verified protection (most secure)

  46. Classes • There are classes within each division • C1 – has ways for users to restrict access to data • C2 – has login and auditing • B1 – has labeling, MAC, informal security policy • B2 – more formal, TCB • B3 – stronger reference monitor

  47. Real Secure Systems • GEMSOS • Layered • Uses Bell-LaPadula for confidentiality • Uses Biba + hardware for integrity • Earned an A1 rating

  48. Acronyms • INFOSEC – information security • COMSEC – communications security • OPSEC – operations security

  49. Comprehensive Model of Information Systems Security • Developed by NSA • Confidentiality – integrity – availability • Information states • In transmission • In storage • Processing • Approaches – technology, policy and practices, education/training/awareness

  50. Summary • US DoD developed the Rainbow Series • The Orange Book set out the criteria for trusted computer systems • Confining information (information flow) was the first concern • Bell-LaPadula developed an information flow model based on the access control matrix
