
463.8 Information Flow


  1. 463.8 Information Flow UIUC CS463 Computer Security

  2. Overview • Motivation • Non-interference model • Information flow and language analysis • Confinement and covert channels

  3. Required • Reading • Chapter 8 up to the beginning of 8.2.1 • Goguen / Meseguer 82 • Chapter 16 sections 16.1 and 16.3 • Chapter 17 • Garfinkel / Pfaff / Rosenblum 03 • Exercises • Section 8.9 exercise 1 • Section 16.9 exercises 2, 4-6 • Section 17.7 exercises 2a, 3-4, 6-7

  4. Example 1 • Downloadable financial planner (diagram: Disk, Network, Accounting Software) • Access control is insufficient • Encryption is necessary, but also insufficient

  5. Noninterference (diagram: Disk, Network, Accounting Software) • Private data does not interfere with network communication • Baseline confidentiality policy

  6. Example 2 • Smart card for financial transactions • Card contains sensitive financial information used to make a transaction • Used in the terminal of a merchant • Card is tamper-resistant, protects card secrets from both the merchant and card holder • Java cards provide limited card-based computation • Must split the computation between the card, the terminal, and network connections

  7. Model of Noninterference • Represent noninterference as a relation between groups of users and commands • Users in group G do not interfere with those in group G’ if the state seen by G’ is not affected by the commands executed by members of G Goguen Meseguer 82

  8. State Automaton • U – Users • S – States • C – Commands • Out – Outputs • do : S × U × C → S – state transition function • out : S × U → Out – output function • s0 – initial machine state

  9. Capability System • U, S, Out – users, states, and outputs as before (in a state machine) • SC – State commands • Capt – Capability tables • CC – Capability commands • out : S × Capt × U → Out • do : S × Capt × U × SC → S • cdo : Capt × U × CC → Capt – Capability selection function • s0 ∈ S and t0 ∈ Capt – Initial state and capability tables

  10. Transition Function • C = SC ⊎ CC – Commands (disjoint union) • csdo : S × Capt × U × C → S × Capt • csdo(s,t,u,c) = (do(s,t,u,c), t) if c ∈ SC • csdo(s,t,u,c) = (s, cdo(t,u,c)) if c ∈ CC • csdo* : S × Capt × (U × C)* → S × Capt • csdo*(s,t,nil) = (s,t) • csdo*(s,t,w.(u,c)) = csdo(csdo*(s,t,w),u,c) • [[w]] = csdo*(s0,t0,w) • [[w]]u = out([[w]],u)

  11. Projection • Let G ⊆ U, A ⊆ C, and w ∈ (U × C)* • PG(w) = subsequence of w obtained by deleting pairs (u,c) where u ∈ G • PA(w) = subsequence of w obtained by deleting pairs (u,c) where c ∈ A • PG,A(w) = subsequence of w obtained by deleting pairs (u,c) where u ∈ G and c ∈ A

  12. Noninterference • M a state machine, G, G’ ⊆ U, and A ⊆ C • G :| G’ iff ∀ w ∈ (U × C)*. ∀ u ∈ G’. [[w]]u = [[PG(w)]]u • A :| G iff ∀ w ∈ (U × C)*. ∀ u ∈ G. [[w]]u = [[PA(w)]]u • A,G :| G’ iff ∀ w ∈ (U × C)*. ∀ u ∈ G’. [[w]]u = [[PG,A(w)]]u
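The definition of G :| G’ can be exercised by brute force on a small machine. The sketch below is illustrative, not from the slides: the users "hi" and "lo", the counter state, and the "inc" command are all hypothetical. It compares [[w]]u with [[PG(w)]]u for every short command sequence w.

```python
# Brute-force check of the noninterference assertion G :| G' on a toy
# two-user state machine.
from itertools import product

USERS = ["hi", "lo"]
CMDS = ["inc", "noop"]
s0 = (0, 0)  # state = (hi-counter, lo-counter)

def do(s, u, c):
    hi, lo = s
    if c == "inc":
        return (hi + 1, lo) if u == "hi" else (hi, lo + 1)
    return s

def out(s, u):
    # each user observes only their own counter
    hi, lo = s
    return hi if u == "hi" else lo

def run(w):
    s = s0
    for (u, c) in w:
        s = do(s, u, c)
    return s

def purge(w, G):
    # P_G(w): delete pairs (u, c) with u in G
    return [(u, c) for (u, c) in w if u not in G]

def noninterferes(G, Gprime, maxlen=4):
    # G :| G' iff for all w and all u in G':
    #   out(run(w), u) == out(run(P_G(w)), u)
    pairs = list(product(USERS, CMDS))
    for n in range(maxlen + 1):
        for w in product(pairs, repeat=n):
            for u in Gprime:
                if out(run(list(w)), u) != out(run(purge(list(w), G)), u):
                    return False
    return True

print(noninterferes({"hi"}, {"lo"}))  # hi's commands are invisible to lo
print(noninterferes({"lo"}, {"lo"}))  # lo interferes with itself
```

The check is only over sequences up to a bounded length, so it is evidence, not a proof; for this machine it correctly reports that "hi" does not interfere with "lo".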

  13. Security Policies • Noninterference assertions have the forms G :| G’ A :| G A,G :| G’ • A security policy is a set of noninterference assertions

  14. Example 1 • A :| {u} • The commands in A do not interfere with the state of user u

  15. Example 2 MLS • L – set of security levels ordered by ⊑ (e.g., Unclassified ⊑ Secret ⊑ Top Secret) • Level : U → L – assignment of security levels in L • Above(l) = { u ∈ U | l ⊑ Level(u) } • Below(l) = { u ∈ U | Level(u) ⊑ l } • M is multi-level secure with respect to L if, for all l ⊏ l’ in L, Above(l’) :| Below(l)
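The Above and Below sets can be computed directly. A small sketch, using a hypothetical three-user population and the totally ordered levels from the slide:

```python
# Computing Above(l) and Below(l) for a totally ordered level set
# Unclassified (U) < Secret (S) < Top Secret (TS).
LEVELS = {"U": 0, "S": 1, "TS": 2}          # level -> rank in the order
Level = {"alice": "TS", "bob": "S", "carol": "U"}  # illustrative users

def above(l):
    # Above(l) = users u with l ⊑ Level(u)
    return {u for u, lu in Level.items() if LEVELS[l] <= LEVELS[lu]}

def below(l):
    # Below(l) = users u with Level(u) ⊑ l
    return {u for u, lu in Level.items() if LEVELS[lu] <= LEVELS[l]}

print(sorted(above("S")))  # users cleared at Secret or higher
print(sorted(below("S")))  # users at Secret or lower
```

Multi-level security then asserts, for each pair l ⊏ l’, that the commands of above(l’) do not change what below(l) observes.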

  16. MLS Continued • G is invisible if G :| Gᶜ, where Gᶜ is the complement of G in U • Proposition 1: If M,L is multi-level secure, then Above(l) is invisible for every l ∈ L.

  17. Example 3 Isolation • A group of users G is isolated if G :| Gᶜ and Gᶜ :| G • A system is completely isolated if every user in U is isolated

  18. Example 4 Channel Control • View a channel as a set of commands A • We can assert that groups of users G and G’ can only communicate through channel A with the following two noninterference assertions: • Aᶜ,G :| G’ and • Aᶜ,G’ :| G

  19. Example 5 Information Flow (diagram: users u and u’ connected by channel A; u’ connected to u1 by channel A1 and to u2 by channel A2) • {u’,u1,u2} :| {u} • {u1,u2} :| {u’} • {u1} :| {u2} • {u2} :| {u1} • Aᶜ,{u} :| {u’,u1,u2} • A1ᶜ,{u’} :| {u1} • A2ᶜ,{u’} :| {u2}

  20. Example 6 Security Officer • Let A be the set of commands that can change the security policy • seco ∈ U is the only individual permitted to use these commands to make changes • This is expressed by the following policy: A,{seco}ᶜ :| U

  21. Entropy and Information Flow • Idea: info flows from x to y as a result of a sequence of commands c if you can deduce information about x before c from the value in y after c • Formalized using information theory

  22. Entropy • Entropy: measure of uncertainty in a random variable • H(X) = -Σi p(xi) log p(xi) • constant: H(c) = 0 • uniform distribution: H(Un) = log n • Conditional entropy: how much uncertainty is left about X knowing Y • H(X | Y) = -Σj p(yj) Σi p(xi | yj) log p(xi | yj) • H(X | Y) = Σj p(yj) H(X | Y=yj) • H(X | Y) = H(X) if X, Y are independent
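These formulas are easy to check numerically. A minimal sketch (not part of the slides), with distributions represented as plain dicts mapping values to probabilities:

```python
# Entropy and conditional entropy for finite distributions, in bits.
from math import log2

def H(p):
    # H(X) = -sum_i p(x_i) log p(x_i); terms with p = 0 contribute 0
    return sum(-q * log2(q) for q in p.values() if q > 0)

def H_cond(joint):
    # H(X | Y) = sum_j p(y_j) H(X | Y = y_j),
    # where joint maps (x, y) -> p(x, y)
    py = {}
    for (x, y), q in joint.items():
        py[y] = py.get(y, 0.0) + q
    total = 0.0
    for y, qy in py.items():
        cond = {x: q / qy for (x, yy), q in joint.items() if yy == y}
        total += qy * H(cond)
    return total

print(H({0: 0.5, 1: 0.5}))  # uniform on 2 values: 1 bit
print(H({0: 1.0}))          # a constant: 0 bits
# independent fair bits: H(X | Y) = H(X) = 1 bit
joint = {(x, y): 0.25 for x in (0, 1) for y in (0, 1)}
print(H_cond(joint))
```

The last line illustrates the final bullet: when X and Y are independent, conditioning on Y removes no uncertainty about X.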

  23. Entropy and Information Flow • Definitions: • Sequence of commands c • xs, ys are the (distributions of) values of x, y before the commands c • xt, yt are the values after c • Commands c leak information about x into y if: • H(xs | yt) < H(xs | ys) • If there is no y at time s, then H(xs | yt) < H(xs)

  24. Example 1 • y := x • If we learn y, then we know x • H(xs | yt=y) = 0 for all y, so H(xs | yt) = 0 • Information flow exists if xs not a (known) constant

  25. Example 2 • Destroyed information 1: r1 := x 2: … 3: r1 := r1 xor r1 4: y := z + r1 • H(xs | yt) = H(xs | ys) since after line 3, r1 = 0

  26. Example 3 • Command is • if x = 1 then y := 0 else y := 1; where: • x, y equally likely to be either 0 or 1 • H(xs) = 1 as x can be either 0 or 1 with equal probability • H(xs | yt) = 0 as if yt = 1 then xs = 0 and vice versa • Thus, H(xs | yt) = 0 < 1 = H(xs) • So information flowed from x to y
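The same conclusion can be reached numerically. The sketch below hardcodes the distributions from the example: xs is a uniform fair bit, and after the command yt = 1 − xs, so given yt the value of xs is fully determined.

```python
# Example 3 worked numerically: "if x = 1 then y := 0 else y := 1".
from math import log2

def H(p):
    # Shannon entropy of a finite distribution {value: probability}
    return sum(-q * log2(q) for q in p.values() if q > 0)

# before the command: x_s is uniform on {0, 1}
H_xs = H({0: 0.5, 1: 0.5})  # = 1 bit

# after the command: y_t = 1 - x_s, so conditioning on y_t leaves a
# point distribution for x_s (zero uncertainty), for either value of y_t
H_xs_given_yt = 0.5 * H({0: 1.0}) + 0.5 * H({1: 1.0})  # = 0 bits

print(H_xs, H_xs_given_yt)
assert H_xs_given_yt < H_xs  # information flowed from x to y
```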

  27. Implicit Flow of Information • Information flows from x to y without an explicit assignment of the form y := f(x) • f(x) an arithmetic expression with variable x • Example from the previous slide: • if x = 1 then y := 0 else y := 1; • So we must look for implicit flows of information to analyze a program

  28. Compiler-Based Mechanisms • Detect unauthorized information flows in a program during compilation • Analysis not precise, but secure • If a flow could violate policy (but may not), it is unauthorized • No unauthorized path along which information could flow remains undetected • Set of statements certified with respect to information flow policy if flows in set of statements do not violate that policy Denning Denning 77

  29. Example if x = 1 then y := a; else y := b; • Info flows from x and a to y, or from x and b to y • Certified only if information from the security class of x is allowed to flow into the security class of y, and similar conditions hold for a and b relative to y • Writing a variable's name for its class: x ≤ y and a ≤ y and b ≤ y • Note the flows for both branches must hold unless the compiler can determine that one branch will never be taken

  30. Declarations • Notation: x: int class { A, B } means x is an integer variable with security class at least lub{ A, B }, so lub{ A, B } ≤ x • Distinguished classes Low, High

  31. Assignment Statements x := y + z; • Information flows from y, z to x, so this requires lub{ y, z } ≤ x More generally: y := f(x1, …, xn) • Require lub{ x1, …, xn} ≤ y

  32. Compound Statements x := y + z; a := b * c – x; • First statement: lub{ y, z } ≤ x • Second statement: lub{ b, c, x } ≤ a • So, both must hold (i.e., be secure) More generally: S1; … Sn; • Each individual Si must be secure

  33. Iterative Statements while i < n do begin a[i] := b[i]; i := i + 1; end • Same ideas as for “if”, but must terminate More generally: while f(x1, …, xn) do S; • Loop must terminate; • S must be secure • lub{ x1, …, xn } ≤ glb{y | y target of an assignment in S }

  34. Conditional Statements if x + y < z then a := b else d := b * c – x; end • The statement executed reveals information about x, y, z, so lub{ x, y, z } ≤ glb{ a, d } More generally: if f(x1, …, xn) then S1 else S2; end • S1, S2 must be secure • lub{ x1, …, xn } ≤ glb{y | y target of assignment in S1, S2 }
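The assignment and conditional rules above can be prototyped on a concrete lattice. The sketch below is illustrative, not the Denning certification algorithm itself: it uses the powerset lattice of class names, where lub is union, glb is intersection, and ≤ is the subset relation.

```python
# Certification checks on the powerset lattice of security classes.
def lub(*classes):
    # least upper bound = union of class sets
    return frozenset().union(*classes)

def glb(*classes):
    # greatest lower bound = intersection of class sets
    s = classes[0]
    for c in classes[1:]:
        s = s & c
    return frozenset(s)

def leq(c1, c2):
    # the lattice order: c1 <= c2 iff c1 is a subset of c2
    return c1 <= c2

Low = frozenset()            # bottom of the lattice
A = frozenset({"A"})
AB = frozenset({"A", "B"})

def certify_assign(rhs_classes, target):
    # y := f(x1, ..., xn) requires lub{x1, ..., xn} <= y
    return leq(lub(*rhs_classes), target)

def certify_if(guard_classes, target_classes):
    # if f(x1,...,xn) then S1 else S2 requires
    # lub{x1,...,xn} <= glb{targets assigned in S1, S2}
    return leq(lub(*guard_classes), glb(*target_classes))

print(certify_assign([Low, A], AB))  # lub{Low, A} = {A} <= {A, B}: ok
print(certify_if([A], [AB, AB]))     # {A} <= {A, B}: ok
print(certify_if([AB], [A, Low]))    # {A, B} <= glb = Low fails
```

The last call rejects a conditional whose guard is classified {A, B} but which assigns to a Low target in one branch, i.e., exactly the implicit flow the rule is designed to catch.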

  35. Input Parameters • Parameters through which data passed into procedure • Class of parameter is class of actual argument ip: type class { ip }

  36. Output Parameters • Parameters through which data is passed out of a procedure • If data is also passed in, called an input/output parameter • As information can flow from input parameters to output parameters, the class must include this: op: type class { r1, …, rn } where ri is the class of the ith input or input/output argument

  37. Procedure Calls proc pn(i1, …, im: int; var o1, …, on: int) begin S end; • S must be secure • For all j and k, if ij ≤ ok, then xj ≤ yk • For all j and k, if oj ≤ ok, then yj ≤ yk • where xj, yk are the security classes of the actual arguments passed for ij, ok

  38. Example proc sum(x: int class { A }; var out: int class { A, B }); begin out := out + x; end; • Require x ≤ out and out ≤ out

  39. Array Elements • Information flowing out: … := a[i] Value of i, a[i] both affect result, so class is lub{ a[i], i } • Information flowing in: a[i] := … • Only value of a[i] affected, so class is a[i]

  40. Other Constructs • Goto Statements • Exceptions • Infinite loops • Concurrency

  41. Resource Sharing • The topics so far all deal with shared resources: • Non-interference: no information flows through shared resources • Information flow: discover and measure information flows • Confinement: policies to describe, and techniques to implement, limits on information flows

  42. Confinement Problem Confinement problem – problem of preventing a server from leaking information that the user of the service considers confidential. [Lampson] • Non-interference and information flow policies can specify confinement policies • Confinement is also concerned with covert channels

  43. Transitivity • Confinement needs to be transitive • If a confined process invokes a second process, the second process must be as confined as the caller • Otherwise, confined processes could escape

  44. Isolation • Confinement established through isolation • Restrict a process’ access • Enforce the principle of least privilege • Isolation can sometimes also ensure non-confinement properties like availability

  45. Mechanisms for Isolation • Memory protection • chroot • Sandboxes • Virtual Machine Monitors

  46. Memory protection • Isolates processes’ memory spaces from each other • Processes interact only through the OS or shared memory • Memory protection has been around since the ’60s • Only recently entered mainstream OSes: even single-user machines need process isolation • Usually not confining enough • Communication through the OS needs to be restricted

  47. chroot • Presents a limited view of the file system (diagram: the full tree / → usr, sbin, etc, lib, …; the apache jail sees only its own / → sbin, etc with httpd.conf, lib with libc, …)

  48. chroot • chroot problems • Have to replicate files used by the jail • Hard links save space, but entire directories are more troublesome • Sometimes possible to escape chroot • Control only file system access • BSD Jails: chroot + network controls • Fix existing chroot security holes • Regulate access to the network • Used to set up low-cost “virtual hosts”

  49. Sandboxes • Restrict access to the operating system • Usually done on the scale of system calls • Used to restrict untrusted or marginally trusted applications • Three types of sandboxes: • User-space • Kernel-based • Hybrid Garfinkel Pfaff Rosenblum 03
