
  1. Advertisement Internship at Microsoft Research? • 12-week research projects, undertaken at MSR Cambridge, typically by grad students mid-way through their PhD. • Goal: complete and publish a research project with an MSR researcher: • K. Bhargavan, C. Fournet, A. Gordon, and R. Pucella, TulaFale: A security tool for web services, FMCO 2003 • C. Fournet, A. Gordon, and S. Maffeis, A type discipline for authorization policies, ESOP 2005 • Applications for Summer 2006 are due by the end of February 2006: http://research.microsoft.com/aboutmsr/jobs/internships/cambridge.aspx

  2. From Typed Process Calculi to Source-Based Security Andy Gordon (MSR) Based on joint work with Cédric Fournet (MSR), Alan Jeffrey (DePaul and Bell Labs), and Sergio Maffeis (Imperial) SAS 2005, London September 7-9, 2005

  3. Background • Process calculi are an effective setting for modelling security protocols and specifying their properties • Lowe (1995) used CSP to find his famous attack on the Needham-Schroeder public key protocol (1978) • The spi calculus (AG97) began a line of work in which many protocols have been expressed and analyzed within pi calculi • Security types allow the typechecker to prove various security properties automatically • Syntax-driven typing rules can be checked efficiently, with no state space exploration • Properties of arbitrarily many sessions and principals proved relative to arbitrary Dolev-Yao opponent • Inevitably incomplete as D-Y problem undecidable (DLMS99)

  4. An Authentication Example Suppose A and B are principals sharing a symmetric key KAB. The following should ensure that B gets a fresh message from A. We specify the authentication of the message via assertions: each end is to have a distinct, preceding begin with the same label. Attacks (replays, impersonations) show up as violations of these assertions. By assigning KAB the following type, we can check the protocol: Key(msg:T, Nonce[Sent(A,B,msg)]) http://www.cryptyc.org
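
The following minimal Python sketch (not from the original slides) simulates one run of such a nonce handshake and checks the correspondence assertion at runtime: B issues a fresh nonce, A asserts a begin event labelled Sent(A,B,msg) before replying, and B asserts the matching end event after checking the nonce. The in-memory stand-in for encryption under KAB and all function names are illustrative only, not the Cryptyc formalism.

    import os

    begun = set()   # begin-events asserted so far

    def begin(label):
        begun.add(label)

    def end(label):
        # correspondence assertion: every end must have a distinct, preceding begin
        assert label in begun, f"authentication failure: no begin for {label}"
        begun.remove(label)

    def encrypt(key, payload):
        # stand-in for symmetric encryption under KAB; a real protocol would use AEAD
        return ("enc", key, payload)

    def decrypt(key, ciphertext):
        tag, k, payload = ciphertext
        assert tag == "enc" and k == key, "decryption failure"
        return payload

    KAB = os.urandom(16)                      # key shared by A and B

    def B_challenge():
        return os.urandom(8)                  # fresh nonce sent to A in the clear

    def A_respond(msg, nonce):
        begin(("Sent", "A", "B", msg))        # A claims it is sending msg to B
        return encrypt(KAB, (msg, nonce))

    def B_receive(ciphertext, nonce):
        msg, n = decrypt(KAB, ciphertext)
        assert n == nonce, "replay or wrong session"
        end(("Sent", "A", "B", msg))          # must match a preceding begin

    nonce = B_challenge()
    B_receive(A_respond("hello", nonce), nonce)
    print("run completed with matching begin/end events")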

  5. Applications [Diagram: lines of application for security types in the pi calculus] • Authentication; full trust • Secrecy; full trust • Authorization; timing; partial trust • Type inference • Other work on security in pi includes: Bodei/Degano/Nielson/Nielson, Berger/Honda/Yoshida

  6. This Talk • Two new developments • Checking authorization (is this request allowed?) as well as authentication (who sent this request?) • A type discipline for authorization policies (With C. Fournet and S. Maffeis. ESOP'05) • Allowing a realistic threat model in which some trusted hosts become compromised over time • Secrecy despite compromise: types, cryptography, and the pi-calculus. (With A. Jeffrey. CONCUR'05) • A useful idea in both is the use of inert processes to record events and to express security properties

  7. A Type Discipline for Authorization Joint with C. Fournet and S. Maffeis

  8. Motivations • Authorization policies prescribe conditions that must be satisfied before performing any privileged action • In practice, policies often only formalized in code • Hard to extract, hard to reason about, hard to audit • Tied to low-level authentication mechanisms • Relationship of code to intended policy left informal • In principle, • Policies can be formalized in high-level languages (e.g. Datalog) separate from the implementation code • Policies should be independent of enforcement mechanisms • Conformance of an implementation should be verifiable • Our initial motivations • Difficulty of auditing use of Java-style stack inspection • Authorization for web services

  9. Our Approach • We propose language-based mechanisms to express the intended policy of an implementation, and to verify conformance to the policy • We use the authorization policy as a specification • As opposed to being directly executed • The same policy supports alternative implementations • Our implementation language is a spi calculus • But the approach would apply to higher-level languages • We use types to verify that annotated code correctly implements a given authorization policy

  10. Datalog for Authorization • Datalog is a fragment of Prolog without negation, free variables, or term constructors • Many policy languages for trust or authorization are based on Datalog or related logics (SD3, Binder, Cassandra, SPKI, XrML, …) • Realistic policies: Becker's 375-rule formalization of the NHS Electronic Health Record system in Cassandra (CSFW'04) • We use Datalog for specificity, but our results hold for any monotonic logic closed under substitutions

  11. Ex: Conference Reviewing • Extensional database: known facts (closed literals) • These generalize the events, such as Sent(A,B,msg), used in direct correspondence assertions to specify authentication • Rules for deriving new facts • Intensional database: facts derived from rules
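
The actual rules for the conference-reviewing example appear as figures in the original deck, so the sketch below uses hypothetical predicates (PCMember, Assigned, MayReview). It is a minimal naive bottom-up Datalog evaluator in Python, showing how the intensional database is derived from the extensional database by iterating the rules to a fixpoint.

    # Naive bottom-up Datalog evaluation: facts are tuples, rules are
    # (head, [body atoms]) pairs, with uppercase strings acting as variables.

    def is_var(t):
        return isinstance(t, str) and t[:1].isupper()

    def unify(atom, fact, env):
        if atom[0] != fact[0] or len(atom) != len(fact):
            return None
        env = dict(env)
        for a, f in zip(atom[1:], fact[1:]):
            if is_var(a):
                if env.get(a, f) != f:
                    return None
                env[a] = f
            elif a != f:
                return None
        return env

    def substitute(atom, env):
        return tuple(env.get(t, t) for t in atom)

    def derive(edb, rules):
        facts = set(edb)                       # the IDB grows from the EDB
        changed = True
        while changed:
            changed = False
            for head, body in rules:
                envs = [{}]
                for atom in body:
                    envs = [e2 for e in envs for f in facts
                            if (e2 := unify(atom, f, e)) is not None]
                for env in envs:
                    new = substitute(head, env)
                    if new not in facts:
                        facts.add(new)
                        changed = True
        return facts

    # Hypothetical conference-reviewing policy (illustrative names only):
    edb = {("PCMember", "alice"),
           ("Assigned", "alice", "paper7")}
    rules = [(("MayReview", "X", "P"),
              [("PCMember", "X"), ("Assigned", "X", "P")])]

    print(("MayReview", "alice", "paper7") in derive(edb, rules))   # True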

  12. Spi calculus with annotations [Figure: syntax of the spi calculus with security annotations; the annotations are zero-bits, kept only to track guarantees]

  13. Authorization Properties • Inert processes model events and properties • A statement C models part of the authorization policy • Specifically, a fact L models an authorization event • An expectation expect L models an expected property • The structural equivalence P ≡ P' and reduction P → P' relations are much as usual • There are no rules for these inert processes

  14. Some Basic Examples • Process P specifying a policy and two facts: • A robustly safe process: • A safe process: • … and the robustly safe version:
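
The example processes on this slide are figures in the original deck. As a stand-in, here is a small Python sketch of the safety notion itself, under the simplifying assumption of ground policy clauses and hypothetical facts: a configuration is safe when every active expectation is derivable from the active statement and facts. (Robust safety, i.e. safety in parallel with an arbitrary opponent, is not captured by this check.)

    # A configuration is a set of inert items: ground policy clauses
    # (pairs of body facts and a head fact), facts, and expectations.

    def derivable(facts, clauses):
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in clauses:
                if set(body) <= facts and head not in facts:
                    facts.add(head)
                    changed = True
        return facts

    def safe(facts, clauses, expectations):
        # safety: every expectation is entailed by the statements and facts
        return all(e in derivable(facts, clauses) for e in expectations)

    clauses = [((("Reviewer", "bob", "paper7"),), ("MayRead", "bob", "paper7"))]
    facts = {("Reviewer", "bob", "paper7")}

    print(safe(facts, clauses, {("MayRead", "bob", "paper7")}))   # True: safe
    print(safe(set(), clauses, {("MayRead", "bob", "paper7")}))   # False: the expectation is unjustified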

  15. Authorization by Typing [Typing rules omitted] • Every ok value must be justified • Every binding occurrence may add facts to the environment E

  16. Type System: Results • Verification is efficient • Structural type system • Low complexity of logical resolution

  17. Typing the Examples • Process P specifying a policy and two facts: • A safe process (by typing): • A robustly safe process (by typing):

  18. In the Full Version • Two distributed implementations of a policy for conference management • One where each delegation is registered online • The other enables offline, signature based delegation with authorization decisions based on certificate chains

  19. Summary • We used inert processes to annotate programs with expected authorization properties • "At this point Report(U,ID,R) will be derivable" • Goal: check code annotations against an explicit logical policy • Extends work to typecheck direct correspondences • Woo and Lam's direct correspondences are derivable • Much prior work on logics for authorization • Ours is amongst the first to relate such logics to code and to use the Dolev-Yao approach to model untrusted parts of the system • Limitations: • Like many systems, no support for revocation • Interpreter + typechecker, but no direct implementation • Principals completely distrusted or completely trusted...

  20. Secrecy Despite Compromise Joint work with A. Jeffrey

  21. Motivation • Our opponent model has assumed a fixed partition • Trusted insiders versus distrusted outsiders • Real situations are more complex • Machines become compromised • Trusted users turn out to be untrustworthy • How can a type system handle partial compromise of a dynamically changing population of principals? • We approach this question from a simpler setting than spi, Odersky’s polarized pi calculus • Capabilities a? and a! for channel-based input and output

  22. Security Levels • Code annotated with security levels (or principals) • Different regions may run on behalf of different levels • Level annotation L attached to each output out a! M :: L • Level ⊥ represents the opponent • Security ordering induced by arc processes • An arc L1 ≤ L2 is itself an (inert) process • Active (top-level) arcs in P induce a preorder P ⊢ L1 ≤ L2 • Least and greatest elements ⊥ and ⊤ • A compound level (L1, L2) has P ⊢ (L1, L2) ≤ Li for each i • Security ordering represents compromise • Let a level L be compromised iff L ≤ ⊥ • Hence L1 ≤ L2 means L1 is at risk of compromise by L2 • So (L1, L2) is compromised if either L1 or L2 is compromised
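
A minimal Python sketch (illustrative only, not the calculus itself) of how this ordering might be computed: active arcs are edges, the preorder is their reflexive-transitive closure extended with the projection rule for compound levels, and a level counts as compromised when it falls below the bottom level that represents the opponent.

    BOT, TOP = "bot", "top"          # least and greatest levels

    def leq(arcs, l1, l2):
        """Is l1 <= l2 under the preorder induced by the active arcs?"""
        if l1 == l2 or l1 == BOT or l2 == TOP:
            return True
        seen, frontier = set(), [l1]
        while frontier:
            l = frontier.pop()
            if l == l2 or l == BOT:
                return True
            if l in seen:
                continue
            seen.add(l)
            # projection: a compound level sits below each of its components
            if isinstance(l, tuple):
                frontier.extend(l)
            # follow the arcs l <= l' contributed by the process
            frontier.extend(b for a, b in arcs if a == l)
        return False

    def compromised(arcs, l):
        return leq(arcs, l, BOT)     # a level is compromised iff it is below bot

    arcs = set()
    print(compromised(arcs, "alice"))                 # False
    print(leq(arcs, ("alice", "bob"), "alice"))       # True: projection
    arcs.add(("bob", BOT))                            # the arc bob <= bot: bob is compromised
    print(compromised(arcs, "bob"))                   # True
    print(compromised(arcs, ("alice", "bob")))        # True: the compound level falls with bob
    print(compromised(arcs, "alice"))                 # False: alice remains uncompromised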

  23. Security Hierarchies [Diagram: example security hierarchies induced by arc processes, relating atomic levels a, b, a1, …, an+m, a compound level (a, b), and the extremes ⊥ and ⊤, composed with arbitrary processes]

  24. Conditional Secrecy • We say M is public if it can be output at level ⊥ • We model secrecy invariants as inert processes: • An expectation secret M amongst N is justified if every output of M is at a higher security level than N • Read as "if M becomes public then N is compromised" • The secret message M may include fresh names

  25. A Basic Example • Consider two processes at level L that exchange a fresh secret s on a private channel k • We want a type system that: • Checks secrecy of s while k is secret and L uncompromised • Eventually allows k and s to be made public once L is compromised – an event modelled by the arc L ≤ ⊥ • A specific formal problem: verify robust safety of the process shown on the slide
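
The process term itself appears as a figure in the original deck. As an illustration of the intended property, the Python sketch below (hypothetical names, atomic levels only) logs each output as a (message, level) pair and checks the expectation secret s amongst L by requiring L to lie below the level of every logged output of s; once the arc L ≤ ⊥ is active, publishing s at the opponent level no longer violates the expectation.

    BOT = "bot"

    def leq(arcs, l1, l2):
        # reachability over the active arcs, with bot below everything
        if l1 == l2 or l1 == BOT:
            return True
        seen, frontier = set(), [l1]
        while frontier:
            l = frontier.pop()
            if l == l2 or l == BOT:
                return True
            if l not in seen:
                seen.add(l)
                frontier.extend(b for a, b in arcs if a == l)
        return False

    def secret_amongst(outputs, arcs, m, n):
        # justified iff every output of m happens at a level above n
        return all(leq(arcs, n, level) for msg, level in outputs if msg == m)

    arcs, outputs = set(), []
    outputs.append(("s", "L"))                 # s sent on the private channel k at level L
    print(secret_amongst(outputs, arcs, "s", "L"))                   # True: secrecy of s holds
    print(secret_amongst(outputs + [("s", BOT)], arcs, "s", "L"))    # False: publishing s now would violate it

    arcs.add(("L", BOT))                       # level L becomes compromised
    outputs.append(("s", BOT))                 # ... so s may now be published at the opponent level
    print(secret_amongst(outputs, arcs, "s", "L"))                   # True: conditional secrecy not violated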

  26. Conditional Secrecy by Typing

  27. In the Full Version • Types are ordered via a subtype relation • Main rule: if Public(T) and Tainted(T') then T <: T' • Secrecy types are a special case of (kinded) channels • Kinds take the form {?L1, !L2} • We can assert secrecy of channels, e.g. the k channel • The type Ok{L1 ≤ L2} proves that L1 ≤ L2 • Allows security orderings to be communicated • The type system reflects usage of pair types • (split x:T, U) – first element extracted without checking • (match x:T, U) – first element matched against a known value • The full form combines both: its marker is drawn from {split, match}, and it also binds y, an existentially quantified lower bound on x used only in types

  28. Typing a Crypto Protocol

  29. Related Work • Key or host compromise often modelled using events • Paulson (JCS 98): "oops" events mark key disclosure • Bugliesi, Focardi, Maffei (FMSE'04) allow for compromised hosts in a type system for spi, but assume the set is known statically • Types to govern data declassification are a hot topic • Myers and Liskov's DLM (TOSEM'00) is one of the first systems of security types to consider declassification, though at the level of individual expressions, not types • Several recent works (CSFW'05) on temporary modifications of a security ordering, akin to our L1 ≤ L2 processes • Many studies of process calculi with security ordering • Our use of an ordering to model runtime compromise is new

  30. Summary, Conclusions • We introduced a mutable security ordering to model a dynamic, partially compromised set of principals • As with our authorization model, we rely on inert processes to describe events and expected properties • There remains much promise in the area of process calculi with security types • These two systems should combine fairly smoothly • They should be applicable to an important open problem: how to check security properties of the actual source code of crypto protocols and the applications built on them

  31. The End
