
Cryptographic Implementation of Confidentiality and Integrity Properties


Presentation Transcript


  1. Cryptographic Implementation of Confidentiality and Integrity Properties Work in progress… Cédric Fournet, Tamara Rezk, INRIA-MSR Joint Centre Dagstuhl Seminar, February 2007

  2. Motivation • Need for simple programming-language abstractions for confidentiality and integrity, and for their robust cryptographic implementation • Relate high-level security goals to the use of cryptographic protocols

  3. Our Goal • We want to define a compiler such that the cryptographic and distribution details of the implementation remain transparent to the programmer • The programmer specifies a security policy (confidentiality and integrity of data) • If the source program is typable for that policy, our compiler generates low-level, well-typed cryptographic code

  4. Language-based Security: Confidentiality and Integrity [lattice diagram with the four points LL, LH, HL, HH] Confidentiality and integrity policies are specified using labels from a security lattice (SL, ≤), covering confidentiality (Readers) and integrity (Writers). The assignment x := y is safe if Label(y) ≤ Label(x). Confidentiality (who can read): data in y can be read at least by the readers of x. Integrity (who can modify): data in y is more “trusted” (higher integrity) than data in x.
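
A minimal executable sketch of this check, assuming a four-point lattice with a low/high confidentiality component and a high/low integrity component; the Label class, the flows_to helper, and the concrete encoding are illustrative, not the paper's notation:

    # Sketch of the flow check "x := y is safe if Label(y) <= Label(x)".
    # The four-point lattice and helper names are illustrative only.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Label:
        conf: int   # confidentiality: 0 = public, 1 = secret
        integ: int  # integrity: 0 = trusted (high integrity), 1 = untrusted

    def flows_to(src: Label, dst: Label) -> bool:
        # Information may flow from src to dst if dst is at least as confidential
        # and does not claim more integrity than src provides.
        return src.conf <= dst.conf and src.integ <= dst.integ

    def check_assignment(x: Label, y: Label) -> bool:
        # x := y is safe iff Label(y) <= Label(x) in the security lattice.
        return flows_to(y, x)

    secret_trusted = Label(conf=1, integ=0)   # secret, high-integrity data
    public_trusted = Label(conf=0, integ=0)   # public, high-integrity data
    print(check_assignment(public_trusted, secret_trusted))  # False: would leak
    print(check_assignment(secret_trusted, public_trusted))  # True: safe flow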

  5. Simple Confidentiality and Integrity The view of low-confidentiality data does not depend on secret data, and high-integrity data cannot be affected by data that the adversary can manipulate

  6. Interaction with the adversary Input/output observation (passive case): if s1 ~ s2 then P(s1) ~ P(s2). The adversary interacts with the system (active case): if s1 ~ s2 then P[A](s1) ~ P[A](s2) [Robust Declassification, Zdancewic & Myers 01]
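
Spelled out (one plausible reading of the slide's notation, with ~ the low-equivalence relation on states and A ranging over adversary contexts):

    % Passive case: noninterference against an observer of low inputs/outputs
    \forall s_1, s_2.\; s_1 \sim s_2 \implies P(s_1) \sim P(s_2)
    % Active case: the same must hold for every adversary context A plugged into P
    \forall A.\; \forall s_1, s_2.\; s_1 \sim s_2 \implies P[A](s_1) \sim P[A](s_2)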

  7. Implementation with shared memory Private communication: h1 and h2 have compatible confidentiality and integrity policies. Source program: h1 := h2. Shared-memory translation: l_1 := h2 … h := l_2. All communication is through shared memory (the adversary has access to the shared memory)

  8. Implementation with shared memory Private communication: h1 and h2 have compatible confidentiality and integrity policies. Source program: h1 := h2. Cryptographic translation: ks,kv := Gs; ke,kd := Ge; l_1 := Enc(h2,ke); l_2 := S(ks,l_1); try { h := V(kv,l_2); h1 := Dec(h,kd) } catch { skip }. All communication is through shared memory (the adversary has access to the shared memory)
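
A minimal executable sketch of this translation, assuming Python with the third-party cryptography package; Fernet and Ed25519 only stand in for the abstract Ge/Gs/Enc/Dec/S/V, and the dictionary plays the role of the adversary-readable shared memory:

    # Encrypt-then-sign on the writer side, verify-then-decrypt on the reader
    # side, relaying "h1 := h2" over shared low memory. Illustrative only.
    from cryptography.fernet import Fernet, InvalidToken
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Key generation: (ks, kv) := Gs ; (ke, kd) := Ge
    ks = Ed25519PrivateKey.generate()        # signing key
    kv = ks.public_key()                     # verification key
    ke = kd = Fernet(Fernet.generate_key())  # symmetric stand-in for (ke, kd)

    shared = {}  # low, adversary-readable shared memory

    def writer(h2):
        # l_1 := Enc(h2, ke) ; l_2 := S(ks, l_1)
        l1 = ke.encrypt(h2)
        shared["l_1"] = l1
        shared["l_2"] = ks.sign(l1)

    def reader():
        # try { h := V(kv, l_2) ; h1 := Dec(h, kd) } catch { skip }
        try:
            kv.verify(shared["l_2"], shared["l_1"])
            return kd.decrypt(shared["l_1"])
        except (InvalidSignature, InvalidToken, KeyError):
            return None  # "skip": a tampered or missing message is discarded

    writer(b"secret payload h2")
    print(reader())  # b'secret payload h2' unless the adversary tampered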

  9. Our contribution (coming soon…) • Simple typed language with information-flow security, both integrity and secrecy • Target language with crypto primitives. All communication is on shared memory. • We equip this language with a type system for checking its use of cryptography. • Security against adaptive chosen-ciphertext attacks and adaptive chosen-message attacks • We give a typed translation from the simple language to the target language.

  10. The language • An imperative language with shared memory and probabilistic (polytime) functions: x := f(x_1, …, x_n), including Ge, E, D, Gs, V, S • A special command: try h := V(h1,k) catch c

  11. The semantics as Markov chains

  12. The semantics as Markov chains A transition probability Prob(s1, s2) is defined for configurations s1, s2 from a set S. It induces a distribution transformer distTransformer : Distr(S) → Distr(S), defined by distTransformer(dist)(s2) = ∑_{s1 ∈ S} Prob(s1, s2) · dist(s1)
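
A minimal executable sketch of this transformer over a made-up two-configuration chain; the function and variable names are illustrative:

    # distTransformer(dist)(s2) = sum over s1 of Prob(s1, s2) * dist(s1)
    from collections import defaultdict

    def dist_transformer(prob, dist):
        # prob: dict (s1, s2) -> transition probability; dist: dict s -> probability
        out = defaultdict(float)
        for (s1, s2), p in prob.items():
            out[s2] += p * dist.get(s1, 0.0)
        return dict(out)

    # Toy chain over configurations {"a", "b"}.
    prob = {("a", "a"): 0.5, ("a", "b"): 0.5, ("b", "b"): 1.0}
    dist = {"a": 1.0}                      # start deterministically in "a"
    print(dist_transformer(prob, dist))    # {'a': 0.5, 'b': 0.5}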

  13. Indistinguishable ensembles b := {0,1}; if b = 1 then v := D1(n) else v := D2(n); Distinguisher(v); return g = b
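
A toy executable version of this game, estimating a distinguisher's advantage by repeated sampling; the ensembles D1, D2 and the distinguisher below are invented purely for illustration:

    # Sample b, draw v from D1 or D2, run the distinguisher, compare guess g to b.
    import random

    def game(D1, D2, distinguisher, n):
        b = random.randint(0, 1)            # b := {0,1}
        v = D1(n) if b == 1 else D2(n)      # sample from the chosen ensemble
        g = distinguisher(v)                # adversary's guess
        return g == b

    def advantage(D1, D2, distinguisher, n, trials=10_000):
        wins = sum(game(D1, D2, distinguisher, n) for _ in range(trials))
        return abs(wins / trials - 0.5)     # distance from blind guessing

    # Toy ensembles: uniform n-bit values vs. even values only (distinguishable).
    D1 = lambda n: random.getrandbits(n)
    D2 = lambda n: random.getrandbits(n) & ~1
    print(advantage(D1, D2, lambda v: 1 if v & 1 else 0, n=16))  # close to 0.25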

  14. Indistinguishable ensembles (passive adversaries) Computational Non-Interference, first introduced by Peeter Laud at ESOP 01: b := {0,1}; if b = 1 then { initial := D1; final := P(initial) } else { initial := D2; final := P(initial) }; A(low(final)); return g = b

  15. Computational NI for Active Adversaries • P1, …, Pn polynomial commands • A: a polynomial-time algorithm with control of the scheduler and access to low memory b := {0,1}; if b = 1 then initial := D1 else initial := D2; A[P1, …, Pn]; return g = b

  16. Cryptographic assumptions (also written in our language) • Assumption: the encryption scheme (Ge, E, D) provides indistinguishability under adaptive chosen-ciphertext attacks (IND-CCA) CCA: b := {0,1}; e,d := Ge; A; return g = b. A has access to the security parameter, to e, and to m; A can also call the two oracle commands: EO(v0,v1): if b then m := E(v0,e) else m := E(v1,e); log := log + m DO(m): if (m in log) then l := 0 else l := D(m,d)
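
A toy executable rendering of the CCA game, again with Fernet standing in for the abstract scheme (Ge, E, D); the adversary is a callable that only sees the two oracles and returns its guess g:

    # Encryption oracle EO encrypts v_b and logs the ciphertext; decryption
    # oracle DO refuses logged ciphertexts. Illustrative only.
    import random
    from cryptography.fernet import Fernet, InvalidToken

    def cca_game(adversary):
        b = random.randint(0, 1)               # b := {0,1}
        key = Fernet(Fernet.generate_key())    # e, d := Ge (symmetric stand-in)
        log = []

        def EO(v0, v1):
            m = key.encrypt(v1 if b else v0)   # encrypt v_b
            log.append(m)                      # log := log + m
            return m

        def DO(m):
            if m in log:
                return 0                       # refuse ciphertexts produced by EO
            try:
                return key.decrypt(m)
            except InvalidToken:
                return 0

        g = adversary(EO, DO)                  # adversary's guess for b
        return g == b

    # An adversary restricted to the oracles should win with probability ~ 1/2.
    blind = lambda EO, DO: random.randint(0, 1)
    print(sum(cca_game(blind) for _ in range(1000)) / 1000)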

  17. Cryptographic assumptions (also written in our language) Assumption: the signature scheme (Gs, S, V) is secure against forgery under adaptive chosen-message attacks (CMA) CMA: s,v := Gs; A; if m in log then return 0 else try m' := V(x,v) catch return 0; return m' = m. A has access to v (but not to s); A can also call an oracle command for signing: S(m): log := log + m; S(s,m)
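
A toy executable rendering of the CMA game, with Ed25519 standing in for (Gs, S, V); the adversary is a callable that receives the verification key and the signing oracle and must output a forgery on a message it never queried:

    # The signing oracle logs every queried message; a win is a valid signature
    # on an unlogged message. Illustrative only.
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    def cma_game(adversary):
        s = Ed25519PrivateKey.generate()    # s, v := Gs
        v = s.public_key()
        log = []

        def sign_oracle(m):
            log.append(m)                   # log := log + m
            return s.sign(m)

        m, sig = adversary(v, sign_oracle)  # adversary outputs a candidate forgery
        if m in log:
            return 0                        # replays of queried messages don't count
        try:
            v.verify(sig, m)                # try m' := V(...) catch return 0
            return 1                        # successful forgery
        except InvalidSignature:
            return 0

    # Without the signing key s, an adversary should almost never win.
    print(cma_game(lambda v, sign: (b"forged message", b"\x00" * 64)))  # 0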

  18. Example: encryption with decrypted keys

  19. Types to keep integrity In a source program, a variable x has a label l expressing confidentiality and integrity. In the target, x will have type (l, Data), and variables for keys will have additional types given by the grammar T ::= Data | EK T K | DK T K | VK (l,T) | SK (l,T) | Enc T K | Sgn T
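
One way to render this grammar as a small datatype, as a sketch only (Python dataclasses; the class and field names mirror the grammar but are otherwise illustrative, and the "Type" string annotations are forward references):

    #   T ::= Data | EK T K | DK T K | VK (l,T) | SK (l,T) | Enc T K | Sgn T
    from dataclasses import dataclass
    from typing import Union

    @dataclass(frozen=True)
    class Data:
        pass                    # plain data

    @dataclass(frozen=True)
    class EK:
        payload: "Type"         # encryption key for payloads of type T, named K
        key: str

    @dataclass(frozen=True)
    class DK:
        payload: "Type"         # matching decryption key
        key: str

    @dataclass(frozen=True)
    class VK:
        label: str              # verification key, carrying a label l
        payload: "Type"

    @dataclass(frozen=True)
    class SK:
        label: str              # signing key, carrying a label l
        payload: "Type"

    @dataclass(frozen=True)
    class Enc:
        payload: "Type"         # ciphertext of a T under key K
        key: str

    @dataclass(frozen=True)
    class Sgn:
        payload: "Type"         # signed value of type T

    Type = Union[Data, EK, DK, VK, SK, Enc, Sgn]

    # Example: a ciphertext of plain data under key "k1", then signed.
    print(Sgn(Enc(Data(), "k1")))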

  20. Problem: key selection

  21. Typability Preserving Compiler Theorem (Typability Preservation). Let c be a source program (communication on private channels). If c is typable, then its translation to the target language (communication on shared memory) is typable. Corollary (Computational Soundness). Let c be a source program (communication on private channels). If c is typable, then its translation satisfies computational non-interference for active adversaries.

  22. Conclusion and Further Work • More efficient protocols for the translation: • Can we use fewer keys? • Fewer signatures? (types should then be more expressive) • Shared keys? (protocols to establish keys) • Cryptographic back-end for Jif Split? • Mechanization of proofs • Concurrency

  23. Related Work • Non-interference: Goguen & Meseguer 82; Bell & LaPadula 76; Denning 76 • Declassification: Principles and Dimensions, Sabelfeld & Sands 05; Robust Declassification, Zdancewic & Myers 01; Enforcing Robust Declassification, Myers, Sabelfeld & Zdancewic 04 • Secure information flow and cryptography: Laud 01; Backes & Pfitzmann 02, 03 • Secure implementations: Jif Split, Myers & Zheng et al. 01; Cryptographic DLM, Vaughan & Zdancewic 07
