
Confidentiality-preserving Proof Theories for Distributed Proof Systems


Presentation Transcript


  1. Confidentiality-preserving Proof Theories for Distributed Proof Systems Kazuhiro Minami, National Institute of Informatics, FAIS 2011

  2. Distributed proving is an effective way to combine information in different administrative domains • Distributed authorization • Make a granting decision by constructing a proof from security policies • Examples: DL[Li03], DKAL [Gurevich08], SD3 [Jim01], SecPAL [Becker07], and Grey [Bauer05] • Data fusion in pervasive environments • Infer a user’s activity from sensor data owned by different organizations

  3. Distributed Proof System • Consists of multiple principals, each of which comprises a knowledge base and an inference engine • Constructs a proof by exchanging proofs in a peer-to-peer way • Supports rules and facts in Datalog with the says operator (as in BAN logic) [Figure: an example rule whose body contains quoted facts, e.g., f0 ← (p1 says f1) ∧ (p2 says f2)]
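The talk does not give a machine-readable syntax; the following is a minimal sketch of how such rules and quoted facts might be encoded, with all names (p1, p2, grant, employee, onsite) being illustrative assumptions:

```python
# Hypothetical encoding of Datalog-with-says (illustrative, not the talk's notation).
# A quoted fact (p says f) is modeled as a (principal, fact) pair; a rule pairs
# a head fact with the list of quoted facts in its body.
p1_kb = {
    "facts": {"employee(Bob)"},    # p1's local facts
    "rules": [
        # grant(Bob) <- (p1 says employee(Bob)) AND (p2 says onsite(Bob))
        ("grant(Bob)", [("p1", "employee(Bob)"), ("p2", "onsite(Bob)")]),
    ],
}
```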

  4. Protecting each domain’s confidential information is crucial • Each organization in a virtual business coalition needs to protect its proprietary information from the others • A location server must protect users’ location information with proper privacy policies • To this end, principals in a distributed proof system could limit access to their sensitive information with discretionary access-control policies

  5. Determining the safety of a system involving multiple principals is not trivial. Suppose that principal p0 is willing to disclose the truth of fact f0 only to p2. What if p2 still derives fact f2? That is, can the truth of f0 leak indirectly through other facts that principals derive?

  6. Problem Statements • How should we define confidentiality and safety in distributed proof systems? • Is it possible to derive more facts than a system that enforces confidentiality policies on a principal-to-principal basis? • If so, is there an upper bound on the proving power of safe distributed proof systems?

  7. Outline • System model based on a TTP • Safety definition based on non-deducibility • Safety analysis • DAC system • NE system • CE system • Conclusion

  8. Abstract System Model • Parameterize a distributed proof system D with a set of inference rules I and a finite set of principals P (i.e., D[P, I]) • Only consider the initial and final states of system D, based on a trusted third-party (TTP) model [Figure: a Datalog inference rule of the form f0 ← f1 ∧ … ∧ fn]

  9. Reference System D[IS] • The body of a rule contains a set of quoted facts (e.g., q1 = (p1 says f1)) • All the information is freely shared among principals [Rule figures: (COND) and (SAYS); roughly, (COND) lets a principal derive the head of a local rule once every quoted fact in its body is derived, and (SAYS) lets any principal derive the quoted fact (pi says f) once pi derives f]

  10. TTP is a fixpoint function that computes the final state of a system [Figure: principals p1, p2, …, pn send their knowledge bases KB1, KB2, …, KBn to the Trusted Third Party (TTP), which applies the inference rules I and returns the final local state fixpointi(KB) to each principal pi]
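The slide presents the fixpoint only as a diagram; below is a minimal sketch of the computation for the reference system D[IS], under the simplifying assumptions that all facts are ground (variable-free) and that knowledge bases use the hypothetical encoding from the earlier sketch:

```python
def fixpoint(kbs):
    """Sketch of the TTP fixpoint for the reference system D[IS].

    kbs: {principal: {"facts": set_of_facts, "rules": [(head, [(pj, fj), ...])]}}
    Returns each principal's final set of derived facts.
    """
    derived = {p: set(kb["facts"]) for p, kb in kbs.items()}
    changed = True
    while changed:
        changed = False
        # (SAYS): every fact f derived by pi yields the quoted fact (pi says f),
        # visible to everyone, since information is freely shared in D[IS]
        quoted = {(p, f) for p, facts in derived.items() for f in facts}
        for p, kb in kbs.items():
            for head, body in kb["rules"]:
                # (COND): fire a rule once every quoted fact in its body holds
                if head not in derived[p] and all(q in quoted for q in body):
                    derived[p].add(head)
                    changed = True
    return derived
```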

  11. Soundness Requirement Any confidentiality-preserving system D[I] should not prove a fact that is not provable with the reference system D[IS]. Definition (Soundness): A distributed proof system D[I] is sound if, for every initial state KB, every fact provable in D[I] is also provable in D[IS], i.e., the fixpoint of D[I] on KB is a subset of the fixpoint of D[IS] on KB.

  12. Outline • System model based on a TTP • Safety definition based on non-deducibility • Safety analysis • DAC system • NE system • CE system • Conclusion

  13. Confidentiality Policies • Each principal defines a discretionary access-control policy on its local facts • Each confidentiality policy is defined with the predicate release(principal_name, fact_name) • E.g., if Alice is willing to disclose her location to Bob, she could add the policy release(Bob, loc(Alice, L)) to her knowledge base (see the sketch below)
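As a concrete illustration in the same hypothetical encoding as before (with a ground room name standing in for the variable L):

```python
# Alice's knowledge base with a discretionary release policy (illustrative).
# ("Bob", "loc(Alice, room1)") plays the role of release(Bob, loc(Alice, L)):
# it authorizes Bob to learn Alice's location fact.
alice_kb = {
    "facts": {"loc(Alice, room1)"},
    "release": {("Bob", "loc(Alice, room1)")},   # who may learn which fact
    "rules": [],
}
```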

  14. Attack Model • A set of malicious colluding principals A tries to infer the truth of a confidential fact f in a non-malicious principal pi’s knowledge base KBi [Figure: in system D, fact f0 is confidential because none of the principals in A is authorized to learn its truth, whereas fact f1 is NOT confidential because p4 is authorized to learn its truth]

  15. Attack Model (Cont.) Malicious principals only use their initial and final states to perform inferences [Figure: the set A of malicious principals within system D]

  16. Attack Model (Cont.) Malicious principals only use their initial and final states to perform inferences; only those states are available to them [Figure: the set A of malicious principals within system D]

  17. Sutherland’s non-deducibility model captures inferences by considering all possible worlds W. Consider two information functions v1: W → X (the public view) and v2: W → Y (the private view). Observing a public value x narrows the possible worlds to W’ = { w | v1(w) = x }; information flows from v2 to v1 if this narrowing rules out some private value y ∈ Y (“This cannot be possible!”). Non-deducibility requires that no such exclusion occurs: every private value must remain consistent with every observable public value.
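A brute-force sketch of this condition (my paraphrase, not code from the talk):

```python
def nondeducible(worlds, v1, v2):
    """Check Sutherland non-deducibility by enumerating all possible worlds.

    Nothing flows from the private view v2 to the public view v1 iff every
    observable public value x is compatible with every private value y.
    """
    X = {v1(w) for w in worlds}
    Y = {v2(w) for w in worlds}
    return all(any(v1(w) == x and v2(w) == y for w in worlds)
               for x in X for y in Y)

# Example: if v1 reveals a number's parity and v2 the number itself, observing
# the parity excludes half the worlds, so information flows and safety fails.
worlds = [0, 1, 2, 3]
print(nondeducible(worlds, lambda w: w % 2, lambda w: w))  # False
print(nondeducible(worlds, lambda w: 0, lambda w: w))      # True (v1 is constant)
```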

  18. Nondeducibility considers information flow between two information functions over the system configurations [Figure: from a set of initial configurations, function v1 extracts the initial and final states of the malicious principals in set A, and function v2 extracts the confidential facts that are actually maintained by the non-malicious principals; the arrow marks the potential information flow from v2 to v1]

  19. Safety Definition We say that a distributed proof system D[P, I] is safe if, for every possible initial state KB, every possible subset of principals A, and every possible subset of confidential facts Q, there exists another initial state KB’ such that • v1(KB) = v1(KB’) (the malicious principals in A have the same initial and final local states), and • Q = v2(KB’) (the non-malicious principals could possess any subset of the confidential facts).

  20. Outline • System model based on a TTP • Safety definition based on non-deducibility • Safety analysis • DAC system • NE system • CE system • Conclusion

  21. DAC System D[IDAC] Enforces confidentiality policies on a principal-to-principal basis [Rule figures: (COND), as in the reference system, and (DAC-SAYS), which, roughly, lets principal pj derive (pi says f) only if pi derives f and pi’s policy contains release(pj, f)]

  22. Example Derivations in D[IDAC] [Figure: an example derivation combining (DAC-SAYS) and (COND) steps]
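A sketch of how the DAC restriction might change the earlier fixpoint (again under the illustrative encoding, reading (DAC-SAYS) as: pj may use (pi says f) only if pi released f to pj):

```python
def dac_fixpoint(kbs):
    """Sketch of the fixpoint for D[IDAC]: quoted facts flow per release policy."""
    derived = {p: set(kb["facts"]) for p, kb in kbs.items()}
    changed = True
    while changed:
        changed = False
        for p, kb in kbs.items():
            # (DAC-SAYS): p may use (q says f) only if q released f to p
            quoted = {(q, f) for q, qkb in kbs.items() for f in derived[q]
                      if q == p or (p, f) in qkb.get("release", set())}
            for head, body in kb["rules"]:
                # (COND): unchanged from the reference system
                if head not in derived[p] and all(b in quoted for b in body):
                    derived[p].add(head)
                    changed = True
    return derived
```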

  23. D[P, IDAC] is Safe, because derivations performed by one principal are invisible to the others. Let P and A be {p0, p1} and {p1}, respectively. [Figure: two candidate initial states KB0 and KB’0 for p0 next to p1’s KB1; principal p1 cannot distinguish KB0 from KB’0]

  24. NE System D[INE] • Introduce function Ei to represent an encrypted value • Associate each fact or quoted fact q with an encrypted value e • Each principal performs an inference on an encrypted fact (q, e) • Principals cannot infer the truth of an encrypted fact without decrypting it • TTP discards encrypted facts from the final system state

  25. Inference Rules INE [Rule figures: (ECOND), (ENC-SAYS), and the decryption rules (DEC1) and (DEC2)]

  26. Example Derivations [Figure: an example derivation chaining (ENC-SAYS), (ECOND), (DEC1), and (DEC2) steps]
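To see concretely why decryption order matters in the NE system (the key observation on slide 28), here is a toy model with no real cryptography; the nested structure forces layers off in exact reverse order:

```python
# Toy model of nested (non-commutative) encryption: E_i(v) is a tagged tuple,
# so only the outermost layer can be removed, mirroring (DEC1)/(DEC2).
def encrypt(owner, value):
    return ("E", owner, value)            # stands in for E_owner(value)

def decrypt(owner, blob):
    tag, who, value = blob
    assert tag == "E" and who == owner, "must peel the outermost layer first"
    return value

e = encrypt("p1", encrypt("p2", "f0"))    # E_p1(E_p2(f0))
inner = decrypt("p1", e)                  # p1's layer must come off first...
assert decrypt("p2", inner) == "f0"       # ...and only then p2's layer
```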

  27. Analysis of System D[INE] • The strategy we used for the DAC system does not work • Need to make sure that every malicious principal receives an encrypted fact of the same structure [Figure: the malicious principals A and knowledge base KB0]

  28. NE System is Safe • All the encrypted values must be decrypted in the exact reverse order • Can collapse a proof for a malicious principal’s fact such that all the confidential facts are mentioned only in non-malicious principals’ rules • Thus, can make all the confidential facts invisible to the malicious principals by modifying non-malicious principals’ rules

  29. Conversion Method – Part 1 • Keep collapsing proofs by modifying non-malicious principals’ rules • If a proof contains a subsequence [of the form shown on the slide], replace that subsequence with [the collapsed step shown] • Eventually, all the confidential facts appear only in non-malicious principals’ rules

  30. Conversion Method – Part 2 • Given a set of quoted facts Q that should be in KB’ • Case 1: (pi says f) is not in Q, but f is in KBi*: remove (pi says f) from the body of every non-malicious principal’s rule • Case 2: (pi says f) is in Q, but f is not in KBi*: remove every non-malicious principal’s rule whose body contains (pi says f)

  31. CE System D[ICE] is NOT safe • An encrypted value can be decrypted in any order • Consequently, we cannot collapse a proof as we did for the NE system [Rule figure: (CE-DEC), the commutative decryption rule]
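For contrast with the nested-encryption toy model, here is a sketch of why order stops mattering under commutative encryption, using SRA-style exponentiation ciphers as a standard example of a commutative scheme (toy parameters, completely insecure; requires Python 3.8+ for the modular inverse via pow):

```python
# SRA-style commutative encryption: E_k(m) = m^k mod P for a shared prime P.
# Exponents multiply, so E_a(E_b(m)) = E_b(E_a(m)) and layers come off in ANY
# order -- which is exactly what defeats the proof-collapsing argument for NE.
P = 8191                                  # a small prime (toy-sized)

def enc(key, m):
    return pow(m, key, P)

def dec(key, c):
    return pow(c, pow(key, -1, P - 1), P) # invert the exponent mod P - 1

a, b, m = 11, 17, 1234                    # keys must be coprime to P - 1
c = enc(a, enc(b, m))
assert dec(b, dec(a, c)) == m             # decrypting in the "wrong" order works
assert dec(a, dec(b, c)) == m             # ...and so does the other order
```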

  32. Summary • Develop formal definitions of safety for distributed proof systems based on the notion of nondeducibility • Show that the NE system, which derives more facts than the DAC system, is indeed safe • Show that the CE system, which extends the NE system with commutative encryption, is not safe • The proof system with the maximum proving power lies somewhere between the NE and CE systems

  33. Thank you!
