
Contextual Integrity & its Logical Formalization


Presentation Transcript


  1. 18739A: Foundations of Security and Privacy Contextual Integrity & its Logical Formalization Anupam Datta Fall 2009

  2. Privacy in Organizations • Huge amount of personal information available to various organizations • Hospitals, financial institutions, websites, universities, census bureau, social networks… • Information shared within and across organizational boundaries to enable useful tasks to be completed • Personal health information has to be shared to enable diagnosis, treatment, billing, … • Fundamental Question: • Do organizational processes respect privacy expectations while achieving purpose?

  3. Approach • What is Privacy? Contextual Integrity (CI): transfer of personal information is governed by contextual norms • Logic of Privacy and Utility: formalization of CI; automated methods for checking compliance with privacy regulations (HIPAA, GLBA, COPPA, …) • Goal: express and enforce privacy policies

  4. Contextual Integrity [Nissenbaum 2004] • Philosophical framework for privacy • Central concept: Context • Examples: healthcare, banking, education • What is a context? • Set of interacting agents in roles • Roles in healthcare: doctor, patient, … • Informational norms • Doctors should share patient health information as per the HIPAA rules • Norms have a specific structure (descriptive theory) • Systematic method for arriving at contextual norms (prescriptive theory) • Purpose • Improve health • Some interactions should happen, e.g. patients should share personal health information with doctors

  5. Outline • Motivating Example • Framework • Model • Logic of Privacy and Utility • Workflows and Responsibility • Algorithmic Results • Workflow Design assuming agents responsible • Auditing logs when agents irresponsible • Summary

  6. MyHealth@Vanderbilt++ Workflow [Diagram: a workflow of humans plus an electronic system. The Patient, Secretary, Nurse, and Doctor exchange Appointment Request, Health Question, and Health Answer messages; sample Health Question: "Now that I have cancer, should I eat more vegetables?"; sample Health Answer: "Yes! except broccoli".]

  7. Outline • Motivating Example • Framework • Model • Logic of Privacy and Utility • Workflows and Responsibility • Algorithmic Results • Workflow Design assuming agents responsible • Auditing logs when agents irresponsible • Conclusions

  8. Informational Norms “In a context, the flow of information of a certain type about a subject (acting in a particular capacity/role) from one actor (could be the subject) to another actor (in a particular capacity/role) is governed by a particular transmission principle.” Contextual Integrity [Nissenbaum 2004]

  9. Model • Communication via send actions, e.g. Bob sends Alice the Health Question "Now that I have cancer, should I eat more vegetables?" • Sender: Bob in role Patient • Recipient: Alice in role Nurse • Subject of message: Bob • Tag: Health Question • Message: "Now that …" • Data model & knowledge evolution: agents acquire knowledge by receiving messages and by deriving additional attributes based on the data model (e.g. Health Question is part of Protected Health Information) • Note the distinction: contents(msg) vs. tags(msg)

  10. Model [Barth, Datta, Mitchell, Sundaram 2007] • State determined by the knowledge of each agent • Transitions change state: a set of concurrent send actions • Send(p, q, m) possible only if agent p knows m • Concurrent game structure G = ⟨k, Q, Π, π, d, δ⟩ [state-transition diagram omitted]
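The knowledge-based state model above can be sketched in Python (a minimal illustration, not code from the paper; the `State` class and message encoding are assumptions): states map agents to sets of known messages, transitions apply a set of concurrent sends, and Send(p, q, m) is enabled only when p already knows m.

```python
# Sketch of the state model: each agent's state is its knowledge (a set of
# messages); a transition applies a set of concurrent send actions, and
# send(p, q, m) is enabled only if p knows m. Names are illustrative.

class State:
    def __init__(self, knowledge):
        # knowledge: dict mapping agent name -> set of known messages
        self.knowledge = {agent: set(msgs) for agent, msgs in knowledge.items()}

    def enabled(self, p, q, m):
        """Send(p, q, m) is possible only if agent p knows m."""
        return m in self.knowledge[p]

    def step(self, sends):
        """Apply a set of concurrent send actions, yielding the next state."""
        nxt = State(self.knowledge)
        for (p, q, m) in sends:
            if not self.enabled(p, q, m):
                raise ValueError(f"{p} cannot send unknown message {m!r}")
            nxt.knowledge[q].add(m)   # recipient acquires the message
        return nxt

s0 = State({"Bob": {"health-question"}, "Alice": set()})
s1 = s0.step({("Bob", "Alice", "health-question")})
```

After the step, Alice's knowledge contains the message while the original state is unchanged, reflecting that states are immutable snapshots.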

  11. Logic of Privacy and Utility
  • Syntax
  φ ::= send(p1, p2, m) (p1 sends p2 message m)
  | contains(m, q, t) (m contains attribute t about q)
  | tagged(m, q, t) (m is tagged with attribute t about q)
  | inrole(p, r) (p is active in role r)
  | t ⊑ t' (attribute t is part of attribute t')
  | φ ∧ φ | ¬φ | ∀x. φ (classical operators)
  | φ U φ | φ S φ | O φ (temporal operators)
  | ⟨⟨p⟩⟩ φ (strategy quantifier)
  • Semantics: formulas are interpreted over a concurrent game structure

  12. Informational Norms “In a context, the flow of information of a certain type about a subject (acting in a particular capacity/role) from one actor (could be the subject) to another actor (in a particular capacity/role) is governed by a particular transmission principle.” Contextual Integrity [Nissenbaum 2004]

  13. Privacy Regulation Example (GLB Act) "Financial institutions [sender role] must notify consumers [subject role] if they share their non-public personal information [attribute] with non-affiliated companies [recipient role], but the notification may occur either before or after the information sharing occurs [transmission principle]." Exactly as CI says!

  14. Structure of Privacy Rules [Barth,Datta,Mitchell,Nissenbaum 2006] • Allow message transmission if: • at least one positive norm is satisfied; and • all negative norms are satisfied
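The BDMN rule structure above can be sketched as a small Python check (the norm predicates and message fields are illustrative assumptions, not clauses of any actual regulation): a transmission is allowed iff some positive norm permits it and every negative norm holds.

```python
# Sketch of the allow rule: a transmission is permitted iff at least one
# positive norm applies and all negative norms are satisfied. The norm
# predicates and message fields below are illustrative only.

def allowed(msg, positive_norms, negative_norms):
    return (any(norm(msg) for norm in positive_norms)
            and all(norm(msg) for norm in negative_norms))

# hypothetical HIPAA-flavored norms
positive = [lambda m: m["sender_role"] == "doctor" and m["purpose"] == "treatment"]
negative = [lambda m: m["attribute"] != "psychotherapy-notes" or m["authorized"]]
```

With these norms, a doctor sharing ordinary PHI for treatment is allowed, but sharing unauthorized psychotherapy notes is blocked by the negative norm even though a positive norm applies.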

  15. HIPAA – Healthcare Privacy • HIPAA consists primarily of positive norms: share PHI only if some rule explicitly allows it (2), (3), (5), (6) • Exception: negative norm about psychotherapy notes (4)

  16. COPPA – Children's Online Privacy • COPPA consists primarily of negative norms • Children can share their protected info only if parents consent: (7) (condition), (8) (obligation: future requirements)

  17. Specifying Privacy • MyHealth@Vanderbilt: in all states, only nurses and doctors receive health questions
  G ∀p1, p2, q, m. send(p1, p2, m) ∧ contains(m, q, health-question) ⇒ inrole(p2, nurse) ∨ inrole(p2, doctor)
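This safety property can be checked over a finite trace of send events with a simple Python sketch (the event and role encodings are assumptions of this sketch, not the paper's model-checking procedure):

```python
# Sketch: checking the slide's safety property over a finite trace of send
# events: whenever a message contains a health question, its recipient must
# be active as a nurse or a doctor. Event fields are illustrative.

def privacy_ok(trace, roles):
    """trace: list of send events; roles: agent -> set of active roles."""
    for ev in trace:
        if "health-question" in ev["contains"]:
            if not roles[ev["recipient"]] & {"nurse", "doctor"}:
                return False   # a non-clinician received a health question
    return True

roles = {"Bob": {"patient"}, "Alice": {"nurse"}, "Candy": {"secretary"}}
good = [{"sender": "Bob", "recipient": "Alice", "contains": {"health-question"}}]
bad = [{"sender": "Bob", "recipient": "Candy", "contains": {"health-question"}}]
```

The `good` trace satisfies the property; the `bad` trace violates it because the secretary receives a health question.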

  18. Specifying Utility • MyHealth@Vanderbilt: patients have a strategy to get their health questions answered
  ∀p. inrole(p, patient) ⇒ ⟨⟨p⟩⟩ F ∃q, m. send(q, p, m) ∧ contains(m, p, health-answer)

  19. Outline • Motivating Example • Framework • Model • Logic of Privacy and Utility • Workflows and Responsibility • Algorithmic Results • Workflow Design assuming agents responsible • Auditing logs when agents irresponsible • Conclusions • Differentially Private Workflows (WIP)

  20. MyHealth@Vanderbilt Improved [Diagram: the same workflow of Patient, Secretary, Nurse, and Doctor exchanging Appointment Request, Health Question, and Health Answer messages, now with responsibilities assigned to roles and a workflow engine, e.g. "Doctor should answer health questions".]

  21. Graph-based Workflow • Graph (R, R × R), where R is the set of roles • Edge-labeling function permit: R × R → 2^T, where T is the set of attributes • Responsibility of workflow engine: allow msg from role r1 to role r2 iff tags(msg) ⊆ permit(r1, r2) • Responsibility of human agents in roles: tagging responsibilities (ensure messages are correctly tagged) and progress responsibilities (ensure messages proceed through the workflow)
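The engine's check, tags(msg) ⊆ permit(r1, r2), is a set inclusion and can be sketched directly in Python (the permit table below is a hypothetical instance for the MyHealth roles, not taken from the paper):

```python
# Sketch of the graph-based workflow engine: permit labels each role edge
# with the set of attribute tags allowed on it, and a message may flow from
# r1 to r2 iff tags(msg) is a subset of permit(r1, r2). Table is illustrative.

permit = {
    ("patient", "secretary"): {"appointment-request"},
    ("patient", "nurse"): {"appointment-request", "health-question"},
    ("nurse", "doctor"): {"health-question"},
    ("doctor", "patient"): {"health-answer"},
}

def engine_allows(r1, r2, tags):
    """Allow iff every tag on the message is permitted on the (r1, r2) edge."""
    return tags <= permit.get((r1, r2), set())
```

Edges absent from the table default to the empty set, so flows along unlisted role pairs are denied.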

  22. MyHealth Responsibilities
  • Tagging: nurses should tag health questions
  G ∀p, q, s, m. inrole(p, nurse) ∧ send(p, q, m) ∧ contains(m, s, health-question) ⇒ tagged(m, s, health-question)
  • Progress: doctors should answer health questions
  G ∀p, q, s, m. inrole(p, doctor) ∧ send(q, p, m) ∧ contains(m, s, health-question) ⇒ F ∃m'. send(p, s, m') ∧ contains(m', s, health-answer)
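The doctors' progress responsibility ("eventually answer") can be approximated over a finite log by scanning forward from each received question; a Python sketch with a hypothetical log format:

```python
# Sketch of checking the doctors' progress responsibility over a finite log:
# every health question a doctor receives must eventually be followed by a
# health answer sent to the question's subject. The log format is illustrative.

def progress_ok(log, roles):
    """log: list of recv/send events in temporal order; roles: agent -> roles."""
    for i, ev in enumerate(log):
        if (ev["kind"] == "recv" and "doctor" in roles[ev["agent"]]
                and ev["tag"] == "health-question"):
            # look for a later health-answer from this doctor to the subject
            answered = any(
                later["kind"] == "send"
                and later["agent"] == ev["agent"]
                and later["to"] == ev["subject"]
                and later["tag"] == "health-answer"
                for later in log[i + 1:]
            )
            if not answered:
                return False
    return True

roles = {"Dana": {"doctor"}}
log_ok = [
    {"kind": "recv", "agent": "Dana", "subject": "Bob", "tag": "health-question"},
    {"kind": "send", "agent": "Dana", "to": "Bob", "tag": "health-answer"},
]
log_bad = log_ok[:1]   # question received but never answered
```

On a finite log this treats the temporal F as "later in the log", which is the usual finite-trace reading used when auditing.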

  23. Abstract Workflow • Responsibility of workflow engine: an LTL formula φ; feasible (enforceable) if φ is a safety formula without the contains() predicate • Responsibility of each role r: an LTL formula φr; feasible if agents have a strategy to discharge their responsibilities: ∀p. inrole(p, r) ⇒ ⟨⟨p⟩⟩ φr • Graph-based workflows are a special case

  24. Outline • Motivating Example • Framework • Model • Logic of Privacy and Utility • Workflows and Responsibility • Algorithmic Results • Workflow Design assuming agents responsible • Abstract workflows • Auditing logs when agents irresponsible • Only graph-based workflows • Conclusions

  25. Workflow Design Results • Theorems: assuming all agents act responsibly, checking whether a workflow achieves • Privacy is in PSPACE (in the size of the formula describing the workflow) • Utility is decidable • Algorithms implemented in model checkers, e.g. SPIN, MOCHA • Potential for small-model theorems

  26. Outline • Motivating Example • Framework • Model • Logic of Privacy and Utility • Workflows and Responsibility • Algorithmic Results • Workflow Design assuming agents responsible • Abstract workflows • Auditing logs when agents irresponsible • Only graph-based workflows • Summary

  27. Auditing Results: Summary • Definitions • Accountability = Causality + Irresponsibility • Design of audit log • Captures causality relation • Algorithm • Finding agents accountable for policy violation in graph-based workflows using audit log • Algorithm uses oracle: • O(msg) = contents(msg)

  28. Policy compliance/violation [Diagram: a contemplated action is judged against the policy, given the history and future requirements.] • Strong compliance [BDMN 2006, BDMS 2007] • Action does not violate current policy requirements • Future policy requirements after the action can be met • Formally, … [formula omitted in transcript]

  29. Local Compliance • Locally compliant policy • Agents can determine strong compliance based on their local view of history

  30. Causality • Lamport Causality [1978]

  31. Accountability & Audit Log • Accountability • Causality + Irresponsibility • Audit log design • Records all Send(p,q,m) and Receive(p,q,m) events executed • Maintains causality structure • O(1) operation per event logged
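The audit-log design above can be sketched as an append-only structure in which each event records its causal predecessors (the agent's own previous event, and for a Receive, the matching Send), giving O(1) work per event while preserving the causality relation. The class and field names are illustrative, not the paper's format:

```python
# Sketch of the audit log: Send/Receive events are appended with pointers to
# their causal predecessors, so logging is O(1) per event and the Lamport
# causality structure is preserved. Structure is illustrative.

class AuditLog:
    def __init__(self):
        self.events = []       # append-only list of event records
        self.last_of = {}      # agent -> index of that agent's last event
        self.send_index = {}   # (sender, recipient, msg) -> Send event index

    def _append(self, kind, actor, peer, m, extra_pred=None):
        preds = []
        if actor in self.last_of:
            preds.append(self.last_of[actor])   # actor's own previous event
        if extra_pred is not None:
            preds.append(extra_pred)            # cross-agent causal edge
        idx = len(self.events)
        self.events.append(
            {"kind": kind, "actor": actor, "peer": peer, "msg": m, "preds": preds})
        self.last_of[actor] = idx
        return idx

    def log_send(self, p, q, m):
        self.send_index[(p, q, m)] = self._append("Send", p, q, m)

    def log_receive(self, p, q, m):
        # q's Receive of m is caused by p's matching Send
        self._append("Receive", q, p, m,
                     extra_pred=self.send_index.get((p, q, m)))
```

Following the `preds` pointers backwards recovers the causality graph used by the auditing algorithm.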

  32. Auditing Algorithm

  33. Central Lemma

  34. Main Theorem

  35. MyHealth Example • Policy violation: secretary Candy receives a health-question mistagged as an appointment-request • Construct the causality graph G and search backwards using BFS: Candy received message m from patient Jorge • O(m) = health-question, but tags(m) = appointment-request • The patient is responsible for the health-question tag • Jorge is identified as accountable
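The backward BFS on this example can be sketched in Python (the `parents` map, message ids, and helper callables are all illustrative; the oracle O returns a message's true contents): starting from the violating message, collect mistagged messages in its causal past and hold accountable the tagger of the earliest one, i.e. a mistagged message none of whose predecessors is mistagged.

```python
# Sketch of the auditing algorithm: backward BFS over the causality graph
# from the violating message, using the oracle O(msg) = contents(msg) to
# locate the earliest mistagged message; its tagger is accountable.
from collections import deque

def find_accountable(violation, parents, oracle, tags, tagger):
    queue, seen, mistagged = deque([violation]), {violation}, []
    while queue:
        m = queue.popleft()
        if oracle(m) != tags(m):          # contents disagree with the tag
            mistagged.append(m)
        for _, prev in parents.get(m, []):
            if prev not in seen:
                seen.add(prev)
                queue.append(prev)
    # the earliest mistag: a mistagged message with no mistagged predecessor
    for m in mistagged:
        if all(oracle(p) == tags(p) for _, p in parents.get(m, [])):
            return tagger(m)
    return None

parents = {"m2": [("Candy", "m1")], "m1": []}   # Jorge's m1 forwarded as m2
contents = {"m1": "health-question", "m2": "health-question"}
msg_tags = {"m1": "appointment-request", "m2": "appointment-request"}
taggers = {"m1": "Jorge", "m2": "Candy"}
```

Here the search walks back past Candy's forwarded copy to the origin message, so Jorge, who introduced the mistag, is returned.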

  36. Summary • Framework: concurrent game model • Logic of Privacy and Utility: temporal logic (LTL, ATL*) • Organizational process as workflow: role-based responsibility for human and mechanical agents • Algorithmic results: workflow design assuming agents responsible (privacy and utility decidable via automated model checking); auditing logs when agents irresponsible (from policy violation to accountable agents, using an oracle)

  37. Thanks Questions?

  38. Deciding Privacy • The PLTL model-checking problem is decidable in PSPACE: check G |= (tags-correct ∧ agents-responsible) ⇒ privacy-policy, where G is the concurrent game structure • Result applies to finite models (number of agents, messages, …)

  39. Deciding Utility • ATL* model-checking of concurrent game structures is • Decidable with perfect information • Undecidable with imperfect information • Theorem: There is a sound decision procedure for deciding whether workflow achieves utility • Intuition: • Translate imperfect information into perfect information by considering possible actions from one player’s point of view

  40. Local communication game • Quotient structure under invisible actions, Gp • States: smallest equivalence relation with K1 ~p K2 if K1 →a K2 and a is invisible to p • Actions: [K] →a [K'] if there exist K1 in [K] and K2 in [K'] s.t. K1 →a K2 • Lemma: for all LTL formulas φ visible to p, Gp |= ⟨⟨p⟩⟩ φ implies G |= ⟨⟨p⟩⟩ φ • The truth value of formulas visible to p depends only on actions that p can observe, e.g. send(*, p, *), send(p, *, *) • Locality again!
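The quotient construction can be sketched with a union-find over states linked by invisible actions (the function name and input encoding are assumptions of this sketch): states reachable from each other via actions invisible to p collapse into one class, and visible actions induce transitions between classes.

```python
# Sketch of the quotient Gp: states connected by actions invisible to player p
# are merged into equivalence classes [K]; a class-level transition on action a
# exists when some representatives are connected by a. Encoding is illustrative.

def quotient(states, transitions, visible):
    """states: set; transitions: set of (K1, action, K2); visible: actions p sees."""
    parent = {s: s for s in states}
    def find(s):
        while parent[s] != s:
            parent[s] = parent[parent[s]]   # path halving
            s = parent[s]
        return s
    # union states linked by invisible actions
    for (k1, a, k2) in transitions:
        if a not in visible:
            parent[find(k1)] = find(k2)
    classes = {}
    for s in states:
        classes.setdefault(find(s), set()).add(s)
    # class-level transitions induced by visible actions
    edges = {(find(k1), a, find(k2)) for (k1, a, k2) in transitions if a in visible}
    return classes, edges

classes, edges = quotient(
    {"K0", "K1", "K2"},
    {("K0", "internal", "K1"), ("K1", "send_p", "K2")},
    visible={"send_p"},
)
```

In the example, the invisible `internal` step merges K0 and K1 into one class, leaving a single visible `send_p` transition between the two remaining classes.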

  41. Related Languages [comparison table omitted] • Legend: ✗ unsupported, ○ partially supported, ● fully supported • LPU fully supports attributes, combination, temporal conditions, and utility
