
A Logic of Secure Systems with Tunable Adversary Models


Presentation Transcript


  1. A Logic of Secure Systems with Tunable Adversary Models Jason Franklin With Anupam Datta, Deepak Garg, Dilsun Kaynar CyLab, Carnegie Mellon University

  2. Motivation: Secure Access to Financial Data Goal: An end-to-end trusted path in presence of local and network adversaries Network

  3. Secure System Designs (diagram labels: Web Server, OS, BIOS) Secure System / Security Property pairs: the VMM maintains the confidentiality and integrity of data stored in honest VMs; the system maintains integrity of OS and web server code; communication between frames remains confidential. Adversaries: Malicious Thread, Malicious Frame & Server, Malicious Virtual Machine

  4. Logic-based Analysis of System Security Security Property Formal Model of Adversary Analysis Engine Adversary defined by a set of capabilities Secure System Proof of security property, or an identified attack A. Datta, J. Franklin, D. Garg, D. Kaynar, A Logic of Secure Systems and its Application to Trusted Computing, Oakland '09

  5. Method Secure System Security Property Modeled as a set of programs in a concurrent programming language containing primitives relevant to secure systems Cryptography, network communication, shared memory, access control, machine resets, dynamic code loading Specified as logical formulas in the Logic of Secure Systems (LS2) Adversary Model Any set of programs running concurrently with the system Analysis Engine Sound proof system for LS2

  6. Adversary Model • Adversary capabilities: • Local process on a machine • E.g., change unprotected code and data, steal secrets, reset machines • In general, constrained only by system interfaces • Network adversary: • E.g., create, read, delete, inject messages • More later in Arnab Roy's talk • These capabilities enable many common attacks: • Network Protocol Attacks: Freshness, MITM • Local Systems Attacks: TOCTTOU and other race conditions, violations of code integrity, data confidentiality, and data integrity • Combinations of network and system attacks, e.g., web attacks

  7. Application • Case study of Trusted Computing Platform • TCG specifications are industry and ISO/IEC standard • Over 100 million deployments • Applications include Microsoft’s BitLocker and HP’s ProtectTools • Formal model of parts of the TPM co-processor • First logical security proofs of two attestation protocols • Results of analysis: • Previously unknown incompatibility between protocols • Cannot be used together without additional protection • 2 new weaknesses • Previously known TOCTTOU attacks • [GCB+(Oakland’06),SPD(Oakland’05)] • Principled source code audit

  8. TCG Remote Attestation Why should the client’s answer be trusted? Client Describe your software stack! Remote Verifier

  9. Trusted Computing Platform Components Append-only log; set to BOL on reset Co-processor for cryptographic operations Client Remote Verifier Protected private key (AIK) PCR Check Trusted Platform Module (TPM) BOL Industry standard developed by Trusted Computing Group
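The PCR mechanism named on this slide can be sketched in a few lines. This is an illustrative model only (SHA-1 and 20-byte PCRs as in TPM 1.2, with hypothetical component names), not the talk's formal TPM model: extending a PCR hashes the old value together with the new measurement, which is what makes the log append-only and lets a verifier recompute the expected chain.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style PCR extend: new value = H(old value || measurement),
    so earlier entries can never be overwritten, only built upon."""
    return hashlib.sha1(pcr + measurement).digest()

# BOL: the PCR reset value (all zeros for a SHA-1 PCR)
BOL = b"\x00" * 20

# Boot-time measurement chain on the client (component names are illustrative)
pcr = BOL
for component in (b"BIOS", b"bootloader", b"OS", b"webserver"):
    pcr = extend(pcr, hashlib.sha1(component).digest())

# A remote verifier who knows the expected software stack recomputes the chain
expected = BOL
for component in (b"BIOS", b"bootloader", b"OS", b"webserver"):
    expected = extend(expected, hashlib.sha1(component).digest())

assert pcr == expected  # stacks match; any substituted component would differ
```

Because extend composes hashes, a machine that loaded a different bootloader cannot produce the same final PCR value without finding a hash collision.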

  10. Dynamic Root of Trust for Measurement (DRTM) Client Isolated Environment … … latelaunch APP Remote Verifier Nonce, EOL P LL Dynamic PCR Check EOL Trusted Platform Module (TPM) Nonce P BOL
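The nonce/quote exchange on this slide can be sketched as follows. This is a minimal model under stated assumptions: an HMAC stands in for the AIK signature (a real TPM signs quotes with an RSA AIK), and the function names are hypothetical. The point it illustrates is the split of duties: the nonce gives the verifier freshness, the key gives integrity, and the dynamic PCR value identifies which program P was late-launched.

```python
import hashlib
import hmac
import os

AIK = os.urandom(32)  # stand-in for the TPM-protected AIK (real TPMs use RSA keys)

def tpm_quote(pcr: bytes, nonce: bytes) -> bytes:
    # The TPM binds the PCR value to the verifier's nonce under the AIK
    return hmac.new(AIK, pcr + nonce, hashlib.sha256).digest()

def verify(pcr: bytes, nonce: bytes, quote: bytes, expected_pcr: bytes) -> bool:
    # Freshness from the nonce, integrity from the key,
    # identity of the launched code from the PCR value
    return hmac.compare_digest(quote, tpm_quote(pcr, nonce)) and pcr == expected_pcr

nonce = os.urandom(20)                 # verifier's fresh challenge
pcr = hashlib.sha1(b"P").digest()      # dynamic PCR after latelaunch measures P
quote = tpm_quote(pcr, nonce)

assert verify(pcr, nonce, quote, hashlib.sha1(b"P").digest())
assert not verify(pcr, os.urandom(20), quote, hashlib.sha1(b"P").digest())  # replay fails
```

A replayed quote fails because it was computed over a different nonce, which is exactly the freshness property the proof on slide 13 establishes.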

  11. Security Skeleton of DRTM in LS2 Operating Sys. Late Launch Protected Program Co-processor Remote Verifier Abstraction: Security skeleton only models security relevant operations

  12. Challenge: Dynamic Code Loading Operating Sys. Late Launch What is P? Protected Program Programs are typically proved correct assuming their code is known at invocation time Reasoning about the security of dynamically loaded, unknown code requires a separate technique to identify the code of P Co-processor Remote Verifier Abstraction: Security skeleton only models security relevant operations

  13. Proof of DRTM Security Property (timeline figure: nonce generated → jump to P → eval f → verifier finishes)

  14. Refining Trust Requirements between Systems Client • P is provided by application (APP) • P has full access to the machine • What if P is malicious? Isolated Environment … … latelaunch APP Nonce, EOL Remote Verifier P LL Dynamic PCR Check EOL Trusted Platform Module (TPM) Nonce P BOL

  15. Backwards Incompatibility Verifier believes (incorrectly) that APP1 was loaded on Client Client APP Isolated Environment OS Signature BL P BIOS SLB Remote Verifier PCR BL H(APP2) OS H(APP1) APP Check H(APP) Trusted Platform Module (TPM) H(OS) H(BL) BOL Insecure composition of DRTM and SRTM (not modular)

  16. Principled Source Code Auditing Toward secure refinement from design to code

    int slb_dowork(unsigned long params) {
        unsigned char buffer[34], buffer2[34];
        if (slb_prepare_tpm() < 0) { goto tpm_error; }
        pal_enter((void *)params);
        memset(buffer2, 0x00, 20);
        /* Extend("bottom") */
        slb_TPM_Extend(buffer, 17, buffer2);
    tpm_error:
        build_page_tables();
        return 0;
    }

  • Correspondence between system design and implementation • Small TCB aids correspondence check (Flicker ~ 250 LOC) + abstractions

  17. In-progress work… • Towards an interface-based theory of system security Adversary Trust Boundary Operating System Hardware

  18. Tunable Adversary Models Adversary • LS^2 has a fixed adversary • Local concurrently-executing malicious thread • Can we extend LS^2 with tunable adversaries? • Consider an adversary that is Constrained to System Interfaces (CSI) • The adversary can interleave interface calls, combine outputs, and compose interfaces to produce new interfaces • The DRTM-adversary has interface <skinit, extend, …, write> Trust Boundary Operating System Hardware
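One concrete way to read "constrained to system interfaces": the adversary is any interleaving of interface calls, and nothing else. The toy sketch below makes that operational; the System class, its two-call interface, and the invariant checked are all hypothetical illustrations, not the LS2 model. It searches every interleaving of interface calls up to a bounded length and asks whether any trace forges the PCR value that attests that P was launched.

```python
import hashlib
from itertools import product

class System:
    """Toy system exposing a two-call interface, loosely modeled on DRTM."""
    def __init__(self):
        self.pcr = None                # dynamic PCR starts invalid
    def skinit(self, code: bytes):     # late launch: reset, then measure code
        self.pcr = hashlib.sha1(code).digest()
    def extend(self, data: bytes):     # extend is a no-op before any launch
        if self.pcr is not None:
            self.pcr = hashlib.sha1(self.pcr + data).digest()

# CSI adversary: free to call these, in any order, with these inputs
INTERFACE = [("skinit", b"evil"), ("extend", b"junk")]

def attack_reaches(target: bytes, depth: int = 3):
    """Search all interleavings of up to `depth` interface calls for a
    trace that leaves the PCR equal to `target`; None if no trace does."""
    for length in range(1, depth + 1):
        for trace in product(INTERFACE, repeat=length):
            sys_ = System()
            for op, arg in trace:
                getattr(sys_, op)(arg)
            if sys_.pcr == target:
                return trace
    return None

good = hashlib.sha1(b"P").digest()   # PCR value attesting that P was launched
assert attack_reaches(good) is None  # no interleaving forges the good value
```

Tuning the adversary then means editing INTERFACE: a weaker adversary gets fewer calls, a stronger one gets more, and the same bounded search re-checks the property.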

  19. Qualitative Comparison of Security • Comparing Adversary Models: Given two S-system interface-specified adversaries S-Adv1 and S-Adv2, is S-Adv2 more powerful than S-Adv1? S-Adv2 S-Adv1 Trust Boundary Operating System Hardware

  20. Other Scientific Questions • Comparing Systems: A system S1 is at least as secure as a system S2 if: S2-Adv || S2 is secure ⇒ S1-Adv || S1 is secure • Modularity: If system S1 is secure against adversary S1-Adv and system S2 is secure against adversary S2-Adv, how can we reason modularly about the security of system S1||S2 against adversary S1-Adv||S2-Adv?

  21. Conclusion • A logic for reasoning about secure systems • Analysis of trusted computing attestation protocols • Formal model of parts of the TPM co-processor • First logical security proofs of two attestation protocols (SRTM and DRTM) • Analysis identifies: • Previously known TOCTTOU attacks on SRTM [GCB+(Oakland’06),SPD(Oakland’05)] • Previously unknown incompatibility between SRTM and DRTM • (Cannot be used together without additional protection) • In-progress work includes interface-based modeling and analysis • Themes • Adversary models, modular verification, secure refinement, design for verification

  22. Work Related to LS2 • Work on network protocol analysis • BAN, …, Protocol Composition Logic (PCL) • Inspiration for LS2, but limited to protocols • LS2 adds a model of local computation and a local adversary • Work on program correctness (no adversaries) • Concurrent Separation Logic • Synchronization through locks is similar • Higher-order extensions of Hoare Logic • Code being called has to be known in advance • Temporal, dynamic logic • Similar goals, different formal treatment • Formal analysis of Trusted Computing Platforms • Primarily using model checking
