
Student Talk #1: Embedded Security



  1. Student Talk #1: Embedded Security

  2. Reading List • Required: • P. Kocher, R. Lee, G. McGraw, A. Raghunathan, and S. Ravi, “Security as a New Dimension in Embedded System Design,” in Design Automation Conference, pp. 753-760, 2004. • T. Park and K. G. Shin, “Soft Tamper-Proofing via Program Integrity Verification in Wireless Sensor Networks,” IEEE Transactions on Mobile Computing, http://kabru.eecs.umich.edu/security/papers/park-shin-TMC05.pdf • G. E. Suh et al., “Design and Implementation of the AEGIS Single-Chip Secure Processor Using Physical Random Functions,” ISCA, 2005, http://ieeexplore.ieee.org/search/wrapper.jsp?arnumber=1431543 • Recommended: • P. Koopman, “Embedded System Security,” in Computer, vol. 37, issue 7, pp. 95-97, July 2004. • A. Seshadri et al., “SWATT: SoftWare-based ATTestation for Embedded Devices,” IEEE Symposium on Security and Privacy, 2004, http://www.cs.cornell.edu/People/egs/syslunch-fall04/swatt.pdf • D. Lie, C. Thekkath, et al., “Architectural Support for Copy and Tamper Resistant Software,” ASPLOS, 2000, http://portal.acm.org/citation.cfm?coll=GUIDE&dl=GUIDE&id=357005

  3. Motivation for Embedded Security • Embedded systems are proliferating into everyday objects. • Unlike desktop or enterprise systems, there is often no backup or recovery path when the system fails. • Embedded systems interact with the real world, with potentially deadly consequences: • Automotive braking / engine management systems • Medical monitoring / reporting • Airport radar tracking systems

  4. Challenges of Embedded Security • Computation constraints • Low CPU speed • Small data word size; 4-bit/8-bit words are common • Cost constraints • 4-bit/8-bit processors are widespread because of their low cost • Energy constraints • Many devices run on battery power • Some must endure long periods without recharging • Physical security • Devices can be bought or stolen for reverse engineering • e.g., satellite TV receivers, sensor networks

  5. Embedded System Attacks • Invasive • Physical attacks • Micro-probing • Reverse engineering • Destroys the embedded device • Requires expensive equipment • Confocal microscope • Micro-manipulator workstation • Scanning electron microscope for automated layout reconstruction • Often a stepping stone for other, non-invasive attacks

  6. Invasive Attack CMOS AND gate imaged by a confocal microscope. The image shows 1610 bits in an ST16xyz; every bit is represented by either a present or a missing diffusion-layer connection. The depackaged smartcard processor is glued into a test package, whose pins are then connected to the contact pads of the chip with aluminum wires in a manual bonding machine. Images and captions from [Kömmerling '99]

  7. Embedded System Attacks • Non-Invasive • Side-Channel Attacks • Timing Analysis • Power Analysis • Electromagnetic Analysis • Fault Induction Figures from www.cryptography.com

  8. Non-Invasive Attack: Timing Analysis • Measure slight differences in execution time based on the input to the algorithm. • Requires detailed knowledge of the algorithm • Typically the square-and-multiply routine in RSA, ElGamal, and Diffie-Hellman public-key cryptography. • Requires knowledge of the implementation and of the reasons execution time varies. • Perform statistical timing analysis of the input/output sequence to break the code. See [Hess '00] for more information.
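The data-dependent work that timing analysis exploits is easy to see in square-and-multiply itself: an extra multiplication happens only for 1-bits of the secret exponent. A minimal Python sketch (simplified: it counts operations rather than measuring wall-clock time):

```python
def square_and_multiply(base, exponent, modulus):
    """Left-to-right square-and-multiply; returns (result, multiply_count).

    The extra multiply on each 1-bit of the exponent is the
    data-dependent work a timing attack exploits.
    """
    result = 1
    multiplies = 0
    for bit in bin(exponent)[2:]:
        result = (result * result) % modulus      # square on every bit
        if bit == '1':
            result = (result * base) % modulus    # multiply only on 1-bits
            multiplies += 1
    return result, multiplies

# Exponents with more 1-bits cost more multiplications, so execution
# time correlates with the secret exponent's bit pattern.
r1, m1 = square_and_multiply(7, 0b1000000, 1009)  # one 1-bit
r2, m2 = square_and_multiply(7, 0b1111111, 1009)  # seven 1-bits
```

Real implementations defeat this with constant-time (e.g., always-multiply "Montgomery ladder") exponentiation.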

  9. Non-Invasive Attack: Power Analysis • Measures the power consumed by the device while it performs cryptographic operations. • Simple Power Analysis (SPA) • Direct measurement of system power as a function of time • Multiplication, shift, and permutation operations are observable • Differential Power Analysis (DPA) • Data collection phase • Perform SPA, construct the sample matrix S[k][j] • Collect the ciphertext outputs C[k] • Data analysis phase • Compute the differential average trace T[i][j] for each key guess i • When the guess is correct (i = Ki), T shows power-consumption biases • When the guess is wrong (i != Ki), T shows little or no fluctuation Figures from www.cryptography.com
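A toy difference-of-means DPA can be simulated in a few lines. The device model here (a random 8-bit S-box, and a single-sample "power trace" equal to the Hamming weight of the S-box output) is an illustrative assumption, not taken from the paper:

```python
import random

rng = random.Random(0)
SBOX = list(range(256))
rng.shuffle(SBOX)                  # stand-in nonlinear S-box
KEY = 0x3C                         # secret key byte to recover

def hamming_weight(x):
    return bin(x).count('1')

# --- Data collection: one power sample per known plaintext byte ---
plaintexts = list(range(256))
traces = [hamming_weight(SBOX[p ^ KEY]) for p in plaintexts]

# --- Data analysis: difference of means for each key guess ---
def dpa_bias(guess):
    # Partition traces by the predicted LSB of the S-box output.
    ones = [t for p, t in zip(plaintexts, traces) if SBOX[p ^ guess] & 1]
    zeros = [t for p, t in zip(plaintexts, traces) if not SBOX[p ^ guess] & 1]
    return abs(sum(ones) / len(ones) - sum(zeros) / len(zeros))

recovered = max(range(256), key=dpa_bias)
```

Only the correct guess partitions the traces consistently with the device's real power consumption, so its difference of means stands out while wrong guesses average out to near zero.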

  10. Non-Invasive Attack: Electromagnetic Analysis • Requires knowledge of the circuit layout. • Typically used in conjunction with invasive techniques. • The idea is similar to power analysis, except that electromagnetic radiation is measured.

  11. Non-Invasive Attack: Fault Induction • The specific security algorithm needs to be known. • Example: RSA • Induce an error in an internal calculation • e.g., induce a result that is incorrect modulo q but correct modulo p; the faulty output then leaks the key. • Jump instruction attack • Cause a conditional branch instruction to fail. • Cyclic encryption fails, allowing the key / data to be easily recovered.
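The RSA example above is the classic Bellcore attack on CRT-based signing: if a fault leaves the signature correct mod p but wrong mod q, then gcd(s_bad^e - m, N) reveals p. A sketch with toy parameters (tiny primes, Python 3.8+ for the modular-inverse form of pow):

```python
from math import gcd

# Toy RSA parameters (illustrative only; real keys use ~1024-bit primes)
p, q = 1009, 1013
N = p * q
e = 17
d = pow(e, -1, (p - 1) * (q - 1) // gcd(p - 1, q - 1))  # private exponent

m = 123456 % N

# CRT signing: compute the signature separately mod p and mod q
sp = pow(m, d % (p - 1), p)
sq = pow(m, d % (q - 1), q)

def crt(ap, aq):
    # Standard CRT recombination of the two half-signatures
    return (ap + p * (((aq - ap) * pow(p, -1, q)) % q)) % N

s_good = crt(sp, sq)

# Fault: the mod-q half is corrupted while the mod-p half stays correct
s_bad = crt(sp, (sq + 1) % q)

# Bellcore attack: s_bad^e ≡ m (mod p) but not (mod q),
# so gcd(s_bad^e - m, N) reveals the secret factor p.
recovered_p = gcd(pow(s_bad, e, N) - m, N)
```

A single faulty signature is enough to factor N, which is why hardened implementations verify every CRT signature before releasing it.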

  12. Non-Invasive Attack: Software Attacks • Software attacks • Reverse engineer / alter the program and/or the master secret in the sensor. • Create / deploy (multiple clones of) manipulated sensors, which could then be used to mount actual attacks such as DoS or sabotaging certain services. • Possible countermeasures: • Secure hardware (trusted platform) • Software attestation

  13. Trusted Computing (TC) • The idea of providing a secure environment for a system (with both hardware and software support) so that it behaves as intended • The Trusted Computing Group (TCG) is responsible for developing industry standards of TC for different platforms (e.g., servers, mobile devices, storage systems) • i.e., “what constitutes trusted computing” • Academia and other parties also propose their own “secure models” that guarantee some level of TC • Mainly to prevent software-based attacks • Hardware-based attacks (e.g., Differential Power Analysis or any invasive attack) remain possible as long as the device is physically reachable. • The concept extends to the context of Digital Rights Management (DRM)

  14. Trusted Computing (TC) • Key components • Protected capabilities • Commands that provide access to shielded locations – places where action on sensitive data is safeguarded • Platform Configuration Registers (PCRs) – states & integrity measurements • Attestation • Verification of the accuracy of information • Integrity measurement, storage & reporting • The process of obtaining metrics of platform characteristics that affect the integrity (trustworthiness) of a platform, storing those metrics, and putting digests of those metrics in PCRs
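The measurement-storage step can be sketched with the TPM 1.2-style PCR extend operation (SHA-1 here because that is what the 1.2-era TPMs discussed in these slides use; the component names are illustrative):

```python
import hashlib

def pcr_extend(pcr, measurement):
    """TPM 1.2-style extend: PCR_new = SHA-1(PCR_old || measurement).

    Because extend chains hashes, the final PCR value commits to the
    entire ordered sequence of measurements, not just their set.
    """
    return hashlib.sha1(pcr + measurement).digest()

pcr = b'\x00' * 20                  # PCRs reset to zeros at platform boot
for component in [b'bootloader', b'kernel', b'app']:
    measurement = hashlib.sha1(component).digest()
    pcr = pcr_extend(pcr, measurement)
```

Any change to, or reordering of, a measured component yields a different final PCR digest, which a remote verifier can detect during attestation.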

  15. Trusted Platform Module (TPM) • A general architectural design/model that meets the minimum TCG specifications. • Components inside the black box are assumed to be trusted & functional (a.k.a. the Root of Trust) • (Hardware) Usually assembled as part of the platform to prevent physical tampering

  16. Trusted Platform Module – Examples • Atmel AT97SC3203 • TCG revision 1.2 • Used in IBM ThinkPads • Intel: LaGrande Technology • AMD: Presidio • Microsoft: NGSCB • e.g., Xbox 360 • All are based on some form of the TCG specifications

  17. Other TC Models: XOM (Stanford) • Execute-Only Memory (XOM) • Assumes only a secure processor and nothing else • Data in main memory is encrypted • Programs are separated into compartments • Protected data is tagged with compartment identifiers • XOM Virtual Machine Monitor (XVMM) • Special instructions for inter-program communication • Pitfalls • The private key is fixed • Performance overhead

  18. Other TC Models – AEGIS (MIT) • Assumes a secure processor & a security kernel • Secure model • Prevents both software & hardware attacks • Implemented using an FPGA; uses manufacturing variations in ICs to generate Physical Random (Unclonable) Functions – PUFs. • Performance overhead • Off-chip memory mechanisms (e.g., integrity verification) [Figure: PUF circuit]
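The PUF idea can be simulated with the common additive-delay model of an arbiter PUF: per-device random stage delays stand in for manufacturing variation. This is a software toy for intuition, not AEGIS's actual circuit:

```python
import random

N_STAGES = 64

def make_device(seed):
    # Manufacturing variation: each device gets its own random
    # per-stage delay differences, fixed at "fabrication" time.
    rng = random.Random(seed)
    return [rng.gauss(0.0, 1.0) for _ in range(N_STAGES)]

def puf_response(delays, challenge_bits):
    """Additive-delay model of an arbiter PUF.

    Each 1 challenge bit swaps the two racing signal paths from that
    stage onward; the response bit is the sign of the accumulated
    delay difference seen by the arbiter.
    """
    total, sign = 0.0, 1
    for d, c in zip(delays, challenge_bits):
        if c:
            sign = -sign
        total += sign * d
    return 1 if total > 0 else 0

device_a = make_device(seed=1)
device_b = make_device(seed=2)
crng = random.Random(123)
challenges = [[crng.randint(0, 1) for _ in range(N_STAGES)]
              for _ in range(100)]
resp_a = [puf_response(device_a, c) for c in challenges]
resp_b = [puf_response(device_b, c) for c in challenges]
```

The same device answers a challenge the same way every time, while two "identically manufactured" devices diverge on most challenges, which is what makes PUF responses usable as per-chip secrets.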

  19. Other TC Models – AEGIS (MIT) • Architectural overview • Four secure execution modes (STD, TE, PTR, SSP) for varying levels of security • Execution mode transitions • Memory protection • Static vs. dynamic • Verified vs. private • Debugging support • Similar to the idea of scan chains [Figure: execution-mode transition diagram – Start-up, STD, SSP, TE/PTR, with Suspend/Resume and Compute Hash transitions]

  20. Other TC Models – AEGIS (MIT) • Software (Applications) • Security Kernel to handle multitasking • Functions & variables: unprotected, verified, or private • Program memory layout

  21. Trusted Computing – Applications • Smart Cards (IC Cards) • Portable embedded integrated circuits (e.g. GSM SIM cards) with tamper-resistant properties • Microprocessor-based or Non-volatile memory-based • Applications: credit cards, access cards, IDs • Must include some cryptographic capability and storage. • User Authentication

  22. Trusted Computing – Applications • A smart card must: • Store any authorization data/code for authenticating the owner and accessing the TPM (e.g., a 20-byte sequence) • Process any protocol related to the communication scheme between itself and the TPM – authorization data should never be exposed • Have a random number generator • Be able to compute a secret value using a SHA-1-based HMAC • Be able to verify authentication values • e.g., Object-Independent Authorization Protocol (OIAP), Object-Specific Authorization Protocol (OSAP)
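The SHA-1-based HMAC computation is available directly in Python's standard library; a sketch of computing and verifying an authorization value (the ordinal and nonce bytes below are illustrative placeholders, not real TPM message fields):

```python
import hmac
import hashlib

auth_secret = bytes(20)       # placeholder 20-byte authorization secret
# Illustrative session data: command ordinal plus the two session nonces
message = b'TPM_ORD_Example' + b'nonce-even' + b'nonce-odd'

# Card side: compute the authorization value with HMAC-SHA1
auth_value = hmac.new(auth_secret, message, hashlib.sha1).digest()

# TPM side: verify by recomputing; compare_digest avoids timing leaks
ok = hmac.compare_digest(
    auth_value,
    hmac.new(auth_secret, message, hashlib.sha1).digest(),
)
```

Because both sides derive the HMAC from the shared secret and fresh nonces, the 20-byte authorization secret itself never travels over the channel.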

  23. Trusted Computing – Applications • Sequence of TPM–smart card interactions (OSAP) • Authorization data in the smart card is never exposed to other entities • Can still encounter other possible hacks & attacks • Need some additional methods to attest (verify) information

  24. Software Attestation: Threat Model • The ubiquity of embedded devices and their pervasiveness in our everyday lives give an adversary the opportunity to easily: • Capture one or more sensors. • Reverse engineer / alter the program and/or the master secret in the sensor. • Create / deploy (multiple clones of) manipulated sensors, which could then be used to mount actual attacks such as DoS or sabotaging certain services. • Or perhaps someone changed the software for good. Cartoon adapted from http://dies.cs.utwente.nl/

  25. Software Attestation: Requirements • Resistance to replay: the attacker should not be able to send a valid result to the verifier by simply replaying previous valid results. • Resistance to prediction: the attacker should not be able to predict the next attestation routine. • Resistance to static analysis: the attacker should not be able to successfully analyze the code using static-analysis techniques within the time period the attester waits for a response from the sensor. • Complete memory coverage: to detect even small memory changes, the attestation routine should read every memory location. • Efficient construction: the attestation routine should be as small as possible to reduce bandwidth consumption, and as efficient as possible to consume less battery power.

  26. Software Attestation: Naïve Approach • The verifier (physically distinct from the embedded device) challenges the embedded device to compute and return a message authentication code (MAC) of the device's memory contents. • The verifier sends a random MAC key • The embedded device computes a MAC over its entire memory with that key and returns the resulting MAC value

  27. Naïve Approach (contd.) • Insufficient: • The attacker could store the original memory contents in empty memory • The attacker could move the original code to another device that it accesses when computing the MAC.

  28. SWATT: SoftWare-based ATTestation • The verifier sends the device a randomly generated challenge. • The challenge is used as the seed to a pseudorandom number generator, which generates the addresses for the memory accesses. • The verification procedure then performs a pseudorandom memory traversal and iteratively updates a checksum of the memory contents. • The attacker cannot predict which memory location is accessed next. • If the current memory access touches an altered location, the attacker's verification procedure must divert the load operation to the memory location where the correct copy is stored. • This causes the result to be delayed. • The verifier detects altered memory when • either the checksum returned by the embedded device is wrong, • or the result is delayed a suspiciously long time.
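The loop structure can be modeled schematically in Python. The paper's RC4-based generator and exact checksum update are replaced here by a seeded PRG and a toy mix function, so this only illustrates the shape of the traversal, not SWATT's real routine:

```python
import random

def swatt_checksum(memory, challenge, iterations=8192):
    """Pseudorandom memory traversal with a checksum that feeds back
    into the address sequence, making the loop hard to parallelize."""
    prg = random.Random(challenge)
    checksum = 0
    for _ in range(iterations):
        # Next address depends on both the PRG and the current checksum
        addr = (prg.randrange(len(memory)) ^ checksum) % len(memory)
        # Toy mix: rotate the 16-bit checksum and fold in the memory byte
        checksum = ((checksum << 5 | checksum >> 11) & 0xFFFF) ^ memory[addr]
    return checksum

firmware = bytearray(b'\x90' * 1024)     # pristine image
tampered = bytearray(firmware)
tampered[512] ^= 0xFF                    # one-byte modification

c_good = swatt_checksum(firmware, challenge=42)
c_bad = swatt_checksum(tampered, challenge=42)
```

Because each address depends on the evolving checksum, an attacker cannot precompute the answer or run the traversal out of order; redirecting loads around a modified byte adds the measurable delay the verifier looks for.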

  29. SWATT: Achieving the desired properties • Pseudorandom memory traversal: forces the attacker to check every memory access for a match with the location(s) altered by the attacker. • Resistance to precomputation and replay: ensured by having the verifier send the seed for the pseudorandom generator. • High probability of detecting changes: ensured by accessing every memory location with high probability (requires a total of O(n log n) accesses). • Optimized implementation: achieved by hand-optimizing the code into a nearly optimal machine-code sequence. • Non-parallelizable: achieved by making the address for each memory access and the computation of the checksum depend on the current value of the checksum.

  30. SWATT: Experiment and Results • Used AVR Studio version 4.0, an integrated development environment for Atmel microcontrollers (which also includes a simulator for the ATmega163L).

  31. SWATT: Some vulnerabilities • Efficiency: incurs many more memory accesses than a sequential scan of the program, without guaranteeing 100 percent detection of memory modifications. • Use of latency to detect attacks: the (random) communication latency in a networked environment may significantly reduce the detectability of this scheme. • Time of verification: the time at which the memory is verified is not the same as the time at which the device is used, so an attacker could change the memory contents of the device between verification and use.

  32. PIV: Program Integrity Verification • Uses a Randomized Hash Function (RHF) to generate the addresses for memory accesses: better suited than the keyed hash functions with randomly generated keys used in SWATT, which • are based on 32-bit operations that perform badly on 8-bit CPUs, and • require the verifier to store/process the complete programs stored on the sensors. • Uses PIV servers (PIVSs) distributed over the entire network that generate PIV code (PIVC), which gets executed on the sensor being verified. • Assumes that adversaries can arbitrarily modify the program, modify the result of the checksum computation, or fake messages from the sensor claiming that the sensor is not attacked. • Assumes that the channel of communication between verifier and device is insecure. • Partitions the program into multiple program blocks and processes and stores these blocks in the PIVS: • The size of this database will be smaller than that of storing all sensor programs, as there will be overlap in program blocks common to all sensors (or groups of sensors).

  33. PIV: Hash and Verify • M1: Sensor → PIVS: IDsensor – specifies the program blocks to be used for the current sensor. • M2: PIVS → Sensor: G, H – the PIVS generates a PIVC (hashing function) and sends it to the sensor. • M3: Sensor → PIVS: Hash{G, H, xl} – the sensor calculates the hash value using the hashing function provided by the PIVS and sends the calculated value back to the PIVS. • M4: PIVS → Sensor: pass or fail – the PIVS compares the received hash value with a locally computed hash value and decides pass/fail; the PIVC then takes the corresponding action at the sensor.
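The four-message exchange can be sketched end to end. HMAC-SHA256 over the program blocks stands in for the paper's generated PIVC hash function, and the sensor ID and block contents are made up for illustration:

```python
import hashlib
import hmac
import os

PROGRAM_DB = {                   # PIVS's copy of each sensor's program blocks
    'sensor-7': [b'block-A', b'block-B', b'block-C'],
}

# M1: the sensor announces its identity, selecting which blocks apply
sensor_id = 'sensor-7'
sensor_memory = [b'block-A', b'block-B', b'block-C']

# M2: the PIVS generates fresh hash-function parameters and sends them
params = os.urandom(16)          # stands in for the generated (G, H)

def piv_hash(blocks, params):
    # Keyed hash over the concatenated program blocks
    return hmac.new(params, b''.join(blocks), hashlib.sha256).digest()

# M3: the sensor hashes its own memory with the received parameters
reported = piv_hash(sensor_memory, params)

# M4: the PIVS recomputes locally and replies pass or fail
expected = piv_hash(PROGRAM_DB[sensor_id], params)
verdict = 'pass' if hmac.compare_digest(reported, expected) else 'fail'
```

Because the parameters are fresh per run, a recorded M3 from an earlier verification hashes to the wrong value next time, which is the replay resistance the security analysis below relies on.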

  34. PIV: State-transition diagram for Sensors

  35. PIV: Security Analysis Replay attacks on M1-M4 are not possible because the proposed hash computation and verification are keyed functions. Specifically: • Reporting a different IDsensor will be caught by the PIVS when its uniqueness is checked; moreover, the malicious sensor cannot pass the rest of the PIV test unless it has the matching program, which must be free of malicious code. • Modifying G, H, or the hash algorithm will cause an inconsistency between the two hash outputs, and hence the verification will fail. • Replaying M3 does not work because each verification produces a distinct hash value, even for an uncompromised sensor. • Intercepting M4 to always report “pass” may let the main code execute; however, subsequent requests to access network resources will be denied. • The PIVS will record that sensor's status as malicious and will inform its neighbors not to relay packets to/from it.

  36. Additional References • P. Kocher, J. Jaffe, and B. Jun, “Introduction to Differential Power Analysis and Related Attacks” (http://www.cryptography.com/resources/whitepapers/). • O. Kömmerling and M. G. Kuhn, “Design Principles for Tamper-Resistant Smartcard Processors,” in Proc. USENIX Workshop on Smartcard Technology (Smartcard '99), pp. 9-20, May 1999. • E. Hess, N. Janssen, B. Meyer, and T. Schütze, “Information Leakage Attacks Against Smart Card Implementations of Cryptographic Algorithms and Countermeasures – A Survey,” EUROSMART Security Conference, 2000. • “Trusted Computing Group: Architectural Overview,” http://www.trustedcomputinggroup.org • “Smart Card – Wikipedia,” http://en.wikipedia.org/wiki/Smart_cards • P. George, “User Authentication With Smart Card In Trusted Computing Architecture,” Gemplus, http://www.gemplus.com/smart/rd/publications/pdf/SAM2406.pdf • M. Shaneck et al., “Remote Software-based Attestation for Wireless Sensors,” http://www-users.cs.umn.edu/~shaneck/esas2005.pdf • A. Seshadri et al., “Using Software-based Attestation for Verifying Embedded Systems in Cars,” www.ece.cmu.edu/~adrian/projects/escar04.pdf
