Software Based Memory Protection For Sensor Nodes

Presentation Transcript

  1. Software Based Memory Protection For Sensor Nodes. Ram Kumar, Eddie Kohler, Mani Srivastava (ram@ee.ucla.edu). CENS Technical Seminar Series

  2. Memory Corruption • Single address space CPU, shared by apps., drivers and the OS • Most bugs in deployed systems come from memory corruption • Corrupted nodes trigger network-wide failures • Memory protection is an enabling technology for building robust software for motes • [Figure: sensor node address space 0x0000 to 0x0200: globals and heap (apps., drivers, OS) below the run-time stack, with no protection between them]

  3. Why is Memory Protection Hard? • No MMU in embedded micro-controllers • MMU hardware requires a lot of RAM • Increases area and power consumption

  4. Software-based Approaches • Software-based Fault Isolation (Sandbox) • Coarse-grained protection • Check all memory accesses at run-time • Introduce low-overhead inline checks • Application Specific Virtual Machine (ASVM) • Interpreted code is safe and efficient • ASVM instructions are not type-safe

  5. Software-based Approaches • Type-safe languages • Language semantics prevent illegal memory accesses • Fine-grained memory protection • Challenge is to interface with non-type-safe software • Ignores the large existing code base • Output of a type-safe compiler is harder to verify, especially with performance optimizations • CCured: type-safe retrofitting of C code • Combines static analysis and run-time checks • Provides fine-grained memory safety • Difficult to interface with pre-compiled libraries • Different representation of pointer types

  6. Overview • Ideal: a combination of software-based approaches, e.g. sandboxing for ASVM instructions • Software-based Fault Isolation (SFI) • Building block for providing coarse-grained protection • Enhanced using other approaches (e.g. static analysis) • Memory Map Manager • Ensures integrity of memory accesses • Control Flow Manager • Ensures integrity of control flow

  7. SOS Operating System • Static SOS kernel • Kernel components: dynamic memory, message scheduler, dynamic linker • SOS services: sensor manager, messaging I/O, system timer • Device drivers: radio, I2C, ADC • Dynamically loaded modules: tree routing module, data collector application, photosensor module

  8. Design Goals • Provide coarse-grained memory protection • Protect OS from applications • Protect applications from one another • Targeted for resource constrained systems • Low RAM usage • Acceptable performance overhead • Memory safety verifiable on the node

  9. Outline • Introduction • System Components • Memory Map • Control Flow Manager • Binary Re-Writer • Binary Verifier • Evaluation

  10. System Overview • Desktop: the raw binary passes through the binary re-writer to produce a sandboxed binary • Sensor node: the binary verifier admits a memory-safe binary, which runs against the memory map and control flow manager

  11. System Components • Re-writer • Introduces run-time checks • Verifier • Scans for unsafe operations before admission • Memory Map Manager • Tracks fine-grained memory layout and ownership information • Control Flow Manager • Handles context switches within a single address space

  12. Classical SFI (Sandboxing) • Partition the address space of a process into contiguous domains • Application extensions are loaded into separate domains • Run-time checks force memory accesses into a module's own domain • Checks have very low overhead • [Figure: kernel module #1, kernel module #2, the operating system, and the run-time stack each occupy a separate domain]

  13. Challenges: SFI on a Mote • Partitioning the address space is impractical • Total available memory is severely limited • Static partitioning further reduces memory • Our approach • Permit an arbitrary memory layout • But maintain a fine-grained map of that layout • Verify valid accesses through run-time checks

  14. Memory Map • Partition the address space into blocks • Allocate memory in segments (sets of contiguous blocks) • Encoded information per block: ownership (kernel/free or user) and layout (start-of-segment bit) • Provides fine-grained layout and ownership information • [Figure: address space 0x0000 to 0x0200 divided into user and kernel domains]

  15. Memmap in Action: User-Kernel Protection
pA = malloc(KERN, 16);
pB = malloc(USER, 30);
mem_chown(pA, USER);
• Block size on Mica2: 8 bytes • Efficiently encoded using 2 bits per block • 00 - free / start of kernel-allocated segment • 01 - later portion of kernel-allocated segment • 10 - start of user-allocated segment • 11 - later portion of user-allocated segment

  16. Memmap API
memmap_set(Blk_ID, Num_blk, Dom_ID)
• Updates the memory map • Blk_ID: ID of the starting block in a segment • Num_blk: number of blocks in the segment • Dom_ID: domain ID of the owner (e.g. USER / KERN)
Dom_ID = memmap_get(Blk_ID)
• Returns the domain ID of the owner of a memory block • API accessible only from a trusted domain (e.g. the kernel) • Property verified before loading
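The API above can be sketched in C using the 2-bit-per-block encoding from slide 15. The table geometry, helper names, and record layout below are illustrative assumptions, not the actual SOS implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative geometry: 4 KB of RAM in 8-byte blocks gives 512 blocks;
 * 2-bit records packed 4 per byte give a 128-byte table. */
#define NUM_BLOCKS    512
#define RECS_PER_BYTE 4

enum { KERN = 0, USER = 1 };    /* domain IDs from the slides */

static uint8_t memmap[NUM_BLOCKS / RECS_PER_BYTE];

/* One 2-bit record per block: high bit = owner domain, low bit set on
 * "later portion" blocks (clear marks the start of a segment). */
static void set_record(uint16_t blk, uint8_t rec) {
    uint16_t byte  = blk / RECS_PER_BYTE;
    uint8_t  shift = (uint8_t)((blk % RECS_PER_BYTE) * 2);
    memmap[byte] = (uint8_t)((memmap[byte] & ~(0x3u << shift)) | (rec << shift));
}

/* memmap_set: mark num_blk contiguous blocks as one segment owned by
 * dom_id; only the first block gets the start-of-segment encoding. */
void memmap_set(uint16_t blk_id, uint16_t num_blk, uint8_t dom_id) {
    for (uint16_t i = 0; i < num_blk; i++)
        set_record(blk_id + i, (uint8_t)((dom_id << 1) | (i ? 1 : 0)));
}

/* memmap_get: return the owner domain of a block (the record's high bit). */
uint8_t memmap_get(uint16_t blk_id) {
    uint16_t byte  = blk_id / RECS_PER_BYTE;
    uint8_t  shift = (uint8_t)((blk_id % RECS_PER_BYTE) * 2);
    return (uint8_t)(((memmap[byte] >> shift) & 0x3) >> 1);
}
```

Note that with this encoding a free block and a kernel-owned block both read back as domain 0, matching the shared `00` code on slide 15.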

  17. Using the Memory Map for Protection • Protection model: write access to a block is granted only to its owner • Systems using the memory map need to ensure: • Ownership information in the memory map is current • Only the block owner can free or transfer ownership • A single trusted domain has access to the memory map API • The memory map is stored in protected memory • Easy to incorporate into existing systems • Modify the dynamic memory allocator (malloc, free) • Track function calls that pass memory from one domain to another • Changes to the SOS kernel: ~1% (103 lines in the SOS memory manager, out of 12720 lines in the kernel)
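The allocator change described above can be sketched as follows. This is a sketch under stated assumptions, not the SOS code: a bump allocator stands in for the SOS heap, a plain owner byte per block stands in for the packed memory map, and `mem_chown` takes an explicit size (the real system can recover the segment length from the start-of-segment bits):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define BLOCK_SIZE 8
enum { KERN = 0, USER = 1 };

static uint8_t  heap[256];
static uint16_t next_blk;                        /* bump pointer, in blocks */
static uint8_t  owner[sizeof heap / BLOCK_SIZE]; /* simplified memory map */

static void memmap_set(uint16_t blk, uint16_t n, uint8_t dom) {
    for (uint16_t i = 0; i < n; i++)
        owner[blk + i] = dom;
}

/* malloc(dom, size) from slide 15: allocate whole blocks and record
 * their owner, so the write checker can enforce the protection model. */
void *dom_malloc(uint8_t dom, uint16_t size) {
    uint16_t blocks = (uint16_t)((size + BLOCK_SIZE - 1) / BLOCK_SIZE);
    if (next_blk + blocks > sizeof heap / BLOCK_SIZE)
        return NULL;                             /* out of memory */
    memmap_set(next_blk, blocks, dom);
    void *p = &heap[next_blk * BLOCK_SIZE];
    next_blk += blocks;
    return p;
}

/* mem_chown from slide 15, simplified to take an explicit size. */
void mem_chown(void *p, uint16_t size, uint8_t new_dom) {
    uint16_t blk = (uint16_t)(((uint8_t *)p - heap) / BLOCK_SIZE);
    memmap_set(blk, (uint16_t)((size + BLOCK_SIZE - 1) / BLOCK_SIZE), new_dom);
}
```

Running the slide 15 sequence through this sketch, `dom_malloc(KERN, 16)` claims blocks 0-1 for the kernel, `dom_malloc(USER, 30)` claims blocks 2-5 for the user domain, and `mem_chown` then transfers the first segment to the user.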

  18. Memmap Checker • Enforces the protection model • Checker invoked before EVERY write access • Protection model: write access to a block is granted only to its owner • Checker operations • Look up the memory map based on the write address • Verify that the currently executing domain is the block owner

  19. Address → Memory Map Lookup • A 16-bit address splits into a block number (bits 11-3, 9 bits) and a block offset (bits 2-0) • The block number indexes the memmap table • 1 byte holds 4 memmap records
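The decomposition above can be written out in plain C (field positions are from the slide; the struct and names are ours). This is the straightforward shift-based version; the next slide replaces it with a FLASH look-up table because the AVR only shifts one bit at a time:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint16_t block;      /* block number: address bits 11-3 (9 bits) */
    uint8_t  offset;     /* offset within the 8-byte block: bits 2-0 */
    uint16_t table_byte; /* memmap byte index: 4 records per byte */
    uint8_t  rec_shift;  /* bit position of the 2-bit record */
} memmap_loc;

/* Split a 16-bit data address into the fields the checker needs. */
memmap_loc memmap_locate(uint16_t addr) {
    memmap_loc l;
    l.offset     = addr & 0x7;            /* bits 2-0 */
    l.block      = (addr >> 3) & 0x1FF;   /* bits 11-3 */
    l.table_byte = l.block >> 2;          /* 4 records per byte */
    l.rec_shift  = (uint8_t)((l.block & 0x3) * 2);
    return l;
}
```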

  20. Optimizing the Memmap Checker • Minimize the performance overhead of checks • The address → memory map lookup requires multiple complex bit-shift operations • Micro-controllers support only single-bit shift operations • Use a FLASH-based look-up table • 4x speed-up: from 32 to 8 clock cycles • Overall overhead of a check: 66 cycles

  21. Memory Map is Tunable • Number of memmap bits per block • More bits → multiple protection domains • Address range of protected memory • Protect only a small portion of total memory • Block size • Match block size to the size of memory objects • Mica2: 8 bytes; Cyclops: 128 bytes • [Chart: memory map overhead for 8-byte blocks]

  22. Outline • Introduction • System Components • Memory Map • Control Flow Manager • Binary Re-Writer • Binary Verifier • Evaluation

  23. What About Control Flow? • State within a domain can become corrupt • The memory map protects one domain from another • Function pointers in data memory • Calls to arbitrary locations in code memory • Return address on the stack • Single stack for the entire system • Returns to arbitrary locations in code memory

  24. Control Flow Manager • Ensure control flow integrity • Control flow enters a domain only at designated entry points • Control flow leaves a domain to the correct return address • Track the current active domain (required by the memmap checker) • Require binary modularity • Program memory is partitioned • Only one domain per partition • [Figure: program memory with domain A issuing call foo into domain B, which runs foo, a local call, and ret]

  25. Ensuring Control Flow Integrity • Check all CALL and RETURN instructions • CALL check: if the address is within the bounds of the current domain, then CALL; else transfer to the Cross Domain Call Handler • RETURN check: if the address on the stack is within the bounds of the current domain, then RETURN; else transfer to the Cross Domain Return Handler • Checks are optimized for performance
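The decision both checks make can be sketched as a single predicate; the domain bounds and program-memory addresses below are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical program-memory bounds of one domain partition. */
typedef struct { uint16_t lo, hi; } domain_bounds;

/* True: the target stays inside the current domain, so the CALL (or
 * RETURN) proceeds directly. False: control must transfer to the
 * cross domain call/return handler instead. */
bool in_current_domain(uint16_t target, domain_bounds cur) {
    return target >= cur.lo && target < cur.hi;
}
```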

  26. Cross Domain Control Flow • Function call from one domain to another • Determine the callee domain's identity • Verify a valid entry point in the callee domain • Save the current return address

  27. Cross Domain Call Stub • Verify the call lands in the jump table • Get the callee domain ID from the call address • Store the return address • Exported functions are registered in the jump table • [Figure: domain A issues call fooJT; the jump table entry fooJT: jmp foo transfers to foo in domain B, which ends in ret]

  28. Cross Domain Return • Cross domain return stub • Verify the return address • Restore the caller domain ID • Restore the previous return address • Return • [Figure: caller issues call foo; callee foo … ret returns through the stub]

  29. Stack Protection • Single stack shared by all domains • Stack bound set at cross domain calls and returns • Protection model: no writes beyond the latest stack bound • Limits corruption to the current stack frame • Enforced by the memmap checker, which checks all write addresses • [Figure: data memory with user and kernel regions; the stack grows down toward the stack bound]

  30. Outline • Introduction • System Components • Memory Map • Control Flow Manager • Binary Re-Writer • Binary Verifier • Evaluation

  31. Binary Re-Writer • The re-writer is a C program running on a PC • Input is the raw binary output by the cross-compiler • Performs basic block analysis • Inserts inline checks, e.g. on memory accesses • Preserves the original control flow, e.g. branch targets • [Figure: raw binary → binary re-writer → sandboxed binary, on the PC]

  32. Memory Write Checks • The actual sequence depends upon the addressing mode • The sequence is re-entrant and works in the presence of interrupts • Can be improved by using dedicated registers
Original instruction: st Z, Rsrc
Rewritten sequence:
push X
push R0
movw X, Z
mov R0, Rsrc
call memmap_checker
pop R0
pop X

  33. Control Flow Checks
Return instruction: ret → jmp ret_checker
Direct call instruction: call foo → ldi Z, foo / call call_checker
Indirect call instruction: icall → call call_checker

  34. Outline • Introduction • System Components • Memory Map • Control Flow Manager • Binary Re-Writer • Binary Verifier • Evaluation

  35. Binary Verifier • Verification is done at every node • Correctness of the scheme depends upon the correctness of the verifier • The verifier is very simple to implement • Single in-order pass over the instruction sequence • No state maintained by the verifier • Verifier line count: 205 lines • Re-writer line count: 3037 lines

  36. Verified Properties • All store instructions to data memory are sandboxed • Store instructions to program memory are not permitted • Static jump/call/branch targets lie within domain bounds • Indirect jumps and calls are sandboxed • All return instructions are sandboxed

  37. Outline • Introduction • System Components • Memory Map • Cross Domain Calls • Binary Re-Writer • Binary Verifier • Evaluation

  38. Resource Utilization • Implemented the scheme in the SOS operating system • Compiling a blank SOS kernel for the Mica2 sensor platform • Size of the memory map: 128 bytes • Additional memory used for storing parameters • Stack bound, return address, etc.
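The 128-byte figure follows directly from the earlier parameters: 8-byte blocks and 2 bits per block (slides 15 and 21). The 4 KB RAM size is our assumption about the Mica2's ATmega128, not stated on this slide:

```c
#include <assert.h>
#include <stdint.h>

/* Memory map table size in bytes for a given RAM size, block size,
 * and record width: (ram / block) blocks, bits_per_block bits each. */
uint16_t memmap_table_bytes(uint16_t ram_bytes, uint16_t block_size,
                            uint16_t bits_per_block) {
    uint16_t blocks = ram_bytes / block_size;
    return (uint16_t)(blocks * bits_per_block / 8);   /* bits -> bytes */
}
```

With 4096 bytes of RAM, 8-byte blocks give 512 blocks, and 512 two-bit records pack into 128 bytes, matching the slide.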

  39. Memory Map Overhead • API modification overhead (CPU cycles) • Overhead of setting and clearing memory map bits

  40. Control Flow Checks and Transfers • Inline checks occur most frequently • ker_ret_check: push and pop of the return address • Module verification takes ~175 ms for a 2600-byte module

  41. Impact on Module Size • Code size increases due to inline checks • Can be reduced if performance is not critical, which is true for most sensor network apps • Increased cost for module distribution • No change in data memory used

  42. Performance Impact • Experiment setup • 3-hop linear network simulated in Avrora • Simulation executed for 30 minutes • Tree Routing and Surge modules inserted into the network • Data packets transmitted every 4 seconds • Control packets transmitted every 20 seconds • 1.7% increase in relative CPU utilization • Absolute increase in CPU: 8.41% to 8.56% • 164 run-time checks introduced • Checks executed ~20000 times • Can be reduced by introducing fewer checks

  43. Deployment Experience • Run-time checker signaled a violation in Surge • Offending source code in Surge:
hdr_size = SOS_CALL(s->get_hdr_size, proto);
s->smsg = (SurgeMsg*)(pkt + hdr_size);
s->smsg->type = SURGE_TYPE_SENSORREADING;
• SOS_CALL fails in some conditions and returns -1 • The unchecked return value is used as a buffer offset • The protection mechanism prevents such corruption
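A hedged sketch of the fix this violation points at: check the SOS_CALL result before using it as an offset. The stand-in functions and names below are illustrative; only the -1 failure convention comes from the slide:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-ins for the Surge/SOS context: get_hdr_size may fail and
 * return -1, which must not be used as a buffer offset. */
static int8_t hdr_size_ok(void)   { return 8;  }
static int8_t hdr_size_fail(void) { return -1; }

/* Guarded version of the offending sequence: returns the validated
 * header size, or -1 to tell the caller to bail out. */
int16_t checked_offset(int8_t (*get_hdr_size)(void)) {
    int8_t hdr_size = get_hdr_size();   /* SOS_CALL(...) in the module */
    if (hdr_size < 0)
        return -1;                      /* the check the original code omits */
    return hdr_size;                    /* now safe to add to pkt */
}
```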

  44. Conclusion • Software-based Memory Protection • Enabling technology for reliable software systems • Memory Map and Cross Domain Calls • Building blocks for software based fault isolation • Low resource utilization • Minimal performance overhead • Widely applicable • SOS kernel with dynamic modules • TinyOS components using dynamic memory • Natively implemented ASVM instructions

  45. Future Work • Explore CPU architecture extensions • Prototype AVR implementation in progress • Static analysis of binary • Reduce number of inline checks • Improve overall system performance • Increase complexity of verifier

  46. Thank You! http://nesl.ee.ucla.edu/projects/sos-1.x Ram Kumar, CENS Seminar, October 20, 2006

  47. SOS Memory Layout • Static kernel state (from 0x0000): accessed only by the kernel • Dynamically allocated heap: shared by kernel and applications • Run-time stack (up to 0x0200): shared by kernel and applications

  48. Reliable Sensor Networks • Reliability is a broad and challenging goal • Data integrity: how do we trust data from our sensors? • Network integrity: how do we make the network resilient to failures? • System integrity: how do we develop robust software for sensors?