
Multihop Over the Air Programming


Presentation Transcript


  1. Multihop Over the Air Programming
  Thanos Stathopoulos, LECS Lab, UCLA

  2. Introduction
  • Nature of sensor networks
    • Expected to operate for long periods of time
    • Human intervention impractical or detrimental to the sensing process
  • Nevertheless, code needs to be updated
    • Add new functionality
      • Incomplete knowledge of the environment
      • Predicting the right set of actions is not always feasible
    • Fix bugs
    • Maintenance

  3. Example: ESS at James Reserve
  • It's there!
    • 20 motes deployed
    • Target: 100 at first
  • But what about bug fixes, or new functionality that's not there yet?

  4. Well… It would be better if… 10011100

  5. Reprogramming Approaches
  • Use a VM and transfer capsules
    • Advantage: low energy cost
    • Disadvantages: not as flexible as a full binary update; a VM is required
  • Transfer the entire binary to the motes
    • Advantage: maximum flexibility
    • Disadvantage: high energy cost due to the large volume of data
  • Reliability is required regardless of approach

  6. Previous work: Crossbow Network Programming (XNP)
  • Single hop: one sender, N receivers
  • The sender sends the entire file in 'code capsules' (segments)
  • Receivers store each capsule in EEPROM
    • Each capsule carries an application-layer sequence number (segment number), so it can be stored at the correct address
  • After transmission is complete, receivers read through EEPROM to find gaps
    • Retransmission requests are sent if gaps are found
  • The sender polls each receiver to find out whether it has the entire image
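
As a rough illustration (this is not XNP's actual packet format; the struct layout, sizes, and names below are assumptions), a capsule might carry a segment number used directly as an EEPROM offset, and the post-transfer gap scan reduces to checking a per-segment flag:

```c
#include <stdint.h>
#include <string.h>

#define SEGMENT_SIZE 16   /* assumed EEPROM line size */
#define MAX_SEGMENTS 512  /* assumed image-size limit */

/* Hypothetical capsule layout: segment number plus payload. */
typedef struct {
    uint16_t seg_no;                /* application-layer sequence number */
    uint8_t  data[SEGMENT_SIZE];
} capsule_t;

static uint8_t eeprom[MAX_SEGMENTS][SEGMENT_SIZE];
static uint8_t received[MAX_SEGMENTS];  /* stand-in for "is this line written?" */

/* Store a capsule at the EEPROM address derived from its segment number. */
void xnp_store(const capsule_t *c)
{
    if (c->seg_no < MAX_SEGMENTS) {
        memcpy(eeprom[c->seg_no], c->data, SEGMENT_SIZE);
        received[c->seg_no] = 1;
    }
}

/* After the broadcast ends, scan for the first gap; -1 means image complete. */
int xnp_first_gap(uint16_t total_segments)
{
    for (uint16_t i = 0; i < total_segments; i++)
        if (!received[i])
            return i;  /* request retransmission of segment i */
    return -1;
}
```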

  7. MOAP: Overview
  • Code distribution mechanism specifically targeted at Mica2 motes
  • Full binary updates
  • Multi-hop operation achieved through recursive single-hop broadcasts
  • Energy- and memory-efficient

  8. Requirements and Properties of Multihop Code Distribution
  • The complete image must reach all nodes
    • A reliability mechanism is required
    • If the image doesn't fit in a single packet, it must be placed in stable storage until the transfer is complete
  • Network lifetime shouldn't be significantly reduced by the update operation
  • Memory and storage requirements should be moderate

  9. Resource Prioritization
  • Energy: the most important resource
    • Radio operations are expensive
      • TX: 12 mA
      • RX: 4 mA
    • Stable storage (EEPROM)
      • Optimized for Read() operations
      • Write()s are expensive
      • But everything must be stored
  • Goals
    • Minimize transmissions
    • Minimize Write()s

  10. Resource Prioritization
  • Memory usage
    • Static RAM
      • Only 4K available on the current generation of motes
      • The code update mechanism should leave ample space for the real application
    • Program memory
      • MOAP must transfer itself as well as the new code
        • Otherwise, code updates will only happen once
      • A large image size means more packets transmitted!
        • Not an issue if one can send differences ('diffs')
  • Goals
    • Minimize RAM consumption
    • Use diffs

  11. Resource Prioritization
  • Something's got to give… latency
    • Updates don't respond to real-time phenomena
    • The update rate is infrequent
    • Latency can be traded off for reduced energy usage
      • Unfortunately, not for RAM usage

  12. Design Choices
  • Dissemination protocol: how is data propagated?
  • Concurrently
    • Traditional IP multicast mechanisms
      • Tree construction either at the source(s) or at rendezvous points
      • In MOAP, all nodes must be reached, so the tree must span the entire network
      • Expensive maintenance; state requirements too high for sensor nets
    • Diffusion
      • Soft state reduces memory requirements
      • Currently, TinyDiffusion is not optimized for many-to-all dissemination
    • Flooding
      • Minimal state requirements
      • Low energy efficiency
  • In steps
    • Ripple (neighborhood-by-neighborhood)
      • Low state requirements
      • Slow

  13. Design Choices
  • Reliability mechanism: how are repairs handled?
    • Repair scope: local vs. global
      • The answer depends on the dissemination protocol
    • Loss detection responsibility
      • ACKs vs. NACKs

  14. Design Choices
  • Segment management
    • A segment is MOAP's unit of data, used for transfer and storage
    • Currently aligned to an EEPROM line
  • MOAP needs to store all segments
    • Out-of-order delivery and losses are likely
  • Indexing segments and gap detection
    • Memory hierarchy
    • Sliding window

  15. Ripple Dissemination
  • Transfer data neighborhood-by-neighborhood
    • Neighborhood: nodes in the same broadcast domain
    • Single-hop, recursively extended to multi-hop
  • Goal: very few sources in each neighborhood (preferably only one)
    • Receivers attempt to become sources once they have the entire image
    • A publish-subscribe interface prevents nodes from becoming sources if another source is present (see the sketch below)
  • Leverage the broadcast medium
    • If a data transmission is in progress, a source will always be one hop away!
    • Allows local repairs
  • Increased latency: O(h*D), vs. O(D) for flooding
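
A minimal sketch of the suppression idea (the role names, message names, and interface below are assumptions, not MOAP's actual code): a node with the full image tries to advertise, but backs off if it overhears another node publishing the same or a newer version first:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical node roles during Ripple dissemination. */
typedef enum { RECEIVER, WANT_SOURCE, SOURCE } role_t;

typedef struct {
    role_t  role;
    uint8_t version;         /* code image version this node holds */
    bool    image_complete;
} node_t;

/* Called when the node completes the image: try to become a source. */
void on_image_complete(node_t *n)
{
    n->image_complete = true;
    n->role = WANT_SOURCE;   /* will broadcast PUBLISH after a random delay */
}

/* Overheard a PUBLISH from a neighbor advertising the same or a newer version:
 * suppress our own attempt, since one source per neighborhood is enough. */
void on_publish_heard(node_t *n, uint8_t advertised_version)
{
    if (n->role == WANT_SOURCE && advertised_version >= n->version)
        n->role = RECEIVER;  /* stay quiet; the other node serves this neighborhood */
}

/* Heard a SUBSCRIBE in response to our PUBLISH: start sending data. */
void on_subscribe_heard(node_t *n)
{
    if (n->role == WANT_SOURCE)
        n->role = SOURCE;
}
```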

  16. Ripple

  17. Reliability Mechanism
  • Loss responsibility lies with the receiver
    • Only one node to keep track of (the sender)
  • NACK-based
    • In line with IP multicast and WSN reliability schemes
  • Local scope
    • No need to route NACKs
    • Energy and complexity savings
    • Affordable, since all nodes will eventually have the same image
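
A sketch of receiver-driven loss signaling under these choices (the message layout and function names are assumptions): when the receiver's segment bookkeeping reveals a hole, it sends a repair request scoped to its own neighborhood, with no routing involved:

```c
#include <stdint.h>
#include <stdio.h>

#define BROADCAST_ADDR 0xFFFFu

/* Hypothetical repair request: ask the local source to resend one segment. */
typedef struct {
    uint16_t dest;     /* source address, or BROADCAST_ADDR if the source died */
    uint16_t seg_no;   /* missing segment number */
} rreq_t;

/* Stand-in for the radio stack. */
static void radio_send(const rreq_t *r)
{
    printf("RREQ -> %04x for segment %u\n", r->dest, r->seg_no);
}

/* Receiver-side NACK: no routing is needed, since under Ripple
 * the serving source is at most one hop away. */
void send_repair_request(uint16_t source_addr, uint16_t missing_seg)
{
    rreq_t r = { .dest = source_addr, .seg_no = missing_seg };
    radio_send(&r);
}
```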

  18. Retransmission Policies
  • Broadcast RREQ, no suppression
    • Simple
    • High probability of successful reception
    • Highly inefficient
    • Zero latency
  • Broadcast RREQ, suppression based on randomized timers
    • Quite efficient
    • Complex
    • Latency and successful reception depend on the randomization interval

  19. Retransmission Policies (cont'd)
  • Broadcast RREQ, fixed reply probability (see the sketch below)
    • Simple
    • Good probability of successful reception
    • Latency depends on the probability of reply
    • Average efficiency
  • Broadcast RREQ, adaptive reply probability
    • More complex than the static case
    • Similar latency/reception behavior
  • Unicast RREQ, single reply
    • Smallest probability of successful reception, but highest efficiency
    • Simple, though complexity increases if the source fails
    • Zero latency, but high latency if the source fails
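
To make the trade-offs concrete, here is a minimal sketch (the probability and timer values are invented for illustration) of how a neighbor holding the requested segment might decide whether, and when, to answer a broadcast RREQ under two of these policies:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

/* Fixed reply probability: each neighbor that has the segment answers
 * independently with probability p. Simple, average efficiency. */
bool reply_fixed_probability(double p)
{
    return (double)rand() / RAND_MAX < p;
}

/* Randomized-timer suppression: pick a random delay and schedule the reply;
 * the caller cancels the pending reply if it overhears another node's reply
 * before the timer fires. Efficiency and latency depend on the interval. */
uint16_t reply_delay_ms(uint16_t max_delay_ms)
{
    return (uint16_t)(rand() % max_delay_ms);
}
```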

  20. Retransmission Policies: Comparison

  21. Segment Management: Discovering if a segment is present
  • No indexing
    • Nothing kept in RAM
    • Need to read from EEPROM to find out if segment i is missing
  • Full indexing
    • The entire segment (bit)map is kept in RAM
    • Look at entry i (in RAM) to find out if segment i is missing
  [Diagram: RAM vs. EEPROM lookup for each scheme]

  22. Segment Management (cont'd)
  • Partial indexing
    • Map kept in RAM; each entry represents k consecutive segments
    • A combination of RAM and EEPROM lookups is needed to find out if segment i is missing (see the sketch below)
  [Diagram: partial index in RAM pointing to segment groups in EEPROM]
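
A minimal sketch of the partial-indexing lookup, assuming k = 8 segments per RAM entry and a stand-in array simulating per-segment state in EEPROM: a RAM entry only records whether its whole group is complete, so incomplete groups force an EEPROM read.

```c
#include <stdbool.h>
#include <stdint.h>

#define K            8                     /* segments per RAM index entry (assumed) */
#define MAX_SEGMENTS 512
#define GROUPS       (MAX_SEGMENTS / K)

static bool group_complete[GROUPS];        /* RAM: one flag per k segments */
static bool eeprom_present[MAX_SEGMENTS];  /* stand-in for per-segment state in EEPROM */

/* Stand-in for an (expensive) EEPROM read of one segment's status. */
static bool eeprom_segment_present(uint16_t i) { return eeprom_present[i]; }

/* Partial-index lookup: cheap RAM check first, EEPROM only when necessary. */
bool segment_present(uint16_t i)
{
    if (group_complete[i / K])
        return true;                       /* RAM alone answers the common case */
    return eeprom_segment_present(i);      /* fall back to stable storage */
}

/* On receiving segment i: record it and promote the group's RAM flag
 * once all k of its segments have arrived. */
void segment_received(uint16_t i)
{
    eeprom_present[i] = true;
    uint16_t g = (uint16_t)(i / K);
    bool all = true;
    for (uint16_t j = g * K; j < (g + 1) * K; j++)
        all = all && eeprom_present[j];
    group_complete[g] = all;
}
```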

  23. Segment Management (cont'd)
  • Hierarchical full indexing
    • First-level map kept in RAM
    • Each entry points to a second-level map stored in EEPROM
    • A combination of RAM and EEPROM lookups is needed to find out if segment i is missing
  [Diagram: RAM first-level index, EEPROM second-level index, EEPROM data]

  24. Segment Management (cont'd)
  • Sliding window
    • Bitmap of up to w segments kept in RAM
    • Starting point: last segment received in order
    • RAM-only lookup
    • Limited out-of-order tolerance! (see the sketch below)
  [Diagram: base/offset window in RAM over the EEPROM image]
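
A minimal sketch of the sliding-window bookkeeping (the window size and names are assumptions): a base pointer marks the in-order prefix, a w-entry map covers the segments just after it, and anything beyond base + w has to be dropped, which is exactly the limited out-of-order tolerance noted above.

```c
#include <stdbool.h>
#include <stdint.h>

#define W 16  /* window size in segments (assumed) */

typedef struct {
    uint16_t base;     /* all segments < base are received and in EEPROM */
    bool     bits[W];  /* bits[j]: segment (base + j) has arrived */
} window_t;

/* Returns true if the segment was accepted, false if it fell outside
 * the window and had to be dropped. */
bool window_receive(window_t *w, uint16_t seg)
{
    if (seg < w->base)
        return true;                  /* duplicate of an already-stored segment */
    if (seg >= w->base + W)
        return false;                 /* too far ahead: out-of-order drop */
    w->bits[seg - w->base] = true;

    /* Slide the window forward over any newly in-order prefix. */
    while (w->bits[0]) {
        for (int j = 0; j < W - 1; j++)
            w->bits[j] = w->bits[j + 1];
        w->bits[W - 1] = false;
        w->base++;
    }
    return true;
}

/* Gap detection is a RAM-only scan: the first missing segment is 'base'
 * itself (nothing buffered) or the first false bit in the window. */
uint16_t window_first_gap(const window_t *w)
{
    for (int j = 0; j < W; j++)
        if (!w->bits[j])
            return w->base + j;
    return w->base + W;
}
```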

  25. Segment Management: Comparison

  26. Results: Energy efficiency
  • Significant reduction in traffic when using Ripple
    • Up to 90% for dense networks
  • Full indexing performs 5-15% better than the sliding window
    • Reason: better out-of-order tolerance
    • Differences diminish as network density grows

  27. Results: Latency
  • Flooding is ~5 times faster than Ripple
  • Full indexing is 20-30% faster than the sliding window
    • Again, the reason is out-of-order tolerance

  28. Results: Retransmission Policies
  • Order-of-magnitude reduction when using unicasts

  29. Current Mote Implementation
  • Uses Ripple with the sliding window and the unicast retransmission policy
  • The user builds code on the PC
    • A packetizer creates segments out of the binary
  • The mote attached to the PC becomes the original source and sends a PUBLISH message
  • Receivers one hop away subscribe if the advertised version number is greater than their own
  • When a receiver gets the full image, it sends its own PUBLISH message
    • If it doesn't receive any subscriptions for some time, it COMMITs the new code and invokes the bootloader
    • If a subscription is received, the node becomes a source
  • Eventually, sources also commit (see the sketch below)
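
A compact sketch of this lifecycle as a state machine (the state and event names follow the slide; the timer handling and transition details are assumptions):

```c
/* States a mote moves through during one MOAP update round. */
typedef enum {
    IDLE,        /* running old code, listening for PUBLISH */
    RECEIVING,   /* subscribed to a source, collecting segments */
    PUBLISHING,  /* has the full image, advertising, awaiting subscribers */
    SOURCING,    /* serving the image to its neighborhood */
    COMMITTING   /* committing the image and invoking the bootloader */
} moap_state_t;

typedef enum { PUBLISH_HEARD, SUBSCRIBE_HEARD, IMAGE_COMPLETE,
               PUBLISH_TIMEOUT, TRANSFER_DONE } moap_event_t;

moap_state_t moap_step(moap_state_t s, moap_event_t e)
{
    switch (s) {
    case IDLE:
        /* Subscribe only if the advertised version is newer (checked by caller). */
        return (e == PUBLISH_HEARD) ? RECEIVING : IDLE;
    case RECEIVING:
        return (e == IMAGE_COMPLETE) ? PUBLISHING : RECEIVING;
    case PUBLISHING:
        if (e == SUBSCRIBE_HEARD) return SOURCING;   /* someone still needs it */
        if (e == PUBLISH_TIMEOUT) return COMMITTING; /* neighborhood is done */
        return PUBLISHING;
    case SOURCING:
        /* After serving its subscribers, re-advertise, then eventually commit. */
        return (e == TRANSFER_DONE) ? PUBLISHING : SOURCING;
    default:
        return s;
    }
}
```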

  30. Current Mote Implementation (cont'd)
  • Retransmissions have higher priority than data packets
  • Duplicate requests are suppressed
  • Nodes keep track of their source's activity with a keepalive timer
    • Solves the NACK 'last packet' problem: if the source dies, the keepalive expiration triggers a broadcast repair request (see the sketch below)
  • A late-joiner mechanism allows motes that have just recovered from failure to participate in the code transfer
    • Requires all nodes to periodically advertise their version
  • Footprint
    • 1000 bytes RAM (with 100-byte packets)
    • 4.5K bytes ROM
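
A sketch of the keepalive idea (the timeout value and helper names are assumptions, not MOAP's actual parameters). The point: a pure NACK scheme cannot detect the loss of the final packet, since no later packet ever arrives to reveal the gap; a timer on source activity closes that hole.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define KEEPALIVE_TIMEOUT_MS 5000u  /* assumed value for illustration */

typedef struct {
    uint32_t last_heard_ms;  /* when we last heard anything from our source */
    bool     transfer_done;
} source_watch_t;

/* Called on every packet (data or control) overheard from the source. */
void keepalive_refresh(source_watch_t *w, uint32_t now_ms)
{
    w->last_heard_ms = now_ms;
}

/* Called periodically. If the source goes silent mid-transfer, we cannot
 * rely on a later packet to expose a gap (the 'last packet' problem), so
 * we broadcast a repair request to the neighborhood instead. */
void keepalive_check(source_watch_t *w, uint32_t now_ms)
{
    if (!w->transfer_done &&
        now_ms - w->last_heard_ms > KEEPALIVE_TIMEOUT_MS) {
        printf("source silent: broadcasting repair request\n");
        /* broadcast_repair_request(first_missing_segment()); -- hypothetical */
    }
}
```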

  31. One critical piece: the bootloader
  • A (slightly) modified version of the Crossbow bootloader
  • Resides at the very end of program memory
  • Very small RAM+ROM footprint
  • Purpose
    • Transfer the entire image from EEPROM into program memory
    • Reboot the mote

  32. Improving MOAP: Diffing
  • Sending the entire image is not the most efficient approach
    • Bug fixes are usually small
  • Solution: use 'diffs' and send only what's new
    • Work done by Rahul Kapur, Tom Yeh and Ujjwal Lahoti
  • How diffing works (see the sketch below)
    • The original source sends out a diff from the previous version
    • Nodes store the diff in EEPROM
    • When the transfer is complete, nodes use the diff to construct the new image in EEPROM
    • The bootloader is then called and the mote reboots
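
The slides don't specify the diff format, so here is a minimal sketch under an assumed copy/insert record scheme: COPY records pull unchanged byte ranges from the old image, INSERT records carry new bytes, and replaying the script in order reconstructs the new image.

```c
#include <stdint.h>
#include <string.h>

/* Assumed diff record format: a script of COPY and INSERT operations. */
typedef enum { OP_COPY, OP_INSERT } diff_op_t;

typedef struct {
    diff_op_t      op;
    uint16_t       len;
    uint16_t       old_offset;  /* valid for OP_COPY */
    const uint8_t *data;        /* valid for OP_INSERT */
} diff_record_t;

/* Replay the diff script over the old image to build the new one.
 * On a mote both images would live in EEPROM; buffers stand in here.
 * Returns the size of the new image. */
size_t apply_diff(const uint8_t *old_img,
                  const diff_record_t *recs, size_t nrecs,
                  uint8_t *new_img)
{
    size_t out = 0;
    for (size_t i = 0; i < nrecs; i++) {
        if (recs[i].op == OP_COPY)
            memcpy(new_img + out, old_img + recs[i].old_offset, recs[i].len);
        else
            memcpy(new_img + out, recs[i].data, recs[i].len);
        out += recs[i].len;
    }
    return out;  /* the bootloader then copies the image into program memory */
}
```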

  33. Diff results

  34. Improving MOAP: Status reports
  • MOAP will eventually reprogram all motes
    • But how do you know when it's done?
    • How do you know which state a mote is in?
  • Status reporting is needed
    • Absolutely necessary for diffs
  • Requirement: a stable tree
    • Keep information about the publisher after reboot
    • Maintenance issues
  • Ability to ask simple questions over <node, version> tuples (see the sketch below)
    • Which node has which version: <*, *>
    • Which nodes have version Y: <*, Y>
    • What version does node X have: <X, *>
    • Does node X have version Y: <X, Y>
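
A minimal sketch of the wildcard matching these queries imply (the reserved ANY value standing in for '*' is an assumed encoding):

```c
#include <stdbool.h>
#include <stdint.h>

#define ANY 0xFFFFu  /* assumed wildcard encoding for '*' */

typedef struct {
    uint16_t node;     /* node id, or ANY */
    uint16_t version;  /* code version, or ANY */
} status_query_t;

/* Does a <node, version> status tuple match a query?
 * <*,*> matches everything; <X,Y> matches exactly one tuple. */
bool query_matches(status_query_t q, uint16_t node, uint16_t version)
{
    return (q.node == ANY || q.node == node) &&
           (q.version == ANY || q.version == version);
}
```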

  35. Improving MOAP: Adding control and selective updates
  • Need to control a node's behavior
    • "Don't use the new version"
  • Need for selective updates
    • "Only nodes X-Z should be updated"
  • Requirement: the ability to route packets from the original sender to the receivers
    • A hard problem in the general case
    • Considerably easier when N is small (~20 or so)
  • Considered solution
    • Discover paths to all nodes using flooding
    • Store this information at the original source
      • Microservers have infinite memory (compared to motes)
    • Use source routing or a path-vector scheme to send packets

  36. MOAP: Conclusion
  • Full binary updates over multiple hops
  • Ripple dissemination reduces energy consumption significantly
  • The sliding-window method and the unicast retransmission policy also reduce energy consumption and complexity
  • Successful updates of images up to 30K in size
  • Next steps
    • Sending diffs instead of the full image
    • Status reports
    • Control and selective updates
    • Routing from source to receivers
