
Network Power Scheduling for wireless sensor networks




  1. Network Power Scheduling for wireless sensor networks. Barbara Hohlt, Intel Communications Technology Lab, Hillsboro, OR. August 9, 2005

  2. Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation

  3. Wireless Sensor Networks • Networks of small, low-cost, low-power devices • Sensing/actuation, processing, wireless communication • Dispersed near phenomena of interest • Self-organize, wireless multi-hop networks • Unattended for long periods of time

  4. Berkeley Motes • Mica • Mica2Dot • Mica2

  5. Mote Layout (figure: example multihop layout with numbered motes) Example Applications • Pursuer-Evader • Environmental Monitoring • Home Automation • Indoor Building Monitoring • Security • Inventory Tracking

  6. Power Consumption • Power consumption limits the utility of sensor networks • Must survive on own energy stores for months or years • 2 AA batteries or 1 Lithium coin cell • Replacing batteries is a laborious task and not possible in some environments • Conserving energy is critical for prolonging the lifetime of these networks

  7. Where the power goes • Main energy draws • Central processing unit • Sensors/actuators • Radio • Radio dominates the cost of power consumption

  8. Radio Power Consumption • Primary cost is idle listening • Time spent listening, waiting to receive packets • Nodes sleep most of the time to conserve energy • Secondary cost is overhearing • Nodes overhear their neighbors' communication • Broadcast medium • Dense networks • Must turn radio off → need a schedule

  9. Flexible Power Scheduling • Flexible Power Scheduling • Reduces radio power consumption • Supports fluctuating demand (multiple queries, aggregates) • Adaptive and decentralized schedules • Improves power savings over approaches used in existing deployments • 4.3X over TinyDB duty cycling • 2–4.6X over GDI low-power listening • High end-to-end packet reception • Reduces contention • Increases end-to-end fairness and yield • Optimized per hop latency

  10. FPS Two-Level Architecture (figure: network power schedule layered over a CSMA MAC) • Coarse-grain scheduling • At the network layer • Planned radio on-off times • Fine-grain CSMA MAC underneath • Reduces contention and increases end-to-end fairness • Distributes traffic • Decouples events from correlated traffic • Reserves bandwidth from source to sink • Does not require perfect schedules or precise time synchronization

  11. Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation

  12. Scheduling Approaches (Approach / Protocol Layer) • Low-power Listening / PHY • S-MAC Scheduled Listening / MAC • TinyDB Duty Cycling / Application • Flexible Power Scheduling / Network

  13. PHY Layer: Low-power Listening (figure: idle listening in low-power mode) • Radio periodically samples channel for incoming packets • Radio remains in low-power mode during idle listening • Fixed channel sample period per deployment • Supports general communication
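A minimal sketch of the low-power listening idea described on this slide. The sample period, the radio-sampling helper, and the sleep helper are assumptions stubbed in for illustration, not a real radio driver API.

```c
/* Low-power listening sketch: the radio wakes only long enough to sample
 * the channel, and stays up only when an incoming transmission is detected.
 * All helpers below are hypothetical stubs. */
#include <stdbool.h>
#include <stdint.h>

#define SAMPLE_PERIOD_MS 100        /* fixed per deployment; value assumed */

static bool radio_channel_busy(void)      { return false; }  /* brief channel sample */
static void radio_receive_packet(void)    { /* stay awake for the packet */ }
static void low_power_sleep(uint32_t ms)  { (void)ms; /* radio in low-power mode */ }

static void lpl_listen_loop(int samples)
{
    for (int i = 0; i < samples; i++) {
        if (radio_channel_busy())
            radio_receive_packet();
        low_power_sleep(SAMPLE_PERIOD_MS);
    }
}

int main(void) { lpl_listen_loop(10); return 0; }
```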

  14. MAC Layer: S-MAC Scheduled Listening • Virtual Clustering: all nodes maintain and synchronize on schedules of their neighborhoods • Data transmitted during the “sleep” period, otherwise radios turned off • Fixed duty cycle per deployment • Supports general communication (figure: frame = listen period [SYN, RTS, CTS] followed by “sleep” period [sleep or send data])
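A minimal sketch of the S-MAC-style frame shown above. The timing constants and radio helpers are assumptions stubbed in for illustration; this is not the actual S-MAC implementation.

```c
/* S-MAC frame sketch: listen for SYN/RTS/CTS, then either exchange data
 * during the "sleep" period or turn the radio off for the rest of the frame.
 * Constants and helpers below are hypothetical. */
#include <stdbool.h>
#include <stdint.h>

#define LISTEN_MS 300               /* fixed duty cycle per deployment; values assumed */
#define SLEEP_MS  2700

static void radio_on(void)  {}
static void radio_off(void) {}
static void sleep_ms(uint32_t ms) { (void)ms; }
static bool rts_cts_handshake(uint32_t ms) { (void)ms; return false; }  /* SYN/RTS/CTS */
static void exchange_data(uint32_t ms)     { (void)ms; }

static void smac_frame(void)
{
    radio_on();
    bool have_traffic = rts_cts_handshake(LISTEN_MS);   /* listen period */
    if (have_traffic) {
        exchange_data(SLEEP_MS);                         /* data sent in "sleep" period */
    } else {
        radio_off();                                     /* otherwise radio off */
        sleep_ms(SLEEP_MS);
    }
}

int main(void) { smac_frame(); return 0; }
```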

  15. Application Layer: TinyDB Duty Cycling • All nodes sleep and wake at the same time every epoch • All transmissions during the waking period • Fixed duty cycle per deployment • Supports a tree topology (figure: waking period within each epoch)

  16. Network Layer: Flexible Power Scheduling • Each node has its own local schedule • During idle time slots the radio is turned off • Schedules adapt continuously over time • Duty cycles are adaptive • Supports a tree topology (figure: per-node schedules across cycles)

  17. Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation

  18. Assumptions • Sense-to-gateway applications • Multihop network • Majority of traffic is periodic • Nodes are sleeping most of the time • Available bandwidth >> traffic demand • Routing component

  19. The power schedule (figure: one cycle of slots over time, e.g. T I I R I T) • Time is divided into cycles • Each cycle is divided into slots • Each node maintains a local power schedule of what operations it performs over a cycle • T – Transmit, R – Receive, I – Idle
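A minimal sketch of what such a local power schedule could look like in code, assuming a six-slot cycle matching the T I I R I T example above; the radio hooks are stubs, not the mote's actual driver interface.

```c
/* Per-node power schedule sketch: the radio is powered only in Transmit or
 * Receive slots; idle slots keep it off. Cycle length and hooks are assumed. */
#include <stdint.h>

#define SLOTS_PER_CYCLE 6                 /* assumed; matches the T I I R I T example */

typedef enum { SLOT_IDLE, SLOT_TRANSMIT, SLOT_RECEIVE } slot_state_t;

/* Local schedule for one cycle: T I I R I T. */
static slot_state_t schedule[SLOTS_PER_CYCLE] = {
    SLOT_TRANSMIT, SLOT_IDLE, SLOT_IDLE, SLOT_RECEIVE, SLOT_IDLE, SLOT_TRANSMIT
};

static void radio_on(void)  {}            /* stubs for the mote's radio driver */
static void radio_off(void) {}

/* Called at every slot boundary. */
static void on_slot_boundary(uint16_t slot)
{
    if (schedule[slot % SLOTS_PER_CYCLE] == SLOT_IDLE)
        radio_off();
    else
        radio_on();
}

int main(void)
{
    for (uint16_t s = 0; s < 2 * SLOTS_PER_CYCLE; s++)   /* simulate two cycles */
        on_slot_boundary(s);
    return 0;
}
```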

  20. Scheduling flows • Schedule entire flows (not packets) • Make reservations based on traffic demand • Bandwidth is reserved from source to sink • (and partial flows from source to destination) • Reservations remain in effect indefinitely and can adapt over time

  21. Adaptive Scheduling (figure: local schedule T I I R I T with local state: supply, demand) • Demand represents how many messages a node seeks to forward each cycle • Supply is reserved bandwidth • The network keeps some preallocated bandwidth in reserve • Changes in reservations percolate up the network tree

  22. Supply and Demand (figure: supply and demand over cycles) • If supply < demand • Request a reservation • If CONF received -> increment supply • If supply >= demand • Offer a reservation • If REQ received -> increment demand • For the purposes of this example, one unit of demand counts as one message per cycle.
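A minimal sketch of the supply/demand rules on this slide, run once per cycle. The message helpers and the initial demand value are assumptions stubbed in for illustration.

```c
/* Supply/demand adaptation sketch: request bandwidth while supply lags
 * demand, otherwise offer spare bandwidth downstream.
 * One unit of demand = one message per cycle. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

static uint8_t supply;                    /* bandwidth reserved with the parent */
static uint8_t demand = 2;                /* messages to forward per cycle (example value) */

/* Stubs standing in for the ADV/REQ/CONF exchange. */
static bool request_reservation(void)   { return true; }   /* true = CONF received */
static void advertise_reservation(void) {}
static bool child_requested_slot(void)  { return false; }  /* true = REQ received */

static void adapt_schedule(void)
{
    if (supply < demand) {
        if (request_reservation())
            supply++;                     /* CONF -> increment supply */
    } else {
        advertise_reservation();
        if (child_requested_slot())
            demand++;                     /* REQ -> increment demand */
    }
}

int main(void)
{
    for (int cycle = 0; cycle < 4; cycle++) {
        adapt_schedule();
        printf("cycle %d: supply=%u demand=%u\n", cycle, supply, demand);
    }
    return 0;
}
```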

  23. Reduced Latency: Sliding Reservation Window (figure: slots 0-5 with a reservation window of size w) Using only local information, the next Receive slot is always within w slots of the next Transmit slot, putting an upper bound on the per-hop latency of the network.
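A minimal sketch of one way the window constraint could be applied when choosing a Receive slot: the new slot is taken from the w slots preceding the next Transmit slot, so a received packet can be forwarded within at most w slots. The cycle length, window size, and example schedule are assumptions.

```c
/* Sliding reservation window sketch: pick an idle slot at most w slots
 * before the given Transmit slot, wrapping around the cycle. */
#include <stdint.h>
#include <stdio.h>

#define SLOTS_PER_CYCLE 6
#define WINDOW_SIZE     2                           /* w */

/* 1 = slot is idle and may be reserved (example schedule). */
static int idle[SLOTS_PER_CYCLE] = { 0, 1, 1, 0, 1, 0 };

static int choose_receive_slot(uint16_t tx_slot)
{
    for (uint16_t back = 1; back <= WINDOW_SIZE; back++) {
        uint16_t candidate = (tx_slot + SLOTS_PER_CYCLE - back) % SLOTS_PER_CYCLE;
        if (idle[candidate])
            return candidate;                       /* within w of the Transmit slot */
    }
    return -1;                                      /* no idle slot in the window */
}

int main(void)
{
    printf("receive slot before Tx slot 5: %d\n", choose_receive_slot(5));
    return 0;
}
```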

  24. Receiver-Initiated Scheduling: Joining Protocol • Periodically nodes advertise available bandwidth • A node joining the network listens for advertisements and sends a request • Thereafter it can increase/decrease its demand during scheduled time slots (figure: the receiver broadcasts ADV, the joiner sends REQ, the receiver replies with CONF)
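A minimal sketch of the ADV/REQ/CONF joining handshake from the joiner's side. The message types, fields, and send/receive helpers are hypothetical stand-ins, not the real FPS packet format.

```c
/* Joining protocol sketch: listen for an advertisement, request the
 * advertised slot, treat the confirmation as the first reserved slot. */
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

typedef enum { MSG_ADV, MSG_REQ, MSG_CONF } msg_type_t;

typedef struct {
    msg_type_t type;
    uint16_t   src;        /* sender id */
    uint16_t   slot;       /* advertised / requested / confirmed slot */
} fps_msg_t;

/* Stub: pretend we hear an ADV for slot 3, then a CONF for it. */
static bool recv_msg(fps_msg_t *m)
{
    static int step = 0;
    if (step == 0) { *m = (fps_msg_t){ MSG_ADV,  1, 3 }; step++; return true; }
    if (step == 1) { *m = (fps_msg_t){ MSG_CONF, 1, 3 }; step++; return true; }
    return false;
}
static void send_msg(const fps_msg_t *m) { (void)m; }

static int join_network(uint16_t my_id)
{
    fps_msg_t m;
    while (recv_msg(&m)) {
        if (m.type == MSG_ADV) {
            fps_msg_t req = { MSG_REQ, my_id, m.slot };
            send_msg(&req);                          /* request the advertised slot */
        } else if (m.type == MSG_CONF) {
            return m.slot;                           /* reserved slot granted */
        }
    }
    return -1;
}

int main(void) { printf("joined, reserved slot %d\n", join_network(7)); return 0; }
```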

  25. Receiver-Initiated Scheduling: Reservation Protocol • Periodically advertise available bandwidth • Nodes increase/decrease their demand during scheduled time slots • No idle listening (figure: the receiver broadcasts ADV, the sender sends REQ, the receiver replies with CONF)

  26. Properties of supply/demand • All network changes cast as demand • Joining • Failure • Lossy link • Multiple queries • Mobility • 3 classes of node • Router and application • Router only • Application only • Load balancing

  27. Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation

  28. Implementation • HW • Mica • Mica2Dot • Mica2 • SW • Slackers • TinyDB/FPS (Twinkle) • GDI/FPS (Twinkle)

  29. Architecture (figure: components include Application, Flexible Power Scheduling, Multihop Routing, BufferManagement, RandomMLCG, TimeSync, PowerManagement, SendQueues, Active Messages, MAC/PHY) • Radio power scheduling • Manages send queues • Provides buffer management

  30. Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation

  31. Micro Benchmarks Mica • Power Consumption • Fairness and Yield • Contention

  32. Power consumption (figure: 3-hop chain of nodes 3-2-1-0, source to gateway) • 4 TinyOS Mica motes • 3-hop network • Node 3 sends one 36-byte packet per cycle • Measure the current at node 2

  33. Slackers: early experiment on Mica, 5X power savings (figure: current in mA vs. time in seconds, average 1.4 mA)

  34. Mica Experiments: Scheduled (FPS) vs Unscheduled (Naïve) • 10 Mica motes plus base station • 6 motes send 100 messages across 3 hops • One message per cycle (3200 ms) • Begin with injected start message • Repeat 11 times • Two topologies • Single Area: one 8' x 3'4" area • Multiple Area: five areas, motes are 9'-22' apart

  35. End-to-end Fairness and Yield (figure: FPS vs. Naïve)

  36. Contention is Reduced

  37. Outline • Introduction • Radio Scheduling • FPS Overview • Implementation • Micro Benchmarks • Application Evaluation

  38. Application Evaluation • TinyDB/FPS vs TinyDB/duty cycling • 4.3X power savings • Multiple queries • Partial flows • Query dissemination • Aggregation • GDI/FPS vs GDI/LPL • 2–4.6X power savings • Up to 23% increase in yield

  39. Evaluation with TinyDB • Two implementations • TinyDB Duty Cycling • TinyDB FPS • Current Consumption Analysis • Berkeley Botanical Gardens Model • Acknowledgment: Sam Madden

  40. TinyDB Redwood Deployment • 1/3 of nodes two hops • 2/3 one hop • 2 trees • 35 nodes (figure: deployment topology with base station and numbered nodes)

  41. 3-Step Methodology • Estimate radio-on time for TinyDB/DC and TinyDB/FPS • No power management → 3600 sec/hour • For FPS, validate the estimate at one mote with an experiment • Use Mica current measurements to estimate current consumption

  42. TinyDB Duty Cycling 24 samples/hour * 4 sec/sample = 96 sec/hour of radio-on time (figure: 4-second waking period every 2.5 minutes) All nodes wake up together for 4 seconds every 2.5 minutes. During the waking period nodes exchange messages and take sensor readings. Outside the waking period the processor, radio, and sensors are powered down.

  43. Flexible Power Scheduling 24 samples/hour * 0.767 sec/cycle = 18.4 sec/hour of radio-on time (figure: nodes 0-3 with traffic, communication, and broadcast links) • Node 1: 2 T, 3 A • Node 2: 3 T, 2 R, 3 A • Node 3: 2 T, 3 A • 18 slots = 5 (node 1) + 8 (node 2) + 5 (node 3) • 18 slots * 128 ms = 2.3 sec/cycle for 3 nodes, or 0.767 sec/cycle per node
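A quick check of the radio-on-time arithmetic on this slide; all figures are taken from the slide, and small rounding differences are expected.

```c
/* Radio-on time per node under FPS, from the slot counts above. */
#include <stdio.h>

int main(void)
{
    int    slots        = 5 + 8 + 5;                     /* nodes 1, 2, and 3 */
    double slot_ms      = 128.0;
    double per_3_nodes  = slots * slot_ms / 1000.0;      /* ~2.3 sec/cycle for 3 nodes */
    double per_node     = per_3_nodes / 3.0;             /* ~0.767 sec/cycle per node */
    double per_hour     = 24 * per_node;                 /* 24 samples/hour */

    printf("%.3f s/cycle per node, %.1f s/hour radio-on time\n", per_node, per_hour);
    return 0;
}
```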

  44. FPS Validation

  45. Current Consumption mA-seconds per hour = (on time) * (on draw) + (off time) * (off draw) • TinyDB/Duty Cycling (Mica1): 803 mA-s/hr = 96 s/hr * 8 mA + 3504 s/hr * 0.01 mA • TinyDB/FPS (Mica1): 183 mA-s/hr = 18.4 s/hr * 8 mA + 3582 s/hr * 0.01 mA • 4.39X savings
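A quick check of the current-consumption comparison on this slide, using the slide's Mica1 draws (8 mA radio on, 0.01 mA off) and the radio-on times estimated above.

```c
/* mA-seconds per hour = (on time)*(on draw) + (off time)*(off draw) */
#include <stdio.h>

static double ma_seconds_per_hour(double on_s, double on_ma, double off_ma)
{
    return on_s * on_ma + (3600.0 - on_s) * off_ma;
}

int main(void)
{
    double duty_cycling = ma_seconds_per_hour(96.0, 8.0, 0.01);   /* ~803 mA-s/hr */
    double fps          = ma_seconds_per_hour(18.4, 8.0, 0.01);   /* ~183 mA-s/hr */

    printf("TinyDB/Duty Cycling: %.0f mA-s/hr\n", duty_cycling);
    printf("TinyDB/FPS:          %.0f mA-s/hr\n", fps);
    printf("savings:             %.2fX\n", duty_cycling / fps);
    return 0;
}
```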

  46. Evaluation with GDI • Two implementations • GDI Low-Power Listening • GDI FPS • Experiments • Yield • Power Measurements • Power Consumption • Acknowledgement: Rob Szewczyk

  47. MAC Layer: GDI Low-Power Listening Each node wakes up periodically to sample the channel for traffic and goes right back to sleep if there is nothing to be received.

  48. 12 Experiments (Mica2Dot) • 30 Mica2Dot in-lab testbed • 3 sets: GDI/lpl100, GDI/lpl485, GDI/Twinkle • 4 sample rates: 30 seconds, 1 minute, 5 minutes, 20 minutes

  49. Yield and Fairness

  50. Measured Power Consumption (figures: sample periods of 5 minutes and 20 minutes)
