
An Adaptive Link Layer for Range Diversity in Multi-radio Mobile Sensor Networks






Presentation Transcript


  1. An Adaptive Link Layer for Range Diversity in Multi-radio Mobile Sensor Networks Jeremy Gummeson, Deepak Ganesan, Mark D. Corner, Prashant Shenoy

  2. Mobile Sensor Networks • Mobile entities equipped with sensors and radios • Exchange data with peer mobile nodes and an infrastructure basestation • A high-power, long-range radio maximizes communication opportunities, but is expensive at short range • Mobility patterns are difficult to predict • Tracking applications require a small form factor

  3. A Spectrum of Radio Choices • Existing radios are optimized for either short or long range: • Long range, low bit rate, low energy efficiency • Short range, high bit rate, higher energy efficiency • The designer must choose efficiency or range [Table: Common Small Form Factor Radios]

  4. Approach • Design a node with heterogeneous radios to exploit short range efficiency and long range connectivity • Use unified link layer to manage radios and react to channel and mobility dynamics

  5. Contributions Our system makes the following contributions to mobile multi-radio sensor research: • Arthropod: a low-power, multi-radio sensor platform • A machine-learning algorithm that uses link-layer statistics to select between radio interfaces • A multi-radio switching protocol that provides robust transitions and manages radio state

  6. Outline • Motivation • System Design • Implementation • Results • Conclusions

  7. Arthropod: A Multi-Radio Sensor Platform • Hardware platform consists of an MSP430 MCU, a CC2420 radio, and an XE1205 radio • An expansion board adds the CC2420 radio to the existing Tinynode platform • The board connects the CC2420 to an unused SPI bus and GPIO pins; existing TinyOS-2.x drivers were modified for the new hardware [Figures: hardware prototype; system block diagram — Application over Unified Link Layer over CC2420 MAC / XE1205 MAC over CC2420 Radio / XE1205 Radio]

  8. Utilizing Multiple Radios • Problem: need to determine the energy-optimal radio at a given time • Approach: a unified link layer presents multiple radios as one entity • Two subcomponents: • Q-Learning algorithm: observe MAC retransmissions, learn and choose the optimal radio interface • Switching protocol: manage radio power states, coordinate handoffs [Figure: send/receive paths flow through the switching protocol and the Q-Learning algorithm's decision down to the CC2420 and XE1205 MACs]

  9. Q-Learning • Goal: choose the action a that leads to the state with the maximal Q value • In the multi-radio context, Q represents the learned energy needed to send a packet on a given interface at a particular power level • a represents the decision to send a packet using a particular interface/power combination • After transmission, receive a reward r, where i is the number of retransmissions: r[i] = -(i*PacketSize*ByteTime*TxPower + AckTimeOut*RxPower) + RxPower*AckRTT + PacketSize*ByteTime*TxPower • r is used to update Q with a simple rule with fixed parameters • Periodically explore alternate interfaces/power levels by choosing a random action a; this allows transitions when conditions improve
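The selection loop above can be sketched in a few lines. This is a minimal sketch, not the paper's implementation: the action list, power-level values, and the ALPHA/GAMMA/EPSILON settings are illustrative assumptions.

```python
import random

# Illustrative sketch of the Q-learning radio selector.
# ACTIONS, power levels, and ALPHA/GAMMA/EPSILON are assumed values.
ACTIONS = [("cc2420", 0), ("cc2420", 31), ("xe1205", 0), ("xe1205", 5)]
ALPHA, GAMMA, EPSILON = 0.5, 0.0, 0.05   # GAMMA=0 treats each packet independently
Q = {a: 0.0 for a in ACTIONS}            # learned (negative) energy per action

def choose_action():
    """Epsilon-greedy: occasionally explore an alternate interface/power level."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(Q, key=Q.get)             # otherwise exploit the best-known action

def update(action, r):
    """Fixed-parameter update of Q from the per-packet reward r."""
    Q[action] = (1 - ALPHA) * Q[action] + ALPHA * (r + GAMMA * max(Q.values()))
```

Because rewards are negative energy, a lossy interface accumulates a lower Q value and stops being chosen; the occasional random action lets the learner notice when the other interface has become cheaper.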

  10. Multi-Radio Switching Protocol • Q-Learning finds the optimal interface/power level; a handoff between radios is still needed • Non-trivial problem: radio transitions occur during periods of high loss • Need to handle: • State synchronization problems between sender and receiver • Graceful disconnections • Solution: • Embed control flags that negotiate handoffs • The handoff state temporarily powers both radio receivers; minimize time spent in handoff to minimize overhead

  11. Switching Protocol Description • The sending node drives state transitions at the receiver: • Asserting the EXPLORE flag in a sent packet causes both radio interfaces to become active until a timeout • Consecutive packets may be sent on either interface; continuously asserting EXPLORE keeps both interfaces active • Alternatively, the next packet may be sent with the HIGH_ON or LOW_ON flag asserted to commit the receiver to one particular interface • Two consecutive timeouts force the receiver into Low Power Listen (LPL) on the long-range interface; the sender may proactively enter LPL by asserting END_BLOCK [Figure: receiver state diagram — Idle, Wakeup, Handoff, High On, Low On; transitions on EXPLORE, HIGH_ON, LOW_ON, END_BLOCK, and timeouts]
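The receiver-side rules above form a small state machine. The sketch below is an assumption-laden simplification (state names, flag strings, and timeout handling are condensed from the slide's diagram, not taken from the implementation):

```python
class Receiver:
    """Simplified receiver state machine; the sender's packet flags drive it."""

    def __init__(self):
        self.state = "idle"        # Low Power Listen on the long-range radio
        self.timeouts = 0

    def on_packet(self, flag):
        self.timeouts = 0          # any received packet resets the timeout count
        if flag == "EXPLORE":
            self.state = "handoff" # both radio receivers powered until timeout
        elif flag == "HIGH_ON":
            self.state = "high_on" # commit to the short-range interface
        elif flag == "LOW_ON":
            self.state = "low_on"  # commit to the long-range interface
        elif flag == "END_BLOCK":
            self.state = "idle"    # proactively return to LPL

    def on_timeout(self):
        self.timeouts += 1
        if self.timeouts >= 2:     # two consecutive timeouts force LPL
            self.state = "idle"
```

Keeping both receivers powered only in the short-lived handoff state is what bounds the energy overhead of a switch.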

  12. Evaluation Methodology • Trace-driven simulations using real datasets: • Results from software implementation: • Show performance of link layer software implementation • Validate simulated link layer performance for indoor continuous dataset

  13. Trace Driven Simulation Results [Charts: fraction of lost packets; per-packet energy consumption] • The multi-radio approach improves per-packet energy consumption while only marginally increasing packet loss

  14. Multi-Radio Power Control Results • An additional simulation looks at power control across radios: • The data set uses max/min Tx power settings on each radio [Charts: summary of results for each power level; cumulative energy consumption for single- and multi-power-level strategies] • The unified link layer successfully tracks the energy-minimal radio/power setting

  15. Implementation Loss Rates and Energy/Packet • The TinyOS-2.x software implementation for Arthropod shows the algorithm running online and measures the performance of the radio switching protocol • We recreate the mobility pattern of the indoor continuous trace; implementation results are compared to single-radio performance from the indoor continuous dataset [Table: summary of implementation results] • The multi-radio implementation loses more packets but consumes substantially less energy

  16. Breakdown of Receiver Energy Costs [Chart: energy spent in different Rx states] • The multi-radio approach uses significantly less power than an XE1205-only implementation; its loss rate is comparable to the CC2420

  17. Conclusions • Showed a hardware implementation of the multi-radio sensor node Arthropod • Designed and tested a unified link layer for multi-radio hardware: • Uses a learning algorithm and MAC statistics to select the radio interface • Implemented a switching protocol to hand off between radios • Evaluated the link layer via trace-driven simulation and the algorithm running online: • Considerably more energy efficient for different mobility patterns, while only marginally increasing losses

  18. Related Work Existing Multi-Radio Systems: Separate Radio Roles: • Wake-On-Wireless: a low-power, low-bandwidth radio wakes up a high-power, high-bandwidth radio (Agarwal, 2007) • DieselNet Throwboxes: a long-range radio maximizes the utility of a short-range, high-bandwidth radio in a mobile scenario (Banerjee, 2007) Dynamic Radio Selection: • Mobile Access Router: uses heterogeneous radios to maximize bandwidth and minimize stalled transfers; neglects energy (Rodriguez, 2004) • Coolspots: uses Bluetooth for communication when available, otherwise uses 802.11 (Pering, 2006) Mesh Networking: • MR-LQSR: uses multiple radios per mesh node to make channel assignment more effective (Draves, 2004)

  19. Thank You Questions?

  20. Sender State Machine • States represent the sender's view of the receiver • An intermediate handoff state is used to activate the alternate radio • Transitioning out of IDLE requires a wakeup packet • The receiver keeps both radios active during handoff

  21. Receiver State Machine • Used to manage radio receiver power states • Flags are used to coordinate handoffs between radios • Two consecutive timeouts result in a transition to the IDLE state • The receiver may proactively switch to IDLE at the end of a block transfer

  22. Research Contributions • A prototype low-power multi-radio hardware system • Low-overhead techniques for dynamically switching between radio interfaces • Evaluation methodologies showing the energy performance benefits of multi-radio systems

  23. Current Strategies • Communication is expensive! • Use communication resources intelligently: • Minimize radio time spent in active mode • Send data when channel conditions are “good”

  24. Q-Learning • Q-Learning is a reinforcement-learning technique used for decision-making by agents in an unknown environment: • A matrix Q contains the reward accumulated by an agent in a given state • The agent has several choices of action and chooses the action a such that the Q-value of the arrival state is maximized • After taking action a, the agent receives a reward r and adjusts Q with an update rule defined by parameters α and γ • The agent will also periodically take a random action, with probability ε, which allows unexplored states to be reached [Figure: formal definition of Q-Learning]
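For reference, the textbook update rule this slide alludes to can be written out directly; the α and γ defaults below are arbitrary illustrative values, and the one-step example in the usage note assumes a toy two-state problem.

```python
# Standard Q-learning update: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
def q_update(Q, s, a, r, s_next, alpha=0.5, gamma=0.9):
    """Apply one Q-learning update for taking action a in state s,
    receiving reward r, and landing in state s_next."""
    best_next = max(Q[s_next].values()) if Q[s_next] else 0.0
    Q[s][a] += alpha * (r + gamma * best_next - Q[s][a])
    return Q[s][a]
```

For example, with Q["s0"]["a"] = 0, Q["s1"]["a"] = 1, reward 2, α = 0.5, and γ = 0.9, one update from s0 to s1 yields 0.5 · (2 + 0.9 · 1) = 1.45.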

  25. More Q-Learning • In the context of a multi-radio system: • Each state S is an individual radio/power-level combination • An action a corresponds to sending a packet over a given radio interface • The reward r corresponds to the negative energy used for sending the packet; the amount of energy used is determined by a combination of radio hardware characteristics and channel dynamics • Q represents cumulative energy consumption across multiple transmission attempts; α and γ control how quickly Q is updated and limit the reward value r for staying in a given state • ε defines when the alternate radio interface should be explored; in a multi-radio scenario it does not make sense to take a purely random action, since exploration simply means periodically trying the alternate interface

  26. Defining the reward value r The success of the Q-Learning algorithm depends heavily on r: • r is defined as the energy required to send a packet; energy is calculated from MAC-layer statistics • The following equation shows how a reward is calculated, where i is the number of packet retransmissions: r[i] = -(i*PacketSize*ByteTime*TxPower + AckTimeOut*RxPower) + RxPower*AckRTT + PacketSize*ByteTime*TxPower • A radio-agnostic quantity, energy, allows head-to-head comparison of performance across radios; maximizing Q is synonymous with minimizing energy • Congestion backoffs also contribute to power consumption, but their contribution is negligible in practice
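As a sanity check, the reward equation above can be transcribed directly into code. The parameter names mirror the slide's symbols; no real radio constants are assumed (any numbers used with it are placeholders):

```python
# Direct transcription of the slide's reward equation.
# i: number of retransmissions; all other parameters are the slide's symbols.
def reward(i, packet_size, byte_time, tx_power, rx_power, ack_timeout, ack_rtt):
    """Per-packet reward r[i]; more retransmissions yield a lower reward."""
    return (-(i * packet_size * byte_time * tx_power + ack_timeout * rx_power)
            + rx_power * ack_rtt
            + packet_size * byte_time * tx_power)
```

Since i appears only in the negated term, each extra retransmission subtracts one packet's transmit energy from the reward, which is what steers the learner away from a lossy interface.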
