
Lecture #3: Wireless Comm. for ENS - Part II The Higher Layers



  1. Lecture #3: Wireless Comm. for ENS - Part II: The Higher Layers

  2. Reading List for this Lecture: Routing
  GENERAL
  • Jamal Al-Karaki and Ahmed Kamal, “Routing Techniques in Wireless Sensor Networks: A Survey,” IEEE Wireless Communications, pp. 7-28, December 2004. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Al-Karaki04_IEEEWC.pdf
  DIFFUSION
  • Fabio Silva, John Heidemann, Ramesh Govindan, and Deborah Estrin, “Directed Diffusion,” in Frontiers in Distributed Sensor Networks, S. S. Iyengar and R. R. Brooks, editors, CRC Press, Boca Raton, Florida, USA, October 2003. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Silva04_Artech.pdf
  • Chalermek Intanagonwiwat, Ramesh Govindan, Deborah Estrin, John Heidemann, and Fabio Silva, “Directed Diffusion for Wireless Sensor Networking,” ACM/IEEE Transactions on Networking, 11(1), pp. 2-16, February 2002. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Intanagonwiwat02_ToN.pdf
  • John Heidemann, Fabio Silva, Chalermek Intanagonwiwat, Ramesh Govindan, Deborah Estrin, and Deepak Ganesan, “Building Efficient Wireless Sensor Networks with Low-Level Naming,” Proceedings of the Symposium on Operating Systems Principles, pp. 146-159, Chateau Lake Louise, Banff, Alberta, Canada, ACM, October 2001. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Heidemann01_SOSP.pdf
  • John Heidemann, Fabio Silva, and Deborah Estrin, “Matching Data Dissemination Algorithms to Application Requirements,” ACM SenSys 2003. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Heidemann03_SenSys.pdf
  OTHER ROUTING
  • Badri Nath and Dragos Niculescu, “Routing on a Curve,” First Workshop on Hot Topics in Networks (HotNets-I), October 28-29, Princeton, NJ, 2002. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Nath02_HotNets.pdf
  • Philip Levis, Neil Patel, David Culler, and Scott Shenker, “Trickle: A Self-Regulating Algorithm for Code Propagation and Maintenance in Wireless Sensor Networks,” Proceedings of the First USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI 2004). http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Levis04_NSDI.pdf

  3. Reading List for this Lecture: Routing in Intermittently Connected Networks
  • A. Lindgren, A. Doria, O. Schelen, “Probabilistic Routing in Intermittently Connected Networks,” Proceedings of the First International Workshop on Service Assurance with Partial and Intermittent Resources (SAPIR 2004), August 2004. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Lindgren04_SAPIR.pdf
  • A. Vahdat and D. Becker, “Epidemic Routing for Partially-Connected Ad Hoc Networks,” Tech. Rep. CS-2000-06, Duke University, 2000. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Vahdat00_Duke.pdf
  • Zygmunt Haas, Joseph Y. Halpern, Li Li, “Gossip-Based Ad Hoc Routing,” IEEE Infocom 2002. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Haas02_Infocom.pdf
  • Kevin Fall, “A Delay Tolerant Networking Architecture for Challenged Internets,” ACM Sigcomm 2003. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Fall03_Sigcomm.pdf
  • S. Jain, K. Fall, R. Patra, “Routing in a Delay Tolerant Network,” ACM Sigcomm, Aug/Sep 2004. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Jain04_Sigcomm.pdf
  • Melissa Ho and Kevin Fall, “Poster: Delay Tolerant Networking for Sensor Networks,” First IEEE Conference on Sensor and Ad Hoc Communications and Networks (SECON 2004), October 2004. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Ho04_SECON.pdf

  4. Reading List for this Lecture: Reliable Transport
  • Chieh-Yih Wan, Andrew T. Campbell, and Lakshman Krishnamurthy, “PSFQ: A Reliable Transport Protocol For Wireless Sensor Networks,” First ACM International Workshop on Wireless Sensor Networks and Applications (WSNA 2002), Atlanta, September 28, 2002. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Wan02_WSNA.pdf
  • Fred Stann and John Heidemann, “RMST: Reliable Data Transport in Sensor Networks,” Proceedings of the First International Workshop on Sensor Net Protocols and Applications, pp. 102-112, Anchorage, Alaska, USA, IEEE, April 2003. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Stann03_SNPA.pdf
  • Y. Sankarasubramaniam, O. B. Akan, I. F. Akyildiz, “ESRT: Event-to-Sink Reliable Transport in Wireless Sensor Networks,” Proc. ACM MobiHoc 2003, pp. 177-188, Annapolis, Maryland, USA, June 2003. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Sankarasubramaniam03_MobiHoc.pdf
  • Chieh-Yih Wan, Shane B. Eisenman, and Andrew T. Campbell, “CODA: Congestion Detection and Avoidance in Sensor Networks,” ACM SenSys 2003, November 2003. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Wan03_SenSys.pdf
  • Bret Hull, Kyle Jamieson, Hari Balakrishnan, “Techniques for Mitigating Congestion in Sensor Networks,” ACM SenSys 2004, November 2004. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Hull04_SenSys.pdf
  • Cheng Tien Ee and Ruzena Bajcsy, “Congestion Control and Fairness for Many-to-One Routing in Sensor Networks,” ACM SenSys 2004, November 2004. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Ee04_SenSys.pdf

  5. Reading List for this Lecture: Architecture
  • Joseph Polastre, Jonathan Hui, Philip Levis, Jerry Zhao, David Culler, Scott Shenker, Ion Stoica, “A Unifying Link Abstraction for Wireless Sensor Networks,” Proceedings of the Third ACM Conference on Embedded Networked Sensor Systems (SenSys), November 2-4, 2005. http://nesl.ee.ucla.edu/courses/ee202b/2006s/papers/L03/Polastre05_SenSys.pdf

  6. Traditional Networked Systems • Well established layers of abstraction • Strict boundaries • Ample resources • Independent applications at endpoints communicate pt-pt through routers • Well attended [Figure: the traditional layered stack (Application, Transport, Network, Data Link, Physical) alongside host abstractions (User/System, Threads, Address Space, Files, Drivers), with routers in the network] Ack: Culler, MobiHoc ‘05

  7. By comparison… • Highly constrained resources • processing, storage, bandwidth, power • Applications spread over many small nodes • self-organizing collectives • highly integrated with a changing environment and network • communication is fundamental • Concurrency intensive in bursts • streams of sensor data and network traffic • Robust • inaccessible, critical operation • Unclear where the boundaries belong and what the abstractions are • even the HW/SW boundary will move Ack: Culler, MobiHoc ‘05

  8. Vast Networks of Tiny Devices • Past 25 years of internet technology built up around powerful dedicated devices that are carefully configured and very stable • local high-power wireless subnets at the edges • 1-1 communication between named computers • Here, ... • every little node is potentially a router • work together in application specific ways • collections of data defined by attributes • connectivity is highly variable • must self-organize to manage topology, routing, etc • and for power savings, radios may be off 99% of the time Ack: Culler, MobiHoc ‘05

  9. Routing

  10. Common Communication Patterns • Internet • Many independent pt-pt streams • Parallel Computing • Shared objects • Message patterns (any, grid, n-cube, tree) • Collective communications • Broadcast, Grid, Permute, Reduce • Sensor Networks • Dissemination (broadcast & epidemic) • Collection • Aggregation • Tree-routing • Neighborhood • Point-point • Disseminate the query (eventual consistency), then collect (aggregate) the results [Reference: “The Emergence of Networking Abstractions and Techniques in TinyOS,” Philip Levis, Sam Madden, David Gay, Joseph Polastre, Robert Szewczyk, Alec Woo, Eric Brewer, and David Culler, NSDI ’04] Ack: Culler, MobiHoc ‘05

  11. The Basic Primitive • Transmit a packet • Received by a set of nodes • Dynamically determined • Depends on physical environment at the time • What other communication is on-going • Each selects whether to retransmit • Potentially after modification • And if so, when Ack: Culler, MobiHoc ‘05

  12. Routing Mechanism • Upon each transmission, one of the recipients retransmits • determined by the source, by the receiver, by … • on the ‘edge of the cell’ Ack: Culler, MobiHoc ‘05

  13. The Most Basic Neighborhood • Direct Reception • Non-isotropic • Large variation in affinity • Asymmetric links • Long, stable high quality links • Short bad ones • Varies with traffic load • Collisions • Distant nodes raise noise floor • Reduce SNR for nearer ones • Many poor “neighbors” • Good ones mostly near, some far Ack: Culler, MobiHoc ‘05

  14. Which node do you route through? Ack: Culler, MobiHoc ‘05

  15. What does this mean? • Always routing through nodes “at the hairy edge” • Wherever you set the threshold, the most useful node will be close to it • The underlying connectivity graph changes when you use it • More connectivity when less communication • Discovery must be performed under load Ack: Culler, MobiHoc ‘05

  16. Conventional Routing in MANETs • Proactive, e.g. DSDV • Bellman-Ford distance-vector routing algorithm • Each node maintains a routing table with an entry for each destination in the network • Overhead: control traffic • Optimization: incremental updates • Reactive/on-demand, e.g. DSR, AODV • Route discovery: the source node broadcasts a route request packet (RREQ); the destination (or an intermediate node with a recent route) sends a route reply packet (RREP) • Different route metrics: # of hops, earliest, link quality, etc. • In DSR, packets carry the route • AODV introduces routing tables along with sequence numbers • Dynamics • Paths disrupted by node movement • Trigger route refreshes upstream when a downstream neighbor moves (a sketch of on-demand route discovery follows)
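To make the reactive route-discovery step concrete, here is a minimal Python sketch under simplifying assumptions: the topology, node names, and helper function are invented for illustration, and the real DSR/AODV protocols add sequence numbers, RREP unicasting, route caching, and route maintenance that are omitted here. The core idea shown is flooding an RREQ with duplicate suppression while recording the path it took.

```python
# Hedged sketch of reactive (on-demand) route discovery in the spirit of DSR/AODV.
from collections import deque

def discover_route(graph, source, dest):
    """Breadth-first flood of an RREQ; returns the first path discovered."""
    seen = {source}                      # duplicate suppression: each node forwards once
    queue = deque([[source]])            # each queue entry is the path the RREQ has taken
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == dest:
            return path                  # destination would unicast an RREP back along this path
        for nbr in graph[node]:          # "rebroadcast" to all radio neighbors
            if nbr not in seen:
                seen.add(nbr)
                queue.append(path + [nbr])
    return None                          # no route found (partitioned network)

# Invented example topology (adjacency lists model radio connectivity).
topology = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "C"],
            "C": ["A", "B", "D"], "D": ["C"]}
print(discover_route(topology, "S", "D"))   # ['S', 'A', 'C', 'D']
```

In DSR the returned path would travel inside each data packet as a source route; in AODV the RREP would instead install a next-hop entry at every node along the reverse path.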

  17. GRAB: Field-Based Minimum Cost Forwarding (Lu et al., 2002) • Each node broadcasts only once • Cost function • A measure of how expensive it is to get a message back to the sink • Could be based on: • Energy needed in radio communication • Hop count • … • Node cost • Each node keeps a best estimate of its minimum cost • Estimate updated upon receipt of every ADV message • ADV message forwarding deferred for a time proportional to the node's cost estimate

  18. ADV Dissemination Example • Signal strength is used to measure cost. • B sees strong signal and judges cost to be 1. • C sees weak signal and judges cost to be 3.

  19. ADV Dissemination Example contd. • Because B has a smaller cost, it defers for a shorter time than C. • C updates its cost to 2 and restarts its deferral timer. • Each node ends up with the optimal cost using a minimum number of broadcasts. (A sketch of this cost-field construction follows.)
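One way to see why deferring the ADV rebroadcast in proportion to the current cost estimate yields the minimum-cost field with one broadcast per node: the deferral timers effectively order transmissions by cost. The centralized Python sketch below reproduces the slide example (link costs and node names are assumptions for illustration; a real node uses its own timer rather than a global priority queue).

```python
# Hedged sketch of minimum-cost field setup: the priority queue stands in for
# the per-node deferral timers, which expire in order of increasing cost.
import heapq

def build_cost_field(links, sink):
    """links[u] -> list of (v, link_cost). Returns the min cost-to-sink per node."""
    cost = {sink: 0.0}
    pending = [(0.0, sink)]               # (cost when the ADV timer expires, node)
    while pending:
        c, node = heapq.heappop(pending)
        if c > cost.get(node, float("inf")):
            continue                      # a cheaper ADV arrived during the deferral
        for nbr, link_cost in links[node]:
            new_cost = c + link_cost
            if new_cost < cost.get(nbr, float("inf")):
                cost[nbr] = new_cost      # update estimate and restart the deferral timer
                heapq.heappush(pending, (new_cost, nbr))
    return cost

# Link costs chosen to match the slide example: B hears the sink at cost 1,
# C hears it at cost 3 and later learns a cost of 2 via B.
links = {"sink": [("B", 1), ("C", 3)], "B": [("sink", 1), ("C", 1)],
         "C": [("sink", 3), ("B", 1)]}
print(build_cost_field(links, "sink"))    # {'sink': 0.0, 'B': 1.0, 'C': 2.0}
```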

  20. Data Dissemination • A node that decides it has interesting data broadcasts two things (besides the data): • The total budget to get back to the sink • The amount of budget used in the initial broadcast • A node receiving a data message will only forward it if Total Budget ≥ Budget Spent So Far + My Cost • If the inequality holds, then Budget Spent So Far is updated • Otherwise the message is dropped

  21. Data Dissemination Example • Assume hop count was used as a cost metric. • Node A is the sink. • Node C is the source.

  22. Data Dissemination Example contd. • Node C sends a data message which specifies • Total Budget = 2 • Budget Spent = 1 • Node E drops message • TB < BS + E’s Cost • Node B forwards message.
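The forwarding decision itself is a one-line budget check. The sketch below encodes it in Python; the specific node costs for B and E are not stated on the slide, so the values used here are assumptions chosen to be consistent with the outcome shown (E drops, B forwards).

```python
# Hedged sketch of the GRAB-style budget check a node applies before forwarding.
def should_forward(total_budget, budget_spent_so_far, my_cost_to_sink):
    """Forward only if the remaining budget still covers this node's cost to the sink."""
    return total_budget >= budget_spent_so_far + my_cost_to_sink

# Slide example: hop count as the cost metric, Total Budget = 2, Budget Spent = 1.
# Node costs below are assumed values consistent with the slide's outcome.
print(should_forward(total_budget=2, budget_spent_so_far=1, my_cost_to_sink=2))  # E: False, drop
print(should_forward(total_budget=2, budget_spent_so_far=1, my_cost_to_sink=1))  # B: True, forward
```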

  23. Routing on a Curve (Nath et al., 2002) • Trajectories are a natural name space for embedded networks • By definition, network structure mimics the physical structure that is instrumented • Stress along a column • Flooding along a river • Pollution along a road • Trajectories come from the application domain

  24. TBF (Trajectory-Based Forwarding) • Fundamental idea • Route packets along a specified trajectory • Generalization of source-based routing and Cartesian routing • The trajectory is specified in the packet • Trajectory specification: as a function, an equation, or in parametric form (TBF uses the parametric form)

  25. Features of TBF • Decouples pathname from the actual path • Source based Routing (LSR, DSR etc) mixes naming and route path • Applications: • Route around obstacles/changes/failures • Trajectory forwarding need not have a “destination” • Route along a line, pattern • Applications: • Flooding, discovery, group communication (pollination)
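A minimal sketch of the per-hop decision in trajectory-based forwarding, under assumptions: the packet carries a parametric curve (x(t), y(t)), and the current node hands the packet to the neighbor closest to the curve at the next parameter value. The curve, neighbor positions, and the particular scoring rule below are illustrative choices, not the exact forwarding policies evaluated in the TBF paper.

```python
# Hedged sketch of a TBF-style next-hop choice along a parametric trajectory.
import math

def trajectory(t):
    # Example trajectory: the straight line y = x (any parametric curve would do).
    return (t, t)

def next_hop(neighbors, current_t, step=1.0):
    """neighbors: {name: (x, y)}. Pick the neighbor nearest the curve point at t + step."""
    tx, ty = trajectory(current_t + step)
    def dist_to_target(item):
        name, (x, y) = item
        return math.hypot(x - tx, y - ty)
    return min(neighbors.items(), key=dist_to_target)

# Invented neighbor positions; the packet would also carry current_t forward.
neighbors = {"n1": (1.2, 0.9), "n2": (0.4, 1.8), "n3": (2.0, 2.1)}
print(next_hop(neighbors, current_t=0.0))   # ('n1', (1.2, 0.9)), closest to the point (1, 1)
```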

  26. Routing on a Curve

  27. Spoke flooding

  28. Directed Diffusion for Data Centric Routing • Basic idea • Name data (not nodes) with externally relevant attributes • Data type, time, location of node, SNR, etc • Data sources (sensors) publish data, Data clients (users) subscribe to data • All nodes may play both roles • Diffuse requests and responses across network • Optimize path with gradient-based feedback • Support in-network aggregation and processing • Combine data from different sources en route by duplicate suppression, correlation etc. • True peer-to-peer model

  29. Many Variants of Diffusion • Two-phase pull diffusion (original diffusion algorithm) • Data consumers seek out data sources, and then sources search to find the best possible path back to subscribers • Periodic flooding of interests and exploratory data • One-phase pull diffusion • Only floods interests • Push diffusion • Reverses the roles in publish/subscribe: data sources actively search for consumers • Floods only exploratory data messages • Suited for different traffic patterns • Many-to-one • Many-to-many • One-to-many • One-to-one

  30. Two-phase Pull (Original Diffusion) • Sinks identify data by a set of attributes and flood a (geographically scoped) INTEREST message that sets up gradients • Sources respond with flooded EXPLORATORY DATA • The sink then REINFORCES the gradients corresponding to the best responses • Sources then send DATA along the reinforced gradients • Effectively, this creates multicast trees rooted at each source (a sketch of the per-node gradient state follows)
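To make the gradient idea concrete, here is a deliberately stripped-down sketch of the state a node keeps in two-phase pull: interests set up gradients, exploratory data flows along all gradients, and a reinforcement upgrades one gradient to the high-rate path. Attribute matching, gradient timeouts, and rate control are omitted, and all class and method names are illustrative rather than the actual directed-diffusion filter API.

```python
# Hedged sketch of per-node gradient state in two-phase pull diffusion.
class DiffusionNode:
    def __init__(self):
        # interest key (here a plain string) -> {neighbor: reinforced?}
        self.gradients = {}

    def on_interest(self, interest_key, from_neighbor):
        # Flooded INTEREST: remember a low-rate gradient back toward the sender.
        self.gradients.setdefault(interest_key, {})[from_neighbor] = False

    def on_reinforcement(self, interest_key, from_neighbor):
        # The sink (or a downstream node) reinforces the best path: mark it high-rate.
        self.gradients[interest_key][from_neighbor] = True

    def forward_targets(self, interest_key, exploratory):
        # Exploratory data goes along every gradient; normal data only along reinforced ones.
        grads = self.gradients.get(interest_key, {})
        return [n for n, reinforced in grads.items() if exploratory or reinforced]

node = DiffusionNode()
node.on_interest("type=vehicle", from_neighbor="A")
node.on_interest("type=vehicle", from_neighbor="B")
node.on_reinforcement("type=vehicle", from_neighbor="A")
print(node.forward_targets("type=vehicle", exploratory=True))    # ['A', 'B']
print(node.forward_targets("type=vehicle", exploratory=False))   # ['A']
```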

  31. One-phase Pull • Eliminates one search phase relative to two-phase pull • Sinks flood INTEREST messages • Sources respond with DATA along the lowest-latency gradients to sinks • Requires that: • only symmetric links be used • INTEREST messages carry an id

  32. One-phase Push • Makes the source the active party • Sinks are passive, with interest information remaining local to the node subscribing to the data • When sources generate data, they flood EXPLORATORY data • Interested sinks send a REINFORCEMENT message back • DATA travels along reinforced gradients only • [Slide table, originating node vs. every node: broadcast exploratory data / rebroadcast exploratory data; send reinforcement / forward reinforcement; send data along reinforced path / forward data along reinforced path]

  33. When is which diffusion mode the best? • All benefit from late binding of name to address and reduction in address bit overhead • Intuitively: • One-phase pull works best with many sources and a few sinks • E.g. user querying a sensor network for events • Push works best with many sinks and many sources but where sources produce data only occasionally • E.g. sensor sending trigger to other sensors - inefficient with pull due to event traffic • Breakeven point also depends on • Sensor Data rate • Interest / Exploratory Data frequency (topology dynamics)

  34. Diffusion as a construct for in-network processing • Nodes pull, push, and store named data (using tuple space) to create efficient processing points in the network • e.g. duplicate suppression, aggregation, correlation • Nested queries reduce overhead relative to “edge processing” • Complex queries support collaborative signal processing • propagate a function describing desired locations/nodes/data (e.g. an ellipse for tracking) • Interesting analogs to emerging peer-to-peer architectures • Build on a data-centric architecture for queries and storage

  35. Data Centric vs. Address Centric (Krishnamachari et al.) • Address centric • Distinct paths from each source to the sink • Data centric • Supports aggregation in the network where paths/trees overlap • Essential difference from traditional IP networking • Building efficient trees for the data-centric model • Aggregation tree: on a general graph, if k nodes are sources and one is the sink, the aggregation tree that minimizes the number of transmissions is the minimum Steiner tree, which is NP-complete. Approximations: • Center at Nearest Source (CNSDC): all sources send through the source nearest to the sink • Shortest Path Tree (SPTDC): merge shortest paths • Greedy Incremental Tree (GITDC): start with the path from the sink to the nearest source, then successively add the next nearest source to the existing tree (see the sketch below)
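The GIT heuristic is easy to express directly from its description above. The sketch below assumes unit edge weights (hop count) and an invented topology; it is a toy illustration of the greedy attachment step, not the evaluation setup used by Krishnamachari et al.

```python
# Hedged sketch of the Greedy Incremental Tree (GITDC) heuristic for a
# data-centric aggregation tree, assuming hop count as the edge cost.
from collections import deque

def bfs_paths(graph, start):
    """Shortest (hop-count) path from start to every reachable node."""
    paths = {start: [start]}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for v in graph[u]:
            if v not in paths:
                paths[v] = paths[u] + [v]
                queue.append(v)
    return paths

def greedy_incremental_tree(graph, sink, sources):
    tree_nodes = {sink}
    tree_edges = set()
    remaining = set(sources) - tree_nodes
    while remaining:
        # Attach the remaining source whose shortest path to the current tree is shortest.
        best_path = None
        for src in remaining:
            paths = bfs_paths(graph, src)
            path = min((paths[n] for n in tree_nodes if n in paths), key=len)
            if best_path is None or len(path) < len(best_path):
                best_path = path
        tree_nodes.update(best_path)
        tree_edges.update(zip(best_path, best_path[1:]))
        remaining -= tree_nodes           # sources absorbed by the tree are done
    return tree_edges

# Invented topology: two sources s1, s2 sharing part of their path to the sink.
graph = {"sink": ["a"], "a": ["sink", "s1", "b"], "b": ["a", "s2"],
         "s1": ["a"], "s2": ["b"]}
print(greedy_incremental_tree(graph, "sink", {"s1", "s2"}))
# e.g. {('s1', 'a'), ('a', 'sink'), ('s2', 'b'), ('b', 'a')} (set order may vary)
```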

  36. Source placement: event-radius model

  37. Comparison of energy costs • Data centric has many fewer transmissions than address centric, independent of the tree-building algorithm. [Figure legend: Address Centric, Shortest Path Data Centric, Greedy Tree Data Centric, Nearest Source Data Centric, Lower Bound]

  38. BCAST: Fundamental building block • Commands • Wake-up • Form routing tree • Discover route • Source-destination discovery (DSR, AODV) • Exploration in directed diffusion • Time-synchronization • Constructed from underlying local broadcast Ack: Culler, MobiHoc ‘05

  39. Flooding • Simple address-free algorithm schema: if (new bcast msg) then { take local action; retransmit modified request } • Naturally adapts to available connectivity • Minimal state and protocol overhead, yet surprising complexity lurks in this simple mechanism (see the sketch below) Ack: Culler, MobiHoc ‘05
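The schema above fits in a few lines of code once duplicate suppression is added. The Python sketch below is an illustrative node-local version under assumptions: the message format, the stub radio object, and the local action are all invented placeholders for whatever the application needs.

```python
# Hedged sketch of the address-free flooding schema on a single node.
class FloodingNode:
    def __init__(self, radio):
        self.radio = radio                 # hypothetical object providing broadcast(msg)
        self.seen = set()                  # (origin, seq) pairs already handled

    def on_receive(self, msg):
        key = (msg["origin"], msg["seq"])
        if key in self.seen:
            return                         # duplicate: ignore, do not rebroadcast
        self.seen.add(key)
        self.take_local_action(msg)        # e.g. set parent, record hop count, ...
        fwd = dict(msg, hops=msg.get("hops", 0) + 1)   # retransmit a modified copy
        self.radio.broadcast(fwd)

    def take_local_action(self, msg):
        print("handled", msg["origin"], msg["seq"], "at depth", msg.get("hops", 0))

class StubRadio:
    def broadcast(self, msg):
        print("rebroadcast:", msg)

node = FloodingNode(StubRadio())
msg = {"origin": "sink", "seq": 7, "hops": 0}
node.on_receive(msg)    # handled, then rebroadcast with hops = 1
node.on_receive(msg)    # duplicate: silently dropped
```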

  40. Example Tree Ack: Culler, MobiHoc ‘05

  41. Factors • Long asymmetric links are common • Many children • Nodes out of range may have overlapping cells • hidden terminal effect • Collisions => these nodes hear neither ‘parent’ • they become stragglers • As the tree propagates, it • folds back on itself • rebounds from the edge • picks up these stragglers • Mathematically complex because behavior is not independent beyond a single cell • Redundancy • Geometric overlap => <41% additional area [Ni, S.Y., Tseng, Y.C., Chen, Y.S., Sheu, J.P.: The broadcast storm problem in a mobile ad hoc network. MobiCom ’99] Ack: Culler, MobiHoc ‘05

  42. Selective Retransmission Schemes • Probabilistic Retransmission • Fixed prob. • What would be the right choice? • Counter • When hear msg, start random delay • If hear C msgs during wait, don’t retransmit • Distance • If nearest node from which msg is heard is less than some threshold, don’t retransmit • Location • If portion of cell not covered by transmitting neighbors is less than some threshold, don’t retransmit • Cluster-based • Partition graph into cluster heads, gateways, and members • Members don’t transmit Ack: Culler, MobiHoc ‘05
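As an illustration of the counter scheme in the list above, here is a small Python sketch: on first hearing a message, start a random delay; if at least C further copies are overheard before the delay expires, suppress the retransmission. The timing model is simplified (no real timers), and the threshold and names are assumptions for illustration.

```python
# Hedged sketch of counter-based retransmission suppression.
import random

class CounterSuppression:
    def __init__(self, threshold_c=3):
        self.threshold_c = threshold_c
        self.copies_heard = 0
        self.waiting = False

    def on_first_copy(self):
        self.copies_heard = 1
        self.waiting = True
        return random.uniform(0.0, 0.1)    # random delay (seconds) before deciding

    def on_duplicate_copy(self):
        if self.waiting:
            self.copies_heard += 1          # a neighbor already covered this area

    def on_delay_expired(self):
        self.waiting = False
        return self.copies_heard < self.threshold_c   # retransmit only if too few copies heard

node = CounterSuppression(threshold_c=3)
delay = node.on_first_copy()
node.on_duplicate_copy()
node.on_duplicate_copy()
print(node.on_delay_expired())   # False: 3 copies heard, so the rebroadcast is suppressed
```

The probabilistic, distance, and location variants differ only in the test applied when the delay expires.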

  43. Adaptive BCAST rate • Upon first msg • Start random delay • If new msg arrives during delay • Filter message (e.g., discard if signal strength below threshold) • If it passes the filter, utilize the message • Start new delay • Upon expiration of delay • Complete local processing • E.g., pick lowest-depth node with strongest signal as parent • Retransmit • Delay is proportional to cell density • Wait till neighbors go quiet before transmitting => approximately uniform transmissions per unit area, regardless of node density • Exploit long links when appropriate Ack: Culler, MobiHoc ‘05

  44. Flooding vs Gossip • In gossip protocols, at each step pick a random neighbor • Assumes an underlying connectivity graph • Typically used when the graph is fully connected • E.g., IP • Much slower propagation Ack: Culler, MobiHoc ‘05

  45. Reliable, Epidemic Dissemination • Reliably deliver a datum to every node in the network • Fundamental communication pattern needed for querying, reconfiguration, reprogramming, retasking, etc. • Example: query dissemination in TinyDB • Periodically transmit the current query • If you hear a new query, accept it and start retransmitting it • Does not scale well with density • Is not responsive at a low management rate Ack: Culler, MobiHoc ‘05

  46. To Every Node in a Network • Network membership is not static • Loss, transient disconnection, repopulation • Limited resources prevent storing complete network population information • To ensure dissemination to every node, we must periodically verify that every node has the data • Propagation is costly • Queries, scripts, modules, parameters, etc. of a few 10s to 100s of bytes must reach every node in a large, multihop network • But maintenance is even more costly • Consider 1 maintenance packet per minute • Maintenance for 15 minutes costs more than 400 B of data (e.g. a script) • Two minutes of maintenance cost more than 20 B of data (e.g. parameters) • Checking that every node has the data is more costly than propagating the data itself Ack: Levis et. al., NSDI ‘04

  47. Three Needed Properties • Low maintenance overhead • Minimize communication when everyone is up to date • Rapid propagation • When new data appears, it should propagate quickly • Scalability • Protocol must operate in a wide range of densities • Cannot require a priori density information Ack: Levis et. al., NSDI ‘04

  48. Epidemic Dissemination - Trickle • Polite gossip: “Every once in a while, broadcast what data you have, unless you’ve heard some other nodes broadcast the same thing recently.” • Behavior (simulation and deployment): • Maintenance: a few sends per hour • Propagation: less than a minute • Scalability: thousand-fold density changes • Instead of flooding a network, establish a trickle of packets, just enough to stay up to date • Key idea: maintain a constant flux of communication per unit area, regardless of node density • More neighbors => listen more, talk less • Announcement rate ∝ 1/cell density (a timer sketch follows) Ack: Levis et. al., NSDI ‘04
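The Trickle timer that implements this "polite gossip" is compact. The sketch below follows the algorithm as described by Levis et al. (NSDI '04): within each interval of length I, pick a random time t in [I/2, I]; count consistent announcements heard; at t, transmit only if fewer than k were heard; when the interval ends, double I up to I_max; on detecting an inconsistency, reset I to I_min. The parameter values are arbitrary examples, not recommended settings.

```python
# Hedged sketch of the Trickle timer (after Levis et al., NSDI '04).
import random

class Trickle:
    def __init__(self, i_min=1.0, i_max=64.0, k=1):
        self.i_min, self.i_max, self.k = i_min, i_max, k
        self.interval = i_min
        self._new_interval()

    def _new_interval(self):
        self.counter = 0                      # consistent announcements heard this interval
        self.t = random.uniform(self.interval / 2, self.interval)   # time to (maybe) transmit

    def hear_consistent(self):
        self.counter += 1                     # a neighbor already announced the same data

    def hear_inconsistent(self):
        self.interval = self.i_min            # someone is out of date:
        self._new_interval()                  # shrink the interval and gossip quickly again

    def should_transmit_at_t(self):
        return self.counter < self.k          # polite gossip: stay quiet if k others spoke

    def interval_expired(self):
        self.interval = min(self.interval * 2, self.i_max)   # back off while consistent
        self._new_interval()

node = Trickle()
node.hear_consistent()
print(node.should_transmit_at_t())   # False with k = 1: a neighbor already announced
```

With many neighbors the counter fills quickly, so each node transmits rarely; in a sparse cell it transmits almost every interval, which is exactly the density-independent flux described above.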

  49. Trickle Assumptions • Broadcast medium • Concise, comparable metadata • Given A & B, know if one needs an update • Metadata exchange (maintenance) is the significant cost Ack: Levis et. al., NSDI ‘04

  50. Detecting that a Node Needs an Update • As long as each node communicates with others, inconsistencies will be found • Either reception or transmission is sufficient • Define a desired detection latency, t • Choose a redundancy constant k • k = (receptions + transmissions) in an interval of length t • Trickle keeps the communication rate as close to k/t as possible Ack: Levis et. al., NSDI ‘04
