
Scalable Location Management for Large Mobile Ad hoc Networks


Presentation Transcript


  1. Scalable Location Management for Large Mobile Ad hoc Networks Sumesh J. Philip

  2. Contents • Wireless Ad hoc networks • Issue of Scalability • Geographic Routing • Scalable Location Update based Routing • SLALoM - Scalable Location Management • Grid Location Service • Hierarchical Grid Location Management • Numerical study • Conclusion

  3. Wireless Ad hoc networks • Infrastructure-less networks that can be easily deployed • Each wireless host acts as an independent router for relaying packets • Network topology changes frequently and unpredictably • The key challenge lies in routing packets • Many protocols proposed in the literature (table-driven/reactive/hybrid) • Dynamic Source Routing (DSR) works well for small networks

  4. Issue of Scalability • Increasing density increases the average node degree and decreases the average path length • Routing costs less • Any reasonable scheme might work! • To test scalability, the area (playground size) must increase with the number of nodes • Average node degree held constant • A mobility model that captures this relationship is presented later

  5. Traditional Protocols • Table driven • incur large overheads due to routing table maintenance • Delayed topology updates can cause loops • On-demand • flood the entire network with discovery packets • long latency for discovery • Path maintenance means additional state • No separation between data and control • Ultimately, data suffers!!

  6. Any contenders? • Not many invariants to play with (IP address, local connectivity) • Nodes located physically close are likely connected by a small number of radio hops • Geolocation techniques can be used to identify a node's physical position • Geographic forwarding • Packet header contains the destination's location • Intermediate nodes switch packets based on location

  7. Geographic Forwarding [figure: nodes A through G, with C's radio range shown] • A addresses a packet to G's latitude and longitude • C only needs to know its immediate neighbors to forward the packet towards G • Geographic forwarding needs location management!
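The forwarding step on slide 7 can be made concrete with a short sketch of greedy geographic forwarding (the deck later names MFR as one specific policy). The (x, y) position tuples, the function name next_hop, and the stall-when-no-progress behavior are illustrative assumptions, not part of the slides.

```python
import math

def next_hop(current, neighbors, dest):
    """Greedy geographic forwarding: hand the packet to the neighbor that is
    closest to the destination, but only if it is closer than we are
    (otherwise forwarding stalls at a local maximum)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best = min(neighbors, key=lambda n: dist(n, dest), default=None)
    if best is not None and dist(best, dest) < dist(current, dest):
        return best
    return None  # no neighbor makes progress towards dest

# Example: C at (0, 0) forwards a packet addressed to G's coordinates (5, 1)
print(next_hop((0, 0), [(1, 0), (0, 1), (-1, 0)], (5, 1)))  # -> (1, 0)
```

In a real stack the neighbor list would come from periodic beacons; here it is simply passed in.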

  8. Desirable Properties of Location Management • Spread load evenly over all nodes • Degrade gracefully as nodes fail • Queries for nearby nodes stay local • Per-node storage and communication costs grow slowly as the network size grows

  9. Scalable Location based Routing Protocol (SLURP) • Hybrid protocol with a deterministic way of discovering the destination • Topography divided into square grids • Each node (ID) selects a home region using f(ID) and periodically registers with the HR • Nodes that wish to communicate with a node find its HR by applying f to the destination's ID and query that region • Use geographic forwarding to send data once the location is known (e.g. MFR)

  10. Example • f(ID) = ID mod RT, where RT is the number of regions • For DST = 22 and RT = 12: HR = 22 mod 12 = 10 [figure legend: home region, update/query, data, location database]
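The mapping on slides 9 and 10 is a simple modulo hash, sketched below; the function and parameter names are mine.

```python
def home_region(node_id: int, total_regions: int) -> int:
    """SLURP home region selection: f(ID) = ID mod RT.  A source applies the
    same function to the destination's ID to find which region to query."""
    return node_id % total_regions

# Slide example: ID = 22, RT = 12  ->  home region 10
assert home_region(22, 12) == 10
```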

  11. Cost of Location Management • Location Registration • Periodic • Triggered • Location Maintenance • Operations for database consistency • Location Discovery • Query/response • Data Transfer

  12. Mobility Model • Each node moves independently and randomly • Direction chosen uniformly at random, velocity chosen uniformly from [v-c, v+c] • New direction and velocity chosen on reaching each destination • Average node degree ≈ NπR²/A (N nodes, radio range R, area A) • To keep the degree constant, A must grow linearly with N
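To make the scaling rule explicit, the sketch below solves the degree expression for the playground size. It ignores edge effects, approximates N - 1 by N, and uses illustrative parameter values.

```python
import math

def playground_side(num_nodes: int, radio_range: float, target_degree: float) -> float:
    """For N nodes uniformly placed in area A with radio range R, the average
    node degree is roughly N * pi * R**2 / A (edge effects ignored).  Holding
    the degree constant gives A = N * pi * R**2 / degree, i.e. the playground
    area must grow linearly with N.  Returns the side of the square area."""
    area = num_nodes * math.pi * radio_range ** 2 / target_degree
    return math.sqrt(area)

# Example: 1000 nodes, 250 m radio range, target average degree of 10
print(round(playground_side(1000, 250.0, 10.0)))  # ~4431 m per side
```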

  13. Location Update Overhead

  14. Location Update Overhead

  15. Home Region Maintenance • On region crossing • Inform previous region of departure • Inform new region of arrival • Update from any node in new region

  16. Total Overhead • Cost of locating • Send a location query to the home region • Total overhead = sum of all overheads for all nodes

  17. ScaLAble Location Management (SLALoM) • Defines a hierarchy of regions: Order(3), Order(2), Order(1) • Each Order(2) region consists of K² Order(1) regions • Each node is assigned a HR in each Order(2) region • To reduce location update overhead, HRs are classified as near and far; near HRs are updated frequently • Nodes that wish to communicate with another node query its HR in the current Order(2) grid • Queries arriving at far HRs are forwarded to near ones for the exact location of the destination

  18. Grid Ordering in SLALoM [figure: Order-1 regions, an Order-2 region with K = 4, and an Order-1 home region] • Terrain divided into Order-1 regions • K² Order-1 regions combined to form an Order-2 region • Function f maps a node ID to a home region within each Order-2 region
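A minimal sketch of the grid indexing, assuming square Order-1 cells of a fixed side length and (column, row) indices; cell_size and k are illustrative parameters, not values from the deck.

```python
def order1_cell(x: float, y: float, cell_size: float) -> tuple:
    """(column, row) index of the Order-1 grid cell containing (x, y)."""
    return int(x // cell_size), int(y // cell_size)

def order2_region(x: float, y: float, cell_size: float, k: int) -> tuple:
    """Index of the Order-2 region: each one groups K x K Order-1 cells."""
    col, row = order1_cell(x, y, cell_size)
    return col // k, row // k

# Example: 100 m Order-1 cells, K = 4 (each Order-2 region is 4 x 4 cells)
print(order1_cell(950.0, 430.0, 100.0))       # -> (9, 4)
print(order2_region(950.0, 430.0, 100.0, 4))  # -> (2, 1)
```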

  19. Near and Far Home Regions [figure: near vs. far home regions around node U] • The 9 home regions in and around U's current O-2 region are near • The rest are far home regions
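Using the Order-2 indices from the previous sketch, the near/far test reduces to a 3 x 3 neighborhood check; reading the "9 regions" rule as a Chebyshev-distance test is my interpretation.

```python
def is_near(home_o2, current_o2):
    """A home region is 'near' if its Order-2 region is the node's current
    Order-2 region or one of the 8 surrounding ones (the 9 regions on the
    slide); every other home region is 'far'."""
    dx = abs(home_o2[0] - current_o2[0])
    dy = abs(home_o2[1] - current_o2[1])
    return max(dx, dy) <= 1

print(is_near((2, 1), (3, 2)))  # True: adjacent Order-2 region
print(is_near((0, 0), (3, 2)))  # False: far home region
```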

  20. Location Update [figure: node movement and the resulting updates] • If movement stays within the current O-2 region, update only the near home regions • Otherwise, update all home regions via multicast • Near home regions know U's exact location • Far home regions know only the approximate location (the O-2 region)

  21. Location Maintenance [figure: location database entries A (A_loc), B (B_loc), ...; which node should store them?] • On entry into a grid, a node broadcasts its presence • A server node replies with the location information that the newly arrived node has to store • Timers are used to avoid a broadcast storm

  22. Location Query [figure: V's query routed towards U's home region] • If U and V are in the same O-1 region, V already knows U's location • Otherwise, V sends a query to U's closest home region • If that is a far home region, the query is routed to the nearest "near" home region

  23. Grid Location Service (GLS) [figure: node n and its location servers s in the sibling level-0, level-1, and level-2 squares] • s is n's successor in that square • (The successor is the node with the "least ID greater than" n)

  24. GLS Updates [figure: location table contents and location updates across the grid] • Invariant (for all levels): for node n in a square, n's successor in each sibling square "knows" about n
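The successor rule that picks a location server in a square can be sketched as below; wrapping around to the smallest ID when no larger one exists follows GLS's circular ID space and is not spelled out on the slide.

```python
def successor(n_id, ids_in_square):
    """Pick n's location server in a square: the node with the least ID
    greater than n, wrapping around to the smallest ID if none is larger."""
    larger = [i for i in ids_in_square if i > n_id]
    if larger:
        return min(larger)
    return min(ids_in_square) if ids_in_square else None

print(successor(17, [5, 21, 26]))  # -> 21 (least ID greater than 17)
print(successor(30, [5, 21, 26]))  # -> 5  (no larger ID, wrap around)
```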

  25. GLS Query [figure: location table contents; a query from node 23 for node 1 is routed through successive location servers]

  26. Using Multilevel Hierarchies • Random node movement and communication assumptions are not realistic for all applications in large networks • Node movement tends to be localized; network traversals are rare • Update cost is proportional to mobility • Frequent data connections may occur within a locality • Multiple server regions become redundant • Local queries should stay local • Ideal for a hierarchical setup of node locations • Unfortunately, formation and maintenance of a hierarchy is cumbersome

  27. Hierarchical Grid Ordering (HGRID) [figure: grid hierarchy, Level 0 through Level III] • Grid hierarchy built recursively from unit grids • At each level, one of the four lower-level leaders is selected as the leader for the next level • The grid ordering is arbitrary; alternate orderings are possible

  28. Location Update [figure: updates and broadcasts triggered by boundary crossings] • Nodes update servers as they cross grid boundaries • The number of updates, and the distance they travel, depend on the level of the boundary crossed in the hierarchy • Localized movement results in low overhead
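Because one of four lower-level leaders becomes the leader at the next level (slide 27), each level groups 2 x 2 lower-level grids. Under that reading, the sketch below decides how far up the hierarchy an update must travel; the cell indices and helper name are mine.

```python
def crossed_level(old_cell, new_cell, num_levels):
    """Highest hierarchy level whose boundary is crossed when a node moves
    between two unit (level-0) cells.  A level-L cell groups 2**L x 2**L unit
    cells, so the update must reach leaders at least this high up.
    Returns -1 if the node stays inside the same unit cell."""
    highest = -1
    for level in range(num_levels):
        size = 2 ** level
        old_group = (old_cell[0] // size, old_cell[1] // size)
        new_group = (new_cell[0] // size, new_cell[1] // size)
        if old_group != new_group:
            highest = level
    return highest

# Example: moving from unit cell (3, 1) to (4, 1) crosses a level-2 boundary
# (3 // 4 == 0 but 4 // 4 == 1), so leaders up to level 2 must be informed.
print(crossed_level((3, 1), (4, 1), 4))  # -> 2
```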

  29. Location Discovery & Data Transfer [figure: query, response, and data paths between U and V] • The source sends a query to its leader • The query visits successive leaders until an approximate location of the destination is found; a response is sent back • Data is forwarded to progressively more accurate locations until it reaches the destination

  30. Performance Study • GloMoSim: packet-level simulator • Simulator setup: Application - CBR; Transport - UDP; Network - IP, Location Management, Geographic Routing; LL/MAC - IEEE 802.11; Radio - no noise; PHY - free space; Mobility - random waypoint

  31. Scalability with Mobility (High Load) [plots: throughput, discovery delay] • HGRID performs best, with throughput above 90% • Surprisingly, SLALoMK2 performs better than the others • Explained by its lower location discovery delay and packet buffering • SLURP performs worst

  32. Scalability with Mobility [plots: data delay, control overhead] • HGRID performs best overall due to its low signaling overhead • SLALoM performs worst due to congestion caused by network-wide updates • Interestingly, the overhead in bytes is higher for HGRID than for SLURP

  33. Scalability with Network Size [plots: packets delivered, data delay] • Tradeoff between signaling overhead and throughput/delay • HGRID performs best overall

  34. Scalability with Network Size [plots: database size, control overhead] • Overhead in bytes is highest for SLALoM; maintenance of large databases increases the overall overhead of HGRID • Storage cost grows slightly with network size for HGRID

  35. Summary • Issue of scalability in mobile ad hoc routing • Topology updates congest the network • Discovery and maintenance cause unnecessary flooding • Geographic routing is a potential candidate • Localized and guaranteed • Need scalable location management schemes • Grid-based protocols (flat vs. hierarchical) • SLURP, SLALoM, GLS, HGRID • Relative scalability of LM protocols is dependent on location update, maintenance, and discovery • Performance studies show HGRID scales well with network size and mobility
