
On the Effect of Group Mobility to Data Replication in Ad Hoc Networks


Presentation Transcript


  1. On the Effect of Group Mobility to Data Replication in Ad Hoc Networks Jiun-Long Huang and Ming-Syan Chen IEEE Transactions On Mobile Computing, May 2006 Presented by Manu Shukla CS 6204 Fall 2006

  2. Agenda • The Problem • DRAM Algorithm • Allocation unit construction phase • VectorCluster • Replica allocation phase • Experiments and Evaluations • Conclusions and Critique

  3. Introduction • Mobile Ad Hoc Network (MANET) is a self-organizing, rapidly deployable network of wireless nodes without infrastructure • Mobile nodes of a MANET also function as routers • Disconnection often occurs due to mobility and causes frequent network division • Disconnected partitions decrease data accessibility • Data replication can greatly improve the accessibility for a partitioned network

  4. Introduction (2) • DCG and E-DCG are two previously proposed replica allocation schemes in MANET • The two drawbacks of the schemes are: • Generation of large amounts of traffic • Negligence of group mobility

  5. Introduction (3) • Authors address the problem by exploring group mobility • Propose Scheme DRAM to allocate replicas by considering group mobility • Underlying group mobility model is assumed to be Reference Point Group Mobility model (RPGM)

  6. Description of symbols • Symbols used in formulae and equations

  7. Mobility Models • RPGM models team collaboration, where mobile nodes collaborate and move as a group • In RPGM, all mobile nodes are divided into several mobility groups • Each node is assigned a virtual reference node, and the movement of a reference node in a time slot is called the global motion vector • The vector from the position of the corresponding reference node to the mobile node's position is the random motion vector

  8. RPGM Example • We have PiR(k) = PiR(k-1) + GM(k) and PiN(k) = PiR(k) + RMi(k), where PiN(k) and PiR(k) are the positions of the mobile node and its reference node at time T(k), GM(k) is the global motion vector, and RMi(k) is the random motion vector
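
A minimal sketch of one RPGM time slot, following the two relations above: the reference point is advanced by the global motion vector, and each member is offset from it by a fresh random motion vector. The names (rpgm_step, ref_pos, gm_vector, rm_radius) and the uniform sampling of the random motion vector are illustrative assumptions, not the paper's exact parameterization.

```python
import random

def rpgm_step(ref_pos, member_ids, gm_vector, rm_radius):
    """One RPGM time slot: the reference point moves by the global motion
    vector GM(k); each member node is then placed at the reference point
    plus a fresh random motion vector RM_i(k) of bounded size."""
    # P_i^R(k) = P_i^R(k-1) + GM(k)
    new_ref = (ref_pos[0] + gm_vector[0], ref_pos[1] + gm_vector[1])
    positions = {}
    for node_id in member_ids:
        # RM_i(k): random vector from the reference point to the node
        rm = (random.uniform(-rm_radius, rm_radius),
              random.uniform(-rm_radius, rm_radius))
        # P_i^N(k) = P_i^R(k) + RM_i(k)
        positions[node_id] = (new_ref[0] + rm[0], new_ref[1] + rm[1])
    return new_ref, positions
```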

  9. System Model • m mobile nodes M1, M2,…,Mm and n data items D1,D2,…,Dn • Each data item is updated periodically by its original host with period τi • Each node is equipped with a GPS device, so its location is always known • The movement of each group follows a waypoint model, which breaks the movement of a mobile node into alternating pause and motion periods
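
The pause-and-motion behavior of the waypoint model mentioned above can be illustrated roughly as follows; the destination, speed, and pause-length distributions are assumptions for the sketch, and all names (waypoint_trace, speed_range, pause_range) are hypothetical.

```python
import math
import random

def waypoint_trace(area, speed_range, pause_range, start, steps, dt=1.0):
    """Generate `steps` positions that alternate motion periods (move toward a
    random destination at a random speed) and pause periods (stay put)."""
    x, y = start
    trace = []
    while len(trace) < steps:
        dest = (random.uniform(0, area[0]), random.uniform(0, area[1]))
        speed = random.uniform(*speed_range)
        while len(trace) < steps:                      # motion period
            dx, dy = dest[0] - x, dest[1] - y
            dist = math.hypot(dx, dy)
            if dist <= speed * dt:                     # destination reached
                x, y = dest
                trace.append((x, y))
                break
            x += speed * dt * dx / dist
            y += speed * dt * dy / dist
            trace.append((x, y))
        for _ in range(random.randint(*pause_range)):  # pause period
            if len(trace) >= steps:
                break
            trace.append((x, y))
    return trace
```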

  10. DRAM Design • DRAM (Decentralized Replica Allocation with group Mobility) is a decentralized algorithm to produce effective replica allocations efficiently • Executed periodically, with a relocation period of r time slots, to adapt to changes in network connectivity • Two phases in each relocation period • Allocation unit construction phase • Replica allocation phase • In the allocation unit construction phase, all mobile nodes in the network are divided into several disjoint allocation units

  11. DRAM Design (2) • In the replica allocation phase, the replicas of all data items are allocated according to the access frequencies of the data items (a skeleton of the whole cycle is sketched below)
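
Putting the two phases together, a skeleton of one relocation cycle might look like the sketch below; the phase procedures are passed in as callables because their details are covered on the later slides, and every name here is a placeholder rather than the paper's interface.

```python
def dram_cycle(nodes, relocation_period, current_slot,
               construct_allocation_units, allocate_replicas):
    """Skeleton of one DRAM relocation cycle: every `relocation_period` time
    slots, partition the nodes into disjoint allocation units, then allocate
    replicas inside each unit."""
    if current_slot % relocation_period != 0:
        return None                                    # not a relocation slot
    units = construct_allocation_units(nodes)          # phase 1
    return [allocate_replicas(unit) for unit in units]  # phase 2
```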

  12. Allocation Unit Construction Phase • Mobile nodes pass through three sets of states • INITIAL state • ZONE-MASTER and ZONE-MEMBER states • CLUSTER-MASTER and CLUSTER-MEMBER states

  13. INITIAL State • A mobile node broadcasts an info message to all mobile nodes in its broadcast zone with a TTL • When a node receives the info message, it forwards it to its neighbors as long as the remaining TTL is positive, so the message reaches all nodes within TTL hops • Each node maintains a list of its historical locations, called a position list, to track its pause and motion periods
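
The TTL-limited flooding of the info message could be realized as in the following sketch, assuming a neighbor map is available; the breadth-first formulation and the names (flood_info_message, neighbors_of) are illustrative, not the paper's message format.

```python
from collections import deque

def flood_info_message(origin, neighbors_of, ttl):
    """TTL-limited flooding sketch: forward the info message while the
    remaining TTL is positive, so it reaches every node within `ttl` hops of
    the origin (its broadcast zone). `neighbors_of` maps a node id to the ids
    of its one-hop neighbors."""
    reached = {origin}
    queue = deque([(origin, ttl)])
    while queue:
        node, remaining = queue.popleft()
        if remaining == 0:
            continue
        for neighbor in neighbors_of.get(node, []):
            if neighbor not in reached:
                reached.add(neighbor)
                queue.append((neighbor, remaining - 1))
    return reached
```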

  14. ZONE-MASTER and ZONE-MEMBER states • Mobile nodes are classified into two groups (masters and members) by the lowest-id clustering algorithm • The node with the lowest host id in its broadcast zone becomes the master of that zone and enters the ZONE-MASTER state • All other nodes enter the ZONE-MEMBER state • A node Mi in the ZONE-MEMBER state joins the node Mj in the ZONE-MASTER state that has the lowest host id within the broadcast zone of Mi
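
A simplified, centralized rendition of the lowest-id rule is sketched below; the actual protocol runs distributedly via the info messages, and the mapping zone_of (node id to the ids heard in its broadcast zone) is an assumed input.

```python
def lowest_id_clustering(zone_of):
    """Walk node ids in increasing order: an undecided node becomes a zone
    master, and every undecided node in its broadcast zone joins it as a
    zone member."""
    role = {}  # node id -> ("ZONE-MASTER" | "ZONE-MEMBER", master id)
    for node in sorted(zone_of):
        if node in role:
            continue
        role[node] = ("ZONE-MASTER", node)
        for other in zone_of[node]:
            if other not in role:
                role[other] = ("ZONE-MEMBER", node)
    return role
```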

  15. ZONE-MASTER and ZONE-MEMBER states (2) • Each node in the ZONE-MASTER state then clusters its member nodes • All nodes within a cluster are expected to have similar motion behavior • The master node then re-clusters the resulting clusters by considering their motion vectors

  16. Lemmas • With the help of the lemmas (shown on the slide), two heuristics are obtained

  17. Lemmas (2) • In a mobility group, an actual motion vector is close to the global motion vector if it has • the maximal number of neighbors in angle, with maximal difference θ • the maximal number of neighbors in length, with maximal difference 2ε • The algorithm VectorCluster is developed in accordance with these heuristics

  18. VectorCluster • VectorCluster consists of two major procedures • ClusterByAngle • ClusterByLength • After executing VectorCluster, each zone master selects one cluster master for each resulting cluster • The selected mobile nodes enter the CLUSTER-MASTER state, and the other nodes enter the CLUSTER-MEMBER state
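
A hedged sketch of the VectorCluster idea, in which a single greedy 1-D helper stands in for both ClusterByAngle and ClusterByLength, with tolerances θ and 2ε; the paper's exact procedures may differ (for instance in how angle wrap-around at ±π is handled), and all names other than the procedure names from the slide are hypothetical.

```python
import math

def cluster_by_key(vectors, key, tolerance):
    """Greedy 1-D clustering: sort vectors by `key` and open a new cluster
    whenever the gap to the first member of the current cluster exceeds
    `tolerance`."""
    clusters = []
    for vec in sorted(vectors, key=key):
        if clusters and key(vec) - key(clusters[-1][0]) <= tolerance:
            clusters[-1].append(vec)
        else:
            clusters.append([vec])
    return clusters

def vector_cluster(motion_vectors, theta, epsilon):
    """Group motion vectors whose angles differ by at most theta
    (ClusterByAngle), then split each angle cluster by length with
    tolerance 2*epsilon (ClusterByLength).
    Note: angle wrap-around at +/- pi is ignored in this sketch."""
    angle = lambda v: math.atan2(v[1], v[0])
    length = lambda v: math.hypot(v[0], v[1])
    result = []
    for angle_cluster in cluster_by_key(motion_vectors, angle, theta):
        result.extend(cluster_by_key(angle_cluster, length, 2 * epsilon))
    return result
```

Sorting dominates the cost, which matches the O(|V| log |V|) complexity stated on slide 27.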

  19. VectorCluster (2) • Result of VectorCluster in given example

  20. CLUSTER-MASTER and CLUSTER-MEMBER states • Tasks of nodes in these states consist of two steps • Cluster maintenance • Cluster merge

  21. Cluster Maintenance • Each cluster member sends a status message to its cluster master • The cluster master checks whether the moving behaviors are still similar to one another • It clusters the motion behaviors reported in the status messages • The dominating cluster is the one with the most nodes • It sends reject messages to the nodes not in the dominating cluster, and those nodes return to the INITIAL state
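
The dominating-cluster selection could be sketched as follows, reusing the vector_cluster sketch above; the status-message contents and the return convention (kept vs. rejected node ids) are assumptions.

```python
def maintain_cluster(status_vectors, theta, epsilon):
    """Re-cluster the motion vectors reported in status messages, keep the
    dominating cluster (the one with the most nodes), and reject the rest,
    which return to the INITIAL state.
    `status_vectors` maps node id -> (dx, dy) motion vector."""
    vec_to_nodes = {}
    for node, vec in status_vectors.items():
        vec_to_nodes.setdefault(vec, []).append(node)
    clusters = vector_cluster(list(status_vectors.values()), theta, epsilon)
    dominating = max(clusters, key=len)          # cluster with the most members
    kept = {n for vec in dominating for n in vec_to_nodes[vec]}
    rejected = set(status_vectors) - kept
    return kept, rejected
```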

  22. Cluster Merge • Merging clusters which tend to be connected in the near future improves data accessibility • Two allocation units Ci and Cj can be merged into a new allocation unit if they are cluster-wise connected in T(k) and potentially cluster-wise connected in T(k+r)

  23. Cluster Merge (2) • Cluster-wise connected and potentially cluster-wise connected are defined as shown on the slide • During allocation unit construction, each cluster master broadcasts a merge message containing its cluster master id and its current and estimated bounding rectangles

  24. ClusterMerge Procedure • Cluster merge can be performed by the procedure shown on the slide (a sketch follows below)
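
Under the simplifying assumption that cluster-wise connectivity is approximated by overlap of the bounding rectangles (the paper's definitions may additionally account for the radio range), the merge test could look like this sketch; all names are hypothetical.

```python
def rectangles_overlap(r1, r2):
    """r = (xmin, ymin, xmax, ymax); True if the two rectangles intersect."""
    return not (r1[2] < r2[0] or r2[2] < r1[0] or
                r1[3] < r2[1] or r2[3] < r1[1])

def should_merge(current_i, estimated_i, current_j, estimated_j):
    """Merge two allocation units only if their current bounding rectangles
    overlap (cluster-wise connected in T(k)) AND their estimated bounding
    rectangles for T(k+r) overlap (potentially cluster-wise connected)."""
    return (rectangles_overlap(current_i, current_j) and
            rectangles_overlap(estimated_i, estimated_j))
```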

  25. Replica Allocation Phase • Objective is to • identify the data items to be replicated • and the locations to replicate them, for each allocation unit, in order to maximize data accessibility • The allocation weight of data item Dj in allocation unit Cx in T(k) is defined in terms of the access frequencies fij of the mobile nodes in Cx (formula shown on the slide) • All data items are allocated in Cx according to their allocation weights in Cx, in descending order • If the allocation candidate set of Dj in Cx is not empty, Dj is allocated to the node Mi whose fij is the largest in the allocation candidate set of Dj • The allocation process completes when all mobile hosts in Cx are full

  26. Procedure ReplicaAllocation • The master of each allocation unit then executes the ReplicaAllocation procedure (a sketch follows below)
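
A sketch of the greedy allocation described on slide 25; the allocation weight is taken here as the plain sum of member access frequencies, which is an assumption standing in for the formula shown on the slide, and all parameter names are hypothetical.

```python
def replica_allocation(members, access_freq, capacity, owned):
    """Allocate replicas inside one allocation unit Cx.
    members:     node ids in Cx
    access_freq: access_freq[i][j] = f_ij, access frequency of node i to item j
    capacity:    number of replicas each node can hold
    owned:       owned[i] = set of item ids node i already hosts (originals)
    Items are processed in descending allocation weight; each item goes to the
    candidate node with the largest f_ij that has free space and no copy yet."""
    items = {j for freqs in access_freq.values() for j in freqs}
    weight = {j: sum(access_freq.get(i, {}).get(j, 0) for i in members)
              for j in items}
    free = {i: capacity for i in members}
    placement = {i: set(owned.get(i, set())) for i in members}
    for j in sorted(items, key=lambda j: weight[j], reverse=True):
        candidates = [i for i in members
                      if free[i] > 0 and j not in placement[i]]
        if not candidates:
            continue                     # item j cannot be replicated in Cx
        best = max(candidates, key=lambda i: access_freq.get(i, {}).get(j, 0))
        placement[best].add(j)
        free[best] -= 1
    return placement
```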

  27. Complexity • Complexity of VectorCluster is O(|V|log|V|) where |V| is the number of input vectors • Complexity of ReplicaAllocation is O(m/|c|+n)

  28. Integration with other algorithms • Li and Wang proposed RVGM (Reference Velocity Group Mobility) • Yin and Cao proposed scheme RN to balance the tradeoff between data accessibility and query delay • Each mobile node shares only part of its storage with neighbors • A mobile node Mi only cooperates with neighbors which tend to be directly connected to it in future • Easy to integrate these concepts into scheme DRAM

  29. Performance Evaluation • Compare DRAM with E-DCG • Use an event-driven simulator written in C++ with SIM • Evaluated the performance of DRAM based on several parameters • Assume 120 mobile nodes in a 50m x 50m flatland, each node owning 20 data items • Use data accessibility as the measure of performance • Accessibility = Number of successful requests / Number of issued requests

  30. Performance Evaluation (2) • Use the produced network traffic to evaluate the cost of the schemes • Effect of the relocation period (figure on slide) • A shorter relocation period means more executions of the relocation scheme, letting both schemes adapt more quickly to the movement of the mobile nodes

  31. Performance Evaluation (3) • Comparison based on the effect of the number of mobile nodes and the number of mobility groups • More nodes for the same number of mobility groups means more nodes can share their storage by constructing larger allocation units

  32. Performance Evaluation (4) • Effect of Number of Replicas per Node • Effect of Update Period • Effect of Precision of Location Information • Effect of Packet Loss Rate

  33. Performance Evaluation (5) • Effect of Value of Time-to-Live

  34. Conclusions • Network partitions are a frequent problem in MANETs • Node mobility is an important consideration for data replication • The DRAM algorithm allocates replicas efficiently by considering group mobility • DRAM also produces less network traffic than prior algorithms while achieving higher data accessibility

  35. Critique • The introduction to MANETs is brief, with few examples of the disruptive nature of partitioning • Experiments are performed only on simulated data • The lack of real-world applications of DRAM, and of complexity and performance analysis on real application data, is a drawback • The number of nodes in the simulation is relatively small • Could consider clustering techniques similar to those used for spatial moving objects

  36. Q/A? Thank You!
