
CADRE: A Collaborative Replica Allocation and Deallocation approach for Mobile-P2P networks

CADRE: A Collaborative Replica Allocation and Deallocation approach for Mobile-P2P networks. Anirban Mondal (University of Tokyo, JAPAN) Sanjay K. Madria (University of Missouri-Rolla, USA) Masaru Kitsuregawa (University of Tokyo, JAPAN). Contact Email address: anirban@tkl.iis.u-tokyo.ac.jp.




Presentation Transcript


  1. CADRE: A Collaborative Replica Allocation and Deallocation approach for Mobile-P2P networks Anirban Mondal (University of Tokyo, JAPAN) Sanjay K. Madria (University of Missouri-Rolla, USA) Masaru Kitsuregawa (University of Tokyo, JAPAN) Contact Email address: anirban@tkl.iis.u-tokyo.ac.jp

  2. INTRODUCTION Ever-increasing popularity and proliferation of mobile technology. Mobile user statistics for JAPAN, Jan 31, 2006 (http://www.wirelesswatch.jp/)

  3. Proliferation of mobile devices M-P2P Paradigm

  4. Proliferation of mobile devices + Popularity of the P2P paradigm e.g., Kazaa M-P2P Paradigm

  6. Proliferation of mobile devices + Popularity of the P2P paradigm e.g., Kazaa M-P2P Paradigm • M-P2P network: Mobile Hosts (MHs) interact in a P2P fashion • Sometimes, base station infrastructure does not exist • Current infrastructures are beginning to support P2P interactions among mobile devices e.g., Microsoft’s Zune

  7. Challenges in M-P2P networks Low data availability • frequent network partitioning due to mobility

  8. Challenges in M-P2P networks Dynamic data replication Low data availability • frequent network partitioning due to mobility

  9. Challenges in M-P2P networks Dynamic data replication Low data availability • frequent network partitioning due to mobility What makes M-P2P replication more challenging than traditional replication?

  10. Challenges in M-P2P networks Dynamic data replication Low data availability • frequent network partitioning due to mobility What makes M-P2P replication more challenging than traditional replication? • Limited Bandwidth • Limited Energy • Limited Available memory space • Uncertainty due to MH mobility

  11. Challenges in M-P2P networks Dynamic data replication Low data availability • frequent network partitioning due to mobility What makes M-P2P replication more challenging than traditional replication? • Limited Bandwidth • Limited Energy • Limited Available memory space • Uncertainty due to MH mobility Existing static P2P replication schemes are not adequate for M-P2P environments.

  12. Challenges in M-P2P networks Dynamic data replication Low data availability • frequent network partitioning due to mobility What makes M-P2P replication more challenging than traditional replication? • Limited Bandwidth • Limited Energy • Limited Available memory space • Uncertainty due to MH mobility Existing static P2P replication schemes are not adequate for M-P2P environments. Existing schemes for mobile P2P replication do not consider deallocation, fairness and load-sharing issues.

  13. M-P2P APPLICATION SCENARIOS Two tourist buses moving in different parts of a city

  14. M-P2P APPLICATION SCENARIOS Two tourist buses moving in different parts of a city Tourist buses generally have a tour-guide Tour-guides

  15. M-P2P APPLICATION SCENARIOS Two tourist buses moving in different parts of a city John says to Tourist guide: “I wish to see pictures/video-clips of different rooms in Janpath National Museum”

  16. M-P2P APPLICATION SCENARIOS Two tourist buses moving in different parts of a city John says to Tourist guide: “I wish to see pictures/video-clips of different rooms in Janpath National Museum” Query

  17. M-P2P APPLICATION SCENARIOS Two tourist buses moving in different parts of a city John says to Tourist guide: “I wish to see pictures/video-clips of different rooms in Janpath National Museum” Query Result Image

  18. M-P2P APPLICATION SCENARIOS Two tourist buses moving in different parts of a city. John receives the result image and deletes it after some time. Query Result Image

  19. M-P2P APPLICATION SCENARIOS Two tourist buses moving in different parts of a city Ravi says to Tourist guide: “I wish to see pictures/video-clips of different rooms in Janpath National Museum”

  20. M-P2P APPLICATION SCENARIOS Two tourist buses moving in different parts of a city. Oops! Now I need to retrieve the same image(s) again. Retrieving the same image/replica multiple times taxes the limited bandwidth and energy resources of mobile peers.

  21. M-P2P APPLICATION SCENARIOS Two tourist buses moving in different parts of a city. Oops! Now I need to retrieve the same image(s) again. If Ravi and John had collaboratively deallocated images/replicas, the need to retrieve the image multiple times would not arise.

  22. Why is collaborative replica deallocation necessary? Suppose replicas of data item d are stored at MHs A, B and C (snapshots at 10.15 AM and at 10.30 AM). Existing replication schemes deallocate replicas locally, hence they would deallocate d from MH C at 10.15 AM, then reallocate d at MH C at 10.30 AM. Multiple allocations and deallocations of the same data item at the same MH could lead to THRASHING if d is large (e.g., an image).

  23. Why is collaborative replica deallocation necessary? Suppose replicas of data item d are stored at MHs A, B and C (snapshots at 10.15 AM and at 10.30 AM). Existing replication schemes deallocate replicas locally, hence they would deallocate d from MH C at 10.15 AM, then reallocate d at MH C at 10.30 AM. Multiple allocations and deallocations of the same data item at the same MH could lead to THRASHING if d is large (e.g., an image). Replica deallocation should be done collaboratively.

  24. Main contributions of CADRE 1) It collaboratively performs both replica allocation and deallocation in tandem to facilitate optimal replication and to avoid ‘thrashing’ conditions. 2) It addresses fair replica allocation across the MHs. It also considers replication of images at different levels of granularity to optimize MH memory space.

  25. ARCHITECTURE OF CADRE • CADRE considers a hybrid super-peer architecture, • some of the MHs act as the ‘Gateway Nodes’ (GNs). • GNs have high processing capacity, high available bandwidth and high energy. • Neighbouring GNs periodically exchange their regional information concerning MH characteristics (e.g., load, energy) to facilitate replication. • In case of GN failures, neighbouring GNs could take over the responsibility of the failed GN. • GNs can also collaborate for search and replication across different regions.

  26. QUERY PROCESSING IN CADRE • When an MH enters a region R, it registers with the GN G in R. • G provides the MH with the list of data items currently available in R. • Each MH periodically sends its list of data items and replicas to its corresponding GN. • The GN periodically broadcasts the list of available items within its region to the MHs in its region. • A query-issuing MH M can thus distinguish whether its query is local or global. • CADRE supports both local and remote querying. • Local queries: broadcast mechanism • Remote queries: the GN forwards the query to its neighbouring GNs.
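The registration and local/global distinction on slide 26 can be sketched as below. This is a minimal illustration with assumed data structures (the class name `GatewayNode` and its methods are hypothetical, not from the paper):

```python
class GatewayNode:
    """Sketch of a GN's bookkeeping for its region R."""

    def __init__(self):
        self.items_by_mh = {}  # MH id -> set of item ids it holds

    def register(self, mh, items):
        """An MH entering region R registers with the GN; the GN
        replies with the list of items currently available in R."""
        self.items_by_mh[mh] = set(items)
        return self.region_items()

    def region_items(self):
        """Union of the item lists periodically sent by the MHs;
        this is what the GN periodically broadcasts in its region."""
        if not self.items_by_mh:
            return set()
        return set().union(*self.items_by_mh.values())

    def is_local(self, item):
        """From the broadcast list, a query-issuing MH can tell a
        local query (answered in-region) from a global one
        (forwarded by the GN to neighbouring GNs)."""
        return item in self.region_items()
```

A local query would then be answered via the in-region broadcast mechanism, while `is_local(...) == False` triggers forwarding to neighbouring GNs.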

  27. Key Components of CADRE • User-specified size of query result image • Fairness in replication • Prevention of thrashing conditions

  28. User-specified query result image size Users specify a maxsize constraint for the query result image • Limited memory space of mobile devices • Significant differences in available memory space among MHs • Higher value of maxsize → finer image granularity (i.e., better quality) A query-issuing MH MI queries for image img; MS stores img or its replica. • Case 1: query result maxsize < size_img • The image needs to be compressed. • MS (not MI) performs the compression • Bandwidth optimization due to smaller-sized images being transmitted • One-time compression by MS to serve multiple user requests → saves energy • Case 2: query result maxsize >= size_img • No need for image compression. • Direct the query to the original owner of img or to any MH storing a relatively larger-sized replica of img
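The two cases on slide 28 can be sketched as a single decision at MS. This is a hedged illustration: the function name and the `compress` callback are hypothetical stand-ins for whatever compression routine MS actually uses:

```python
def serve_image_query(maxsize, size_img, img_bytes, compress):
    """Decide how MS serves a query for image img (sketch).

    maxsize  : user-specified maximum result size (bytes)
    size_img : size of the image/replica stored at MS (bytes)
    compress : callable compressing img_bytes down to a target size
    """
    if maxsize < size_img:
        # Case 1: result must fit within maxsize, so MS (not MI)
        # compresses before transmitting, saving bandwidth; one
        # compression can serve multiple requests, saving energy.
        return compress(img_bytes, maxsize)
    # Case 2: the stored image already fits; send it unchanged
    # (or redirect to an MH holding a larger replica for quality).
    return img_bytes
```

In Case 1 the compressed result can be cached at MS so repeated queries with similar maxsize values reuse it.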

  29. User-specified query result image size (Cont.) • Different users can specify different maxsize values • MS needs to determine the size of the replica (of img) that it stores • We consider four ranges of image size granularity w.r.t. the original image size So • Low image size granularity: above (0.25 * So), up to (0.5 * So) • Medium image size granularity: above (0.5 * So), up to (0.75 * So) • High image size granularity: above (0.75 * So), but below So • Original image size granularity: So • MS keeps track of queries and maps each query to one of these four mutually exclusive ranges. • MS selects the range with the maximum number of queries. • The average maxsize in this range becomes the replica size of img at MS.
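The range selection on slide 29 can be sketched as follows. The function names are hypothetical; the four ranges are the ones listed above, as fractions of the original size So:

```python
from collections import defaultdict

def granularity_range(maxsize, s_o):
    """Classify a query's maxsize into one of the four mutually
    exclusive granularity ranges (fractions of original size s_o)."""
    frac = maxsize / s_o
    if frac >= 1.0:
        return "original"
    if frac > 0.75:
        return "high"
    if frac > 0.5:
        return "medium"
    return "low"  # covers the 0.25*s_o .. 0.5*s_o range

def choose_replica_size(query_maxsizes, s_o):
    """Pick the range with the most queries; the average maxsize
    in that range becomes the replica size of img at MS."""
    buckets = defaultdict(list)
    for m in query_maxsizes:
        buckets[granularity_range(m, s_o)].append(m)
    best = max(buckets.values(), key=len)
    return sum(best) / len(best)
```

For example, with So = 100 and observed maxsize values [40, 45, 60, 80], the "low" range receives the most queries, so the replica is stored at the average of 40 and 45.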

  30. Fairness in replication • We need to ensure fairness in replication by considering the origin of queries for data items. • Each MH M assigns a score S to each data item d to quantify the importance of d. • M sorts the MHs in descending order of their access frequencies for d. S = σ × Σ_{i=1}^{N} ( w_i × n_i × δ_i ), where N = the number of MHs, w_i = weight coefficient of MH i, i.e., i / (no. of MHs), n_i = access frequency of d at MH i, δ_i = spatial density of the region from which the query originated, and σ = weight factor for normalizing S w.r.t. image size: 0.25 (small), 0.5 (medium), 0.75 (big).
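The score on slide 30 can be computed as in the sketch below. This is a reconstruction of the slide's garbled formula under the stated definitions (descending sort by access frequency, w_i = i / N), so treat the exact weighting as an assumption:

```python
def item_score(access_freqs, densities, sigma):
    """Score S of data item d at MH M (sketch):
    S = sigma * sum_i( w_i * n_i * delta_i ), with MHs sorted in
    descending order of access frequency for d and w_i = i / N.

    access_freqs : n_i, access frequency of d at each MH
    densities    : delta_i, spatial density of the query's region
    sigma        : 0.25 (small), 0.5 (medium) or 0.75 (big image)
    """
    # Pair each MH's frequency with its region density, then sort
    # descending by access frequency as the slide prescribes.
    pairs = sorted(zip(access_freqs, densities), reverse=True)
    n = len(pairs)
    return sigma * sum((i / n) * f * d
                       for i, (f, d) in enumerate(pairs, start=1))
```

CADRE then prefers items with higher S when allocating replicas.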

  31. Prevention of thrashing conditions • To address prevention of thrashing, each MH keeps track of the number of deallocations of each replica at itself over a period of time. • We define a metric designated as the Flip-Flop Ratio (FFR): FFR = ( N_dealloc / T_dealloc ) × ( size_r / T_size ), where N_dealloc = no. of times that replica r has been deallocated at M, T_dealloc = total no. of deallocations of all replicas at M, size_r = size of the replica r, and T_size = sum of the sizes of all the replicas at M. We normalize FFR w.r.t. replica size to minimize the probability of thrashing of large data items. • A replica r should not be deallocated if its FFR value exceeds a threshold • threshold = the average value of FFR across all the replicas in the network.
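Slide 31's metric and deallocation test can be sketched as below. The product form of FFR is an assumption reconstructed from the slide's definitions (deallocation share of r, normalised by its share of total replica size, so large items become harder to deallocate):

```python
def ffr(n_dealloc, t_dealloc, size_r, t_size):
    """Flip-Flop Ratio of replica r at MH M (sketch):
    (share of M's deallocations due to r) x (r's share of the
    total replica size at M)."""
    return (n_dealloc / t_dealloc) * (size_r / t_size)

def may_deallocate(replica_ffrs, r):
    """r should not be deallocated if its FFR exceeds the
    threshold, i.e. the average FFR across all replicas."""
    threshold = sum(replica_ffrs.values()) / len(replica_ffrs)
    return replica_ffrs[r] <= threshold
```

A replica that keeps flipping in and out (high N_dealloc) or is large (high size_r) drifts above the average and is protected from further deallocation.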

  32. CADRE allocates replicas starting from the data item with the highest score, thus preferring data items with higher scores.

  33. CADRE tries to replicate a given data item d either at the MH which made the maximum number of accesses to d, or at one of its 1-hop neighbours.

  34. CADRE considers the load, memory space and energy of MHs.

  35. The score of the item d to be replicated at the MH M is compared with the scores of the existing replicas.
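Slides 32 to 35 can be sketched as one allocation loop. This is a hypothetical sketch: `candidates` and `can_host` stand in for the candidate ordering (maximum accessor first, then its 1-hop neighbours) and the load/memory/energy and score checks described on the slides:

```python
def allocate_replicas(items, candidates, score, can_host):
    """Allocate replicas in descending score order (sketch).

    items      : data items considered for replication
    candidates : candidates(d) -> MHs ordered best-accessor first,
                 then its 1-hop neighbours (assumed helper)
    score      : score(d) -> the item's score S
    can_host   : can_host(m, d) -> whether MH m has the load,
                 memory space and energy to host d (and whether
                 d's score beats existing replicas at m)
    Returns {item: chosen MH, or None if no candidate qualifies}.
    """
    placement = {}
    for d in sorted(items, key=score, reverse=True):
        placement[d] = next(
            (m for m in candidates(d) if can_host(m, d)), None)
    return placement
```

Iterating from the highest score downwards is what gives higher-scored items first pick of the scarce MH memory.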

  36. Performance Study • Metrics • Average Response Time (ART) • Data Availability • Traffic (hop-count) during replica allocation

  37. Effect of fair replica allocation

  38. Effect of thrashing prevention

  39. Performance of CADRE

  40. Effect of variations in the workload skew

  41. Effect of variations in the reallocation period TP

  42. Effect of variations in the number of MHs

  43. SUMMARY • CADRE collaboratively performs both replica allocation and deallocation in tandem to facilitate optimal replication and to avoid ‘thrashing’ conditions. • CADRE addresses fair replica allocation across the MHs. • CADRE also considers replication of images at different levels of granularity to optimize MH memory space. ONGOING WORK: Economy-based models in Mobile-P2P environments • Presented at Dagstuhl Seminar • To be presented at COMAD’06 and DASFAA’06
