
Beneficial Caching in Mobile Ad Hoc Networks



  1. Beneficial Caching in Mobile Ad Hoc Networks Bin Tang, Samir Das, Himanshu Gupta Computer Science Department Stony Brook University

  2. Outline • Introduction: • Why caching in ad hoc networks? • Problem formulation of the cache placement problem under a memory constraint • Beneficial caching • Centralized greedy algorithm with a provable bound • Distributed caching algorithm • Cache routing protocol • Distributed caching policy • Simulation and analysis • Comparison of the centralized and distributed algorithms • Comparison of the distributed algorithm with the latest existing work (Yin & Cao, Infocom’04) • Conclusion and future work

  3. Motivation of Caching in MANET • MANET • Multi-hop wireless network consisting of mobile nodes without any infrastructure support; each node is both a host and a router • Applications: rescue work, battlefield operations, outdoor assemblies… • Scarce bandwidth and limited battery power/memory • Wireless communication is a significant drain on the battery • Our goal • Develop a communication-efficient caching technique under per-node memory limitations

  4. Problem formulation of Cache Placement Problem under Memory Constraint

  5. General ad hoc network graph G(V,E) • p data items D1, D2, …, Dp; each Di is originally stored at a source node Si • Each node i has a memory capacity of mi pages • Node i requests Dj with access frequency aij • The distance between nodes i and j is dij • Definition: Aijk indicates that the jth memory page of node i is selected for caching Dk • Our goal: minimize the total access cost
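The objective above can be written as the sum, over all node–item pairs, of the access frequency times the distance to the nearest copy. A minimal Python sketch (all names — access_freq, dist, cache_nodes — are illustrative, not from the paper):

```python
def total_access_cost(access_freq, dist, cache_nodes):
    """Total access cost: sum over nodes i and items j of a_ij times the
    distance from i to the nearest node holding a copy of D_j.

    access_freq[i][j] = a_ij, dist[i][k] = d_ik,
    cache_nodes[j] = set of nodes holding D_j (source plus caches)."""
    cost = 0
    for i, freqs in enumerate(access_freq):
        for j, a_ij in enumerate(freqs):
            if a_ij == 0:
                continue
            # distance to the closest copy of D_j
            cost += a_ij * min(dist[i][k] for k in cache_nodes[j])
    return cost
```

Placing a cache copy closer to a heavy requester directly lowers this sum, which is what the benefit-based algorithms below exploit.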

  6. Centralized Greedy Algorithm • Benefit of a variable: let Γ denote the set of variables already selected by the greedy algorithm at some stage. The benefit of Aijk with respect to Γ is the reduction in total access cost obtained by adding Aijk to Γ.

  7. Theorem: Algorithm 1 returns a solution Γ whose benefit is at least half of the optimal benefit.
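A sketch of the centralized greedy idea: repeatedly pick the single (node, item) cache placement with the largest benefit (drop in total access cost) until no free memory page helps. This is an illustrative re-implementation, not the paper's Algorithm 1 verbatim; all identifiers are assumed:

```python
def greedy_placement(access_freq, dist, source, memory):
    """Greedy cache placement under per-node memory constraints.

    access_freq[i][j] = a_ij, dist[i][k] = d_ik, source[j] = source of D_j,
    memory[i] = number of memory pages at node i (one item per page).
    Returns cache_nodes[j] = set of nodes holding a copy of D_j."""
    n, p = len(access_freq), len(access_freq[0])
    cache_nodes = [{source[j]} for j in range(p)]  # sources always hold their item
    used = [0] * n

    def cost():
        return sum(a * min(dist[i][k] for k in cache_nodes[j])
                   for i, row in enumerate(access_freq)
                   for j, a in enumerate(row))

    while True:
        base = cost()
        best, best_gain = None, 0
        for i in range(n):
            if used[i] >= memory[i]:
                continue  # node i has no free page
            for j in range(p):
                if i in cache_nodes[j]:
                    continue
                cache_nodes[j].add(i)        # tentatively place D_j at i
                gain = base - cost()         # benefit w.r.t. current selection
                cache_nodes[j].remove(i)
                if gain > best_gain:
                    best, best_gain = (i, j), gain
        if best is None:
            return cache_nodes               # no placement improves the cost
        i, j = best
        cache_nodes[j].add(i)
        used[i] += 1
```

Greedy selection by marginal benefit is what yields the factor-2 guarantee stated in the theorem.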

  8. Distributed Algorithm • Cache routing protocol • Cache routing table entry at node i: (Dj, Hj, Nj, dj) • Nj is the closest node to i that stores a copy of Dj • Hj is the next hop on the shortest path to Nj • dj is the weighted length of the shortest path to Nj • Special cases: • If i is the source node of Dj, assume Dj will not be removed • If i has cached Dj, then Nj is the nearest node (excluding i) that has a copy of Dj
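A minimal sketch of how such a routing-table entry could be maintained when a neighbor advertises a nearby cache; the function and field names are assumptions for illustration, not the paper's protocol messages:

```python
def update_entry(table, item, neighbor, link_cost, adv_cache, adv_dist):
    """Distance-vector-style update of the cache routing table at node i.

    table maps item D_j -> (next hop H_j, nearest cache N_j, distance d_j).
    On hearing `neighbor` advertise a copy of `item` at node `adv_cache`
    reachable at cost `adv_dist`, keep whichever path is shorter."""
    new_dist = link_cost + adv_dist
    if item not in table or new_dist < table[item][2]:
        table[item] = (neighbor, adv_cache, new_dist)
```

Each node thus learns the nearest copy of every item without global knowledge, in the same spirit as the underlying distance-vector routing.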

  9. Distributed caching policy: • Node i observes its local traffic and calculates the benefit Bij of caching (or removing) a data item Dj: Bij = Σ (over requesters k known locally) akj · dj • Node i caches the mi most beneficial data items
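The local benefit computation can be sketched as follows; the data-structure shapes (observed_freq, dist_to_cache) are assumptions about what a node tracks locally:

```python
def top_beneficial_items(observed_freq, dist_to_cache, capacity):
    """Distributed caching policy at node i.

    observed_freq[j][k] = access frequency a_kj of requester k for item D_j,
    as observed in local traffic; dist_to_cache[j] = d_j, the distance to
    the nearest existing copy of D_j. Caching D_j locally saves roughly
    a_kj * d_j per observed requester, so B_ij = sum_k a_kj * d_j."""
    benefit = {j: sum(freqs.values()) * dist_to_cache[j]
               for j, freqs in observed_freq.items()}
    # cache the `capacity` (m_i) items with the largest benefit
    return sorted(benefit, key=benefit.get, reverse=True)[:capacity]
```

Note that a lightly requested item far from any copy can beat a popular item that already has a nearby cache, which is the point of weighting frequency by distance.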

  10. Performance Evaluation • Comparison of centralized and distributed algorithms • Parameters • Number of nodes in the network • Transmission radius Tr • Number of data items • Number of clients accessing each data • Memory capacity of each node • Distributed and centralized algorithms perform quite closely.

  11. [Figures: varying number of data items and memory capacity; varying number of nodes and transmission radius]

  12. [Figure: varying number of clients]

  13. Comparison of beneficial caching and cooperative caching (Yin & Cao, Infocom’04) • Experiment setup: • ns-2 implementation • Underlying routing protocol: DSDV • 2000 m x 500 m area • Random waypoint model in which 100 nodes move at a speed within (0, 20 m/s) • Tr = 250 m, bandwidth = 2 Mbps • Experiment metrics: • Average delay • Message overhead • Packet delivery ratio (PDR)

  14. Server Model: • Two servers: server0, server1 (to be consistent with Cao’s paper) • 100 data items: even-id data items in server0, odd-id data items in server1 • Data size uniformly distributed between 100 bytes and 1500 bytes • Client Model: • Each node generates a single stream of read-only queries • Query generation time follows an exponential distribution with some mean value (if the requested data does not return to the requesting node before the next query is sent out, it is counted as a packet loss) • Each node accesses 20 data items chosen uniformly out of the 100 data items

  15. Beneficial caching: • Each node maintains a cache routing table, each entry of which indicates the closest cache of each data item; it is maintained by flooding • Each node observes the data requests passing by and records how many times it sees a request for each item • When some threshold number of data requests is reached (100 in our experiment), each node calculates the benefit of caching • The cache replacement algorithm is based on the benefit • Cooperative caching (Yin & Cao, Infocom’04): • Cache data – the data packet is cached if its size is smaller than some threshold value • Cache path – otherwise, only the id of the requester is cached • The requester always caches the data packet; LRU is the cache replacement policy
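The cache-data vs. cache-path decision of the cooperative caching scheme above can be summarized in a short sketch (a simplification for illustration; parameter names are assumed):

```python
def handle_reply(data_size, size_threshold, is_requester):
    """Forwarding-node decision in the cooperative caching scheme:
    cache the data itself if it is small enough, otherwise cache only
    the path (the requester's id); the requester always caches the data."""
    if is_requester or data_size <= size_threshold:
        return "cache_data"
    return "cache_path"
```

In contrast, the beneficial-caching scheme replaces this size-based rule (and LRU eviction) with the frequency-times-distance benefit computed from observed traffic.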

  16. Experiment Analysis • In a static network: • Ours performs much better in average delay (3 times better); when traffic gets very heavy (query generation time < 5 s), ours is 4~5 times better • Better PDR performance (100% vs. 98% in heavy traffic) • Worse message overhead when traffic is light • In a mobile network (max speed 20 m/s): • Our delay performance is slightly better • Better PDR (87% vs. 75% for most of the range) • Worse message overhead (5 times worse)

  17. Conclusions • We propose and design a benefit-based caching paradigm for wireless ad hoc networks. • A centralized algorithm for static networks is given with a provable bound under the per-node memory constraint. • The distributed version performs very close to the centralized one. • Compared with the latest published work in a mobile environment, our scheme performs better over some parameter ranges.

  18. Ongoing and future work • We are currently working on mobility-based caching techniques • Reducing the message overhead of our scheme
