
Caching in Information-Centric Networking


Presentation Transcript


  1. Caching in Information-Centric Networking Sen Wang, Jun Bi, Zhaogeng Li, Xu Yang, Jianping Wu Network Research Center, Tsinghua University AsiaFI 2011 Summer School Aug 8th, 2011

  2. Introduction • ICN is becoming an important direction of Future Internet Architecture research • Examples include PSIRP, NetInf, PURSUIT, CCN, DONA and NDN • In-network caching is considered one of the most significant properties of ICN • But so far, cache policies for ICN have been explored little

  3. Basic Communication and Caching Paradigm • Requests are routed toward the original content objects by name-based routing • Any router along the routing path that has the content object in its cache can respond to the request • While forwarding a content object, an intermediate router can choose to cache it according to its own caching policy (a sketch of this paradigm follows below)
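The following is a minimal sketch of this request/response paradigm, assuming a single name-routing path per request; the class and function names (Router, fetch, policy) are illustrative and not taken from the presentation.

```python
# Minimal sketch of the ICN communication/caching paradigm described above.
# Names (Router, fetch, policy) are illustrative, not from the paper.

class Router:
    def __init__(self, name, policy):
        self.name = name
        self.cache = {}        # content name -> content object
        self.policy = policy   # decides whether to keep a passing object

def fetch(content_name, path_to_source, source_store):
    """Forward a request along the name-routing path; cache on the way back."""
    # Walk toward the original content source until some router on the path
    # already has the object in its cache; otherwise the source answers.
    hit_index, content = len(path_to_source), source_store[content_name]
    for i, router in enumerate(path_to_source):
        if content_name in router.cache:
            hit_index, content = i, router.cache[content_name]
            break
    # The content flows back toward the requester; every intermediate router
    # applies its own caching policy to decide whether to keep a copy.
    for router in path_to_source[:hit_index]:
        if router.policy(router, content_name):
            router.cache[content_name] = content
    return content
```

For example, after `fetch("video/1", [r1, r2, r3], source)`, any of r1, r2, r3 whose policy accepted the object can answer later requests for "video/1" from its cache.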

  4. Problem Statement • Consider the static case • Given an ICN network • the request rate from each router for each content • the storage capacity of each router • a set of initial contents and an assignment of resident routers for these contents • Optimization Objective • find a feasible assignment of cached copies of each content to routers in order to minimize the overall resource consumption of the network

  5. MILP Formulation • Objective function: minimize the overall average hop count • Capacity constraints bound the cached copies at each router by its storage capacity • Extra variables are introduced to linearize the objective function • [The objective function and constraints are given as equations on the slide; a hedged reconstruction is sketched below]
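The equations themselves are images on the slide and do not survive in the transcript. A plausible facility-location-style reconstruction of such a formulation, with symbols chosen here for illustration rather than taken from the paper, is the following; the z variables play the role of the "extra variables" that linearize the hop-count objective.

```latex
% Hedged reconstruction of a cache-placement MILP; symbol names are
% illustrative, not the paper's. r_{ic}: request rate of router i for
% content c; d_{in}: hop distance from router i to router n; s_n: storage
% capacity of router n; x_{nc} = 1 iff content c is cached at router n;
% z_{icn}: fraction of i's requests for c served by the copy at n.
\begin{align*}
\min_{x,\,z}\quad & \sum_{i}\sum_{c} r_{ic} \sum_{n} d_{in}\, z_{icn} \\
\text{s.t.}\quad  & \sum_{n} z_{icn} = 1 \quad \forall i,c
                    && \text{every request is served by some copy} \\
                  & z_{icn} \le x_{nc} \quad \forall i,c,n
                    && \text{only routers that cache } c \text{ can serve it} \\
                  & \sum_{c} x_{nc} \le s_n \quad \forall n
                    && \text{storage capacity of router } n \\
                  & x_{nc} \in \{0,1\},\; z_{icn} \ge 0
\end{align*}
```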

  6. Theoretical Analysis • Theorem 1: If an assignment A is among the optimal cache assignments, it must have the property that for any content cached at node n, its request rate is not smaller than that of any content cached at nodes residing in the subtree rooted at node n • Algorithm 1 greedily caches the contents with the highest request rates, from the root node of the spanning tree down to the leaf nodes (see the sketch below) • The resulting content assignment satisfies Theorem 1 • But it is not necessarily optimal
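A minimal sketch of a greedy assignment in the spirit of Algorithm 1 as described above, assuming unit-size contents and a spanning tree given as a child map; the function names and data structures are illustrative, not the paper's.

```python
def greedy_assign(tree_children, root, capacity, request_rate):
    """
    tree_children: dict node -> list of child nodes in the spanning tree
    capacity:      dict node -> number of content objects the node can hold
    request_rate:  dict content -> aggregate request rate for that content
    Returns a dict node -> list of contents cached at that node.
    """
    # Rank contents once, most requested first.
    ranked = sorted(request_rate, key=request_rate.get, reverse=True)
    assignment = {}

    def visit(node, next_rank):
        # Fill this node with the most popular contents not yet placed on
        # this root-to-leaf branch, so a node never caches a less popular
        # content than any node in its subtree (the property of Theorem 1).
        take = ranked[next_rank:next_rank + capacity.get(node, 0)]
        assignment[node] = take
        for child in tree_children.get(node, []):
            visit(child, next_rank + len(take))

    visit(root, 0)
    return assignment
```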

  7. Cache Policy • Statistically, Perfect-LFU can result in the same caching assignment as the greedy algorithm generates • From Theorem 1, we intuitively expect Perfect-LFU to be a relatively good sub-optimal caching policy in practice • Perfect-LFU is improved by also taking the distance into account • An additional HopNum field is added to the ICN protocol • The resulting policy is named LB (Least Benefit); a sketch follows below
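The transcript does not spell out the exact benefit metric; a plausible reading, used in the sketch below, is request frequency weighted by the HopNum carried with the content (the hop distance it travelled), so that Perfect-LFU is the special case where the hop weight is ignored. All names here are illustrative.

```python
# Sketch of an LB (Least Benefit) style cache, assuming "benefit" is the
# observed request frequency weighted by the HopNum field; the exact metric
# in the paper may differ, and all names here are illustrative.

class LBCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.store = {}      # content name -> content object
        self.freq = {}       # content name -> observed request count
        self.hops = {}       # content name -> HopNum seen when fetched

    def _benefit(self, name):
        # Frequency alone would give Perfect-LFU; weighting by HopNum makes
        # far-away (expensive to re-fetch) contents more valuable to keep.
        return self.freq.get(name, 0) * self.hops.get(name, 1)

    def on_request(self, name):
        self.freq[name] = self.freq.get(name, 0) + 1
        return self.store.get(name)

    def on_content(self, name, content, hop_num):
        self.hops[name] = hop_num
        if name in self.store:
            return
        if len(self.store) >= self.capacity:
            victim = min(self.store, key=self._benefit)
            if self._benefit(victim) >= self._benefit(name):
                return           # new content is not worth caching
            del self.store[victim]
        self.store[name] = content
```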

  8. Intelligent Forwarding • With the ability to cache, ICN should be endowed with more intelligent forwarding • A slightly more intelligent forwarding scheme, inspired by P2P forwarding • Forwarding with Shallow Flooding (FSF for short) • When a node receives a request, the request is flooded to all its other interfaces up to a specific flooding depth (a sketch follows below) • Figure: Forwarding with One-step Flooding
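A sketch of FSF under the assumptions that the topology is given as an adjacency map and each node knows only its own cache; a breadth-first search bounded by the flooding depth stands in for the actual flooded request/response exchange. Names are illustrative.

```python
# Sketch of Forwarding with Shallow Flooding (FSF) as described above:
# besides forwarding a request along the normal name-routing path, a node
# floods it to its neighbors with a small depth limit, hoping to find a
# nearby cached copy. Graph and cache representations are illustrative.

from collections import deque

def shallow_flood_lookup(graph, caches, start, content_name, depth):
    """Breadth-first search limited to `depth` hops around `start`;
    returns (node, distance) of the nearest cached copy, or None."""
    visited = {start}
    queue = deque([(start, 0)])
    while queue:
        node, dist = queue.popleft()
        if content_name in caches.get(node, set()):
            return node, dist
        if dist == depth:
            continue
        for neighbor in graph.get(node, []):
            if neighbor not in visited:
                visited.add(neighbor)
                queue.append((neighbor, dist + 1))
    return None

def forward_request(graph, caches, node, content_name, next_hop, depth=1):
    """FSF: try a shallow flood first; fall back to the name-routing next hop."""
    hit = shallow_flood_lookup(graph, caches, node, content_name, depth)
    if hit is not None:
        return ("serve_from", hit[0], hit[1])   # nearby cached copy found
    return ("forward_to", next_hop, None)       # continue on the routing path
```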

  9. EVALUATION • Traffic Model for Evaluation • The Zipf–Mandelbrot (also known as the Pareto–Zipf) distribution is used as the traffic model for the simulation • The two parameters α and q in the equation are the shape parameter and the shift parameter respectively • The remaining term is the normalizing constant
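The equation is an image on the slide; the standard Zipf–Mandelbrot form, with popularity rank k, shape parameter α, shift parameter q, and N distinct contents, is:

```latex
% Standard Zipf–Mandelbrot popularity distribution: k is the popularity
% rank, alpha the shape parameter, q the shift parameter, N the number of
% contents, and H_{N,q,\alpha} the normalizing constant.
p(k) = \frac{1/(k+q)^{\alpha}}{H_{N,q,\alpha}},
\qquad
H_{N,q,\alpha} = \sum_{i=1}^{N} \frac{1}{(i+q)^{\alpha}}
```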

  10. Evaluation with Simple Linear Topology • Scenario of a single content source: • One requester, namely node 0 • One content source, namely node 11 • Figure: Simple Linear Evaluation Topology

  11. Evaluation with Simple Linear Topology • We studied the effects of the following three factors on the cache-miss ratio and the average hop count: • cache size • the request-pattern parameters, namely q and α • Figures: Effects of Cache Size, Effects of q, Effects of α

  12. Evaluation with Simple Linear Topology • Scenario of multiple requesters and content sources • All the nodes generate requests at the same request rate • The content objects are randomly distributed • InCache-LB gains a 44.2% reduction in average hops compared to the case of no cache, slightly higher than InCache-LFU by 2.9% • The impact of topology scale is studied by increasing the node number from 12 to 24 • The reduction percentage of average hops rises by about 6.6% • Figures (a)–(c): Simulation results for multiple requesters

  13. Evaluation with ISP Topology • A series of simulations using a practical ISP topology was conducted to evaluate the cache policies • PoP topology of the ISP with AS No. 1221 • The effects of the parameter α and the cache size • With α and the cache size fixed to 0.7 and 55% respectively, InCache-LB gains a 40.3% reduction in average hops compared to the case of no cache • Almost no difference from InCache-LFU • Figures: Effects of Cache Size, Effects of α

  14. Evaluation with ISP Topology • Study the effect of a different ISP topology • The PoP topology of another ISP, with AS No. 1239 • 78 nodes and 84 edges, larger than AS 1221 with 44 nodes and 44 edges • Different topologies result in almost no difference in average-hop reductions, which are 40.3% and 41.7% for AS 1221 and AS 1239 respectively with the InCache-LB cache policy • Figure: Effects of topology

  15. Evaluation with ISP Topology • Study the effect of heterogeneous request rates among nodes • In the former simulations, each node used the same mean request interval • In this simulation, the request rates of the nodes range from 10 per second down to 1 per second • With heterogeneous request rates, a larger reduction in average hops is achieved • Rising from 49.9% to 57.3% • Figure: Effects of request rates

  16. Evaluation for Intelligent Forwarding • Evaluate the proposed forwarding scheme, namely FSF, with different cache policies • Two series of simulations were conducted • A 6×6 mesh topology • The PoP topology of AS 1221 • FSF can further decrease the average hops by 6.3% with 2-hop flooding for InCache-LB • InCache-LB becomes clearly better than InCache-LFU as the flooding depth increases • Figures: 6×6 mesh topology, PoP topology of AS 1221

  17. Conclusion • The in-network caching problem of ICN can be formulated as a Mixed-Integer Linear Programming problem • By studying the properties of the optimal caching assignment, we found that LFU-like cache policies are expected to perform well, which is confirmed by our simulation results • The proposed cache policy LB (Least Benefit) performs better than LFU when the proposed forwarding scheme FSF is also involved, reducing the average hops by a further 6.3% • With in-network caching, the average hops of the ICN network can be reduced significantly, by nearly 50%, and with simple improvements such as LB and FSF the average hops can be reduced further

  18. Q & A Thank you!
