
Coded Caching with Non-Uniform Demands

Mohammad Ali Maddah-Ali and Urs Niesen, Bell Labs, Alcatel-Lucent.





Presentation Transcript


  1. Coded Caching with Non-Uniform Demands. Mohammad Ali Maddah-Ali and Urs Niesen, Bell Labs, Alcatel-Lucent

  2. Least Frequently Used N=2 files, K=1 user, cache size M=1. File popularities: PA=2/3, PB=1/3. • Populate the cache during low-traffic time • Cache the most popular file(s): here file A • E[R] = PB = 1/3: the average rate equals the miss rate • LFU is optimum when there is a single cache in the system, because LFU minimizes the miss rate

  3. Least Frequently Used N=2 files, K=2 users, cache size M=1. PA=2/3, PB=1/3. Each user caches file A, so the server must transmit B whenever at least one user requests it: E[R] = 1 − (2/3)² = 5/9. Is this optimum?
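As a sanity check on the LFU baseline, the two expected rates quoted above can be computed directly (a minimal sketch; exact fractions via Python's `fractions`):

```python
from fractions import Fraction

# Popularities from the slides: P_A = 2/3, P_B = 1/3.
p_a, p_b = Fraction(2, 3), Fraction(1, 3)

# K = 1 user, M = 1: LFU caches the more popular file A, so the server
# transmits only on a miss, i.e. when the user requests B.
rate_k1 = p_b                      # = 1/3

# K = 2 users, M = 1, both caching A: the server must send B whenever
# at least one of the two independent users requests it.
rate_k2 = 1 - p_a ** 2             # = 5/9
print(rate_k1, rate_k2)            # 1/3 5/9
```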

  4. Proposed Coded Scheme N=2 files, K=2 users, cache size M=1. Split each file into halves: A = (A1, A2), B = (B1, B2). User 1 caches (A1, B1); user 2 caches (A2, B2). When user 1 requests A and user 2 requests B, the single coded transmission A2 ⊕ B1 serves both: a multicasting opportunity for users with different demands

  5. Proposed Coded Scheme N=2 Files, K=2 Users, Cache Size M=1 Simultaneous Multicasting Opportunity

  6. Proposed Coded Scheme N=2 files, K=2 users, cache size M=1. The coded transmission A2 ⊕ B1 has half-file length, so E[R] = 1/2 < 5/9. LFU is not optimum for cache networks! Minimizing the miss rate is not the target; providing multicasting opportunities is more important
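The N=2, K=2 scheme of slides 4–6 can be sketched end to end. The byte values and the byte-level XOR are illustrative assumptions; the split into halves and the placement follow the slide labels (user 1 caches A1 and B1, user 2 caches A2 and B2):

```python
# Sketch of the coded scheme for N=2 files, K=2 users, M=1.
A = b"\x0a\x0b\x0c\x0d"          # file A, split into halves A1, A2
B = b"\x1a\x1b\x1c\x1d"          # file B, split into halves B1, B2
A1, A2 = A[:2], A[2:]
B1, B2 = B[:2], B[2:]

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

# Placement: user 1 caches (A1, B1); user 2 caches (A2, B2).
cache1, cache2 = {"A": A1, "B": B1}, {"A": A2, "B": B2}

# Delivery for demands (user 1 wants A, user 2 wants B): one coded
# transmission of half-file length, so the rate is 1/2.
tx = xor(A2, B1)

# Each user XORs out the half it already holds to recover its file.
user1_file = cache1["A"] + xor(tx, cache1["B"])   # A1 + (tx ^ B1) = A
user2_file = xor(tx, cache2["A"]) + cache2["B"]   # (tx ^ A2) + B2 = B
assert user1_file == A and user2_file == B
```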

  7. Problem Setting N files at a server, connected over a shared link to K users, each with a cache of size M. Placement: each cache stores an arbitrary function of the files (linear, nonlinear, …). Delivery: the requests are revealed to the server, which sends a function of the files. Question: what is the minimum average rate R(M) needed in the delivery phase, and how should we choose (1) the caching functions and (2) the delivery functions?

  8. Decentralized Proposed Scheme N=3 files, K=3 users, cache size M=2. Prefetching: each user caches 2/3 of the bits of each file, chosen randomly, uniformly, and independently. This splits the bits of each file into segments indexed by the subset of users that cache them (∅, {1}, {2}, {3}, {1,2}, {1,3}, {2,3}, {1,2,3}). Delivery: greedy linear encoding; for each subset of users, the server XORs the segments needed by each user in the subset and cached by all the other users in it
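A small simulation of the decentralized scheme is sketched below, under assumptions: the subset-partition bookkeeping follows the slide's description, while the file length F and the random seed are illustrative, and demands are taken distinct as on the slide:

```python
import random
from itertools import combinations

random.seed(0)
N, K, M, F = 3, 3, 2, 3000   # 3 files, 3 users, cache size 2, F bits/file
p = M / N                    # each bit is cached with probability 2/3

# Decentralized prefetching: each user caches each bit of each file
# randomly, uniformly, and independently.
cached = [[[random.random() < p for _ in range(F)] for _ in range(N)]
          for _ in range(K)]

demand = [0, 1, 2]           # user k requests file k (all distinct)

# Greedy linear delivery: for every subset S of users, one coded
# transmission (an XOR of zero-padded segments) serves, for each k in S,
# the bits of its file cached by exactly the users S \ {k}.
transmissions = 0
for size in range(K, 0, -1):
    for S in combinations(range(K), size):
        seg_lengths = []
        for k in S:
            want = set(S) - {k}
            bits = sum(1 for i in range(F)
                       if {j for j in range(K)
                           if cached[j][demand[k]][i]} == want)
            seg_lengths.append(bits)
        transmissions += max(seg_lengths)

rate = transmissions / F     # concentrates near 0.48 for these parameters
print(f"simulated delivery rate ~ {rate:.3f}")
```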

  9. Observations • Gain proportional to the aggregate cache memory KM (even though the caches are isolated)! • Coding can improve the rate significantly (by a factor on the order of the number of users, for uniform demands) • This scheme approximately achieves the optimum average rate for uniform popularities
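These observations can be made concrete with the decentralized rate expression R(M) = (N/M − 1)(1 − (1 − M/N)^K) from the authors' decentralized-caching paper (assumed here; see the Read More slide), compared against the uncoded rate K(1 − M/N):

```python
# Decentralized coded caching rate (assumed formula, from the authors'
# decentralized-caching paper): R_coded = (N/M - 1) * (1 - (1 - M/N)**K).
# Uncoded rate for uniform demands: R_uncoded = K * (1 - M/N).
N, K, M = 100, 100, 25         # illustrative parameters
p = M / N
uncoded = K * (1 - p)          # each miss costs a full unicast
coded = (N / M - 1) * (1 - (1 - p) ** K)
print(uncoded)                 # 75.0
print(round(coded, 3))         # ~3.0: a gain on the order of KM/N
```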

  10. Non-Uniform Demands • Conflicting intuitions: • More popular file → more caching memory • Symmetry of the prefetching → tractable analysis

  11. Idea of Grouping Group the files with approximately similar popularities, dedicating memory Mi to group i, with M1 + M2 + M3 + M4 = M. Prefetching: apply decentralized prefetching within each group i, with a memory budget of Mi. Delivery: apply coded delivery to the users demanding files from the same group

  12. Observations • Within each groupsame cache allocation • Files in different group  different cache allocation • Symmetry within each group  Analytically tractable • Losing coding between groups

  13. Netflix Data • K=300 users • A simplified grouping rule: within each group, the first and last files' popularities are within a factor of 2
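The simplified grouping rule might be sketched as follows; the popularity values are illustrative placeholders, not the actual Netflix data:

```python
def group_by_popularity(pops, factor=2):
    """Group a list of popularities (scanned in decreasing order) so that
    within each group the first and last values differ by at most `factor`."""
    groups, current = [], []
    for p in sorted(pops, reverse=True):
        # Start a new group once the current file's popularity drops below
        # 1/factor of the popularity of the group's first file.
        if current and current[0] > factor * p:
            groups.append(current)
            current = []
        current.append(p)
    if current:
        groups.append(current)
    return groups

# Illustrative popularity profile (placeholder values).
pops = [0.30, 0.20, 0.17, 0.12, 0.08, 0.05, 0.04, 0.02, 0.02]
print(group_by_popularity(pops))
```

Each group then receives its own memory budget Mi and runs decentralized prefetching and coded delivery internally, as on slide 11.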

  14. Can We Do Better? Theorem: the proposed coded scheme is approximately optimum. • The converse is challenging; it is based on: • Genie-aided uniformization and symmetrization • Cut-set bounds • Reducing the size of the problem to users with distinct demands

  15. Conclusion • For cache networks, LFU is not optimum • Miss rate is not the most relevant metric for cache networks • Coded caching achieves approximately optimum results • The gain of coding can be significant

  16. Coded Caching Demo

  17. Read More • Maddah-Ali and Niesen, “Fundamental Limits of Caching”, Sept. 2012 (accepted to IEEE Trans. on Information Theory). • Maddah-Ali and Niesen, “Decentralized Coded Caching Attains Order-Optimal Memory-Rate Tradeoffs”, Jan. 2013 (accepted to IEEE/ACM Trans. on Networking). • Niesen and Maddah-Ali, “Coded Caching with Non-Uniform Demands”, Jun. 2013 (submitted to IEEE Trans. on Information Theory). • Pedarsani, Maddah-Ali, and Niesen, “Online Coded Caching”, Nov. 2013 (submitted to IEEE/ACM Trans. on Networking). • Karamchandani, Niesen, Maddah-Ali, and Diggavi, “Hierarchical Coded Caching”, Jan. 2014.
