
SAIU: An Efficient Cache Replacement Policy for Wireless On-demand Broadcasts

SAIU: An Efficient Cache Replacement Policy for Wireless On-demand Broadcasts. Jianliang Xu, Qinglong Hu, Dik Lun Lee (Department of Computer Science, Hong Kong University of Science and Technology); Wang-Chien Lee (GTE Laboratories)




Presentation Transcript


  1. SAIU: An Efficient Cache Replacement Policy for Wireless On-demand Broadcasts Jianliang Xu, Qinglong Hu, Dik Lun Lee (Department of Computer Science, Hong Kong University of Science and Technology); Wang-Chien Lee (GTE Laboratories) Proceedings of the Ninth International Conference on Information and Knowledge Management (CIKM 2000).

  2. Outline • Introduction • Background • Cache replacement algorithm • Implementation issues • Simulation model • Performance evaluation • Conclusion • My comment

  3. Introduction • Wireless data dissemination • Broadcast-based information dissemination • On-demand services • Wireless on-demand broadcast systems • Some research issues in wireless on-demand systems: • On-demand broadcast scheduling • Wireless data caching

  4. Wireless caching policy • Cache replacement is an important issue to be tackled for cache management. • Previous studies are based on unrealistic assumptions, such as fixed data sizes, no updates, and no disconnections.

  5. Background • Performance metrics • Traditional cache management • Cache hit ratio • Access latency • On-demand broadcast system • Access latency • Stretch: the ratio of the access latency of a request to its service time (size/bandwidth)
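The stretch metric above can be made concrete with a small sketch (the numeric values are illustrative, not from the paper):

```python
def stretch(access_latency, size, bandwidth):
    # Stretch = access latency / service time, where
    # service time = item size / broadcast bandwidth.
    service_time = size / bandwidth
    return access_latency / service_time

# Example: a 1000-byte item over a 100 B/s channel has a 10 s
# service time; a total wait of 30 s gives a stretch of 3.0.
print(stretch(30.0, 1000, 100))  # 3.0
```

Stretch normalizes latency by item size, so a long wait for a large item is penalized less than the same wait for a small one.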

  6. Scheduling algorithms • Longest Wait First (LWF) • Longest Total Stretch First (LTSF) • RxW • In this paper, LTSF is the default scheduling algorithm.

  7. Invalidation propagation • To maintain cache consistency, periodically broadcasting invalidation reports (IRs) is an efficient method. • Adaptive cache invalidation algorithm (AAW_AT)

  8. Cache replacement algorithm • In traditional cache management methods, access probability is the primary factor used to determine a cache replacement policy. • In an on-demand broadcast environment, three additional factors, namely data retrieval delay, data update frequency, and data item size, need to be considered in the design of cache replacement policies.

  9. Design Guide • Observation (which objects should be replaced?) • Lower access probability • Lower miss penalty (shorter data retrieval delay) • Higher update frequency • Larger data size

  10. The SAIU replacement policy • Stretch × Access-rate × Inverse Update-frequency (SAIU) • gain(i) = (Li × Ai) / (Si × Ui) • The algorithm repeatedly removes the item with the minimum gain(i) value until the free space is sufficient to accommodate the incoming item.
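A minimal sketch of this eviction loop, assuming each cached item carries attributes L (retrieval delay), A (access rate), S (size), and U (update frequency) in a dict (the names and layout are hypothetical, not from the paper):

```python
import heapq

def gain(item):
    # gain(i) = (L_i * A_i) / (S_i * U_i), per the SAIU formula above.
    return (item["L"] * item["A"]) / (item["S"] * item["U"])

def evict_until_fits(cache, free_space, incoming_size):
    # Repeatedly evict the minimum-gain cached item until there is
    # enough free space for the incoming item. `cache` maps an
    # item id to its attribute dict (hypothetical layout).
    heap = [(gain(it), iid) for iid, it in cache.items()]
    heapq.heapify(heap)
    victims = []
    while free_space < incoming_size and heap:
        _, iid = heapq.heappop(heap)
        free_space += cache.pop(iid)["S"]
        victims.append(iid)
    return victims, free_space
```

For example, given two cached items with gains 0.5 and 2.5, the 0.5-gain item is evicted first, matching the "lower gain goes first" rule.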

  11. Implementation issues • Heap management • Use a min-heap data structure to implement SAIU. • The time complexity is O(log N). • Estimation of running parameters • An exponential aging method is used to estimate Ui, Li, and Ai. • Initially, Ui and Li are set to 0. • Ui = αu/(tc − tilu) + (1 − αu)·Ui • Li = αs/(tc − tiqt) + (1 − αs)·Li • Ai = αa/(tc − tila) + (1 − αa)·Ai
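The three aging formulas share one shape, so they can be sketched as a single update function (a sketch of the slide's equations; `t_last` stands for the timestamp of the previous update/query/access event, i.e. tilu, tiqt, or tila):

```python
def aged_estimate(alpha, t_now, t_last, old):
    # Exponential aging: blend the instantaneous rate 1/(t_now - t_last),
    # weighted by alpha, with the previous estimate, weighted by 1 - alpha.
    # The slide applies this with alpha_u, alpha_s, and alpha_a to obtain
    # U_i, L_i, and A_i respectively (U_i and L_i start at 0).
    return alpha / (t_now - t_last) + (1 - alpha) * old

# With alpha = 0.25 and two seconds between events:
u = aged_estimate(0.25, 10.0, 8.0, 0.5)  # 0.25/2 + 0.75*0.5 = 0.5
```

The aging weight α controls how quickly the estimate tracks recent behavior versus long-term history.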

  12. Implementation issues (cont.) • Maintenance of cache item attributes • Each cached item has six attributes to maintain, namely si, Ui, tilu, Li, Ai, and tila. • The attributes of cached data items are stored in the client cache. • To avoid the starvation problem, a GAINmin value, the minimum gain(i) value among cached items, is maintained. When an item is evicted, its gain value is checked against GAINmin: if it is larger, the item's attributes are kept; if not, they are dropped.

  13. Simulation model • A single server and a number of clients. • Two types of item size distributions: • Increasing Distribution (INCRT): Sizei = Smin + ⌊(i−1)·(Smax−Smin+1)/DbSize⌋, i = 1, …, DbSize • Decreasing Distribution (DECRT): Sizei = Smax − ⌊(i−1)·(Smax−Smin+1)/DbSize⌋, i = 1, …, DbSize
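The two distributions can be sketched directly from the formulas above (integer division is assumed here, since item sizes are whole units):

```python
def incrt_size(i, s_min, s_max, db_size):
    # INCRT: Size_i grows with the item index i = 1..db_size.
    return s_min + ((i - 1) * (s_max - s_min + 1)) // db_size

def decrt_size(i, s_min, s_max, db_size):
    # DECRT: Size_i shrinks with the item index i = 1..db_size.
    return s_max - ((i - 1) * (s_max - s_min + 1)) // db_size

# With s_min=1, s_max=10, db_size=10: item 1 is smallest (1) under
# INCRT and largest (10) under DECRT, and vice versa for item 10.
```

Since SAIU penalizes large items, INCRT puts the large items at high indices (typically the cold end of a skewed access pattern) while DECRT puts them at the hot end, which is what the two experiment variants contrast.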

  14. Default system parameter settings

  15. Client model

  16. Server model

  17. Performance evaluation • αa = αs = αu = 0.25 • AAW_AT is used to propagate invalidation information and LTSF for on-demand broadcast scheduling. • SAIU(EST) and SAIU(IDL)

  18. Experiment 1: Impact of the cache size( INCRT)

  19. Experiment 1: Impact of the cache size( DECRT)

  20. Experiment 1: Impact of the cache size( INCRT)

  21. Experiment 1: Impact of the cache size( INCRT)

  22. Experiment 2: Impact of the broadcast bandwidth( INCRT)

  23. Experiment 2: Impact of the broadcast bandwidth( DECRT)

  24. Experiment 3: Influence of the item size( INCRT)

  25. Experiment 3: Influence of the item size( DECRT)

  26. Experiment 4: Influence of the update frequency( INCRT)

  27. Experiment 4: Influence of the update frequency( DECRT)

  28. Experiment 5: Algorithm complexity

  29. Conclusion • SAIU performs substantially better than the well-known LRU and LRU-MIN policies, especially for clients that favor access to comparatively smaller data items.

  30. Future work • They are incorporating the factor of cache validation delay. • They plan to conduct simulations for clients with heterogeneous access patterns. • Combining prefetching with the current scheme.

  31. My comment • Unfortunately, the paper did not use real traces in its simulation, so the results cannot be compared with other experiments. • It points out three more factors we should consider in a wireless environment, namely data retrieval delay, data update frequency, and data item size. • Different invalidation schemes. • The starvation problem (attributes are saved only when an item's gain is greater than GAINmin).
