
LRFU (Least Recently/Frequently Used) Block Replacement Policy






Presentation Transcript


  1. LRFU (Least Recently/Frequently Used) Block Replacement Policy Sang Lyul Min Dept. of Computer Engineering Seoul National University

  2. Why file cache? Processor - Disk Speed Gap
  • 1950's: Processor - IBM 701, 17,000 ins/sec; Disk - IBM 305 RAMAC, density 0.002 Mbits/sq. in, average seek time 500 ms
  • 1990's: Processor - IBM PowerPC 603e, 350,000,000 ins/sec (x 20,000); Disk - IBM Deskstar 5, density 1,319 Mbits/sq. in (x 600,000), average seek time 10 ms (x 50)

  3. File Cache [Figure: storage hierarchy - processor, main memory holding the file cache (buffer cache), disk controller with its disk cache, and disks]

  4. LRU Replacement / LFU Replacement [Figure: LRU orders blocks from the MRU block to the LRU block, with a newly referenced block moving to the MRU end; LFU orders blocks from the MFU block to the LFU block using a heap; annotated complexities: O(1), O(log n), O(n)]
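For readers who want something concrete next to the figure, the following is a minimal sketch (in Python, not from the slides and not the authors' code) of the two baseline policies it contrasts: LRU managed with an O(1) recency-ordered dictionary and LFU managed with a lazily maintained min-heap keyed on reference counts. The block identifiers and the capacity are arbitrary.

    from collections import OrderedDict
    import heapq

    class LRUCache:
        """LRU: evict the block whose last reference is oldest (O(1) per access)."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.blocks = OrderedDict()              # ordered from LRU block ... MRU block

        def access(self, block):
            if block in self.blocks:
                self.blocks.move_to_end(block)       # the referenced block becomes the MRU block
            else:
                if len(self.blocks) >= self.capacity:
                    self.blocks.popitem(last=False)  # evict the LRU block
                self.blocks[block] = True

    class LFUCache:
        """LFU: evict the block with the smallest reference count (heap, O(log n))."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.count = {}                          # block -> reference count
            self.heap = []                           # (count, block) entries, possibly stale

        def access(self, block):
            if block not in self.count and len(self.count) >= self.capacity:
                while True:                          # skip stale heap entries (lazy deletion)
                    cnt, victim = heapq.heappop(self.heap)
                    if self.count.get(victim) == cnt:
                        del self.count[victim]       # true LFU block found; evict it
                        break
            self.count[block] = self.count.get(block, 0) + 1
            heapq.heappush(self.heap, (self.count[block], block))

    cache = LRUCache(3)
    for blk in ["a", "b", "c", "a", "d"]:
        cache.access(blk)                            # "b" is the LRU block and gets evicted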

  5. LRU - Advantage: high adaptability; Disadvantage: short-sighted. LFU - Advantage: long-sighted; Disadvantage: cache pollution

  6. Motivation • Cache size = 20 blocks

  7. Cache size = 60 blocks

  8. Cache size = 100 blocks

  9. Cache size = 200 blocks

  10. Cache size = 300 blocks

  11. Cache size = 500 blocks

  12. Observation • Both recency and frequency affect the likelihood of future references • The relative impact of each is largely determined by cache size

  13. Goal A replacement algorithm that allows a flexible trade-off between recency and frequency

  14. Results LRFU (Least Recently/Frequently Used) Replacement Algorithm that (1) subsumes both the LRU and LFU algorithms (2) subsumes their implementations (3) yields better performance than them

  15. CRF (Combined Recency and Frequency) Value [Figure: block b referenced at times t1, t2, t3 before the current time tc]
  Ctc(b) = F(δ1) + F(δ2) + F(δ3), where δ1 = tc - t1, δ2 = tc - t2, δ3 = tc - t3

  16. CRF (Combined Recency and Frequency) Value • Estimate of how likely a block will be referenced in the future • Every reference to a block contributes to the CRF value of the block • A reference’s contribution is determined by weighing function F(x)
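As a rough illustration of the definition above (a sketch, not the authors' code), a naive CRF computation just sums a contribution F(tc - ti) over the block's entire reference history. The weighing function and λ value below are taken from the later slides and are only an example.

    def crf_naive(reference_times, t_now, F):
        """CRF value: one contribution F(t_now - t_i) per past reference t_i."""
        return sum(F(t_now - t_i) for t_i in reference_times)

    lam = 0.1                                   # arbitrary example value of lambda
    F = lambda x: 0.5 ** (lam * x)              # weighing function introduced on later slides
    print(crf_naive([1, 4, 9], t_now=10, F=F))  # C_10(b) = F(9) + F(6) + F(1)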

  17. Hints and Constraints on F(x) • should be monotonically decreasing • should subsume LRU and LFU • should allow efficient implementation

  18. Conditions for LRU and LFU
  • LRU Condition: if F(x) satisfies F(i) ≥ F(i+1) + F(i+2) + F(i+3) + ··· for all i, then the LRFU algorithm becomes the LRU algorithm [Figure: at the current time, block a was referenced once at distance i, while block b was referenced many times at distances i+1, i+2, i+3, ...; LRU requires that a still outrank b]
  • LFU Condition: if F(x) = c (a constant), then the LRFU algorithm becomes the LFU algorithm

  19. Weighing function F(x): F(x) = (1/2)^(λx). Meaning: a reference's contribution to the target block's CRF value is halved after every 1/λ time units

  20. Spectrum of F(x) [Figure: F(x) plotted against x = current time - reference time, ranging from F(x) = 1 (LFU extreme) down to F(x) = (1/2)^x (LRU extreme)]
  Properties of F(x) = (1/2)^(λx) - Property 1
  • When λ = 0 (i.e., F(x) = 1), the LRFU algorithm becomes LFU
  • When λ = 1 (i.e., F(x) = (1/2)^x), it becomes LRU
  • When 0 < λ < 1, it falls between LFU and LRU
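A quick numeric sanity check of Property 1 (a sketch with made-up reference times, not from the slides): at λ = 0 the CRF value reduces to the reference count, so the frequently referenced block wins (LFU behaviour); at λ = 1 a single recent reference outweighs many old ones (LRU behaviour).

    def crf(ref_times, t_now, lam):
        return sum(0.5 ** (lam * (t_now - t)) for t in ref_times)

    recent_once = [99]                  # block a: referenced once, very recently
    old_many = [1, 2, 3, 4, 5]          # block b: referenced five times, long ago

    # lambda = 0 (LFU extreme): 1.0 vs 5.0, block b is preferred for retention
    print(crf(recent_once, 100, 0.0), crf(old_many, 100, 0.0))
    # lambda = 1 (LRU extreme): 0.5 vs ~0.0, block a is preferred for retention
    print(crf(recent_once, 100, 1.0), crf(old_many, 100, 1.0))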

  21. Results LRFU (Least Recently/Frequently Used) Replacement Algorithm that (1) subsumes both the LRU and LFU algorithms (2) subsumes their implementations (3) yields better performance than them

  22. Difficulties of Naive Implementation • Enormous space overheads • Information about the time of every reference to each block • Enormous time overheads • Computation of the CRF value of every block at each time

  23. Update of CRF value over time [Figure: block b referenced at distances δ1, δ2, δ3 before time t1; no references between t1 and t2; δ = t2 - t1]
  Ct2(b) = F(δ1 + δ) + F(δ2 + δ) + F(δ3 + δ)
         = (1/2)^(λ(δ1+δ)) + (1/2)^(λ(δ2+δ)) + (1/2)^(λ(δ3+δ))
         = ((1/2)^(λδ1) + (1/2)^(λδ2) + (1/2)^(λδ3)) × (1/2)^(λδ)
         = Ct1(b) × F(δ)

  24. Properties of F(x) = (1/2)^(λx) - Property 2
  • With F(x) = (1/2)^(λx), Ctk(b) can be computed from Ctk-1(b) as follows: Ctk(b) = Ctk-1(b) × F(δ) + F(0), where δ = tk - tk-1
  • Implications: only two variables are required for each block to maintain the CRF value - one for the time of the last reference, the other for the CRF value at that time
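A sketch of what Property 2 buys in practice (illustrative code, not the authors' implementation): each block carries only the time of its last reference and the CRF value at that time, and every new reference folds the old value in with one multiply and one add. The λ value and reference times are arbitrary.

    def referenced(block, t_k, lam):
        """Apply C_tk(b) = C_tk-1(b) * F(delta) + F(0), with F(x) = (1/2)**(lam*x) and F(0) = 1."""
        delta = t_k - block["last_time"]
        block["crf"] = block["crf"] * 0.5 ** (lam * delta) + 1.0
        block["last_time"] = t_k

    b = {"last_time": 0, "crf": 1.0}    # state right after the block's first reference at t = 0
    for t in (3, 7, 12):
        referenced(b, t, lam=0.5)
    print(b["crf"])                     # matches the naive sum F(12) + F(9) + F(5) + F(0)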

  25. Difficulties of Naive Implementation • Enormous space overheads • Information about the time of every reference to each block • Enormous time overheads • Computation of the CRF value of every block at each time

  26. Properties of F(x) = (1/2)^(λx) - Property 3
  • If Ct(a) > Ct(b) and neither a nor b is referenced after t, then Ct'(a) > Ct'(b) for all t' > t
  • Why? Ct'(a) = Ct(a) × F(δ) > Ct(b) × F(δ) = Ct'(b) (since F(δ) > 0)
  • Implications: reordering of blocks is needed only upon a block reference; a heap data structure can be used to maintain the ordering of blocks with O(log n) time complexity
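The implication can be sketched as follows (illustrative only, with a linear scan standing in for the heap the slide calls for): since relative order is preserved while no block is referenced, the victim can be found by aging each stored CRF value forward to the same instant and taking the minimum. The helper names are made up.

    def crf_now(block, t_now, lam):
        """Age the stored CRF value forward to t_now using Property 2's F(delta) factor."""
        return block["crf"] * 0.5 ** (lam * (t_now - block["last_time"]))

    def pick_victim(blocks, t_now, lam):
        # The real implementation keeps blocks in a heap for O(log n) updates;
        # a linear min() is used here only to keep the sketch short.
        return min(blocks, key=lambda blk: crf_now(blk, t_now, lam))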

  27. Optimized Implementation [Figure: blocks that can compete with a currently referenced block]

  28. Optimized Implementation [Figure: three cases, each over a heap plus a linked list. Reference to a new block: 1. a block is replaced, 2. a block is demoted to the linked list, 3. the new block is inserted, 4. the heap is restored. Reference to a block in the heap: 1. the heap is restored. Reference to a block in the linked list: 1. a block is demoted, 2. the referenced block is promoted to the heap, 3. the heap is restored]

  29. Question What is the maximum number of blocks that can potentially compete with a currently referenced block?

  30. [Figure: block a is referenced at the current time; block b's most recent reference is at distance dthreshold, with older references at dthreshold+1, dthreshold+2, ...] The threshold is chosen so that F(dthreshold) + F(dthreshold+1) + F(dthreshold+2) + ··· < F(0)

  31. Properties of F(x) = (1/2)^(λx) - Property 4
  dthreshold = ⌈ log(1 - (1/2)^λ) / log((1/2)^λ) ⌉
  • When λ → 0, dthreshold → ∞
  • When λ = 1, dthreshold = 1
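A small sketch of the threshold computation suggested by Property 4 (assuming the geometric-tail derivation above; not the authors' code): with p = (1/2)^λ, the tail p^d / (1 - p) stays below F(0) = 1 once d reaches roughly log(1 - p) / log(p).

    import math

    def d_threshold(lam):
        """Smallest reference distance beyond which a block cannot outrank a just-referenced one."""
        if lam == 0.0:
            return math.inf             # LFU extreme: every cached block keeps competing
        p = 0.5 ** lam
        return math.ceil(math.log(1.0 - p) / math.log(p))

    print(d_threshold(1.0))             # 1  -> LRU extreme: only the referenced block competes
    print(d_threshold(0.01))            # large -> close to the LFU extreme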

  32. Optimized implementation (Cont'd) [Figure: at the LRU extreme the heap degenerates to a single element and almost all blocks sit in the linked list, giving O(1) behaviour; at the LFU extreme the linked list is empty (null) and all blocks sit in the heap, giving O(log n) behaviour]

  33. Results LRFU (Least Recently/Frequently Used) Replacement Algorithm that (1) subsumes both the LRU and LFU algorithms (2) subsumes their implementations (3) yields better performance than them

  34. Correlated References [Figure: a reference timeline with several bursts of closely spaced references, each burst marked as correlated references]

  35. LRFU with correlated references
  • Masking function Gc(x)
  • C'tk(b), the CRF value when correlated references are considered, can be derived from C'tk-1(b):
  C'tk(b) = F(tk - tk) + Σ F(tk - ti) × Gc(ti+1 - ti)   (sum over the earlier references ti)
          = F(tk - tk-1) × [F(0) × Gc(tk - tk-1) + C'tk-1(b) - F(0)] + F(0)
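A sketch of the masked update (illustrative only): the slide does not define Gc(x) here, so the code below assumes a step form in which references closer together than a correlated period contribute nothing extra, i.e. Gc(x) = 0 for x ≤ c and 1 otherwise; with that assumption the recursive formula above becomes a small per-reference update.

    def gc(x, correlated_period):
        """Assumed masking function: correlated references (x <= period) are masked out."""
        return 0.0 if x <= correlated_period else 1.0

    def referenced_correlated(block, t_k, lam, correlated_period):
        """C'_tk(b) = F(delta) * [F(0)*Gc(delta) + C'_tk-1(b) - F(0)] + F(0), with F(0) = 1."""
        delta = t_k - block["last_time"]
        f_delta = 0.5 ** (lam * delta)
        block["crf"] = f_delta * (gc(delta, correlated_period) + block["crf"] - 1.0) + 1.0
        block["last_time"] = t_k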

  36. Trace-driven simulation • Sprite client trace • Collection of block references from a Sprite client • Contains 203,808 references to 4,822 unique blocks • DB2 trace • Collection of block references from a DB2 installation • Contains 500,000 references to 75,514 unique blocks

  37. Effects of λ on the performance [Figure: hit rate vs. λ for (a) Sprite client and (b) DB2]

  38. Combined effects of λ and correlated period [Figure: hit rate as a function of λ and the correlated period for (a) Sprite client and (b) DB2]

  39. Previous works • FBR (Frequency-Based Replacement) algorithm • Introduces the correlated reference concept • LRU-K algorithm • Replaces blocks based on the time of the K'th-to-last non-correlated reference • Discriminates well between frequently and infrequently used blocks • Problems • Ignores the most recent K-1 references • Linear space complexity to keep the last K reference times • 2Q and sLRU algorithms • Use two queues or two segments • Move only the hot blocks to the main part of the disk cache • Work very well for "used-only-once" blocks

  40. Comparison of the LRFU policy with other policies [Figure: hit rate vs. cache size (# of blocks) for (a) Sprite client and (b) DB2]

  41. Implementation of the LRFU algorithm • Buffer cache of the FreeBSD 3.0 operating system • Benchmark: SPEC SDET benchmark • Simulates a multi-programming environment • Consists of concurrent shell scripts, each with about 150 UNIX commands • Gives results in scripts/hour

  42. SDET benchmark results [Figure: SDET throughput (scripts/hour) and hit rate as functions of λ]

  43. Conclusions LRFU (Least Recently/Frequently Used) Replacement Algorithm that (1) subsumes both the LRU and LFU algorithms (2) subsumes their implementations (3) yields better performance than them

  44. Future Research • Dynamic version of the LRFU algorithm • LRFU algorithm for heterogeneous workloads • File requests vs. VM requests • Disk block requests vs. Parity block requests (RAID) • Requests to different files (index files, data files)

  45. People • REAL PEOPLE (Graduate students): Lee, Donghee; Choi, Jongmoo; Kim, Jong-Hun • Guides (Professors): Noh, Sam H.; Min, Sang Lyul; Cho, Yookun; Kim, Chong Sang • http://archi.snu.ac.kr/symin/

  46. Adaptive LRFU policy • Adjust λ periodically depending on the evolution of the workload • Use the LRU policy as the reference model to quantify how good (or bad) the locality of the workload has been • Algorithm of the Adaptive LRFU policy: if the quantified locality for the current period is higher than that for the previous period, the λ value for period i+1 is updated in the same direction; otherwise the direction is reversed
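A heavily hedged sketch of the adjustment loop (the slide does not spell out the comparison, so the score passed in is a hypothetical placeholder for whatever LRU-relative locality measure the policy computes per period):

    def adapt_lambda(lam, step, prev_score, curr_score, lo=0.0, hi=1.0):
        """Keep moving lambda in the same direction while the per-period score improves."""
        if curr_score <= prev_score:
            step = -step                           # reverse the direction of adjustment
        lam = min(hi, max(lo, lam + step))         # stay inside the LFU (0) .. LRU (1) spectrum
        return lam, step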

  47. Results of the Adaptive LRFU [Figure: results for the Client Workstation 54 trace and the DB2 trace]
