
ARC: A SELF-TUNING, LOW OVERHEAD REPLACEMENT CACHE






Presentation Transcript


  1. ARC: A SELF-TUNING, LOW OVERHEAD REPLACEMENT CACHE Nimrod Megiddo and Dharmendra S. Modha, IBM Almaden Research Center

  2. Introduction (I) • Caching is widely used in • storage systems • databases • web servers • processors • file systems • disk drives • RAID controllers • operating systems

  3. Introduction (II) • ARC is a new cache replacement policy: • Scan-resistant: • Better than LRU • Self-tuning: • Avoids the tuning-parameter problem of many recent cache replacement policies • Tested on numerous workloads

  4. Our Model (I) • Pages are fetched on demand from Secondary Storage into the Cache/Main Memory • A replacement policy decides which cached page to expel

  5. Our Model (II) • Cache stores uniformly sized items (pages) • Pages are fetched into the cache on demand • Cache expulsions are decided by the cache replacement policy • Performance metrics include • Hit rate (= 1 - miss rate) • Overhead of the policy

  6. Previous Work (I) • Offline Optimal (MIN): replaces the page that has the greatest forward distance • Requires knowledge of the future • Provides an upper bound on the achievable hit rate • Recency (LRU): • Most widely used policy • Frequency (LFU): • Optimal under the independent reference model
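As a baseline for the policies compared above, LRU can be sketched in a few lines of Python. This is a minimal illustrative implementation (class and method names are mine, not from the paper), using `OrderedDict` to keep pages in recency order:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache: on a miss with a full cache,
    evict the least recently used page."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.pages = OrderedDict()  # keys in recency order, oldest first

    def access(self, page):
        """Return True on a hit, False on a miss."""
        if page in self.pages:
            self.pages.move_to_end(page)        # mark as most recently used
            return True
        if len(self.pages) >= self.capacity:
            self.pages.popitem(last=False)      # evict the LRU page
        self.pages[page] = None
        return False
```

For example, with capacity 2 the trace A, B, A, B yields miss, miss, hit, hit, while A, B, C, A yields four misses, since C evicts A before A is re-referenced.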

  7. Previous Work (II) • LRU-2: replaces the page whose penultimate (next-to-last) reference is least recent • Better hit ratio than LRU • Needs to maintain a priority queue (high overhead) • This overhead is corrected in the 2Q policy • Must still decide how long a page that has only been accessed once should be kept in the cache • The 2Q policy has the same problem

  8. Example • Last two references to pages A and B, oldest first: … A … B … B … A • LRU expels B because A was accessed after the last reference to B • LRU-2 expels A because B was accessed twice after the next-to-last reference to A

  9. Previous Work (III) • Low Inter-Reference Recency Set (LIRS) • Frequency-Based Replacement (FBR) • Least Recently/Frequently Used (LRFU): subsumes LRU and LFU • All require a tuning parameter • Automatic LRFU (ALRFU) • Adaptive version of LRFU • Still requires a tuning parameter

  10. ARC (I) • Maintains two LRU lists • Pages that have been referenced only once (L1) • Pages that have been referenced at least twice (L2) • Each list has the same length c as the cache • Cache contains the tops of both lists: T1 and T2 • Bottoms B1 and B2 are not in the cache

  11. ARC (II) • L1 = T1 + B1 and L2 = T2 + B2 • |T1| + |T2| = c • B1 and B2 are "ghost caches": they track page identities only, their pages are not in memory

  12. ARC (III) • ARC attempts to maintain a target size target_T1 for list T1 • When the cache is full, ARC expels • the LRU page from T1 if |T1| ≥ target_T1 • the LRU page from T2 otherwise

  13. ARC (IV) • If the missing page was in the bottom B1 of L1, ARC increases target_T1: target_T1 = min(target_T1 + max(|B2|/|B1|, 1), c) • If the missing page was in the bottom B2 of L2, ARC decreases target_T1: target_T1 = max(target_T1 - max(|B1|/|B2|, 1), 0)

  14. ARC (V) • Overall result is • Two heuristics compete with each other • Each heuristic gets rewarded any time it can show that adding more pages to its top list would have avoided a cache miss • Note that ARC has no tunable parameter • Cannot get it wrong!
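The policy described in slides 10 through 14 can be sketched in Python. This is an illustrative approximation, not the authors' code: it follows the structure of the published ARC(c) pseudocode, but uses integer division in the adaptation step (the paper's rule is real-valued), and all names are mine:

```python
from collections import OrderedDict

class ARC:
    """Sketch of ARC with cache size c; pages are hashable keys."""
    def __init__(self, c):
        self.c = c                 # cache size
        self.target_t1 = 0         # adaptive target size for T1
        self.t1 = OrderedDict()    # referenced once, in cache (LRU first)
        self.t2 = OrderedDict()    # referenced at least twice, in cache
        self.b1 = OrderedDict()    # ghost list below T1 (keys only)
        self.b2 = OrderedDict()    # ghost list below T2 (keys only)

    def _replace(self, x_in_b2):
        """Expel the LRU page of T1 or T2 into the matching ghost list."""
        if self.t1 and (len(self.t1) > self.target_t1
                        or (x_in_b2 and len(self.t1) == self.target_t1)):
            page, _ = self.t1.popitem(last=False)
            self.b1[page] = None
        else:
            page, _ = self.t2.popitem(last=False)
            self.b2[page] = None

    def access(self, x):
        """Return True on a cache hit, False on a miss."""
        if x in self.t1 or x in self.t2:          # hit: promote to T2
            self.t1.pop(x, None)
            self.t2.pop(x, None)
            self.t2[x] = None                     # MRU position of T2
            return True
        if x in self.b1:                          # ghost hit: grow target_T1
            self.target_t1 = min(
                self.target_t1 + max(len(self.b2) // len(self.b1), 1), self.c)
            self._replace(False)
            del self.b1[x]
            self.t2[x] = None
            return False
        if x in self.b2:                          # ghost hit: shrink target_T1
            self.target_t1 = max(
                self.target_t1 - max(len(self.b1) // len(self.b2), 1), 0)
            self._replace(True)
            del self.b2[x]
            self.t2[x] = None
            return False
        # complete miss: keep |L1| <= c and |L1| + |L2| <= 2c
        l1, l2 = len(self.t1) + len(self.b1), len(self.t2) + len(self.b2)
        if l1 == self.c:
            if len(self.t1) < self.c:
                self.b1.popitem(last=False)       # drop LRU ghost of B1
                self._replace(False)
            else:
                self.t1.popitem(last=False)       # B1 empty: drop LRU of T1
        elif l1 + l2 >= self.c:
            if l1 + l2 >= 2 * self.c:
                self.b2.popitem(last=False)       # drop LRU ghost of B2
            self._replace(False)
        self.t1[x] = None                         # new page enters T1
        return False
```

For instance, with c = 2 the trace A, B, A, B gives two misses then two hits; a hit in ghost list B1 (a page recently evicted from T1) raises target_T1, shifting capacity toward the recency side.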

  15. Experimental Results • Tested over 23 traces: • Always outperforms LRU • Performs as well as more sophisticated policies even when they are specifically tuned for the workload • The sole exception is 2Q • ARC still outperforms 2Q when 2Q has no advance knowledge of the workload characteristics
