
MEMS and Caching for File Systems


Presentation Transcript


  1. MEMS and Caching for File Systems Andy Wang COP 5611 Advanced Operating Systems

  2. MEMS • MicroElectroMechanical Systems • 10 GB of data in the size of a penny • 100 MB/sec – 1 GB/sec bandwidth • Access times 10x faster than today’s drives • ~100x less power than low-power hard drives • Integrate storage, RAM, and processing on the same die • The drive is the computer • Costs less than $10 [CMU PDL Lab]

  3. MEMS-Based Storage [Figure: read/write tips, actuators, and magnetic media]

  4. MEMS-Based Storage [Figure: side view of the read/write tips over the media; bits are stored underneath each tip]

  5. MEMS-Based Storage [Figure: the media sled, with X and Y axes]

  6. MEMS-Based Storage [Figure: springs attached around the sled]

  7. MEMS-Based Storage [Figure: anchors attach the springs to the chip]

  8-9. MEMS-Based Storage [Animation: the sled is free to move in both dimensions]

  10-11. MEMS-Based Storage [Animation: the springs pull the sled back toward center]

  12-16. MEMS-Based Storage [Animation: actuators pull the sled in both dimensions]

  17-18. MEMS-Based Storage [Figure: the probe tips are fixed above the moving sled]

  19. MEMS-Based Storage • One probe tip per square • Sled only moves over the area of a single square • Each tip accesses data at the same relative position

  20. MEMS-Based Management • Similar to disk-based schemes • Dominated by transfer time • Challenges: • Broken tips • Slow erase cycle (~seconds/block) • A track plays the role of a cylinder group

  21. Caching for File Systems • Conventional role of caching • Performance improvement • Assumptions: • Locality • Scarcity of RAM • Shifting role of caching • Shaping disk access patterns • Assumptions: • Locality • Abundance of RAM

  22. Performance Improvement • Essentially all file systems rely on caching to achieve acceptable performance • Goal is to make the FS run at memory speed • Even though most of the data is on disk

  23. Issues in I/O Buffer Caching • Cache size • Cache replacement policy • Cache write handling • Cache-to-process data handling

  24. Cache Size • The bigger the cache, the fewer the cache misses • But also more data to keep in sync with the disk
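
The size/miss-rate tradeoff on this slide can be sketched with a tiny simulation. This is an illustration, not from the slides: the skewed trace, the block counts, and the use of LRU replacement are all assumptions.

```python
from collections import OrderedDict
import random

def lru_miss_rate(trace, size):
    """Replay a block reference trace through an LRU cache of the
    given size and return the fraction of references that miss."""
    cache, misses = OrderedDict(), 0
    for block in trace:
        if block in cache:
            cache.move_to_end(block)       # refresh recency
        else:
            misses += 1
            if len(cache) >= size:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = None
    return misses / len(trace)

random.seed(0)
# Skewed trace: 80% of references go to 10 hot blocks,
# the rest are scattered over many cold blocks.
trace = [random.choice(range(10)) if random.random() < 0.8
         else random.choice(range(10, 1000)) for _ in range(10000)]

for size in (8, 64, 512):
    print(size, round(lru_miss_rate(trace, size), 3))
```

Because LRU is a stack algorithm, growing the cache never increases the miss rate; the simulation also shows the diminishing returns once the hot set fits.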

  25. What if…. • RAM size = disk size? • What are some implications in terms of disk layouts? • Memory dump? • LFS layout?

  26. What if…. • RAM is big enough to cache all hot files • What are some implications in terms of disk layouts? • Optimized for the remaining files

  27. Cache Replacement Policy • LRU works fairly well • Can use a “stack of pointers” to keep track of LRU info cheaply • Need to watch out for cache pollution • LFU doesn’t work well because a block may get lots of hits, then never be used again • So it takes a long time to evict it
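
The “stack of pointers” idea maps naturally onto an ordered dictionary, which keeps blocks in recency order so eviction needs no scan. A minimal sketch in Python (illustrative only, not the course’s implementation):

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU block cache: the OrderedDict maintains blocks in
    recency order, so the least recently used block is always first."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.blocks = OrderedDict()  # block_id -> data

    def get(self, block_id):
        if block_id not in self.blocks:
            return None                        # miss: caller fetches from disk
        self.blocks.move_to_end(block_id)      # mark most recently used
        return self.blocks[block_id]

    def put(self, block_id, data):
        if block_id in self.blocks:
            self.blocks.move_to_end(block_id)
        elif len(self.blocks) >= self.capacity:
            self.blocks.popitem(last=False)    # evict least recently used
        self.blocks[block_id] = data

cache = LRUCache(2)
cache.put(1, "a")
cache.put(2, "b")
cache.get(1)         # block 1 becomes most recently used
cache.put(3, "c")    # cache full: evicts block 2, not block 1
print(cache.get(2))  # None (evicted)
print(cache.get(1))  # "a" (still cached)
```

Both lookup and eviction are O(1), which is the point of keeping the recency order explicitly instead of timestamping every block.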

  28. Hmm… What is the optimal policy? • MIN: replace the page that will not be used for the longest time…
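
MIN is unrealizable online (it needs knowledge of future references), but it is easy to simulate over a recorded reference string, which is how it is used as a yardstick. A sketch in Python; the reference string below is made up for illustration:

```python
def min_misses(refs, cache_size):
    """Count cache misses under Belady's MIN policy: when the cache is
    full, evict the block whose next use lies farthest in the future."""
    cache, misses = set(), 0
    for i, block in enumerate(refs):
        if block in cache:
            continue
        misses += 1
        if len(cache) >= cache_size:
            def next_use(b):
                # Distance to b's next reference; never-used-again
                # blocks sort last and are evicted first.
                future = refs[i + 1:]
                return future.index(b) if b in future else len(refs)
            cache.remove(max(cache, key=next_use))
        cache.add(block)
    return misses

refs = [1, 2, 3, 1, 2, 4, 1, 2]
print(min_misses(refs, 3))  # 4: the three compulsory misses plus block 4
```

No realizable policy can beat this count on the same trace, which makes MIN the baseline against which LRU and LFU are judged.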

  29. Hmm… What if your goal is to save power? • Option 1: MIN replacement • RAM will cache the hottest data items • Disks will achieve maximum idleness…

  30. What if you have multiple disks?

  31. Access Patterns [Figure: per-disk access counts] • And access patterns are skewed

  32. Better Off Caching Cold Disks [Figure: skewed access patterns] • Spin down cold disks

  33. Handling Writes to Cached Blocks • Write-through cache: updates propagate through the various levels of caches immediately • Write-back cache: updates are delayed to amortize the cost of propagation
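
The cost difference between the two policies can be shown with toy classes. This is an illustrative Python sketch, not kernel code; the `Disk` write counter is an assumption used only to count propagations:

```python
class Disk:
    """Toy disk that counts how many block writes reach it."""
    def __init__(self):
        self.blocks = {}
        self.writes = 0
    def write(self, block_id, data):
        self.blocks[block_id] = data
        self.writes += 1

class WriteThroughCache:
    """Every update propagates to the disk immediately."""
    def __init__(self, disk):
        self.disk, self.cache = disk, {}
    def write(self, block_id, data):
        self.cache[block_id] = data
        self.disk.write(block_id, data)

class WriteBackCache:
    """Updates stay dirty in the cache; propagation is delayed and
    amortized, so N writes to one block cost a single disk write."""
    def __init__(self, disk):
        self.disk, self.cache, self.dirty = disk, {}, set()
    def write(self, block_id, data):
        self.cache[block_id] = data
        self.dirty.add(block_id)
    def flush(self):
        for block_id in self.dirty:
            self.disk.write(block_id, self.cache[block_id])
        self.dirty.clear()

d_wt, d_wb = Disk(), Disk()
wt, wb = WriteThroughCache(d_wt), WriteBackCache(d_wb)
for version in range(5):      # five updates to the same block
    wt.write(7, version)
    wb.write(7, version)
wb.flush()
print(d_wt.writes, d_wb.writes)  # write-through: 5, write-back: 1
```

The flip side, not modeled here, is that the write-back cache loses the four unflushed versions on a crash; write-through trades bandwidth for that durability.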

  34. What if…. • Multiple levels of caching with different speeds and sizes? • What are some tricky performance behaviors?

  35. History’s Mystery • Puzzling Conquest microbenchmark numbers… • Geoff Kuenning: “If Conquest is slower than ext2fs, I will toss you off of the balcony…”

  36. With me hanging off a balcony… • Original large-file microbenchmark: one 1-MB file (Conquest in-core file)

  37. Odd Microbenchmark Numbers • Why are random reads slower than sequential reads?

  38. Odd Microbenchmark Numbers • Why are RAM-based FSes slower than disk-based FSes?

  39. A Series of Hypotheses • Warm-up effect? • Maybe • Why do RAM-based systems warm up slower? • Bad initial states? • No • Pentium III streaming I/O option? • No

  40. Effects of L2 Cache Footprints [Figure: writing a file sequentially leaves either a large or a small L2 cache footprint; reading the same file back sequentially flushes that footprint as the read advances toward the end of the file]

  41. LFS Sprite Microbenchmarks • Modified large-file microbenchmark: ten 1-MB files (in-core files)

  42. What if…. • Multiple levels of caching with similar characteristics? (via network)

  43. A Cache Miss [Figure: a miss at the first-level cache is served by the second level across the network, leaving the same block cached at both levels]

  44. A Cache Miss • Why cache the same data twice?

  45. What if…. • A network of caches?

  46. Cache-to-Process Data Handling • Data in the buffer is destined for a user process (or came from one, on writes) • But buffers are in system space • How to get the data to user space? • Copy it • Virtual memory techniques • Use DMA in the first place
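
The copy-versus-remap distinction can be mimicked in Python with `bytes` (a real copy) versus `memoryview` (a shared, zero-copy view, loosely analogous to mapping the buffer’s pages into user space). This is an analogy for the tradeoff, not kernel code:

```python
# A bytearray stands in for a kernel buffer holding a disk block.
kernel_buf = bytearray(b"block data from disk")

copy = bytes(kernel_buf)       # option 1: copy the data to "user space"
view = memoryview(kernel_buf)  # option 2: share the buffer, no copy

kernel_buf[0:5] = b"BLOCK"     # the buffer is later updated in place

print(copy[:5])         # b'block' -- the copy is insulated from updates
print(bytes(view[:5]))  # b'BLOCK' -- the shared view sees the update
```

The copy costs CPU time and memory but decouples the process from the cache; the shared view is free but exposes the process to concurrent changes, which is exactly why virtual-memory remapping schemes need copy-on-write or careful protection.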
