
Bridging the Information Gap in Storage Protocol Stacks


Presentation Transcript


  1. Bridging the Information Gap in Storage Protocol Stacks Timothy E. Denehy, Andrea C. Arpaci-Dusseau, and Remzi H. Arpaci-Dusseau University of Wisconsin, Madison http://www.cs.wisc.edu/wind/

  2. State of Affairs • File System: Namespace, Files, Metadata, Liveness • Storage System: Layout, Parallelism, Redundancy

  3. Problem • Information gap may cause problems • Poor performance • Partial stripe write operations • Duplicated functionality • Logging in file system and storage system • Reduced functionality • Storage system lacks knowledge of files • Time to re-examine the division of labor

  4. Our Approach • Enhance the storage interface • Expose performance and failure information • Use information to provide new functionality • On-line expansion • Dynamic parallelism • Flexible redundancy [Slide figure: Informed LFS (I·LFS) layered on an Exposed RAID (ERAID)]

  5. Outline • ERAID Overview • I·LFS Overview • Functionality and Evaluation • Conclusion

  6. ERAID Overview • Goals • Backwards compatibility • Block-based interface • Linear, concatenated address space • Expose information to the file system above • Allows file system to utilize semantic knowledge

  7. ERAID Regions • Region • Contiguous portion of the address space • Regions can be added to expand the address space • Region composition • RAID: one region for all disks • Exposed: separate regions for each disk • Hybrid ERAID

  8. ERAID ERAID Performance Information • Exposed on a per-region basis • Throughput and queue length • Reveals • Static disk heterogeneity • Dynamic performance and load fluctuations

  9. ERAID Failure Information • Exposed on a per-region basis • Number of tolerable failures • Regions may have different failure characteristics • Reveals dynamic failures to file system above ERAID X RAID1

  10. Outline • ERAID Overview • I·LFS Overview • Functionality and Evaluation • Conclusion

  11. I·LFS Overview • Modified NetBSD LFS • All data and metadata is written to a log • Log is a collection of segments • Segment table describes each segment • Cleaner process produces empty segments
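
For readers unfamiliar with LFS, a simplified segment-table entry looks roughly like the sketch below; it is modeled loosely on NetBSD LFS's segment usage table, and the fields and flag names here are illustrative.

    /* Simplified LFS segment-table entry (illustrative). */
    #include <stdint.h>

    #define SEG_DIRTY  0x1   /* segment contains live data */
    #define SEG_ACTIVE 0x2   /* segment currently being written */

    struct seg_entry {
        uint32_t live_bytes;  /* bytes of live data; 0 means cleanable/empty */
        uint32_t flags;
    };

    /* The cleaner scans this table, picks dirty segments with little live
     * data, copies the live data forward, and marks the segments empty. */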

  12. I·LFS Overview • Goals • Improve performance, functionality, and manageability • Minimize system complexity • Exploits ERAID information to provide • On-line expansion • Dynamic parallelism • Flexible redundancy • Lazy redundancy

  13. I·LFS Experimental Platform • NetBSD 1.5 • 1 GHz Intel Pentium III Xeon • 128 MB RAM • Four fast disks • Seagate Cheetah 36XL, 21.6 MB/s • Four slow disks • Seagate Barracuda 4XL, 7.5 MB/s

  14. I·LFS Baseline Performance

  15. I·LFS On-line Expansion • Goal: expand storage incrementally • Capacity • Performance • Ideal: instant disk addition • Minimize downtime • Simplify administration • I·LFS supports on-line addition of new disks

  16. I·LFS On-line Expansion Details • ERAID: an expandable address space • Expansion is equivalent to adding empty segments • Start with an oversized segment table • Activate new portion of segment table
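
A sketch of the activation step, assuming a hypothetical ilfs_expand() helper and the simplified segment-table entry shown earlier; the real change lives inside NetBSD LFS. The segment table is created oversized, and when ERAID reports a new region, the entries covering the new blocks simply appear as empty segments.

    /* Sketch of I·LFS-style on-line expansion (hypothetical helper). */
    #include <stdint.h>

    #define SEG_UNMAPPED 0x4  /* entry reserved, but no disk behind it yet */

    struct seg_entry { uint32_t live_bytes; uint32_t flags; };

    struct ilfs {
        struct seg_entry *segtab;    /* allocated oversized at mkfs time */
        uint32_t          nsegs_max; /* capacity of the oversized table */
        uint32_t          nsegs;     /* segments currently backed by disk */
    };

    static int
    ilfs_expand(struct ilfs *fs, uint32_t new_segs)
    {
        if (fs->nsegs + new_segs > fs->nsegs_max)
            return -1;                        /* oversized table exhausted */
        for (uint32_t s = fs->nsegs; s < fs->nsegs + new_segs; s++) {
            fs->segtab[s].live_bytes = 0;     /* brand-new, empty segment */
            fs->segtab[s].flags &= ~SEG_UNMAPPED;
        }
        fs->nsegs += new_segs;                /* writer may now use them */
        return 0;
    }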

  17. I·LFS On-line Expansion Experiment • I·LFS takes immediate advantage of each extra disk

  18. I·LFS Dynamic Parallelism • Goal: perform well on heterogeneous storage • Static performance differences • Dynamic performance fluctuations • Ideal: maximize throughput of the storage system • I·LFS writes data proportionate to performance

  19. I·LFS Dynamic Parallelism Details • ERAID: dynamic performance information • Most file system routines are not changed • Aware of only the ERAID linear address space • Segment selection routine • Aware of ERAID regions and performance • Chooses next segment based on current performance • Minimizes changes to the file system
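
A sketch of a performance-aware segment-selection policy under these assumptions; the scoring heuristic is illustrative, since the slides only state that data is written in proportion to current performance.

    /* Illustrative segment placement: only this routine knows about
     * ERAID regions; the rest of the file system sees a linear space. */
    #include <stdint.h>

    struct region_perf {
        uint32_t throughput_kbps;  /* exposed by ERAID */
        uint32_t queue_len;        /* exposed by ERAID */
        uint32_t free_segs;        /* free segments left in this region */
    };

    /* Pick the region in which to place the next segment. */
    static int
    pick_region(const struct region_perf *r, int nregions)
    {
        int best = -1;
        double best_score = -1.0;
        for (int i = 0; i < nregions; i++) {
            if (r[i].free_segs == 0)
                continue;
            /* High throughput and a short queue win; any monotone score
             * in throughput / (queue + 1) captures the spirit of writing
             * data proportionate to current performance. */
            double score = (double)r[i].throughput_kbps / (r[i].queue_len + 1);
            if (score > best_score) {
                best_score = score;
                best = i;
            }
        }
        return best;  /* -1 if no free segment anywhere */
    }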

  20. I·LFS Static Parallelism Experiment • I·LFS provides the full throughput of the system • Simple striping runs at the rate of the slowest disk

  21. I·LFS Dynamic Parallelism Experiment • I·LFS adjusts to the performance fluctuation

  22. I·LFS Flexible Redundancy • Goal: offer new redundancy options to users • Ideal: range of redundancy mechanisms and granularities • I·LFS provides mirrored per-file redundancy

  23. I·LFS Flexible Redundancy Details • ERAID: region failure characteristics • Use separate files for redundancy • Even inode N for original files • Odd inode N+1 for redundant files • Original and redundant data in different sets of regions • Flexible data placement within the regions • Use recursive vnode operations for redundant files • Leverage existing routines to reduce complexity
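
The even/odd inode convention can be captured in a few helpers; the helper names below are hypothetical, but the mapping follows the slide directly.

    /* Even inode N holds the original file; odd inode N+1 holds its
     * redundant copy, placed in a disjoint set of ERAID regions. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef uint64_t ino_no;

    static inline bool   is_redundant_inode(ino_no ino)   { return ino & 1; }
    static inline ino_no redundant_inode(ino_no original) { return original + 1; }
    static inline ino_no original_inode(ino_no redundant) { return redundant - 1; }

    /* A write to a mirrored file is then applied twice via a recursive
     * vnode operation: once to inode N and once to redundant_inode(N),
     * reusing the existing write path both times. */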

  24. I·LFS Flexible Redundancy Experiment • I·LFS provides a throughput and reliability tradeoff

  25. I·LFS Lazy Redundancy • Goal: avoid replication performance penalty • Ideal: replicate data immediately before failure • I·LFS offers redundancy with delayed replication • Avoids penalty for redundant, short-lived files

  26. I·LFS Lazy Redundancy • ERAID: region failure characteristics • Segments needing replication are flagged • Cleaner acts as replicator • Locates flagged segments • Checks data liveness and lifetime • Generates redundant copies of files
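
A sketch of the cleaner-as-replicator pass, with hypothetical hook functions standing in for I·LFS internals: data written for files that want redundancy is only flagged at write time, and the cleaner later mirrors whatever is still live and has aged past a threshold, so short-lived files never pay the copy cost.

    /* Illustrative lazy-replication pass (control flow only). */
    #include <stdbool.h>
    #include <stdint.h>
    #include <time.h>

    #define SEG_NEEDS_COPY 0x8  /* set at write time for to-be-mirrored data */

    struct seg_entry { uint32_t live_bytes; uint32_t flags; uint64_t write_time; };

    /* Placeholder hooks; real implementations come from the file system. */
    static bool block_is_live(uint32_t seg, uint32_t blk) { (void)seg; (void)blk; return true; }
    static void replicate_block(uint32_t seg, uint32_t blk) { (void)seg; (void)blk; }
    static uint64_t now_sec(void) { return (uint64_t)time(NULL); }

    static void
    cleaner_lazy_replicate(struct seg_entry *segtab, uint32_t nsegs,
                           uint32_t blks_per_seg, uint64_t min_age)
    {
        for (uint32_t s = 0; s < nsegs; s++) {
            if (!(segtab[s].flags & SEG_NEEDS_COPY))
                continue;
            if (now_sec() - segtab[s].write_time < min_age)
                continue;                 /* young data may still be deleted */
            for (uint32_t b = 0; b < blks_per_seg; b++)
                if (block_is_live(s, b))  /* deleted blocks are skipped */
                    replicate_block(s, b);
            segtab[s].flags &= ~SEG_NEEDS_COPY;
        }
    }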

  27. I·LFS Lazy Redundancy Experiment • I·LFS avoids performance penalty for short-lived files

  28. Outline • ERAID Overview • I·LFS Overview • Functionality and Evaluation • Conclusion

  29. Comparison with Traditional Systems • On-line expansion • Yes, but capacity only, not performance • Dynamic parallelism • Yes, but with duplicated functionality • Flexible redundancy • No, the storage system is not aware of file composition • Lazy redundancy • No, the storage system is not aware of file deletions

  30. Conclusion • Introduced ERAID and I·LFS • Extra information enables new functionality • Difficult or impossible in traditional systems • Minimal complexity • 19% increase in code size • Time to re-examine the division of labor

  31. Questions? • Full paper available on the WiND publications page • http://www.cs.wisc.edu/wind/

  32. Extra Slides

  33. Storage Failure

  34. Crossed-pointer Problem
