Energy Efficient Storage Technologies for Data Centers
Alan G. Yoder, Ph.D., NetApp
Chair, SNIA Technical Council

Presentation Transcript


  1. Energy Efficient Storage Technologies for Data Centers Alan G. Yoder, Ph.D. NetApp Chair, SNIA Technical Council

  2. Abstract • Energy Efficient Storage Technologies for Data Centers • An impressive amount of work has been done to date on improving the electrical efficiency of various data center components. The data storage industry has begun to see the fruits of this effort, with increased power supply and fan efficiencies. However, storage presents other significant opportunities for energy conservation, through various types of capacity optimization, that are not captured in electrical efficiency discussions. As data storage uses on the order of 25% of IT power in an average data center, these other opportunities bear examination. This article presents a survey of emerging storage technologies that positively affect energy usage and presents current thinking in the storage industry regarding their relative effectiveness. It also sets a baseline configuration against which improvements in capacity and energy use can be measured.

  3. Outline • Purpose of paper • What is “storage”? • What is “storage efficiency”? • Baseline data center storage configuration • Storage optimizing technologies • Ballpark savings guesstimates • SNIA activities • Conclusion

  4. Purpose of paper • Introduce storage as a separate and special problem in energy efficiency • Data at rest requirements • Set baseline for comparison of purported improvements • Establish a taxonomy for ways of saving energy • Survey of what industry has been doing in the area of energy efficiency

  5. What is “storage”? • At home • Apple Time Machine + 2 USB drives (one offsite) • In the lab • Linux/BSD box and a few SATA drives • In the data center • High performance (both latency and bandwidth) • Petabyte scale • RAS (reliability, availability, serviceability) • Five-nines (99.999%) availability or better (< 5 min unplanned downtime / yr) • RPO (recovery point objective) of minutes or less (sometimes zero) • RTO (recovery time objective) of minutes • Non-disruptive firmware and hardware upgrades • Something always broken, yet life goes on
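
The five-nines requirement maps directly to a downtime budget. A quick sanity check of the arithmetic (a minimal sketch, not part of the original slides):

```python
# Downtime budget implied by an availability target.
MINUTES_PER_YEAR = 365.25 * 24 * 60

def downtime_budget(availability):
    """Unplanned downtime allowed per year, in minutes."""
    return (1.0 - availability) * MINUTES_PER_YEAR

print(downtime_budget(0.999))    # three nines: ~526 min/yr
print(downtime_budget(0.99999))  # five nines:  ~5.26 min/yr
```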

  6. What is “storage efficiency”? • Electrical efficiency • How much heat is generated during the conversion of electrical energy into useful work? • Good for CPUs, maybe for data in flight • Storage efficiency • How much data can be crammed into a box using a given amount of electricity and raw capacity • (Definitions are still a work in progress) • All based around sizeof(data) and sizeof(raw capacity)
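
Since the slide notes that the definitions are still a work in progress, the following is only one plausible way to formalize the two ratios just described; the function names and the example figures are assumptions for illustration:

```python
# One possible formalization of the still-evolving "storage efficiency"
# idea: useful data per unit of raw capacity, and per watt of input power.
# These are sketches, not SNIA-sanctioned metrics.

def capacity_efficiency(data_bytes, raw_bytes):
    """Fraction of raw capacity holding useful data."""
    return data_bytes / raw_bytes

def energy_efficiency(data_bytes, watts):
    """Useful data stored per watt of input power (bytes/W)."""
    return data_bytes / watts

# Hypothetical example: 60 TB of user data on 100 TB raw, drawing 1.2 kW.
print(capacity_efficiency(60e12, 100e12))  # 0.6
print(energy_efficiency(60e12, 1200))      # 5e10 bytes per watt
```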

  7. Baseline configuration • No single point of failure (no SPOF) • *everything* is redundant • RAID 1 • dual pathing • power supplies operate at 50% load (or less) • No system-wide reboots • *everything* is hot-swappable • High performance (SAN emulates Direct Attach) • Fibre Channel drives

  8. Summary of baseline configuration • No SPOF • RAID 1 • FC drives • This configuration has ruled Tier 1 storage in the data center for 15 years • Emphasis on performance and data safety

  9. Storage optimizing technologies • Electrical efficiency • Disk spindown • Power supply and fan efficiency • SSDs • Capacity optimization • Delta snapshots • Thin provisioning • Advanced RAID • Data deduplication • Compression • Hybrid systems • Slow SATA drives + flash

  10. A nod to facilities optimization • The largest energy savings are usually found here • PUE = ratio of total facility input power to IT power • Traditional PUE: 2.25 and up • Modern PUE: 1.2 • Air economizers, variable speed fans, flywheel UPSs, etc. • Savings of over 50% often possible • Savings on that scale are impossible to obtain with IT equipment optimizations alone while old-style power delivery inefficiencies (PUE > 2.0) remain
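
The PUE arithmetic is worth making explicit; a minimal sketch of the ratio and of the facility-level saving implied by the two PUE figures above:

```python
# PUE = total facility input power / IT equipment power.
def pue(total_facility_kw, it_kw):
    return total_facility_kw / it_kw

def facility_savings(old_pue, new_pue):
    """Fraction of total facility power saved, IT load unchanged."""
    return 1.0 - new_pue / old_pue

print(pue(2250, 1000))              # traditional facility: 2.25
print(facility_savings(2.25, 1.2))  # ~0.47, close to the ~50% cited
```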

  11. Storage Efficiency: 4 techniques • Make the equipment more energy efficient • Use less redundancy • Commit less space • Squeeze more data into available space

  12. Making equipment more energy efficient • Power supply efficiency • 80plus, Climate Savers, US EPA • Efficiencies driven from ~65% to ~90% and up • Variable speed fans • Power theoretically cubic in rotational speed (per the fan affinity laws) • Great opportunity • Not enough operational data to date • Disk spindown • Looks great on paper; problematic in practice • RAID groups, background housekeeping • Only suitable for secondary storage due to the latency hit • So far not viable in the marketplace
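
The cubic fan law is what makes variable speed fans such a large opportunity; a one-line sketch of the scaling:

```python
# Fan affinity law: power scales with the cube of rotational speed,
# so a modest slowdown yields a large saving.
def fan_power_fraction(speed_fraction):
    """Fraction of full power drawn at a given fraction of full speed."""
    return speed_fraction ** 3

print(fan_power_fraction(0.8))  # 0.512 -> ~49% power saved at 80% speed
```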

  13. Better energy efficiency (cont.) • SSDs • Seemingly ideal • Zero data-at-rest energy (caveat: housekeeping requirements are an open question) • Data-in-flight energy scales with IOPS • Price barriers to widespread adoption • High-capacity drives with flash • Performance about as good as FC • No-free-lunch caveat: except at write saturation • 1/6 the energy density of FC at rest • Question: will the flash migrate to the host?
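
A first-order model can make the SSD power behavior described above concrete; the idle and per-I/O coefficients here are illustrative assumptions, not measured values:

```python
# Sketch of the SSD power model implied above: near-zero power with
# data at rest, active power scaling linearly with IOPS.
# idle_w and joules_per_io are hypothetical illustration values.

def ssd_power_watts(iops, idle_w=0.05, joules_per_io=50e-6):
    """Approximate draw: tiny idle floor plus per-I/O energy."""
    return idle_w + iops * joules_per_io

print(ssd_power_watts(0))       # ~0.05 W with data at rest
print(ssd_power_watts(20000))   # ~1.05 W under a 20k IOPS load
```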

  14. Using less redundancy • RAID 1 • 50% space efficiency • Storage bricks (Google et al.) • typically 33% or less • similar to RAID 1 + online backup • much worse if CPUs aren’t occupied with useful work • RAID 5 • Recommended RAID group size of 5 to 8 • 80% to 88% efficiency • RAID 6 • Recommended RAID group size of 10 to 16 • 80% to 88% efficiency • Moving to RAID 6 because of the convergence of BER and disk size on modern systems: ~4% chance annually of a 2nd disk failure during RAID reconstruct on a 100-disk array
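
The efficiency figures above follow directly from group size and parity count; a minimal sketch:

```python
# Space efficiency for the RAID levels above: one parity disk per
# group for RAID 5, two for RAID 6.
def raid_efficiency(group_size, parity_disks):
    return (group_size - parity_disks) / group_size

print(raid_efficiency(5, 1))   # RAID 5, 5 disks:  0.80
print(raid_efficiency(8, 1))   # RAID 5, 8 disks:  0.875
print(raid_efficiency(10, 2))  # RAID 6, 10 disks: 0.80
print(raid_efficiency(16, 2))  # RAID 6, 16 disks: 0.875
print(raid_efficiency(2, 1))   # RAID 1 mirror:    0.50
```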

  15. Using less redundancy (cont.) • Delta snapshots • shared-data point-in-time (PIT) copies • technology is similar to vfork • read-only and read-write variants • many data protection and what-if scenarios satisfied with deltas instead of full copies
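
A toy illustration of the copy-on-write idea behind delta snapshots, in the spirit of the vfork analogy above (a sketch, not any vendor's implementation):

```python
# A delta snapshot shares blocks with the live volume; only
# diverging writes consume new space.

class Volume:
    def __init__(self):
        self.blocks = {}            # block number -> data

    def write(self, blkno, data):
        self.blocks[blkno] = data

    def snapshot(self):
        return dict(self.blocks)    # copies metadata only; data is shared

vol = Volume()
vol.write(0, b"base data")
snap = vol.snapshot()               # point-in-time copy, no data copied
vol.write(0, b"new data")           # live volume diverges after the PIT
print(snap[0], vol.blocks[0])       # b'base data' b'new data'
```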

  16. Committing less space • Thin provisioning • Works similarly to user quotas in filesystems • Impressive gains, because • Volumes are overprovisioned (more space for files than used) • Systems are overprovisioned (more space for volumes than used)
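
A miniature model of thin provisioning as just described; the class name and the 2x overcommit ratio are illustrative assumptions:

```python
# Thin provisioning in miniature: volumes are promised more space
# than the pool physically holds; blocks are allocated only on write.

class ThinPool:
    def __init__(self, physical_blocks):
        self.physical_blocks = physical_blocks
        self.allocated = 0
        self.promised = 0

    def provision(self, virtual_blocks):
        self.promised += virtual_blocks   # may exceed physical_blocks

    def write(self, nblocks):
        if self.allocated + nblocks > self.physical_blocks:
            raise RuntimeError("pool exhausted: time to buy disks")
        self.allocated += nblocks         # allocate on first write only

pool = ThinPool(physical_blocks=1000)
pool.provision(2000)                      # 2x overcommitted
pool.write(300)
print(pool.promised, pool.allocated)      # 2000 300
```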

  17. Squeezing more data into available space • Compression • Harder on block storage (block-boundary quantization) • Deduplication • Savings are most impressive on secondary storage • Global dedup is an unsolved problem • Communication, index overhead • You had *better* not lose that base copy!
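
Content-hash deduplication fits in a few lines; a minimal sketch (a real system must also protect that single base copy, per the warning above):

```python
# Identical blocks are stored once and referenced by their hash.
import hashlib

store = {}                       # sha256 digest -> block data
refs = []                        # logical volume as a list of digests

def dedup_write(block):
    digest = hashlib.sha256(block).hexdigest()
    store.setdefault(digest, block)   # store the payload only once
    refs.append(digest)

for blk in [b"AAAA", b"BBBB", b"AAAA", b"AAAA"]:
    dedup_write(blk)

print(len(refs), len(store))     # 4 logical blocks, 2 physical copies
```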

  18. Average savings (from baseline or historical figures) • Facilities optimization: 50% • Power supply improvements: 20% • Variable speed fans: unknown • Large capacity drives: 80% • Advanced RAID: 40% • Delta snapshots: 90%+ • Thin provisioning: 50% • In-place data deduplication: 27% (NetApp: 1 exabyte) • Compression: 20%+
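
One caution when reading the figures above: savings from separate techniques compound multiplicatively on the remaining footprint, not additively. A sketch, assuming the listed savings were independent:

```python
# Each technique shrinks whatever footprint is left after the others.
def combined_savings(*savings):
    remaining = 1.0
    for s in savings:
        remaining *= (1.0 - s)
    return 1.0 - remaining

print(combined_savings(0.40, 0.20))        # RAID + compression: 0.52
print(combined_savings(0.40, 0.20, 0.50))  # + thin provisioning: 0.76
```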

  19. Storage procurement politics • IT guys don’t pay the utility bill (OPEX) • So they don’t care how power-efficient the storage is until they hit a density wall • Capacity optimizing technologies affect CAPEX as well as OPEX • Capacity optimizing technologies let them buy less gear to store the same data • IT guys do care about this

  20. SNIA activities • GSI – Green Storage Initiative • Collect and harmonize industry feedback for US EPA • Evangelize technical work • Develop labeling program • Green TWG – Technical Working Group • IP-protected group • Idle and active power metrics • Capacity Optimization subgroup (of TWG) • Characterization of capacity optimization technologies • Tutorials, whitepapers, etc. • At SNW trade shows, in FarSighted magazine

  21. Baseline configuration revisited • For research into data center class storage • Disk subsystems • No SPOF in shelves • Multipathing • RAID 6 (or 5 if 6 is unavailable) • 100 or more disks • Controllers • No SPOF • Multipathing • Multiprotocol (NFS + CIFS) • Thin provisioning, delta snapshots, compression, dedup all operational

  22. Questions
