
Elements of SAN capacity planning


Presentation Transcript


  1. Elements of SAN capacity planning Mark Friedman VP, Storage Technology markf@demandtech.com (239) 261-8945

  2. Overview • How do we take what we know about storage processor performance and apply it to emerging SAN technology? • What is a SAN? • Planning for SANs: • SAN performance characteristics • Original test results, May 2001: 4 x 550 MHz, 1 Gb FC • More recent test results, May 2002: 2 x 2.266 GHz, 2 Gb FC • SPC-1 testing using 3 nodes with 2 x 3 GHz, 1 & 2 Gb FC • Backup and replication performance

  3. Evolution Of Disk Storage Subsystems • (Diagram: disk spindles → strings & farms → cached storage processors; write-thru vs. cached subsystems) • See: Dr. Alexandre Brandwajn, "A study of cached RAID 5 I/O," CMG Proceedings, 1994.

  4. Disk Subsystem Modeling framework: • Components • Front end interface bandwidth • Internal bus bandwidth • Number of processors and their speed • Cache memory • Disk interfaces • SCSI Disks • Internal segmented buffering • Seek, Rotational Delay, Data transfer rate • Logical:physical disk mapping • Linear mapping • Disk striping • RAID mapping
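As a concrete illustration of the physical-disk component of this framework, the sketch below adds seek, rotational delay, and data transfer into a per-request service time. All parameter values are assumptions chosen for illustration, not figures from the presentation.

```python
# A minimal sketch of the per-request disk service-time model implied by the
# framework above. The component values (seek time, RPM, transfer rate) are
# illustrative assumptions.

def disk_service_time_ms(io_size_kb,
                         avg_seek_ms=5.0,        # assumed average seek
                         rpm=10000,              # assumed spindle speed
                         transfer_mb_per_s=40.0  # assumed media transfer rate
                         ):
    """Service time = seek + average rotational delay + data transfer."""
    rotational_delay_ms = 0.5 * (60000.0 / rpm)           # half a revolution
    transfer_ms = io_size_kb / 1024.0 / transfer_mb_per_s * 1000.0
    return avg_seek_ms + rotational_delay_ms + transfer_ms

if __name__ == "__main__":
    # Example: a 16 KB transfer on the assumed disk
    print(f"{disk_service_time_ms(16):.2f} ms")
```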

  5. What Is A SAN? • Storage Area Networks are designed to exploit Fibre Channel plumbing • Approaches to simplified networked storage: • SAN appliances • SAN Metadata Controllers (“out of band”) • SAN storage managers (“in band”)

  6. The Difference Between NAS and SAN • Storage Area Networks (SAN) are designed to exploit Fibre Channel plumbing and require a new infrastructure. • Network Attached Storage (NAS) devices plug into the existing networking infrastructure: • Networked file access protocols (NFS, SMB, CIFS) • TCP/IP stack • (Diagram: packets passing down the stack — Application: HTTP, RPC; Host-to-Host: TCP, UDP; Internet Protocol: IP; Media Access: Ethernet, FDDI)

  7. The Difference Between NAS and SAN • NAS devices plug into existing TCP/IP networking support. • Performance considerations: • 1500-byte Ethernet MTU • TCP requires acknowledgement of each packet, limiting performance. • (Diagram: Windows network stack — application interfaces (RPC, DCOM, Winsock, NetBIOS, Named Pipes) in user mode; Redirector/Server, NetBT, TDI, TCP/UDP/IP with ARP, ICMP, IGMP, IP filtering and forwarding, Packet Scheduler, NDIS wrapper/miniport, and the NIC device driver in kernel mode)

  8. The Difference Between NAS and SAN • Performance considerations: • e.g., • 1.5 KB Ethernet MTU • Requires processing 80,000 Host interrupts/sec @ 1 Gb/sec • or Jumbo frames, which also requires installing a new infrastructure • Which is why Fibre Channel was designed the way it is! Source: Alteon Computers, 1999.
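A quick back-of-the-envelope check of the interrupt-rate figure above (a sketch only; the slide's 80,000/sec presumably allows for inter-frame gaps and framing overhead):

```python
# Rough check of frames/sec at gigabit Ethernet wire speed with a 1500-byte MTU.
link_bits_per_sec = 1_000_000_000        # 1 Gb/sec
mtu_bytes = 1500                         # standard Ethernet MTU
frames_per_sec = link_bits_per_sec / (mtu_bytes * 8)
print(f"{frames_per_sec:,.0f} frames/sec")   # ~83,000 -- roughly one host
                                             # interrupt per frame without
                                             # interrupt coalescing
```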

  9. The Holy Grail! Storage Area Networks • Uses low latency, high performance Fibre Channel switching technology (plumbing) • 100 MB/sec full duplex serial protocol over copper or fiber • Extended distance using optical fiber • Three topologies: • Point-to-Point • Arbitrated Loop: 127 addresses, but can be bridged • Fabric: 16 million (2^24) addresses

  10. The Holy Grail! Storage Area Networks • FC delivers SCSI commands, but Fibre Channel exploitation requires new infrastructure and driver support. • Objectives: • Extended addressing of shared storage pools • Dynamic, hot-pluggable interfaces • Redundancy, replication & failover • Security administration • Storage resource virtualization

  11. Distributed Storage & Centralized Administration • Traditional tethered storage vs. untethered SAN storage • Untethered storage can (hopefully) be pooled for centralized administration • Disk space pooling (virtualization) • Currently, using LUN virtualization • In the future, implementing dynamic virtual:real address mapping (e.g., IBM Storage Tank) • Centralized backup • SAN LAN-free backup

  12. Storage Area Networks • FC layer model: • FC-4: Upper Level Protocols (SCSI, IPI-3, HIPPI, IP) • FC-3: Common Services • FC-2: Framing Protocol/Flow Control • FC-1: 8B/10B Encode/Decode • FC-0: 100 MB/sec Physical Layer • FC is packet-oriented (designed for routing). • FC pushes many networking functions into the hardware layer, e.g.: • Packet fragmentation • Routing

  13. Storage Area Networks • FC is designed to work with optical fiber and lasers consistent with Gigabit Ethernet hardware • 100 MB/sec interfaces • 200 MB/sec interfaces • This creates a new class of hardware that you must budget for: FC hubs and switches.

  14. Storage Area Networks • Performance characteristics of FC switches: • Extremely low latency (~1 μsec), except when cascaded switches require frame routing • Deliver dedicated 100 MB/sec point-to-point virtual circuit bandwidth • Measured 80 MB/sec effective data transfer rates per 100 MB/sec port
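To put the measured effective rate in context, a small worked example of wire time for a 16 KB block (the block size used in the measurements later in the deck) at the nominal and measured rates:

```python
# Wire time for a 16 KB block at the nominal FC payload rate vs. the
# measured effective rate quoted on the slide.
block_bytes = 16 * 1024
for label, mb_per_sec in [("nominal 100 MB/sec", 100.0),
                          ("measured 80 MB/sec", 80.0)]:
    usec = block_bytes / (mb_per_sec * 1_000_000) * 1_000_000
    print(f"{label}: {usec:.0f} usec per 16 KB transfer")
```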

  15. Storage Area Networks • (FC layer model as above: FC-4 ULPs — SCSI, IPI-3, HIPPI, IP; FC-3 Common Services; FC-2 Framing/Flow Control; FC-1 8B/10B Encode/Decode; FC-0 Physical Layer) • When will IP and SCSI co-exist on the same network fabric? • iSCSI • Nishan • Others?

  16. Storage Area Networks • FC zoning is used to control access to resources (security) • Two approaches to SAN management: • Management functions must migrate to the switch, storage processor, or…. • OS must be extended to support FC topologies.

  17. Approaches to building SANs • Fibre Channel-based Storage Area Networks (SANs) • SAN appliances • SAN Metadata Controllers • SAN Storage Managers • Architecture (and performance) considerations

  18. Approaches to building SANs • Where does the logical device:physical device mapping run? • Out-of-band: on the client • In-band: inside the SAN appliance, transparent to the client • Many industry analysts have focused on this relatively unimportant distinction.
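Wherever it runs, the logical:physical mapping itself is conceptually a table lookup from a virtual LUN and block address to a back-end disk and physical block address. A minimal sketch, with illustrative names and layout that are not taken from any particular product:

```python
# Minimal logical-to-physical block mapping of the kind a virtualization
# layer maintains, whether it runs on the client (out-of-band) or in the
# appliance (in-band). Purely illustrative.
from dataclasses import dataclass

@dataclass
class Extent:
    physical_disk: str   # back-end disk identifier
    start_lba: int       # first physical block of the extent
    length: int          # blocks in the extent

# Virtual LUN 0 mapped onto two back-end disks (concatenated extents).
lun_map = {0: [Extent("disk_a", 0, 1_000_000),
               Extent("disk_b", 0, 1_000_000)]}

def resolve(lun: int, virtual_lba: int) -> tuple[str, int]:
    """Translate a virtual block address into (physical disk, physical LBA)."""
    offset = virtual_lba
    for extent in lun_map[lun]:
        if offset < extent.length:
            return extent.physical_disk, extent.start_lba + offset
        offset -= extent.length
    raise ValueError("virtual LBA beyond end of LUN")

print(resolve(0, 1_234_567))   # ('disk_b', 234567)
```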

  19. SAN appliances Conventional storage processors with • Fibre Channel interfaces • Fibre Channel support • FC Fabric • Zoning • LUN virtualization

  20. SAN Appliance Performance • Same as before, except faster Fibre Channel interfaces • Commodity processors, internal buses, disks, front-end and back-end interfaces • Proprietary storage processor architecture considerations • (Diagram: host interfaces and FC interfaces on the front end; multiple processors and cache memory on an internal bus; FC disks on the back end)

  21. SAN appliances SAN and NAS convergence? • Adding Fibre Channel interfaces and Fibre Channel support to a NAS box • SAN-NAS hybrids when SAN appliances are connected via TCP/IP. Current Issues: • Managing multiple boxes • Proprietary management platforms

  22. SAN Metadata Controller • SAN clients acquire an access token from the Metadata Controller (out-of-band) • SAN clients then access disks directly using a proprietary distributed file system • (Diagram: (1) client requests access from the SAN Metadata Controller, (2) controller returns a token, (3) client accesses the pooled storage resources directly over Fibre Channel)
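The out-of-band access sequence can be sketched as two cooperating roles: a metadata controller that hands out tokens on the control path, and a client that then moves data directly over Fibre Channel. The class and method names below are illustrative assumptions; real MDC products use their own proprietary protocols and distributed file systems:

```python
# Illustrative out-of-band access flow: the metadata controller grants a
# token/extent map, and the client then reads blocks directly over FC.
class MetadataController:
    def grant_token(self, client_id: str, path: str) -> dict:
        # A real MDC resolves the file to physical extents and records the
        # lease; here we just return a stub token.
        return {"client": client_id, "path": path,
                "extents": [("disk_a", 0, 128)]}   # (disk, start LBA, blocks)

class SanClient:
    def __init__(self, mdc: MetadataController):
        self.mdc = mdc

    def read_file(self, path: str) -> list[tuple[str, int, int]]:
        token = self.mdc.grant_token("client-1", path)   # control path (LAN)
        # Data path: read each extent directly from shared FC storage,
        # bypassing the controller entirely.
        return [(disk, lba, n) for disk, lba, n in token["extents"]]

print(SanClient(MetadataController()).read_file("/vol/db/file1"))
```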

  23. SAN Metadata Controller • Performance considerations: • MDC latency (low access rate assumed) • Additional latency to map client file system request to the distributed file system • Other administrative considerations: • Requirement for client-side software is a burden!

  24. SAN Storage Manager • Requires all access to pooled disks to pass through the SAN Storage Manager (in-band)! • (Diagram: SAN clients connect over Fibre Channel to Storage Domain Servers, which front the pooled storage resources)

  25. SAN Storage Manager • SAN Storage Manager adds latency to every I/O request • How much latency is involved? • Can this latency be reduced using traditional disk caching strategies? • (Diagram: SAN clients — Fibre Channel — Storage Domain Servers — pooled storage resources)

  26. Architecture of a Storage Domain Server • Runs on an ordinary Win2K Intel server • The SDS intercepts SAN I/O requests, impersonating a SCSI disk • Leverages: • Native device drivers • Disk management • Security • Native CIFS support • (Diagram: client I/O enters via initiator/target emulation and FC adaptor polling threads; the SANsymphony Storage Domain Server layers security, fault tolerance, and data cache above the native W2K I/O Manager, Diskperf measurement, optional fault tolerance, SCSI miniport driver, and Fibre Channel HBA driver)

  27. Sizing the SAN Storage Manager server • In-band latency is a function of (Intel server) front-end bandwidth: • Processor speed • Number of processors • PCI bus bandwidth • Number of HBAs • and performance of the back-end Disk configuration

  28. SAN Storage Manager Can SAN Storage Manager in-band latency be reduced using traditional disk caching strategies? • Read hits • Read misses • Disk I/O + (2 * data transfer) • Fast Writes to cache (with mirrored caches) • 2 * data transfer • Write performance ultimately determined by the disk configuration
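The read-miss and fast-write decomposition above can be folded into a simple expected-latency estimate once a cache hit ratio is assumed. In this sketch the hit path is assumed to be a single data transfer from the SDS cache (the slide gives no explicit hit formula), and the numeric defaults are placeholders rather than measured values:

```python
# Expected in-band read latency from the decomposition above:
#   read miss ~= back-end disk I/O + 2 * data transfer (disk->SDS, SDS->client)
#   read hit  ~= 1 * data transfer from the SDS cache (an assumption here)
# Numeric defaults are illustrative placeholders.

def expected_read_latency_usec(hit_ratio,
                               transfer_usec=165.0,    # one 16 KB FC transfer
                               disk_io_usec=8000.0):   # back-end disk service time
    hit_usec = transfer_usec
    miss_usec = disk_io_usec + 2 * transfer_usec
    return hit_ratio * hit_usec + (1.0 - hit_ratio) * miss_usec

for h in (0.5, 0.8, 0.95):
    print(f"hit ratio {h:.0%}: {expected_read_latency_usec(h):,.0f} usec")
```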

  29. SAN Storage Manager • Read hits (16 KB block): • Timings from an FC hardware monitor • 1 Gbit/sec interfaces • No bus arbitration delays! • (Trace: SCSI Read command (length = 4000), 16 x 1024-byte data frames, then status frame; annotated 140 μsec and 27 μsec)

  30. Read vs. Write hits (16 KB block) • Fibre Channel latency (16 KB blocks) • (Trace: SCSI command, write setup, data frames, SCSI status)

  31. Decomposing SAN in-band Latency • How is time being spent inside the server? • PCI bus? • Host bus adaptor? • Device polling? • Software stack? • (Trace: SCSI command, write setup, data frames, SCSI status)

  32. Benchmark Configuration • 4-way 550 MHz PC (4 x 550 MHz Xeon processors) • Maximum of three FC interface polling threads • 3 PCI buses, 528 MB/sec total: one 64-bit/33 MHz and two 32-bit/33 MHz • 1, 4, or 8 QLogic 2200 HBAs

  33. Decomposing SAN in-band Latency • How is time being spent inside the SDS? • PCI bus? • Host bus adaptor? • Device polling: • 1 CPU is capable of 375,000 unproductive polls/sec • 2.66 μsec per poll • Software stack: • 3 CPUs are capable of fielding 40,000 read I/Os per second from cache • 73 μsec per 512-byte I/O
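Both per-operation costs follow from the measured rates; a quick re-derivation (the slide's 73 μsec per I/O is slightly below the simple 3-CPU division, presumably reflecting the measured path length):

```python
# Re-deriving the per-operation CPU costs from the measured rates above.
polls_per_sec = 375_000                   # one CPU, unproductive polls
print(f"{1e6 / polls_per_sec:.2f} usec per poll")           # ~2.67 usec

cpus = 3
read_ios_per_sec = 40_000                 # cached 512-byte reads across 3 CPUs
print(f"{cpus * 1e6 / read_ios_per_sec:.0f} usec of CPU time per I/O")  # ~75 usec
```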

  34. Decomposing SAN in-band Latency • (Chart: SANsymphony in-band latency for 16 KB blocks, broken down into SDS, FC interface, and data transfer components)

  35. Impact Of New Technologies • Front-end bandwidth: • Different speed processors • Different number of processors • Faster PCI bus • Faster HBAs • e.g., next generation server: • 2 GHz processors (4x benchmark system) • 200 MB/sec FC interfaces (2x benchmark system) • 4 x 800 MB/sec PCI buses (6x benchmark system) • ...

  36. Impact Of New Technologies • (Chart comparing three configurations: the benchmark system of 1 year ago, 2 GHz CPU with new HBAs, and 2 GHz CPU with new HBAs plus 2 Gbit switching)

  37. Impact Of New Technologies

  38. Sizing the SAN Storage Manager • Scalability • Processor speed • Number of processors • PCI bus bandwidth: • 32-bit/33 MHz = 132 MB/sec • 64-bit/33 MHz = 267 MB/sec • 64-bit/66 MHz = 528 MB/sec • 64-bit/100 MHz = 800 MB/sec (PCI-X) • NGIO??? • Number of HBAs • 200 MB/sec FC interfaces feature faster internal processors
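The PCI figures are nominal peak rates, bus width in bytes times clock rate (the slide's 267 MB/sec entry uses the exact 33.33 MHz PCI clock); a short sketch reproduces them:

```python
# Nominal peak PCI bandwidth = (bus width in bytes) * (clock rate).
configs = [(32, 33), (64, 33), (64, 66), (64, 100)]   # (bits, MHz)
for bits, mhz in configs:
    mb_per_sec = (bits // 8) * mhz
    print(f"{bits}-bit/{mhz} MHz: {mb_per_sec} MB/sec peak")
```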

  39. Sizing the SAN Storage Manager Entry level system: • Dual Processor, single PCI bus, 1 GB RAM Mid-level departmental system: • Dual Processor, dual PCI bus, 2 GB RAM Enterprise-class system: • Quad Processor, triple PCI bus, 4 GB RAM

  40. SAN Storage Manager PC scalability

  41. SAN Storage Manager PC scalability • (Chart comparing entry level, departmental SAN, and enterprise class configurations)

  42. SAN Storage Manager PC scalability May 2002

  43. DataCore SPC-1 Results (August 2003) • www.storageperformance.org/Results/SPC-1/DataCore_2003-08-11_SANsymphony/ • SPC-1 Submission Identifier: A00015 • Tested Storage Configuration (TSC) Name: DataCore SANsymphony Network Edition • SPC-1 IOPS: 50,003.55 • SPC-1 Price-Performance: $6.11/SPC-1 IOPS™ • Total ASU Capacity: 1,407 GB • Data Protection Level: Mirroring • SPC-1 LRT: 1.68 ms • Total TSC Price (including three-year maintenance): $305,608

  44. Fujitsu Softek SPC-1 Results (August 2003) • http://www.storageperformance.org/Results/SPC1/Fujitsu_2003_08_11_ETERNUS3000-M600M/2003-08-11_Fujitsu_ETERNUS3000-M600M_SPC1-FDR.pdf • SPC-1 Submission Identifier: A00016 • Tested Storage Configuration (TSC) Name: Fujitsu Storage Systems ETERNUS 3000 Model 600M • SPC-1 IOPS: 64,249.77 • SPC-1 Price-Performance: $32.72/SPC-1 IOPS™ • Total ASU Capacity: 15,609 GB • Data Protection Level: Mirroring • SPC-1 LRT: 2.31 ms • Total TSC Price (including three-year maintenance): $2,102,147
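For both SPC-1 submissions, the price-performance metric is simply the total tested-configuration price divided by the reported SPC-1 IOPS; a quick check against the published figures:

```python
# SPC-1 price-performance = total TSC price / SPC-1 IOPS.
results = {
    "DataCore SANsymphony": (305_608, 50_003.55),
    "Fujitsu ETERNUS 3000 Model 600M": (2_102_147, 64_249.77),
}
for name, (price_usd, iops) in results.items():
    print(f"{name}: ${price_usd / iops:.2f} per SPC-1 IOPS")
```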

  45. DataCore SANsymphony Cost/Performance • $6.11 per SPC-1 IOPS™ • 50,003 SPC-1 IOPS™

  46. SANsymphony Performance Conclusions • FC switches provide virtually unlimited bandwidth with exceptionally low latency, so long as you do not cascade switches. • General purpose Intel PCs are a great source of inexpensive MIPS. • In-band SAN management is not a CPU-bound process. • PCI bandwidth is the most significant bottleneck in the Intel architecture. • FC interface card speeds and feeds are also very significant.

  47. SAN Disk Subsystem Modeling framework: • Components • FC front end interface bandwidth • Internal PCI bus bandwidth • Number of Intel processors and their speed • Cache memory • Fibre Channel Disk interfaces • SCSI Disks • Internal segmented buffering • Seek, Rotational Delay, Data transfer rate • Logical:physical disk mapping • Linear mapping • Disk striping • RAID mapping

  48. SAN Storage Manager – Next Steps • Cacheability of Unix and NT workloads • Domino, MS Exchange • Oracle, SQL Server, Apache, IIS • Given mirrored writes, what is the effect of different physical disk configurations? • JBOD • RAID 0 disk striping • RAID 5 write penalty • Asynchronous disk mirroring over long distances • Backup and Replication (snapshot)

  49. Questions ?

  50. www.datacore.com
