
Storage Area Networks



  1. Storage Area Networks: The Basics

  2. Storage Area Networks
  • SANs are designed to give you:
    • More disk space
    • Multiple-server access to a single disk pool
    • Better performance
    • The option of disks distributed across multiple locations

  3. Direct Attached Storage
  • Classically, for storage we had a single box with a bunch of disks attached:
  [Diagram: a server on the public network, with LUN0, LUN1, and LUN2 attached via a SCSI bus]

  4. Attached Storage
  • The server speaks to the SCSI disks using a command language:
    • Read from LUN0, Block 123
    • Write to LUN1, Block 456
  • All of this goes over the SCSI bus, which is directly attached to the server; only that server has access to the bus.
  • The server creates a filesystem on the disk(s) and can then make the disk available to other computers via NFS, Samba, etc.
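The command model above can be sketched in a few lines. This is a toy, in-memory stand-in (not a real SCSI driver): each LUN is just a flat array of fixed-size blocks addressed by block number, and the "bus" is a mapping from LUN number to device that only the directly attached server can use.

```python
BLOCK_SIZE = 512  # classic SCSI block size

class Lun:
    """Toy block device: a list of fixed-size blocks addressed by number."""
    def __init__(self, num_blocks):
        self.blocks = [bytes(BLOCK_SIZE) for _ in range(num_blocks)]

    def read(self, block):
        return self.blocks[block]

    def write(self, block, data):
        assert len(data) == BLOCK_SIZE
        self.blocks[block] = data

# The "SCSI bus": LUN number -> device. In direct attached storage,
# only the one server this bus is cabled to can issue these commands.
bus = {0: Lun(1024), 1: Lun(1024)}

bus[1].write(456, b"x" * BLOCK_SIZE)   # "Write to LUN1, Block 456"
data = bus[0].read(123)                # "Read from LUN0, Block 123"
```

Everything else in this deck — NAS, SANs, iSCSI — is about who gets to issue these same read/write-a-block commands, and over what wire.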

  5. Network Attached Storage
  • This idea is easily extended to an appliance approach: configure a utility box with some disk that does only NFS or Samba/SMB, and place it on the network.
  [Diagram: an NFS client on the public network talking to a NAS server, whose disks hang off a local SCSI bus]

  6. NAS and Servers
  • Redundant web servers share the same data, but they both talk to the same NFS server.
  [Diagram: two web servers with NFS-mounted data on the public network, both served by one NAS server with SCSI-attached disks]

  7. Attached Storage
  • We can also do things like place a RAID array on the NAS server.
  • This works, but it has some limitations:
    • If the server goes down, there is no access to the disks.
    • File sharing goes through the network storage server and across the network, which can be slow.
    • Disk location is limited: disks must be near the server, within range of the disk bus.
    • Adding or removing disk space can be difficult.
  • What we want is a shared disk pool that all servers can access.

  8. Storage Area Network
  • What we want is something that looks like this:
  [Diagram: an NFS client on the public Ethernet network; multiple SAN participants connected to a shared disk pool]

  9. Storage Area Network
  • Notice:
    • You can take down a server and still maintain access to the disk pool via the other SAN participants.
    • Disk added to the pool is available to all servers, not just one.
    • Shared, high-speed access to the disk pool; you can run clustered copies of a SQL database or web server if those databases or web servers are also SAN participants.
    • You can still serve up the disk pool via an NFS or SMB server on a SAN-connected box.
    • "Serverless backups": just send a command to copy blocks from disk A to disk B. Snapshots are easier and backup windows shorter; a SAN participant can handle moving a volume to tape.

  10. Storage Area Network
  • So how does this work? It is a scaled-up version of the old system.
    • The commands sent are the same standard disk commands, either SCSI or ATA bus commands: READ, WRITE, etc.
    • The network connecting the SAN servers to the disks is typically (but not always) higher speed, e.g. Fibre Channel.
    • Some extra glue is needed to allow concurrent access by more than one server: a special shared filesystem.
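To make "the same standard disk commands" concrete, here is a sketch of building a SCSI READ(10) command descriptor block (CDB): opcode 0x28, a 32-bit big-endian logical block address, and a 16-bit transfer length, 10 bytes in all. This layout follows the SCSI block-command standard; whether the bytes then travel over a local bus, Fibre Channel, or TCP/IP is exactly what distinguishes the architectures in this deck.

```python
import struct

def read10_cdb(lba, num_blocks):
    """Build a SCSI READ(10) CDB (10 bytes):
    opcode 0x28, flags, 32-bit LBA, group, 16-bit transfer length, control."""
    return struct.pack(">BBIBHB", 0x28, 0, lba, 0, num_blocks, 0)

# "Read from LUN0, Block 123" as it would appear on the wire:
cdb = read10_cdb(123, 1)
```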

  11. Storage Area Network
  • A popular choice:
    • SCSI for the bus commands (the commands sent over the wire)
    • Fibre Channel for the SAN network
    • EMC or similar for the glue volume software
  • Fibre Channel runs at 2+ Gbit/sec and can be deployed across up to a 500 m distance (sometimes), and up to 70 km with special equipment.

  12. Storage Area Network
  • Another option is to use gigabit Ethernet for the SAN networking.
    • Cheap! Commodity equipment; no need to learn new Fibre Channel gear, and you can reuse existing equipment.
    • But also lower performance: Fibre Channel has higher bandwidth and can use more of it.

  13. ATA Over Ethernet
  • AoE uses Ethernet plus ATA bus commands rather than SCSI. It is low cost, but since Ethernet frames are not routable, all devices must be on the same local network.
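"Not routable" follows directly from the framing: AoE drops its header straight into an Ethernet frame with no IP layer, so nothing in the packet can cross a router. A sketch of that framing, assuming the AoE header layout from the protocol spec (version/flags, error, shelf "major" address, slot "minor" address, command, tag) — the payload and real MAC addressing are omitted:

```python
import struct

AOE_ETHERTYPE = 0x88A2  # registered EtherType for ATA over Ethernet

def aoe_frame(dst_mac, src_mac, major, minor, tag):
    """Sketch of an AoE frame: 14-byte Ethernet header followed by the
    10-byte AoE header. Note there is no IP header anywhere -- addressing
    is by MAC plus shelf/slot, so frames cannot leave the local network."""
    eth = dst_mac + src_mac + struct.pack(">H", AOE_ETHERTYPE)
    ver_flags = 0x10      # protocol version 1 in the high nibble
    command = 0           # 0 = issue ATA command
    aoe = struct.pack(">BBHBBI", ver_flags, 0, major, minor, command, tag)
    return eth + aoe

# Broadcast a command to shelf 1, slot 0 (source MAC is made up):
frame = aoe_frame(b"\xff" * 6, b"\x02\x00\x00\x00\x00\x01", 1, 0, 0x1234)
```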

  14. iSCSI
  • iSCSI uses SCSI bus commands over Ethernet, encapsulated inside TCP/IP.
    • Cheap hardware!
    • Well supported in the Linux, Solaris, and Windows worlds.
    • Because the SCSI traffic is inside TCP/IP, it is routable, which means you can run a SAN across wide area networks (with lower performance due to latency) and do things like mirror for disaster backup, or run across campus on high-performance networks.
    • Processing TCP/IP takes some overhead; some installations use TCP offload chips.
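The encapsulation idea can be shown end to end with a socket pair standing in for the initiator and target. The framing below is a deliberately simplified stand-in (a length prefix plus the CDB) — a real iSCSI PDU has a 48-byte basic header segment — but the layering is the point: the very same SCSI CDB a local bus would carry rides inside a TCP stream, and TCP/IP can be routed anywhere.

```python
import socket
import struct

def encapsulate(cdb):
    """Toy stand-in for an iSCSI PDU: a length-prefixed SCSI CDB.
    (Real iSCSI framing is richer; this only illustrates the layering.)"""
    return struct.pack(">I", len(cdb)) + cdb

# SCSI READ(10) for block 123 -- identical bytes to the direct-attached case.
cdb = struct.pack(">BBIBHB", 0x28, 0, 123, 0, 1, 0)

# A connected socket pair stands in for initiator and target; because the
# transport is TCP/IP, the target could be across campus or across a WAN.
initiator, target = socket.socketpair()
initiator.sendall(encapsulate(cdb))

pdu = target.recv(4096)
(length,) = struct.unpack(">I", pdu[:4])
received_cdb = pdu[4 : 4 + length]   # target unwraps the SCSI command
```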

  15. iSCSI
  • Each "disk"/LUN is a RAID array that understands iSCSI.
  [Diagram: an NFS client on the public Ethernet network; SAN participants reaching iSCSI RAID arrays over a dedicated Ethernet network]

  16. iSCSI
  • The green network is a dedicated (usually) gigabit Ethernet network that carries the SCSI commands encapsulated inside TCP/IP. The red network connects the SAN participants to other clients not on the SAN.
  • Important point: TCP/IP is routable. That means that (modulo latency) the devices can be located anywhere. We could have an iSCSI SAN participant in Root Hall and one in Spanagel; the Root iSCSI server can access the disk pool in Spanagel.
  • We could also have a volume located at Fleet Numeric in the same SAN.
  • The price we pay for this is processing the TCP/IP overhead as iSCSI commands go up the network protocol stack. This can be alleviated in part by TCP offload chips.

  17. Volume Software
  • Remember, the iSCSI targets are just block devices. iSCSI says nothing about concurrent access or multiple hosts accessing the same devices.
  • For that we need a SAN filesystem, which deconflicts concurrent access by hosts to the block devices.
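The deconfliction a SAN filesystem provides boils down to lock management: a host must hold a lock on a block (or block range) before touching it on shared storage. Real SAN filesystems use a distributed lock manager across the participants; this single-process sketch only shows the idea.

```python
class BlockLockManager:
    """Toy lock manager: one exclusive lock per block number.
    (A real SAN filesystem's distributed lock manager coordinates
    this across hosts; here everything lives in one process.)"""

    def __init__(self):
        self.owners = {}   # block number -> host currently holding the lock

    def acquire(self, host, block):
        if self.owners.get(block, host) != host:
            return False   # another host holds this block; caller must wait
        self.owners[block] = host
        return True

    def release(self, host, block):
        if self.owners.get(block) == host:
            del self.owners[block]

locks = BlockLockManager()
assert locks.acquire("hostA", 123)       # hostA wins block 123
assert not locks.acquire("hostB", 123)   # hostB is deconflicted: must wait
locks.release("hostA", 123)
assert locks.acquire("hostB", 123)       # now hostB may write the block
```

Without this layer, two SAN participants writing the same block would silently corrupt each other's data — which is why raw iSCSI LUNs alone are not enough for shared access.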

  18. Volume Software
  [Diagram: an NFS client on the public Ethernet network; volume software assembles the SAN's LUNs into volumes Vol1 and Vol2]

  19. SAN Software
  • The "volume software" allows you to build a concurrent-access filesystem out of one or more LUNs.

  20. iSCSI
  • Example: five compute servers need read access to one weather data set. If the servers are all on the SAN, they can directly access the data.
  • Example: backup. Copy disk blocks directly, then have a tape-drive SAN participant copy them to tape.
  • Example: storage expansion. Just add more disk, and it is available to all SAN participants.

  21. Competitors
  • iSCSI's competitor is, for the most part, Fibre Channel. The concept is almost identical, but the SCSI commands are simply encapsulated in a Fibre Channel frame instead of TCP/IP.
  • Fibre Channel is typically higher performance: more data can be pushed across FC, and there is much less overhead processing FC frames.
  • But it is higher cost.
  • ATA Over Ethernet is very similar to FC in concept, directly inserting the ATA commands into Ethernet frames, but it seems to have less market penetration.
