
Advanced Level: Data Protection



  1. Advanced level: data protection. RAID disk arrays, automated tape libraries, optical CD recording devices.

  2. Introduction. Disks and power supplies are the weakest link of a HAS (high-availability system): disks hold the data, the data must be protected, and the data must be recoverable with the help of additional systems. Disk storage systems (disk subsystems): JBOD (Just a Bunch of Disks); hot-pluggable and warm-pluggable disks; hot spares; write cache; disk arrays; SAN; NAS.

  3. SGI InfiniteStorage Product Line: choose only the integrated capabilities you need.
  • High availability: DAS, NAS, SAN; redundant hardware with FailSafe™ and XVM
  • Data protection: Legato NetWorker, XFS™ Dump, OpenVault™
  • HSM: SGI Data Migration Facility (DMF), TMF, OpenVault™
  • Data sharing: XFS, CIFS/NFS, Samba, Clustered XFS (CXFS™), SAN over WAN
  • Storage hardware: TP900, TP9100, TP9300, TP9400, TP9500, HDS 99x0, STK tape libraries, ADIC libraries, Brocade switches, NAS 2000, SAN 2000, SAN 3000
  (Cracow '03 Grid Workshop)

  4. Bus parameters

  5. Bus parameters, cont.

  6. Comparison, cont.

  7. Mirrored disks and RAID
  • Single disks: MTBF of … hours
  • Mirrored disks (RAID 1): hot-swap, but low space efficiency (~50%)
  • RAID (D.A. Patterson, G. Gibson, R.H. Katz, A Case for Redundant Arrays of Inexpensive Disks (RAID), University of California, Berkeley, 1987)
  • Group(s) of disks controlled jointly
  • Simultaneous writes and reads on different disks
  • System fault tolerance through recording redundant information
  • Cache
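As a concrete illustration of the redundant-information idea, below is a minimal Python sketch (added here, not from the slides) of the XOR parity scheme used by RAID levels 4/5; any single lost block can be rebuilt from the surviving blocks and the parity:

```python
# Minimal sketch of the XOR-parity idea behind RAID levels 4/5 (block
# contents and sizes are illustrative, not tied to any real controller).

def xor_blocks(blocks):
    """XOR equal-sized byte blocks together."""
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

# One stripe spread over three data disks plus one parity block.
data = [b"disk0block", b"disk1block", b"disk2block"]
parity = xor_blocks(data)

# If disk 1 dies, its block is the XOR of the survivors and the parity.
rebuilt = xor_blocks([data[0], data[2], parity])
assert rebuilt == data[1]
print("disk 1 rebuilt:", rebuilt)
```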

  8. RAID, data servers, heavy data traffic. A disk array is typically:
  • housed in a single enclosure with its own power supply,
  • equipped with redundant components to ensure high availability,
  • usually fitted with two or more I/O controllers,
  • equipped with cache memory to speed up communication,
  • configured to minimize the possibility of failure (a hardware implementation of the RAID standard).

  9. RAID, cont. Software vs. hardware:
  • Each computer: redundant interfaces
  • HW RAID: limited to a single disk array; disk cache and a dedicated processor
  • Disk arrays: several buses for several host interfaces, read-ahead buffers
  • SW RAID: flexibility, more cost-effective
  • HW RAID, potential single points of failure: power supplies, cooling, power wiring, internal controllers, battery backup, motherboard

  10. RAID features

  11. Comparison of RAID levels

  12. Choosing a RAID level

  13. Range of applications

  14. Storage Area Network (SAN). Advantages: centralized management and allocation; reliability and availability (extensive failover); network backup; LAN-free backups. [diagram: hosts on the LAN, storage devices on the SAN]

  15. SCSI vs. FC. [diagram comparing SCSI and FC connectivity between computers, hubs (concentrators), and disks: FC provides any-to-any connectivity with 2 connections per host]

  16. Why Fibre Channel?
  • Gigabit bandwidth now: 1 Gb/s today, soon 2 and 4 Gb/s
  • High efficiency: FC has very little overhead
  • Multiple topologies: point-to-point, arbitrated loop, fabric
  • Scalability: from point-to-point, FC scales to complex fabrics
  • Longer cable lengths and better connectivity than existing SCSI technologies
  • Fibre Channel is an ISO and ANSI standard

  17. High throughput. Storage Area Networks provide 1.06 Gb/s today, 2.12 Gb/s this year, and 4.24 Gb/s in the future; multiple channels expand bandwidth further, e.g. to 800 MByte/s.

  18. FC topologies
  • Point-to-point: 100 MByte/s per connection; simply defines the connection between a storage system and a host

  19. FC topologies, cont.
  • FC-AL (Arbitrated Loop). Single loop: data flows around the loop, passed from one device to another. Dual loop: some data flows through one loop while other data flows through the second loop.
  • Each port arbitrates for access to the loop
  • Ports that lose the arbitration act as repeaters

  20. FC topologies, cont.
  • FC-AL, Arbitrated Loop with hubs: hubs make a loop look like a series of point-to-point connections. [diagram: four nodes attached to a hub] Addition and deletion of nodes is simple and non-disruptive to the information flow.

  21. FC topologies, cont.
  • FC switches: switches permit multiple devices to communicate at 100 MB/s each, thereby multiplying bandwidth. [diagram: four nodes attached to a switch] Fabrics are composed of one or more switches; they enable Fibre Channel networks to grow in size.

  22. So how do these choices impact my MPI application performance? Let's find out by running:
  • micro-benchmarks to measure basic network parameters such as latency and bandwidth: Netperf, PALLAS MPI-1 Benchmark
  • real MPI applications: ESI PAM-Crash, ESI PAM-Flow, DWD / RAPS Local Model

  23. Benchmark HW setup
  • 2 HP N4600 nodes, each with 8 processors running HP-UX 11i
  • Lots of NW interfaces per node: 4 x 100BT Ethernet, 4 x Gigabit Ethernet copper, 1 x Gigabit Ethernet fibre, 1 x HyperFabric 1
  • Point-to-point connections, no switches involved
  • J6000 benchmarks were run on a 16-node cluster at the HP Les Ulis benchmark center (100BT and HyperFabric 1 switched networks)
  • HyperFabric 2 benchmarks were run on the L3000 cluster at the HP Richardson benchmark center

  24. Jumbo frames: same performance with fibre and copper Gigabit? Gigabit Ethernet allows the packet size (the Maximum Transmission Unit, MTU) to be increased from 1500 bytes to 9000 bytes. On HP-UX this can be done at any time by invoking lanadmin -M 9000 <nid>, and it can be applied to both copper and fibre Gigabit interfaces.
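To see why a larger MTU helps, here is a small illustrative Python calculation (an addition, assuming standard header sizes: 40 bytes of TCP/IPv4 headers and 38 bytes of Ethernet framing per packet) of payload efficiency at both MTU sizes:

```python
# Back-of-the-envelope arithmetic for jumbo frames (illustrative numbers:
# 40 bytes of TCP/IPv4 headers per packet and 38 bytes of Ethernet framing,
# i.e. header + FCS + preamble + inter-frame gap).
ETH_FRAMING = 38
IP_TCP_HDR = 40

def payload_efficiency(mtu):
    """Fraction of wire bandwidth carrying TCP payload at a given MTU."""
    payload = mtu - IP_TCP_HDR     # TCP payload bytes per packet
    on_wire = mtu + ETH_FRAMING    # bytes the wire actually carries
    return payload / on_wire

for mtu in (1500, 9000):
    print(f"MTU {mtu}: {payload_efficiency(mtu):.1%} payload, "
          f"{1 - payload_efficiency(mtu):.1%} overhead")

# The bigger practical win is ~6x fewer packets per transferred byte,
# i.e. correspondingly less per-packet interrupt and TCP/IP CPU work,
# which is exactly the OS overhead discussed on the HMP slides below.
```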

  25. GigaBit MTU Size / Fibre vs Copper

  26. What is the Hyper Messaging Protocol (HMP)? Standard MPI messaging with remote nodes goes through the TCP/IP stack, and massive parallelism with clusters is limited by the OS overhead for TCP/IP. Example: PAM-Crash MPP on an HP J6000 HyperFabric workstation cluster (BMW data set) showed 25% system usage per CPU on an 8x2 cluster and 45% system usage per CPU on a 16x2 cluster. This overhead is OS-related and has nothing to do with NW interface performance.

  27. What is HMP? Idea: create a shortcut path for MPI applications in order to bypass some of the TCP/IP overhead of the OS. Approach: move the device driver from the OS kernel into the application; this requires direct HW access privileges. Now available with HP MPI 1.7.1, and only for HyperFabric HW.

  28. MPI benchmarking with the PALLAS MPI-1 benchmark (PMB). PMB is an easy-to-use MPI benchmark for measuring key parameters such as latency and bandwidth; it can be downloaded from http://www.pallas.de/pages/pmbd.htm. Only 1 PMB process per node was used, to make sure that NW traffic is not mixed with SMP traffic. Unlike the netperf test, this is not a throughput scenario.

  29. Selected PMB operations
  • PingPong with a 4-byte message (MPI_Send, MPI_Recv): measures message latency
  • PingPong with a 4 MB message (MPI_Send, MPI_Recv): measures half-duplex bandwidth
  • SendRecv with a 4 MB message (MPI_Sendrecv): measures full-duplex bandwidth
  • Barrier (MPI_Barrier): measures barrier latency
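As an aside, the PingPong pattern is easy to reproduce; the sketch below is a hedged Python/mpi4py rendition of it (PMB itself is a separate benchmark suite, so this only mirrors the measurement idea):

```python
# Hedged sketch of a PMB-style PingPong using mpi4py (an assumption; PMB
# is not a Python code). Run with two processes, e.g.:
#   mpirun -np 2 python pingpong.py
import time
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

def pingpong(nbytes, iters=100):
    """Average one-way time for a message of nbytes, as PMB reports it."""
    buf = bytearray(nbytes)
    comm.Barrier()                      # start both ranks together
    t0 = time.perf_counter()
    for _ in range(iters):
        if rank == 0:
            comm.Send(buf, dest=1)
            comm.Recv(buf, source=1)
        else:
            comm.Recv(buf, source=0)
            comm.Send(buf, dest=0)
    round_trip = (time.perf_counter() - t0) / iters
    return round_trip / 2               # one-way time

small = pingpong(4)                     # both ranks make identical calls
big = pingpong(4 * 2**20, iters=10)
if rank == 0:
    print(f"latency ~ {small * 1e6:.1f} usec, "
          f"bandwidth ~ {4 * 2**20 / big / 2**20:.1f} MB/sec")
```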

  30. Selected PMB results

                                 PingPong 4 B   Barrier     PingPong 4 MB   SendRecv 4 MB
      100baseT                   61 μsec        112 μsec    10.8 MB/sec     22.1 MB/sec
      GigaBit 1.5k MTU           72 μsec        135 μsec    62.7 MB/sec     79.8 MB/sec
      GigaBit 9k MTU             72 μsec        142 μsec    62.1 MB/sec     92.8 MB/sec
      HyperFabric 1              61 μsec        125 μsec    56.8 MB/sec     110.8 MB/sec
      HyperFabric 2              53 μsec        105 μsec    97.0 MB/sec     160.0 MB/sec
      HyperFabric 2 LowFat MPI   27 μsec        54 μsec     112.0 MB/sec    143.6 MB/sec
      Shared Memory              1.7 μsec       1.8 μsec    544.8 MB/sec    548.0 MB/sec

  31. PingPong as measured

  32. PingPong per the formula t = t0 + n*fmax (a linear model: a fixed latency t0 plus a per-byte transfer cost whose inverse is the asymptotic bandwidth)
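One way to recover t0 and the asymptotic bandwidth from measured points is a least-squares fit of that linear model; the Python sketch below uses made-up sample values, shaped only roughly like the GigaBit row in the results table above:

```python
# Illustrative least-squares fit of the linear model t(n) = t0 + f*n.
# The sample points are invented for the example (roughly GigaBit-like);
# f is the per-byte cost, 1/f the asymptotic bandwidth.
samples = [            # (message size in bytes, one-way time in seconds)
    (4,       72e-6),
    (1024,    90e-6),
    (65536,   1.1e-3),
    (1 << 22, 67e-3),
]

# Ordinary least squares for t = t0 + f*n, no external libraries needed.
n = len(samples)
sx = sum(x for x, _ in samples)
sy = sum(y for _, y in samples)
sxx = sum(x * x for x, _ in samples)
sxy = sum(x * y for x, y in samples)

f = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # seconds per byte
t0 = (sy - f * sx) / n                          # zero-byte latency

print(f"t0 ~ {t0 * 1e6:.0f} usec, "
      f"asymptotic bandwidth ~ {1 / f / 2**20:.1f} MB/sec")
```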

  33. SAN (Storage Area Network) features
  • Centralized management
  • Storage consolidation / shared infrastructure
  • High availability and disaster recovery
  • High bandwidth
  • Scalability
  • Shared data!? (unfortunately not easy!)
