
Agenda

phanson

Presentation Transcript


  1. Agenda

  2. Agenda
  • Introduction to VMAX3
  • System configurations
  • Quick comparison with VMAX2 families
  • New engines
  • Front-end IO modules
  • Management modules
  • Drive enclosures
  • Drives
  • Virtual Matrix IB interconnect
  • HYPERMAX OS

  3. VMAX3 Introduction

  4. Today we have VMAX
  • VMAX 40K, up to: 8 engines, 3200 2.5” drives, 4 PBu, 2 TBr cache, 128 ports
  • VMAX 20K, up to: 8 engines, 3200 2.5” drives, 2 PBu, 1 TBr cache, 128 ports
  • VMAX 10K, up to: 4 engines, 1200 2.5” drives, 1.5 PBu, 512 GBr cache, 64 ports

  5. Introducing VMAX3
  • VMAX 400, up to: 8 engines, 5760 drives, 4 PBu, 16 TBr cache, 256 ports
  • VMAX 200, up to: 4 engines, 2880 drives, 2.1 PBu, 8 TBr cache, 128 ports
  • VMAX 100, up to: 2 engines, 1440 drives, 0.5 PBu, 2 TBr cache, 64 ports

  6. VMAX3 Platform Highlights
  • Radical simplicity: integrated system/drive bays (standard 24” wide rack); native 6 Gb/s SAS back end (up to 4 PB usable capacity)
  • Extreme performance: Ivy Bridge based engines (faster performance); new uses for flash (vaulting, metadata)
  • Massive scale: Virtual Matrix interconnect (56 Gb/s InfiniBand); engine bay separation (dispersion up to 25 m)
  • Ultra density

  7. VMAX 100 (sn x968)
  • 1-2 engines
  • 2.1 GHz, 24 Ivy Bridge cores
  • 512 GB or 1 TB memory
  • Up to 0.5 PBu capacity (using 4 TB drives)
  • Up to 64 FE ports
  • Up to 1440 2.5” or 720 3.5” drives
  • InfiniBand fabric (12-port Matrix)
  • Integrated service processor (MMCS)

  8. VMAX 200 (sn x967)
  • 1-4 engines
  • 2.6 GHz, 32 Ivy Bridge cores
  • 512 GB, 1 TB or 2 TB memory
  • Up to 2.1 PBu capacity (using 4 TB drives)
  • Up to 128 FE ports
  • Up to 2880 2.5” or 1440 3.5” drives
  • InfiniBand fabric (12-port Matrix)
  • Integrated service processor (MMCS)

  9. VMAX 400 (sn x972)
  • 1-8 engines
  • 2.7 GHz, 48 Ivy Bridge cores
  • 512 GB, 1 TB or 2 TB memory
  • Up to 4 PBu capacity (using 4 TB drives)
  • Up to 256 FE ports
  • Up to 5760 2.5” or 2880 3.5” drives
  • InfiniBand fabric (18-port Matrix)
  • Integrated service processor (MMCS)

  10. Common Platform Features
  • Differentiation by type and quantity of engines (i.e. performance, system drive count, capacity)
  • Rack configurations with one or two engines
  • Single-engine systems have no fabric; multi-engine systems have a fabric
  • Online fabric upgrade when engine #2 is added
  • System bay dispersion: up to 25 meters from System Bay 1
  • No storage bay dispersion

  11. Common Platform Features
  • Vault to Flash in the engine (Flash IO module): no vault drives, no battery backup on any drives
  • Up to 4 FE IO modules per director
  • 4-port, 16 Gb/s FC IO module
  • Support for third-party racking

  12. Common Platform Features
  • System configuration rules are the same
  • Max 720 2.5” drives / 360 3.5” drives per engine
  • DAE mixing in single increments
  • IO module population order (see later)
  • Drive/RAID protection offerings: RAID1, RAID5 3+1 and 7+1, RAID6 6+2 and 14+2
  • Dispersion capability (see later)
  • Mixing 2.5” and 3.5” DAEs behind an engine
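The RAID offerings above trade usable capacity for protection differently. As a rough illustration (not from the source; real usable capacity also depends on spares, vault space and formatting overhead), the usable fraction of raw capacity for each scheme is the data-member count divided by the total member count:

```python
# Usable-capacity fraction for the RAID schemes listed above.
# Illustrative sketch only; actual array capacity accounting differs.
def usable_fraction(data_members: int, parity_members: int) -> float:
    """Fraction of raw group capacity available for user data."""
    return data_members / (data_members + parity_members)

SCHEMES = {
    "RAID1": (1, 1),        # mirrored pair: one data copy, one mirror
    "RAID5 3+1": (3, 1),
    "RAID5 7+1": (7, 1),
    "RAID6 6+2": (6, 2),
    "RAID6 14+2": (14, 2),
}

for name, (d, p) in SCHEMES.items():
    print(f"{name}: {usable_fraction(d, p):.1%} usable")
```

For example, RAID5 7+1 and RAID6 14+2 both keep 87.5% of raw capacity, but RAID6 survives two drive failures per group at the cost of wider groups.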

  13. System Configurations

  14. Single Engine System Bay
  • System Bay 1: SPS for the engine; InfiniBand Matrix (with its own SPS); engine; zone A & B system PDU; KVM / Ethernet switch; direct DAEs and daisy-chain DAEs
  • System Bays 2-n: no IB Matrix (and no SPS for it); a work tray replaces the KVM / Ethernet switch; daisy-chain DAEs

  15. Dual Engine System Bay
  • System Bay 1: SPS for Engine 1 and Engine 2; InfiniBand Matrix (with its own SPS); Engines 1 and 2; zone A & B system PDU; KVM / Ethernet switch; direct DAEs for each engine
  • System Bays 2-4: no IB Matrix (and no SPS for it); a work tray replaces the KVM / Ethernet switch

  16. Expansion Storage Bay (used with the dual-engine system bay only)
  • Zone A & B system PDU
  • Engine 1 daisy-chain DAEs
  • Engine 2 daisy-chain DAEs

  17. System Configs
  • Build one DAE at a time
  • At the decision point, either keep adding DAEs (remaining a single-engine system bay) or add an engine (becoming a dual-engine system bay), then continue adding DAEs

  18. VMAX3 100K Configurations
  • Single engine/rack: System Bay 1 and System Bay 2
  • Dual engine/rack: System Bay 1 with Engines 1 and 2

  19. VMAX3 200K Configurations
  • Single engine/rack: System Bays 1-4, one engine each
  • Dual engine/rack: System Bays 1-2, Engines 1-4

  20. VMAX3 400K Configurations (single-engine system bay)
  • System Bays 1-8, one engine each (Engines 1-8)

  21. VMAX3 400K Configurations (dual-engine system bay)
  • System Bays 1-4, two engines each

  22. VMAX3 Dispersed Array, single engine per rack
  • Engines 1-8, each rack up to 25 meters from System Bay 1

  23. VMAX3 Dispersed Array, dual engine per rack
  • Up to 25 meters from System Bay 1

  24. Quick Comparisons

  25. 10K vs VMAX3 100K

  26. 20K vs VMAX3 200K

  27. 40K vs VMAX3 400K

  28. New Engines

  29. New Engines: Megatron-lite, Megatron, Megatron-Heavy
  • 4U enclosure
  • 2 director boards, each with its own redundant power and cooling
  • 2 management modules
  • 11 IO slots per director

  30. Megatron Sled

  31. New Engines

  32. Engine Physical Port Numbering
  • IO module slots are numbered 0-10, left to right
  • Ports are numbered 0-3, bottom to top, on each IO module
  • IO modules with 2 ports number them 0 and 1, bottom to top
  [Slide diagram: physical port layout across slots 0-10 for Director 1 (A) and Director 2 (B); legend: Management Module, Vault to Flash, Universal/FE, Back-End, Fabric.]

  33. Engine Logical Port Numbering
  • SW is designed to support 32 logical ports (ports 0-31)
  • Ports 0-3 in slot 1 and ports 20-23 in slot 7 are reserved for future use
  • Logical ports are numbered left to right, bottom to top, across slots 1-5 and 7-9
  [Slide diagram: logical port numbers across slots 0-10 for Director 1 (A) and Director 2 (B); legend: Management Module, Vault to Flash, Universal/FE, Back-End, Fabric.]
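The numbering rule above (four logical ports per slot, assigned left to right across slots 1-5 and 7-9, with the slot 1 and slot 7 ports reserved) can be sketched as a small helper. This mapping is inferred from the slide, not taken from EMC documentation:

```python
# Logical port numbering inferred from the slide: four logical ports per
# slot across slots 1-5 and 7-9; slots 0, 6 and 10 carry no logical ports.
PORT_SLOTS = [1, 2, 3, 4, 5, 7, 8, 9]
RESERVED_SLOTS = {1, 7}   # logical ports 0-3 and 20-23, reserved for future use

def logical_port(slot: int, phys_port: int) -> int:
    """Map (slot, physical port 0-3 bottom-to-top) to a logical port 0-31."""
    if slot not in PORT_SLOTS:
        raise ValueError(f"slot {slot} carries no logical ports")
    if not 0 <= phys_port <= 3:
        raise ValueError("physical ports are numbered 0-3")
    return PORT_SLOTS.index(slot) * 4 + phys_port
```

For example, slot 8 port 0 maps to logical port 24 and slot 9 port 3 to logical port 31, matching the slide's layout.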

  34. Front-End IO Module Population Order
  • Back-end IO modules are always present in slots 4 & 5
  • Front-end IO modules are added in pairs in slots 2, 3 & 8, 9, with the same IO module in the same slot on both directors
  • Non-bifurcated IO modules are populated left to right: 8 Gb/s Fibre Channel, 10 Gb/s RDF, 1 Gb/s RDF, compression
  • Bifurcated IO modules are populated right to left: 16 Gb/s Fibre Channel
  • The Matrix interconnect only works in slot 10
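The population rule above can be sketched as follows; treating the two slot pairs (2, 3 and 8, 9) as one flat left-to-right sequence is an assumption beyond what the slide states:

```python
# Front-end slot fill order per director, per the rule above.
# Assumption: the pairs (2,3) and (8,9) form one flat sequence.
FE_SLOTS = [2, 3, 8, 9]

def population_order(bifurcated: bool) -> list[int]:
    """Non-bifurcated modules (8G FC, RDF, compression) fill left to
    right; bifurcated modules (16G FC) fill right to left."""
    return list(reversed(FE_SLOTS)) if bifurcated else list(FE_SLOTS)
```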

  35. Emulation Types & Slice Mapping
  • Each director board supports up to 8 slices, A through H
  • Slice A is used for the Infrastructure Management ("IM") emulation
  • Slice B is used for Enginuity Data Services ("EDS")
  • Slice C is used for the DS emulation
  • Slices D through H are used for the remaining emulations, e.g. Fibre (FA)
  • Each director board can support up to 5 emulations, with each emulation type appearing only once
  [Slide diagram: slice stack for Director 1 — A: IM, B: EDS, C: DS, D-H: remaining emulations, with FA shown on slice E.]
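The slice layout above can be captured as a small table; note that the FA placement on slice E reproduces the slide's example and is not a fixed rule:

```python
# Slice-to-emulation mapping from the slide. Slices A-C are fixed;
# D-H host the remaining emulations (FA on slice E is illustrative only).
SLICE_MAP = {
    "A": "IM",    # Infrastructure Management
    "B": "EDS",   # Enginuity Data Services
    "C": "DS",    # back-end data services emulation
    "E": "FA",    # example front-end (Fibre) emulation
}

def fixed_slices() -> dict:
    """Slices whose emulation is the same on every director board."""
    return {s: e for s, e in SLICE_MAP.items() if s in "ABC"}
```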

  36. Front End IO Modules

  37. 8 Gb/s FC IO Module
  • Main features: quad-port 2/4/8 Gb/s Fibre Channel interface; 8-lane PCIe Gen 2.0 interface; either copper (4G) or optical (8G) media via SFP+ connector
  • Chipset: PMC-Sierra Tachyon QE8 Fibre Channel controller; Altera Cyclone EP1C3T100C8N FPGA for LED control; TI TUSB3410 RS-232-to-USB converter
  • Support for common features: power features (POLs, IOPIF, sequencer and LC filter); LED support (power/mark, port); FRU and hot-swappable; I2C interface to board resume and power sequencer
  • Dimensions: 3”w x 1.25”h x 7.875”d

  38. 16 Gb/s FC IO Module
  • Main features: quad-port 4/8/16 Gb/s Fibre Channel interface; two 4-lane PCIe Gen 3 interfaces (bifurcated connection); optical media via SFP+ connector (SM or MM)
  • Chipset: dual Emulex Lancer dual-port 16G FC controllers; 256 Mbit flash (FW storage); 256K serial EEPROM (event log) for Lancer
  • Support for common features: power features (POLs, IOPIF, sequencer and LC filter); LED support (power/mark, port); FRU and hot-swappable; I2C interface to board resume and power sequencer
  • Dimensions: 3”w x 1.25”h x 7.875”d

  39. Other IO Modules

  40. Back-End SAS IO Module
  • Main features: dual-port 4-lane 6G SAS interface; 8-lane PCIe Gen 3.0 interface; copper media via mini-SAS HD connector; IEEE 1619-compliant AES encryption and T10 DIF
  • Chipset: PMC-Sierra SPCv 8x6G (PM8009) SAS controller; serial EEPROM (PCIe parameters) for PM8009; flash device for PM8009 firmware storage
  • Support for common features: power features (POLs, IOPIF, sequencer and LC filter); LED support (power/mark, ports); FRU and hot-swappable; I2C interface to board resume and power sequencer
  • Dimensions: 3”w x 1.25”h x 7.875”d

  41. Management Modules

  42. Akula II Management Module
  • Akula LF (SAN) main features: push-button RESET/NMI switch; 2 Ethernet connections for the management LAN and service port; 2 serial ports on micro-DB9 connectors for SPS; resume PROM containing VPD information
  • Akula LF chipset: Broadcom BCM53115MKFBG 10/100/1000BASE-TX 6-port switch; NXP LPC2132 microcontroller
  • Management common features: power features (POLs, IOPIF, sequencer and LC filter); LED support (power/mark); FRU and hot-swappable; I2C interface to board resume and power sequencer; dimensions 3”w x 1.25”h x 7.875”d
  • New features in Akula II: monitors the status of the "other" SPS; provides switchable power on the USB port

  43. MMCS Definition
  • MMCS (Management Module Control Station): a management module (MM) with an embedded service processor (MiniSP)
  • Note: the service processor (SP/MiniSP) is often referred to as the MMCS, and vice versa

  44. MMCS in the VMAX3 Engine
  • Each engine has two MMs/MMCSs
  • An MMCS is named after the director next to it
  • An MMCS is cooled by its director; if the director goes down, the MMCS follows

  45. Why Multiple MMCSs?
  • To run maintenance that affects MMCS state (MMCS, director, DIMM and engine replacement): maintenance runs from a peer MMCS that is not affected
  • Also: MMCS health monitoring by peer(s); external connectivity redundancy; SP redundancy (limited at GA); possibly more computation power if needed (future)

  46. MMCS Roles

  47. MMCS Roles Overview
  • A role defines the subset of services that a specific MMCS can provide
  • The roles: primary (similar to the legacy single SP, Feldman); secondary (runs a subset of the activity); elevated secondary (a secondary with a few more responsibilities when the primary is down)
  • Why not identical? It keeps RCA simple (most activity on the same SP) and eases maintenance; only one MMCS should call home on symm errors and invoke active actions on errors; it is much simpler to implement (less coordination, less testing, etc.)

  48. Drive Enclosures

  49. VMAX3 DAE60 (3.5”)
  • Platform features: 60 3.5” SAS or SATA disk drives; high availability (Juno power supply); 6G SAS connectivity, 8 x4 mini-SAS connectors
  • Mechanical: 4U, 19” wide NEMA, 39.5” deep; 225 lbs fully populated with 60 drives; carrier with paddle card; NEBS Level 3 certified; serviceable up to 31U in the rack without a ladder
  • Power/cooling: input 1340 W, output 1200 W, 90%+ efficient; N+1 fans with adaptive cooling; four power rails on the baseboard; four power cords (2 on each rail); 15 W per drive slot

  50. VMAX3 DAE120 (2.5”)
  • Logic: 6 Gb/s SAS connectivity; 8 x4 mini-SAS connectors
  • Mechanical: 3U, 19” wide NEMA, 39.5” deep; loaded weight 150 lbs; max 120 2.5” drives; installs in the Titan-D rack
  • Power/cooling: max output power 2160 W (10 W per drive slot); N+1 power architecture; 4 AC power inputs; SPS support; adaptive cooling, N+2 fans
