
R/Evolution 2000 Series




Presentation Transcript


  1. R/Evolution 2000 Series Introduction As of Year 2007

  2. Agenda • Short Sales Presentation for Customers • Product Family Introduction • Specifications and Architecture • 2730 Installation • WBI (Web Based Interface) GUI Introduction • Vdisks and Volumes • CLI (Command Line Interface) Introduction • Troubleshooting Suggestions • Upgrading Firmware • Practice Exercises

  3. R/Evolution 2000 Series Product Family Introduction

  4. Terms Used • Controller Tray – An enclosure with one or two RAID I/O Modules • Expansion Tray – An enclosure with one or two expansion (JBOD) I/O Modules • RAID I/O Module – A FRU containing the RAID controller and host I/O module. Each controller tray contains one or two RAID I/O Modules. • Expansion I/O Module – A FRU with simple disk interface capabilities (no RAID). Each expansion tray contains one or two expansion I/O Modules. • WBI – Web Browser Interface • CLI – Command Line Interface

  5. Product Description • Single- and dual-controller hardware configurations • 2U, efficient packaging • Redundant cooling • Hot-swap components • DAS and SAN connectivity • Windows, Linux, Solaris • Up to four direct-connect hosts • Targeted at SME and SMB environments • Fibre Channel Arbitrated Loop and switch protocols • Supports 2 to 56 disks • 250 GB, 500 GB, and 750 GB 7200 RPM SATA drive capacities • 73 GB, 146 GB, and 300 GB 15K RPM SAS drive capacities • RAID 0, 1, 3, 5, 6, 10, 50 data protection • Hot spare support (local and global) • Embedded management software is platform independent
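As a rough illustration of how the supported RAID levels trade raw capacity for protection, usable vdisk capacity can be estimated as follows. This is a simplified sketch only; a real array reserves space for metadata, and exact figures depend on the firmware.

```python
def usable_capacity_gb(raid_level, disk_count, disk_gb):
    """Rough usable-capacity estimate for one vdisk.
    Ignores metadata overhead and the array's own rounding."""
    if raid_level == 0:
        data_disks = disk_count              # striping, no redundancy
    elif raid_level in (1, 10):
        data_disks = disk_count // 2         # mirrored pairs
    elif raid_level in (3, 5):
        data_disks = disk_count - 1          # one disk's worth of parity
    elif raid_level == 6:
        data_disks = disk_count - 2          # two disks' worth of parity
    elif raid_level == 50:
        data_disks = disk_count - 2          # assuming two RAID-5 sub-vdisks
    else:
        raise ValueError("unsupported RAID level")
    return data_disks * disk_gb

# Example: eight 500 GB SATA disks
print(usable_capacity_gb(5, 8, 500))   # RAID 5  -> 3500
print(usable_capacity_gb(6, 8, 500))   # RAID 6  -> 3000
print(usable_capacity_gb(10, 8, 500))  # RAID 10 -> 2000
```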

  6. Product Comparison

  7. Chassis Tour (diagram callouts: 12 drive sleds, status LEDs, tray ID, controller modules, power & cooling modules)

  8. Chassis Tour - Disk Module (diagram callouts: disk slot order)

  9. Chassis Tour – Status LEDs (diagram callouts: Unit Locator, Fault/Service, Power, Temperature Fault)

  10. Chassis Tour - Controller Module (diagram callouts: FC Host Port 0, FC Host Port 1, Service Port, CLI Port, Expansion Port, Ethernet Port)

  11. Chassis Tour - Controller LEDs (diagram callouts: Host Activity, Link Speed, Link Status, System Locator, OK to Remove, Ethernet Link Status, Expansion Port Status, Fault/Service, Ethernet Activity, Cache Status, FRU Status)

  12. Chassis Tour - Expansion Module LEDs (diagram callouts: SAS In Port, SAS Out Port, Service Port, OK to Remove, SAS Out Port Status, System Locator, Fault/Service Required, Power, SAS In Port Status)

  13. Chassis Tour - Power and Cooling Module (diagram callouts: Ejection Handle, AC Power Good, DC-Fan Fault/Service Required, Power Switch, Power Input)

  14. R/Evolution Specifications and Architecture

  15. Mechanical Layout (diagram callouts: Power Supply, Drive Sled, I/O Module, Mid-Plane)

  16. Controller Module Scalability • Leverage existing chassis • ‘Plug in’ CPU • ‘Plug in’ HIMs (Host Interface Modules) (diagram labels: HIM, RAID I/O)

  17. Controller Module Architecture • Host Interface Mezzanine (HIM) board (FC now; iSCSI and SAS to follow) • Storage Controller Domain provides: RAID, FPGA, SimulCache, EcoStor, comm channels • SimulCache path links Controller A and Controller B (diagram labels: PBC, FC0/FC1 host ports, FC Controller, SAS Controller, Management Controller, SAS Expander, EXP, dongle, SATA disks)

  18. Controller Module Architecture (cont.) • Storage Controller: Intel Celeron 566 MHz CPU in each controller • Management Controller connection provides: WBI, SNMP, CLI, DMS, SMI-S, and e-mail notification

  19. Controller Module Architecture (cont.) • The SAS domain provides access to all 56 disk drives

  20. Array Installation

  21. Installation Overview • Unpack the array • Obtain the necessary accessories and equipment • Mount the controller and expansion trays in a rack or cabinet • Connect the AC power to the two power modules • Perform initial power-up • Connect the management hosts to the controller tray • Connect the data hosts to the controller tray • Use WBI or the CLI to set the Ethernet IP address, netmask, and gateway address for each controller module • Use WBI or the CLI to set the array date and time and change the management password • Set the basic array configuration parameters • Plan and implement your storage configuration

  22. Rack Mounting Steps (illustrated steps 1 through 3)

  23. Cabling Expansion Trays Cable from the SAS ‘Out’ port on the RAID I/O board to the SAS ‘In’ port on the expansion I/O board.

  24. Cabling Expansion Trays (cont.) Repeat the cabling steps for the second data path…

  25. Cabling Additional Expansion Trays • Adding another expansion tray uses the same principle: cable from the SAS ‘Out’ ports to the SAS ‘In’ ports • You can cable up to 4 JBODs • When powering up the system, power on the last expansion module first and work your way up to the controller module
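The out-to-in cabling rule and the four-JBOD limit above can be expressed as a small check. This is a toy illustration only; the port names are made up for the example and are not part of the array's software.

```python
MAX_JBODS = 4  # per the slide: up to 4 expansion trays per chain

def validate_chain(hops):
    """hops: ordered (source_port, destination_port) pairs down one
    SAS path, e.g. ("RAID Out", "JBOD1 In"). Every hop must run from
    an 'Out' port to an 'In' port, and one chain may attach at most
    MAX_JBODS expansion trays."""
    if len(hops) > MAX_JBODS:
        raise ValueError("more than 4 expansion trays on one chain")
    for src, dst in hops:
        if not (src.endswith("Out") and dst.endswith("In")):
            raise ValueError(f"must cable Out -> In, got {src} -> {dst}")
    return True

print(validate_chain([("RAID Out", "JBOD1 In"), ("JBOD1 Out", "JBOD2 In")]))  # True
```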

  26. Redundant Enclosure Cabling • Reverse cabling: cable down one side, then up the other • If the middle JBOD fails, all data on the non-failing JBODs can still be accessed • Improved system redundancy and improved data availability (diagram labels: IOM A/B, In/Out ports, RAID A/B, SAS expansion channels)

  27. 2730 Interface Connections

  28. Establishing Connection – Connecting the Management Hosts • RS232 Port: micro DB9 port; requires a special cable; used to access the CLI; does not require additional software • Ethernet Port: used to manage the array using the Web Based Interface or the CLI

  29. Connecting to a terminal emulator • Start and configure a terminal emulator, such as HyperTerminal, using the following settings • Display settings: terminal emulation mode VT-100 or ANSI (for color support); font Terminal; translations None; columns 80 • Connection settings: connector COM1 (typically); baud rate 115,200 bits/sec; data bits 8; parity None; stop bits 1; flow control None • Make sure the cable is connected prior to starting the terminal emulator

  30. Setting up the IP Address • The command-line interface (CLI) embedded in each controller module enables you to access the module using RS-232 communication and terminal emulation software • For detailed information about the CLI, use the “help” command in the CLI • From your network administrator, obtain an IP address, subnet mask, and gateway address for each controller • Each controller must have a different IP address • Use the provided micro-DB9 serial cable to connect controller A to a serial port on a host computer • Your package contents include a micro-DB9-to-DB9 serial cable

  31. Applying the IP Address • At the prompt (#), type the following command to set the IP address for controller A • set network-parameters ip <address> netmask <netmask> gateway <gateway> controller a • where: • address is the IP address of the controller • netmask is the subnet mask, in dotted-decimal format • gateway is the IP address of a default router • a|b specifies the controller whose network parameters you are setting • Repeat the command with controller b to set the IP address for controller B • Type the following command to verify the new IP addresses: • show network-parameters • Exit the CLI by typing ‘exit’ at the command prompt • Verify Ethernet connectivity by pinging the IP addresses • You can now manage the storage through the web interface
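Putting the steps above together, a session at the CLI prompt might look like the following. The IP addresses here are placeholders for the example; substitute the values obtained from your network administrator.

```
# set network-parameters ip 10.0.0.10 netmask 255.255.255.0 gateway 10.0.0.1 controller a
# set network-parameters ip 10.0.0.11 netmask 255.255.255.0 gateway 10.0.0.1 controller b
# show network-parameters
# exit
```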

  32. VDisks

  33. What is a VDisk? • Virtual Disk • A collection of disks set to a specific RAID level • Volumes are carved out of the VDisk • VDisk properties: • Can have an associated global spare • Can be expanded • Can contain up to 16 disks

  34. Hierarchy • Individual disk drives combine to make a RAID array – a VDisk • VDisks are carved up into volumes / LUNs / partitions • VDisks are owned by one controller (hence their volumes and LUNs are too) • Volumes are presented to hosts by assigning LUN numbers • If a controller fails, its LUNs are “failed over” to the other controller automatically • The surviving controller presents the original controller identity and LUN IDs • When the failed controller is replaced, the LUNs automatically fail back
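The ownership and failover behavior described above can be sketched as a toy model. The class and function names are illustrative only, not the array's firmware.

```python
class Controller:
    """Minimal stand-in for one RAID controller and the LUNs it presents."""
    def __init__(self, name):
        self.name = name
        self.failed = False
        self.luns = {}          # lun_id -> volume name

    def present(self, lun_id, volume):
        self.luns[lun_id] = volume

def fail_over(failed, survivor):
    """Move all LUNs from a failed controller to its partner,
    keeping the original LUN IDs, as the slide describes."""
    failed.failed = True
    survivor.luns.update(failed.luns)
    failed.luns = {}

a, b = Controller("A"), Controller("B")
a.present(0, "vol-sales")
b.present(1, "vol-eng")
fail_over(a, b)
print(sorted(b.luns))   # B now presents LUN 0 and LUN 1
```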

  35. Relationship of Logical and Physical Storage Components • Data hosts – mapped to storage using LUNs • Volumes (alternative terms: partitions, LUNs, logical disks, logical groups) • Virtual disks (alternative terms: RAID set, RAID array) • Disk drives

  36. Managing Spares • Two types of spares: • VDisk spare – assigned to a specific virtual disk • Global spare – available for any failed disk in any redundant virtual disk • Dynamic spares: when a disk is replaced, the controller rescans the bus, finds the new disk, and automatically starts reconstruction of the virtual disk • When a disk fails, the array looks for a vdisk spare first • If it does not find a properly sized vdisk spare, it looks for a global spare • If a reconstruct does not start automatically, no valid spares are available • To start a reconstruct, you must: • Replace the failed disk • Enable dynamic spares, OR add the new disk as a vdisk spare to the virtual disk or as a global spare • Remember that any global spares added might be used by any critical virtual disk, not necessarily the virtual disk you want
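The spare-selection order described above (vdisk spare first, then global spare, and only if the spare is properly sized) can be sketched as follows. This is illustrative pseudologic, not the firmware's actual algorithm.

```python
def pick_spare(failed_disk_gb, vdisk_spares, global_spares):
    """Return the first usable spare size, searching the vdisk's own
    spares before the global pool. A spare is only 'properly sized'
    if it is at least as large as the failed disk."""
    for pool in (vdisk_spares, global_spares):
        for spare_gb in pool:
            if spare_gb >= failed_disk_gb:
                return spare_gb
    return None   # no valid spare: reconstruction will not start

print(pick_spare(300, [300], [500]))   # vdisk spare wins -> 300
print(pick_spare(300, [146], [500]))   # vdisk spare too small -> 500
print(pick_spare(300, [], []))         # None: a spare must be added
```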

  37. VDisk Reconstruction • The array automatically reconstructs redundant virtual disks if a virtual disk becomes critical and a properly sized spare drive is available • A virtual disk becomes critical when one or more member drives fail • If a reconstruct does not start automatically, no valid spares are available • Remember that any global spares added might be used by any critical virtual disk, not necessarily the virtual disk you want • Note – Although the critical virtual disk icon is displayed while the virtual disk is reconstructing, you can continue to use the virtual disk • Note – Once you start reconstructing a virtual disk, you cannot stop it
