
Stretching an Oracle DB Across Sites with EMC VPLEX



Presentation Transcript


  1. Stretching an Oracle DB Across Sites with EMC VPLEX
  Matthew Kaberlein, Application Practice - Oracle Solutions

  2. Discussion Flow
  • A Few Points To Remember From Discussion
  • VPLEX Overview & Use Cases
  • Stretched Oracle RAC With EMC VPLEX Blueprint
  • RAC-on-VPLEX I/O types and flow
  • VPLEX Configuration Options To Note
  • RAC-on-VPLEX Guidelines & Considerations
  • VPLEX & RAC/VPLEX Component Failure Scenarios

  3. Access the Community to get white papers on today's topic: www.emc.com/everythingoracle

  4. A Few Points To Remember
  Using VPLEX with an Oracle DB you can…
  • Live Migrate a running DB from Site 1 to Site 2 (and vice versa) using OVM or VMware
  • Non-disruptively migrate a running DB from one Storage Array to another
  • Deploy an Active-Active Stretch RAC implementation across 2 sites
    • RAC Interconnect & FC Storage connectivity must be within synchronous distance & less than 5 ms RTD
  • Stretch RAC with VPLEX is a normal Oracle RAC install
    • No special Pre-Install Tasks, Grid Infrastructure Install, or RAC & DB Install steps
    • No ASM Failure Group setup & no host I/O mirroring
    • No special network configuration for the DBA
    • No Voting disk in a 3rd site
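The 5 ms round-trip budget is worth verifying before design work begins. Below is a minimal probe, a sketch only: the host name and port are hypothetical, and a TCP connect (one handshake round trip) is used as a coarse stand-in for the storage/interconnect RTD that a real assessment would measure with proper tools.

```python
# rtd_check.py - coarse round-trip-delay probe for a stretched-RAC link.
# Hypothetical host/port: measures TCP connect time, a rough proxy for
# network RTD (a TCP handshake costs roughly one round trip).
import socket
import statistics
import time

REMOTE_HOST = "racnode3-ic.example.com"   # remote-site node (assumption)
REMOTE_PORT = 22                          # any reachable TCP port
SAMPLES = 20
RTD_BUDGET_MS = 5.0                       # stretched-RAC guideline from the slide

rtts = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with socket.create_connection((REMOTE_HOST, REMOTE_PORT), timeout=2):
        pass
    rtts.append((time.perf_counter() - start) * 1000.0)
    time.sleep(0.1)

median = statistics.median(rtts)
print(f"median RTT {median:.2f} ms (budget {RTD_BUDGET_MS} ms)")
print("OK" if median < RTD_BUDGET_MS else "exceeds synchronous-distance budget")
```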

  5. VPLEX Overview: What is VPLEX? What does it do?

  6. What is VPLEX
  • Enterprise-class storage virtualization solution
  • Hardware & software cluster  Enables scalability & availability
  • Aggregates & manages heterogeneous pools of SAN FC-attached storage arrays, within & across data centers
  • Virtualizes HP, Hitachi, IBM, Oracle, NetApp, EMC, … storage arrays
    • Hosts see the storage arrays as a single storage array
  • VPLEX Cluster sits in the SAN between hosts & FC storage arrays
    • VPLEX is the storage array to the DB servers
  • Designed to handle massive amounts of I/O  IOPS or MB/s
  • Hardware has no single point of failure  Redundant everything
  • VPLEX AccessAnywhere clustering software  Allows read/write access to distributed volumes within & across data centers

  7. VPLEX Hardware – Engine / Director
  • 1 Engine = 2 Directors
  • Non-disruptive hardware/software upgrades
  • 8,000 LUNs (increasing in the next release, Q1 2014)
  • 64 GB cache per Engine

  8. VPLEX Software
  • AccessAnywhere
    • Enables R/W storage virtualization over distance
  • Directory-based distributed cache coherence
    • Efficiently maintains cache state consistency across all Engines & Sites
  [Figure: a host read and a new host write to Block 3 resolved through per-engine cache directories A–H]
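To make "directory-based distributed cache coherence" concrete, here is a toy model, an illustration of the general technique only, not EMC's AccessAnywhere implementation: a per-block directory records which engines hold a cached copy, and a write invalidates every other copy before it completes.

```python
# Toy model of directory-based cache coherence (illustrative only;
# not the actual VPLEX AccessAnywhere implementation).

class CoherenceDirectory:
    def __init__(self):
        # block number -> set of engine IDs holding a valid cached copy
        self.holders: dict[int, set[str]] = {}

    def read(self, block: int, engine: str) -> None:
        # A read registers this engine as a sharer; the data itself can
        # be served from any current holder or from backend storage.
        self.holders.setdefault(block, set()).add(engine)

    def write(self, block: int, engine: str) -> None:
        # A write invalidates every other cached copy first, so all
        # engines and sites see a single consistent version.
        for other in self.holders.get(block, set()) - {engine}:
            print(f"invalidate block {block} in cache of {other}")
        self.holders[block] = {engine}

d = CoherenceDirectory()
d.read(3, "engine-A")    # host read of block 3, now cached on engine A
d.write(3, "engine-F")   # new host write via engine F invalidates A's copy
```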

  9. VPLEX Deployment Information
  • Over 800 PB deployed
  • Over 2,800 clusters deployed
  • Largest deployment to date – 16 PB
  • ~six 9's system uptime
  • 20 million+ run hours
  • 40+ supported platforms

  10. What Are Some Use Cases With An Oracle DB?

  11. Some VPLEX Use Cases
  • Mobility
    • Non-disruptive storage infrastructure upgrades
      • To a new storage array
      • To a Tier-1 storage array
    • Migrate live & running VMs between data centers
  • Continuous Availability
    • Stretching a RAC DB deployment across 2 sites

  12. Oracle DB Storage Array Migration
  • No application downtime on storage array refreshes
  • No ASM host-based migration (i.e., add/drop/rebalance disks)
  • Enables storage optimization by allowing restructuring of data across storage arrays
  • Data center migrations can be done as an online event
  • Zero impact to host applications when moving data
    • AOL migrated 48 arrays in 52 weeks from May 2010 - May 2011 – an average of 1 array every 5.4 business days
  [Figure: a distributed virtual volume (LUN 1v) mapped to devices A, B & C for data mobility: storage migration across Arrays 1-3 and datacenter migration between Data Centers 1 & 2]

  13. OVM/VMW Live Migration between 2 sites
  • Relocates a DB instance VM to another server in the cluster & the DB data to another storage array, both in another data center
    • OVM/VMW migrates the DB instance; VPLEX migrates the DB data
  • Enabled by: synchronous distance, 5 ms RTD for the storage & VM IP networks, virtual disks shared to both servers, and a Layer 2 network stretched across the sites

  14. Stretching a RAC over Metro distance
  • RAC nodes dispersed across 2 sites, reading & writing to a single logical DB
  • VPLEX virtualized storage used to simulate a single, local DB and RAC configuration
  [Figure: Site 1 (Oracle RAC nodes 1 & 2) and Site 2 (nodes 3 & 4) joined by the RAC interconnect, with Oracle Homes and Oracle ASM disks (+DATA, +FRA, +REDO) on VPLEX distributed virtual volumes]

  15. Stretch RAC-VPLEX: What does it look like to us DBAs?

  16. RAC/VPLEX Metro Reference Architecture – DBA View: "a typical CRS-RAC install/config"
  • Still must use a single subnet for the Interconnect & Public networks
    • Requires Layer 2 of the network to be extended across the 2 sites
    • Brocade VCS/LAG, Cisco OTV
  • Look & feel of a local RAC
    • Oracle Universal Installer is not aware of different subnets
    • DB Services still function properly
    • SCAN VIP still functions properly; both sites are used & load balanced
    • Still use a non-routable IP address for the Interconnect
  [Figure: Site 1 (Oracle RAC nodes 1 & 2) and Site 2 (nodes 3 & 4) on the RAC interconnect, sharing distributed virtual volumes (like RAID-1): one set of Oracle Clusterware volumes (+GRID), one set of Oracle ASM DB disks (+DATA, +FRA, +REDO), and 4 sets of Oracle software & config files on VPLEX volumes]
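Since the installer expects the stretched nodes to look like a local cluster, a quick pre-install sanity check is that all interconnect addresses really do sit in one subnet. A small sketch with hypothetical addresses:

```python
# subnet_check.py - verify stretched-RAC node IPs share one subnet
# (hypothetical, non-routable addresses; replace with your own).
import ipaddress

INTERCONNECT = {
    "racnode1": "192.168.10.11",   # site 1
    "racnode2": "192.168.10.12",   # site 1
    "racnode3": "192.168.10.13",   # site 2
    "racnode4": "192.168.10.14",   # site 2
}
SUBNET = ipaddress.ip_network("192.168.10.0/24")

for node, addr in INTERCONNECT.items():
    ok = ipaddress.ip_address(addr) in SUBNET
    print(f"{node}: {addr} {'OK' if ok else 'NOT in ' + str(SUBNET)}")
```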

  17. RAC-VPLEX: What is a more detailed view?

  18. RAC/VPLEX Metro Reference Architecture
  [Figure: Site 1 (Oracle RAC nodes 1 & 2) and Site 2 (nodes 3 & 4) joined by the RAC interconnect. Each site has a VPLEX cluster (Cluster-1, Cluster-2) in front of its storage array's physical volumes, joined by a dark-fibre VPLEX interconnect over the storage network. The clusters present "identical" virtual volumes (actually the same distributed virtual volumes, i.e. the ASM disks) holding the Oracle software & config files, Oracle Clusterware (+GRID), and the Oracle ASM DB. A VPLEX Cluster Witness at Site 3 connects to both clusters over dedicated IP links.]

  19. Oracle DB to VPLEX to Storage Array I/O Types (assumes the storage array is an intelligent, cache-based platform)

  20. VPLEX Cache Used – Read Hit
  Data found in VPLEX cache:
  1) Read request sent to VPLEX
  2) Read instantly acknowledged via VPLEX & data sent back to host
  [Figure: DB server read served from the VPLEX cache in front of the storage array's mirror legs M1 & M2]

  21. VPLEX Cache Not Used – Short Read Miss
  Data not found in VPLEX cache:
  1) Read request sent to VPLEX
  2) Data not found in VPLEX cache
  3) Read request sent to Storage
  4) Data found in Storage cache & piped in to VPLEX cache
  5) Data in VPLEX cache sent to host
  [Figure: read path through the VPLEX cache to the storage array cache]

  22. VPLEX Cache Not Used – Long Read Miss
  Data not found in VPLEX & Storage cache:
  1) Read request sent to VPLEX
  2) Data not found in VPLEX cache
  3) Read request sent to Storage
  4) Data not found in Storage cache
  5) Data read from disk & piped in to Storage cache
  6) Data in Storage cache sent to VPLEX cache
  7) Data in VPLEX cache sent to host
  [Figure: read path through the VPLEX cache and storage array cache down to disk]
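The three read cases reduce to a tiered lookup. The sketch below is a conceptual model only, with in-memory dicts standing in for the VPLEX cache, the array cache, and disk:

```python
# Conceptual model of the three read cases (read hit, short read miss,
# long read miss). Dicts stand in for the real caches and disk.

vplex_cache: dict[int, bytes] = {}
array_cache: dict[int, bytes] = {}
disk = {7: b"block-7-data"}              # backend storage

def read(block: int) -> bytes:
    if block in vplex_cache:             # read hit
        print("read hit: served from VPLEX cache")
        return vplex_cache[block]
    if block in array_cache:             # short read miss
        print("short miss: array cache -> VPLEX cache -> host")
        vplex_cache[block] = array_cache[block]
        return vplex_cache[block]
    print("long miss: disk -> array cache -> VPLEX cache -> host")
    array_cache[block] = disk[block]     # long read miss
    vplex_cache[block] = array_cache[block]
    return vplex_cache[block]

read(7)   # long miss: populates both caches
read(7)   # read hit: served from VPLEX cache
```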

  23. VPLEX - Write-Through Cache
  Write-through cache:
  1) Write request sent to VPLEX-1
  2) VPLEX-1 sends the write request to Storage-1 cache & to VPLEX-2
  3) VPLEX-2 sends the write request to Storage-2 cache
  4) Write acknowledgement sent back to VPLEX-1 from Storage-1
  5) Write acknowledgement sent back to VPLEX-2 from Storage-2
  6) Write acknowledgement sent back to VPLEX-1 from VPLEX-2
  7) When VPLEX-1 receives acknowledgements from both Storage-1 and VPLEX-2, the write acknowledgement is sent back to the host
  8) Data is de-staged to disk later, from the caches on both Storage-1 & Storage-2
  [Figure: DB server writing through the VPLEX-1 and VPLEX-2 caches to the Storage-1 and Storage-2 caches and mirror legs M1 & M2]
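The essential property of the write-through path is that the host acknowledgement is withheld until both mirror legs have acknowledged. A conceptual sketch (illustrative only; the function names are invented):

```python
# Conceptual sketch of the write-through path: the host ack is released
# only after BOTH the local array and the remote VPLEX cluster (and its
# array) have acknowledged the write. Illustrative only.

def write_to_array(site: str, block: int, data: bytes) -> bool:
    # Array acknowledges once the write is in its cache (steps 4/5);
    # de-staging to disk happens later (step 8).
    print(f"Storage-{site}: block {block} in cache, ack")
    return True

def remote_cluster_write(block: int, data: bytes) -> bool:
    # VPLEX-2 forwards to Storage-2 and acks VPLEX-1 when done (steps 3/5/6).
    return write_to_array("2", block, data)

def host_write(block: int, data: bytes) -> None:
    local_ack = write_to_array("1", block, data)    # steps 2/4
    remote_ack = remote_cluster_write(block, data)  # steps 2/3/5/6
    if local_ack and remote_ack:                    # step 7
        print(f"host: write of block {block} acknowledged")
    else:
        raise IOError("write not acknowledged by both mirror legs")

host_write(42, b"payload")
```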

  24. Why implement a Stretched RAC with VPLEX? Besides having a Business Case…

  25. Value of VPLEX with Oracle RAC
  • Availability
    • Failover and failback eliminated
    • No single point of failure
    • Add Engines & RAC nodes non-disruptively
    • Data is accessed locally
    • VPLEX Witness is the 3rd-site arbiter
  • Scalability
    • VPLEX devices upgraded dynamically
    • Scale RAC as you normally would
    • Cache & performance attributes scale with the number of Engines
    • Storage capacities in excess of 1 PB fully supported
  • Performance
    • All reads are local & leverage the caches in VPLEX & Storage
    • VPLEX scales to meet performance requirements
    • No host CPU cycles used to mirror data to both locations
  • DBA Simplicity
    • No ASM I/O mirroring administration
    • No 3rd-site Voting disk to execute arbitrated failover
    • DB sessions can still use TAF
    • DR validation is natural to the architecture
    • Non-disruptive data mobility across storage frames

  26. EMC VPLEX Configuration Options

  27. RAC/VPLEX Metro Reference Architecture
  [Figure: the same reference architecture as slide 18: Sites 1 & 2, each with a VPLEX cluster in front of its storage array's physical volumes, joined by a dark-fibre VPLEX interconnect; distributed virtual volumes (the ASM disks) for the Oracle software & config files, Clusterware (+GRID), and ASM DB; and the VPLEX Cluster Witness at Site 3 on dedicated IP links]

  28. VPLEX Witness  VPLEX Cluster Arbiter (VPLEX Metro Cluster Configuration Options…)
  • Used to improve application availability in the event of a site failure, VPLEX cluster failure and/or inter-cluster communication failure
    • Same approach as a CRS Voting Disk in a 3rd location
  • Connects to both VPLEX Clusters over an IP network
    • Up to 1 second RTD (round-trip delay)
  • Monitors each VPLEX Cluster & the VPLEX Cluster interconnect
  • Deployed as a VM on a host in a 3rd site, in a failure domain outside the VPLEX clusters
    • Use VMware DRS to fail the VM over on a physical host failure
    • Use VMware Fault Tolerance to ensure the VM is always up

  29. VPLEX Detach Rules (…VPLEX Metro Cluster Configuration Options)
  • Predefined rules, for the storage devices supporting a DB, that identify which VPLEX cluster should detach its mirror leg on a VPLEX network communication failure, allowing the surviving VPLEX cluster to continue processing I/O requests
  • Effectively, this defines a Preferred Site for the storage devices
    • To avoid a VPLEX cluster split-brain situation
    • And forces the non-preferred site to suspend processing I/O requests, to maintain data integrity for the device
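To make the interplay of the detach rules and the Witness concrete, here is a toy decision function, illustrative logic only and not the actual product algorithm: on a partition the preferred cluster detaches its leg and continues while the non-preferred cluster suspends, but if the Witness can confirm a cluster is genuinely down, the survivor continues regardless of preference.

```python
# Toy arbitration logic for a two-cluster VPLEX Metro with a Witness
# (illustrative only; not the actual product algorithm).

def cluster_action(cluster: str, preferred: str,
                   link_up: bool, peer_alive_per_witness: bool | None) -> str:
    """Decide what one cluster does. peer_alive_per_witness is the
    Witness's view of the peer; None means the Witness is unreachable."""
    if link_up:
        return "continue I/O (mirrored writes to both legs)"
    if peer_alive_per_witness is False:
        # Witness confirms the peer cluster is down: the survivor
        # detaches its mirror leg and keeps servicing I/O.
        return "detach mirror leg, continue I/O"
    # Inter-cluster partition (or no Witness): fall back to the detach rule.
    if cluster == preferred:
        return "detach mirror leg, continue I/O (preferred site)"
    return "suspend I/O (non-preferred site; avoids split-brain)"

# Inter-cluster link failure, both clusters alive, C1 preferred:
for c in ("C1", "C2"):
    print(c, "->", cluster_action(c, preferred="C1",
                                  link_up=False, peer_alive_per_witness=True))
```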

  30. RAC-on-VPLEX -- Guidelines & Considerations

  31. Guidelines & Considerations…
  • No need to deploy a Voting disk in a 3rd site
    • Maintained on each storage array in each location
  • Use ASM to provide balanced I/O & capacity across front-end LUNs (ASM disks)
  • If using ASM today…
    • Use ASM External Redundancy for +DATA, +REDO, +FRA, etc.
    • Create a +GRID ASM disk group for CRS in 11gR2+
      • Contains the OCR & Voting disk files
      • Use ASM Normal or High Redundancy to create multiple copies
  • Useful when using:
    • Storage array local replication to create DB clones to mount to another CRS stack
    • Storage array remote replication for DR, to mount the DB to an already-configured CRS stack at the DR site

  32. …Guidelines & Considerations
  • Use Virtual Provisioning for the DB for wide striping and balanced capacity & I/O across the storage pool
  • Use 2 physical HBAs in each RAC node (VM node)
    • Each HBA port connected to front-end ports on different VPLEX Directors of an Engine
  • Use I/O multipathing software (such as PowerPath) to enable multiple & balanced I/O paths to the storage infrastructure (a rough path-count check is sketched below)
  • Storage array clones & snaps still work, as does remote replication for DR
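As a rough Linux-side check of the multipathing guideline, the sketch below counts how many sd paths lead to each LUN. It assumes sysfs exposes a device/wwid attribute per SCSI disk; PowerPath (powermt) or DM-Multipath (multipath -ll) are the real tools for this.

```python
# path_count.py - rough count of SCSI paths per LUN on Linux.
# Sketch only: assumes /sys/block/sd*/device/wwid identifies the LUN.
from collections import Counter
from pathlib import Path

paths = Counter()
for dev in Path("/sys/block").glob("sd*"):
    wwid_file = dev / "device" / "wwid"
    if wwid_file.exists():
        paths[wwid_file.read_text().strip()] += 1

MIN_PATHS = 4   # 2 HBAs x 2 VPLEX directors, per the guideline above
for wwid, n in sorted(paths.items()):
    flag = "" if n >= MIN_PATHS else "  <-- fewer paths than expected"
    print(f"{wwid}: {n} path(s){flag}")
```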

  33. Component Failure Scenarios

  34. On the next slide, where 2 or more lightning bolts appear, we assume simultaneous failures of multiple VPLEX components (rare)

  35. VPLEX Component Failure Scenarios (assuming C1 has the bias)
  [Figure: six diagrams, each showing VPLEX clusters C1 & C2 and Witness W, with lightning bolts marking the failed components. Outcomes shown: C1 continues I/O, C2 suspends & issues a dial-home; C1 and C2 continue I/O & issue a dial-home; C1 continues I/O, C2 is down & issues a dial-home; C1 and C2 continue I/O, C2 issues a dial-home; C1 and C2 continue I/O and issue a dial-home; C1 continues I/O, C2 suspends I/O & issues a dial-home.]

  36. RAC & VPLEX Component Failure Scenarios

  37. In Summary
  Using VPLEX with an Oracle DB you can…
  • Live Migrate a running DB from Site 1 to Site 2 (and vice versa) using OVM or VMware
  • Non-disruptively migrate a running DB from one Storage Array to another
  • Deploy an Active-Active Stretch RAC implementation across 2 sites
    • RAC Interconnect & FC Storage connectivity must be within synchronous distance & less than 5 ms RTD
  • Stretch RAC with VPLEX is a normal Oracle RAC install
    • No special Pre-Install Tasks, Grid Infrastructure Install, or RAC & DB Install steps
    • No ASM Failure Group setup & no host I/O mirroring
    • No special network configuration for the DBA
    • No Voting disk in a 3rd site

  38. Appendix: not reviewed in OUG meetings

  39. Additional Failure Scenarios

  40. …RAC & VPLEX Component Failure Scenarios

  41. …RAC & VPLEX Component Failure Scenarios

  42. …RAC & VPLEX Component Failure Scenarios

  43. …RAC & VPLEX Component Failure Scenarios

  44. Some approaches to place host LUNs (ASM disks) into VPLEX

  45. Encapsulation Method (short outage to cut over)
  1) Host is shut down
  2) VPLEX "claims" the volumes that were presented to the host
  3) Claimed volumes are configured as devices on the VPLEX and presented back to the host
  [Figure: hosts, VPLEX, and the donor array, with the three steps called out]

  46. Host Migration Method (online, no outage)
  • New VPLEX target volumes are presented to the hosts via the SAN
  • Host migration to VPLEX via:
    • VMware Storage vMotion
    • AIX LVM mirroring
    • Solaris/Veritas LVM mirroring
    • ASM (add/drop disks)  a sketch of this path follows below
  • Once the migration is complete, the donor mirror is broken and the migration to VPLEX is done
  [Figure: hosts, VPLEX, and the donor array, with the two steps called out]
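For the ASM route, the migration is one ALTER DISKGROUP statement plus a background rebalance. A minimal sketch driving it from Python, assuming the python-oracledb driver, SYSASM access, and hypothetical DSN and disk names; the SQL itself is standard ASM DDL:

```python
# asm_migrate.py - sketch of the "ASM (add/drop disks)" migration path.
# Assumptions: python-oracledb driver, SYSASM access to the ASM instance,
# and hypothetical DSN/disk names; adjust for your environment.
import time

import oracledb

conn = oracledb.connect(user="sys", password="change_me",
                        dsn="asmhost/+ASM",
                        mode=oracledb.AUTH_MODE_SYSASM)
cur = conn.cursor()

# Add the new VPLEX-backed disk and drop the donor disk in one statement;
# ASM rebalances extents onto the new disk in the background.
cur.execute("""
    ALTER DISKGROUP data
      ADD DISK '/dev/mapper/vplex_lun01'
      DROP DISK donor_disk01
      REBALANCE POWER 8
""")

# Wait for the rebalance to finish before retiring the donor LUN.
while True:
    cur.execute("SELECT COUNT(*) FROM v$asm_operation")
    if cur.fetchone()[0] == 0:
        break
    time.sleep(30)
print("rebalance complete; the donor mirror can now be broken")
```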
