A Practical Guide to Oracle 10g RAC: Its REAL Easy!
Gavin Soorma, Emirates Airline, Dubai (Session# 106)
Presentation Transcript

  1. A Practical Guide to Oracle 10g RAC Its REAL Easy! Gavin Soorma, Emirates Airline, Dubai Session# 106

  2. Agenda
  • RAC concepts
  • Planning for a RAC installation
  • Pre-installation steps
  • Installation of 10g R2 Clusterware
  • Installation of 10g R2 software
  • Creation of a RAC database
  • Configuring Services and TAF
  • Migration of a single instance to RAC

  3. What Is a RAC Cluster?
  • Nodes
  • Interconnect
  • Shared disk subsystem
  • Instances
  • Database
  (Diagram: nodes linked by an interconnect, sharing a set of disks)

  4. Database vs Instance
  A RAC cluster consists of:
  • One or more instances, one per node, each with its own local disk
  • One database residing on shared storage
  (Diagram: Node 1/Instance 1 and Node 2/Instance 2 joined by the interconnect, both attached to the database on shared storage)

  5. Why RAC?
  • High availability: survive node and instance failures
  • Scalability: add or remove nodes when needed
  • Pay as you grow: harness the power of multiple low-cost computers
  • Enables grid computing
  • DBAs have their own vested interests!

  6. What is Real Application Clusters?
  • Two or more interconnected, but independent, servers
  • One instance per node
  • Multiple instances accessing the same database
  • Database files stored on disks physically or logically connected to each node, so that every instance can read from or write to them

  7. A RAC Database: What’s Different?
  Contents are similar to a single-instance database, except:
  • Create and enable one redo thread per instance
  • If using Automatic Undo Management, one undo tablespace per instance is also required
  • Additional cluster-specific data dictionary views, created by running the script $ORACLE_HOME/rdbms/admin/catclust.sql
  • New background processes
  • Cluster-specific init.ora parameters
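The per-instance redo thread and undo tablespace described above can be sketched in SQL. This is an illustrative sketch only: the file names, group numbers, and sizes are assumptions, not values from the presentation.

```sql
-- Hypothetical example: prepare an existing database for a second
-- instance (racdb2). Paths and sizes are placeholders.

-- Create a redo thread for instance 2 and enable it
ALTER DATABASE ADD LOGFILE THREAD 2
  GROUP 3 ('+DATA/racdb/redo03.log') SIZE 50M,
  GROUP 4 ('+DATA/racdb/redo04.log') SIZE 50M;
ALTER DATABASE ENABLE PUBLIC THREAD 2;

-- Create the undo tablespace used by instance 2
CREATE UNDO TABLESPACE UNDOTBS2
  DATAFILE '+DATA/racdb/undotbs02.dbf' SIZE 500M;

-- Create the cluster-specific data dictionary views
@?/rdbms/admin/catclust.sql
```

The thread and tablespace names chosen here line up with the racdb2.thread=2 and racdb2.undo_tablespace='UNDOTBS2' parameters shown on the init.ora slide.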

  8. RAC-Specific Background Processes
  • LMON: Global Enqueue Service Monitor
  • LMD0: Global Enqueue Service Daemon
  • LMSx: Global Cache Service Processes
  • LCK0: Lock Process
  • DIAG: Diagnosability Process

  9. RAC init.ora Parameters
  *.db_cache_size=113246208
  *.java_pool_size=4194304
  *.db_name='racdb'
  racdb1.instance_number=1
  racdb2.instance_number=2
  *.log_archive_dest_1='LOCATION=USE_DB_RECOVERY_FILE_DEST'
  racdb1.thread=1
  racdb2.thread=2
  *.undo_management='AUTO'
  racdb1.undo_tablespace='UNDOTBS1'
  racdb2.undo_tablespace='UNDOTBS2'

  10. 10g RAC Implementation Steps
  • Hardware: network interface cards, HBA cards, etc.
  • Interconnect: physical cable, Gigabit Ethernet switch
  • Network: virtual IP addresses
  • Plan the type of shared storage (ASM, OCFS, etc.)
  • Download the latest RPMs: ASM, OCFS
  • Install Clusterware (Cluster Ready Services)
  • Install the 10g RAC software
  • Create the RAC database
  • Configure Services and TAF (Transparent Application Failover)

  11. RAC Database Storage
  • Oracle files (control files, data files, redo log files)
  • Server Parameter File (SPFILE)
  • Archive log files
  • Flash Recovery Area
  • Voting file
  • Oracle Cluster Registry (OCR) file
  • OCFS version 2.x will support a shared ORACLE_HOME

  12. Oracle Cluster Registry File
  • The OCR contains important metadata about the RAC instances and nodes that make up the cluster
  • Needs to be on a shared storage device
  • About 100 MB in size
  • In Oracle 10g Release 2, higher availability for this critical component is provided by enabling a second OCR file location

  13. Voting Disk File
  • Contains information about cluster membership
  • Used by CRS to avoid ‘split-brain’ scenarios if any node loses contact over the interconnect
  • Must be located on shared storage
  • Typically about 20 MB in size
  • Can be mirrored in Oracle 10g Release 2

  14. Shared Storage Considerations
  Mandatory for: data files, redo log files, control files, SPFILE
  Optional for: archive log files, executables/binaries, network configuration files
  Supported shared storage: NAS (network attached storage), SAN (storage area network)
  Supported file storage: raw volumes, cluster file system, ASM

  15. Shared Storage Considerations
  • Archive log files cannot be placed on raw devices
  • The CRS files (voting disk and Oracle Cluster Registry) cannot be stored on ASM
  • The software is installed on a regular file system local to each node
  • Database files can reside on raw devices, ASM, or a cluster file system (OCFS)

  16. Network Requirements
  • Each node must have at least two network adapters: one for the public network interface and one for the private network interface (the interconnect)
  • The public network adapter must support TCP/IP
  • For the private network, the interconnect should preferably be a Gigabit Ethernet switch that supports UDP; this is used for Cache Fusion inter-node communication
  • The host name and IP addresses associated with the public interface should be registered in DNS and /etc/hosts

  17. IP Address Requirements
  • For each public network interface, an IP address and host name registered in the DNS
  • One unused virtual IP address and associated host name registered in the DNS for each node to be used in the cluster
  • A private IP address and optional host name for each private interface
  • The virtual IP addresses are used in the network configuration files

  18. Virtual IP Addresses
  • VIPs are used to facilitate faster failover in the event of a node failure
  • Each node has not only its own statically assigned IP address but also a virtual IP address assigned to it
  • The listener on each node listens on the virtual IP, and client connections also come in via this virtual IP
  • Without VIPs, clients would have to wait for a long TCP/IP timeout before getting an error message or TCP reset from nodes that have died
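On the client side, the VIPs typically appear in a tnsnames.ora entry with an address for each node. The fragment below is an illustrative sketch, not from the presentation: the VIP host names match the sample cluster, but the alias, port, service name, and TAF retry values are assumptions.

```
RACDB =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = itlinuxbl53-vip.hq.emirates.com)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = itlinuxbl54-vip.hq.emirates.com)(PORT = 1521))
      (LOAD_BALANCE = yes)
    )
    (CONNECT_DATA =
      (SERVICE_NAME = racdb)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )
```

Because both addresses are VIPs, a dead node's address fails over quickly to a surviving node, which immediately resets the connection attempt instead of letting it hang until the TCP timeout.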

  19. Sample /etc/hosts File
  racdb1:/opt/oracle> cat /etc/hosts
  # Do not remove the following line, or various programs
  # that require network functionality will fail.
  #127.0.0.1 itlinuxbl53.hq.emirates.com itlinuxbl53 localhost.localdomain localhost
  57.12.70.59   itlinuxbl54.hq.emirates.com      itlinuxbl54
  57.12.70.58   itlinuxbl53.hq.emirates.com      itlinuxbl53
  10.20.176.74  itlinuxbl54-pvt.hq.emirates.com  itlinuxbl54-pvt
  10.20.176.73  itlinuxbl53-pvt.hq.emirates.com  itlinuxbl53-pvt
  57.12.70.80   itlinuxbl54-vip.hq.emirates.com  itlinuxbl54-vip
  57.12.70.79   itlinuxbl53-vip.hq.emirates.com  itlinuxbl53-vip
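Each node needs a public, a private (-pvt), and a virtual (-vip) entry. A small script like the following, which is not from the presentation but merely an illustrative helper run against a copy of the sample entries, can sanity-check that no entry is missing:

```shell
#!/bin/sh
# Hypothetical helper: verify each cluster node has public, -pvt and -vip
# entries in a hosts file. A copy of the sample entries is used here.
HOSTS_FILE=${1:-/tmp/hosts.sample}

cat > /tmp/hosts.sample <<'EOF'
57.12.70.59  itlinuxbl54.hq.emirates.com      itlinuxbl54
57.12.70.58  itlinuxbl53.hq.emirates.com      itlinuxbl53
10.20.176.74 itlinuxbl54-pvt.hq.emirates.com  itlinuxbl54-pvt
10.20.176.73 itlinuxbl53-pvt.hq.emirates.com  itlinuxbl53-pvt
57.12.70.80  itlinuxbl54-vip.hq.emirates.com  itlinuxbl54-vip
57.12.70.79  itlinuxbl53-vip.hq.emirates.com  itlinuxbl53-vip
EOF

for node in itlinuxbl53 itlinuxbl54; do
  for suffix in "" "-pvt" "-vip"; do
    # Each host name should appear as the last alias on exactly one line
    if grep -q " ${node}${suffix}\$" "$HOSTS_FILE"; then
      echo "${node}${suffix}: OK"
    else
      echo "${node}${suffix}: MISSING"
    fi
  done
done
```

Running it against the real /etc/hosts on each node (./check_hosts.sh /etc/hosts) before the Clusterware install catches the most common naming mistakes early.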

  20. Set Up User Equivalence Using SSH
  • Allows the installation on all nodes in the cluster to be driven by launching OUI on just one node
  • OUI will not prompt for a password
  • OUI will use ssh or scp to copy files to the remote nodes
  ssh-keygen -t dsa
  cat id_dsa.pub > authorized_keys
  • Copy authorized_keys from this node to the other nodes
  • Run the same commands on all nodes to generate each node’s key and append it to the authorized_keys file
  • Finally, all nodes will have the same authorized_keys file

  21. Setting Up User Equivalence
  On ITLINUXBL53:
  ssh-keygen -t dsa
  cat id_dsa.pub > authorized_keys
  scp authorized_keys itlinuxbl54:/opt/oracle
  On ITLINUXBL54:
  ssh-keygen -t dsa
  cat id_dsa.pub >> authorized_keys
  scp authorized_keys itlinuxbl53:/opt/oracle/.ssh
  Test from each node:
  ssh itlinuxbl54 hostname
  ssh itlinuxbl53 hostname
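The key exchange above can be sketched end to end in one script. This is a local simulation under stated assumptions: two scratch directories stand in for the two nodes, cp stands in for scp, and RSA keys are used for the demonstration (the slide uses DSA; the flow is identical).

```shell
#!/bin/sh
# Sketch of the user-equivalence steps, run locally against scratch
# directories instead of real nodes (all paths are illustrative).
DEMO=/tmp/ssh-equiv-demo
rm -rf "$DEMO" && mkdir -p "$DEMO/node53" "$DEMO/node54"

# Step 1: each "node" generates its own key pair (no passphrase)
ssh-keygen -t rsa -N "" -q -f "$DEMO/node53/id_rsa"
ssh-keygen -t rsa -N "" -q -f "$DEMO/node54/id_rsa"

# Step 2: node 53 seeds authorized_keys; node 54 appends its key
cat "$DEMO/node53/id_rsa.pub" >  "$DEMO/authorized_keys"
cat "$DEMO/node54/id_rsa.pub" >> "$DEMO/authorized_keys"

# Step 3: the combined file goes back to every node (scp in real life),
# so each node trusts every other node's key
cp "$DEMO/authorized_keys" "$DEMO/node53/"
cp "$DEMO/authorized_keys" "$DEMO/node54/"

echo "keys collected: $(wc -l < "$DEMO/authorized_keys")"
```

On a real cluster the final check is the pair of ssh ... hostname commands shown on the slide: each must print the remote host name without prompting for a password.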

  22. Configure the Hangcheck Timer
  • Monitors the Linux kernel for hangs
  • If a hang occurs, the module reboots the node
  • hangcheck_tick defines how often (in seconds) the module checks for hangs
  • hangcheck_margin defines how long the module waits for a response from the kernel
  [root@itlinuxbl53 rootpre]# /sbin/insmod hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
  Using /lib/modules/2.4.21-37.ELsmp/kernel/drivers/char/hangcheck-timer.o
  [root@itlinuxbl53 rootpre]# lsmod | grep hang
  hangcheck-timer  2672  0 (unused)
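The insmod command above loads the module only for the current boot. To make the same settings persistent across reboots on RHEL 3, the module options can be recorded in the module configuration file; the fragment below is a sketch using the values from the slide.

```
# /etc/modules.conf (Red Hat EL 3) — load hangcheck-timer with the
# same parameters at every boot
options hangcheck-timer hangcheck_tick=30 hangcheck_margin=180
```

With the options recorded there, loading the module from a startup script (e.g. /etc/rc.local) picks up the tick and margin values automatically.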

  23. Case Study Environment
  • Operating system: Linux x86_64, RHEL 3 AS
  • Hardware: HP BL25p blade servers with 2 CPUs (AMD 64-bit processors) and 4 GB of RAM
  • Oracle software: Oracle Database 10g Enterprise Edition Release 10.2.0.1.0, 64-bit
  • Two-node cluster: ITLINUXBL53.hq.emirates.com, ITLINUXBL54.hq.emirates.com
  • Shared storage: OCFS for the Cluster Registry and voting disks; ASM for all other database-related files
  • Database name: racdb
  • Instance names: racdb1, racdb2

  24. Oracle 10g CRS Install
  • Oracle 10g Clusterware: Cluster Ready Services
  • Oracle’s own full-stack clusterware, coupled with RAC
  • Replaces the earlier dependency on third-party clusterware
  • Oracle CRS replaces the Oracle Cluster Manager (ORACM) of Oracle9i RAC
  • CRS must be installed prior to the installation of Oracle RAC

  25. CRS Installation: Key Steps
  • Voting disk, about 20 MB (the Oracle9i quorum disk): maintains the node heartbeat and avoids the node split-brain syndrome
  • Oracle Cluster Registry, about 100 MB: stores cluster configuration and cluster database information
  • Private interconnect information: select the network interface for inter-node communication; a Gigabit Ethernet interface is recommended
  • Run root.sh: starts the CRS daemon processes (evmd, cssd, crsd)
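After root.sh completes, a quick way to confirm the daemons came up is to look for them in the process list. This is an illustrative check, not a step from the presentation:

```shell
#!/bin/sh
# Illustrative post-root.sh check: look for the CRS daemon processes
# (evmd, cssd, crsd) started by the Clusterware installation.
if ps -ef | grep -E 'evmd|cssd|crsd' | grep -v grep > /dev/null; then
  echo "CRS daemons are running"
else
  echo "CRS daemons are NOT running"
fi
```

The Clusterware installation also provides its own tooling for this (for example crs_stat -t to list resource status), which is the preferred check once the stack is up.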

  26. Oracle Cluster File System
  • Shared-disk cluster file system for Linux and Windows
  • Improves management of data by eliminating the need to manage raw devices
  • Can be downloaded from OTN: http://oss.oracle.com/projects/ocfs
  • OCFS 2.1.2 provides support on Linux for Oracle software installation as well

  27. Install the OCFS RPMs
  [root@itlinuxbl54 recyclebin]# rpm -ivh ocfs-support-1.1.5-1.x86_64.rpm
  Preparing...              ########################################### [100%]
     1:ocfs-support         ########################################### [100%]
  [root@itlinuxbl54 recyclebin]# rpm -ivh ocfs-tools-1.0.10-1.x86_64.rpm
  Preparing...              ########################################### [100%]
     1:ocfs-tools           ########################################### [100%]
  [root@itlinuxbl54 recyclebin]# rpm -ivh ocfs-2.4.21-EL-smp-1.0.14-1.x86_64.rpm
  Preparing...              ########################################### [100%]
     1:ocfs-2.4.21-EL-smp   ########################################### [100%]

  28. OCFSTOOL – Generate Config

  29. The OCFS Configuration File
  [root@itlinuxbl53 etc]# cat /etc/ocfs.conf
  #
  # ocfs config
  # Ensure this file exists in /etc
  #
  node_name = itlinuxbl53.hq.emirates.com
  ip_address = 10.20.176.73
  ip_port = 7000
  comm_voting = 1
  guid = 5D9FF90D969078C471310016353C6B23

  30. OCFSTOOL – Format Partition

  31. OCFSTOOL – Mount File System

  32. OCFSTOOL – Mount File System

  33. OCFSTOOL – Mount File System

  34. ASM Architecture
  (Diagram: clustered servers, each running an Oracle DB instance and an ASM instance, with the RAC database stored in a disk group carved from a clustered pool of storage)

  35. Install the ASMLIB RPMs
  [root@itlinuxbl53 recyclebin]# rpm -ivh oracleasm-support-2.0.1-1.x86_64.rpm
  Preparing...                  ########################################### [100%]
     1:oracleasm-support        ########################################### [100%]
  [root@itlinuxbl53 recyclebin]# rpm -ivh oracleasm-2.4.21-37.ELsmp-1.0.4-1.x86_64.rpm
  Preparing...                  ########################################### [100%]
     1:oracleasm-2.4.21-37.ELs  ########################################### [100%]
  [root@itlinuxbl53 recyclebin]# rpm -ivh oracleasmlib-2.0.1-1.x86_64.rpm
  Preparing...                  ########################################### [100%]
     1:oracleasmlib             ########################################### [100%]

  36. Creating the ASM Disks
  [root@itlinuxbl53 init.d]# ./oracleasm createdisk VOL1 /dev/sddlmab1
  Marking disk "/dev/sddlmab1" as an ASM disk: [ OK ]
  [root@itlinuxbl53 init.d]# ./oracleasm createdisk VOL2 /dev/sddlmac1
  Marking disk "/dev/sddlmac1" as an ASM disk: [ OK ]
  [root@itlinuxbl53 init.d]# ./oracleasm createdisk VOL3 /dev/sddlmaf1
  Marking disk "/dev/sddlmaf1" as an ASM disk: [ OK ]
  [root@itlinuxbl53 init.d]# ./oracleasm listdisks
  VOL1
  VOL2
  VOL3
  [root@itlinuxbl54 init.d]# ./oracleasm scandisks
  Scanning system for ASM disks: [ OK ]
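Once the disks are labeled on one node and scanned on the other, they can be grouped into an ASM disk group from the ASM instance. The statement below is a sketch: the VOL names match the slide, but the disk group name and redundancy level are illustrative assumptions.

```sql
-- Run from the ASM instance; DATA and EXTERNAL REDUNDANCY are placeholders
-- (ASMLIB-labeled disks are visible via the ORCL: prefix)
CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
  DISK 'ORCL:VOL1', 'ORCL:VOL2', 'ORCL:VOL3';
```

The database files can then be placed in the disk group by name, e.g. a datafile path of '+DATA/racdb/undotbs02.dbf'.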

  37. The Cluster Verification Utility (cluvfy)
  • Performs pre-installation and post-installation checks at various stages of the RAC installation
  • Available in 10g Release 2
  ./runcluvfy.sh comp nodereach -n itlinuxbl53,itlinuxbl54 -verbose
  ./runcluvfy.sh stage -pre crsinst -n itlinuxbl53,itlinuxbl54 -verbose
  ./runcluvfy.sh comp nodecon -n itlinuxbl53,itlinuxbl54 -verbose
  ./runcluvfy.sh stage -post hwos -n itlinuxbl53 -verbose

  38. Install the cvuqdisk RPM for cluvfy
  [root@itlinuxbl53 root]# cd /opt/oracle/cluster_cd/clusterware/rpm
  [root@itlinuxbl53 rpm]# ls
  cvuqdisk-1.0.1-1.rpm
  [root@itlinuxbl53 rpm]# export CVUQDISK_GRP=dba
  [root@itlinuxbl53 rpm]# rpm -ivh cvuqdisk-1.0.1-1.rpm
  Preparing...              ########################################### [100%]
     1:cvuqdisk             ########################################### [100%]

  39. 10g Clusterware Installation

  40. Prerequisites Validation

  41. Configuring the 10g RAC Cluster

  42. Configuring the 10g RAC Cluster

  43. Configuring the Network Interfaces

  44. Oracle Cluster Registry (OCR)

  45. Mirroring the OCR

  46. Voting Disk

  47. 10g Clusterware OUI – Remote Installation

  48. 10g Clusterware – root.sh

  49. Configuration Assistants

  50. 10g RAC phase one complete!