
Configuring Warehouse Builder in RAC Environments


  1. Configuring Warehouse Builder in RAC Environments

  2. Objectives
  After reading this appendix, you should be familiar with:
  • Devising a plan for installing and configuring OWB in your RAC environment
  • Using Oracle Universal Installer and the OWB Repository Assistant to install the OWB repository and register it on all cluster nodes
  • Replicating files among nodes as necessary
  • Changing a database configuration parameter to enable nodes to synchronize more quickly
  • Using Repository Browser to monitor node activity
  • Locating log files and collecting other information to troubleshoot node failures and monitor recovery

  3. Scope of RAC Discussed in This Appendix
  • It is assumed that the RAC architecture has already been designed, installed, and configured in your workplace.
  • Warehouse Builder will be installed and configured to fit the given RAC architecture.
  • Warehouse Builder will be installed by staff with some degree of RAC experience and expertise.
  • The Oracle Warehouse Builder 11g Release 2 Installation Guide has a detailed section on RAC.
  • There is detailed Oracle documentation on configuring Oracle for RAC.

  4. Clusters
  • An Oracle RAC system is composed of a group of independent servers, or nodes.
  • Interconnected nodes act as a single server.
  • Cluster software hides the structure.
  • Disks are available for read and write by all nodes.
  • The operating system is the same on each machine.
  [Diagram: nodes joined by an interconnect, clusterware running on each node, all nodes attached to shared disks]

  5. Oracle Real Application Clusters (RAC)
  • Multiple instances accessing the same database
  • Instances spread on each node
  • Physical or logical access to each database file
  • Software-controlled data access
  [Diagram: instances spread across interconnected nodes, sharing a cache and a common set of database files]

  6. Benefits of RAC
  • High availability: Survive node and instance failures.
  • No scalability limits: Add more nodes as you need them tomorrow.
  • Pay as you grow: Pay for just what you need today.
  • Key grid computing features:
    • Growth and shrinkage on demand
    • Single-button addition and removal of servers
    • Automatic workload management for services
  • Goals of RAC: high availability through load balancing and failover handling

  7. OWB Certification on RAC
  • OWB versions certified for RAC:
    • OWB 10.1.0.4 is certified (on both Database 10.1 and 10.2).
    • OWB 10g R2 is certified.
    • OWB 11g R1 is certified.
    • OWB 11g R2 will be certified.
  • OWB for RAC is certified on a listed set of database platforms.
  • For the latest certification and porting information, check OTN and MetaLink.

  8. Typical Connect Failover Scenarios
  • Database instance dies (crashes or is brought down for maintenance).
    • The Control Center Service dies if the control center is on a node where the database instance dies (the rest of the RAC is all right).
    • Connections may fail if the target instance for a request is down.
  • Listener dies.
    • Control center reconnections may fail if the control center resides on a node where the listener is down.
    • Connections may fail if the target listener for a request is down.
  • Node dies (crashes or is brought down for maintenance).
    • The Control Center Service dies if the control center is on a dead node.
    • Connections may fail if the target node for a request is down.
  • Session death must be handled by the client.

  9. Control Center Service Failover on RAC
  1. The Control Center Service (CCS) node fails.
  2. The CCS goes down.
  3. The CCS comes up on a different node of the RAC.
  4. The CCS resumes its activities.
  [Diagram: the control center appears as a single logical instance while the Control Center Service moves between nodes]

  10. Supported and Unsupported RAC Features
  • Load balancing
    • Server-side connection load balancing (supported)
    • Client-side load balancing (not supported)
  • Failover types
    • Connect failover: node, listener, or instance down (supported with a properly configured tnsnames.ora; see the sketch below)
    • Transparent Application Failover (not supported)
  • Multiple concurrent Control Center Services (not supported)
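  A minimal tnsnames.ora address-list sketch showing connect-time failover. All names here are placeholders (the lesson's actual entries appear on slide 35), and FAILOVER is on by default when several addresses are listed; it is shown explicitly only for clarity.

    # Hypothetical entry: if the first listener is unreachable,
    # the client connection attempt fails over to the second.
    SN_EXAMPLE =
      (DESCRIPTION =
        (ADDRESS_LIST =
          (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
          (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
          (FAILOVER = on)
        )
        (CONNECT_DATA =
          (SERVICE_NAME = example.service.name)
        )
      )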

  11. Lesson Progress Report
  Install OWB.
  1. Decide whether to use OWB with shared storage or with nonshared local disks on each RAC node.
  2. Select Cluster or Local Installation; run root.sh on each node.
  Configure database and OWB.
  3. Install the OWB repository only once, to one node. If using a shared disk, install the repository there.
  4. Use the Repository Assistant on other nodes to register the repository.
  5. On the database, set MAX_COMMIT_PROPAGATION_DELAY to 0.
  6. Replicate rtrepos.properties and tnsnames.ora on each node.
  7. Define OWB locations with TNS names, not host:port:service.
  Monitor nodes.
  8. Use OWB Browser to monitor, enable, or disable the Control Center Service on a node.
  Troubleshoot.
  9. Search logs on nodes, run helpful utilities, and avoid common mistakes.

  12. Single Shared Disk Versus Local Disk Per Node
  Single shared disk, or local disk on each node?
  • This decision is usually made before OWB is considered; it is not an OWB decision.
  • Usually, most of the RAC architecture has already been decided; OWB only needs to fit.
  • Advantages of a shared disk:
    • Centralized product installation and maintenance
    • No need to replicate files among nodes
    • No need to find and collect log files among nodes

  13. Extra Tasks Required in Nonshared Local Disk Environments
  • Warehouse Builder must be installed on every node.
  • rtrepos.properties and tnsnames.ora must be copied to each node (tnsnames.ora must be placed in two places per node, in addition to the Oracle home).
  • In a nonshared disk environment, you have a log directory on every nonshared disk.
    • The log directory primarily contains Control Center Service log files, written to whichever node was active at the time.
    • You must look across all nodes to find the most recent log file, possibly by using time stamps.

  14. Lesson Progress Report
  Install OWB.
  1. Decide whether to use OWB with shared storage or with nonshared local disks on each RAC node.
  2. Select Cluster or Local Installation; run root.sh on each node.
  Configure database and OWB.
  3. Install the OWB repository only once, to one node. If using a shared disk, install the repository there.
  4. Use the Repository Assistant on other nodes to register the repository.
  5. On the database, set MAX_COMMIT_PROPAGATION_DELAY to 0.
  6. Replicate rtrepos.properties and tnsnames.ora on each node.
  7. Define OWB locations with TNS names, not host:port:service.
  Monitor nodes.
  8. Use OWB Browser to monitor, enable, or disable the Control Center Service on a node.
  Troubleshoot.
  9. Search logs on nodes, run helpful utilities, and avoid common mistakes.

  15. Installing OWB on Real Application Clusters
  • OWB server software must be present on every node of the RAC cluster.
    • The OWB Control Center Service requires this.
  • The Control Center browser enables you to nominate one node and register all other nodes.
  • You need to install OWB only on the nodes that you want OWB to run on.
  • Only one Control Center Service is running at a time.
  [Diagram: OWB run-time software present on each node]

  16. OWB Installation Decisions for RAC
  First decision: a single shared disk, or a local disk on each node?
  • Shared disk: select “Local Installation” (install to the shared disk; not to the nodes).
  • Local disks: next decision, Cluster or Local Installation?
    • “Cluster Installation” installs OWB locally to all chosen nodes simultaneously.
    • “Local Installation” installs OWB one node at a time.
  This lesson shows a cluster installation to two nodes.

  17. Installing OWB with Oracle Universal Installer
  NOTE: These slides show OWB being installed to an Oracle 10g R2 database.

  18. Specifying Oracle Home for All Nodes

  19. Selecting Cluster or Local Installation
  When installing to a local disk on each node, rather than to a shared disk, there are two choices:
  • Cluster Installation: installs OWB locally to all chosen nodes simultaneously (install once).
  • Local Installation: installs OWB one node at a time.

  20. Selecting Cluster Installation
  A secure shell (SSH) environment is required for Cluster Installation; it allows connecting to each node without a password. In this example, OWB will be installed to both nodes.

  21. Installing to the Same Path on All Nodes
  OWB will be installed to this home on both cluster nodes.

  22. Installation Includes Copying to Remote Nodes
  In this example, OWB is first installed locally to the OWB home of node stbde03. Because Cluster Installation was selected, that OWB home is then copied to node stbde04.

  23. Executing the root.sh Configuration Script on Each Cluster Node
  This dialog box is merely a reminder to run the root.sh script on each node as “root.” You cannot use the dialog box to execute the script; a hedged command-line sketch follows.
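  A minimal sketch of running root.sh on both nodes from a central shell. The host names and the OWB home path are illustrative assumptions; you could equally log in to each node and run the script directly as root.

    # Run root.sh as root on EACH cluster node
    # (host names and the OWB home path are example values)
    ssh root@stbde03 /u01/app/oracle/owb/root.sh
    ssh root@stbde04 /u01/app/oracle/owb/root.sh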

  24. Lesson Progress Report
  Install OWB.
  1. Decide whether to use OWB with shared storage or with nonshared local disks on each RAC node.
  2. Select Cluster or Local Installation; run root.sh on each node.
  Configure database and OWB.
  3. Install the OWB repository only once, to one node. If using a shared disk, install the repository there.
  4. Use the Repository Assistant on other nodes to register the repository.
  5. On the database, set MAX_COMMIT_PROPAGATION_DELAY to 0.
  6. Replicate rtrepos.properties and tnsnames.ora on each node.
  7. Define OWB locations with TNS names, not host:port:service.
  Monitor nodes.
  8. Use OWB Browser to monitor, enable, or disable the Control Center Service on a node.
  Troubleshoot.
  9. Search logs on nodes, run helpful utilities, and avoid common mistakes.

  25. Installing the Repository to One Node and Registering It to Other Nodes
  [Diagram: a shared disk on a network server serves two nodes. Node 1 (host name STBDE03, DB instance OWBRAC1, net service name SN_OWBRAC1) has the OWB repository installed; node 2 (host name STBDE04, DB instance OWBRAC2, net service name SN_OWBRAC2) has the repository registered. The database, listener, or node might die.]

  26. Installing the OWB Repository with the Repository Assistant
  Install the OWB repository only once, to one node. Then register the repository on all other nodes.

  27. Connection Information for the Node
  Installing the OWB repository to node 1, STBDE03. The screenshot callouts show the physical machine name for the STBDE03 node, the database instance name on that node, and the net service name, which must be used for RAC and is stored in tnsnames.ora.
  OWB on RAC requires multiple service names: one for the cluster as a whole, plus a service name for each node in the cluster. The OWB 10g R2 Installation Guide has a RAC section detailing this.

  28. Finishing Installation of the Repository to a Node
  Use the Repository Assistant as you normally would in a non-RAC setup, supplying the net service name of node 1.

  29. Rerunning the Repository Assistant to Register the Repository on All Other Nodes

  30. Finishing OWB Repository Registration

  31. Only One Database Parameter Specifically Recommended for RAC
  • MAX_COMMIT_PROPAGATION_DELAY
    • Change the default of 99 to 0.
    • A value of 0 aligns the nodes for faster synchronization.
    • This change is recommended, not required.
    • For details, see MetaLink note 341963.1, Part 1, Page 51, item 13. A SQL sketch of the change follows.
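  A minimal SQL*Plus sketch of the change, assuming the RAC instances start from a shared spfile; because this parameter is static, the new value takes effect only after the instances are restarted.

    -- As SYSDBA: record the value for all instances in the spfile
    ALTER SYSTEM SET max_commit_propagation_delay = 0 SCOPE=SPFILE SID='*';
    -- Restart the instances (for example, with srvctl) to apply the change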

  32. rtrepos.properties Must Be Replicated to All Nodes, in the owb/bin/admin Path

  33. Moving a Copy of rtrepos.properties to Each Node
  The screenshots show connecting from node 1 to node 2 and copying the file to node 2; a hedged command-line sketch follows.
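  A minimal sketch of the same copy using scp, assuming OWB_HOME points to the same OWB home path on both nodes (the host names are the examples used in this lesson):

    # From node 1 (stbde03): copy rtrepos.properties to node 2 (stbde04)
    scp $OWB_HOME/owb/bin/admin/rtrepos.properties \
        oracle@stbde04:$OWB_HOME/owb/bin/admin/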

  34. OWB RAC Locations Use Net Service Names
  • Log in using the net service name, for RAC protection, if running jobs.
  • Define OWB locations using net service names.

  35. Sample tnsnames.ora File

    SN_OWBRAC2 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = stbde04-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = stbde03-vip)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = owbrac.us.oracle.com)
          (INSTANCE_NAME = owbrac2)
        )
      )

    SN_OWBRAC1 =
      (DESCRIPTION =
        (ADDRESS = (PROTOCOL = TCP)(HOST = stbde03-vip)(PORT = 1521))
        (ADDRESS = (PROTOCOL = TCP)(HOST = stbde04-vip)(PORT = 1521))
        (LOAD_BALANCE = yes)
        (CONNECT_DATA =
          (SERVER = DEDICATED)
          (SERVICE_NAME = owbrac.us.oracle.com)
          (INSTANCE_NAME = owbrac1)
        )
      )

  Place the file in these three folders:
  • ORACLE_HOME/network/admin
  • OWB_INSTALLED_HOME/network/admin (for OWB Import)
  • OWB_INSTALLED_HOME/owb/network/admin
  A quick verification sketch follows.
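  After the file is in place, one sanity check on each node is to resolve both names with tnsping, assuming the Oracle client tools are on the PATH:

    # Verify that each net service name resolves from this node
    tnsping SN_OWBRAC1
    tnsping SN_OWBRAC2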

  36. RAC Deployment
  Locations are defined using net service names, not host:port:service.
  [Diagram: control center and design repository deployed against the RAC, seen as a single logical instance]

  37. RAC Deployment: Alternate Node
  [Diagram: the same deployment, with the control center running on an alternate node of the single logical instance]

  38. Lesson Progress Report
  Install OWB.
  1. Decide whether to use OWB with shared storage or with nonshared local disks on each RAC node.
  2. Select Cluster or Local Installation; run root.sh on each node.
  Configure database and OWB.
  3. Install the OWB repository only once, to one node. If using a shared disk, install the repository there.
  4. Use the Repository Assistant on other nodes to register the repository.
  5. On the database, set MAX_COMMIT_PROPAGATION_DELAY to 0.
  6. Replicate rtrepos.properties and tnsnames.ora on each node.
  7. Define OWB locations with TNS names, not host:port:service.
  Monitor nodes.
  8. Use OWB Browser to monitor, enable, or disable the Control Center Service on a node.
  Troubleshoot.
  9. Search logs on nodes, run helpful utilities, and avoid common mistakes.

  39. Logging In to OWB Browser

  40. Selecting the Service Node Report

  41. Service Node Report Shows the Status of Nodes
  [Screenshot: the report lists node 1 and node 2 with their status]

  42. Disabling a Node
  Disable node 1 by clearing the Enabled check box and clicking the Update Node Details button.

  43. Enabling a Node
  Click Refresh. Node 2 is now the active node.

  44. Lesson Progress Report
  Install OWB.
  1. Decide whether to use OWB with shared storage or with nonshared local disks on each RAC node.
  2. Select Cluster or Local Installation; run root.sh on each node.
  Configure database and OWB.
  3. Install the OWB repository only once, to one node. If using a shared disk, install the repository there.
  4. Use the Repository Assistant on other nodes to register the repository.
  5. On the database, set MAX_COMMIT_PROPAGATION_DELAY to 0.
  6. Replicate rtrepos.properties and tnsnames.ora on each node.
  7. Define OWB locations with TNS names, not host:port:service.
  Monitor nodes.
  8. Use OWB Browser to monitor, enable, or disable the Control Center Service on a node.
  Troubleshoot.
  9. Search logs on nodes, run helpful utilities, and avoid common mistakes.

  45. Useful Diagnostics for OWB RAC Problems
  • The main diagnostic is service_doctor.sql.
  • show_service.sql shows whether the Control Center Service is available (running) or not available.
    • You can determine the same by using the Service Node Report; if the node is marked as enabled, you know that the service is available.
  • In SQL*Plus, query USER_JOBS (see the sketch below).
    • DBMS_JOB checks every six minutes on RAC.
    • The output identifies the node on which each job runs.
  • owb_collect.sql gathers system information on the database and repository.
  • Check for errors in defining and replicating tnsnames.ora.
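  A minimal USER_JOBS query sketch, run in SQL*Plus as the repository owner; the INSTANCE column shows which instance each job is tied to, and BROKEN flags jobs that have stopped running:

    -- Connected as the OWB repository owner
    SELECT job, what, instance, last_date, next_date, broken
    FROM   user_jobs;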

  46. Using SQL*Plus Scripts to Test Availability of the Control Center Service
  The screenshots show the host name of node 2: stopping the service makes it not available, and starting the service makes it available again. A hedged invocation sketch follows.
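  A sketch of checking and cycling the service with the SQL*Plus scripts shipped under the OWB home; the owb/rtp/sql path shown here is typical but should be treated as an assumption for your release:

    -- Run from SQL*Plus, connected as the repository owner
    -- (the path under the OWB home may differ by release)
    @/u01/app/oracle/owb/owb/rtp/sql/show_service.sql
    @/u01/app/oracle/owb/owb/rtp/sql/stop_service.sql
    @/u01/app/oracle/owb/owb/rtp/sql/start_service.sql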

  47. Result of Not Replicating rtrepos.properties to a Node
  Hint: Remember to replicate the rtrepos.properties file to all RAC nodes every time you create a repository by using the Repository Assistant.

  48. Using OWB With or Without a Control Center Service
  • You can execute OWB mappings with or without a Control Center Service (CCS).
  • An RDBMS-only installation on RAC, without a CCS, is possible.
  • With a CCS, you can:
    • Use Control Center Manager
    • Use process flows
    • Run PL/SQL or SQL*Loader mappings
    • Have transparent failover and auditing support
  • Without a CCS, you can:
    • Run only PL/SQL mappings
    • Only save generated mapping code to a file and execute it later
    • Still have transparent failover and auditing support

  49. Further Study of RAC
  For in-depth study of RAC, consider these Oracle University courses:
  • Oracle Database 10g: Real Application Clusters (D17276GC10)
  • Oracle 10g Database: RAC Deployment Workshop (D44424GC10)
  • Oracle Database 10g: RAC Basic Concepts and Architecture Seminar (D44422GC10), an in-class one-day course
  • Oracle Database 11g: RAC Administration (D50311GC11)
  • Oracle Database 11g: RAC Overview and Architecture Seminar (D53954GC10)

  50. Summary
  In this appendix, you should have become familiar with:
  • Devising a plan for installing and configuring OWB in your RAC environment
  • Using Oracle Universal Installer and the OWB Repository Assistant to install the OWB repository and register it on all cluster nodes
  • Replicating files among nodes as necessary
  • Changing a database configuration parameter to enable nodes to synchronize more quickly
  • Using Repository Browser to monitor node activity and enable or disable nodes
  • Locating log files and collecting other information to troubleshoot node failures and monitor recovery
