
Oracle SuperCluster for ESB Networks Utilities Management




1. Oracle SuperCluster for ESB Networks Utilities Management By Simon Holt, ESB Networks

2. Introductions
Simon Holt • DBA / Architect • >20 years with Oracle RDBMS • all on 24x7 critical systems • police, heavy industry, telecoms, banking, utilities • Chair, Irish Tech User Group
ESB Networks • semi-state, regulated company • transmission asset holder • maintains LV/MV infrastructure • outage management • control of power flow

3. NMS – A Critical Application • Outage management, command and control – 24x7x365 operations • Interfaced to SCADA, Call Entry, SAP, public web applications and many others • New Control Centre and ongoing investment • Performance-sensitive: it must perform under pressure

4. Current Situation – Drivers for Change • Ageing hardware • De-supported software landscape • Increasing reporting and ETL requirements • New functionality needs: Smart Metering and Powerflow

5. Current Physical Architecture • Two DMX 4000 arrays replicated over dark fibre with SRDF • IBM P590 hosting 5 LPARs: database, model build, services, application and dev/test • A second IBM P590 running a physical standby under Data Guard • Two MA 70 arrays • Two HP DL380 servers
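Because the standby in this architecture is a classic physical standby, its basic health check is whether the redo received from production is actually being applied. A minimal sketch, run on the standby and using only the standard v$archived_log view:

    -- Run on the physical standby: compare the latest redo sequence
    -- received per thread with the latest sequence actually applied.
    -- A growing gap means apply is falling behind (or has stopped).
    SELECT thread#,
           MAX(sequence#)                                    AS last_received,
           MAX(CASE WHEN applied = 'YES' THEN sequence# END) AS last_applied
    FROM   v$archived_log
    GROUP  BY thread#;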

6. Logical Architecture • Map data from GIS feeds the NMS Model Build database, which pushes the updated asset model into the NMS database • Call Entry, SAP B/W and “Powercheck” interface with the NMS database • An ETL take-on copies NMS data into a bespoke NMS historical database (a minimal sketch follows) • APEX applications and reports run against the historical database
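The deck does not detail the ETL take-on, so here is a minimal sketch of what such a step might look like. All object names (outage_events, outage_history, the nms_hist database link) are purely illustrative, not from the presentation:

    -- Hypothetical nightly ETL step: copy closed outage events from the
    -- live NMS schema into the bespoke historical database over a DB link.
    -- Table, column and link names are illustrative assumptions.
    INSERT INTO outage_history@nms_hist
           (event_id, feeder_id, start_time, restore_time, customers_affected)
    SELECT event_id, feeder_id, start_time, restore_time, customers_affected
    FROM   outage_events
    WHERE  status = 'CLOSED'
    AND    restore_time < TRUNC(SYSDATE);  -- only events completed before today

    COMMIT;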

7. Requirements • Hardware to support WebLogic and the RDBMS • Consolidation of existing databases • Low-latency, high-performance storage – all flash? • HA mandatory • DR to be as comprehensive and as automated as possible • Consider Oracle MAA (Maximum Availability Architecture)
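One concrete measure of how “comprehensive and automated” a Data Guard based DR really is under MAA is the replication lag. A minimal monitoring sketch using the standard v$dataguard_stats view on the standby:

    -- Run on the standby: transport lag is redo not yet shipped from
    -- production; apply lag is shipped redo not yet applied. Both should
    -- stay near zero for a DR that can take over with minimal data loss.
    SELECT name, value, time_computed
    FROM   v$dataguard_stats
    WHERE  name IN ('transport lag', 'apply lag');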

8. Option 1 – Old Hardware, New Storage • Storage latency is key to NMS / ETL response • ASM on iSCSI RAID 0+1 delivers 1–3 ms latency • An all-flash solution is possible • Promise of < 10 µs latency! • Could replace the existing storage • Savings in floor (tile) space and power
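Latency claims like these can be sanity-checked from inside the database. A sketch using v$event_histogram; note its buckets are in milliseconds, so truly verifying a sub-10 µs promise needs finer-grained tooling (a microsecond histogram view only appears in later Oracle releases):

    -- Distribution of single-block read waits by latency bucket.
    -- wait_time_milli is the upper bound of each power-of-two ms bucket,
    -- so a flash-backed store should pile almost everything into bucket 1.
    SELECT wait_time_milli AS bucket_ms_upper_bound,
           wait_count
    FROM   v$event_histogram
    WHERE  event = 'db file sequential read'
    ORDER  BY wait_time_milli;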

9. Option 2 – Commodity Hardware • Blade cabinet plus blades – upgrade and scale out • Cost reduction from commodity kit! • HA resilience options • A gateway to MAA and Active Data Guard

10. Option 3 – Storage Considerations • Static nature of Violin storage • Hitachi Data Systems HUS VM option: • Cabinet-based, expandable, upgradeable, flexible • 338 TB raw capacity with an accelerated flash module • Flash module comparable to Violin • Performance test exceeded expectations • The cabinet could house a blade chassis • Support and migration assistance
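The deck does not say how the performance test was run; one vendor-neutral way to compare candidate arrays is Oracle’s own I/O calibration, sketched below. It requires timed_statistics=TRUE and asynchronous I/O to be enabled, and the disk count here is an assumption to adjust per array:

    -- Sketch: measure the max IOPS, throughput and latency the storage
    -- can sustain for this database, via the documented CALIBRATE_IO API.
    SET SERVEROUTPUT ON
    DECLARE
      l_max_iops PLS_INTEGER;
      l_max_mbps PLS_INTEGER;
      l_latency  PLS_INTEGER;
    BEGIN
      DBMS_RESOURCE_MANAGER.CALIBRATE_IO(
        num_physical_disks => 24,   -- assumed spindle/flash-module count
        max_latency        => 10,   -- tolerated single-block latency, ms
        max_iops           => l_max_iops,
        max_mbps           => l_max_mbps,
        actual_latency     => l_latency);
      DBMS_OUTPUT.PUT_LINE('max_iops='   || l_max_iops ||
                           ' max_mbps='  || l_max_mbps ||
                           ' latency_ms=' || l_latency);
    END;
    /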

11. Option 4 – Initial NMS Consulting Suggestions (suggested high-level hardware/software diagram, flattened here)

Assumptions • Product: Oracle Utilities Network Management System 1.11.0.4 • Platform: Linux, x86 64-bit • Total electric customers expected: 2,200,000 • Secondary network modelled: no • Number of switchable devices: ? • Max device operations per hour: 300 • Max calls per hour: 40,000 • Max system operators: 75 • Max call entry users: 200

Integrations (see the context diagram for detailed data flows) • IVR, homegrown Call Entry, SAP and “Focus”: calls, outage status, callbacks, callback responses and customer record extracts • SCADA: device status and analog measurements • AMI (future): meter status and meter ping requests • Mobile (future) • Crews: status changes, order status and updates • Batch processes • GIS (Intergraph) • All connected over a TCP/IP network

Clients • Operators / dispatchers: 75 max • Call entry users: 200 max • Business intelligence users: 200 max • Every client needs at least 4 GB RAM; supported platforms are Windows XP Pro SP3 (32-bit), Windows 7 Professional (32/64-bit) and Oracle Linux 6.3

Hardware • Production, DR and Test are each a two-node cluster of Sun Server X3-2 machines: 2 CPUs @ 8 cores x 2.9 GHz (16 cores total), 128 GB RAM and 2 x 600 GB 15K rpm internal disks in RAID-1 per node • Each cluster attaches over Fibre Channel to cluster-compatible SAN storage, 1 TB in RAID 0+1, at its site • Node 1 runs the primary NMS server and backup OBIEE Outage Analytics; Node 2 runs the backup NMS server and primary OBIEE Outage Analytics • DR replication over the WAN is Data Guard for the databases plus rsync for everything else

Oracle VM layout (identical on each node) • VM 1 – NMS RDBMS server: 4 cores x 2.9 GHz, 16 GB RAM, Oracle Linux 6.3 64-bit, Oracle Database EE (RAC optional), SAN storage recommended • VM 2 – NMS BI RDBMS server: 2 cores, 8 GB RAM, Oracle Linux 6.3 64-bit, Oracle Database EE (RAC optional), SAN storage recommended • VM 3 – NMS core server: 6 cores, 64 GB RAM, Oracle Linux 6.3 64-bit, Oracle NMS services and interfaces • VM 4 – NMS application server (core): 2 cores, 24 GB RAM, Oracle Linux 6.3 64-bit, NMS app server (WebLogic 10.3.6) • VM 5 – BI application server: 2 cores, 16 GB RAM, Oracle Linux 6.3 64-bit, OBIEE Outage Analytics • If virtualised, allocate at least 1 core and 8 GB RAM per node to the VM manager (a sketch of caging a database to its VM’s core count follows)

Notes • Oracle recommends that DR be set up as a replica of production; DR Node 1 is the primary node for resuming operations when the production data centre is unavailable, becoming a replica of the production configuration in disaster mode • To best utilise the hardware, testing, training and development can run as virtual environments within the DR cluster and be shut down as part of the disaster recovery procedure • The Test cluster uses the same configuration as production and DR; when not needed for full-scale testing it hosts training, development and nominal test environments, which should be shut down before full-scale or failover testing begins • Training and model validation servers do not need to be clustered and may not require business intelligence • The accompanying spreadsheet sizes the cores and memory for the QA/test, training, development and model validation environments
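With fixed per-VM core allocations like these, the database instances themselves can also be stopped from spilling over their share. A minimal instance-caging sketch matching the 4-core NMS RDBMS VM; the core count is taken from the diagram, the plan choice is an assumption:

    -- Instance caging: cap this instance at the 4 cores sized for VM 1
    -- and enable a Resource Manager plan, which caging requires.
    ALTER SYSTEM SET cpu_count = 4 SCOPE = BOTH;
    ALTER SYSTEM SET resource_manager_plan = 'DEFAULT_PLAN' SCOPE = BOTH;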

12. Option 5 – Exa* Solution Both the production and DR sites comprise: • Exadata quarter rack: 48 cores, 36 x 1.2 TB 10K high-performance drives, with Active Data Guard to the DR site • Exalogic eighth rack: 4 compute nodes, with remote ZFS replication • ZFS Storage Appliance: backups of database and non-database files, snapshots and clones • Oracle Secure Backup admin and media servers driving an MSL tape library with two LTO drives, for offsite backups and vaulting • Connectivity: InfiniBand internally, plus Ethernet and Fibre Channel, with OEM monitoring and a Platinum Support gateway
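Active Data Guard is what separates this option’s standby from the plain physical standby in the current estate: the DR database stays open read-only while redo is applied, so reporting and ETL reads can be offloaded. A sketch of the standard commands on the standby (11g-era syntax, matching this hardware generation):

    -- On the physical standby: stop apply, open read-only, then restart
    -- real-time apply; with an Active Data Guard licence the database
    -- now serves queries while staying current with production.
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;
    ALTER DATABASE OPEN READ ONLY;
    ALTER DATABASE RECOVER MANAGED STANDBY DATABASE
      USING CURRENT LOGFILE DISCONNECT FROM SESSION;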

13. Option 6 – SuperCluster Solution Both the production and DR sites comprise: • SPARC SuperCluster: two T5-8 compute nodes, each with four 16-core CPUs and 1 TB memory • 6 Exadata high-capacity storage cells, 288 TB raw • High-speed InfiniBand network: three 36-port InfiniBand switches plus one 48-port 1-Gigabit Ethernet switch • ZS3 storage, 80 TB raw: backup to disk of database and non-database files, snapshots and clones • 4 LDoms (2 for DB, 2 for App) • Oracle Secure Backup admin and media servers driving an MSL tape library with two LTO drives (Fibre Channel or SAS), for offsite backups and vaulting • OEM monitoring and a Platinum Support gateway over Ethernet
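The Exadata storage cells in the SuperCluster earn their keep through offload (smart scan), and whether a workload actually benefits is visible from system statistics. A minimal sketch using the documented cell offload counters:

    -- How much physical I/O was eligible for offload to the cells, and
    -- how little data came back over InfiniBand after smart scan; a big
    -- gap between the two means the cells are filtering effectively.
    SELECT name,
           ROUND(value / 1024 / 1024 / 1024, 1) AS gb
    FROM   v$sysstat
    WHERE  name IN
           ('cell physical IO bytes eligible for predicate offload',
            'cell physical IO interconnect bytes returned by smart scan');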

14. Questions? Simon Holt, ESB Networks – simon.holt@esb.ie
