
Large PI System Redundancy, Performance and Security Strategies


Presentation Transcript


  1. Large PI System Redundancy, Performance and Security Strategies Prepared by Craig Taylor (ctaylor@caiso.com) Christopher Russo (crusso@alum.mit.edu) Presentation to 2002 OSISoft Users Conference

  2. Agenda • California ISO System Overview • PI-UDS Hardware Cutover • PI-UDS Network Monitoring Tool • Large PI-UDS Primary Design Goals • Tips/Potential Problems for Large Systems • Discussion

  3. California ISO Factoids • Territories covered: • Pacific Gas and Electric • Southern California Edison • Comision Federal de Electricidad • Covers 124,000 square miles • 21,000 circuit miles of transmission • More than 600 generators • 45,000 MW summer peak load • $23 billion of energy consumed annually

  4. PI System Cutover • Energy Management System Cutover December 2001 • From ABB Spider to ABB Ranger • Ranger system improved reliability • Included Universal Data Server hardware upgrades • 150,000+ points • Currently world’s largest single system

  5. PI System Hardware Changes

  6. PI-UDS Issues Description and Containment • ISO experienced client disconnects • Potential denial-of-service causes: • Large DataLink data queries tied up the PI Archiving Subsystem • Network suspected in some cases • Disk subsystem suspected • The question was: how do we identify and fix the problem? We decided to gather more network-usage information and identify PI "hogs" • We wrote a program to organize the PICONFIG data into a web page

  7. Monitoring UDS Use with PICONFIG • Visual Basic program ran PICONFIG command (every 5 minutes) • PICONFIG commands:
@login UDS_SERVER,piadmin,password,5450
@mode list
@table pinetmgrstats
@ostru ID,ConStatus,ConTime,ConType,MsgRecv,MsgSent,Name,PeerAddress,PeerName,PID,RecvErrors,SendErrors,BytesRecv,BytesSent
@select ID=*
@ends
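The slides do not show the Visual Basic wrapper itself; the following is a minimal sketch of the same polling idea, assuming the PICONFIG script above is saved to a file (here called netstats.dif) and that the piconfig utility is on the PATH and reads its commands from standard input. Python is used purely for illustration; the original tool was Visual Basic.

```python
import subprocess
import time

PICONFIG_EXE = "piconfig"      # assumption: the PI piconfig utility is on the PATH
COMMAND_FILE = "netstats.dif"  # assumption: the @login/@table/@ostru script above, saved to a file
OUTPUT_FILE = "netstats.csv"   # accumulated connection-statistics snapshots

def poll_once():
    """Run piconfig with the command file on stdin and append its output to a log."""
    with open(COMMAND_FILE, "r") as commands:
        result = subprocess.run(
            [PICONFIG_EXE],
            stdin=commands,
            capture_output=True,
            text=True,
        )
    with open(OUTPUT_FILE, "a") as out:
        out.write(result.stdout)

if __name__ == "__main__":
    while True:
        poll_once()
        time.sleep(300)  # every 5 minutes, as the original Visual Basic tool did
```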

  8. Monitoring UDS Use with PICONFIG • PICONFIG Output:
1948,[0] Success,20-Feb-02 07:46:24,PI-API connection,224.,224.,pideE,IP.IP.IP.IP,PeerName,-1,0.,0.,8620.,42437
56,[0] Success,13-Feb-02 10:56:06,PI-API connection,107.,107.,pideE,IP.IP.IP.IP,PeerName,-1,0.,0.,4752.,19628
3000,[0] Success,4-Mar-02 05:55:29,PI-API connection,30.,30.,pideE,IP.IP.IP.IP,PeerName,-1,0.,0.,1600.,1081.
2727,[0] Success,4-Mar-02 02:07:11,PI-API connection,1282.,1282.,pideE,IP.IP.IP.IP,PeerName,-1,0.,0.,1.0027E+005,2.614E+005
2932,[0] Success,26-Feb-02 10:40:58,PI-API connection,1.0431E+005,1.0432E+005,pideE,IP.IP.IP.IP,PeerName,-1,0.,0.,4.2131E+005,2.5783E+006
3202,[0] Success,4-Mar-02 08:29:19,PI-API connection,15054,42290,pideE,IP.IP.IP.IP,PeerName,-1,0.,0.,7.0155E+005,1.2982E+008
8625,[0] Success,1-Mar-02 13:54:18,PI-API connection,79446,79446,PIPeE,IP.IP.IP.IP,PeerName,-1,0.,0.,4.1275E+007,1.934E+007
• The important information is: • PeerName (the person's computer) • ConTime (connection time) • BytesRecv and BytesSent (amount of data transferred)
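As a companion to the output above, here is a minimal sketch of how the fields the slide highlights (PeerName, ConTime, BytesRecv, BytesSent) could be pulled out of that comma-separated output to rank the heaviest connections. The column order follows the @ostru line on the previous slide; the file name and the number of rows returned are illustrative.

```python
import csv

# Column order follows the @ostru line on the previous slide.
FIELDS = ["ID", "ConStatus", "ConTime", "ConType", "MsgRecv", "MsgSent",
          "Name", "PeerAddress", "PeerName", "PID",
          "RecvErrors", "SendErrors", "BytesRecv", "BytesSent"]

def to_float(value):
    """piconfig prints numbers like '4.2131E+005' or '8620.'; parse them as floats."""
    try:
        return float(value)
    except ValueError:
        return 0.0

def top_talkers(path, n=10):
    """Return the n connections that have transferred the most bytes."""
    rows = []
    with open(path, newline="") as f:
        for record in csv.reader(f):
            if len(record) != len(FIELDS):
                continue  # skip blank or malformed lines
            row = dict(zip(FIELDS, (v.strip() for v in record)))
            row["TotalBytes"] = to_float(row["BytesRecv"]) + to_float(row["BytesSent"])
            rows.append(row)
    rows.sort(key=lambda r: r["TotalBytes"], reverse=True)
    return rows[:n]

if __name__ == "__main__":
    for r in top_talkers("netstats.csv"):
        print(r["PeerName"], r["ConTime"], int(r["TotalBytes"]))
```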

  9. Monitoring UDS Use with PICONFIG • PICONFIG Output:

  10. Monitoring UDS Use with PICONFIG • Web Pages

  11. Monitoring UDS Use with PICONFIG • Web Pages

  12. Monitoring UDS Use with PICONFIG • (Web page screenshot: a table of connection peer addresses and peer names, redacted)

  13. Monitoring UDS Use with PICONFIG NOTE: IP Addresses And Peer Names Removed for Security

  14. Monitoring Tool Benefits • Enabled quick identification of PI "hog" users • Helped identify and optimize abusive queries

  15. PI System Hardware Changes

  16. Christopher Russo, Russo & Associates, www.russoandassociates.com • Large PI-UDS Primary Design Goals • Tips/Potential Problems for Large Systems

  17. Designing Large PI Systems: Primary Goals • Reliability & Robustness • Avoiding single-element failure • Redundancy • Clustering Solutions • Hardware (GeoCluster, Marathon, Legato, EMC) • Software (MSCS, Unix) • Performance • Server tweaking • Archive parameters, bottlenecks • Redundant Solutions • IP Load Balancing • Different Servers • PI to PI Distribution

  18. Peak Performance Tips • Decide what you’re realistically going to need • Consider dedicated systems for specific applications • OS/Hardware Specific • Network performance and latency • Disk performance: disk striping, fiber-channel, dedicated hardware • Processor, network subsystem • PI-Specific Parameters • Archive Subsystem Tuning • Archive cache record tuning • Update subsystem tuning • Tuning for real-time performance versus historical retrieval • Considerations for totalizer users • “Pre-digesting” of specific calculations
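The last bullet, "pre-digesting" of specific calculations, can be illustrated with a small, PI-agnostic sketch: compute an expensive aggregate once on a schedule and store it as its own point, so interactive queries read the small pre-computed series instead of rescanning raw archive data. The data structures below are purely illustrative and do not use any PI API.

```python
from collections import defaultdict
from datetime import datetime

def hourly_averages(raw):
    """Roll up (timestamp, value) samples into one average per hour.

    raw: iterable of (datetime, float) pairs for a single point.
    Returns a sorted list of (hour_start, average) pairs that could be
    written back to a dedicated "digested" point.
    """
    buckets = defaultdict(list)
    for ts, value in raw:
        hour = ts.replace(minute=0, second=0, microsecond=0)
        buckets[hour].append(value)
    return sorted((hour, sum(v) / len(v)) for hour, v in buckets.items())

# Example: three raw samples collapse into two hourly values.
samples = [
    (datetime(2002, 3, 4, 8, 5), 100.0),
    (datetime(2002, 3, 4, 8, 35), 110.0),
    (datetime(2002, 3, 4, 9, 10), 90.0),
]
print(hourly_averages(samples))
```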

  19. Large System Potential Problems • Performance monitoring only manages some bottlenecks • Certain requests can still “nuke” the server • Loading and unloading records into memory is time-consuming • Shutdown times increase linearly with archive ratio size • Memory image cannot exceed 2 GB • Microsoft IP Load-Balancing doesn’t help PI • Connections are “stateful” and are not all equal • Achieving “bumpless” transfer is difficult without hardware solutions
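Because PI connections are stateful, failover has to happen on the client side rather than behind a stateless IP load balancer. The sketch below illustrates that idea only at the level of choosing a reachable server: the host names are hypothetical, and port 5450 is taken from the @login line shown earlier; it is not the PI client API.

```python
import socket

# Hypothetical redundant servers; port 5450 matches the @login line shown earlier.
SERVERS = [("pi-primary.example.com", 5450), ("pi-secondary.example.com", 5450)]

def reachable(host, port, timeout=5.0):
    """Check TCP reachability of a PI server port before handing it to the client API."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server(servers=SERVERS):
    """Return the first reachable server; since PI connections are stateful,
    the client has to choose a server itself rather than rely on IP load balancing."""
    for host, port in servers:
        if reachable(host, port):
            return host, port
    raise RuntimeError("no PI server reachable")
```

A real deployment would then hand the chosen host to the actual PI client connection call; achieving truly "bumpless" transfer still requires the hardware or clustering solutions listed on slide 17.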

  20. Archive Subsystem Bottlenecks

  21. Our PI 4 Wish List • A true multi-threaded archive subsystem • A connection and request logging facility • Not just who and how much, but what • A way to restrict expensive API queries or users • A point-database change-log feature • The issue of “meta-data” • Some current workarounds with scripts • Implemented in PI 3.3 • Questions & Discussion
