
Logging


Presentation Transcript


1. Logging
Mike Lamont, Georges Henry Hemlesoet (AB/OP)
Discussions with M. Pace & C. Roderick

2. Accelerator databases
On-line databases: data needed instantaneously to interact with the accelerator; the database sits between the accelerator equipment and the client (operator).
• LSA – Accelerator Settings database
• MDB – Measurement database
• LDB – Logging database
• LASER – Alarms database
• E-Logbook – electronic logbooks
• CESAR – SPS-EA controls
• TIM – Technical Infrastructure Monitoring database
• CCDB – Controls Configuration (well,... semi-on-line)
Off-line databases: data not needed directly to interact with the accelerator; they serve as the source for on-line databases, with synchronization mechanisms propagating the data for consistency.
• Layout database
• Assets (MTF)
• Documentation (EDMS)
• Drawings (CDD)

3. Purpose of the logging system
• Need to know what's going on in and around the LHC by logging heterogeneous time-series data, in order to:
• manage information required to improve the performance of the machine and individual systems;
• meet specific INB requirements to record beam history;
• make available long-term operational statistics for management;
• avoid duplicate logging developments.

4. History
• Logging Service (est. 2003)
• Backend infrastructure exists since 2004
• Requirements have changed and the scope has greatly enlarged (from the LHC machine to the experiments, injectors, and expert data)
• Measurement database (est. 2005)
• Provides Java clients with short-term persistence and data filtering; produces accelerator statistics
• Requirements evolving, mostly due to LHC HWC
• Shares infrastructure with the Logging Service
• LSA Accelerator Settings database (est. 2005)
• Data store of all accelerator settings and related information for operation
• Performance and availability directly impact accelerator operation
• Shares infrastructure with the Logging Service since 2006

5. Logging – conceptual view
[Diagram: equipment data acquisition (FECs and PLCs for PC, MK, MS, PIC, WIC, QPS, ELEC, TIM, EAU, BCT, BLM, BPM, Cryo, CIET, Coll, CV, COMM, BIC, BETS, Rad, VAC, SU, Exp VAC, CNGS), each with a filter for data reduction, feeds the Measurement DB (MDB, 7-day persistence, max. 2 Hz data sampling, data loading at most every 5 s). Filtered data is transferred every 15 min (every 2 h for some sources) to the Logging DB (LDB, 20-year persistence, max. 1 Hz data sampling, data loading at most every 10 min). METER provides browsing and data extraction on the MDB, TIMBER on the LDB.]

6. Logging – current situation
Measurement Service
• Data persistence of 1 week
• Data loading not more frequent than once every 5 seconds per client
• Data sampling at 2 Hz max. per variable
• Java clients from the pub-sub world
• Mainly cyclic beam-related data from the injectors: intensities, profiles, losses,...
• Commissioning of the LHC: individual system and integrated HWC data (power converters, kickers, septa, RF, radiation monitors, vacuum; beam instrumentation to come)
• Filtering towards logging every 15 minutes for each variable: logging toggle, dead band, dead time, fixed frequency (see the sketch after this slide)
• METER interface for rapid data browsing and data extraction
Logging Service
• Data persistence of 20 years
• Data loading not more frequent than once every 10 minutes per client
• Data sampling at 1 Hz maximum per variable
• Clients: the Measurement Service (filtered) for the injectors and LHC HWC, plus derived data and statistics; PVSS industrial systems (Cryo, QPS, Collimators, Survey, Vacuum, PIC, WIC, Experiments, CNGS, CIET); the TIM database (filtered) for technical services
• No filtering between PVSS and logging
• TIMBER interface for off-line correlation and data extraction
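The filter modes named above for the transfer towards logging (logging toggle, dead band, dead time, fixed frequency) are simple per-variable reduction rules. The Java sketch below shows one plausible combination of a dead band with a dead time; the class and all names in it are illustrative assumptions, not the actual service code.

    // Illustrative per-variable filter: a sample is propagated to the logging
    // DB only if it differs from the last logged value by more than a dead
    // band AND a minimum dead time has elapsed. Hypothetical sketch only.
    public final class DeadBandFilter {

        private final double deadBand;     // minimum change, in the variable's units
        private final long deadTimeMillis; // minimum interval between logged samples

        private double lastLoggedValue = Double.NaN;
        private long lastLoggedTime = Long.MIN_VALUE;

        public DeadBandFilter(double deadBand, long deadTimeMillis) {
            this.deadBand = deadBand;
            this.deadTimeMillis = deadTimeMillis;
        }

        /** Returns true if the sample should be transferred to logging. */
        public synchronized boolean accept(double value, long timestampMillis) {
            boolean firstSample = Double.isNaN(lastLoggedValue);
            if (firstSample
                    || (Math.abs(value - lastLoggedValue) > deadBand
                        && timestampMillis - lastLoggedTime >= deadTimeMillis)) {
                lastLoggedValue = value;
                lastLoggedTime = timestampMillis;
                return true;
            }
            return false;
        }
    }

A fixed-frequency filter would instead forward exactly one sample per configured interval, independent of how much the value changes.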

7. Possible evolution: dedicated HWC measurement database
[Diagram: same equipment acquisition chain and data-reduction filters as the conceptual view, with the Measurement DB (MDB, 7-day persistence, max. 2 Hz data sampling, data loading at most every 5 s) complemented by a cloned HWC Measurement DB (HWCMDB, persistence period TBD, 5 Hz data sampling); both feed the Logging DB (LDB, 20-year persistence).]

8. Data estimation
[Chart: actually logged data vs. predicted to-be-logged data over time.] Expected to reach the benchmark of 1 terabyte of data in June 2007 (latest estimation, as presented in Nov. 2006).

9. Logging systems operational today
[Diagram: three acquisition and buffering paths feed the LOGGING DB, the main source for operation diagnostics: the Java logging client (all devices except PVSS and TIM), PVSS (Vacuum, LHC Cryo, QPS, PIC, ...), and TIM (cooling, ventilation, electricity).]

10. Main goals
• Applications
• Reconstruction of a past event
• Surveillance of a parameter's evolution
• Beam statistics
• CNGS: event counting from neutrino beam interactions
• LHC Post Mortem
• Means to analyse and improve machine performance
• Constraints
• High-availability service: 24 h a day, 7 days a week during machine operation
• Zero loss tolerance for critical applications (beam statistics, LHC events)
• High scalability

11. Evolution of coverage
• Outstanding enlargement of scope since 2004
• 2003-2004: TT40-TI8 beam runs
• 2005: LHC HWC (FGCs)
• 2006: SPS + CNGS; executed CYCLEs; allowed data stamping to be standardized in the front ends
• 2007: PS complex BCT + vacuum; LHC injection lines; new services (cf. later slide)
• Coverage extended to ALL machines
• Data volume today > 60 × 10^6 records in the DB per day

12. Standard configuration
• CONTINUOUS logging: JAPC MONITORING publishes to Logging; each acquired value is transferred to Logging
• Configuration from XML: applies to 95% of clients; pre-defined list of parameters in an XML file (see the sketch below)
• Configuration from LSA: applies to the LHC HWC FGCs; when a SOC is modified, enabled or disabled, logging is automatically re-configured
• XML / LSA configuration maintained by AB/OP
[Diagram: acquired values reach LOGGING and a fixed display through the Java API; JAPC MONITORING takes its parameter list either from the XML file or, for LHC HWC operational SOCs, from the LSA DB, and subscribes via JAPC.]
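As an illustration of the XML-driven path, the sketch below reads a pre-defined parameter list from an XML file and subscribes each parameter for continuous logging. The <parameter name="..."/> layout and the MonitoringService interface are assumptions made for this example; only the JDK XML-parsing calls are real, and the actual JAPC/Logging API is not shown.

    import java.io.File;
    import java.util.ArrayList;
    import java.util.List;
    import javax.xml.parsers.DocumentBuilderFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    // Sketch of "configuration from XML": a pre-defined list of parameters is
    // read from a file and each one is subscribed for continuous logging.
    // The XML layout and MonitoringService are hypothetical, not the JAPC API.
    public final class XmlLoggingConfig {

        /** Hypothetical stand-in for the monitoring subscription service. */
        interface MonitoringService {
            void subscribe(String parameterName); // every acquired value goes to logging
        }

        static List<String> readParameterNames(File xmlFile) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder().parse(xmlFile);
            NodeList nodes = doc.getElementsByTagName("parameter");
            List<String> names = new ArrayList<>();
            for (int i = 0; i < nodes.getLength(); i++) {
                names.add(nodes.item(i).getAttributes()
                        .getNamedItem("name").getNodeValue());
            }
            return names;
        }

        public static void configure(File xmlFile, MonitoringService monitoring)
                throws Exception {
            for (String name : readParameterNames(xmlFile)) {
                monitoring.subscribe(name); // continuous: each value is transferred
            }
        }
    }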

13. Customized configuration
• ON DEMAND logging: data is logged when the operator decides
• Acquisition module: any Java application; used by the CESAR application
• Generic Java API provided by Logging (see the sketch below)
[Diagram: any Java application or other acquisition module receives acquired values via JAPC / JAPC MONITORING and pushes them to LOGGING on request through the Java API.]
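The slide only states that a generic Java API is provided; as a rough illustration, an on-demand client could look like the sketch below, where an application pushes the currently acquired value when the operator asks for it. Every name here, including the example parameter string, is hypothetical.

    // Sketch of on-demand logging: instead of a continuous subscription, any
    // Java application pushes a value to the logging service when the operator
    // decides. OnDemandLogger is a hypothetical stand-in for the generic API.
    public final class OnDemandLoggingExample {

        interface OnDemandLogger {
            void log(String variableName, double value, long timestampMillis);
        }

        /** Called from an application (e.g. a CESAR-style GUI) on operator request. */
        static void snapshot(OnDemandLogger logger, String variable, double acquiredValue) {
            logger.log(variable, acquiredValue, System.currentTimeMillis());
        }

        public static void main(String[] args) {
            // Trivial logger that just prints; the real client would buffer and
            // transfer the value to the measurement/logging databases.
            OnDemandLogger console = (name, value, ts) ->
                    System.out.printf("%s = %f @ %d%n", name, value, ts);
            snapshot(console, "SPS.BCT/Acquisition#totalIntensity", 1.23e13); // illustrative name
        }
    }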

14. Data
• Data source: any client supporting JAPC (via japc-tgm, japc-cmw, japc-fgc, japc-dim)
• Data type: scalar, vector, string
• Data rate: synchronous subscriptions on user request, asynchronous subscriptions; operational limit: 2 Hz
• TimeSeries: {acquired value, time stamp} pair
• Time stamp: UTC time in milliseconds; cycleStamp if the device is cycle-aware, acquisitionStamp if it is NOT (see the sketch below)
[Diagram: acquired values flow from the JAPC transports through JAPC Monitoring into the MEAS DB as TimeSeries, and on to LOGGING.]
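A minimal sketch of the {acquired value, time stamp} pair described above, with the cycleStamp/acquisitionStamp distinction made explicit; the class and method names are illustrative, not the service's actual types.

    // Hypothetical sketch of one time-series sample: an acquired value paired
    // with a UTC timestamp in milliseconds, taken from the cycle stamp for
    // cycle-aware devices and from the acquisition stamp otherwise.
    public final class TimeSeriesSample {

        private final Object value;   // scalar, vector (e.g. double[]) or String
        private final long utcMillis; // UTC time in milliseconds

        private TimeSeriesSample(Object value, long utcMillis) {
            this.value = value;
            this.utcMillis = utcMillis;
        }

        /** Cycle-aware device: stamp the sample with the cycle stamp. */
        static TimeSeriesSample fromCycleAware(Object value, long cycleStampMillis) {
            return new TimeSeriesSample(value, cycleStampMillis);
        }

        /** Non-cycle-aware device: fall back to the acquisition stamp. */
        static TimeSeriesSample fromAcquisition(Object value, long acquisitionStampMillis) {
            return new TimeSeriesSample(value, acquisitionStampMillis);
        }

        public Object value()   { return value; }
        public long utcMillis() { return utcMillis; }
    }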

15. Data grouping and shaping
• Data processing may be requested by clients
• Data converter: implemented where it is most suitable
• Data grouping before writing to the DB: buffering in the logging module, writing at a specific client-defined frequency (see the sketch below)
[Diagram: values arrive over JAPC (japc-tgm, japc-cmw, japc-concentrator) into JAPC MONITORING; converter modules shape the data, which is grouped and written to the MEAS DB, to LOGGING, and to a fixed display.]
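To illustrate the grouping step, the sketch below buffers samples in memory and flushes them to the database as one batch at a client-defined period, which is the behaviour the slide describes; the DbWriter interface and all names are assumptions for this example.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    // Hypothetical sketch of grouping in the logging module: samples are
    // buffered and written to the DB as one group at a client-defined period.
    public final class GroupingBuffer<T> {

        interface DbWriter<T> {
            void writeBatch(List<T> samples); // one DB round-trip per group
        }

        private final List<T> buffer = new ArrayList<>();
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();

        public GroupingBuffer(DbWriter<T> writer, long flushPeriodSeconds) {
            scheduler.scheduleAtFixedRate(() -> flush(writer),
                    flushPeriodSeconds, flushPeriodSeconds, TimeUnit.SECONDS);
        }

        /** Called for every acquired (and possibly converted) value. */
        public synchronized void add(T sample) {
            buffer.add(sample);
        }

        private synchronized void flush(DbWriter<T> writer) {
            if (!buffer.isEmpty()) {
                writer.writeBatch(new ArrayList<>(buffer));
                buffer.clear();
            }
        }

        public void shutdown() {
            scheduler.shutdown();
        }
    }

Writing one batch per period trades a bounded delay for far fewer database round-trips.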

16. Already operational
• Transfer line
• HWC SOCs
• Power supplies
• TIM measurements
• PVSS (Cryogenics)
• QPS

17. Under test
• Will be deployed soon:
• LHC RF
• LHC BLM
• LHC BT (kickers)
• Already defined:
• DIP (data interchange with the experiments)

18. Challenges for this year
• Integration of the new features
• Scalability tests
• Instrumentation and other hardware integration
• Logging on demand (programs)
• Definition of things to be logged: machine protection, timing
• BPM concentrator
• Converter modules
• Deployment of new hardware for HWC and beyond

19. Conclusions
• System already operational
• Clarification of responsibilities needed: CO / OP / specialists
• HWC is the main user
• Big redesign in progress (hardware and software)
• Will need a lot of attention from operations:
• in terms of performance (risk of private logging)
• in terms of amount of data (already 1 TB of data for the HWC of one sector)
• in terms of data accessibility
