VO-specific systems for the monitoring of the LHC computing activities on the GRID Julia Andreeva, CERN (IT/GS) NEC09, September 2009, Varna, Bulgaria
Outline
• Monitoring from the VO perspective, motivation
• Overview of the existing VO-specific monitoring systems and their role in operating the WLCG infrastructure
• Experiment Dashboard as an example of a common application used by all LHC VOs
• High-level cross-VO view based on data from VO-specific systems
• Summary
NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Monitoring from the VO perspective: why is it so important?
• We are still accumulating practical experience in operating the Grid infrastructure
• We are not yet aware of all the problems that can affect the infrastructure as a whole or its individual components
• As a consequence, we do not yet have enough knowledge to build a perfect monitoring system that would raise an alarm in every critical situation, or better still, predict such a situation before it happens
• All this implies considerable involvement of the user community in the operations
• As current experience shows (CCRC08, STEP09 and beyond), the VO communities, in particular the people taking computing shifts, are as a rule the ones who detect problems first
• VO monitoring tools are therefore the main monitoring instrument for the moment; they aggregate and promptly incorporate new experience in operating the Grid infrastructure
NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Main areas covered by the VO monitoring tools
• Job processing (sharing and usage of the resources, performance, reasons for failures and the related problems with the Grid services or VO applications involved)
• Data transfer (throughput, efficiency, reasons for failures and the related problems with the Grid services involved)
• Overall status of the sites serving a given VO (site commissioning, computing shifts)
NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Variety of tools used by the LHC VOs
• ALICE: MonAlisa for job processing; MonAlisa and Experiment Dashboard for data transfer
• ATLAS: Panda and Experiment Dashboard for job processing; Experiment Dashboard for data transfer
• CMS: ProdMon and Experiment Dashboard for job processing; PhEDEx for data transfer
• LHCb: Dirac for both job processing and data transfer
• All experiments use SAM and Experiment Dashboard for monitoring the status of the sites and of the services at the sites, and SLS for monitoring services at Tier-0
NEC09, Varna, Julia Andreeva (CERN, IT/GS)
ALICE example The monitoring system of ALICE is based on MonAlisa: MonAlisa services at all ALICE sites provide site-level monitoring, and the MonAlisa repository provides a high-level view at the scope of the ALICE VO NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Monitoring of ATLAS DDM Monitoring of ATLAS DDM is implemented in the Dashboard framework. The information sources are the ATLAS DDM services at the sites. The data repository is implemented in an ORACLE backend located at CERN. Widely used by the ATLAS community: up to 1K unique visitors per month, and more than 100K pages viewed daily NEC09, Varna, Julia Andreeva (CERN, IT/GS)
CMS example Monitoring of the CMS transfers is coupled with the CMS data distribution system PhEDEx. It provides information about the transfer rate, the transfer quality, the status of the queue of transfer requests, etc. For CMS job monitoring see the next talk by Irina Sidorova NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Monitoring of the LHCb computing activities by Dirac In LHCb both data transfer and job monitoring are provided by Dirac NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Experiment Dashboard as an example of a system used by the 4 LHC VOs • Experiment Dashboard is in production for the 4 LHC VOs • Widely used by the experiments for their everyday work (3K unique visitors (unique IP addresses) of the CMS production server in August 2009) • Covers the full range of the LHC computing activities • Works transparently across various Grid infrastructures • Developed as a result of the joint effort of the Dashboard team, developers in the LHC experiments and in other monitoring projects, in collaboration with institutes from Taiwan, Russia, France and Great Britain NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Collaboration with JINR and other Russian institutions Russia is actively participating in the WLCG monitoring activity, in particular contributing to the Dashboard project. On the Russian side this work is coordinated by Vladimir Korenkov. Strong contribution from JINR: Irina Sidorova, Elena Tikhonenko, Sergey Belov, Sergey Mitsyn, Alexander Uzhinskiy, Andrey Nechaevskiy. Among our JINR colleagues there are many young developers who recently graduated from Dubna University. We very much hope that this collaboration will continue NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Experiment Dashboard applications Generic applications: • Job Monitoring • Task monitoring for the analysis users • Site availability based on SAM tests • Site Status Board VO-specific applications: • ALICE Data Transfer Monitoring • ATLAS Data Management Monitoring • ATLAS Production Monitoring • Central Repository for Production Monitoring Data for CMS NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Development principles Do not develop and deploy new sensors unless nothing is already in place for a given purpose Where possible use common solutions (technology and implementation): all Dashboard applications, regardless of their functionality and information sources, are developed in the Dashboard framework Involve users in the development process NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Experiment Dashboard Framework [Architecture diagram: information sources feed the Dashboard data collecting agents, which write through a DB access layer (DAO) into the data storage and aggregation backend; the UI and a machine-readable format publisher read through the same DAO layer and serve other applications] The system is modular, which allows a flexible approach when implementing the needs of the customers NEC09, Varna, Julia Andreeva (CERN, IT/GS)
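As an illustration of this modular layout, below is a minimal Python sketch: a collecting agent pulls records from an information source, the DAO layer is the only component that talks to the database, and a publisher exposes aggregated data to other applications. All class and function names are invented for illustration and are not the actual Dashboard framework API.

```python
# Minimal sketch of the modular layout described above; all names are
# illustrative and not the actual Dashboard framework API.
import sqlite3


class Dao:
    """Data access layer: the only component that talks to the database."""

    def __init__(self, path=":memory:"):
        self.conn = sqlite3.connect(path)
        self.conn.execute("CREATE TABLE IF NOT EXISTS jobs (site TEXT, status TEXT)")

    def insert_job(self, site, status):
        self.conn.execute("INSERT INTO jobs VALUES (?, ?)", (site, status))
        self.conn.commit()

    def summary(self):
        return list(self.conn.execute(
            "SELECT site, status, COUNT(*) FROM jobs GROUP BY site, status"))


class CollectingAgent:
    """Collector: pulls records from an information source, stores them via the DAO."""

    def __init__(self, source, dao):
        self.source, self.dao = source, dao

    def run_once(self):
        for record in self.source():
            self.dao.insert_job(record["site"], record["status"])


def publish_machine_readable(dao):
    """Publisher: exposes aggregated data to other applications (plain dicts here)."""
    return [{"site": site, "status": status, "count": count}
            for site, status, count in dao.summary()]


if __name__ == "__main__":
    # A fake information source standing in for a Grid or VO service.
    fake_source = lambda: [{"site": "CERN-PROD", "status": "Done"},
                           {"site": "CERN-PROD", "status": "Failed"}]
    dao = Dao()
    CollectingAgent(fake_source, dao).run_once()
    print(publish_machine_readable(dao))
```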
Experiment Dashboard Framework (Examples) [Same architecture diagram] For ATLAS Data Management Monitoring all components of the framework are implemented NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Experiment Dashboard Framework (Examples) [Same architecture diagram] For CMS Production Monitoring the Dashboard is used to store, aggregate and archive the data and to publish it in XML format, while the UI is developed by the CMS Production Team NEC09, Varna, Julia Andreeva (CERN, IT/GS)
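As a sketch of this "Dashboard stores and publishes, the experiment builds its own UI" pattern, the example below serializes aggregated production records to XML. The element and field names are invented for illustration and do not correspond to the real CMS schema.

```python
# Hypothetical serialisation of aggregated production records to XML;
# the element names below are invented, not the real schema.
import xml.etree.ElementTree as ET


def to_xml(records):
    root = ET.Element("production_summary")
    for rec in records:
        task = ET.SubElement(root, "task", name=rec["task"])
        ET.SubElement(task, "site").text = rec["site"]
        ET.SubElement(task, "events").text = str(rec["events"])
        ET.SubElement(task, "status").text = rec["status"]
    return ET.tostring(root, encoding="unicode")


print(to_xml([{"task": "MC-2009-A", "site": "T1_DE_KIT",
               "events": 120000, "status": "Done"}]))
```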
Experiment Dashboard Framework (Examples) [Same architecture diagram] For the new SAM portal the information is not imported into the Dashboard DB: some additional tables are created in the SAM DB and the availability calculations are implemented inside the ORACLE SAM instance. The Dashboard is used only to create the monitoring display and to publish the data in machine-readable format NEC09, Varna, Julia Andreeva (CERN, IT/GS)
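A minimal sketch of this "display and publish only" pattern, assuming a hypothetical precomputed availability table; sqlite3 stands in for the ORACLE SAM instance, and the table and column names are assumptions.

```python
# Availability is precomputed inside the SAM database; the Dashboard side
# only reads it and republishes it in machine-readable form.
# Table and column names are assumptions; sqlite3 stands in for Oracle.
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE site_availability (site TEXT, day TEXT, availability REAL)")
conn.execute("INSERT INTO site_availability VALUES ('CERN-PROD', '2009-09-01', 0.97)")

rows = conn.execute(
    "SELECT site, day, availability FROM site_availability ORDER BY day").fetchall()

# Machine-readable output for other applications (JSON here as an example).
print(json.dumps([{"site": s, "day": d, "availability": a} for s, d, a in rows]))
```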
Making users take part in the development Monitoring applications are successful when they are developed in close collaboration with the user community. Good examples are the Site Status Board and the Dashboard Site Availability application based on SAM tests • Over the last year the CMS experiment put a lot of effort into the site commissioning activity • Monitoring is a vital component of this process • New applications were developed in close collaboration between the Dashboard team and the members of the CMS community involved in the site commissioning activity • Initially developed for CMS, the Dashboard Site Availability application was then requested by the other LHC VOs and is now in production for all 4 LHC VOs • The same holds for the Site Status Board: developed for CMS, it was later requested by ALICE and LHCb [Dashboard plots demonstrating the improvement of the quality of the sites used by CMS] NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Making users take part in the development Site Status Board • It is the users (the people taking part in computing shifts and in the site commissioning activity) who define the set of columns and their content: which metrics are considered for the overall status of the site, what the validity interval for a given metric is, which columns are shown in the UI by default, alternative views, etc. (a hypothetical configuration sketch follows below) • The Dashboard provides a framework to be filled in with the customized information • Historical information as well as straightforward navigation to the primary information source is available NEC09, Varna, Julia Andreeva (CERN, IT/GS)
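To make the idea of user-defined columns concrete, here is a hypothetical configuration sketch; the structure, field names and URLs are invented for illustration and are not the real Site Status Board configuration format.

```python
# Hypothetical illustration of shifter-defined Site Status Board columns:
# which metrics exist, how long a value stays valid, what is shown by default,
# and which metrics count towards the overall site status.
SITE_STATUS_BOARD_COLUMNS = [
    {"name": "SAM availability",
     "source_url": "https://example.cern.ch/sam/latest",        # hypothetical URL
     "validity_hours": 24,           # values older than this are greyed out
     "show_by_default": True,
     "counts_for_overall_status": True},
    {"name": "Transfer quality",
     "source_url": "https://example.cern.ch/transfers/quality",  # hypothetical URL
     "validity_hours": 6,
     "show_by_default": True,
     "counts_for_overall_status": True},
    {"name": "Downtime notes",
     "source_url": "https://example.cern.ch/downtimes",          # hypothetical URL
     "validity_hours": 168,
     "show_by_default": False,
     "counts_for_overall_status": False},
]


def overall_status(metric_values, columns=SITE_STATUS_BOARD_COLUMNS):
    """Site is 'ok' only if every status-defining metric is ok."""
    relevant = [c["name"] for c in columns if c["counts_for_overall_status"]]
    return "ok" if all(metric_values.get(n) == "ok" for n in relevant) else "degraded"


print(overall_status({"SAM availability": "ok", "Transfer quality": "ok"}))
```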
High-level cross-VO view The VO-specific monitoring systems work in the scope of a single experiment Non-expert users, or users external to a given VO, do not know how to find the required information It is difficult, if possible at all, to compare and correlate information of different VOs: a global cross-VO view is missing Recent development aims to solve this problem The systems providing the high-level view are being designed; they are based on integration of the experiment-specific monitoring systems, the Dashboard framework and the GridMap visualization system NEC09, Varna, Julia Andreeva (CERN, IT/GS)
GridMap visualization system • The GridMap visualization tool was developed in the context of the CERN Openlab collaboration between CERN and the EDS company • The main motivation for the GridMap development is to provide a high-level view of the monitoring data collected from the distributed infrastructure in an intuitive and useful way • The requirements for visualizing a distributed hierarchical infrastructure are a perfect match for GridMap visualization NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Use cases for GridMap • Multiple use cases have been defined for GridMap: • GridMap for experiment workflows • GridMap for the status of the services defined as critical by the LHC VOs • GridMap for the Site Status Board • Analyzing the results of CCRC08 and STEP09, one of the main conclusions was that sites are somewhat disoriented regarding monitoring: too many monitoring tools. Which ones to use? Which ones to trust? How to understand whether the VOs served by the site are happy with the site performance? • The Siteview application aims to provide an estimation of the site performance from the VO perspective • http://dashb-siteview.cern.ch/gridmap-vo-siteview/ NEC09, Varna, Julia Andreeva (CERN, IT/GS)
High-level monitoring system for sites serving LHC VOs [Grid Map for a particular site, with panels per VO: ATLAS, ALICE, CMS, LHCb] • Central repository for common metrics (transfer rate, parallel jobs, success rate, etc.) • Common metrics distributions by time NEC09, Varna, Julia Andreeva (CERN, IT/GS)
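A sketch of what a "common metric" record and its distribution by time could look like; the field names are an assumption about what such a central repository holds, not its actual schema.

```python
# Hypothetical cross-VO "common metric" records and a simple time-binned
# aggregation; field names are an assumption, not the real repository schema.
from collections import defaultdict

records = [
    {"vo": "ATLAS", "site": "CERN-PROD", "metric": "transfer_rate_MBps",
     "timestamp": "2009-09-07T10:00", "value": 250.0},
    {"vo": "CMS", "site": "CERN-PROD", "metric": "transfer_rate_MBps",
     "timestamp": "2009-09-07T10:00", "value": 180.0},
    {"vo": "ATLAS", "site": "CERN-PROD", "metric": "transfer_rate_MBps",
     "timestamp": "2009-09-07T11:00", "value": 300.0},
]


def distribution_by_time(records, metric):
    """Sum one common metric over all VOs per time bin (hourly bins here)."""
    series = defaultdict(float)
    for r in records:
        if r["metric"] == metric:
            series[r["timestamp"]] += r["value"]
    return dict(sorted(series.items()))


print(distribution_by_time(records, "transfer_rate_MBps"))
```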
Siteview (1/4) The map is split into 4 groups: • Overall status of the site from the VO perspective • Job processing activity • Incoming data transfer • Outgoing data transfer • The size of a cell is defined by the scale of the given activity, the colour by the success rate NEC09, Varna, Julia Andreeva (CERN, IT/GS)
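The cell rule above can be illustrated with a small sketch: area proportional to the scale of the activity, colour driven by the success rate. The thresholds and the layout are illustrative only, not the actual GridMap algorithm.

```python
# Illustrative cell rule: area fraction from the scale of the activity,
# colour from the success rate.  Thresholds are invented for this sketch.
def cell(activity_volume, total_volume, success_rate):
    area_fraction = activity_volume / total_volume if total_volume else 0.0
    if success_rate >= 0.90:
        colour = "green"
    elif success_rate >= 0.70:
        colour = "orange"
    else:
        colour = "red"
    return {"area_fraction": round(area_fraction, 3), "colour": colour}


# e.g. job processing dominates the map and is healthy, outgoing transfers struggle
print(cell(activity_volume=8000, total_volume=10000, success_rate=0.95))
print(cell(activity_volume=500, total_volume=10000, success_rate=0.60))
```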
Siteview (2/4) Assists users in navigating to the primary information source NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Siteview (3/4) • Click to get more information about failures NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Siteview (4/4) • Click to get to the primary information source NEC09, Varna, Julia Andreeva (CERN, IT/GS)
Integration with Google Earth The experiment-specific monitoring systems provide the input data Dashboard agents publish this information in the KML format Strong contribution to the development from Sergey Mitsyn (JINR) The application will be shown during the LHC demo at the EGEE conference in Barcelona NEC09, Varna, Julia Andreeva (CERN, IT/GS)
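Below is a minimal sketch of publishing site monitoring data as KML for Google Earth; the placemark fields, values and coordinates are illustrative only, not the actual Dashboard agent output.

```python
# Minimal sketch of turning site monitoring data into KML placemarks;
# the fields and the example coordinates are illustrative only.
import xml.etree.ElementTree as ET

KML_NS = "http://www.opengis.net/kml/2.2"


def sites_to_kml(sites):
    ET.register_namespace("", KML_NS)
    kml = ET.Element(f"{{{KML_NS}}}kml")
    doc = ET.SubElement(kml, f"{{{KML_NS}}}Document")
    for s in sites:
        pm = ET.SubElement(doc, f"{{{KML_NS}}}Placemark")
        ET.SubElement(pm, f"{{{KML_NS}}}name").text = s["name"]
        ET.SubElement(pm, f"{{{KML_NS}}}description").text = (
            f"running jobs: {s['running_jobs']}, success rate: {s['success_rate']:.0%}")
        point = ET.SubElement(pm, f"{{{KML_NS}}}Point")
        ET.SubElement(point, f"{{{KML_NS}}}coordinates").text = f"{s['lon']},{s['lat']},0"
    return ET.tostring(kml, encoding="unicode")


print(sites_to_kml([{"name": "CERN", "lat": 46.23, "lon": 6.05,
                     "running_jobs": 1200, "success_rate": 0.94}]))
```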
Summary • Practical experience in operating the Grid infrastructure and in its use by the LHC community (in particular during CCRC08 and STEP09) proved that the VO-specific monitoring systems are a vital part of the operations and are currently the main source of monitoring information • Despite the wide range of VO-specific monitoring tools in place, we were still missing a high-level view of the computing activities of the LHC experiments taken together, both at the global and at the site level • These issues are being addressed in the current development • The Siteview application is being developed and is being evaluated by the LHC community NEC09, Varna, Julia Andreeva (CERN, IT/GS)