
CMS Tier 1 at JINR
V.V. Korenkov, for the JINR CMS Tier-1 Team, JINR
XXIV International Symposium on Nuclear Electronics & Computing, NEC2013. September 13, 2013







  1. CMS Tier 1 at JINR. V.V. Korenkov, for the JINR CMS Tier-1 Team, JINR. XXIV International Symposium on Nuclear Electronics & Computing, NEC2013. September 13, 2013

  2. Outline • CMS Grid structure • role of Tier-1s • CMS Tier-1s • CMS Tier-1 in Dubna • History and Motivations (Why Dubna?) • Network infrastructure • Infrastructure and Resources • Services and Readiness • Staffing • Milestones • Conclusions

  3. CMS Grid Structure

  4. Tier Structure of Grid Distributed Computing: Tier-0/Tier-1/Tier-2. Tier-0 (CERN): • accepts data from the CMS Online Data Acquisition and Trigger System • archives RAW data • performs the first pass of reconstruction and Prompt Calibration • distributes data to the Tier-1s. Tier-1 (11 centers): • receives data from the Tier-0 • data processing (re-reconstruction, skimming, calibration, etc.) • distributes data and MC to the other Tier-1s and to Tier-2s • secure storage and redistribution of data and MC. Tier-2 (>200 centers): • simulation • user physics analysis. Ian.Bird@cern.ch
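The tier roles above can be sketched as a simple data-flow model. This is an illustrative toy, not CMS software; the class and method names are hypothetical and only mirror the responsibilities listed on the slide (Tier-0 archives and distributes, Tier-1 provides custodial storage and serves Tier-2s, Tier-2 holds data for analysis).

```python
# Toy model of the WLCG tier hierarchy described above.
# All names are illustrative; real data flow uses PhEDEx/FTS services.

class Tier2:
    """Simulation and user physics analysis."""
    def __init__(self):
        self.local = []
    def receive(self, dataset):
        self.local.append(dataset)        # data available for analysis

class Tier1:
    """Secure (custodial) storage, reprocessing, redistribution."""
    def __init__(self, name, tier2s):
        self.name, self.tier2s, self.store = name, tier2s, []
    def receive(self, dataset):
        self.store.append(dataset)        # custodial copy
        for t2 in self.tier2s:            # redistribute downstream
            t2.receive(dataset)

class Tier0:
    """Accepts data from online DAQ, archives RAW, first-pass reco."""
    def __init__(self, tier1s):
        self.tier1s = tier1s
    def ingest(self, event):
        archived = ("archived-raw", event)    # RAW goes to tape at CERN
        reco = ("prompt-reco", event)         # first-pass reconstruction
        for t1 in self.tier1s:                # distribute to Tier-1s
            t1.receive(reco)
        return archived

t2 = Tier2()
t1 = Tier1("JINR", [t2])
t0 = Tier0([t1])
t0.ingest("run-1")
print(t1.store)   # [('prompt-reco', 'run-1')]
print(t2.local)   # [('prompt-reco', 'run-1')]
```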

  5. CMS Tier-1 in Dubna

  6. Tier-1 center. The Federal Target Programme project: «Creation of the automated system of data processing for experiments at the LHC of Tier-1 level and maintenance of Grid services for a distributed analysis of these data». Duration: 2011-2013. • March 2011 - proposal to create the LCG Tier-1 center in Russia (official letter by the Minister of Science and Education of Russia A. Fursenko sent to CERN DG R. Heuer): NRC KI for ALICE, ATLAS and LHCb; JINR (Dubna) for the CMS experiment. • September 2012 - the proposal was reviewed by the WLCG OB, and the JINR and NRC KI Tier-1 sites were accepted as new “Associate Tier-1” sites. Full resources are due in 2014, to meet the start of the next working LHC session.

  7. Why in Russia? Why Dubna?

  8. Within the framework of the RDIG project (a participant in the WLCG/EGEE projects), a grid infrastructure accepted by the LHC experiments has been successfully launched as the distributed cluster RuTier2 (Russian Tier-2); the JINR cluster JINR-LCG2 is the main one in RDIG in terms of performance. JINR-LCG2 accounted for ~40% of the CPU time in RDIG for 2011-2013.

  9. JINR Central Information and Computing Complex (CICC). Local JINR users (no grid): jobs run by JINR laboratories and experiments executed at the CICC, January - September 2013. Grid users (WLCG): JINR-LCG2 normalised CPU time by LHC VOs, January - September 2013. More than 3 million jobs run; total normalised CPU time - 20 346 183 kSI2K-hours. http://lit.jinr.ru/view.php?var1=comp&var2=ccic&lang=rus&menu=ccic/menu&file=ccic/statistic/stat-2013 http://accounting.egi.eu/
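A quick back-of-envelope check of the CICC figures quoted above: dividing the total normalised CPU time by the "more than 3 million" jobs (taking 3 million as a lower bound) gives a mean cost of roughly 6.8 kSI2K-hours per job.

```python
# Sanity check of the slide's CICC accounting numbers.
total_ksi2k_hours = 20_346_183   # total normalised CPU time (kSI2K-hours)
jobs = 3_000_000                 # lower bound: "more than 3 million jobs"

per_job = total_ksi2k_hours / jobs
print(f"~{per_job:.1f} kSI2K-hours per job")   # ~6.8 kSI2K-hours per job
```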

  10. CMS Computing at JINR • the first RDMS CMS web-server (in 1996) • full-scale CMS software infrastructure support since 1997 • the JINR CMS Tier-2 center is one of the most reliable and productive CMS Tier-2 centers worldwide (in the top ten) and the most powerful RDMS CMS Tier-2 center • a CMS Regional Operation Center has been functioning at JINR since 2009. The core services needed for a WLCG Tier-1 are a computing service, a storage service and an information service. The primary Tier-1 tasks can be divided into: • recording raw data from CERN and storing it on tape • recording processed data from CERN and storing it on disk • providing data to other Tier-1s / Tier-2s • reprocessing raw data • event simulation.

  11. Russia: normalized CPU time per site and VO (2012-2013). All VOs: Russia - 409,249,900; JINR - 183,008,044. CMS: Russia - 112,025,416; JINR - 67,938,700 (61%).

  12. Network infrastructure

  13. The Core of LHC Networking: LHCOPN and Partners

  14. Infrastructure and Facilities

  15. JINR CMS Tier-1 progress • Disk & server installation and tests: done • Tape system installation: done • Organization of network infrastructure and connectivity to CERN via GEANT: done • Registration in GOC DB and APEL: done • Tests of WLCG services via Nagios: done

  16. JINR monitoring. Network monitoring information system: more than 423 network nodes are under round-the-clock monitoring.
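The essence of such round-the-clock node monitoring can be sketched as a periodic reachability probe. This is a minimal illustration assuming a plain TCP connect check; the actual JINR monitoring system is far richer (SNMP, traffic accounting, alerting) and its interfaces are not described on the slide.

```python
# Minimal sketch (not the JINR system): probe whether a node accepts
# TCP connections on a given port within a timeout.
import socket

def is_reachable(host: str, port: int = 22, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:          # refused, timed out, or unresolvable
        return False

# A monitoring loop would call this for each of the ~423 nodes on a schedule.
```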

  17. Services and Readiness

  18. CMS-specific activity. Currently commissioning the Tier-1 resource for CMS: • local tests of CMS VO-services and CMS software • the PhEDEx LoadTest (tests of data-transfer links) • Job Robot tests (or tests via HammerCloud) • long-running CPU-intensive jobs • long-running I/O-intensive jobs. PhEDEx transferred RAW input data to our storage element with a transfer efficiency of around 90%. Services and data storage were prepared for the reprocessing of the 2012 8 TeV data.
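Transfer efficiency in link tests of this kind is simply the ratio of successful to attempted transfers. A minimal sketch, with hypothetical counts chosen to illustrate the ~90% figure quoted above (the real numbers come from PhEDEx monitoring, not from this slide):

```python
# Illustrative computation of a data-transfer link efficiency.
def transfer_efficiency(succeeded: int, failed: int) -> float:
    """Fraction of attempted transfers that completed successfully."""
    attempted = succeeded + failed
    return succeeded / attempted if attempted else 0.0

# Hypothetical LoadTest tallies, not measured values:
print(f"{transfer_efficiency(900, 100):.0%}")   # 90%
```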

  19. CMS Tier-1 Readiness

  20. Data transfer link to CERN. CMS Tier-1 in Dashboard

  21. Frameworks for Grid cooperation of JINR • Worldwide LHC Computing Grid (WLCG) • Enabling Grids for E-sciencE (EGEE), now EGI-InSPIRE • RDIG development • CERN-RFBR project “Grid Monitoring from VO perspective” • BMBF grant “Development of the grid-infrastructure and tools to provide joint investigations performed with participation of JINR and German research centers” • “Development of grid segment for the LHC experiments”, supported within the JINR-South Africa cooperation agreement • Development of a grid segment at Cairo University and its integration into the JINR GridEdu infrastructure • JINR - FZU AS Czech Republic project “The grid for the physics experiments” • NASU-RFBR project “Development and support of LIT JINR and NSC KIPT grid-infrastructures for distributed CMS data processing of the LHC operation” • JINR-Romania cooperation: Hulubei-Meshcheryakov programme • JINR-Moldova cooperation (MD-GRID, RENAM) • JINR-Mongolia cooperation (Mongol-Grid)

  22. Staffing Korenkov V. Mitsyn V. Dolbilov A. Trofimov V. Shmatov S.

  23. Milestones

  24. Milestones of the JINR CMS Tier-1 Deployment and Commissioning

  25. Main tasks for the coming years • Engineering infrastructure (uninterrupted power supply and climate control) • High-speed, reliable network infrastructure with a dedicated reserved channel to CERN (LHCOPN) • Computing and storage systems based on disk arrays and high-capacity tape libraries • 100% reliability and availability

  26. Tier-1 sites (map, 26 June 2009): CERN, US-BNL, US-FNAL, Ca-TRIUMF, Bologna/CNAF, Taipei/ASGC, NDGF, UK-RAL, Amsterdam/NIKHEF-SARA, De-FZK, Barcelona/PIC, Lyon/CCIN2P3; in Russia: NRC KI and JINR.

  27. The 6th International Conference "Distributed Computing and Grid-technologies in Science and Education" (GRID’2014), Dubna, 30 June - 5 July 2014. The GRID’2012 conference gathered 256 participants from 22 countries, including 40 universities and institutes from Russia, with 31 plenary and 89 section talks.

  28. Conclusions • In 2012-2013 a CMS Tier-1 prototype was created in Dubna • Disk & server installation and tests • Prototype tape system installation and tests • Organization of network infrastructure and connectivity to CERN via GEANT • Registration in GOC DB and APEL • Tests of WLCG services via Nagios • CMS-specific tests • Commissioning of data-transfer links (T0-T1, T1-T1, T1-T2) in progress • We expect to meet the start of the next LHC run with the full required resources (by the end of 2014)
