
Tier 1 in Dubna for CMS: plans and prospects. Korenkov Vladimir, LIT JINR


Presentation Transcript


  1. Tier 1 in Dubna for CMS: plans and prospects. Korenkov Vladimir, LIT JINR. AIS-GRID School 2013, April 25

  2. Tier 0 at CERN: Acquisition, First-pass reconstruction, Storage & Distribution. 1.25 GB/sec (ions). Ian.Bird@cern.ch
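
At the quoted rate, a quick back-of-the-envelope check (a Python sketch, not part of the original slide) puts the sustained data volume at roughly 108 TB per day of ion running:

# Back-of-the-envelope: sustained Tier-0 export rate -> daily data volume
rate_gb_per_s = 1.25              # peak rate quoted for ion running
seconds_per_day = 24 * 60 * 60
daily_tb = rate_gb_per_s * seconds_per_day / 1000.0
print(f"~{daily_tb:.0f} TB/day at a sustained {rate_gb_per_s} GB/s")  # ~108 TB/day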

  3. Tier Structure of Grid Distributed Computing: Tier-0/Tier-1/Tier-2. Tier-0 (CERN): • accepts data from the CMS Online Data Acquisition and Trigger System • archives RAW data • performs the first pass of reconstruction and Prompt Calibration • distributes data to Tier-1. Tier-1 (11 centers): • receives data from Tier-0 • data processing (re-reconstruction, skimming, calibration, etc.) • distributes data and MC to the other Tier-1 and Tier-2 centers • secure storage and redistribution of data and MC. Tier-2 (>200 centers): • simulation • user physics analysis. Ian.Bird@cern.ch
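
The division of responsibilities above can be summarized in a small illustrative sketch (Python; the tier names and task lists come from the slide, the code itself is only a summary aid, not CMS software):

# Illustrative summary of the CMS tier responsibilities described above.
TIERS = {
    "Tier-0 (CERN)": ["accept data from DAQ/Trigger", "archive RAW",
                      "first-pass reconstruction", "prompt calibration",
                      "distribute data to Tier-1"],
    "Tier-1 (11 centers)": ["re-reconstruction", "skimming", "calibration",
                            "secure storage of data and MC",
                            "redistribute to other Tier-1 and Tier-2"],
    "Tier-2 (>200 centers)": ["Monte Carlo simulation", "user physics analysis"],
}

for tier, tasks in TIERS.items():
    print(f"{tier}: " + ", ".join(tasks))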

  4. Wigner Data Centre, Budapest • New facility due to be ready at the end of 2012 • 1100 m² (725 m²) in an existing building, but new infrastructure • 2 independent HV lines • Full UPS and diesel coverage for all IT load (and cooling) • Maximum 2.7 MW. Slide from I. Bird (CERN, WLCG) presentation at GRID2012 in Dubna

  5. WLCG Grid Sites (Tier 0, Tier 1, Tier 2) • Today >150 sites • >300k CPU cores • >250 PB disk

  6. Russian Data Intensive Grid infrastructure (RDIG). The Russian consortium RDIG (Russian Data Intensive Grid) was set up in September 2003 as a national federation in the EGEE project. Now the RDIG infrastructure comprises 17 Resource Centres with >20,000 kSI2K of CPU and >4,500 TB of disk storage. RDIG Resource Centres: ITEP, JINR-LCG2 (Dubna), RRC-KI, RU-Moscow-KIAM, RU-Phys-SPbSU, RU-Protvino-IHEP, RU-SPbSU, Ru-Troitsk-INR, ru-IMPB-LCG2, ru-Moscow-FIAN, ru-Moscow-MEPHI, ru-PNPI-LCG2 (Gatchina), ru-Moscow-SINP, Kharkov-KIPT (UA), BY-NCPHEP (Minsk), UA-KNU

  7. Country Normalized CPU time (2012-2013). Normalized CPU time: all countries - 19,416,532,244, Russia - 410,317,672 (2.12%). Jobs: all countries - 726,441,731, Russia - 23,541,182 (3.24%)
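
The quoted percentages follow directly from the slide's own totals; a quick check (Python, not part of the original):

# Recompute the Russian shares from the totals quoted on the slide.
total_cpu, russia_cpu = 19_416_532_244, 410_317_672
total_jobs, russia_jobs = 726_441_731, 23_541_182
print(f"CPU share: {100 * russia_cpu / total_cpu:.2f}%")    # ~2.11%
print(f"Job share: {100 * russia_jobs / total_jobs:.2f}%")  # ~3.24%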

  8. Country Normalized CPU time per VO (2012-2013)

  9. Russia Normalized CPU time per site and VO (2012-2013). All VOs: Russia - 409,249,900, JINR - 183,008,044. CMS: Russia - 112,025,416, JINR - 67,938,700 (61%)

  10. Frames for Grid cooperation with CERN • 2001: EU DataGrid • Worldwide LHC Computing Grid (WLCG) • 2004: Enabling Grids for E-sciencE (EGEE) • EGI-InSPIRE • CERN-RFBR project "Grid Monitoring from VO perspective" • Collaboration in the area of WLCG monitoring. WLCG today includes more than 170 computing centers where more than 2 million jobs are executed daily and petabytes of data are transferred between sites. Monitoring of the LHC computing activities and of the health and performance of the distributed sites and services is a vital condition for the success of LHC data processing. • WLCG Transfer Dashboard • Monitoring of the XRootD federations • WLCG Google Earth Dashboard • Tier3 monitoring toolkit
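
As an illustration of the kind of monitoring consumption mentioned above, here is a minimal polling sketch; the URL and JSON layout are placeholders, not the real WLCG Dashboard API:

# Hypothetical sketch: poll a monitoring endpoint for per-site running-job counts.
# The URL and the JSON structure are placeholders, not the actual Dashboard interface.
import json
import urllib.request

MONITORING_URL = "https://monitoring.example.org/api/jobs"  # placeholder

def running_jobs_by_site(url: str) -> dict:
    """Fetch a JSON document shaped like {"sites": [{"name": ..., "running": ...}]}."""
    with urllib.request.urlopen(url, timeout=30) as resp:
        data = json.load(resp)
    return {s["name"]: s["running"] for s in data.get("sites", [])}

for site, running in sorted(running_jobs_by_site(MONITORING_URL).items()):
    print(f"{site:20s} {running:8d} running jobs")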

  11. JINR-LCG2 Tier2 site • Provides the largest share of the Russian Data Intensive Grid (RDIG) contribution to the global WLCG/EGEE/EGI Grid infrastructure: JINR secured 46% of the overall RDIG computing time contributed to the solution of LHC tasks • During 2012, CICC ran more than 7.4 million jobs, with the overall CPU time spent exceeding 152 million hours (in HEPSpec06 units) • Presently, the CICC computing cluster comprises 2582 64-bit processors and a data storage system of 1800 TB total capacity.

  12. WLCG Tier1 center in Russia. Proposal to create an LCG Tier1 center in Russia (an official letter by the Minister of Science and Education of Russia A. Fursenko was sent to CERN DG R. Heuer in March 2011); the corresponding item was put on the agenda of the next Russia-CERN 5x5 meeting (October 2011): - for all four experiments ALICE, ATLAS, CMS and LHCb - ~10% of the summary Tier1 resources (without CERN) - increase by 30% each year - draft planning (proposal under discussion): a prototype by the end of 2012 and full resources in 2014, to meet the start of the next LHC run. Discussion about a distributed Tier1 in Russia for LHC and FAIR

  13. Joint NRC "Kurchatov Institute" - JINR Tier1 Computing Centre. Project: «Creation of the automated system of data processing for experiments at the Large Hadron Collider (LHC) of Tier1 level and maintenance of Grid services for a distributed analysis of these data». Terms: 2011-2013. Type of project: R&D. Cost: federal budget - 280 million rubles (~8.5 MCHF), extrabudgetary sources - 50% of the total cost. Leading executor: NRC KI «Kurchatov Institute» for ALICE, ATLAS and LHCb. Co-executor: LIT JINR (Dubna) for the CMS experiment. Project goal: creation in Russia of a computer-based system for processing experimental data received at the LHC and provision of Grid services for a subsequent analysis of these data at the distributed centers of the LHC global Grid system. Core of the proposal: development and creation of a working prototype of a first-level center for data processing within the LHC experiments, with a resource volume of not less than 15% of the required one and a full set of Grid services for a subsequent distributed analysis of these data.

  14. The Core of LHC Networking: LHCOPN and Partners

  15. JINR CMS Tier-1 progress. Disk & server installation and tests: done. Tape system installation: done. Organization of network infrastructure and connectivity to CERN via GEANT: done. Registration in GOC DB and APEL: done. Tests of WLCG services via Nagios: done
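
The Nagios-based service tests referred to above are the standard WLCG probes; purely as an illustration of the simplest layer of such checks, here is a hedged sketch of a TCP reachability test (hostnames are placeholders; 2811 and 8443 are the conventional GridFTP and SRM/CE ports):

# Minimal TCP reachability check for grid service endpoints (illustrative only;
# real WLCG commissioning relies on the standard Nagios/SAM probe framework).
import socket

ENDPOINTS = {                                   # hostnames are placeholders
    "CE (CREAM)":        ("ce.example-t1.ru", 8443),
    "SE (GridFTP door)": ("se.example-t1.ru", 2811),
    "SRM endpoint":      ("srm.example-t1.ru", 8443),
}

def is_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for name, (host, port) in ENDPOINTS.items():
    print(f"{name:18s} {host}:{port}  {'ok' if is_reachable(host, port) else 'FAILED'}")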

  16. CMS-specific activity • Currently commissioning the Tier-1 resource for CMS: • Local tests of CMS VO-services and CMS SW • The PhEDEx LoadTest (tests of data transfer links) • Job Robot tests (or tests via HammerCloud) • Long-running CPU-intensive jobs • Long-running I/O-intensive jobs • PhEDEx transferred RAW input data to our storage element with a transfer efficiency of around 90% • Prepared services and data storage for the reprocessing of the 2012 8 TeV data
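
Transfer efficiency here is simply the fraction of successful transfer attempts; a short illustration with invented counts (not actual PhEDEx output):

# Transfer efficiency = successful transfers / attempted transfers.
# The counts below are made up for illustration; real figures come from PhEDEx monitoring.
def transfer_efficiency(done: int, failed: int) -> float:
    attempted = done + failed
    return 100.0 * done / attempted if attempted else 0.0

print(f"{transfer_efficiency(done=9000, failed=1000):.1f}%")  # 90.0%, the order quoted above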

  17. Services: Security (GSI), Computing Element (CE), Storage Element (SE), Monitoring and Accounting, Virtual Organizations (VOMS), Workload Management (WMS), Information Service (BDII), File Transfer Service (FTS + PhEDEx), SQUID server, CMS user services (Reconstruction Services, Analysis Services, etc.)
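
As an example of how one of these services is consumed, the BDII information service is an LDAP server (conventionally on port 2170 with base o=grid); below is a minimal query sketch using the ldap3 Python library, with a placeholder hostname:

# Minimal BDII query: list published service types and endpoints.
# The hostname is a placeholder; port 2170 and base "o=grid" are the usual BDII conventions.
from ldap3 import Server, Connection, ALL

server = Server("ldap://bdii.example.org:2170", get_info=ALL)  # placeholder host
conn = Connection(server, auto_bind=True)                      # anonymous bind

conn.search(
    search_base="o=grid",
    search_filter="(objectClass=GlueService)",
    attributes=["GlueServiceType", "GlueServiceEndpoint"],
)

for entry in conn.entries:
    print(entry.GlueServiceType, entry.GlueServiceEndpoint)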

  18. Milestones of the JINR CMS Tier-1 Deployment and Commissioning

  19. The Tier-1 sites (map, 26 June 2009): US-BNL, US-FNAL, Ca-TRIUMF, CERN, Bologna/CNAF, De-FZK, Barcelona/PIC, Lyon/CCIN2P3, Amsterdam/NIKHEF-SARA, NDGF, UK-RAL, Taipei/ASGC, and Russia: NRC KI and JINR

  20. Staffing: Korenkov V., Mitsyn V., Dolbilov A., Trofimov V., Shmatov S.
