
GRID development in Russia: 1) Networking for science and higher education 2) Grid for HEP 3) Digital Divide. V. Ilyin, SINP MSU



Presentation Transcript


  1. GRID development in Russia 1) Networking for science and higher education 2) Grid for HEP 3) Digital Divide V. Ilyin SINP MSU

  2. RUNNet (Russian UNiversity Network)
  Link to NORDUNet (then to GEANT): 10 (to 40) Gbps
  Backbone (Moscow, St-Petersburg, Moscow-St-Petersburg): 2004 — 1 Gbps, 2009 — 10 Gbps
  DIGITAL DIVIDE. But:
  Moscow-Novosibirsk: O(100 Mbps)
  Moscow-Tambov: 18 Mbps
  Rostov-on-Don – Stavropol: 26 Mbps
  Rostov-on-Don – Krasnodar: 20 Mbps
  Moscow-Perm: 46 Mbps
  Moscow-Chelyabinsk: 40 Mbps
  Rostov-on-Don – Makhachkala: 4 Mbps
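To put these link speeds in perspective, here is a back-of-the-envelope sketch (an illustration, not from the slides; it assumes ideal sustained throughput and 1 TB = 10^12 bytes) of how long a 1 TB dataset would take to move over the rates quoted above:

```python
# Illustration only: assumes ideal sustained throughput, no protocol overhead.
def transfer_days(size_tb: float, link_mbps: float) -> float:
    """Days needed to move size_tb terabytes over a link of link_mbps megabits/s."""
    bits = size_tb * 1e12 * 8            # dataset size in bits
    return bits / (link_mbps * 1e6) / 86400

print(round(transfer_days(1, 18), 1))      # Moscow-Tambov at 18 Mbps: 5.1 days
print(round(transfer_days(1, 10_000), 3))  # 10 Gbps backbone: 0.009 days (~13 min)
```

At the regional rates, replicating even a modest LHC dataset takes days rather than minutes, which is exactly the digital divide the slide highlights.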

  3. RASnet — Russian Academy of Sciences Networking
  PoP GEANT in Moscow — now 10 + 2.5 Gbps
  Backbone in Moscow — 10 Gbps
  DIGITAL DIVIDE. But, from Moscow to:
  Khabarovsk: 40 Mbps
  Novosibirsk: 30 Mbps
  Ekaterinburg: 70 Mbps
  Nijniy Novgorod: 1 Mbps
  Kazan: 60 Mbps
  St-Petersburg: 20 Mbps

  4. RBNet — transport backbone networking for science and higher education in Russia
  Exchange point for science and education networks (Moscow-IX)
  GigaNAP (MoscowLight)
  Associated with Internet2/NLR networking in the USA
  GLORIAD — a fiber-optic ring around the Earth (Russia, USA, China, the Netherlands, Canada, S. Korea)

  5. Use of Dark Fiber in NREN Backbones, 2005-2008: greater or complete reliance on dark fiber. TERENA Compendium 2008: www.terena.org/activities/compendium/

  6. ICFA SCIC Network Monitoring Report http://icfa-scic.web.cern.ch/ICFA-SCIC/

  7. August 2009: RBNet (RIPN), RUNNet (Informica) and RASNet (SCC RAS), the three major networking providers for science and higher education in Russia, have established the National Association of Research and Educational e-Infrastructures, "e-ARENA", to join and coordinate the efforts of organizations and institutions in the e-infrastructure field. Its main goal is to bring networking and grid services for science and higher education in Russia to the level of Europe, the US, S. Korea, Taiwan... e-ARENA ~ DANTE ...

  8. LHC Computing Grid (LCG) – started in Sept 2003

  9. Russia Tier2 facilities in the Worldwide LHC Computing Grid:
  • a cluster of institutional computing centers;
  • the major centers (currently six: RRC KI, JINR, IHEP, ITEP, PNPI, SINP MSU) are of Tier2 level; others are joining: INR RAS, MEPhI, SpbGU, LPI, BINP;
  • each of the T2 sites operates for all four experiments: ALICE, ATLAS, CMS and LHCb;
  • the basic functions determine the main data flows: analysis of real data, MC generation, user data support.

  10. RuTier2 in the Worldwide Grid
  RuTier2 computing facilities are operated by the Russian Data-Intensive Grid (RDIG). We are creating the RDIG infrastructure as the Russian segment of the European grid infrastructure EGEE: http://www.egee-rdig.ru
  • RuTier2 sites (institutes) are RDIG-EGEE Resource Centers
  • Basic grid services (including VO management, RB/WLM, etc.) are provided by SINP MSU, RRC KI and JINR
  • Operational functions are provided by IHEP, ITEP, PNPI and JINR
  • The regional Certificate Authority and security are supported by RRC KI
  • User support (Call Center, link to GGUS at FZK): ITEP

  11. Transfer Rates during PhEDEx Load Tests
  RRC-KI: max 71 MB/s, average 50 MB/s
  SINP: max 101 MB/s, average 80 MB/s
  JINR: max 44.8 MB/s, average 35 MB/s
  ITEP: max 33.6 MB/s, average 25 MB/s

  12. RuTier2 in the Worldwide Grid
  RuTier2/RDIG (http://www.egee-rdig.ru) in 6++ sites: RRC KI, JINR, SINP MSU, ITEP, IHEP, PNPI (++ INR RAS, MEPhI, LPI, StPSU, BINP)
  CPU: 7000 KSI2K (~170 Tflops; 2010 -> ~200-250 Tflops)
  Disk: 2000 TB (2010 -> ~3 PB)
  ~10000s of jobs per day

  13. WLCG T2 resources accounting: CPU pledged, including efficiency, in May 2009
  CPU use including 60% efficiency, as in the TDR (KSI2K-hrs); 2009 pledge in KSI2K.

  Site        Pledge   ALICE    ATLAS    CMS      LHCb     Total
  JINR                 357816   175783   40544    29966    604109
  RRC-KI               218219   227197   81051    27280    553747
  SINP-MSU             67881    64597    51300    30260    214038
  ITEP                 103920   117634   77301    53019    351874
  PNPI                 115552   77901    84364    18098    295915
  IHEP                 123327   76248    70294    9357     279226
  INR RAS              41293    0        22656    3157     67106
  MEPhI                127580   519      0        0        128099
  SpbSU                65540    1025     0        3554     70119
  FIAN (LPI)           0        13742    0        0        13742
  RuTier2     6200     1221128  754646   427510   174691   2577975

  In May the 2009 pledge corresponded to 2,678,400 KSI2K*hrs; RuTier2 thus delivered 96% of the pledged capacity to the Experiments (even though the major centers, RRC KI and JINR, were updating their infrastructure).
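The 96% figure can be checked with a little arithmetic. A sketch, assuming (as the slide's numbers imply) that the monthly pledged capacity is the pledged CPU power times a 30-day month times the 60% TDR efficiency:

```python
# Sanity check of the slide-13 utilization figure.
# Assumption: pledged monthly capacity = pledge (KSI2K) x 720 h x 0.6 efficiency,
# which reproduces the 2,678,400 KSI2K*hrs quoted on the slide.
pledge_ksi2k = 6200
hours = 30 * 24       # the quoted figure implies a 30-day month
efficiency = 0.60     # as in the LCG TDR

pledged_capacity = pledge_ksi2k * hours * efficiency  # KSI2K*hrs available in May
delivered = 2_577_975                                 # RuTier2 total from the table

print(int(pledged_capacity))                      # 2678400
print(round(100 * delivered / pledged_capacity))  # 96
```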

  14. WLCG Tier2 accounting by countries: CPU used in February 2009 vs. 2008 pledges. Russia

  15. Thus, the participation of Russian HEP institutes in the WLCG is very successful. The RDIG grid infrastructure is an effective regional system of resource centers operated in the interests of the LHC experiments. RDIG is ready to support the analysis of LHC data in the 2009-2010 running period. This is a good basis for further participation of Russian physicists in the LHC physics program, in real collaboration with the worldwide ATLAS, CMS, ALICE and LHCb teams. This experience also gives a solid basis for Russia's future activity in the analysis of FAIR and XFEL data, by constructing the corresponding computing (grid) systems. The experience gained by Russian teams in the RDIG/WLCG/EGEE development is now being used in the national project to construct a grid infrastructure for the nanoindustry.

  16. The EGEE project (in its last stage, EGEE-III) is finishing. In May 2010 a new European project will start, the European Grid Infrastructure (EGI), to build a sustainable grid infrastructure for European science. Its organizational model is new, modeled on European networking: GEANT (project/infrastructure) – DANTE (central operational body) – NRENs/Policy Board; correspondingly, EGI – EGI.eu (Amsterdam) – NGIs/Council. The EGEE (global) grid infrastructure will be the starting point for the EGI in 2010.
  Russia has participated in the EGI discussions and EGI design from the beginning in 2007 (membership in the EGI Design Study Policy Board). The e-ARENA association has been established, in particular, as a legal body for Russian participation in GEANT as the Russian NREN and in EGI as the Russian NGI.
  DIGITAL DIVIDE in progress!? But (!) in May 2009 all three major infrastructure consortia (GEANT3, EGI, PRACE) decided that only ELIGIBLE European countries can participate in the projects to be submitted to the EC FP7 infrastructure call in November 2010. Thus only these countries are now eligible partners in the development of the European networking, grid and supercomputing infrastructure!?
