Russia participation in EGEE: stable core infrastructure - new applications / new resources / new users
V.A. Ilyin, on behalf of the RDIG consortium
Russia-EU meeting at RRC “Kurchatov Institute”, 24 April 2007
RDIG – Russian Data Intensive Grid Consortium (EGEE-II partners): RRC KI, JINR, SINP MSU, IHEP, ITEP, PNPI, KIAM RAS, IMPB RAS
• RDIG is the regional federation in EGEE-II (SA1, NA2, NA3, NA4) and a fully functional segment of the global EGEE-II grid infrastructure
• RDIG now: more than 100 members, 15+ resource centres, 6 sciences served (and growing), ~2000 kSI2K of CPU and ~50 TByte of storage (growing toward 300+)
• RDIG Resource Centres (more joining):
– ITEP
– JINR-LCG2
– Kharkov-KIPT-LCG2
– RRC-KI
– RU-Moscow-KIAM-LCG2
– RU-Phys-SPbSU
– RU-Protvino-IHEP
– RU-SPbSU
– Ru-Troitsk-INR-LCG2
– ru-IMPB-LCG2
– ru-Moscow-FIAN-LCG2
– ru-Moscow-GCRAS-LCG2
– ru-Moscow-MEPHI-LCG2
– ru-PNPI-LCG2
– ru-Moscow-SINP-LCG2
Status page: http://grid.sinp.msu.ru/grid/roc/rc
RDIG ROC (ROC – regional operations centre)
– Grid services: SINP MSU
– Certificate Authority and security: RRC KI
– Monitoring and accounting: JINR, SINP MSU
– VO management and support: SINP MSU
– User support: ITEP, KIAM RAS
– Operational support of resource centres: PNPI, IHEP
EGEE-II grid operator on duty: 6 teams working in weekly rotation (CERN, France, Italy, UK, Russia, Taipei)
CIC-on-duty: http://egee.sinp.msu.ru
RDIG testing of new middleware
• Testing/adaptation of new gLite components (SA1 -> SA3, through the CERN-INTAS project): PNPI, JINR, IHEP, SINP MSU
• Testing of new MW components (NA4 ARDA):
– Metadata catalog, Fireman catalog, GridFTP, ... (JINR, SINP MSU)
– testing gLite for ATLAS and CMS (PNPI, SINP MSU)
• Evaluation of new MW:
– 2004: evaluation of GT3 (SINP MSU, JINR)
– 2005: evaluation of OMII (JINR, KIAM RAS)
– 2005: evaluation of GT4 (SINP MSU, JINR, KIAM RAS, together with CERN and the GT4 team)
VOs
RDIG supports 16 VOs
In 2006: ~500,000 jobs (about 50% non-HEP)
VOs
• Infrastructure VOs (all RCs):
– dteam
– ops
• Most RCs support the WLCG/EGEE VOs:
– ALICE
– ATLAS
– CMS
– LHCb
• Supported by some RCs:
– gear
– Biomed
– Fusion
• Regional VOs:
– Ams, eearth, photon, rdteam, rgstest, fusion_grid
– plus crystal apps, engineering apps, medical apps ...
ATLAS: statistics for production jobs (plots: jobs per user, jobs per site)
Production-mode grid usage: millions of jobs submitted by a few users across hundreds of sites
CMS: statistics for analysis jobs (plots: jobs per user)
Grid usage is still not chaotic ... physicists are sleeping ... until 2008 ...
RDIG - training, induction courses
PNPI, RRC KI, JINR, IHEP, ITEP, SINP MSU:
– Induction courses: ~600
– Courses for application developers: ~60
– Site administrator training: ~100
2007: a series of training sessions for physicists on using EGEE/RDIG in their daily work (with the LHC coming ...); to date, more than 100 physicists from LHC centres across Russia have received this training
RDIG web resources:
– RDIG portal: http://egee-rdig.ru
– RDIG ROC: http://grid.sinp.msu.ru
– RDIG CA: http://ca.grid.kiae.ru/RDIG
– RDIG monitoring: http://rocmon.jinr.ru:8080
– User support: http://ussup.itep.ru
– RDIG forum: http://www.gridclub.ru
Russia in World-wide LHC Computing Grid:
• a cluster of institutional computing centres;
• the major centres are at Tier2 level (RRC KI, JINR, IHEP, ITEP, PNPI, SINP MSU); the others are (or will become) Tier3s.
Each of the T2 sites operates for all four experiments - ALICE, ATLAS, CMS and LHCb. This model assumes partitioning/sharing of the facilities (disk/tape and CPU) between experiments.
• Basic functions: analysis of real data; MC generation; user data support; plus analysis of some portion of the RAW/ESD data for tuning and developing reconstruction algorithms and the corresponding software.
Thus, approximately equal partitioning of the storage: Real AOD ~ Real RAW/ESD ~ SimData
Russia in World-wide LHC Computing Grid
T1s for Russia:
– ALICE: FZK (Karlsruhe) to serve as the canonical T1 centre for the Russian T2 sites
– ATLAS: SARA (Amsterdam) to serve as the canonical T1 centre for the Russian T2 sites
– LHCb: CERN facilities to serve as the canonical T1 centre for the Russian T2 sites
– CMS: CERN as a CMS T1 centre for the purposes of receiving Monte Carlo data from the Russian T2 centres, and as a special-purpose T1 centre for CMS, taking a share of the general distribution of AOD and RECO data as required
Russia in World-wide LHC Computing Grid RDIG resource planning for LHC:
Russia in World-wide LHC Computing Grid
– Moscow: 1 Gbps (ITEP, RRC KI, SINP MSU, ... LPI, MEPhI); plans for 10 Gbps in 2007 for some centres
– IHEP (Protvino): 100 Mbps fibre-optic (some plans for 1 Gbps)
– JINR (Dubna): 1 Gbps f/o; upgrade to 10 Gbps in autumn 2007
– BINP (Novosibirsk): 45-100 Mbps (GLORIAD, growing)
– INR RAS (Troitsk): f/o link under construction (100 Mbps ...)
– PNPI (Gatchina): f/o link under testing (100 Mbps - 1 Gbps)
– SPbSU (St. Petersburg): 1 Gbps (potentially available)
Russia in World-wide LHC Computing Grid
International connectivity for RDIG/LCG:
– For connection with LCG centres in Europe: the GEANT2 PoP was upgraded in December 2006 to 2.5 Gbps (plans for 10 Gbps)
– Dedicated connectivity to the “host” T1s (a kind of LHC OPN extension) via GLORIAD:
Jan '07: 1 Gbps connection Moscow-Amsterdam
Feb '07: Moscow-CERN at 1 Gbps (310 Mbps as of today); testing by JINR measured 30 MByte/s disk-to-disk
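As a rough illustration of the gap between the nominal link capacity and the achieved disk-to-disk rate above (1 Gbps Moscow-CERN link vs the ~30 MByte/s measured by JINR), here is a minimal sketch; the `utilization` helper is our own illustration, not part of any grid tool:

```python
# Illustrative only: compare nominal link capacity with observed throughput.
# Figures from the slide: 1 Gbps Moscow-CERN link, ~30 MByte/s disk-to-disk.

def utilization(link_gbps: float, observed_mb_s: float) -> float:
    """Fraction of nominal capacity achieved (1 Gbps = 125 MB/s, decimal units)."""
    capacity_mb_s = link_gbps * 1000 / 8  # Gbit/s -> MByte/s
    return observed_mb_s / capacity_mb_s

# 30 MB/s over a 125 MB/s nominal link
print(f"{utilization(1.0, 30):.0%} of nominal capacity used")  # prints "24% of nominal capacity used"
```

The shortfall is typical for single-stream disk-to-disk transfers over long round-trip-time paths, which is part of the motivation for the dedicated solutions on the next slide.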
But we need dedicated solutions now!
CERN-T1 as the FTS server for Russian sites (since February), using the Moscow-CERN lightpath provided by RBNet/GLORIAD:
– March 19: transfer rates of 14-32 MB/s, 99% successful transfers, 990 GB transferred
– For comparison, October-November 2006 (CNAF as the FTS T1 for JINR): transfer rates below 2 MB/s, success ratio ~20-30%
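For scale, a back-of-the-envelope estimate (our own sketch, not from the slides) of how long the 990 GB test transfer takes at the observed 14-32 MB/s rates:

```python
# Illustrative only: transfer-time estimate for the March 2007 FTS test figures.

def transfer_hours(total_gb: float, rate_mb_s: float) -> float:
    """Hours to move total_gb gigabytes at rate_mb_s MByte/s (1 GB = 1000 MB)."""
    return total_gb * 1000 / rate_mb_s / 3600

best = transfer_hours(990, 32)   # best observed rate
worst = transfer_hours(990, 14)  # worst observed rate
print(f"990 GB: {best:.1f}-{worst:.1f} hours at 32-14 MB/s")
# prints "990 GB: 8.6-19.6 hours at 32-14 MB/s"
```

So even at the best observed rate, the test transfer represents most of a working day; at the pre-lightpath rates of under 2 MB/s it would have taken nearly a week.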
To the 2nd year of RDIG/EGEE-II and toward EGEE-III
• Manpower:
• involve regional universities
• new outreach to major universities:
• MFTI + RRC KI
• MSU + JINR
• MEPhI + RRC KI + ...
• StPSU + PNPI
• plus a new Master's-level course on e-Infrastructure; could it be organized as an international cooperation?
To the 2nd year of RDIG/EGEE-II and toward EGEE-III
• Stable core infrastructure:
• search for new application areas and users
• new resource centres should come with new users and applications
• To get new applications and users:
• grid core services should “know” Windows jobs (virtualization technology)
• grid core services should “know” MPI clusters (of small/medium size): roughly 90% of the jobs that nominally ask for supercomputing (parallelization) can in reality be parallelized over at most 10-15 CPUs
To the 2nd year of RDIG/EGEE-II and toward EGEE-III
• Stable operation of RDIG as a fully functional segment of EGEE-II in production mode
• Stable 24x7 provision of basic grid services at the national level, and involvement in grid operations management in EGEE-II
• A stably functioning CA at the national level
• Start of massive user support
• Start of chaotic use of the grid by LHC users
Flagship applications: LHC, fusion (toward ITER), nanotechnology
Current interest from: medicine, engineering, and more
RDIG has started on the (long) way to a sustainable grid infrastructure. In EGEE-III, a Joint Research Unit led by RRC KI is to be formed. The EGI idea has gained interest and support in the RDIG community.