
A Plan for HEP Data Grid Project in Korea



  1. A Plan for HEP Data Grid Project in Korea
  August 5, 2002, CDF/D0 Grid Meeting
  Kihyeon Cho, Center for High Energy Physics (CHEP), Kyungpook National University

  2. Contents • HEP Data Grid Project in Korea • Current Status and Plan • Inside of CHEP • CHEP and EU Data Grid • CHEP and Fermilab • Conclusions

  3. HEP Data Grid Project in Korea

  4. HEP Data Grid in Korea
  • In Korea, the HEP Data Grid Project started in March 2002, funded by the Ministry of Information and Communication.
  • Five-year project (2002-2006).
  • 30 high energy physicists (Ph.D. level) from 5 experiments (CDF, AMS, CMS, K2K, PHENIX) are involved in the project.
  • The project includes building a Regional Data Center (Tier-1) for CMS, AMS, and CDF(?) in Asia. The Regional Data Center will be located at CHEP (Center for High Energy Physics) at Kyungpook National University.

  5. Participation of Institutions in the HEP Data Grid Project
  (Diagram: CHEP at Kyungpook N. U. at the center, linked to 1. CMS Tier-1 Regional Center (CERN), 2. AMS Regional Center (CERN), 3. CDF Grid (USA), 4. Belle Exp. (Japan), 5. K2K Exp. (Japan), 6. PHENIX Grid (USA); participating institutions: Konkuk U., Gyeongsang N. U., Korea U., Seoul N. U., Sungkyunkwan U., Chonnam N. U., Ewha W. U., Yonsei U., Dongshin U., KBSI, the Data Grid cluster, and other users.)
  Center for High Energy Physics (CHEP), Kyungpook National University, Daegu, Korea
  • Established on July 1, 2000.
  • A national center designated by the Korean Ministry of Science and Technology and supported by the Korea Science and Engineering Foundation (KOSEF).
  • 47 HEP physicists with doctoral degrees and 120 graduate students from 12 institutes inside Korea participate in the center.

  6. Experiments for the HEP Data Grid Project in Korea
  (Diagram: Korea (CHEP) at the hub, connected to FNAL, USA (CDF); BNL, USA (PHENIX); the Space Station (AMS); CERN, Europe (CMS); and KEK, Japan (K2K).)

  7. Project (2002-2006) Final Goal
  • Networking
  • Multi-level hierarchy (both for data and for computation)
  • Tier-1: CHEP
  • Tier-2: Seoul National University (SNU), Sungkyunkwan University (SKKU)
  • Network services
  • Videoconferencing
  • OO database access
  • Data storage capability in the Data Center
  • Storage: 1,100 TB of RAID-type disk
  • Tape drives: 60 IBM 6590 tape drives
  • TSM server/HPSS: 3 PB
  • Computing power (1,000-CPU clusters; see the sketch below)
  • Tier-1 institute (CHEP): 600-CPU Linux cluster
  • Tier-2 institutes (SNU, SKKU): 200-CPU Linux cluster each
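A minimal sketch (Python; the data structure is illustrative, not part of the project plan) of the planned tier hierarchy, checking that the per-site clusters add up to the 1,000-CPU goal:

```python
# Planned computing hierarchy (site names and CPU counts from the slide;
# the dictionary layout itself is a hypothetical illustration).
tiers = {
    "Tier-1": {"CHEP": 600},              # 600-CPU Linux cluster
    "Tier-2": {"SNU": 200, "SKKU": 200},  # 200-CPU Linux clusters each
}

total_cpus = sum(n for sites in tiers.values() for n in sites.values())
assert total_cpus == 1000  # matches the 1,000-CPU cluster goal
print(f"Planned computing power: {total_cpus} CPUs")
```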

  8. Project (2002-2006) The Goal of the HEP Data Grid Network
  (Diagram: 3 PB of storage, supercomputers, and a processing fabric of 1,000 commodity PCs, connected over LANs and the KOREN national network by Gbit links.)
  (1) For the Data Grid abroad
  • Network between CERN (Tier-0) and CHEP (Tier-1), Korea: TEIN ( Gbps)
  • Network between Fermilab, USA and CHEP (Tier-1), Korea: APII (1 Gbps)
  • Network between KEK, Japan and CHEP (Tier-1), Korea: APII and Hyunhae (1 Gbps)
  (2) For the domestic Data Grid
  • Network between CHEP (Tier-1) and Tier-2: 155 Mbps ~ 1 Gbps
  • Network between CHEP (Tier-1) and others: 45 ~ 155 Mbps

  9. KOREN/TEIN and APII Test Beds
  (KOREN topology diagram: Seoul, Daejeon, and Daegu (CHEP) on the KOREN backbone, with international links via APII to China (IHEP), via APII to the USA (Chicago, 1 Gbps; StarTap/ESnet to Fermilab, BNL, etc.), via APII to Japan (KEK), and via TEIN to CERN.)

  10. Current Status at CHEP

  11. Current Resources Hardware
  • CPU: 40-CPU (1.7 GHz) cluster
  • HDD
  • about 3 TB (40 × 80 GB IDE)
  • 100 GB RAID + 600 GB tape library
  • Network: CHEP --(1 Gbps)-- KNU --(1 Gbps)-- KOREN --(45 Mbps)-- StarTap --(?)-- Fermilab
  • The actual network performance between CHEP and FNAL is 3~5 Mbps (30~50 GB/day; see the arithmetic sketch below).
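The quoted daily volume follows directly from the link rate; a few lines of Python check the conversion (the 3~5 Mbps figures are from the slide):

```python
# Convert a sustained link rate in Mbps to transferred volume in GB/day.
def mbps_to_gb_per_day(mbps: float) -> float:
    seconds_per_day = 24 * 60 * 60        # 86,400 s
    megabits_per_day = mbps * seconds_per_day
    return megabits_per_day / 8 / 1000    # bits -> bytes, then MB -> GB

for rate in (3, 5):
    print(f"{rate} Mbps ~ {mbps_to_gb_per_day(rate):.0f} GB/day")
# prints ~32 and ~54 GB/day, consistent with the quoted 30~50 GB/day
```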

  12. Current Network between CHEP and FNAL (actual 3~5 Mbps)
  (Diagram: 40 PCs and servers at CHEP → CHEP Gigabit switch (IBM 8271) → Kyungpook Nat'l Univ. C6509 over Gigabit Ethernet → KOREN L3 switch → APII (45 Mbps) → StarTap → FNAL; a separate TEIN link (10 Mbps) → CERN.)

  13. Current Status at CHEP: PC Clusters

  14. Current Status at CHEP Data Grid Software and Middleware
  • Installed Globus 2.0 on 12 PCs (10 @ CHEP, 1 @ SNU, 1 @ Fermilab).
  • Constructed a private CA (Certificate Authority).
  • Installed MDS (Metacomputing Directory Service).
  • Installed GridFTP, Replica Catalog, and Replica Management.
  • Tested the Grid test bed (see the sketch below)
  • between CHEP (Tier-1) and SNU (Tier-2)
  • between CHEP (Tier-1) and Fermilab
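A minimal sketch of such a test bed check, driven from Python (hostnames and paths are hypothetical; grid-proxy-init and globus-url-copy are the standard Globus 2.0 proxy and GridFTP clients):

```python
import subprocess

# Create a short-lived proxy credential from the user's grid certificate
# (here, one issued by the private CA mentioned above).
subprocess.run(["grid-proxy-init"], check=True)

# Copy a test file between two GridFTP servers on the test bed.
# Hostnames and paths are hypothetical placeholders.
src = "gsiftp://gridftp.chep.knu.ac.kr/data/testfile"
dst = "gsiftp://gridftp.snu.ac.kr/data/testfile"
subprocess.run(["globus-url-copy", src, dst], check=True)
```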

  15. CHEP and EDG (EU Data Grid)
  (Diagram: Korea (CHEP) connected to EDG (EU Data Grid) at CERN, Europe, over TEIN (Trans-Eurasia Information Network).)

  16. TEIN (Trans-Eurasia Information Network)
  (Diagram: CERN -- GEANT -- TEIN (10 Mbps) -- KOREN (1 Gbps) -- CHEP.)
  • Actual network topology (map) of the TEIN Pilot Project.
  • Bandwidth of the TEIN:
  • Current: 10 Mbps, with a 20 Mbps PCR
  • At the end of this year: 45 Mbps
  • A proportion of the traffic is exchanged with other Asian and European countries.
  • The project will also include connectivity with KEK in Japan.
  • Possible future extensions of the TEIN Pilot Project.
  • The ultimate goal for the connectivity between CHEP and CERN is more than a Gbps (lambda).

  17. The Status of the EDG Test Bed in Korea
  • Three researchers visited CERN for the EDG test bed during July 14~24, 2002.
  • A PKI was created for the Korean test bed:
  • C=KR, O=KCHEPDG, OU=SNU: Seoul National University
  • C=KR, O=KCHEPDG, OU=KNU: Kyungpook National University
  • Representatives for the institutes:
  • Bockjoo Kim: Seoul National University
  • Kihyeon Cho: Kyungpook National University
  • We got our certificates from France.
  • We have learned the basic components of EDG:
  • UI: User Interface
  • CE: Computing Element
  • SE: Storage Element
  • WN: Worker Node
  • RB: Resource Broker, Gatekeeper
  • LCFG: server for installing the EDG software on each component

  18. EDG Job Life Cycle
  (Diagram: 1. The user submits a job from the UI, and data on the SE is requested from the UI. 2. Resources are checked for the given job. 3. The job is submitted to a Worker Node, and the output is sent back to the user.)
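As a rough illustration, the same life cycle driven from the UI with the EDG 1.x workload-management commands (a sketch; the JDL file name and the job-ID parsing are hypothetical):

```python
import subprocess

# "myjob.jdl" would describe the executable, sandboxes, and requirements
# in EDG's Job Description Language; the Resource Broker matches it to a
# Computing Element. File name and output parsing are illustrative only.
submit = subprocess.run(
    ["dg-job-submit", "myjob.jdl"],       # hand the job to the RB
    capture_output=True, text=True, check=True,
)
job_id = submit.stdout.strip().splitlines()[-1]   # RB prints a job ID

subprocess.run(["dg-job-status", job_id], check=True)      # poll the job
subprocess.run(["dg-job-get-output", job_id], check=True)  # fetch the output sandbox
```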

  19. Future Plan for the EDG Test Bed
  • CHEP will join a VO (Virtual Organisation) as CMS.
  • Install the basic components at CHEP.
  • Register the CA server inside Korea.
  • Maintain the test bed by running CMS simulation code.
  • We will have a better network between Europe and Korea: at the end of this year, the bandwidth of TEIN will be 45 Mbps.

  20. CHEP and the Fermilab Grid
  (Diagram: Fermilab in the USA connected to Korea (CHEP), which hosts KCAF, the DCAF in Korea.)

  21. Why DCAF/Grid in the Future?
  • CHEP has a plan to make a clone of the CAF.
  • CAF (Central Analysis Farm)
  • Limited resources and space at FCC.
  • In Run IIb, the data size will be 6 times larger than now.
  • If there are network problems at FCC, relying on it alone is dangerous.
  • DCAF
  • For users in regional areas and/or around the world
  • Korea, Toronto, Karlsruhe, …
  • We call the DCAF in Korea "KCAF".

  22. DCAF in Korea (KCAF)
  (Diagram of the proposed scheme: users → Resource Broker → the Central Analysis Farm (CAF) and the DCAFs, among them KCAF.)

  23. Where is KCAF?
  (Diagram: the CAF in the USA and the DCAFs in Karlsruhe and Toronto, connected over APII (Asia Pacific Information Infrastructure) to the DCAF in Korea (KCAF) at the Center for HEP (CHEP), Kyungpook Nat'l University, Daegu, Korea.)

  24. The Steps Toward the Goal of KCAF
  • Step 1. Build an MC production farm using KCAF.
  • First, we are constructing a 20/40-CPU test bed for KCAF.
  • After policy decisions inside CHEP (which also runs a test bed for EDG) and with CDF, we will decide how many of this year's planned 100 CPUs will be used for the actual MC production farm.
  • Step 2. Handle real data.
  • Extend KCAF to a real-data handling system using SAM, GridFTP, etc., once the real-data handling system has settled down.
  • Step 3. Final goal of the CDF Grid.
  • Gridification of KCAF, integrated with EDG and the CDF Grid.

  25. Current CAF at FCC

  26. A Design of KCAF
  • Clone of the CAF (Central Analysis Farm)
  • Users
  • Korean groups
  • Other Asian groups
  • Others around the world
  • Technical requests
  • KDC for Kerberos
  • Data handling system
  • Connect the 1 TB buffer to fcdfsam at FCC

  27. Technical Request
  (Diagram of the data path. At FCC (CAF): CDFen raw data on STKen tape and calibration data feed fcdfsam through dCache and a stager into a 1 TB buffer at FCC; transfers to CHEP go over rcp, GridFTP, and bbftp. At CHEP (KCAF): dCache and a SAM station (Smaster, FSS, stager) serve the KCAF head node and the KCAF cluster; users connect from desktops via CAFGUI/ICAFGUI, with remote desktop access and an ICAF FTP server; local copies via rcp/cp. A sketch of the buffer-draining step follows below.)
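As a rough illustration of the buffer-to-KCAF transfer step, a Python sketch (hostnames, paths, and file names are hypothetical placeholders; globus-url-copy is the standard GridFTP client, and bbftp or rcp would be drop-in alternatives):

```python
import subprocess

# Drain files staged in the 1 TB buffer at FCC into KCAF-side storage.
# All hostnames and paths below are hypothetical placeholders.
BUFFER_URL = "gsiftp://buffer.fnal.gov/cdf/staged"
KCAF_URL = "gsiftp://kcaf.knu.ac.kr/cdf/incoming"

def transfer(filename: str) -> None:
    """Copy one staged file from the FCC buffer to KCAF over GridFTP."""
    subprocess.run(
        ["globus-url-copy",
         f"{BUFFER_URL}/{filename}",
         f"{KCAF_URL}/{filename}"],
        check=True,
    )

# File names would come from the SAM catalog; these are illustrative.
for f in ["run12345.raw", "run12346.raw"]:
    transfer(f)
```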

  28. Future Plan (This Year)
  • A 100-PC cluster will be constructed by the end of this year.
  • A 6 TB HPSS system is expected to arrive.
  • Contribute a 1 TB hard disk now, plus 4 TB of hard disk in December, to CDF as the network buffer between CHEP and FCC.
  • In early November 2002, the International HEP Data Grid Workshop will be held at CHEP, Kyungpook National University, Korea.

  29. Conclusions
  • The Grid is a worldwide trend in high energy physics; someday we will have to use it.
  • Our government is very interested in the Grid project.
  • The HEP Data Grid project in Korea is very active toward both the EU and the USA.
  • Any comments?
