
High Energy Physics (HEP) Computing


Presentation Transcript


  1. High Energy Physics (HEP) Computing. HyangKyu Park, Kyungpook National University, Daegu, Korea. 2008 Supercomputing & KREONET Workshop, Ramada Hotel, Jeju, Oct. 16–18, 2008

  2. High Energy Physics (HEP) People have long asked, “What is the world made of?” “What holds it together?” High Energy Physics (HEP) is the study of the basic elements of matter and the forces acting among them.

  3. Some of the Questions we hope to Answer... What is the origin of mass? How many space-time dimensions do we live in? Are the particles fundamental or do they possess structure? Why is there overwhelmingly more matter than anti-matter in the Universe? What is the nature of the dark matter that pervades our galaxy?

  4. Major HEP Laboratories in the World: DESY (Germany), FNAL (US), SLAC (US), BNL (US), CERN (Europe), KEK (Japan)

  5. Major HEP Experiments HEP collaborations are increasingly international.

  6. CMS Computing

  7. Large Hadron Collider (LHC) @ CERN, where the web was born. 6,000+ physicists, 250+ institutes, 60+ countries. Experiments: CMS, TOTEM, ATLAS, ALICE, LHCb (B physics). Challenges: analyze petabytes of complex data cooperatively; harness global computing, data & network resources.

  8. The LHC has started just now!

  9. “The CMS detector is essentially a 100-megapixel digital camera that will take 40 M pictures/s of particle interactions.” (Dan Green) • The High Level Trigger farm writes RAW events of 1.5 MB at a rate of 150 Hz: 1.5 MB × 150 Hz × 10^7 s ≈ 2.3 PB/yr
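A quick sanity check of that estimate (a back-of-the-envelope sketch; the 10^7 s value is the slide's assumed effective running time per year):

```python
# Verify the RAW data volume quoted on the slide.
event_size_mb = 1.5        # MB per RAW event (High Level Trigger output)
rate_hz = 150              # events written per second
seconds_per_year = 1e7     # effective beam time per year (slide's assumption)

volume_pb = event_size_mb * rate_hz * seconds_per_year / 1e9  # MB -> PB
print(f"~{volume_pb:.2f} PB/yr")  # ~2.25 PB/yr, i.e. the ~2.3 PB/yr quoted above
```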

  10. LEP & LHC in Numbers

      Quantity              LEP (1989/2000)   CMS (2008)        Factor
      Electronic channels   ~100,000          ~10,000,000       x 10^2
      Raw data rate         ~100 GB/s         ~1,000 TB/s       x 10^4
      Data rate on tape     ~1 MB/s           ~100 MB/s         x 10^2
      Event size            ~100 KB           ~1 MB             x 10
      Bunch separation      22 µs             25 ns             x 10^3
      Bunch crossing rate   45 kHz            40 MHz            x 10^3
      Rate on tape          10 Hz             100 Hz            x 10
      Analysis rate         0.1 Hz (Z0, W)    10^-6 Hz (Higgs)  x 10^-5
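The scale factors follow directly from the quoted numbers; a small illustrative recomputation (values transcribed from the table above, converted to base units):

```python
# Recompute the LEP -> CMS factors from the table's approximate values.
rows = {
    "electronic channels":      (1e5,    1e7),      # count
    "raw data rate (B/s)":      (100e9,  1000e12),  # 100 GB/s -> 1,000 TB/s
    "data rate on tape (B/s)":  (1e6,    100e6),    # 1 MB/s -> 100 MB/s
    "event size (B)":           (100e3,  1e6),      # 100 KB -> 1 MB
    "bunch crossing rate (Hz)": (45e3,   40e6),     # 45 kHz -> 40 MHz
}
for name, (lep, cms) in rows.items():
    print(f"{name:26s} x {cms / lep:,.0f}")
```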

  11. The LHC Data Grid Hierarchy (KNU among the sites): ~2,000 physicists, 40 countries; ~10s of petabytes/yr by 2010; ~1,000 petabytes in < 10 yrs?

  12. Service and Data Hierarchy • Tier-0 at CERN • Data acquisition & reconstruction of raw data • Data archiving (tape & disk storage) • Distribution of raw & reconstructed data -> Tier-1 centers • Tier-1 • Regional & global services • ASCC (Taiwan), CCIN2P3 (Lyon), FNAL (Chicago), GridKa (Karlsruhe), INFN-CNAF (Bologna), PIC (Barcelona), RAL (Oxford) • Data archiving (tape & disk storage) • Reconstruction • Data-heavy analysis • Tier-2 • ~40 sites (including Kyungpook National Univ.) • MC production • End-user analysis (local community use)
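As a compact restatement of that division of labor (a sketch for illustration only; the role lists are taken from the bullets above):

```python
# Responsibilities of each tier in the CMS computing model, per the slide.
TIER_ROLES = {
    "Tier-0": ["data acquisition & reconstruction of raw data",
               "tape/disk archiving",
               "distribution of raw & reconstructed data to Tier-1"],
    "Tier-1": ["regional & global services",
               "tape/disk archiving",
               "reconstruction",
               "data-heavy analysis"],
    "Tier-2": ["MC production",
               "end-user analysis (local community use)"],
}

for tier, roles in TIER_ROLES.items():
    print(tier, "->", "; ".join(roles))
```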

  13. LHC Computing Grid (LCG) Farms, including LCG_KNU

  14. Current Tier-1 Computing Resources • Requirements by 2008 • CPU: 2500 kSI2k • Disk: 1.2 PB • Tape: 2.8 PB • WAN: At least 10 Gbps

  15. Current Tier-2 Computing Resources • Requirements by 2008 • CPU: 900 kSI2k • Disk: 200 TB • WAN: At least 1 Gbps. 10 Gbps is recommended
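Putting slides 9, 14 and 15 together gives a feel for the scale (an illustrative calculation; it assumes the Tier-1 figures are per regional centre):

```python
# Compare the 2008 tier requirements against the expected RAW data volume.
raw_pb_per_year = 2.25   # RAW volume from slide 9, in PB/yr
tier1 = {"cpu_ksi2k": 2500, "disk_pb": 1.2, "tape_pb": 2.8, "wan_gbps": 10}
tier2 = {"cpu_ksi2k": 900,  "disk_pb": 0.2, "wan_gbps": 1}

print(f"Tier-1 tape covers ~{tier1['tape_pb'] / raw_pb_per_year:.1f} yr of RAW data")
print(f"A Tier-2 provides ~{tier2['cpu_ksi2k'] / tier1['cpu_ksi2k']:.0%} "
      f"of a Tier-1's CPU and ~{tier2['disk_pb'] / tier1['disk_pb']:.0%} of its disk")
```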

  16. CMS Computing in KNU

  17. [Network map: APII/TEIN2 and GLORIAD international links (Oct. 2007), including KR–JP 10G (KOREN/APII), KR–North America 10G (via TransPAC2 and via GLORIAD), KR–CN 2.5G/10G, TEIN2 North/ORIENT to the EU, and 45–622 Mbps TEIN2 South links to VN, HK, PH, TH, MY, SG, ID and AU.] Courtesy of Prof. D. Son and Dr. B.K. Kim

  18. CMS Computing Activities in KNU • Running a Tier-2 • Participating in LCG Service Challenges and CSAs every year as a Tier-2 • SC04 (Service Challenge): Jun.–Sep. 2006 • CSA06 (Computing, Software & Analysis): Sep.–Nov. 2006 • Load Test 07: Feb.–Jun. 2007 • CSA07: Sep.–Oct. 2007 • Pre-CSA08: Feb. 2008 • CSA08: May–Jun. 2008 • Testing, demonstrating, and bandwidth challenging at SC05, SC06, SC07 • Preparing physics analyses • RS graviton search • Drell-Yan process study • Configured a Tier-3 and supporting Tier-3 sites (Konkuk U.)

  19. CSA07 (Computing, Software & Analysis) • A “50% of 2008” data challenge of the CMS data-handling system • Schedule: Jul.–Aug. (preparation), Sep. (CSA07 start)

  20. CSA08 (Computing, Software & Analysis)

  21. Summary of CSA07

  22. Transferred Data Volume from Tier-1 to KNU during CSA08

  23. Job Submission Activity during CSA08 (sites shown: MIT, DESY, KNU)

  24. Transferred Data Volume from Tier-1 to KNU

  25. Job Submission Activity from Apr. to Oct. (chart annotations: system upgrade, downtime; sites shown: MIT, DESY, KNU)

  26. Configuring the Tier-3 with Konkuk University

  27. Elements of the Data Grid System • Data Grid service (or support) nodes (8 nodes): • glite-UI (User Interface) • glite-BDII (Berkeley Database Information Index) • glite-LFC_mysql (LCG File Catalog) • glite-MON (monitoring) • glite-PX (proxy server) • glite-SE_dcache (Storage Element) • glite-RB (Resource Broker, job management) • glite-CE_torque (Computing Element) • Worker Nodes: data processing and computation • Storage Element (file server): stores large amounts of data
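For context, a minimal sketch of how an end user on the glite-UI node could submit work into such a system in the gLite era (the trivial JDL and file names are illustrative, not from the talk; glite-wms-job-submit assumes the workload-management/RB path listed above):

```python
# Submit a trivial job from a gLite User Interface node.
import subprocess

JDL = """\
Executable    = "/bin/hostname";
StdOutput     = "std.out";
StdError      = "std.err";
OutputSandbox = {"std.out", "std.err"};
"""

with open("hello.jdl", "w") as f:
    f.write(JDL)

# -a: delegate a proxy automatically; -o: save the job ID for later queries.
subprocess.run(["glite-wms-job-submit", "-a", "-o", "jobIDs", "hello.jdl"],
               check=True)
subprocess.run(["glite-wms-job-status", "-i", "jobIDs"], check=True)
```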

  28. Tier-3 Federation [Map of Korean CMS institutions connected over KOREN, with 10G/20G/40G backbone links among Seoul, Suwon, Daejeon, Daegu, Busan and Gwangju: Konkuk Univ., Kangwon National Univ., Korea Univ., Univ. of Seoul, Sungkyunkwan Univ., Chungbuk National Univ., Kyungpook National Univ., Chonbuk National Univ., Seonam Univ., Chonnam National Univ., Dongshin Univ., Gyeongsang National Univ. Resources: 40 CPUs & 10 TB]

  29. Summary • HEP has pushed against the limits of networking and computer technologies for decades. • High-speed networks are vital for HEP research. • The LHC experiment has just started and will soon produce ~10 PB/yr of data. We may expect 1 Tbps in less than a decade. • HEP groups in the US, EU, Japan, China and Korea are collaborating on advanced network projects and Grid computing.
