
IET visits, 15 & 19 April 2010




Presentation Transcript


  1. From Zettabytes to Knowledge IET visits, 15 & 19 April 2010 Wolfgang von Rüden, CERN IT Department, Head of CERN openlab

  2. From the International System of Units * * http://en.wikipedia.org/wiki/Yotta-

  3. CERN’s Tools
  • The world’s most powerful accelerator: the LHC
    • A 27 km long tunnel filled with high-tech instruments
    • Equipped with thousands of superconducting magnets
    • Accelerates particles to energies never before attained
    • Produces particle collisions creating microscopic “big bangs”
  • Very large, sophisticated detectors
    • Four experiments, each the size of a cathedral
    • A hundred million measurement channels each
    • Data acquisition systems treating petabytes per second
  • Top-level computing to distribute and analyse the data
    • A computing grid linking ~200 computer centres around the globe
    • Sufficient computing power and storage to handle the data, making them available to thousands of physicists for analysis

  4. LHC

  5. The Large Hadron Collider (LHC) tunnel

  6. LHC experiments

  7. The “ATLAS” experiment during construction: 7000 tons, 150 million sensors, >1 petabyte/s

  8. CMS Closed & Ready for First Beam, 3 Sept 2008

  9. About Zettabytes of raw data… Massive on-line data reduction required to bring the rates down to an acceptable level before storing the data on disk and tape.

  10. The LHC Computing Challenge
  • Signal/noise ratio: 10⁻⁹
  • Data volume
    • High rate × large number of channels × 4 experiments
    • → 15 petabytes of new data each year
  • Compute power
    • Event complexity × number of events × thousands of users
    • → 100k of (today’s) fastest CPUs
    • → 45 PB of disk storage
  • Worldwide analysis & funding
    • Computing funded locally in major regions & countries
    • Efficient analysis everywhere
    • → Grid technology
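The headline numbers above can be sanity-checked with a back-of-envelope calculation. The event size and recording rate below are illustrative assumptions (they are not on the slide), chosen to be typical for the LHC experiments:

```python
# Rough check of the "15 PB of new data each year" figure.
# EVENT_SIZE_B and RECORD_RATE_HZ are assumed, illustrative values.
EVENT_SIZE_B = 1.6e6     # assumed raw event size: ~1.6 MB
RECORD_RATE_HZ = 300     # assumed post-trigger recording rate per experiment
LIVE_SECONDS = 1.0e7     # ~150 days of running per year
EXPERIMENTS = 4

yearly_bytes = EVENT_SIZE_B * RECORD_RATE_HZ * LIVE_SECONDS * EXPERIMENTS
print(f"~{yearly_bytes / 1e15:.0f} PB of new raw data per year")
```

With these assumed inputs the estimate lands at roughly 19 PB, the same order of magnitude as the 15 PB quoted on the slide.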

  11. Data Handling and Computation for Physics Analysis [Diagram: raw data from the detector passes through the event filter (selection & reconstruction) to event summary data; batch physics analysis and event reprocessing produce processed data and analysis objects (extracted by physics topic); event simulation feeds in, and interactive physics analysis consumes the analysis objects. Credit: les.robertson@cern.ch]

  12. How does it work?

  13. Proton acceleration and collision Protons are accelerated by several machines up to their final energy (7 + 7 TeV*). Head-on collisions are produced right in the centre of a detector, which records the new particles being produced. Such collisions take place 40 million times per second, day and night, for about 150 days per year. * In 2010–11 only 3.5 + 3.5 TeV
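The collision rate quoted above implies a staggering number of bunch crossings per year, which is why the online data reduction described on the following slides is unavoidable. A quick calculation using only the figures from the slide:

```python
COLLISIONS_PER_SECOND = 40e6   # 40 million crossings per second (from the slide)
SECONDS_PER_DAY = 86_400
RUN_DAYS_PER_YEAR = 150        # from the slide

per_year = COLLISIONS_PER_SECOND * SECONDS_PER_DAY * RUN_DAYS_PER_YEAR
print(f"~{per_year:.1e} bunch crossings per year")   # ~5.2e+14
```

That is about 5 × 10¹⁴ crossings per year; only a tiny, pre-selected fraction can ever be written to storage.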

  14. Particle collisions in the centre of a detector

  15. Massive Online Data Reduction

  16. Tier 0 at CERN: Acquisition, First-pass Processing, Storage & Distribution

  17. Tier 0 – Tier 1 – Tier 2
  • Tier-0 (CERN): data recording, initial data reconstruction, data distribution
  • Tier-1 (11 centres): permanent storage, re-processing, analysis
  • Tier-2 (~130 centres): simulation, end-user analysis
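As a rough illustration, the tier hierarchy above can be captured in a small data structure (the structure itself is hypothetical; only the site counts and roles come from the slide):

```python
# Hypothetical sketch of the WLCG tier roles listed on the slide.
TIERS = {
    "Tier-0": {"sites": 1,   "roles": ["data recording", "initial data reconstruction", "data distribution"]},
    "Tier-1": {"sites": 11,  "roles": ["permanent storage", "re-processing", "analysis"]},
    "Tier-2": {"sites": 130, "roles": ["simulation", "end-user analysis"]},
}
for name, info in TIERS.items():
    print(f"{name}: ~{info['sites']} site(s); {', '.join(info['roles'])}")
```

The key design point is the division of labour: raw data flows down the hierarchy while simulation and analysis load is pushed out to the many smaller Tier-2 centres.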

  18. Data transfer
  • Full experiment rate needed is 650 MB/s
  • Desire capability to sustain twice that, to allow Tier-1 sites to shut down and recover
  • Have demonstrated far in excess of that
  • All experiments exceeded required rates for extended periods, and simultaneously
  • All Tier-1s have exceeded their target acceptance rates
  [Chart: transfer rates during a fibre cut near Basel]
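The quoted rates translate into a substantial daily volume. A short calculation, using only the 650 MB/s figure from the slide:

```python
REQUIRED_MB_PER_S = 650        # full experiment rate from the slide
HEADROOM_FACTOR = 2            # desired margin for Tier-1 catch-up
SECONDS_PER_DAY = 86_400

daily_tb = REQUIRED_MB_PER_S * 1e6 * SECONDS_PER_DAY / 1e12
print(f"~{daily_tb:.0f} TB exported from CERN per day at nominal rate")
print(f"target with headroom: {REQUIRED_MB_PER_S * HEADROOM_FACTOR / 1000:.1f} GB/s")
```

That is roughly 56 TB leaving CERN every day at the nominal rate, with a sustained 1.3 GB/s target once the recovery headroom is included.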

  19. The Worldwide LHC Computing Grid
  • The LHC Grid Service is a worldwide collaboration between:
    • the 4 LHC experiments,
    • ~200 computer centres that contribute resources, and
    • international grid projects providing software and services
  • The collaboration is bound together by an MoU that:
    • commits resources for the coming years
    • agrees a certain level of service availability and reliability
  • As of today, 33 countries have signed the MoU:
    • CERN (Tier 0) + 11 large Tier-1 sites
    • 132 Tier-2 sites in 64 “federations”
  • Other sites are expected to participate, but without formal commitment

  20. The very first beam-splash event from the LHC in ATLAS, at 10:19 on 10 September 2008

  21. 30 March 2010, first high energy collisions

  22. Capacity of CERN’s data centre (Tier 0)
  • Compute nodes: ~7,000 systems, 41,000 cores
  • Disk storage: 14 petabytes (>20 soon) on 60,000 disk drives
  • Tape storage: 48 petabytes capacity, 24 petabytes in use
  • Corresponds to ~15% of the total capacity in the WLCG
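Two further figures are implied by the numbers on this slide (simple arithmetic, not stated explicitly on the slide):

```python
CERN_DISK_PB = 14              # disk capacity from the slide
DISK_DRIVES = 60_000           # drive count from the slide
CERN_SHARE = 0.15              # CERN is ~15% of total WLCG capacity

print(f"~{CERN_DISK_PB * 1e6 / DISK_DRIVES:.0f} GB of disk per drive on average")
print(f"~{CERN_DISK_PB / CERN_SHARE:.0f} PB of disk implied WLCG-wide")
```

About 233 GB per drive (typical for the era) and, if CERN really is ~15% of the total, on the order of 90 PB of disk across the whole WLCG.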

  23. Thank you !
