
WAN RAW/ESD Data Distribution for LHC


Presentation Transcript


  1. WAN RAW/ESD Data Distribution for LHC Bernd Panzer-Steindel, CERN/IT

  2. T0  T1 dataflow • T0 Mass Storage recording of the RAW data from the 4 LHC experiments • T0 First ESD production • RAW data and ESD export to the Tier1 centers one copy of the RAW data spread over the T1 centers of an experiment several copies of the ESD data sets (3-6), experiment dependent ESD size ~= 0.5 * RAW data (each T1 2/3 or one copy of the ESD)  ~10PB per year (requirements from the latest discussions with the 4 experiments) • T1T0 Data import (new ESD versions, MC data, AOD, etc.) • near real time export to the Tier1 centers during LHC running (200 days per year) + ALICE heavy ion data during the remaining 100 days • data transfers are between mass storage systems near real time == from disk cache  sizing of the tape system == minimal data recall from tape Bernd Panzer-Steindel, CERN/IT

  3. Network
  • There are currently 7+ Tier1 centers (RAL, Fermilab, Brookhaven, Karlsruhe, IN2P3, CNAF, PIC, …)
  • The T0 export requirements need at least a 10 Gbit/s link per Tier1 (plus more if one includes the Tier1-Tier2 communications)
  • The CERN T0 therefore needs at least a 70 Gbit/s connection
  • the achievable transfer efficiency on these links is still unknown
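
  The unknown efficiency is to a large extent a TCP question on long, fat pipes. A small sketch of the bandwidth-delay product a single 10 Gbit/s stream must keep in flight; the ~100 ms transatlantic round-trip time is an assumption, not from the slide:

      # Bandwidth-delay product: TCP window needed to fill one Tier1 link.
      # Assumption (not from the slide): ~100 ms RTT CERN <-> a transatlantic
      # Tier1; European Tier1s need proportionally smaller windows.
      link_bit_s = 10e9                  # 10 Gbit/s per-Tier1 link (slide 3)
      rtt_s = 0.100                      # assumed round-trip time
      bdp_bytes = link_bit_s / 8 * rtt_s
      print(f"window needed to fill the pipe: {bdp_bytes / 1e6:.0f} MB")
      # -> ~125 MB in flight, far beyond default TCP windows; hence the TCP
      #    parameter tuning and parallel streams mentioned on the next slide.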

  4. We need to start Service Data Challenges that test/stress all the layers involved in these large, continuous data transfers:
  • network hardware: circuit switching versus packet switching, QoS
  • transport: TCP/IP parameters, new implementations
  • transfer mechanisms: GridFTP
  • mass storage systems: ENSTORE, CASTOR, HPSS, etc.
  • coupling to the mass storage systems: SRM 1.x
  • replication system
  • data movement service (control and bookkeeping layer)
  Key points (a transfer sketch built around the first one follows below):
  • resilience and error-recovery !!
  • resilience and error-recovery !!
  • resilience and error-recovery !!
  • modular layers
  • simplicity
  • performance
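
  To make the resilience point concrete, a minimal retry wrapper around a GridFTP transfer; globus-url-copy and its -p (parallel streams) flag are the standard Globus client, but the endpoints and retry policy here are purely illustrative assumptions:

      # Sketch of one resilient transfer step, assuming the standard
      # globus-url-copy client is installed; URLs and policy are hypothetical.
      import subprocess
      import time

      def transfer(src, dst, streams=4, retries=5, backoff_s=30):
          """Run one GridFTP copy, retrying with a growing backoff on failure."""
          for attempt in range(1, retries + 1):
              rc = subprocess.run(
                  ["globus-url-copy", "-p", str(streams), src, dst]).returncode
              if rc == 0:
                  return True                  # success: report to the bookkeeping layer
              time.sleep(backoff_s * attempt)  # back off, then retry the same file
          return False                         # give up: hand over to error-recovery

      ok = transfer("gsiftp://castor.cern.ch/castor/raw/run001.dat",
                    "gsiftp://tier1.example.org/enstore/raw/run001.dat")

  A real data movement service would add checksums and bookkeeping around each such step; the design point is that failure is treated as the normal case, not the exception.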

  5. Proposed timescales and scheduling

  6. These WAN service data challenges need dedication of
  • material: CPU servers, disk servers, tape drives, etc.
  • services: HSM, load-balanced GridFTP, network
  • personnel: for running the data challenges, debugging, tuning, and software selection and tests at the T0 and the different T1 centers
  Material and personnel must be dedicated for longer periods: months, not weeks! This is important for gaining the necessary experience; only 2 years remain to reach a reliably working worldwide T1 – T0 network.
  To be watched: interference with ongoing productions (HSM, WAN capacity, etc.).
  We need a more detailed plan now, and we should start to 'fill' the network right now! Challenging, interesting and very important. Who is participating, when, and how?
  Discussion ...
