Xrootd @ CC-IN2P3
Jean-Yves Nief, CC-IN2P3
HEPiX, SLAC, October 11th – 13th, 2005


Presentation Transcript


  1. Xrootd @ CC-IN2P3 Jean-Yves Nief, CC-IN2P3 HEPiX, SLAC October 11th – 13th, 2005

  2. Overview
  • CC-IN2P3: Tier A for BaBar since 2001.
  • Xrootd deployed primarily for BaBar (2003).
  • Smooth transition from the Objectivity architecture: the two systems run on the same servers.
  • Hybrid storage (disks + tapes):
    • Tapes: master copy of the files.
    • Disks: temporary cache.
  • Interfaced with the Mass Storage System (HPSS) using RFIO in Lyon (see the sketch after this slide).
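A conceptual sketch of the cache logic described above, in C++. This is not actual Xrootd or HPSS code: the staging helper is a stub, and in the Lyon setup that step goes through RFIO to HPSS; the paths and function names are purely illustrative.

    // Conceptual sketch only, not actual Xrootd or HPSS code: it illustrates the
    // hybrid-storage idea above (disks = temporary cache, tapes = master copy).
    // The staging helper is a stub; in Lyon that step would be an RFIO transfer
    // from HPSS. Paths and names are placeholders.
    #include <string>
    #include <unistd.h>

    // Is the requested file already present in the local disk cache?
    static bool in_disk_cache(const std::string &path)
    {
        return access(path.c_str(), R_OK) == 0;
    }

    // Stand-in for the real staging step (an RFIO copy from HPSS to the disk cache).
    static bool stage_from_hpss(const std::string &path)
    {
        (void)path;   // a real data server would trigger the RFIO transfer here
        return false;
    }

    // Serve from disk if possible, otherwise stage the tape master copy first.
    bool ensure_on_disk(const std::string &path)
    {
        return in_disk_cache(path) || stage_from_hpss(path);
    }

    int main(int argc, char **argv)
    {
        return (argc > 1 && ensure_on_disk(argv[1])) ? 0 : 1;
    }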

  3. Lyon architecture
  [Architecture diagram: a client, master Xrootd / Objy servers (2 servers), slave Xrootd / Objy servers (20 data servers, 70 TB of disk cache), and HPSS; 140 TB of ROOT data, 180 TB of Objectivity data.]
  • (1) + (2): load balancing; the client asks the master servers for T1.root.
  • (4) + (5): dynamic staging of T1.root from HPSS to the data server's disk cache.
  • (6): random access by the client on the data server.

  4. Xrootd for other experiments
  • Master copy of the data kept in HPSS for most of the experiments.
  • Transparent access to these data.
  • Automatic management of the cache resource.
  • Used on a daily basis within the ROOT framework (up to 1.5 TB of disk cache used) by:
    • D0 (HEP).
    • AMS (astroparticle).
    • INDRA (nuclear physics).
  (A minimal ROOT access example follows this slide.)
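As an illustration of this daily ROOT usage, a minimal ROOT macro (C++) that opens a remote file through the xrootd protocol and loops over a tree. The server name, file path and tree name are placeholders, not real CC-IN2P3 ones.

    // Minimal ROOT macro: open a file over the xrootd protocol and loop over a tree.
    // Server, path and tree name below are placeholders.
    #include "TFile.h"
    #include "TTree.h"

    void read_via_xrootd()
    {
        // The "root://" prefix makes TFile::Open go through the xrootd client.
        TFile *f = TFile::Open("root://xrootd.example.in2p3.fr//babar/data/T1.root");
        if (!f || f->IsZombie()) return;

        TTree *t = 0;
        f->GetObject("events", t);   // "events" is a placeholder tree name
        if (t) {
            for (Long64_t i = 0; i < t->GetEntries(); ++i)
                t->GetEntry(i);      // each entry read turns into remote (often random) I/O
        }
        f->Close();
    }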

  5. Assessment…
  • Very happy with Xrootd!
  • Fits our needs really well.
  • Random access between the client and the data server.
  • Sequential access between the MSS and the servers.
  • Lots of freedom in the configuration of the service.
  • Administration of the servers is very easy (fault tolerance).
  • No maintenance needed even under heavy usage (more than 600 clients in parallel).
  • Scalability: very good prospects.

  6. … and outlook
  • Going to deploy it for ALICE and also CMS (A. Trunov):
    • Xrootd / SRM interface.
  • Usage outside the ROOT framework:
    • I/O for some projects (e.g. astrophysics) can be very stressful compared to regular HEP applications.
    • Needs transparent handling of the MSS.
    • Using the Xrootd POSIX client APIs for reading and writing (see the sketch after this slide).
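A minimal sketch of what reading through the POSIX route could look like, assuming a compatibility layer (e.g. a preload library) that routes standard open/read calls on root:// URLs to the xrootd client. The URL is a placeholder, not a real CC-IN2P3 path.

    // Sketch only: assumes an Xrootd POSIX compatibility layer (e.g. a preload
    // library) so that ordinary POSIX calls can operate on root:// URLs.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main()
    {
        const char *url = "root://xrootd.example.in2p3.fr//astro/project/file.dat";

        int fd = open(url, O_RDONLY);   // intercepted and routed to the xrootd client
        if (fd < 0) { perror("open"); return 1; }

        char buf[4096];
        ssize_t n;
        while ((n = read(fd, buf, sizeof buf)) > 0) {
            // process buf[0 .. n) here
        }
        close(fd);
        return 0;
    }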

  7. Xrootd vs dCache
  [Plot: I/O profile for an Orca client, read offset vs. time.]
  • Doing comparison tests between the two protocols:
    • I/Os extracted from a CMS application (Orca).
    • Pure I/O (random access).
    • Stress test using up to 100 clients accessing 100 files (an illustrative replay sketch follows this slide).
  • Sorry! Preliminary results cannot be revealed…
  • To be continued… STRONGLY ENCOURAGING PEOPLE TO DO SOME TESTING!
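To give an idea of what such a "pure I/O, random access" replay can look like, an illustrative C++ stand-in that issues reads at pseudo-random offsets from a single file. The file name, block size and read count are assumptions, and the real comparison exercised the xrootd and dCache clients rather than local pread().

    // Illustrative stand-in for a pure-I/O, random-access replay (not the actual
    // test harness): reads pseudo-random offsets from one file.
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>
    #include <cstdlib>

    int main(int argc, char **argv)
    {
        const char  *path      = (argc > 1) ? argv[1] : "sample.root";  // placeholder
        const size_t read_size = 64 * 1024;                             // assumed block size
        const int    n_reads   = 1000;                                  // assumed read count

        int fd = open(path, O_RDONLY);
        if (fd < 0) { perror("open"); return 1; }

        off_t size = lseek(fd, 0, SEEK_END);
        char *buf  = static_cast<char *>(malloc(read_size));

        srand(12345);   // fixed seed: every client replays the same offset sequence
        for (int i = 0; i < n_reads; ++i) {
            off_t off = (size > (off_t)read_size) ? rand() % (size - (off_t)read_size) : 0;
            if (pread(fd, buf, read_size, off) < 0) { perror("pread"); break; }
        }

        free(buf);
        close(fd);
        return 0;
    }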

  8. Issues for the LHC era
  • Prospects for CC-IN2P3:
    • 4 PB of disk space foreseen in 2008.
    • Hundreds of disk servers needed!
    • Thousands of clients.
  • Issues:
    • The choice of protocol is not innocent (it has cost implications: €, $, £, CHF).
    • Need to be able to cluster hundreds of servers.
  • The second point (clustering hundreds of servers) is a key issue and has to be addressed!!
  • Xrootd is able to answer it.
