
(non-archive) Storage Futures




Presentation Transcript


  1. (non-archive) Storage Futures
  Wahid Bhimji
  SRM; FTS3; xrootd; DPM collaborations; cluster filesystems

  2. SRM; FTS3; xrootd
  • SRM is currently required on all WLCG storage
  • It has limitations; not much of the spec is used
  • Some (e.g. CERN!) are talking about not using it
  • There is a WLCG working group to monitor alternatives (ensure interoperation; limit proliferation; etc.)
  • BUT ATLAS and LHCb require development to get away from SRM, and some issues are not solved
  So:
  • Storage for the coming years needs a stable SRM interface.
  • In future it may not – there will be an interface of some sort – but it will be lighter (I hope).
  • FTS already supports gridftp-only endpoints and FTS3 will also offer http and xrootd.
  • Xrootd use is expanding
  • The big interest is “federated” storage – failover and “any data anywhere” (see the sketch after this slide)
  • (Other solutions, e.g. http, can offer this and are not HEP-specific)
  • CMS is asking all sites to have an xrootd interface by the end of the year
  • ATLAS is also pushing deployment – but the use cases are not clear…
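A minimal sketch of the “failover / any data anywhere” idea behind federated xrootd access: try the local storage element first, then fall back to a federation redirector. The host names and file path here are hypothetical, and it assumes the standard xrdcp client is installed and a valid grid proxy is in place; the same pattern would apply to http or gsiftp URLs under FTS3.

```python
# Sketch only: illustrates federated-xrootd failover. Host names and the
# logical file name below are hypothetical placeholders.
import subprocess

LOCAL_REDIRECTOR = "root://se.example-site.ac.uk"    # hypothetical local SE
GLOBAL_REDIRECTOR = "root://federation.example.org"  # hypothetical federation head
LFN = "/atlas/some/datafile.root"                    # hypothetical logical file name


def fetch(lfn, dest="/tmp/datafile.root"):
    """Try the local storage element first, then fall back to the federation."""
    for redirector in (LOCAL_REDIRECTOR, GLOBAL_REDIRECTOR):
        url = redirector + "/" + lfn.lstrip("/")
        # xrdcp copies the remote file to the local destination; --force overwrites.
        result = subprocess.run(["xrdcp", "--force", url, dest])
        if result.returncode == 0:
            return dest
    raise RuntimeError("file not available locally or via the federation")


if __name__ == "__main__":
    print(fetch(LFN))
```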

  3. DPM collaboration
  • DPM support at CERN is decreasing from its current (very good) level
  • CERN is asking for collaborators to continue to maintain DPM
  • They say they will provide “minimal” support even without a collaboration (bug fixes etc.)
  • Collaboration also has advantages in terms of getting needed developments
  • On the other hand – landscapes change:
  • dCache is maybe easier to use than before; StoRM maybe more stable; Lustre and HDFS are well established
  • Next year’s shutdown _may_ also be an opportunity to try something different
  • Though DPM is also offering “DMLite” on top of Lustre/HDFS

  4. Options and issues: my twist on a doc from Jens
  • Join the collaboration (~1 FTE or 2 x 0.5)
  • Do we have the skills for core development?
  • Does DPM have a long-term (support) future?
  • Is the shutdown a chance to move to something “better” (e.g. for “hot files”)?
  • Move to something else
  • dCache; StoRM/Lustre; DMLite/Lustre; DMLite/HDFS
  • Migrating data (for ATLAS a recopy is fine but there is bound to be some hassle)
  • Migrating storage (onto something new / unfamiliar) is a lot of work – especially for smaller sites.
  • We have a lot of DPM experience (e.g. tuning), so an alternative may not work out “better” for us

  5. Evaluation
  • Need to try DPM development to see how easy it is (e.g. with DMLite)
  • Need criteria if comparing alternatives, e.g.:
  • Transition effort
  • Maintenance effort
  For our use cases:
  • Stability
  • Functionality
  • Performance (inc. ease of tuning)
  • Both of these take time (i.e. six months evaluating could be spent training in DPM)
  • Site-admin view should have high weight... (a toy comparison sketch follows)
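A toy weighted-comparison sketch for the criteria listed above, with the site-admin-facing criteria (transition and maintenance effort) weighted most heavily. All weights and scores are placeholders for illustration only, not the result of any actual evaluation.

```python
# Sketch only: toy weighted scoring of storage options against the slide's
# criteria. Every number here is a placeholder to be replaced by real findings.
CRITERIA_WEIGHTS = {
    "transition effort": 2.0,   # site-admin view weighted highly
    "maintenance effort": 2.0,  # site-admin view weighted highly
    "stability": 1.5,
    "functionality": 1.0,
    "performance": 1.0,
}

# Placeholder scores on a 1 (poor) to 5 (good) scale.
candidates = {
    "DPM (stay / join collaboration)": {
        "transition effort": 5, "maintenance effort": 3,
        "stability": 4, "functionality": 3, "performance": 3,
    },
    "Alternative (e.g. dCache or StoRM/Lustre)": {
        "transition effort": 2, "maintenance effort": 3,
        "stability": 4, "functionality": 4, "performance": 4,
    },
}


def weighted_score(scores):
    """Sum each criterion's score multiplied by its weight."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())


for name, scores in sorted(candidates.items(),
                           key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.1f}")
```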
