
LHCb Planning



Presentation Transcript


  1. LHCb Planning Pete Clarke (Uni. Edinburgh) Stefan Roiser (CERN, IT/ES)

  2. SHORT-TERM PLANNING

  3. Operations
  • Planning until summer
  • Incremental stripping to start in April
  • Will be limited by the performance of the tape systems
  • Reminder of the bandwidth needed for tape recall (MB/s)
  • The operation will last 8 weeks
  • Next incremental stripping planned for fall ’13
  • Otherwise mainly Monte Carlo and user activities
  • CERN CASTOR-to-EOS migration close to being finished
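The bandwidth figure is essentially the volume to be recalled divided by the length of the campaign. A minimal sketch of that arithmetic, with placeholder data volumes rather than the numbers from the slide's table:

```python
# Rough estimate of the tape-recall bandwidth needed for an incremental
# stripping campaign: volume to recall divided by the campaign duration.
# All numbers below are illustrative placeholders, not LHCb's real figures.

CAMPAIGN_WEEKS = 8
SECONDS = CAMPAIGN_WEEKS * 7 * 24 * 3600

# Hypothetical volume of data to be recalled from tape per site (in TB).
volume_tb = {
    "CERN": 500,
    "GRIDKA": 300,
    "RAL": 300,
}

for site, tb in volume_tb.items():
    mb = tb * 1e6          # 1 TB = 1e6 MB (decimal units)
    rate = mb / SECONDS    # sustained MB/s over the whole campaign
    print(f"{site}: ~{rate:.0f} MB/s sustained for {CAMPAIGN_WEEKS} weeks")
```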

  4. MID-TERM PLANNING (preview of currently ongoing discussions)

  5. CVMFS deployment
  • LHCb sticks to the target deployment date of 30 April 2013
  • No more software updates to the “old” shared software areas after that date
  • Usage of the dedicated mount point for our “conditions DB” is currently under discussion
  • To be used in production after LS1
  • Structuring of the online conditions is also under discussion and will have an impact on usage
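As an illustration of what the move to CVMFS means on a worker node, here is a minimal check that the repositories are mounted and readable; the conditions-DB repository name below is an assumption for illustration, since the dedicated mount point is still under discussion:

```python
import os

# Minimal worker-node sanity check that the CVMFS repositories LHCb relies on
# are mounted and populated. "lhcb.cern.ch" is the software repository; the
# conditions-DB repository name is a hypothetical placeholder.
REPOS = [
    "/cvmfs/lhcb.cern.ch",         # software releases (replace the shared software areas)
    "/cvmfs/lhcb-conddb.cern.ch",  # hypothetical dedicated conditions-DB mount point
]

def cvmfs_ok(path):
    # autofs mounts CVMFS lazily, so listing the directory both triggers
    # the mount and verifies that the repository is readable.
    try:
        return len(os.listdir(path)) > 0
    except OSError:
        return False

for repo in REPOS:
    print(repo, "OK" if cvmfs_ok(repo) else "NOT AVAILABLE")
```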

  6. Tighter integration with T2 sites
  • Discussions are ongoing on how to integrate some T2 sites into more workflows
  • Minimum requirements will be published
  • E.g. X TB of disk space, Y worker nodes, etc.
  • Those sites will then also be able to run e.g. analysis jobs from local disk storage elements
  • Better monitoring and “performance measurements” of those sites will be needed
  • Publish LHCb measurements into IT monitoring (SUM, Dashboard)
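A rough sketch of how published minimum requirements could be applied to select T2 sites for extra workflows; the thresholds stand in for the unspecified X and Y, and the site records are made-up placeholders:

```python
# Sketch of selecting T2 sites for extra workflows (e.g. analysis from local
# disk SEs) against published minimum requirements. Thresholds and site
# records are illustrative placeholders, not agreed numbers.

MIN_DISK_TB = 100       # "X TB of disk space": placeholder value
MIN_WORKER_NODES = 200  # "Y number of WNs":    placeholder value

candidate_t2_sites = [
    {"name": "T2-A", "disk_tb": 250, "worker_nodes": 400},
    {"name": "T2-B", "disk_tb": 60,  "worker_nodes": 500},
    {"name": "T2-C", "disk_tb": 120, "worker_nodes": 150},
]

eligible = [
    s for s in candidate_t2_sites
    if s["disk_tb"] >= MIN_DISK_TB and s["worker_nodes"] >= MIN_WORKER_NODES
]
print("T2 sites eligible for analysis workflows:", [s["name"] for s in eligible])
```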

  7. FTS3 integration and deployment
  • Discussions are ongoing about the features needed by the experiment and their implementation
  • E.g. bring-online, number of retries, …
  • A test instance with all needed functionality is close to deployment
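For illustration, a hedged sketch of submitting a transfer with bring-online and retry options through the FTS3 Python REST bindings; the endpoint and file URLs are placeholders, and the option names should be verified against the deployed FTS3 version, since the feature list is still being discussed:

```python
# Sketch of an FTS3 submission exercising the bring-online and retry options.
# Endpoint, URLs and option values are placeholders.
import fts3.rest.client.easy as fts3

context = fts3.Context("https://fts3-test.example.cern.ch:8446")  # hypothetical test instance

transfer = fts3.new_transfer(
    "srm://source-se.example.org/lhcb/data/file.dst",
    "srm://dest-se.example.org/lhcb/data/file.dst",
)
job = fts3.new_job(
    [transfer],
    bring_online=3600,  # seconds allowed for staging the source file from tape
    retry=3,            # number of retries on transfer failure
)
job_id = fts3.submit(context, job)
print("Submitted FTS3 job", job_id)
```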

  8. Federated storage
  • Federated storage usage will be implemented
  • The decision on the technology (xroot, http) has not yet been taken, but it shall be only one of them
  • The idea is to use fallback onto other storage elements only as an exception
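The fallback-as-exception idea can be sketched as follows, using xroot-style URLs purely as an example (http being the other candidate); the storage element, redirector and the `open_func` callable are all illustrative placeholders:

```python
# Sketch of the intended access pattern: always try the replica on the local
# storage element first and fall back to the federation only if that fails.
# All URLs are illustrative placeholders.

LOCAL_SE = "root://local-se.example.org//lhcb/data/"
FEDERATION_REDIRECTOR = "root://federation.example.cern.ch//lhcb/data/"  # hypothetical

def open_with_fallback(lfn, open_func):
    """Try the local replica first; use the federation only on failure."""
    for base in (LOCAL_SE, FEDERATION_REDIRECTOR):
        try:
            return open_func(base + lfn)
        except IOError as err:
            print(f"failed to open {base + lfn}: {err}")
    raise IOError(f"no accessible replica found for {lfn}")
```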

  9. WLCG Information System
  • Very good abstraction layer on top of the underlying information systems
  • Can replace several queries currently implemented within DIRAC, e.g. BDII queries for CE discovery, …
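For context, this is the kind of raw BDII (LDAP) query for CE discovery that a common information-system layer could hide; the BDII host is a placeholder, and the Glue 1.3 attribute names are the commonly used ones but should be checked against the schema actually in use:

```python
# Sketch of a direct BDII query for compute-element discovery, of the kind
# DIRAC implements today and that the WLCG Information System could replace.
from ldap3 import Server, Connection, ALL

server = Server("ldap://lcg-bdii.example.org:2170", get_info=ALL)  # hypothetical top-level BDII
conn = Connection(server, auto_bind=True)  # BDII allows anonymous binds

conn.search(
    search_base="o=grid",
    search_filter="(objectClass=GlueCE)",
    attributes=["GlueCEUniqueID", "GlueCEImplementationName"],
)
for entry in conn.entries:
    print(entry.GlueCEUniqueID, entry.GlueCEImplementationName)
```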

  10. Monitoring
  • LHCb will feed its monitoring information into the IT-provided infrastructure (e.g. SAM/SUM)
  • Better monitoring and ranking of T2 sites will be needed
  • Thresholds to be introduced
  • Better information to be provided to sites about LHCb’s view of them
  • E.g. “why is my site currently not used?”
  • Will also be provided through the DIRAC web portal
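A minimal sketch of threshold-based ranking from job success rates, of the kind that could back the “why is my site currently not used” answer; the threshold and the per-site numbers are illustrative only:

```python
# Sketch of ranking T2 sites by job efficiency against a usage threshold.
# Threshold and per-site counts are made-up placeholder numbers.

SUCCESS_THRESHOLD = 0.90  # hypothetical cut below which a site is not used

job_stats = {             # site -> (done jobs, failed jobs)
    "T2-A": (950, 50),
    "T2-B": (700, 300),
    "T2-C": (980, 20),
}

ranking = []
for site, (done, failed) in job_stats.items():
    efficiency = done / (done + failed)
    ranking.append((efficiency, site))

for efficiency, site in sorted(ranking, reverse=True):
    if efficiency >= SUCCESS_THRESHOLD:
        status = "used"
    else:
        status = f"not used (efficiency below {SUCCESS_THRESHOLD:.0%})"
    print(f"{site}: {efficiency:.1%} -> {status}")
```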

  11. Other efforts to keep an eye on
  • perfSONAR
  • Will be helpful for network monitoring, especially in view of T2 integration into more workflows
  • SL6 deployment
  • LHCb software is fairly well decoupled from the WN installation, so no major problems are foreseen
  • First full SLC6-based software stack to be released soon
  • glExec
  • Is being tested within the LHCb grid software
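As a small illustration of the decoupling from the WN installation, a sketch that picks a platform string from the node's OS release; the tags follow the usual LHCb naming pattern but are examples here, not an authoritative list of supported platforms:

```python
import re

# Sketch: choose an LHCb platform string from the worker node's OS release,
# illustrating why the software stack is largely decoupled from the WN
# installation. The platform tags are illustrative examples.

def os_major_version(release_file="/etc/redhat-release"):
    # e.g. "Scientific Linux CERN SLC release 6.4 (Carbon)" -> "6"
    with open(release_file) as f:
        match = re.search(r"release (\d+)", f.read())
    return match.group(1) if match else None

def guess_platform():
    if os_major_version() == "6":
        return "x86_64-slc6-gcc46-opt"  # first full SLC6-based stack
    return "x86_64-slc5-gcc46-opt"      # fall back to the SLC5 build

if __name__ == "__main__":
    print("Selected platform:", guess_platform())
```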

  12. Internal Reviews
  • LHCb is conducting two internal reviews:
  • Of the fitness for purpose of the distributed computing system (based on DIRAC)
  • Of the Computing Model itself
  • Both are due to report around mid-2013

  13. Conclusions
  • The major trends are identified
  • Federated storage, T2 integration, more monitoring
  • Final planning and technical decisions will be available after the closing of the currently ongoing reviews of DIRAC and the Computing Model
