Storage at RAL Service Challenge Meeting 27 Jan 2005

Presentation Transcript


  1. Storage at RAL Service Challenge Meeting 27 Jan 2005

  2. What was GridPP1? • A team that built a working prototype grid of significant scale: >2,000 (9,000) CPUs, >1,000 (5,000) TB of available storage, >1,000 (6,000) simultaneous jobs • A complex project in which 88% of the milestones were completed and all metrics were within specification

  3. GridPP Deployment Status (9/1/05) • GridPP deployment is part of LCG (currently the largest Grid in the world) • The future Grid in the UK is dependent upon LCG releases • Three Grids on a global scale in HEP (similar functionality), by sites and CPUs: LCG (GridPP) 90 (16) sites, 9,000 (2,029) CPUs; Grid3 [USA] 29 sites, 2,800 CPUs; NorduGrid 30 sites, 3,200 CPUs

  4. UK Tier-2 Centres • ScotGrid: Durham, Edinburgh, Glasgow • NorthGrid: Daresbury, Lancaster, Liverpool, Manchester, Sheffield • SouthGrid: Birmingham, Bristol, Cambridge, Oxford, RAL PPD, Warwick • LondonGrid: Brunel, Imperial, QMUL, RHUL, UCL

  5. Multiple Experiments • ATLAS • LHCb • CMS • SAMGrid (Fermilab) • BaBar (SLAC) • QCDGrid • PhenoGrid

  6. Middleware Development • Network Monitoring • Configuration Management • Grid Data Management • Storage Interfaces • Information Services • Security

  7. Storage Group • GridPP Storage Group • Development and support • Data and Storage • RAL, Edinburgh, Glasgow

  8. Overall Goals – GridPP2 • Provide SRM interfaces to: • The Atlas Petabyte Storage facility (ADS) at RAL • Disk (for Tier 1 and 2 in the UK) • Disk pools (for Tier 1 and 2 in the UK) • Package and support

  9. GridPP in the UK • Tier 1: RAL • Tier 2: ScotGrid, NorthGrid, SouthGrid, London • Each T2 consists of several sites • Support tape at T1 • Disks and disk pools at T1 and T2

  10. Options • RAL-SRM – an SRM interface to ADS, developed from the EDG-SE • dCache – from DESY/FNAL, with an SRM interface • DRM – from LBNL • DPM – from EGEE SA1 Deployment

  11. (Short Term) Timeline • Provide a release of SRM to disk and disk pool by end of January 2005 • Was planned to coincide with the EGEE gLite “release” • Was planned to match the path toward the full gLite release • Now focusing more on production…

  12. (Short Term) Strategy • Focus on dCache • Andrew reported on Tier 1 work • SRM to ADS: Storage Element • SRM to disk and SRM to disk pool: dCache + dCache-SRM (dCache has seen more testing)

  13. Longer Term Strategy • Possibly a dual solution: • SRM to ADS: Storage Element • SRM to disk and SRM to disk pool: dCache + dCache-SRM
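
  To make the dual solution concrete, here is a minimal sketch (not from the talk) of routing SRM-fronted storage requests by storage class; the class names and backend labels are illustrative assumptions based only on the mapping above.

    # Illustrative sketch only: route each storage class discussed in the slides
    # to the backend that would front it with SRM under the proposed dual solution.
    # The keys "ads", "disk" and "disk_pool" and the backend labels are assumptions
    # made for this example, not real GridPP configuration.

    BACKENDS = {
        "ads": "Storage Element (SRM to the RAL tape store)",
        "disk": "dCache + dCache-SRM",
        "disk_pool": "dCache + dCache-SRM",
    }

    def backend_for(storage_class: str) -> str:
        """Return the SRM backend that serves the given storage class."""
        try:
            return BACKENDS[storage_class]
        except KeyError:
            raise ValueError(f"unknown storage class: {storage_class!r}")

    if __name__ == "__main__":
        for cls in ("ads", "disk", "disk_pool"):
            print(f"{cls:10s} -> {backend_for(cls)}")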

  14. Optimising SE • Now: write via SE disk (GridFTP into the SE, then VTP from SE disk to the ADS) • Pipe via SE (GridFTP in, VTP out, bypassing SE disk) • Directly to the tapestore (??? ???) – but this needs a data protocol supported by both ends
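
  As an illustration of the "pipe via SE" option, the sketch below (an assumption, not code from the talk) relays an incoming byte stream straight to an outgoing connection without staging it on SE disk; plain TCP sockets stand in for the GridFTP and VTP transfers, and the host name, port and buffer size are hypothetical.

    # Minimal sketch of the "pipe via SE" idea: forward data arriving at the SE
    # straight to the tape-store mover without writing it to SE disk first.
    # Plain sockets stand in for GridFTP (inbound) and VTP (outbound); the
    # port, endpoint and chunk size below are hypothetical.

    import socket

    INBOUND_PORT = 2811                        # stand-in for the GridFTP data channel
    TAPESTORE = ("ads-mover.example", 4000)    # stand-in for the VTP endpoint
    CHUNK = 1 << 20                            # 1 MiB relay buffer

    def relay_one_transfer() -> int:
        """Accept one inbound connection and pipe it to the tape-store mover."""
        listener = socket.socket()
        listener.bind(("", INBOUND_PORT))
        listener.listen(1)
        inbound, _ = listener.accept()
        outbound = socket.create_connection(TAPESTORE)
        total = 0
        try:
            while True:
                chunk = inbound.recv(CHUNK)
                if not chunk:                  # sender finished
                    break
                outbound.sendall(chunk)        # forward without touching SE disk
                total += len(chunk)
        finally:
            inbound.close()
            outbound.close()
            listener.close()
        return total

    if __name__ == "__main__":
        print(f"relayed {relay_one_transfer()} bytes")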

  15. Longer Term Strategy • Actively interworking with other SRMs • Cross testing, development • SRM Collaboration • GSM within GGF

  16. Acceptance tests • SRM tests – SRM interface must work with: • srmcp (the dCache SRM client) • GFAL • gLite I/O • Disk pool test – must work with • dccp (dCache specific) • plus SRM interface on top
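
  A rough sketch of how these acceptance checks could be scripted is shown below; it exercises only the command-line clients (srmcp and dccp), the endpoint, paths and URL forms are hypothetical placeholders, and GFAL and gLite I/O would need separate library-level tests.

    # Rough acceptance-test sketch: copy a small file through the SRM interface
    # with srmcp and read it from the disk pool with dccp, then compare checksums.
    # The SRM endpoint, pool path and URL forms are hypothetical placeholders;
    # GFAL and gLite I/O are not covered here.

    import hashlib
    import subprocess
    from pathlib import Path

    SRM_URL = "srm://dcache.example:8443/pnfs/example/data/test.dat"   # hypothetical
    POOL_PATH = "/pnfs/example/data/test.dat"                          # hypothetical

    def md5(path: str) -> str:
        return hashlib.md5(Path(path).read_bytes()).hexdigest()

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    def srm_round_trip(local_in: str, local_out: str) -> bool:
        """Copy a file to the SRM endpoint and back, then compare checksums."""
        Path(local_in).write_bytes(b"acceptance test payload\n")
        run(["srmcp", f"file:///{local_in}", SRM_URL])
        run(["srmcp", SRM_URL, f"file:///{local_out}"])
        return md5(local_in) == md5(local_out)

    def dccp_read(local_out: str) -> bool:
        """Read the same file directly from the pool with the dCache client."""
        run(["dccp", POOL_PATH, local_out])
        return Path(local_out).stat().st_size > 0

    if __name__ == "__main__":
        ok = srm_round_trip("/tmp/acceptance.in", "/tmp/acceptance.out")
        ok = dccp_read("/tmp/acceptance.dccp") and ok
        print("acceptance test", "PASSED" if ok else "FAILED")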

  17. Other Deployments • Manchester

  18. Existing Edinburgh System [diagram: LCFG Server (glenellen), SE Server (glenmorangie), CE Server (glenlivet), Disk Server (glenkinchie) and the WNs, linked by an NFS mount plus direct and indirect network connections] • Glenkinchie – 24 TB RAID using 24 * 1 TB partitions, limited to the private network • Glenmorangie – dual PIII 1 GHz, 2 GB RAM, /storage NFS-mounted from glenkinchie partition(s)

  19. Proposed Edinburgh System [diagram: Classic SE Server (glenmorangie), dCache SE Server (se) and Disk Server / dCache Pool Node (glenkinchie), linked by an NFS mount] • Glenkinchie – patched and connected to the Internet, with dCache pool software and GridFTP installed • Classic SE supported until the existing data is migrated • Partitions classed as individual pools, giving a maximum of 24
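
  As a back-of-the-envelope illustration of treating each 1 TB partition as its own pool, the sketch below scans a set of mount points and lists the candidate pools; the /storage/partNN mount-point pattern and the pool names are assumptions, and real pools would still have to be declared through dCache's own configuration.

    # Illustrative only: enumerate the 24 x 1 TB partitions on glenkinchie as
    # candidate dCache pools, one pool per partition (24 pools maximum).
    # The /storage/partNN mount-point pattern and pool names are assumptions;
    # this sketch does not generate any actual dCache configuration.

    import os
    import shutil

    MOUNT_PATTERN = "/storage/part{:02d}"   # hypothetical mount points
    MAX_POOLS = 24                          # 24 * 1 TB partitions

    def candidate_pools():
        """Yield (pool_name, path, free_bytes) for each mounted partition."""
        for i in range(1, MAX_POOLS + 1):
            path = MOUNT_PATTERN.format(i)
            if os.path.ismount(path):
                free = shutil.disk_usage(path).free
                yield f"glenkinchie_pool_{i:02d}", path, free

    if __name__ == "__main__":
        for name, path, free in candidate_pools():
            print(f"{name}: {path} ({free / 1e12:.2f} TB free)")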

  20. Summary • The UK Storage Group, working with GridPP, is supporting SRM solutions in the UK • For Tier 1 and Tier 2, and anyone else • Most other communities are being steered towards SRB as part of a data management framework
