
UK Computing

Glenn Patrick, Rutherford Appleton Laboratory. LHCb Software Week.

Presentation Transcript


  1. UK Computing. Glenn Patrick, Rutherford Appleton Laboratory. LHCb Software Week.

  2. RAL Tier-1

  3. RAL Tier-1
  • 2002: 312 cpus - 4 racks holding 156 dual 1.4GHz Pentium III systems.
  • March 2003: extra 160 cpus - 80 dual-processor 2.66GHz P4 Xeon systems.
  • Dec 2003: extra 400-500 cpus - 200-250 dual systems.
  • Operating system now RedHat 7.3; batch system is PBS.
  • CSF legacy equipment: 250 cpus (450MHz - 1GHz).
  • Total ~1000 cpus.
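A quick tally of those cpu counts, as a minimal Python sketch; taking the midpoint of the 400-500 Dec 2003 range and counting the legacy CSF kit are assumptions, since the slide itself only quotes "~1000":

# Tally of the RAL Tier-1 cpu counts quoted above. The Dec 2003 delivery is
# taken at the midpoint of the quoted 400-500 range (an assumption), and how
# the legacy CSF kit enters the slide's "~1000" total is not stated.
farm = {
    "2002 dual 1.4GHz PIII": 312,
    "Mar 2003 dual 2.66GHz P4 Xeon": 160,
    "Dec 2003 dual systems (midpoint of 400-500)": 450,
    "CSF legacy (450MHz - 1GHz)": 250,
}
for name, cpus in farm.items():
    print(f"{name:45s} {cpus:4d} cpus")
print(f"{'Total':45s} {sum(farm.values()):4d} cpus")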

  4. RAL Tier-1 Mass Storage
  • 2002: 40TB disk cluster - 26 disk servers, each with 2 x 0.8TB filesystems.
  • March 2003: extra 40TB - 11 new disk servers, each with 2 x 1.8TB filesystems.
  • Dec 2003: extra ~70TB.
  • Total ~150TB.
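The totals follow from the per-server figures; a back-of-envelope check in Python, assuming the quoted filesystem sizes are the usable capacities:

# Back-of-envelope check of the disk capacities above (TB, as on the slide).
disk_2002 = 26 * 2 * 0.8   # 26 servers x 2 filesystems x 0.8 TB -> ~41.6 TB ("40TB")
disk_2003 = 11 * 2 * 1.8   # 11 servers x 2 filesystems x 1.8 TB -> ~39.6 TB ("40TB")
disk_dec = 70.0            # Dec 2003 figure quoted directly as ~70 TB
print(f"Total ~{disk_2002 + disk_2003 + disk_dec:.0f} TB (slide quotes ~150 TB)")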

  5. RAL DataStore
  • STK 9310 (Powderhorn) tape robot.
  • June 2003: updated with 8 x STK 9940B drives.
  • Transfer speed 30MB/sec per drive; tape capacity 200GB.
  • 5,500 slots = 1PB potential capacity.
  • Current capacity limited by number of tapes.
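The 1PB potential capacity and the aggregate drive bandwidth follow directly from those numbers; a small sketch for illustration:

# Capacity and aggregate bandwidth implied by the slide's figures.
slots, cartridge_gb = 5500, 200   # 5,500 slots of 200 GB cartridges
drives, mb_per_s = 8, 30          # 8 x 9940B drives at 30 MB/s each
print(f"Potential capacity : {slots * cartridge_gb / 1e6:.1f} PB")  # ~1.1 PB ("1PB")
print(f"Aggregate bandwidth: {drives * mb_per_s} MB/s")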

  6. [Diagram: STK 9310 "Powder Horn" silo with 8 x 9940B drives, connected through two fibre-channel switches to four RS6000 servers (fsc0/fsc1 adapters, 1.2TB staging disk each) on the Gbit network.]

  7. GRID at RAL
  LCG
  • Currently 5 worker nodes in testbed.
  • 15.8.2003 LCG on-line; 10.9.2003 upgraded to LCG1-1_0_0; 22.9.2003 upgraded to LCG1-1_0_1.
  • Amount of future hardware deployed in LCG depends on experiments and GRIDPP.
  EDG
  • EDG 2.1 deployed on development testbed.
  • EDG 2.0 on main production testbed.
  • EDG 1.4 gatekeeper into main production farm.

  8. [Chart comparing experiments: LHCb, ATLAS, BaBar, CMS.]

  9. Factor of 6.7. Still hope for ~10% share from 3 largest centres?

  10. Tier-1 Resources for LHCb
  Requested for DC04 (April - June), from Marco's numbers (assuming same share as DC03):
  • CPU requirement: 366M SI2k*hours.
  • 6TB of disk for "permanent copy" of all DSTs (may reduce to 1TB if pre-selection is used), to be used for analysis.
  • Existing disk servers (3.2TB) used to store MC production from RAL and other UK sites before transfer to tape/CERN.
  • Mass storage of 7TB to store SIM+DIGI data from all UK sites.
  But actual resources will depend on competition from other experiments.
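A hedged sketch of what 366M SI2k*hours means as a farm share over April-June; the ~1 kSI2k-per-cpu rating and the ~1000-cpu farm size are assumptions, not figures from this slide:

# Hedged conversion of the 366M SI2k*hours request into an average farm share.
# The per-cpu rating and farm size below are assumptions; only the 366M
# SI2k*hours figure comes from the slide.
requirement = 366e6          # SI2k*hours requested for DC04
period_hours = 91 * 24       # April-June, roughly 91 days
si2k_per_cpu = 1000.0        # assumed average rating of one farm cpu
farm_cpus = 1000             # "~1000 cpus" from the RAL Tier-1 slide
cpus_needed = requirement / (period_hours * si2k_per_cpu)
print(f"Average cpus needed over the period: {cpus_needed:.0f}")
print(f"Share of a ~{farm_cpus}-cpu farm   : {cpus_needed / farm_cpus:.0%}")

With those assumed ratings the request works out to roughly a sixth to a fifth of the farm, consistent with the "~20% of farm for 3 months" quoted on the next slide.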

  11. CPU Requirements (KSI2K) [chart; "x3" annotation, LHCb highlighted]. LHCb need ~20% of farm for 3 months.

  12. UK Tier-2 Centres
  • NorthGrid: Daresbury, Lancaster, Liverpool, Manchester, Sheffield
  • SouthGrid: Birmingham, Bristol, Cambridge, Oxford, RAL PPD
  • ScotGrid: Durham, Edinburgh, Glasgow
  • LondonGrid: Brunel, Imperial, QMUL, RHUL, UCL

  13. [Table: Existing Hardware (April 2003) vs Estimated Hardware (Sept 2004).]

  14. Liverpool
  • New MAP2 facility now installed: 940 3GHz/1GB/128GB P4 Dell nodes (~1.1M SPECint2k).
  • SCALIManage installed, RH9.
  • 20-cpu CDF facility now installed, Fermi RH Linux and 5.9TB disk.
  • MAP memory upgrade (270 nodes).
  • EDG 2.0 being installed.
  10% of DC04 would take 17 days on all processors.
  Initial LHCb scheduling proposal: 50% of farm for ~1 month.
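For illustration, the ~1.1M SPECint2k figure is consistent with an assumed rating of roughly 1.2 kSI2k per 3GHz P4 node; the per-node rating is not on the slide, only the node count is:

# Rough cross-check of the ~1.1M SPECint2k MAP2 figure. The per-node rating
# below is an assumption; only the 940-node count comes from the slide.
nodes = 940
si2k_per_node = 1200   # assumed rating of one 3 GHz P4 node
print(f"Aggregate ~{nodes * si2k_per_node / 1e6:.2f}M SPECint2k")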

  15. ScotGrid
  Phase 1 complete. ScotGRID processing nodes at Glasgow (128 cpu):
  • 59 IBM X Series 330 dual 1GHz Pentium III with 2GB memory
  • 2 IBM X Series 340 dual 1GHz Pentium III with 2GB memory
  • 3 IBM X Series 340 dual 1GHz Pentium III with 2GB memory
  • 1TB disk
  • LTO/Ultrium tape library
  • Cisco ethernet switches
  ScotGRID storage at Edinburgh (5TB):
  • IBM X Series 370 PIII Xeon
  • 70 x 73.4GB IBM FC hot-swap HDD
  Phase 2 - now commissioning:
  • Upgrade database server in Edinburgh
  • 16-20TB disk storage in Edinburgh
  • 5TB disk storage in Glasgow (relocated from Edinburgh)
  • Edge servers for Edinburgh
  • New kit for Glasgow - CDF and eDIKT
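For completeness, the 128-cpu Glasgow figure is just the dual-cpu server counts summed:

# The "128 cpu" Glasgow figure from the dual-cpu server counts above.
servers = 59 + 2 + 3   # IBM x330/x340 dual Pentium III systems
print(f"{servers} servers -> {2 * servers} cpus")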

  16. Imperial
  Viking at London e-Science Centre: upgrade to ~500 cpu cluster (33% HEP + bioinformatics + ...). Ready to join LCG1.
  Ulrik - "Factor of 5 seems realistic" running across ~3 months.
  Note: other potential resources coming online in London Tier 2...
  • Royal Holloway ~100 cpu
  • UCL ~100 cpu
  • Brunel (BITLab) 64 dual Xeon nodes + 128 more nodes
  Timescale? LHCb use?

  17. Manpower
  • Currently, little dedicated manpower; rely on effort shared with other tasks.
  • Gennady has been the main technical link for the Tier-1.
  • Easy installation/maintenance of production & analysis software and tools.
