
UKI-SouthGrid Overview GridPP27


Presentation Transcript


  1. UKI-SouthGrid Overview GridPP27 Pete Gronbech, SouthGrid Technical Coordinator, CERN, September 2011

  2. UK Tier 2 reported CPU – Historical View to present SouthGrid September 2011

  3. SouthGrid Sites Accounting as reported by APEL SouthGrid September 2011

  4. Resources vs GridPP3 h/w generated MoU for 2011/12 SouthGrid September 2011

  5. SouthGrid September 2011

  6. JET
  • Since the last meeting the site has been less well utilised, partly due to downtime associated with upgrades.
  • Essentially a pure CPU site
  • 1772 HEPSPEC06
  • 10.5TB of storage
  • All service nodes have been upgraded to gLite 3.2, with CREAM CEs. The SE is now 10.5TB.
  • The aim is to enable the site for ATLAS production work, but the ATLAS software will be easier to manage if we set up CVMFS.
  • Oxford will help JET do this (a minimal client configuration sketch follows below).
  SouthGrid September 2011
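As a rough illustration only, this is the kind of minimal CVMFS client configuration JET would need for the ATLAS repositories, assuming the standard cvmfs client packages and a local squid proxy; the proxy hostname and cache quota here are placeholders, not JET's actual values:

    # /etc/cvmfs/default.local -- sketch; hostname and quota are placeholders
    CVMFS_REPOSITORIES=atlas.cern.ch,atlas-condb.cern.ch
    CVMFS_HTTP_PROXY="http://squid.jet.example.ac.uk:3128"
    CVMFS_QUOTA_LIMIT=20000          # local cache quota in MB

    # After editing, reload the configuration and check the repositories mount
    cvmfs_config reload
    cvmfs_config probe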

  7. Birmingham Tier 2 Site
  • Not much has changed since the last meeting! Our hardware is still:
  • 24 8-core machines (192 job slots) @ 9.61 HEP-SPEC06 (local)
  • 48 4-core machines (192 job slots) @ 7.93 HEP-SPEC06 (shared)
  • 177.35TB of DPM storage across 4 pool nodes
  • As for service nodes, we have: 4 CEs (2 CREAM and 2 LCG) serving the two clusters
  • The CREAM CE for the local cluster also runs Torque
  • 2 ALICE VO boxes, 1 for each cluster
  • An ARGUS server for the local cluster
  • The usual BDII, APEL and DPM MySQL server nodes
  • All of these are running gLite 3.2 on SL5, with the exception of the LCG CEs
  • The main change from last time is that we have deployed glexec on the local cluster; still waiting on a tarball install for the shared cluster (a quick verification sketch follows below)
  • Have just taken delivery of 2 new 8-core systems to replace the 4 quad-core service machines
  • Our future plans include: decommission the LCG CEs, consolidate service nodes onto the new machines, split the Torque server and CREAM CE, deploy CVMFS, and turn the older service machines into workers (maybe!)
  • Hopefully most of this can be done in one go in the next month or so!
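A hedged sketch of the usual smoke test for a glexec deployment, run from a pilot-style account on a worker node. The proxy path is a placeholder, and the glexec binary location and environment variable names should be checked against the site's own glexec documentation for the installed release:

    # Run as the pilot account on a worker node; paths are placeholders
    export GLEXEC_CLIENT_CERT=/tmp/payload_proxy.pem
    export GLEXEC_SOURCE_PROXY=/tmp/payload_proxy.pem
    /usr/sbin/glexec /usr/bin/id
    # Success: the id output shows the mapped pool account, not the pilot account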

  8. Bristol
  • Status
  • StoRM SE with GPFS, 102TB "almost completely" full of CMS data (a basic SRM smoke-test sketch follows below)
  • Currently running StoRM 1.3 on SL4; the plan is to upgrade as soon as there is a stable new release, but so far 1.6 and 1.7 have not been.
  • Bristol has two clusters, both controlled by Physics. Neither of the university HPC clusters is currently being used.
  • A new Dell VM hosting node has been bought to run service VMs on, with help from Oxford.
  • Recent changes
  • New CREAM CEs front each cluster, one gLite 3.2 and one using the new UMD release (installed by Kashif).
  • glexec and ARGUS have not yet been installed.
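For illustration, a basic SRM smoke test against a StoRM endpoint can be run with the standard lcg-utils clients; the hostname, port and storage path below are placeholders, not Bristol's real values, and a valid VOMS proxy is assumed:

    # e.g. voms-proxy-init --voms cms has already been run
    SE='srm://storm.bristol.example.ac.uk:8444/srm/managerv2?SFN=/cms/test'
    # List the area, copy a small file in, then clean up again
    lcg-ls -b -D srmv2 "$SE"
    lcg-cp -b -D srmv2 file:///tmp/smoke.txt "$SE/smoke.txt"
    lcg-del -b -D srmv2 "$SE/smoke.txt"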

  9. Cambridge
  • Status
  • CPU: 246 job slots, 2445 HS06
  • Storage: 201TB [si] online, plus 38TB used exclusively by Camont
  • Most services are on gLite 3.2; the exceptions are the DPM head node and the LCG CE for the Condor cluster.
  • DPM v1.8.0 on one of the DPM disk servers, SL5
  • XFS file system for the storage
  • Batch systems: Condor 7.4.4, Torque 2.3.13
  • Supported VOs: mainly ATLAS, LHCb and Camont
  • Recent changes
  • CREAM CE with PBS installed
  • Also working on CREAM-Condor in parallel
  • APEL issues
  • Problems with the existing APEL implementation for Condor (a manual cross-check sketch follows below)
  SouthGrid September 2011
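Not the APEL parser itself, just a hedged sketch of pulling per-job usage out of condor_history so that the figures published to APEL can be cross-checked by hand; the attributes used are standard Condor ClassAds:

    # Dump owner, wall-clock and CPU usage for completed jobs from the Condor history
    condor_history -constraint 'JobStatus == 4' \
      -format "%s " Owner \
      -format "%s " RemoteWallClockTime \
      -format "%s\n" RemoteUserCpu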

  10. RALPP
  • 2056 CPU cores, 19655 HS06
  • 980TB disk
  • We now run purely CREAM CEs: 1 x gLite 3.2 on a VM (soon to be retired), 2 x UMD (though at the time of writing one doesn't seem to be publishing properly; a quick information-system check is sketched below).
  • Lately a lot of problems with CE stability, as per discussions on the various mailing lists.
  • The batch system is still Torque from gLite 3.1, but we will soon bring up an EMI/UMD Torque to replace it (currently installed for testing).
  • The SE is dCache 1.9.5; planning to upgrade to 1.9.12 in the near future.
  • The site has been very busy over recent months.
  SouthGrid September 2011
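A hedged sketch of checking whether a CREAM CE is publishing into the information system, by querying its resource BDII directly (GLUE 1.3 schema, port 2170); the CE hostname is a placeholder:

    # Ask the CE's resource BDII for the GlueCE objects it publishes
    ldapsearch -x -LLL -H ldap://cream01.ralpp.example.ac.uk:2170 \
      -b mds-vo-name=resource,o=grid \
      '(objectClass=GlueCE)' GlueCEUniqueID GlueCEStateStatus GlueCEStateWaitingJobs
    # An empty result (or stale state values) points at the publishing problem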

  11. Oxford
  • Oxford's workload is dominated by ATLAS analysis and production
  • Installed kit
  • The Autumn 2010 upgrade added 256 cores based on dual 8-core AMD Opterons
  • These have dual disks striped with software RAID to improve I/O (a striping sketch follows below).
  • Three new 36-bay disk servers took storage up to 290TB to meet MoU requirements.
  • Recent upgrades (using departmental money)
  • 14 Dell R510 disk servers, faster and in smaller chunks, with 10Gbit networking
  • Some Dell 6100 WNs installed
  • Two 10Gbit network switches and new gigabit switches for the cluster
  • We are in talks with the University networking team with the aim of upgrading our link from the computer centre to 10Gbit. The current plan is to use QoS to allow us to use idle bandwidth depending on usage. The dual 10Gbit campus JANET link is currently running at ~3Gbit/s in and 1Gbit/s out, so there is spare capacity available.
  SouthGrid September 2011
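As an illustration of the dual-disk striping, here is a minimal Linux software RAID (md RAID0) setup for a worker-node scratch area; the device names, filesystem and mount point are placeholders, not the actual Oxford layout:

    # Stripe two data partitions into one RAID0 scratch volume (placeholder devices)
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sda3 /dev/sdb3
    mkfs -t ext4 /dev/md0          # or ext3 on older SL5 nodes
    mkdir -p /scratch
    mount /dev/md0 /scratch
    # Persist the array definition so it assembles at boot
    mdadm --detail --scan >> /etc/mdadm.conf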

  12. Other Oxford Work
  • CMS Tier 3
  • Supported by RALPPD's PhEDEx server
  • Useful for CMS, and for us, keeping the site busy in quiet times
  • However it can block ATLAS jobs, and during the accounting period that is not so desirable
  • ALICE support
  • There is a need to supplement the support given to ALICE by Birmingham.
  • It made sense to keep this in SouthGrid, so Oxford have deployed an ALICE VO box
  • The site is being configured by Kashif in conjunction with ALICE support
  • UK Regional Monitoring
  • Kashif runs the Nagios-based WLCG monitoring on the servers at Oxford
  • These include the Nagios server itself and support nodes for it: SE, MyProxy and WMS/LB
  • The WMS is an addition to help the UK NGS migrate their testing (a minimal test-job sketch follows below).
  • There are very regular software updates for the WLCG Nagios monitoring, ~6 so far this year.
  • Early adopters
  • We take part in the testing of CREAM, ARGUS and torque_utils, and have accepted and provided a report for every new version of CREAM this year.
  • SouthGrid support
  • Providing support for Bristol
  • Landslides support at Oxford and Bristol
  • Helping bring Sussex onto the Grid (though we have been too busy in recent months)
  SouthGrid September 2011
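For illustration, the kind of trivial test job that can be pushed through a WMS to exercise it end to end; the JDL is deliberately minimal and a valid VOMS proxy plus a configured WMS endpoint are assumed:

    # Create a trivial JDL test job
    cat > hello.jdl <<'EOF'
    Executable    = "/bin/hostname";
    StdOutput     = "std.out";
    StdError      = "std.err";
    OutputSandbox = {"std.out", "std.err"};
    EOF
    # Submit with automatic proxy delegation, then poll and fetch the output
    glite-wms-job-submit -a -o jobids.txt hello.jdl
    glite-wms-job-status -i jobids.txt
    glite-wms-job-output -i jobids.txt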

  13. Sussex
  • Sussex has a significant local ATLAS group; their system is designed for the high I/O bandwidth patterns that ATLAS analysis can generate.
  • Up and running as a Tier 3, with the Feynman sub-cluster for Particle Physics and the Apollo sub-cluster used by the rest of the University.
  • Feynman: 8 nodes, each with 2 Intel Xeon X5650 @ 2.67GHz measured at ~15.67 HepSpec06 per core, for a total of 96 cores; 48GB RAM per node. Apollo currently has 38 nodes totalling 464 cores. The plan is to merge the two sub-clusters in the next 6 months.
  • 81TB of Lustre storage shared by both sub-clusters. Everything is fully interconnected with InfiniBand. The cluster is Dell hardware, using three R510 disk servers each with two external disk shelves (each with its own RAID controller).
  • CVMFS is installed and working, and is being used by the ATLAS group at Sussex.
  • In the process of installing and configuring grid services to become a Tier 2 site (UKI-SOUTHGRID-SUSX) for SouthGrid. We have registered the service nodes and obtained grid certificates for them. 4 machines are set up ready for BDII, CREAM CE, APEL and SE.
  • BDII and APEL are done; working on the CE and SE (a configuration sketch follows below). Hoping to be fully up and running within 2 months.
  SouthGrid September 2011
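A hedged sketch of how such service nodes are typically configured with YAIM under gLite 3.2; the site-info.def path is a placeholder and the node-type names below are the usual ones, but they should be checked against the release documentation for the exact versions Sussex deploys:

    # Run on each service node once site-info.def (and vo.d/) is prepared
    /opt/glite/yaim/bin/yaim -c -s /root/siteinfo/site-info.def -n BDII_site       # site BDII
    /opt/glite/yaim/bin/yaim -c -s /root/siteinfo/site-info.def -n glite-APEL      # APEL publisher
    /opt/glite/yaim/bin/yaim -c -s /root/siteinfo/site-info.def -n creamCE -n TORQUE_utils   # CREAM CE
    /opt/glite/yaim/bin/yaim -c -s /root/siteinfo/site-info.def -n SE_dpm_mysql    # DPM head node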

  14. Conclusions
  • SouthGrid sites' utilisation is generally improving, but some sites are small compared with others.
  • Birmingham is supporting ATLAS, ALICE and LHCb.
  • Bristol: need to get the new version of StoRM working if they hope to be a CMS Tier 2 site.
  • Cambridge: only partly using PBS, so APEL still reports low. The Condor part does not report correctly into APEL. Accounting metrics come direct from ATLAS, so this is less critical for that VO.
  • We could enable JET for ATLAS production as they now have enough disk, but ATLAS say they would prefer them to use CVMFS, so we have to help them do that.
  • Oxford has been upgraded and optimised for ATLAS analysis, and is involved in many other areas.
  • RALPPD are at full strength, leading the way.
  • Sussex: need some small effort/support to bring them online.
  SouthGrid September 2011
