
White Rose Grid Infrastructure Overview


Presentation Transcript


  1. White Rose Grid Infrastructure Overview
     Chris Cartledge, Deputy Director, Corporate Information and Computing Services, The University of Sheffield
     C.Cartledge@sheffield.ac.uk, +44 114 222 3008

  2. Contents
     - History
     - Web site
     - Current computation capabilities
     - Planned machines
     - Usage
     - YHMAN
     - Grid capabilities
     - Contacts
     - Training
     - FEC, Futures

  3. White Rose Grid History
     - 2001: SRIF opportunity, joint procurement, led by Leeds: Peter Dew, Joanna Schmidt
     - 3 clusters of Sun SPARC systems running Solaris:
       - Leeds, Maxima: 6800 (20 processors), 4 * V880 (8 processors each)
       - Sheffield, Titania: 10 (later 11) * V880 (8 processors each)
       - York, Pascali: 6800 (20 processors); Fimbrata: V880
     - 1 cluster of 2.2/2.4 GHz Intel Xeon with Myrinet:
       - Leeds, Snowdon: 292 CPUs, Linux

  4. White Rose Grid History (continued)
     - Joint working to enable use across sites, but heterogeneous: a range of systems
     - Each system primarily meets local needs, with up to 25% for users from the other sites
     - Key services in common:
       - Sun Grid Engine to control work within the clusters (see the sketch below)
       - Globus to link the clusters
       - Registration
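
The deck names Sun Grid Engine as the mechanism that controls work within each cluster but does not show a job script. The sketch below is a minimal illustration of that workflow, not the White Rose configuration: it assumes an SGE front end with qsub on the PATH, and the job name and run-time limit are hypothetical.

```python
"""Illustration only: submit a small serial job to Sun Grid Engine via qsub.

Assumes an SGE front end with `qsub` on the PATH; the job name and run-time
limit below are hypothetical, not the actual White Rose Grid configuration.
"""
import subprocess

# A minimal SGE job script: one serial task with a (hypothetical) 1-hour limit.
JOB_SCRIPT = """#!/bin/bash
#$ -N wrg_example
#$ -cwd
#$ -l h_rt=01:00:00
echo "Running on $(hostname)"
"""

def submit(script_text):
    """Pipe the job script to qsub on standard input and return its reply."""
    result = subprocess.run(["qsub"], input=script_text,
                            capture_output=True, text=True, check=True)
    # qsub replies with something like "Your job 12345 ... has been submitted".
    return result.stdout.strip()

if __name__ == "__main__":
    print(submit(JOB_SCRIPT))
```

Registration and the Globus link between sites sit on top of this: jobs are still handed to the local SGE instance at whichever cluster runs them.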

  5. WRG Web Site
     - There is a shared web site: http://www.wrgrid.org.uk/
     - Linked to/from local sites
     - Covers other related projects and resources:
       - e-Science Centre of Excellence
       - Leeds SAN and specialist graphics equipment
       - Sheffield ppGrid node
       - York UKLight work

  6. Current Facilities: Leeds
     - Everest: supplied by Sun/Streamline
     - Dual-core Opteron: power and space efficient
     - 404 CPU cores, 920 GB memory
     - 64-bit Linux (SuSE 9.3) OS
     - Low-latency Myrinet interconnect
     - 7 * 8-way nodes (4 chips with 2 cores each), 32 GB
     - 64 * 4-way nodes (2 chips with 2 cores each), 8 GB

  7. Leeds (continued)
     - SGE, Globus/GSI
     - Intel, GNU, PGI compilers
     - Shared-memory and Myrinet MPI (illustrated below)
     - NAG, FFTW, BLAS, LAPACK, etc. libraries
     - 32- and 64-bit software versions
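
The MPI installations referred to above are the cluster's compiled C/Fortran builds; the short sketch below uses mpi4py, which is an assumption for illustration only (it is not mentioned in the deck), simply to show the message-passing model those libraries provide.

```python
"""Illustration only: the message-passing model behind the clusters' MPI builds.

mpi4py is an assumption for this sketch (the deck refers to the C/Fortran MPI
installations); run with e.g. `mpirun -np 4 python mpi_hello.py`.
"""
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()   # this process's id within the communicator
size = comm.Get_size()   # total number of MPI processes

if rank == 0:
    # Rank 0 collects a short greeting from every other rank.
    for source in range(1, size):
        print(comm.recv(source=source, tag=0))
else:
    comm.send(f"hello from rank {rank} of {size}", dest=0, tag=0)
```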

  8. Maxima transition
     - Maintenance to June 2006, expensive
     - Need to move all home directories to the SAN
     - Users can still use it, but "at risk"
     Snowdon transition
     - Maintenance until June 2007
     - Home directories already on the SAN
     - Users encouraged to move

  9. Sheffield
     - Iceberg: Sun Microsystems/Streamline
     - 160 * 2.4 GHz AMD Opteron (PC technology) processors
     - 64-bit Scientific Linux (Red Hat based)
     - 20 * 4-way nodes, 16 GB, fast Myrinet for parallel/large jobs (see the sketch below)
     - 40 * 2-way nodes, 4 GB, for high-throughput work
     - GNU and Portland Group compilers, NAG
     - Sun Grid Engine (6), MPI, OpenMP, Globus
     - Abaqus, Ansys, Fluent, Maple, Matlab
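
As a companion to the earlier submission sketch, the snippet below shows how a job might be aimed at one of Iceberg's two node classes through an SGE directive: a parallel-environment request for the Myrinet nodes, or a plain serial header for the high-throughput nodes. The parallel environment name and slot count are hypothetical, not the real Iceberg configuration.

```python
"""Illustration only: build SGE directives aimed at Iceberg's two node classes.

The parallel environment name `mpi_myrinet` is a hypothetical example,
not the real Iceberg queue configuration.
"""

def job_header(name, parallel_slots=None):
    """Return an SGE script header: a Myrinet PE request for parallel work,
    or a plain serial header for the high-throughput 2-way nodes."""
    lines = ["#!/bin/bash", f"#$ -N {name}", "#$ -cwd"]
    if parallel_slots:
        # Hypothetical PE name for the 4-way Myrinet nodes.
        lines.append(f"#$ -pe mpi_myrinet {parallel_slots}")
    return "\n".join(lines) + "\n"

if __name__ == "__main__":
    print(job_header("wrg_parallel", parallel_slots=8))  # large/parallel job
    print(job_header("wrg_serial"))                      # throughput job
```

A header built this way would be handed to qsub exactly as in the earlier sketch.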

  10. Also at Sheffield
     - GridPP (Particle Physics Grid)
     - 160 * 2.4 GHz AMD Opteron: 80 * 2-way nodes, 4 GB
     - 32-bit Scientific Linux
     - ppGrid stack
     - 2nd most productive
     - Very successful!

  11. Popular! (Sheffield)
     - Lots of users: 827 (White Rose: 37)
     - Utilisation high:
       - Since installation: 40%
       - Last 3 months: 80%
       - White Rose: 26%

  12. York
     - £205k from SRIF 3:
       - £100k computing systems
       - £50k storage system
       - remainder: ancillary equipment, contingency
     - Shortlist agreed(?) for June
     - Compute: possibly 80-100 cores, Opteron
     - Storage: possibly 10 TB

  13. Other Resources: YHMAN
     - Leased fibre, 2 Gb/s performance
     - Wide area MetroLAN
     - UKLight
     - Archiving
     - Disaster recovery

  14. Grid Resources
     - Queuing: Sun Grid Engine (6)
     - Globus Toolkit 2.4 is installed and working
       - issue over GSI-SSH on the 64-bit OS (ancient GTK)
     - Globus 4 being looked at
     - Storage Resource Broker being worked on
     (a sketch of typical client usage follows below)
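
The deck names the middleware but shows no usage. The sketch below illustrates the kind of calls a user typically makes against a Globus Toolkit 2.x installation: globus-job-run for remote execution over GSI, and gsissh for an interactive login. The gatekeeper host name is hypothetical, and the sketch assumes a grid proxy has already been created with grid-proxy-init.

```python
"""Illustration only: exercising Globus Toolkit 2.x client tools from a script.

The gatekeeper host name is hypothetical; the sketch assumes the GT2 client
tools are on the PATH and a proxy already exists (`grid-proxy-init`).
"""
import subprocess

REMOTE_GATEKEEPER = "gatekeeper.example.wrgrid.org.uk"  # hypothetical host name

def run_remote(command):
    """Run a command on the remote gatekeeper via GRAM (globus-job-run)."""
    result = subprocess.run(["globus-job-run", REMOTE_GATEKEEPER] + command,
                            capture_output=True, text=True, check=True)
    return result.stdout

def interactive_login(host):
    """Open an interactive GSI-authenticated session with gsissh."""
    subprocess.run(["gsissh", host], check=True)

if __name__ == "__main__":
    # Trivial remote checks: which node answered, and which local account
    # the grid identity was mapped to.
    print(run_remote(["/bin/hostname"]))
    print(run_remote(["/usr/bin/id"]))
```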

  15. Training
     - Available across the White Rose universities
     - Sheffield: RTP, 4 units, 5 credits each:
       - High Performance and Grid Computing
       - Programming and Application Development for Computational Grids
       - Techniques for High Performance Computing including Distributed Computing
       - Grid Computing and Application Development

  16. Contacts
     - Leeds: Joanna Schmidt, j.g.schmidt@leeds.ac.uk, +44 (0)113 34 35375
     - Sheffield: Michael Griffiths or Peter Tillotson, m.griffiths@sheffield.ac.uk, p.tillotson@sheffield.ac.uk, +44 (0)114 2221126, +44 (0)114 2223039
     - York: Aaron Turner, aaron@cs.york.ac.uk, +44 (0)190 4567708

  17. Futures
     - FEC will have an impact:
       - Can we maintain 25% use from other sites?
       - How can we fund continuing Grid work?
     - Different funding models are a challenge:
       - Leeds: departmental shares
       - Sheffield: unmetered service
       - York: based in Computer Science
     - Relationship opportunities: NGS, WUN, region, suppliers?

  18. Achievements
     - White Rose Grid: not hardware, services
     - People(!): familiar with working on the Grid
     - Experience of working as a virtual organisation
     - Intellectual property in training
     - Success:
       - Research
       - Engaging with industry
       - Solving user problems
