
Introduction to Grid Computing with High Performance Computing



Presentation Transcript


  1. Introduction to Grid Computing with High Performance Computing Mike Griffiths White Rose Grid e-Science Centre of Excellence

  2. Outline • Introduction • High Performance Grid Computing • e-Science • The Evolving Grid • The Local Compute Node Iceberg • Registration

  3. Objectives • What is grid computing? • How the grid assists with the problem-solving lifecycle • Identify and explain buzzwords • Remove the hype

  4. Problem-solving lifecycle • Problem definition and requirements capture • Model development • Languages (Fortran, C, C++, Java, etc.) • Model-building SDKs • Matlab and clones • Packages (ANSYS, FLUENT, CFX)

  5. Problem-solving lifecycle • Problem-solving environment • specialized software for solving one class of problems • Application user interface, portal • Model testing • Validation, verification • Results production • Scheduling tasks over the grid • Analysis and Visualisation

  6. Grid Technologies

  7. Grid Technologies • Simulation of large complex systems • Large scale multi site data mining, distributed data sets • Shared virtual reality • Interactive collaboration • Real-time access to remote resources.

  8. What Is Grid Computing? • Virtualisation of resources • Increased processing power • Secure and flexible collaboration • The Grid Problem

  9. Electric Power Generation Analogy • [Diagram] Information generators feed the grid, information is distributed over the grid, and customers gain access to the information grid, just as power stations feed the electricity grid that delivers power to consumers.

  10. Pcwebopedia.com • A form of networking. Unlike conventional networks that focus on communication among devices, grid computing harnesses unused processing cycles of all computers in a network for solving problems too intensive for any stand-alone machine.

  11. IBM Definition • Grid computing enables the virtualization of distributed computing and data resources such as processing, network bandwidth and storage capacity to create a single system image, granting users and applications seamless access to vast IT capabilities. Just as an Internet user views a unified instance of content via the Web, a grid user essentially sees a single, large virtual computer.

  12. Sun Microsystems • Grid Computing is a computing infrastructure that provides dependable, consistent, pervasive and inexpensive access to computational capabilities.

  13. “The Grid Problem” • “Grid problem,” flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions, and resources—what we refer to as virtual organizations. • From “The Anatomy of the Grid” by Foster, Kesselman and Tuecke.

  14. Virtual Organisations

  15. Grid Characteristics • Computing: teraflops • Networks: high bandwidth • Data storage: petabytes

  16. Types of Grids • Cluster Grid • Beowulf clusters • Enterprise Grid, Campus Grid, Intra-Grid • Departmental clusters, servers and PC networks • Utility Grid • Access resources over the internet on demand • Global Grid, Inter-Grid • White Rose Grid, National Grid Service, Particle Physics Data Grid

  17. Three Uses of Grid Computing • Compute grids • Data grids • Collaborative grids

  18. Distributed Supercomputing • Compute clusters • Schedulers: Sun Grid Engine, PBS • The grid aggregates computational resources to tackle large, complex problems • Fast networks enable true parallel computation and shared-memory processing • Compute resources are selected according to time and financial constraints (see the MPI sketch below)
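The bullets above mention parallel computation over fast networks; on clusters like these such jobs are usually written with MPI. A minimal sketch in C, assuming an MPI implementation such as the MPICH listed later among the iceberg software (build and launch commands vary by site):

    /* hello_mpi.c - each process launched by the scheduler reports its rank.
     * Typical build/run (assumption): mpicc hello_mpi.c -o hello_mpi
     *                                 mpirun -np 4 ./hello_mpi            */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        int rank, size;

        MPI_Init(&argc, &argv);                 /* start the MPI runtime      */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id          */
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* processes in the whole job */

        printf("Process %d of %d reporting\n", rank, size);

        MPI_Finalize();                         /* shut down cleanly          */
        return 0;
    }

Each process runs on one of the processors the scheduler allocates, which is how a single job spreads across the aggregated resources.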

  19. Architectures for High Performance Computing • Supercluster • e.g. Blue Gene (65,536 dual processors in 64 cabinets) • Clusters • e.g. iceberg • Parallel applications using MPI • Symmetric multiprocessors • e.g. 4-processor shared-memory V40 node on iceberg • Shared-memory programming with OpenMP • Vector processor • e.g. Amdahl VP at MCC (1980s and 90s)
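For the symmetric multiprocessor case above, the natural counterpart to MPI is OpenMP. A short illustrative sketch in C, not taken from the course material, showing the shared-memory style suited to a 4-processor V40 node:

    /* sum_omp.c - shared-memory parallel sum with OpenMP.
     * Typical build (assumption, GNU compiler): gcc -fopenmp sum_omp.c -o sum_omp */
    #include <omp.h>
    #include <stdio.h>

    #define N 1000000

    int main(void)
    {
        static double a[N];
        double sum = 0.0;
        int i;

        /* The reduction clause gives each thread a private partial sum
         * that OpenMP combines once the loop finishes.                 */
        #pragma omp parallel for reduction(+:sum)
        for (i = 0; i < N; i++) {
            a[i] = 0.5 * i;
            sum += a[i];
        }

        printf("sum = %f using up to %d threads\n", sum, omp_get_max_threads());
        return 0;
    }

Unlike the MPI example, all threads here share one address space, so this style only scales within a single node.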

  20. High Throughput Applications • Problems divided into many independent tasks • The grid schedules the tasks • SETI@home • The mother of @home projects • Spin-offs for companies such as Entropia and United Devices • Other @home projects • Folding@home, FightAIDS@home, Xpulsar@home • Condor • Cycle scavenging from spare PCs
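The task-farming pattern above can be made concrete with a small worker program. A hedged sketch in C: the worker reads SGE_TASK_ID, the variable Sun Grid Engine sets for array-job tasks, and processes its own slice of a larger problem (the computation itself is a placeholder):

    /* worker.c - one task of a high-throughput farm.  Each instance picks
     * an independent slice of work from its task id, so tasks need no
     * communication - exactly what makes cycle scavenging practical.     */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        const char *id = getenv("SGE_TASK_ID");   /* set by the scheduler */
        int task = id ? atoi(id) : 1;             /* default to task 1    */
        long chunk = 100000;                      /* work items per task  */
        long start = (long)(task - 1) * chunk;
        double result = 0.0;
        long i;

        for (i = start; i < start + chunk; i++)
            result += 1.0 / (i + 1.0);            /* placeholder computation */

        printf("task %d processed items %ld-%ld, result %f\n",
               task, start, start + chunk - 1, result);
        return 0;
    }

Submitted many times over, independent tasks like this are what SETI@home and Condor distribute across otherwise idle machines.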

  21. Statistics for SETI@home (13/09/2004)

  22. SETI@home’s Most Promising Candidates

  23. Grid Types - Data Grid • A computing network stores large volumes of data across the network • Heterogeneous data sources • [Diagram] Engine flight data from an airline flying between London and New York airports travels over the grid to European and American data centres, a diagnostics centre and a maintenance centre.

  24. Grid Types - Collaborative • Internet videoconferencing • Collaborative Visualisation

  25. e-Science • More science relies on computational experiments • More large, geographically disparate, collaborative projects • More need to share/lease resources • Compute power, datasets, instruments, visualization

  26. e-Science Centres • Centres of Excellence • Regional Centres

  27. e-Science Organisations • National e-Science Centre • To stimulate and sustain the development of e-Science in the UK, to contribute significantly to its international development and to ensure that its techniques are rapidly propagated to commerce and industry. • Open Middleware Infrastructure Institute • Repository for UK Grid Middleware

  28. e-Science Requirements • Simple and secure access to remote resources across administrative domains • Minimally disruptive to local administration policies and users • Large set of resources used by a single computation • Adapt to non-static configuration of resources

  29. The Evolving Grid

  30. The National Grid Service • Comprises two data clusters and two compute clusters • Offers a significant resource for the UK e-Science community • Clusters are located at • Manchester (data cluster) • Oxford (compute cluster) • CCLRC (data cluster) and • White Rose Grid (compute cluster) • More sites • Lancaster • WeSC • Bristol

  31. EGEE • The EGEE project brings together experts from over 27 countries • Builds on recent advances in Grid technology • Developing a service Grid infrastructure in Europe • Available to scientists 24 hours a day

  32. Available Grid Services • Access Grid • White Rose Grid • Grid research • HPC Service • National Grid Service • Compute Grid • Data Grid (SRB) • National HPC Services • HPCx and CSAR (part of NGS) • Portal Services

  33. Sheffield Grid Node: Hardware • AMD-based cluster supplied by Sun Microsystems • Processors: 320 • Performance: 300 GFLOPS • Main memory: 800 GB • Filestore: 9 TB • Temporary disk space: 10 TB • Physical size: 8 racks • Power usage: 50 kW

  34. Sheffield Grid Node: Hardware, part 2 • 160 processors for the GridPP community • 160 processors for general use • 20 x V40, each with 4 x 64-bit AMD Opteron (2.4 GHz) and 16 GB shared main memory • 40 x V20, each with 2 x 64-bit AMD Opteron (2.4 GHz) and 4 GB shared main memory • Comparing L2 cache • AMD Opteron: 1 MB • UltraSPARC III Cu (Titania): 8 MB

  35. Sheffield Grid Node: Hardware, part 3 Inside a V20 unit.

  36. Sheffield Grid Node: Hardware, part 4 • Two main interconnect types: gigabit Ethernet (commodity) and Myrinet (more specialist) • Gigabit: supported as standard, good for job farms and small to mid-size systems • Myrinet: high-end solution for large parallel applications, has become the de facto standard for clusters (4 Gb/s); a ping-pong bandwidth sketch follows below
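A standard way to compare interconnects such as gigabit Ethernet and Myrinet is a ping-pong test. The sketch below, an illustrative addition rather than anything from the slides, times repeated message exchanges between two MPI processes to estimate point-to-point bandwidth:

    /* pingpong.c - rough bandwidth estimate between MPI ranks 0 and 1.
     * Run with at least two processes, e.g. mpirun -np 2 ./pingpong    */
    #include <mpi.h>
    #include <stdio.h>
    #include <stdlib.h>

    #define MSG_BYTES (1 << 20)   /* 1 MB messages          */
    #define REPS      100         /* round trips to average */

    int main(int argc, char *argv[])
    {
        int rank, i;
        char *buf = malloc(MSG_BYTES);
        double t0, elapsed;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        t0 = MPI_Wtime();
        for (i = 0; i < REPS; i++) {
            if (rank == 0) {
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
            } else if (rank == 1) {
                MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                         MPI_STATUS_IGNORE);
                MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        elapsed = MPI_Wtime() - t0;

        if (rank == 0)   /* two messages of MSG_BYTES per round trip */
            printf("approx. bandwidth: %.1f MB/s\n",
                   2.0 * REPS * MSG_BYTES / elapsed / 1e6);

        MPI_Finalize();
        free(buf);
        return 0;
    }

On a Myrinet-connected pair of nodes the reported figure should sit well above what gigabit Ethernet delivers, which is why the high-end interconnect matters for large parallel jobs.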

  37. Sheffield Grid Node: Hardware, part 5 • 64-bit vs. 32-bit • Mainly useful for programs requiring large memory, available on the bigmem nodes • Greater floating-point accuracy • Future-proof: 32-bit systems are becoming obsolete in HPC
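To illustrate the large-memory point, a small sketch in C (assuming a 64-bit Linux node with enough RAM, such as the bigmem nodes mentioned above): it requests more memory than a 32-bit address space can reach at all.

    /* bigalloc.c - request an allocation only a 64-bit address space can
     * hold; a 32-bit build cannot even express this size in size_t.      */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t n = (size_t)6 * 1024 * 1024 * 1024;   /* 6 GB */
        char *p = malloc(n);

        if (p == NULL) {
            printf("allocation of %zu bytes failed\n", n);
            return 1;
        }
        p[0] = 1;          /* touch the first and last bytes so the */
        p[n - 1] = 1;      /* memory is really mapped in            */
        printf("allocated and touched %zu bytes\n", n);
        free(p);
        return 0;
    }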

  38. Sheffield Grid Node: Software 1 • Ganglia • DDT • Portland and GNU compilers • Sun Grid Engine v6 • 64-bit Scientific Linux (Red Hat based) • MPICH for Opteron

  39. Sheffield Grid Node: Software 2 • Maths and statistics • Matlab 7.0, Scilab 3.1 • R+ 2.0.1 • Engineering and finite element • Fluent 6.2.16, 6.1.25 and 6.1.22, also Gambit, Fidap and TGrid • ANSYS v9.0 • Abaqus • CFX 5.7.1 • DYNA 91a • Visualisation • IDL 6.1 • OpenDX

  40. Sheffield Grid Node: Software 3 • Development • MPI, MPICH-GM • OpenMP • NAG (Mark 20) • ACML • Grid • Globus 2.4.3 (via GPT 3.0) • SRB s-client tools to follow

  41. Registration • Local user account • Obtain an e-Science certificate • Register with the White Rose Grid • Apply for NGS resources • Go to http://www.shef.ac.uk/wrgrid/access/index.html

  42. Why obtain an e-Science Certificate? • Enables secure single sign-on to the White Rose Grid • Use portals, e.g. the WRG application portal • Access the WRG, NGS and EGEE

  43. For More Information • The White Rose Grid • www.wrgrid.org.uk • The National e-Science Centre • www.nesc.ac.uk • The Globus Project™ • www.globus.org • Global Grid Forum • www.gridforum.org

  44. Grid Computing References • The Grid: Computing Without Bounds • Ian Foster, Scientific American, April 2003. • “The Anatomy of the Grid” • http://www.globus.org/research/papers/anatomy.pdf • Grid Services – “The Physiology of the Grid” • http://www.gridforum.org/ogsi-wg/drafts/ogsa_draft2.9_2002-06-22.pdf • Research Agenda for the Semantic Grid • http://www.semanticgrid.org/v1.9/semgrid.pdf
