# High Performance Computing – Supercomputers

##### Presentation Transcript

1. High Performance Computing – Supercomputers Robert Whitten Jr

2. Welcome! • Today’s Agenda: • Questions from last week • Parallelism • Supercomputers

3. What is a supercomputer? • One of the fastest computers in the world at a given moment • The Top 500 list is updated every 6 months • Serial vs. parallel • Serial means you’re doing one thing at a time, sequentially • Parallel means you’re doing multiple things at the same time • Parallel is the way to do things

4. Parallelism • Performing multiple tasks simultaneously will increase how much work can be done in a given amount of time

5. Parallel example • How long would it take a person to build a car? • Assume they are a pretty good mechanic • Assume they could build a car in a month by themselves • Now we’re talking about serial processing • What if 2 mechanics worked on the car? • What if 3?...4? • Now we’re talking parallel processing
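
The mechanic analogy above can be sketched in code. This is a minimal illustration, not a real HPC workload: the parts list, the `build_part` function, and the `time.sleep` stand-in for real work are all made up for the example.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def build_part(part):
    time.sleep(0.1)  # stand-in for the actual work of building one part
    return f"{part} done"

parts = ["engine", "chassis", "wheels", "interior"]

# Serial: one mechanic builds one part at a time.
start = time.perf_counter()
serial = [build_part(p) for p in parts]
serial_time = time.perf_counter() - start

# Parallel: four mechanics work on their parts simultaneously.
start = time.perf_counter()
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel = list(pool.map(build_part, parts))
parallel_time = time.perf_counter() - start

print(parallel_time < serial_time)  # the parallel build finishes sooner
```

With four workers and four equal parts, the parallel version takes roughly the time of one part instead of four, which is the whole point of the analogy.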

6. Shared Memory Parallelism • Imagine the mechanics all use the same bucket of bolts • Contention occurs whenever one mechanic reaches for the same bolt as another • Communication occurs when bolts (data) need to be passed back and forth between mechanics
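
A hedged sketch of the shared "bucket of bolts": the bucket is a counter that several threads decrement, and a lock serializes access so the count stays correct. The numbers and names here are illustrative, and the lock is exactly where the contention cost described above is paid.

```python
import threading

bolts = {"count": 100_000}   # the shared bucket
lock = threading.Lock()

def take_bolts(n):
    for _ in range(n):
        with lock:           # only one mechanic in the bucket at a time
            bolts["count"] -= 1

threads = [threading.Thread(target=take_bolts, args=(25_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(bolts["count"])  # 0 — correct, but every access paid for the lock
```

Without the lock, two threads can read the same value before either writes it back, losing decrements; with it, correctness comes at the price of waiting, which is contention.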

7. Distributed memory parallelism • Now each mechanic has their own section of the car to work on • Problem decomposition • Sharing of a common resource is no longer a major factor • What if one section is easier to do? • What if one mechanic is faster than the others? • Load balancing occurs when work has to be redistributed among the mechanics
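
The decomposition and load-balancing questions above can be sketched with a simple greedy scheduler: split the car into sections of uneven cost, then always hand the next-biggest section to the least-loaded mechanic. The section names, costs, and worker count are invented for the example.

```python
import heapq

# Hypothetical, uneven costs per section of the car
section_costs = {"engine": 30, "chassis": 10, "wheels": 5,
                 "interior": 15, "paint": 20}

def balance(costs, n_workers):
    # min-heap of (current load, worker id) — the least-loaded worker is on top
    heap = [(0, w) for w in range(n_workers)]
    assignment = {w: [] for w in range(n_workers)}
    # hand out the biggest sections first (greedy longest-processing-time rule)
    for section, cost in sorted(costs.items(), key=lambda kv: -kv[1]):
        load, w = heapq.heappop(heap)
        assignment[w].append(section)
        heapq.heappush(heap, (load + cost, w))
    return assignment

print(balance(section_costs, 2))
```

With two mechanics this splits the 80 units of work into two 40-unit shares, so neither mechanic sits idle while the other finishes a hard section.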

8. Supercomputers • Distributed systems • TeraGrid • SETI@home • Clusters • Each node is an individual computer • Link them all together and you’ve got a cluster • Supercomputers • Proprietary (Cray, IBM, etc.) • Custom interconnect networks

9. Distributed systems

10. Distributed systems • Typically heterogeneous systems • PCs, Macs, Linux boxes, etc. • All nodes are physically separated from other nodes • Geographically • Logically • Typically only communicate data back and forth • Works best if data can be divided into independent chunks with little interdependence
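
The "independent chunks" point is the key to SETI@home-style systems, and can be sketched as follows. Each node would get one chunk it can process with no communication, and only the small partial results travel back; the chunking scheme and the squaring workload here are illustrative.

```python
data = list(range(1_000))
n_nodes = 4
# one chunk per node, no element shared between chunks
chunks = [data[i::n_nodes] for i in range(n_nodes)]

def process(chunk):
    # runs independently on each node — no communication needed
    return sum(x * x for x in chunk)

# in a real distributed system each call would run on a separate machine
partials = [process(c) for c in chunks]
total = sum(partials)

print(total == sum(x * x for x in data))  # True — partial results combine trivially
```

Because the chunks don't depend on each other, slow or unreliable nodes only delay their own chunk, which is why geographically scattered volunteer machines work at all.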

11. Clusters

12. Clusters • Typically homogeneous • Might be some difference in hardware, but minimal • Typically co-located • Share a common network • Ethernet, InfiniBand, etc.

13. Supercomputers

14. Supercomputers • Typically homogeneous • Exceptions are out there (e.g., Roadrunner at LANL) • Share a common network fabric • Interconnect between processors • Interconnect between nodes • Nodes cannot be independent of each other • Service nodes • Login nodes • I/O nodes • Compute nodes

15. Jobs • Execution object on a distributed system • Can be interactive or batch • Interactive means a user has to be present to enter data • Batch means data is read from files and the user does not have to be present • Allows for greater utilization of the machine, since many jobs can be submitted at the same time
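
The interactive-vs.-batch distinction can be sketched in a few lines: the same computation either prompts a user or reads its input from a file prepared ahead of time, so a scheduler can run it with nobody present. The file name, JSON format, and `run_job` function are invented for the example, not any real batch system's interface.

```python
import json
import os
import tempfile

def run_job(params):
    # stand-in for the actual computation a job would perform
    return params["x"] * params["y"]

# Batch mode: the input is written to a file before the job ever runs...
path = os.path.join(tempfile.mkdtemp(), "job_input.json")
with open(path, "w") as f:
    json.dump({"x": 6, "y": 7}, f)

# ...so the job can execute later, unattended, by reading that file.
with open(path) as f:
    result = run_job(json.load(f))

print(result)  # 42
```

Real batch systems wrap this idea in a scheduler: users submit many such self-contained jobs at once, and the machine stays busy running them back to back.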

16. Homework • Send me that email if you haven’t already whittenrm1@ornl.gov

17. Questions? http://www.nccs.gov Oak Ridge National Laboratory U.S. Department of Energy