
Introduction to High Performance Computing

Introduction to High Performance Computing (Using the Sun Grid Engine Job Scheduler). Michael Griffiths and Deniz Savas.

Presentation Transcript


  1. Introduction to High Performance Computing (Using the Sun Grid Engine Job Scheduler). Michael Griffiths and Deniz Savas, Corporate Information and Computing Services, The University of Sheffield. www.sheffield.ac.uk/wrgrid

  2. Outline 1. Introducing High Performance Computing 2. Using the Job Scheduler – Interactive Jobs 3. Batch Jobs 4. Task arrays 5. Running Parallel Jobs 6. GPUs and remote Visualisation 7. Beyond Iceberg – Accessing the N8 tier 2 facility

  3. Supercomputing • Capability Computing – e.g. Parallel Fluent, Molecular dynamics, MHD • Capacity Computing – Throughput computing: blast searches, pattern searches, data searching • Grid computing – Heterogeneous resources e.g. ppgrid, DAME, cosmogrid (large scale capability) – At home projects, boinc, Condor etc. (capacity computing)

  4. Iceberg: Summary of Facts & Figures • Processor cores: 3440 • GPUs: 16 • Total Main Memory: 31.8 TB • Filestore: 45 TB • Temporary Disk Space: 260 TB • Physical size: 8 racks • Power consumption: 83.7 kW

  5. iceberg cluster specifications • Intel Ivy Bridge nodes, all Infiniband connected: 92 nodes each with 16 cores and 64 GB of memory (i.e. 2 × 8-core Intel E5-2650 v2); 4 nodes each with 16 cores and 256 GB of memory; TOTAL INTEL CPU CORES = 1152, TOTAL MEMORY = 2400 GB; scratch space on each node is 400 GB • Intel Westmere based nodes, all Infiniband connected: 103 nodes each with 12 cores and 24 GB of memory (i.e. 2 × 6-core Intel X5650); 4 nodes each with 12 cores and 48 GB of memory; TOTAL INTEL CPU CORES = 1152, TOTAL MEMORY = 2400 GB; scratch space on each node is 400 GB • GPU compute nodes, all Infiniband connected: 8 NVIDIA Kepler K40M (12 GB GDDR, 2880 thread processor cores); 8 NVIDIA Fermi M2070 (6 GB GDDR, 448 thread processor cores)

  6. Iceberg Cluster • There are two head nodes for the iceberg cluster, Iceberg(1) and Iceberg(2). • Users log in to a head node and reach the worker nodes with the qsh, qsub or qrsh commands. • There are 232 worker machines in the cluster. • All workers share the same user filestore.

  7. Sheffield Advanced Research Computer • Dual processor nodes with the Intel Xeon E5-2630 v3 (Haswell) processor • 45 standard memory nodes: 720 cores, 4 GB/core • 4 big memory nodes: 64 cores, 16 GB/core • 8 NVIDIA Tesla K80 GPUs, each with 24 GB memory for computation • 1 visualisation node with an NVIDIA Quadro K4200 Graphical Processing Unit • Filestore: 600 TB Lustre parallel filestore • Physical size: 4 racks

  8. Accessing HPC at The University of Sheffield http://www.sheffield.ac.uk/cics/research/hpc/iceberg/register • All staff and research students are entitled to use iceberg. • Staff can have an account by simply emailing research-it@sheffield.ac.uk

  9. Setting up your software development environment • See “How do I find what software is available” at http://www.shef.ac.uk/cics/research/hpc • Excepting the scratch areas on worker nodes, the view of the filestore is identical on every worker. • You can set up your software environment for each job by means of the module commands. • All the available software environments can be listed by using the module avail command. • Having discovered what software is available on iceberg, you can then select the ones you wish to use with the module add or module load commands. • You can load as many non-clashing modules as you need by consecutive module add commands. • Use the module list command to check the list of currently loaded modules.
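As a minimal sketch of a typical module session on a worker node (the module name apps/R below is purely illustrative; use module avail to find the real names on your system):

module avail                # list all available software environments
module load apps/R          # load one of them (the name here is an example only)
module list                 # show the currently loaded modules
module rm apps/R            # unload a module you no longer need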

  10. Getting help • Web site – http://www.shef.ac.uk/cics/research • Iceberg Documentation – http://www.sheffield.ac.uk/cics/research/hpc/iceberg • Training (also uses the learning management system) – http://www.shef.ac.uk/cics/research/training • Discussion Group (based on google groups) – https://groups.google.com/a/sheffield.ac.uk/forum/?hl=en-GB#!forum/hpc – E-mail the group hpc@sheffield.ac.uk – Help on google groups http://www.sheffield.ac.uk/cics/groups • Contacts – research-it@sheffield.ac.uk

  11. Demonstration 1 • Using modules – List modules – Available Modules – Load Module • Compiling an Application – The fish program (used in next practice session)

  12. Using the Job Scheduler • Interactive Jobs – http://www.sheffield.ac.uk/cics/research/hpc/using/interactive • Batch Jobs – http://www.sheffield.ac.uk/cics/research/hpc/using/runbatch

  13. Running Jobs A note on interactive jobs • Software that requires intensive computing should be run on the worker nodes and not the head node. • You should run compute intensive interactive jobs on the worker nodes by using the qsh or qrsh command. • Maximum ( and also default) time limit for interactive jobs is 8 hours.

  14. Sun Grid Engine • Two iceberg head nodes are gateways to the cluster of worker nodes. • The head nodes’ main purpose is to allow access to the worker nodes but NOT to run cpu intensive programs. • All cpu intensive computations must be performed on the worker nodes. This is achieved by the qsh command for interactive jobs and the qsub command for batch jobs. • Once you log into iceberg, taking advantage of the power of a worker node for interactive work is done simply by typing qsh and working in the new shell window that is opened. • The next set of slides assume that you are already working on one of the worker nodes (qsh session).
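For example, a minimal interactive session might start like this (the h_rt request is optional; 8 hours is both the default and the maximum for interactive work):

qsh                         # open an interactive X11 session on a worker node
qrsh                        # alternative: an interactive session in the current terminal
qsh -l h_rt=4:00:00         # request a shorter 4 hour run-time limit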

  15. Practice Session 1: Running Applications on Iceberg (Problem 1) • Case Studies – Analysis of Patient Inflammation Data • Running an R application – how to submit jobs and run R interactively • List available and loaded modules; load the module for the R package • Start the R application and plot the inflammation data
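A sketch of that interactive workflow, assuming the R module is called apps/R and the data file is inflammation-01.csv; both names are assumptions to be checked against the course readme and module avail:

qsh                                        # work on a worker node, not a head node
module avail                               # find the R module
module load apps/R                         # load it (name is illustrative)
R                                          # start R, then at the R prompt:
#   dat <- read.csv("inflammation-01.csv", header = FALSE)
#   plot(apply(dat, 2, mean))              # plot the mean inflammation per day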

  16. Managing Your Jobs Sun Grid Engine Overview SGE is the resource management system, job scheduling and batch control system. (Others available such as PBS, Torque/Maui, Platform LSF ) • Starts up interactive jobs on available workers • Schedules all batch orientated ‘i.e. non-interactive’ jobs • Attempts to create a fair-share environment • Optimizes resource utilization

  17. SGE worker node SGE worker node SGE worker node SGE worker node SGE worker node B Slot 1 C Slot 1 C Slot 2 A Slot 1 B Slot 1 C Slot 1 A Slot 1 A Slot 2 B Slot 1 C Slot 1 C Slot 2 C Slot 3 B Slot 1 B Slot 2 B Slot 3 Queue-A Queue-B Queue-C SGE MASTER node Queues Policies Priorities Share/Tickets JOB Y JOB Z JOB X Resources JOB O JOB N Users/Projects JOB U Scheduling ‘qsub’ batch jobs on the cluster

  18. Demonstration 2 Running Jobs – batch job example, using the R package to analyse patient data. qsub example: qsub -l h_rt=10:00:00 -o myoutputfile -j y myjob OR alternatively … the first few lines of the submit script myjob contain:
#!/bin/bash
#$ -l h_rt=10:00:00
#$ -o myoutputfile
#$ -j y
and you simply type: qsub myjob
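Put together, a complete submit script for the R batch example might look like the sketch below; the module name apps/R and the script name analyse.R are placeholders for whatever the course materials provide:

#!/bin/bash
#$ -l h_rt=1:00:00           # one hour of run time
#$ -l rmem=2G                # two GB of real memory
#$ -o myoutputfile           # write job output to this file
#$ -j y                      # merge error output into the output file
module load apps/R           # placeholder module name; check module avail
Rscript analyse.R            # placeholder script name

Submit it with qsub myjob and watch its progress with qstat.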

  19. Managing Jobs monitoring and controlling your jobs http://www.sheffield.ac.uk/cics/research/hpc/using/runbatch/sge • There are a number of commands for querying and modifying the status of a job running or waiting to run. These are: – qstat or Qstat (query job status) • qstat -u username – qdel (delete a job) • qdel jobid – qmon (a GUI interface for SGE)

  20. Practice Session: Submitting Jobs To Iceberg (Problem 2 & 3) • Patient Inflammation Study – run the R example as a batch job • Case Study – Fish population simulation • Submitting jobs to Sun Grid Engine • Instructions are in the readme file in the sge folder of the course examples – From an interactive session • Load the compiler module • Compile the fish program • Run test1, test2 and test3

  21. Managing Jobs: Reasons for job failures http://www.shef.ac.uk/cics/research/hpc/using/requirements – SGE cannot find the binary file specified in the job script – You ran out of file storage. It is possible to exceed your filestore allocation limits during a job that is producing large output files. Use the quota command to check this. – Required input files are missing from the startup directory – Environment variable is not set correctly (LM_LICENSE_FILE etc) – Hardware failure (eg. mpi ch_p4 or ch_gm errors)

  22. Finding out the memory requirements of a job • Virtual memory limits: the default virtual memory limit for each job is 6 GBytes; jobs will be killed if the virtual memory used by the job exceeds the amount requested via the -l mem= parameter. • Real memory limits: the default real memory allocation is 2 GBytes; real memory can be requested by using -l rmem= ; jobs exceeding the real memory allocation will not be deleted but will run with reduced efficiency, and the user will be emailed about the memory deficiency. When you get warnings of that kind, increase the real memory allocation for your job by using the -l rmem= parameter. rmem must always be less than mem. • Determining the virtual memory requirements of a job: qstat -f -j jobid | grep mem reports the currently used memory (vmem), the maximum memory needed since startup (maxvmem) and the cumulative memory_usage × seconds (mem). When you next run the job, use the reported value of vmem to specify its memory requirement.
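For example, a job expected to need roughly 10 GB could be handled as follows (the figures and the job id are illustrative):

qsub -l mem=12G -l rmem=10G myjob      # 12 GB virtual limit, 10 GB real memory (rmem < mem)
qstat -f -j 123456 | grep mem          # inspect vmem / maxvmem for the running job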

  23. Managing Jobs: Running arrays of jobs http://www.shef.ac.uk/cics/research/hpc/using/runbatch/examples • Many processors run a copy of a task independently. • Add the -t parameter to the qsub command or to the script file (with #$ at the beginning of the line). Example: -t 1-10 • This will create 10 tasks from one job. • Each task will have its environment variable $SGE_TASK_ID set to a single unique value ranging from 1 to 10. • There is no guarantee that task number m will start before task number n, where m < n.
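A sketch of a task-array submit script; the program name and the input/output naming scheme are assumptions for illustration:

#!/bin/bash
#$ -t 1-10                              # create 10 tasks from this single job
#$ -l h_rt=1:00:00
# each task picks its own input file using its unique $SGE_TASK_ID value
./myprogram input.$SGE_TASK_ID > output.$SGE_TASK_ID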

  24. Managing Jobs: Running cpu-parallel jobs • Many-processor tasks either share memory (on one node) or use distributed memory (across nodes). • The parallel environment needed for a job is specified by the -pe <env> nn parameter of the qsub command, where <env> is: – openmp : shared memory OpenMP jobs, which must therefore run on a single node using its multiple processors. – openmpi-ib : OpenMPI library over Infiniband. These are MPI jobs running on multiple hosts using the Infiniband connection (32 GBits/sec). – mvapich2-ib : as above but using the MVAPICH2 MPI library. • Compilers that support MPI: PGI, Intel, GNU.
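As an illustration, a shared-memory OpenMP job could be submitted with a script like this (the executable name is a placeholder):

#!/bin/bash
#$ -pe openmp 8                 # request 8 slots on a single node
#$ -l h_rt=1:00:00
export OMP_NUM_THREADS=8        # tell the OpenMP runtime to use all 8 slots
./my_openmp_program             # placeholder executable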

  25. Running GPU parallel jobs • GPU parallel processing is supported on the 8 Nvidia Tesla Fermi M2070 GPU units attached to iceberg. • You can submit jobs that use the GPU facilities by adding the following parameters to the qsub command: -l arch=intel* and -l gpu=nn , where 1 <= nn <= 8 is the number of GPU modules to be used by the job, together with a project parameter where P stands for the project that you belong to (see next slide).
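A sketch of a GPU batch submission under those options; the project name, the CUDA module name and the executable are assumptions for illustration:

#!/bin/bash
#$ -l arch=intel*
#$ -l gpu=1                     # request one GPU module
#$ -P myproject                 # project name is a placeholder (see next slide)
#$ -l h_rt=4:00:00
module load libs/cuda           # module name is an assumption; check module avail
./my_gpu_program                # placeholder executable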

  26. Demonstration 3 Running a parallel job • Test 6 provides an opportunity to practice submitting parallel jobs to the scheduler. • To run testmpi6, compile the MPI example: load the openmpi compiler module with module load mpi/intel/openmpi/1.8.3 and compile the diffuse program with mpicc diffuse.c -o diffuse • Submit the job with qsub testmpi6 • Use qstat to monitor the job and examine the output.
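The testmpi6 script is supplied with the course examples; it will broadly resemble this sketch (the slot count and time limit here are illustrative):

#!/bin/bash
#$ -pe openmpi-ib 4                     # example: 4 MPI processes over Infiniband
#$ -l h_rt=0:30:00
module load mpi/intel/openmpi/1.8.3     # same module as used to compile diffuse
mpirun -np $NSLOTS ./diffuse            # $NSLOTS is set by SGE to the slots granted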

  27. Practice Session: Submitting A Task Array To Iceberg (Problem 4) • Case Study – Fish population simulation • Submitting jobs to Sun Grid Engine • Instructions are in the readme file in the sge folder of the course examples – From an interactive session • Run the SGE task array example – Run test4, test5

  28. Remote Visualisation • See – Specialist High Speed Visualization Access to iceberg – http://www.sheffield.ac.uk/cics/research/hpc/using/access/intro • Undertake visualisation using thin clients accessing remote high quality visualisation hardware • Remote visualisation removes the need to transfer data and allows researchers to visualise data sets on remote visualisation servers attached to the high performance computer and its storage facility

  29. VirtualGL • VirtualGL is an open source package which gives any UNIX or Linux remote display software the ability to run 3D applications with full hardware acceleration. • VirtualGL can also be used in conjunction with remote display software such as VNC to provide 3D hardware accelerated rendering for OpenGL applications. • VirtualGL is very useful in providing remote display to thin clients which lack 3D hardware acceleration.

  30. Client Access to Visualisation Cluster • A VirtualGL client on the user’s desktop connects to a VirtualGL server (with an NVIDIA GPU) attached to Iceberg and the Campus Compute Cloud.

  31. Remote Visualisation Using SGD • Start a browser, go to https://myapps.shef.ac.uk and log in to Sun Global Desktop. • Under Iceberg Applications start the Remote Visualisation session. • This opens a shell with instructions to either open a browser and enter the address http://iceberg.shef.ac.uk:XXXX, or start TigerVNC Viewer on your desktop and use the address iceberg.shef.ac.uk:XXXX • XXXX is a port address provided on the iceberg terminal. • When requested use your usual iceberg user credentials.

  32. Remote Desktop Through VNC

  33. Remote Visualisation Using TigerVNC and the PuTTY SSH Client • Log in to iceberg using PuTTY. • At the prompt type qsh-vis • This opens a shell with instructions to either open a browser and enter the address http://iceberg.shef.ac.uk:XXXX, or start TigerVNC Viewer on your desktop and use the address iceberg.shef.ac.uk:XXXX • XXXX is a port address provided on the iceberg terminal. • When requested use your usual iceberg user credentials.

  34. Beyond Iceberg http://www.sheffield.ac.uk/cics/research/hpc/iceberg/costs • Iceberg is OK for many compute problems • Purchasing dedicated resource • N8 tier 2 facility for more demanding compute problems • Hector/Archer – larger facilities for grand challenge problems (peer review process to access)

  35. High Performance Computing Tiers • Tier 1 computing – Hector, Archer • Tier 2 Computing – Polaris • Tier 3 Computing – Iceberg

  36. Purchasing Resource http://www.sheffield.ac.uk/cics/research/hpc/iceberg/costs • Buying nodes using the framework – research groups purchase HPC equipment against their research grant and this hardware is integrated with the Iceberg cluster. • Buying a slice of time – research groups can purchase servers for a length of time specified by the research group (cost is 1.7p/core per hour). • Servers are reserved for dedicated usage by the research group using a provided project name. • When reserved nodes are idle they become available to the general short queues; they are quickly released for use by the research group when required. • For information e-mail research-it@Sheffield.ac.uk

  37. The N8 Tier 2 Facility: Polaris http://www.shef.ac.uk/cics/research/hpc/polaris • Note N8 is for users whose research problems require greater resource than that available through Iceberg • Registration is through projects – Authorisation by a supervisor or project leader to register the project with the N8 – Users obtain a project code from their supervisor or project leader – Complete the online form, providing an outline of work explaining why N8 resources are required

  38. Polaris: Specifications • 5312 Intel Sandy Bridge cores • Co-located with the 4500-core Leeds HPC facility • Purchased through the Esteem framework agreement: SGI hardware • Ranked #291 in the June 2012 Top500 list

  39. National HPC Services • Archer – UK National Supercomputing Service – Hardware: CRAY XC30 • 2632 standard nodes • Each node contains two Intel E5-2697 v2 12-core processors • Therefore 2632 × 2 × 12 = 63,168 cores • 64 GB of memory per node • 376 high memory nodes with 128 GB memory • Nodes connected to each other via the ARIES low latency interconnect • Research Data File System – 7.8 PB disk – http://www.archer.ac.uk/ • EPCC – HPCC Facilities http://www.epcc.ed.ac.uk/facilities/national-facilities – Training and expertise in parallel computing

  40. Links for Software Downloads • Putty http://www.chiark.greenend.org.uk/~sgtatham/putty/ • WinSCP http://winscp.net/eng/download.php • TigerVNC http://sourceforge.net/projects/tigervnc/ and http://sourceforge.net/apps/mediawiki/tigervnc/index.php?title=Main_Page
