
JGI/NERSC New Hardware Training


Presentation Transcript


  1. JGI/NERSC New Hardware Training Kirsten Fagnan, Seung-Jin Sul January 10, 2013

  2. Overview • New hardware structure (# of nodes, cores, cores per socket) • Exclusive use of a node – what does that mean • Running serial (single-core) jobs on the exclusive nodes • Python • TaskFarmerMQ • Hands-on testing/work

  3. Genepool Components • 20 x4170 high-priority nodes: 8 slots, 120 GB of memory • 450 SGI commodity nodes: 8 slots, 48 GB of memory • 222 Appro commodity nodes (new hardware): 16 physical cores, 120 GB of memory • 8 nodes with 240 GB of memory • 9 nodes with 500 GB of memory • 3 nodes with 1000 GB of memory • 1 node with 2 TB of memory

  4. New Commodity Node Layout • 120 GB of memory • 16 physical cores (2 sockets - NUMA) • 16 additional virtual cores (hyperthreading) • 1.8 TB of local disk

  5. New High Memory Node Layout • 5 nodes with 500 GB and 2 nodes with 1000 GB of memory (why not 512 and 1024??) • 32 physical cores (4 sockets - NUMA) • 32 additional virtual cores (hyperthreading) • 3.6 TB of local disk

  6. NUMA – Non-Uniform Memory Access • There is a memory hierarchy on the node (each socket has its own local memory), so each thread will not have uniform access time to different blocks of memory. Image from - http://venthusiast.com/numa-non-uniform-memory-access/
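  On a NUMA node it can help to inspect the socket/memory layout and, for some codes, to pin a process to one socket. A minimal sketch, assuming the numactl utility is available on the compute nodes (./my_app is a placeholder for your executable):
  numactl --hardware                             # show the NUMA nodes, their CPUs and their memory
  numactl --cpunodebind=0 --membind=0 ./my_app   # run on socket 0 and allocate only its local memory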

  7. Hyperthreading • 16 physical cores + 16 virtual cores means that you can run applications with up to 32 threads. • We have done some experiments with hyperthreading on/off and didn’t see any negative effects, but very few codes showed appreciable speed-up
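  For a threaded (e.g. OpenMP) code, testing hyperthreading is just a matter of raising the thread count from 16 to 32 inside a whole-node job and comparing timings. A hedged sketch (./my_threaded_app is a placeholder):
  export OMP_NUM_THREADS=16   # one thread per physical core
  time ./my_threaded_app
  export OMP_NUM_THREADS=32   # one thread per hardware thread (hyperthreading)
  time ./my_threaded_app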

  8. How are the old and new systems connected?

  9. NERSC Machine Room

  10. How do I access the new nodes? • Users still specify the following parameters: • Wallclock limit (-l h_rt=HH:MM:SS) • # of cores/nodes (-pe ...) • Amount of memory per core (-l ram.c=16G) The new hardware has 120 GB of memory; if you request more than 48 GB of memory, your job will be routed to the new hardware.
  Whole-node job (can run up to 16 MPI tasks or with 16 threads):
  #!/bin/bash
  #$ -l h_rt=12:00:00,ram.c=100G
  #$ -pe pe_slots 16
  #$ -N whole_node_serial_test
  Multi-node MPI job (requesting 4 whole nodes, can run up to 16*4 MPI tasks):
  #!/bin/bash
  #$ -l h_rt=12:00:00,ram.c=100G
  #$ -pe pe_1 4
  #$ -N whole_node_mpi_test

  11. What about run time? • There are 50 commodity nodes that can run long jobs (>12 hours), and all of the high-memory nodes can run long jobs • The remaining nodes can run jobs with up to a 12-hour wallclock
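  A long job is requested through the usual wallclock directive; anything over 12 hours can only be scheduled on the nodes that allow long jobs. A minimal sketch, assuming nothing beyond wallclock, memory and slots is needed (as in the other examples in this presentation):
  #$ -l h_rt=48:00:00,ram.c=100G
  #$ -pe pe_slots 16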

  12. Exclusive use of the node • I/O from this node will only be done by your job; you don't need to share the 1 Gb Ethernet link with anyone else • 16 physical cores + 16 virtual cores (you can test the benefit of hyperthreading with your code) • You can use up to 120 GB of memory (more on the high-memory nodes)
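  Because the whole node belongs to your job, a quick sanity check at the top of the batch script can confirm what is available. A sketch using standard Linux tools ($TMPDIR pointing at the local disk is an assumption, not a guarantee):
  nproc            # number of hardware threads visible to the job
  free -g          # total and free memory in GB
  df -h $TMPDIR    # space on the local scratch disk, if $TMPDIR lives there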

  13. Want to take advantage of all 16 cores, but how? [Diagram: Task 1, Task 2, Task 3, …, Task 15, Task 16 running side by side on one node]
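  The simplest option, before reaching for Python or TaskFarmerMQ, is to start the serial tasks as background processes inside a whole-node job and wait for all of them to finish. A minimal sketch (./serial_app and the input/output file names are placeholders):
  #!/bin/bash
  #$ -l h_rt=12:00:00,ram.c=7680MB
  #$ -pe pe_slots 16
  #$ -cwd
  for i in {1..16}
  do
    ./serial_app input.$i > output.$i &   # one background task per core
  done
  wait                                    # keep the job alive until all 16 tasks finish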

  14. Running 16 serial tasks - Python You can use Python's mpi4py module to launch multiple serial jobs. Below is a sample python script, 'mwrapper.py':
  #!/usr/bin/env python
  from mpi4py import MPI
  from subprocess import call
  import sys
  exctbl = sys.argv[1]
  comm = MPI.COMM_WORLD
  rank = comm.Get_rank()
  myDir = "dir"+str(rank).zfill(2)
  cmd = "cd "+myDir+" ; "+exctbl+" < infile > outfile"
  sts = call(cmd, shell=True)
  comm.Barrier()
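  Each MPI rank changes into its own directory (dir00, dir01, ..., dir15) and reads an infile there, so those directories must exist before the job starts. A hedged setup sketch (the original input file names are placeholders):
  for i in $(seq -w 0 15)
  do
    mkdir -p dir$i
    cp input_$i.dat dir$i/infile   # whatever input each task needs, renamed to infile
  done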

  15. Running 16 serial tasks - Python Below is a batch script to use it for a serial program, a.out:
  #!/bin/bash -l
  #$ -l h_rt=12:00:00
  #$ -pe pe_slots 16
  #$ -l ram.c=7680MB
  #$ -cwd
  module load python
  module load openmpi
  aprun -n 16 mwrapper.py a.out

  16. Running 16 Serial tasks - TaskfarmerMQ Example task list entries (one task per line):
  /jgi/tools/bin/blastall -b 100 -v 100 -K 100 -p blastn -S 3 -d ./data/hs.m51.D4.diplotigs+fullDepthIsotigs.fa -e 1e-10 -F F -W 41 -i ./data/blast_query_1_160.fna -m 8 -o ./out-blastn/test1.m8.bout:/project/projectdirs/genomes/sulsj/test/2012.10.08-taskfarmer-mq/task_version/out-blastn:test1.m8.bout:0
  /jgi/tools/bin/blastall -b 100 -v 100 -K 100 -p blastn -S 3 -d ./data/hs.m51.D4.diplotigs+fullDepthIsotigs.fa -e 1e-10 -F F -W 41 -i ./data/blast_query_1_160.fna -m 8 -o ./out-blastn/test2.m8.bout:/project/projectdirs/genomes/sulsj/test/2012.10.08-taskfarmer-mq/task_version/out-blastn:test1.m8.bout,test2.m8.bout:0
  The client reads the user's task list and submits the tasks to a RabbitMQ queue:
  $ tfmq-client -i task.lst
  [Diagram: the task list flows through tfmq-client into RabbitMQ; tfmq-worker_1 ... tfmq-worker_n pull tasks from the queue, fork() a process for each task, and report status back. Workers can be added at any time and reused.]

  17. TaskfarmerMQ Client/Worker Usage
  tfmq-client -i <user task file> [-q user_specified_queue_name] [-w reuse_workers]
  • -i,--tf: user task list file
  • -q,--tq: user-specified queue name (NOTE: if you set your queue name with this option, you SHOULD set the same queue name when you start the worker using -q/--tq)
  • -w,--reuse: worker termination option. If set to "0" (default), all workers will be terminated after completion; if set to "1", all workers will stay running for other tasks.
  tfmq-worker [-q,--tq user_specified_queue_name]
  The -q/--tq option sets the user-defined queue name. If you set a non-default queue name when running tfmq-client, you SHOULD set the same name when you run the worker.
  Example with a user-defined queue name:
  $ tfmq-client -i task1.lst -q mytaskqueuename1
  $ tfmq-worker -q mytaskqueuename1

  18. TaskfarmerMQ Task List Example Task list format: <user command>:<output directory>:<list of output files>:<done flag>
  blastall -b 100 -v 100 -K 100 -p blastn -S 3 -d ./data/db.fa -e 1e-10 -F F -W 41 -i ./data/input1.fna -m 8 -o ./out-blastn/test1.m8.bout:./out-blastn:test1.m8.bout:0
  blastall -b 100 -v 100 -K 100 -p blastn -S 3 -d ./data/db.fa -e 1e-10 -F F -W 41 -i ./data/input2.fna -m 8 -o ./out-blastn/test2.m8.bout:./out-blastn:test1.m8.bout,test2.m8.bout:0
  blastall -b 100 -v 100 -K 100 -p blastn -S 3 -d ./data/db.fa -e 1e-10 -F F -W 41 -i ./data/input3.fna -m 8 -o ./out-blastn/test3.m8.bout:./out-blastn:test4.m8.bout:0
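  A task list like this is usually generated rather than typed by hand. A hedged sketch that writes one line per query file in the format above (paths and file names are placeholders):
  for f in ./data/input*.fna
  do
    name=$(basename $f .fna)
    echo "blastall -b 100 -v 100 -K 100 -p blastn -S 3 -d ./data/db.fa -e 1e-10 -F F -W 41 -i $f -m 8 -o ./out-blastn/$name.m8.bout:./out-blastn:$name.m8.bout:0"
  done > task.lst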

  19. TaskfarmerMQ Task Examples • A case where I have a list of tasks that each require 1 core and 7680MB of memory • Step 1 Fire up a client with the name of the queue that I want: • tfmq-client -i task7680.lst -q my7680MBqueue • A case where I have a list of tasks that each require 1 core and 15G of memory • A case where I have a list of tasks that each require 1 core and 30G of memory

  20. Example 1 – my task list is full of jobs that each need 7.5 GB (7680 MB) of memory and 1 core. To run these on Genepool, create a batch script, in this case called submit_16workers.q. Note: we only specify memory, slots and runtime to route our jobs!
  #!/bin/sh
  #$ -N taskfarmermq_test
  #$ -l h_rt=12:00:00
  #$ -pe pe_slots 16
  #$ -l ram.c=7680MB
  #$ -cwd
  for i in {1..16}
  do
    tfmq-worker -q my7680MBqueue &
  done
  wait
  Submit the job:
  genepool01:$ qsub submit_16workers.q

  21. Example 1 – my task list is full of jobs that each need 7.5 GB (7680 MB) of memory and 1 core. The name of the queue for the client and worker needs to be the same. Create a batch script, in this case called submit_16workers.q.
  #!/bin/sh
  #$ -N taskfarmermq_test
  #$ -l h_rt=12:00:00
  #$ -pe pe_slots 16
  #$ -l ram.c=7680MB
  #$ -cwd
  ## Running on the gpint:
  ## tfmq-client -i task1.lst -q my7680MBqueue
  for i in {1..16}
  do
    tfmq-worker -q my7680MBqueue &
  done
  wait
  Submit the job:
  genepool01:$ qsub submit_16workers.q

  22. Example 1 – my task list is full of jobs that each need 7.5 GB (7680 MB) of memory and 1 core. There are 16 cores on a node, so I can have 16 workers. Create a batch script, in this case called submit_16workers.q.
  #!/bin/sh
  #$ -N taskfarmermq_test
  #$ -l h_rt=12:00:00
  #$ -pe pe_slots 16
  #$ -l ram.c=7680MB
  #$ -cwd
  ## Running on the gpint:
  ## tfmq-client -i task1.lst -q my7680MBqueue
  for i in {1..16}
  do
    tfmq-worker -q my7680MBqueue &
  done
  wait
  Submit the job:
  genepool01:$ qsub submit_16workers.q

  23. TaskfarmerMQ Task Examples • A case where I have a list of tasks that each require 1 core and 7.5G of memory • A case where I have a list of tasks that each require 1 core and 15G of memory • - Step 1 - Fire up a client with the name of the queue that I want: • tfmq-client -i task15.lst -q my15GBqueue • A case where I have a list of tasks that each require 1 core and 30G of memory

  24. Example 2 – my task list is full of jobs that each need 15 GB of memory and 1 core, so on Genepool I can only use 8 workers per node (120 GB / 15 GB = 8). Create a batch script, in this case called submit_8workers.q.
  #!/bin/sh
  #$ -N taskfarmermq_test
  #$ -l h_rt=12:00:00
  #$ -pe pe_slots 8
  #$ -l ram.c=15G
  #$ -cwd
  for i in {1..8}
  do
    tfmq-worker -q my15GBqueue &
  done
  wait
  Submit the job:
  genepool01:$ qsub -t 1-10 submit_8workers.q

  25. TaskfarmerMQ Task Examples • A case where I have a list of tasks that each require 1 core and 7.5G of memory • A case where I have a list of tasks that each require 1 core and 15G of memory • A case where I have a list of tasks that each require 1 core and 30G of memory • - Step 1 - Fire up a client with the name of the queue that I want: • tfmq-client -i task30.lst -q my30GBqueue

  26. Example 3 – my task list is full of jobs that each need 30 GB of memory and 1 core, so I can only use 4 workers per node (120 GB / 30 GB = 4). Create a batch script, in this case called submit_4workers.q.
  #!/bin/sh
  #$ -N taskfarmermq_test
  #$ -pe pe_slots 4
  #$ -l ram.c=30G
  #$ -cwd
  for i in {1..4}
  do
    tfmq-worker -q my30GBqueue &
  done
  wait
  Submit the job:
  genepool01:$ qsub -t 1-10 submit_4workers.q

  27. Example 3 – my task list is full of jobs that each need 30 GB of memory and 1 core, so I can only use 4 workers per node. You can also run with task arrays to increase the number of workers available to a particular queue. Create a batch script, in this case called submit_4workers.q.
  #!/bin/sh
  #$ -N taskfarmermq_test
  #$ -pe pe_slots 4
  #$ -l ram.c=30G
  #$ -cwd
  for i in {1..4}
  do
    tfmq-worker -q my30GBqueue &
  done
  wait
  Submit the job (a task array of 10 copies, giving 40 workers in total):
  genepool01:$ qsub -t 1-10 submit_4workers.q

  28. Summary The JGI now has access to almost 2x the computing power that was available before the break. To access the new hardware, just request between 48 GB and 240 GB of memory and your jobs will be routed to those nodes. In an effort to keep jobs scheduling efficiently for all users, we are scheduling the new nodes a whole node at a time. This will also make it easier for users to debug workflows and should enable jobs to complete more consistently. There are tools available (Python, TaskFarmerMQ) that will enable users with serial jobs to take advantage of the new hardware.

  29. Hands-on section
