CAT Tier-3 Tutorial


Presentation Transcript


  1. CAT Tier-3 Tutorial October 2009

  2. CAT Tier-3 Computing Resources
  • Interactive nodes:
    • 5 machines with 8 CPU cores and 16 GB total memory, with access to AFS and castor, for interactive analysis work.
    • These are accessed via LSF using the atlasinter queue.
  • Batch queues:
    • Two dedicated batch queues, atlascatshort (1 hour) and atlascatlong (10 hours), with a certain number of dedicated LSF batch job slots (a minimal test submission is sketched below).
  • Castor disk pool:
    • A 40 TB disk pool atlt3 for storing DPDs, ntuples etc. used in CAT analysis.
    • No tape backup.
  • AFS scratch disk space allocated to CAT team members.
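  The queue limits and a quick submission test can be inspected with the standard LSF commands. This is only a minimal sketch using the queue names given above; the echo job itself is purely illustrative:
    bqueues atlascatshort atlascatlong                      # show limits and job slots of the two CAT queues
    bsub -q atlascatshort "echo hello from the CAT Tier-3"  # trivial test job
    bjobs                                                   # check its status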

  3. Data Organization
  • CERN User Disk
    • For all users
    • Access via rfio, xrootd
    • Setting the environment (this defines the castor disk; the directory itself is only "fake"):
      export RFIO_USE_CASTOR_V2=YES
      export STAGE_HOST=castoratlast3
      export STAGE_SVCCLASS=atlascernuserdisk
  • CAT Tier-3 resources
    • Only for CERN people
    • Access via rfio
    • Two locations:
      • Group space (5 TB per group): /castor/cern.ch/grid/atlas/atlt3/<group>, with the groups compperf, higgs, simulation, sm, susy, top
      • Scratch space: /castor/cern.ch/grid/atlas/atlt3/scratch/<userid>, created and protected with
        nsmkdir /castor/cern.ch/grid/atlas/atlt3/scratch/<userid>
        nschmod 750 /castor/cern.ch/grid/atlas/atlt3/scratch/<userid>
    • Setting the environment (a combined setup is sketched below):
      export STAGE_HOST=castoratlas
      export STAGE_SVCCLASS=atlt3
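  Putting the pieces together for the CAT scratch space, a minimal sketch; nsls is assumed to be available alongside nsmkdir/nschmod for checking the result:
    export RFIO_USE_CASTOR_V2=YES
    export STAGE_HOST=castoratlas
    export STAGE_SVCCLASS=atlt3
    nsmkdir /castor/cern.ch/grid/atlas/atlt3/scratch/<userid>
    nschmod 750 /castor/cern.ch/grid/atlas/atlt3/scratch/<userid>
    nsls -l /castor/cern.ch/grid/atlas/atlt3/scratch/<userid>   # verify the new (empty) directory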

  4. Tutorial – Putting and Retrieving Files
  Log in to lxplus, set up Athena, and check out and build the example package:
    cmt co PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis
    cd PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cmt
    cmt config
    source setup.sh
    gmake
    cd /tmp/<userid>
  Now we have our analysis algorithm in place. Next we set the variables for accessing the usual CERN user disk:
    export RFIO_USE_CASTOR_V2=YES
    export STAGE_HOST=castoratlast3
    export STAGE_SVCCLASS=atlascernuserdisk
  and copy one of the example files to the local directory and then to our castor user disk:
    rfcp /castor/cern.ch/user/d/ddmusr03/STEP09/mc08.106054.PythiaZee_Mll20to60_1Lepton.merge.AOD.e379_s462_r635_t53_tid059207/AOD.059207._00001.pool.root.1 ./
    rfcp AOD.059207._00001.pool.root.1 /castor/cern.ch/user/<u>/<userid>/Tutorial
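  If the Tutorial directory does not exist yet, it has to be created first. A small sketch of checking the copy, using only rfmkdir and rfdir; the target path follows the slide, and <u>/<userid> are placeholders for your initial and login name:
    rfmkdir /castor/cern.ch/user/<u>/<userid>/Tutorial   # create the target directory once, if needed
    rfdir /castor/cern.ch/user/<u>/<userid>/Tutorial     # confirm the file arrived
    ls -l AOD.059207._00001.pool.root.1                  # compare with the size of the local copy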

  5. Tutorial – Putting and Retrieving Files
  Now we copy the same file to the CAT scratch disk. First we have to set the environment variables:
    export RFIO_USE_CASTOR_V2=YES
    export STAGE_HOST=castoratlas
    export STAGE_SVCCLASS=atlt3
  (The first line was already set earlier and is not strictly needed again, but we keep it for completeness.)
    rfcp AOD.059207._00001.pool.root.1 /castor/cern.ch/grid/atlas/atlt3/scratch/<userid>/Tutorial/
  Checking the content can be done with the usual rfdir command, e.g.
    rfdir /castor/cern.ch/grid/atlas/atlt3/scratch/<userid>/Tutorial/
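  Since the tutorial switches between the two pools repeatedly, it can be convenient to wrap the exports in two small shell functions. These helpers are not part of the tutorial, only a hypothetical convenience:
    # hypothetical helpers, not part of the original tutorial
    use_userdisk() {                    # CERN user disk
      export RFIO_USE_CASTOR_V2=YES
      export STAGE_HOST=castoratlast3
      export STAGE_SVCCLASS=atlascernuserdisk
    }
    use_catpool() {                     # CAT Tier-3 pool atlt3
      export RFIO_USE_CASTOR_V2=YES
      export STAGE_HOST=castoratlas
      export STAGE_SVCCLASS=atlt3
    }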

  6. Submitting jobs (1)
  Our queues are atlascatshort and atlascatlong and can be seen via
    bqueues
  To access the interactive machines, just type
    bsub -Is -q atlasinter zsh
  We leave the interactive shell again with exit.
  Now we want to submit a job to our Tier-3 queues. We go to our example code, e.g.
    cd PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cat
  Here we edit the file runAthena.sh, which should automatically set up the Athena environment and then start an athena job (a sketch of such a script follows below). Remember that when sending the job to a queue, the job is started in a scratch directory which is deleted after the job finishes.
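  What runAthena.sh looks like depends on the local Athena installation, so the following is only a minimal sketch under that assumption; the job options file is passed as the first argument, matching the bsub command on the next slide:
    #!/bin/sh
    # sketch of runAthena.sh -- the actual Athena setup commands are release/site specific
    export RFIO_USE_CASTOR_V2=YES
    export STAGE_HOST=castoratlas            # or castoratlast3/atlascernuserdisk, depending on where the input lives
    export STAGE_SVCCLASS=atlt3
    # ... set up the Athena 15.5.1 environment here ...
    athena.py "$1"                           # run the job options file given as first argument
    # copy any output you want to keep out of the scratch directory (e.g. with rfcp) before the job ends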

  7. Submitting jobs (2)
  Having changed runAthena.sh, we can submit the job via
    bsub -q atlascatlong source runAthena.sh ~/scratch0/Athena/15.5.1/PhysicsAnalysis/AnalysisCommon/PerformanceAnalysis/PerformanceAnalysis-r198000/cat/runPerformanceAnalysis.py
  To see the status of our job, we simply type
    bjobs
  Now we can play around with different access modes, i.e. accessing a file via rfio or xrootd. For that we simply change the prefix of the file in the InputCollection of runPerformanceAnalysis.py:
  • xrootd is accessed via root://castoratlast3/
  • rfio is accessed via rfio://
  Keep in mind that you might have to initialize the environment variables in the batch job (see the sketch below)! The performance difference between rfio and xrootd can be checked by looking at the PerformanceResults.log file, which is produced when the job has finished.
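  Purely as an illustration (the file and paths are the ones used earlier in the tutorial; the double slash after the xrootd redirector is an assumption about the usual castor URL form), the same file on the CERN user disk would be referenced as follows, with the corresponding variables exported inside the batch script:
    # the two access modes for the file copied to the CERN user disk earlier:
    #   rfio:/castor/cern.ch/user/<u>/<userid>/Tutorial/AOD.059207._00001.pool.root.1
    #   root://castoratlast3//castor/cern.ch/user/<u>/<userid>/Tutorial/AOD.059207._00001.pool.root.1
    # variables to export inside the batch script when reading from the user disk:
    export RFIO_USE_CASTOR_V2=YES
    export STAGE_HOST=castoratlast3
    export STAGE_SVCCLASS=atlascernuserdisk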

  8. Submitting jobs (3)
  You should observe that xrootd is much faster than rfio, but we cannot use xrootd on our Tier-3 scratch disks... which brings us to Max Baak's famous FileStager.
