

  1. Distributed Data Analysis in ATLAS Nurcan Ozturk, University of Texas at Arlington, Turkish CERN Forum Seminars, January 26, 2011

  2. Outline • ATLAS Distributed Computing Model • Data Formats and Data Distribution in Computing Model • ATLAS Grid Architecture • Distributed Data Analysis System • Data Format Usage • New Data Distribution Mechanism • Analysis at Tier3’s • User’s Work-flow for Data Analysis • Supporting a Thousand Users on the Grid • Hot Topics in Distributed Analysis • Summary Nurcan Ozturk

  3. ATLAS Distributed Computing Model – Tier Structure (diagram of the Tier0 / Tier1 / Tier2 / Tier3 hierarchy; the TR Tier2 is connected to the NL Tier1) Nurcan Ozturk

  4. Computing Tasks per Tier Site Nurcan Ozturk

  5. Data Formats and Data Distribution in Computing Model • RAW: • Stored in Bytestream format • One copy is kept at CERN (tape) • One distributed over Tier1’s (disk) • Small quantities can be copied to Tier2/group space for special study • ESD • Produced from RAW at Tier0 (first pass reco) and Tier1(reprocessing) • One ESD data copy at CERN (tape), two distributed over Tier1’s (disk) • Small quantities can be copied to Tier2 • AOD • Produced from ESD at Tier0 (first pass reco) and Tier1 (reprocessing) • At most 2 versions on disk at any given time. 2+1 copies at Tier1’s, 10+10 copies at Tier2’s • dESD • Derived from ESD, for detector and performance studies, 10 copies across Tier2’s • dAOD: • Derived from AOD, targeted toward physics analysis, definition driven by needs of group analyses, to be stored on Tier2 group space • ntuple (D3PD): • Produced by running over d(ESD)/d(AOD), ntuple dumper tools must be validated (official ATLAS status), produced under group/individual control, stored in group space, or locally Nurcan Ozturk

  6. ATLAS Grid Architecture EGI Nurcan Ozturk

  7. Distributed Data Management System – DDM/DQ2 Nurcan Ozturk

  8. PanDA: Production and Distributed Analysis – Workload Management System • PanDA is used for all Monte Carlo simulations, data reprocessing, and most of the user analysis activity worldwide. Nurcan Ozturk

  9. Distributed Data Analysis System Basic model: data is pre-distributed to the sites, and jobs go to the data. DDM is the central link between components. Jobs are brokered to the best available site. New data distribution mechanism – PD2P: data is distributed based on user demand. The system has 3 layers between the user and the resources: user interface, gateways to resources, and execution infrastructure. Nurcan Ozturk

  10. Distributed Analysis Panda User Jobs Over the Year (as of Nov. 2010) Grid-based analysis during summer 2010: >1000 different users, >15M analysis jobs. Impressive performance during the summer: e.g. for ICHEP, the full data sample taken until Monday was shown at the conference on Friday. Ganga user jobs account for 20% of Panda user jobs. 1387 distinct users in the last 3 days in Panda. Nurcan Ozturk

  11. MC Data Format Usage (mc09_7TeV*) • Data format usage monitoring: popularity.cern.ch • AOD's are the most accessed, as expected (plot of AOD vs. ESD access) Nurcan Ozturk

  12. Data Format Usage (data10_7TeV*) • ESD's are the most accessed • Predicting data usage patterns hasn't worked very well (computing model document, AMFY report) • New development in the data distribution model: PanDA Dynamic Data Placement at Tier2's (plot of ESD, AOD, DESD and RAW access) Nurcan Ozturk

  13. New Data Distribution Mechanism • PD2P – PanDA Dynamic Data Placement at Tier2's • Started June 15th. Now running in all clouds • Automatic distribution of ESD & DESD stopped, AOD still continuing • PanDA subscribes a dataset to a Tier2, if no other copies are available (except at a Tier1), as soon as any user needs the dataset • User jobs will still go to the Tier1 while the data is being transferred – no delay • More jobs are running at Tier1's now; however, this is still much better than the situation with the huge backlogs seen at Tier1's at the beginning of the LHC data runs (since there was no ESD at Tier2's) • This will improve with re-brokering • PanDA subscribes replicas to additional Tier2's, if needed, based on the backlog of user demand for datasets • Exclude RAW, RDO and HITS datasets from PD2P • Restrict data transfers to be within the cloud for now • Lots of effort towards more effective data distribution via PD2P Nurcan Ozturk

  14. Analysis at Tier3s • Tier3 is a computing infrastructure: • Devoted to user analysis activity • Consists of unpledged resources • ATLAS examples include: • Tier 3’s collocated with Tier 2’s • Tier 3’s with same functionality as a Tier 2 site • National Analysis facilities • Non-grid Tier 3 • Tier 3 effort is now part of the ADC. Lots of progress since January 2010 when the activity officially started, see more info at: https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasTier3 Nurcan Ozturk

  15. User's Work-flow for Data Analysis: Set up the analysis code → Locate the data → Set up the analysis job → Submit to the Grid → Retrieve the results → Analyze the results Nurcan Ozturk
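  As a rough end-to-end illustration, a minimal sketch of how this work-flow maps onto the tools introduced on the following slides (the dataset name is taken from the dq2 examples later in the talk; the user nickname and output dataset names are hypothetical):
  $ dq2-ls 'data10_7TeV*physics*MinBias*merge*AOD*'    # locate the data
  $ pathena jobOptions.py --inDS data10_7TeV.00152489.physics_MinBias.merge.AOD.f241_p115/ --outDS user.nickname.minbias.test1    # submit to the Grid
  $ pbook                                              # bookkeeping: monitor, retry or kill jobs
  $ dq2-get user.nickname.minbias.test1/               # retrieve the results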

  16. What Type of Data/Jobs Users Can Run on the Grid • Users can run on all of the data types listed on Slide 5. However, RAW/HITS/RDO are on tape; users need to request DDM replication to disk storage (DaTRI page on the Panda monitor). • Athena jobs: • with official production transformations: • event generation, simulation, pileup, digitization, reconstruction, merge • TAG selection jobs • with Good Run Lists • Event picking, etc. • General jobs (non-Athena type analysis): • ROOT (CINT, C++, pyROOT) • ARA (AthenaRootAccess) • Python, user's executable, shell script • Skim RAW/AOD/ESD data • Merge ROOT files, etc. Nurcan Ozturk

  17. How to Locate the Data • AMI (ATLAS Metadata Interface) • dq2 End-User Tools • ELSSI (Event Level Selection Service Interface) Nurcan Ozturk

  18. AMI Portal Page - http://ami.in2p3.fr Nurcan Ozturk

  19. Simple Search in AMI – search by name (1) Type the search pattern into the search box to search for datasets. Example: data10_7TeV%physics_Muons.merge.NTUP_TOP.f299% Nurcan Ozturk

  20. Simple Search in AMI – search by name (2) The results can be refined with the "Group by" and "Apply filter" controls. Nurcan Ozturk

  21. dq2 End-User Tools (1) • User interaction with the DDM system is via the dq2 end-user tools: • querying, retrieving, creating datasets/dataset containers, etc. • How to set up (on lxplus): • Set up the dq2 client tools: source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh • Create your grid proxy: voms-proxy-init --voms atlas • More info on the setup at: • https://twiki.cern.ch/twiki/bin/view/Atlas/RegularComputingTutorial#Getting_ready_to_use_the_Grid Nurcan Ozturk
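  Putting the setup steps together as one session (a sketch for lxplus; voms-proxy-info is not shown on the slide and is used here only to check that the proxy was created):
  $ source /afs/cern.ch/atlas/offline/external/GRID/ddm/DQ2Clients/setup.sh
  $ voms-proxy-init --voms atlas
  $ voms-proxy-info --all                         # verify the proxy and its VOMS attributes
  $ dq2-ls 'data10_7TeV*physics*MinBias*'         # first test query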

  22. dq2 End-User Tools (2) • How to use: • List available MinBias datasets in the DDM system (same search as in AMI): • dq2-ls 'data10_7TeV*physics*MinBias*' • Search for merged AOD's in container datasets: • dq2-ls 'data10_7TeV*physics*MinBias*merge*AOD*' • Find the location of container datasets (groups of datasets, ending with a '/'): • dq2-list-dataset-replicas-container data10_7TeV.00152489.physics_MinBias.merge.AOD.f241_p115/ • List files in a container dataset: • dq2-ls -f data10_7TeV.00152489.physics_MinBias.merge.AOD.f241_p115/ • Copy one file locally: • dq2-get -n 1 data10_7TeV.00152489.physics_MinBias.merge.AOD.f241_p115/ • More info at: • https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2ClientsHowTo • https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2Tutorial Nurcan Ozturk
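  These commands can also be combined in a small shell loop, e.g. to print the replica locations of every matching container (a sketch, assuming the dq2 tools are already set up; containers are identified by the trailing '/'):
  $ for ds in $(dq2-ls 'data10_7TeV*physics*MinBias*merge*AOD*'); do [[ "$ds" == */ ]] && dq2-list-dataset-replicas-container "$ds"; done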

  23. ELSSI – Event Level Selection Service Interface https://atlddm10.cern.ch/tagservices/index.htm • Goal: retrieve the TAG file from the TAG database • Define a query to select runs, streams, data quality, trigger chains, … • Review the query • Execute the query and retrieve the TAG file (a ROOT file) to be used in the Athena job Tutorial at: https://twiki.cern.ch/twiki/bin/viewauth/Atlas/EventTagTutorial2008OctoberBrowser Nurcan Ozturk

  24. Setup the Analysis Job – PanDA Client Tools • PanDA = Production and Distributed Analysis System for ATLAS • PanDA client consists of five tools to submit or manage analysis jobs on PanDA • DA on PanDA page: https://twiki.cern.ch/twiki/bin/viewauth/Atlas/DAonPanda • pathena • How to submit Athena jobs • prun • How to submit general jobs (ROOT, python, sh, exe, …) • psequencer • How to perform sequential jobs/operations (e.g. submit job + download output) • pbook • Bookkeeping (browsing, retry, kill) of analysis jobs • puserinfo • Access control on PanDA analysis queues Nurcan Ozturk
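  For example, pbook is used interactively; a minimal sketch of a session (the prompt shown is schematic, the JobIDs are hypothetical, and the exact set of available functions may vary between PanDA client versions):
  $ pbook
  >>> sync()        # refresh the local bookkeeping from the PanDA server
  >>> show()        # list your recent analysis jobs and their status
  >>> retry(1234)   # resubmit the failed subjobs of JobID 1234
  >>> kill(1235)    # kill JobID 1235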

  25. Submit the Job to the Grid with pathena • To submit Athena jobs to PanDA • A simple command line tool, but contains advanced capabilities for more complex needs • A consistent user interface to Athena • When you run Athena with: $ athena jobOptions.py all you need to do is: $ pathena jobOptions.py --inDS inputDatasetName --outDS outputDatasetName where --inDS is a dataset which contains the input files and --outDS is a dataset which will contain the output files Nurcan Ozturk
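  In practice it is useful to test on a few input files before launching over the full dataset; a sketch (the --nFiles and --nFilesPerJob options are not shown on this slide and should be checked against the pathena documentation; dataset names are placeholders):
  $ pathena jobOptions.py --inDS inputDatasetName --outDS user.nickname.test1 --nFiles 5          # quick test on 5 files
  $ pathena jobOptions.py --inDS inputDatasetName --outDS user.nickname.full1 --nFilesPerJob 20   # full run, 20 files per subjob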

  26. More Information on Monitoring the Job, Retrieving and Analyzing the Result • Documentation about PanDA tools together with analysis examples and FAQ (Frequently Asked Questions): • https://twiki.cern.ch/twiki/bin/viewauth/Atlas/DAonPanda • pathena examples: how to run production transformations, TAG selection, on good run lists, event picking, etc. • prun examples: how to run CINT macro, C++ ROOT, python job, pyROOT script, skim RAW/AOD/ESD data, merge ROOT files, etc. • Regular Offline Software Tutorial page: • https://twiki.cern.ch/twiki/bin/viewauth/Atlas/RegularComputingTutorial • On backup slides: • Tips for submitting jobs • Tips for retrieving outputs • Tips for debugging failures Nurcan Ozturk
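  As an illustration of a general (non-Athena) job, a hedged prun sketch for running a ROOT-based skim over a dataset (run_skim.sh is a hypothetical user script, %IN is the placeholder that prun substitutes with the input file list, and the dataset/output names are placeholders; see the prun examples linked above for the authoritative syntax):
  $ prun --exec "./run_skim.sh %IN" \
         --inDS data10_7TeV.00152489.physics_MinBias.merge.AOD.f241_p115/ \
         --outDS user.nickname.minbias.skim \
         --outputs skim.root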

  27. Supporting a Thousand Users on the Grid • We have ~1000 active users. • They should not need to be distributed computing experts to use the Grid. • The Grid workflows are still being tuned – not everything is 100% naïve-user-proof yet. • Supporting users is critical to get physics results fast. • ATLAS introduced a support team in September 2008. • DAST – the Distributed Analysis Support Team – was formed for combined support of pathena and Ganga users. First point of contact for distributed analysis questions. • Support is provided via a forum in e-groups: hn-atlas-dist-analysis-help@cern.ch • All kinds of problems are discussed in the forum, not just pathena/Ganga related ones: • athena, physics analysis tools, conditions database access, site/service problems, dq2-* tools, data access at sites, data replication • DAST helps directly by solving the problem or escalating to the relevant experts. https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasDAST Nurcan Ozturk

  28. DAST Team Members • European time zone: Daniel van der Ster, Mark Slater, Hurng-Chun Lee, Maria Shiyaka, Manoj Jha, Elena Oliver Garcia, Frederic Brochu, Daniel Geerts, Carl Gwilliam • North American time zone: Nurcan Ozturk (now in EU time zone), Alden Stradling, Sergey Panitkin, Bill Edson, Wensheng Deng, Shuwei Ye, Nils Krumnack, Woo Chun Park, Jack Cranshaw • A very small team (18 people) in ATLAS serving a thousand users. Nurcan Ozturk

  29. Some Statistics – usage of the DAST help list, ten most common problems. Based on 5979 threads (27567 messages) between October 27, 2008 and November 29, 2010. Nurcan Ozturk

  30. Hot Topics in Distributed Analysis Hot topics in distributed analysis for better analysis job performance: • Provide better data availability at sites: • A new data placement mechanism (PD2P) was put into place in June to provide better data availability for user jobs. Very successful so far, expanded to several clouds, improvements to continue • Eliminate site problems from user analysis: • An auto-exclusion service is in place now to exclude problematic sites from user analysis before users discover them • Being watched closely, no false positives so far • Achieve better site/storage performance: • DA developers and site admins are putting more effort into understanding the data access issues and storage performance at the sites. Nurcan Ozturk

  31. Summary • ATLAS distributed analysis is in full swing • Lots of analysis activity since the beginning of data taking in March 2010. Successful contribution of ATLAS to the summer conferences; preparations for the winter conferences are now ongoing. • The ADC team is following the analysis activity closely to provide the best availability of data and computing resources. • The DA help forum provides an excellent communication platform for users to post their questions as well as to help each other. Nurcan Ozturk

  32. Backup Slides Nurcan Ozturk

  33. What is AMI • ATLAS Metadata Interface. • A generic cataloging framework: • Dataset discovery tool • Tag Collector tool (release management tool) • Where does AMI get its data: • From real data: DAQ data from the Tier 0 • From Monte Carlo and reprocessing • Pulled from the Task Request Database: tasks, dataset names, Monte Carlo and reprocessing config tags • Pulled from the Production Database: finished tasks – files and metadata • From physicists • Monte Carlo input files needed for event generation • Monte Carlo dataset number info, physics group owner, … • Corrected cross sections and comments. • DPD tags Nurcan Ozturk

  34. Group Production • Physics groups produce derived data (D3PD, ntuple) for their own analysis needs • This can be done by a single user with group privileges using pathena/Ganga • Group production using the central production system is more attractive: • Reproducibility of results • Access to computing resources world-wide • Full integration with the Data Management system allows for automated data delivery to the final destination • Organized support (24/7) in case of problems • Flexible usage of group resources (mainly disk; CPU is less of an issue) • The first central group production trial was in October • Followed by discussions among ADC, SW Projects, Physics Coordination, Data Preparation and Physics Groups • Eight groups are actively participating now: Beam spot, Btagging, Egamma, SUSY, Trigger (L1Calo), SM, MinBias, Jet/MET • See more info at: https://twiki.cern.ch/twiki/bin/view/AtlasProtected/AtlasGroupProduction Nurcan Ozturk

  35. DAST Shift Organization • DAST shifts are Class-2 shifts (off-site) • Three time zones: • European, 8-16 hours • American, 16-24 hours • Asia-Pacific, 0-8 hours • Three levels of shifts in each time zone: • 1st level, trained shifter, shift credit 100%, 7 days/week • 2nd level, expert shifter, shift credit 50%, 7 days/week • Trainee level, trainee shifter, shift credit 50%, 7 days/week • Shift organization, credit 25% • Note: currently no shifts in the Asia-Pacific time zone and no weekend shifts (some shifters respond to users during the weekend) Nurcan Ozturk

  36. DAST Shift Schedule and Training – How new shifters are integrated • People who would like to join DAST get in touch with Dan and myself • For new shifters, we enforce a "contract" of booking the following over the next 12 months: • two training shifts • six 1st level shifts • 2nd level shifts as desired • New shifters get prepared for training shifts by going through the material we prepared for the first DAST recruitment, given in October 2009: • https://twiki.cern.ch/twiki/bin/view/Atlas/AtlasDAST#Training_session_for_DAST_shifte • More shifters are always needed. The goal is to have a steady stream of shifters undergoing training, with experts shadowing them. Nurcan Ozturk

  37. Examples of Common Problems in DA (1) • Site/release/cache issues: • Wrong updates concerning the analysis caches (for instance 15.6.13.1.1) • Broken AtlasLogin requirements at sites • Failed release installation processes • BDII consistency issues (BDII info is used in job brokering) • dq2-get problems • Grid certificate problems for certain users at sites – CA files not updated • lcg_cp errors – a retry works • Files still being staged to disk • Scheduled downtimes at sites • Load on storage systems Nurcan Ozturk

  38. Examples of Common Problems in DA (2) • Data access problems: • Files with wrong checksums • SCRATCHDISK full (DAST now receives a notification from the DQ2 system!) • Pilot errors with "lsm-get failed": the pool hosting the input files was not available due to a machine reboot. • Site problems with the conditions data pool file catalog not being up to date. • Stuck DaTRI replication requests • The DDM team helps • Dataset not replicated to the Tier1 of that Tier2 site; problems at the Tier1 get fixed • Output datasets not closed Nurcan Ozturk

  39. Tips for Submitting Your Jobs • Always test your job locally before submitting to the Grid • Use the latest version of pathena/Ganga • Submit your jobs on container datasets (merged) • If your dataset is on tape, you need to request data replication to disk storage first • Do not specify any site name in your job submission; pathena/Ganga will choose the best available site • If you are using a new release that is not meant to be used for user analysis, e.g. cosmic data or Tier0 reconstruction, then it will not be available at all sites. In general you can check the release and installation status at: • http://atlas-computing.web.cern.ch/atlas-computing/projects/releases/status/ • https://atlas-install.roma1.infn.it/atlas_install/ Nurcan Ozturk

  40. Tips for Retrieving Your Outputs • If everything went fine, you need to retrieve your outputs from the Grid • Where and how you will store your output datasets: • request a data replication: your output files will stay as a dataset on the Grid. • You need to freeze your dataset before requesting replication • download onto your local disk using dq2-get: by default, user datasets are created on _SCRATCHDISK at the site where the jobs ran. All the datasets on _SCRATCHDISK are deleted after a certain period (~30 days). If you foresee using them on the Grid, you should think about replication. • No more than 30-40 GB/day can be copied by dq2-get Details at: https://twiki.cern.ch/twiki/bin/view/Atlas/DQ2ClientsHowTo#AfterCreatingDataset Nurcan Ozturk
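  A short sketch of the download path (the output dataset name is a placeholder; keep the ~30-40 GB/day guidance above in mind):
  $ dq2-ls -f user.nickname.minbias.skim/      # check the files and sizes first
  $ dq2-get user.nickname.minbias.skim/        # download the whole output dataset locally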

  41. Tips for Debugging Failures • Try to understand whether the error is job related or site related • The job log files tell you most of the time • If the site went down while your jobs were being executed, you can check its status in the ATLAS Computer Operations Logbook: • https://prod-grid-logger.cern.ch/elog/ATLAS+Computer+Operations+Logbook/ • If input files are corrupted at a given site, you can exclude the site (until they are recopied) and notify DAST (Distributed Analysis Support Team) • Look at the FAQ (Frequently Asked Questions) section of the pathena/Ganga twiki pages for the most common problems and their solutions: • https://twiki.cern.ch/twiki/bin/view/Atlas/PandaAthena#FAQ • https://twiki.cern.ch/twiki/bin/view/Atlas/DAGangaFAQ • If you cannot find your problem listed there, do a search in the archive of the distributed-analysis-help forum in e-groups: • https://groups.cern.ch/group/hn-atlas-dist-analysis-help/default.aspx • If you still need help, send a message to the e-group; DAST and other users will help you: hn-atlas-dist-analysis-help@cern.ch Nurcan Ozturk
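  For example, to steer a resubmission away from a problematic site one can exclude it explicitly (a sketch; --excludedSite is the pathena/prun option name as far as I am aware, and the dataset and analysis queue names are placeholders):
  $ pathena jobOptions.py --inDS inputDatasetName --outDS user.nickname.test2 --excludedSite ANALY_SOMESITE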
