
PROOF/Xrootd for a Tier3



Presentation Transcript


  1. PROOF/Xrootd for a Tier3 Mengmeng Chen, Michael Ernst, Annabelle Leung, Miron Livny, Bruce Mellado, Sergey Panitkin, Neng Xu and Sau Lan Wu BNL/Wisconsin Special thanks to Gerri Ganis, Jan Iwaszkiewicz, Fons Rademakers, Andy Hanushevsky, Wei Yang, Dan Bradley, Sridhara Dasu, Torre Wenaus and the BNL team. Tools meeting at SLAC, 11/28/07

  2. Outline • Introduction • PROOF benchmarks • Our views on Tier3 • A Multilayer Condor System • PROOF and Condor’s COD • The I/O Queue • Data Redistribution in Xrootd • Outlook and Plans

  3. PROOF/XROOTD • When the data comes, it will not be possible for a physicist to do analysis with ROOT on a single node, due to the large data volumes • We need to move to a model that allows parallel processing for data analysis, i.e. distributed analysis • As far as software for distributed analysis goes, US ATLAS is going for the Xrootd/PROOF system • Xrootd is a set of tools for serving data, maintained by SLAC, proven to support up to 1000 nodes with no scalability problems within this range • PROOF (the Parallel ROOT Facility, CERN) is an extension of ROOT allowing transparent analysis of large sets of ROOT files in parallel on compute clusters or multi-core computers (a minimal usage sketch follows below) • See Sergey Panitkin's talk at the PROOF workshop at CERN on Thursday overviewing ATLAS efforts/experience
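To make the PROOF model above concrete, here is a minimal usage sketch (Python via PyROOT), assuming a working Xrootd/PROOF pool; the master host name, file path and selector name are placeholders, not part of the actual BNL/Wisconsin setup.

```python
# Minimal PROOF usage sketch (PyROOT); host names, paths and the selector are hypothetical.
import ROOT

# Connect to the PROOF master of the pool.
proof = ROOT.TProof.Open("proofmaster.example.edu")

# Build a chain of ROOT files served by the Xrootd pool (add more files as needed).
chain = ROOT.TChain("CollectionTree")
chain.Add("root://redirector.example.edu//store/user/analysis/ntuple_001.root")

# Run the chain on PROOF; the TSelector is compiled on the workers with ACLiC ("+").
chain.SetProof()
chain.Process("MySelector.C+")
```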

  4. PROOF-enabled facility: PROOF in a Slide. [Architecture diagram: a client connects to a top master, which delegates to sub-masters in each geographical domain; each sub-master drives a set of workers with access to a MSS; commands and scripts flow down, and the list of output objects (histograms, …) flows back to the client.] PROOF: a dynamic approach to end-user HEP analysis on distributed systems, exploiting the intrinsic parallelism of HEP data. Analysis Facility, Tier3.

  5. The End Point: Scalability. [Scaling plot, courtesy of the PROOF team.]

  6. Some Technical Details • Structure of a PROOF pool: • Redirector • Worker • Supervisor • Procedure of a PROOF job: • The user submits the PROOF job • The redirector finds the exact location of each file • The workers validate each file • The workers process the ROOT files • The master collects the results and sends them to the user • The user makes the plots • Packetizers. They work like job schedulers: • TAdaptivePacketizer (the default one, with dynamic packet size) • TPacketizer (an optional one, with fixed packet size) • TForceLocalPacketizer (a special one, with no network traffic between workers; workers only deal with the files stored locally) To be optimized for the Tier3 (a selection sketch follows below)
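As an illustration of how the packetizer could be chosen for this Tier3 tuning, here is a hedged sketch; PROOF_Packetizer is the standard PROOF parameter for selecting the packetizer class, while the host name is a placeholder.

```python
# Sketch: selecting a PROOF packetizer (host name is hypothetical).
import ROOT

proof = ROOT.TProof.Open("proofmaster.example.edu")

# The default is TAdaptivePacketizer (dynamic packet size); switch to the
# fixed-packet-size TPacketizer by setting the PROOF_Packetizer parameter.
# The choice takes effect for the next Process() call.
proof.SetParameter("PROOF_Packetizer", "TPacketizer")
```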

  7. Xrootd test farm at the ACF (BNL) • 10 machines allocated so far for the Xrootd test farm • Two dual-core Opteron CPUs at 1.8 GHz per node • 8 GB RAM per node • 4x 500 GB SATA drives per node, configured as a 2 TB partition • Gigabit network • 5-node configuration used for tests • 1 redirector + 4 data servers • 20 CPU cores • ~10 TB of available disk space • Behind the ACF firewall, i.e. visible from the ACF only • 2 people involved in setup, installation, configuration, etc. (~0.25 FTE)

  8. Xrootd/PROOF Tests at BNL • Evaluation of Xrootd as a data-serving technology • Comparison to dCache and NFS servers • Athena single-client performance with AODs • I/O optimization for dCache and Xrootd • Athena TAG-based analysis performance studies • Athena scalability studies with AODs • Evaluation of Xrootd/PROOF for ROOT-based analyses • Proof-of-principle tests (factor of N scaling) • "Real" analyses (Cranmer, Tarrade, Black, Casadei, Yu, ...) • HighPtView, Higgs… • Started evaluation of different PROOF packetizers • Evaluation and tests of the monitoring and administrative setup • Integration with pathena and ATLAS DDM (T. Maeno) • Disk I/O benchmarks, etc.

  9. Integration with Atlas DDM Tested by Tadashi Maeno (See demonstration tomorrow)

  10. PROOF test farms at GLOW-ATLAS • Big pool • 1 redirector + 86 computers • 47 with AMD 4x2.0 GHz cores, 4 GB memory • 39 with Pentium 4 2x2.8 GHz, 2 GB memory • We use just the local disk for performance tests • Only one PROOF worker runs on each node • Small pool A • 1 redirector + 2 computers • 4x AMD 2.0 GHz cores, 4 GB memory, 70 GB disk • Best performance with 8 workers running on each node • Small pool B • 1 redirector + 2 computers • 8x Intel 2.66 GHz cores, 16 GB memory, 8x750 GB on RAID 5 • Best performance when 8 workers run on each node; mainly for high-performance tests

  11. Xrootd/PROOF Tests at GLOW-ATLAS (jointly with the PROOF team) • Focused on the needs of a university-based Tier3 • Dedicated farms for data analysis, including detector calibration and performance, and physics analysis with high-level objects • Various performance tests and optimizations • Performance in various hardware configurations • Response to different data formats, volumes and file multiplicities • Understanding the system with multiple users • Developing new ideas with the PROOF team • Tests and optimization of packetizers • Understanding the complexities of the packetizers

  12. PROOF test webpage • http://www-wisconsin.cern.ch/~nengxu/proof/

  13. The Data Files • The ROOT version: • URL: http://root.cern.ch/svn/root/branches/dev/proof • Repository UUID: 27541ba8-7e3a-0410-8455-c3a389f83636 • Revision: 21025 • Benchmark files: • Big benchmark files (900 MB) • Medium benchmark files (400 MB) • Small benchmark files (100 MB) • ATLAS format files: • EV0 files (50 MB)

  14. The Data Processing Settings • Benchmark files (provided by the PROOF team): • With ProcOpt.C (read 25% of the branches) • With Pro.C (read all the branches) • ATLAS format files (H DPD): • With EV0.C • Memory refresh: • After each PROOF job, the Linux kernel keeps the data cached in physical memory. When we process the same data again, PROOF will read from memory instead of disk. In order to see the real disk I/O in the benchmark, we have to clean up the memory after each test (see the sketch below).
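A minimal sketch of such a memory-refresh step, assuming the benchmarks run on Linux with root privileges; the slide does not specify the exact mechanism used, and dropping the page cache through /proc is just one common way to do it.

```python
# Sketch: flush the Linux page cache between benchmark runs so that the next
# PROOF job really reads from disk. Requires root privileges (Linux >= 2.6.16).
import subprocess

def refresh_memory():
    # Flush dirty pages to disk, then drop page cache, dentries and inodes.
    subprocess.call("sync", shell=True)
    subprocess.call("echo 3 > /proc/sys/vm/drop_caches", shell=True)

if __name__ == "__main__":
    refresh_memory()
```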

  15. What can we see from the results? • How much of each resource PROOF jobs need: • CPU • Memory • Disk I/O • How PROOF jobs use those resources: • How do they use a multi-core system? • How much data do they load into memory? • How fast do they load it into memory? • Where do the data go after processing? (Cached memory)

  16. Benchmark files, big size, read all the data. [Plots of cached memory (MB), disk I/O (KB/s), CPU usage (%) and memory usage (%) versus number of workers: 1, 2, 4, 6, 8, 9, 10.] The jobs were running on a machine with an 8-core Intel CPU at 2.66 GHz, 16 GB DDR2 memory and 8 disks on RAID 5.

  17. Benchmark files, big size, read all the data. [Plots of cached memory (MB), disk I/O (KB/s), CPU usage (%) and memory usage (%) versus number of workers: 1, 2, 3, 4.] The jobs were running on a machine with an 8-core Intel CPU at 2.66 GHz, 16 GB DDR2 memory and a SINGLE DISK.

  18. Benchmark files, big size, read all the data. [Plots of disk I/O (KB/s), cached memory (MB), CPU usage (%) and memory usage (%) versus number of workers: 1, 2, 4, 6, 8, 9, 10.] The jobs were running on a machine with an 8-core Intel CPU at 2.66 GHz, 16 GB DDR2 memory and 8 disks on RAID 5. Without memory refresh.

  19. An Overview of the Performance Rate. [Plot of average processing speed (events/sec) versus number of workers.] All the tests were run on the same machine using the default packetizer. Using the Xrootd preload function seems to work well. One should not start more than 2 workers on a single disk…

  20. Our Views on a Tier3 at GLOW Putting PROOF into Perspective

  21. Main Issues to Address • Network traffic • Avoiding empty CPU cycles • Urgent need for CPU resources • Bookkeeping, management and processing of large amounts of data Core Technologies • CONDOR • Job management • MySQL • Bookkeeping and file management • XROOTD • Storage • PROOF • Data analysis

  22. One Possible Way to Go... GRID • Computing pool: computing nodes with small local disks. • The gatekeeper takes the production jobs from the Grid and submits them to the local pool. • Batch system: normally Condor, PBS, LSF, etc. • The users submit their own jobs to the local pool. Heavy I/O load. • Dedicated PROOF pool: CPU cores + big disks. • Storage pool: centralized storage servers (NFS, Xrootd, dCache, CASTOR). CPUs are idle most of the time.

  23. The Way We Want to Go... GRID • The gatekeeper takes the production jobs from the Grid and submits them to the local pool. • Local job submission: users' own jobs go to the whole pool. • Pure computing pool: CPU cores + small local disks. • Xrootd pool: CPU cores + big disks. Less I/O load. • PROOF job submission: users' PROOF jobs go to the Xrootd pool. • Storage pool: very big disks.

  24. A Multi-layer Condor System • PROOF Queue (Condor's COD?): for PROOF jobs; covers all the CPUs; does not affect the Condor queue; jobs get the CPU immediately. PROOF job submission: users' PROOF jobs go to the Xrootd pool. • Fast Queue: for high-priority private jobs; no limit on the number of jobs; with a run-time limitation; covers all the CPUs, half with suspension and half without; highest priority. • I/O Queue: for I/O-intensive jobs; no limit on the number of jobs; no run-time limitation; covers the CPUs in the Xrootd pool; higher priority. Local job submission: users' own jobs go to the whole pool. • Local Job Queue: for private jobs; no limit on the number of jobs; no run-time limitation; covers all the CPUs; higher priority. The gatekeeper takes the production jobs from the Grid and submits them to the local pool. • Production Queue: no pre-emption; covers all the CPUs; maximum 3 days; no limit on the number of jobs. Suspension of ATHENA jobs is well tested; currently testing suspension of PANDA jobs.

  25. PROOF + Condor's COD Model • Use Condor's Computing on Demand (COD) to free up nodes (in ~2-3 sec) that are running long jobs in the local Condor system. • A lot of discussion with the PROOF team and Miron about the integration of PROOF and Condor scheduling; we may not need COD in the end. [Diagram: a Condor + Xrootd + PROOF pool running long production or local Condor jobs; the Condor master handles COD requests and PROOF requests; the Xrootd redirector and the PROOF jobs use the local storage on each machine.]

  26. Xrootd File Tracking System Framework (to be integrated into the LRC DB). [Diagram: each data server (dataserver1, dataserver2, …) runs Xrootd_sync.py over its local DATA area (Local_xrd.sh) and reports to a central database; a client and the Xrootd redirector complete the picture.] A sketch of such a sync script follows below.
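As a rough illustration of what a per-server sync script such as Xrootd_sync.py might do, here is a hedged sketch that walks the local Xrootd data area and registers each file in a MySQL table; the table layout, column names, paths and connection parameters are assumptions for illustration, not the actual schema.

```python
# Hypothetical sketch of an Xrootd_sync.py-style registration script.
# Table name, columns, paths and credentials are illustrative assumptions.
import os
import socket
import MySQLdb

DATA_DIR = "/data/xrd"  # local Xrootd data area (assumed path)

def register_local_files():
    host = socket.gethostname()
    db = MySQLdb.connect(host="dbserver.example.edu", user="xrdsync",
                         passwd="secret", db="file_tracking")
    cur = db.cursor()
    for dirpath, dirnames, filenames in os.walk(DATA_DIR):
        for name in filenames:
            path = os.path.join(dirpath, name)
            # Record (logical file name, data server, size); REPLACE keeps the
            # table consistent when the script is re-run.
            cur.execute("REPLACE INTO files (lfn, server, size) VALUES (%s, %s, %s)",
                        (path, host, os.path.getsize(path)))
    db.commit()
    db.close()

if __name__ == "__main__":
    register_local_files()
```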

  27. The I/O Queue [Diagram: MySQL database server, Xrootd pool (CPU cores + big disks), Condor master, submitting node.] • 0. The tracking system provides file locations in the Xrootd pool. • 1. The submission node asks the MySQL database for the input file location. • 2. The database provides the location of the file and also its validation info. • 3. The submission node adds the location to the job requirements and submits to the Condor system. • 4. Condor sends the job to the node where the input file is stored. • 5. The node runs the job and puts the output file on the local disk. (A sketch of steps 1-4 follows below.)
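To illustrate steps 1-4 above, here is a hedged sketch of the submission side: look up the input file's data server in the tracking database, then pin the Condor job to that machine with a Requirements expression. The table, column, host names and wrapper script are placeholders; the real I/O queue implementation may differ.

```python
# Sketch of the I/O queue submission step (steps 1-4); names are hypothetical.
import MySQLdb

def locate_file(lfn):
    """Ask the tracking database which Xrootd data server holds the file."""
    db = MySQLdb.connect(host="dbserver.example.edu", user="reader",
                         passwd="secret", db="file_tracking")
    cur = db.cursor()
    cur.execute("SELECT server FROM files WHERE lfn = %s", (lfn,))
    row = cur.fetchone()
    db.close()
    return row[0] if row else None

def write_submit_file(lfn, executable="run_analysis.sh"):
    server = locate_file(lfn)
    # Condor submit description: the requirements line sends the job to the
    # machine where the input file is stored, so it can be read locally.
    submit = ('universe     = vanilla\n'
              'executable   = %s\n'
              'arguments    = %s\n'
              'requirements = (Machine == "%s")\n'
              'queue\n') % (executable, lfn, server)
    open("io_queue.submit", "w").write(submit)

if __name__ == "__main__":
    write_submit_file("/data/xrd/esd/ESD_0001.root")
```

The resulting io_queue.submit description would then be handed to condor_submit.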

  28. I/O Queue Tests • Direct Access: jobs go to the machines where the input files reside; each job accesses the ESD files directly and converts them to CBNTAA files; the output file is copied to Xrootd on the same machine using xrdcp; each file has 250 events. • xrdcp: jobs go to any machine, not necessarily the one that has the input files; input and output files are copied via xrdcp to/from the Xrootd pool; the input ESD file is converted to CBNTAA. • cp_nfs: jobs go to any machine; input and output files are copied to/from NFS; the input ESD file is converted to CBNTAA. (A sketch of the xrdcp mode follows below.)
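For the xrdcp mode, a hedged sketch of the copy-in / convert / copy-out wrapper is shown below; the redirector host, paths and conversion script are placeholders, and the actual test jobs are not reproduced here.

```python
# Sketch of the "xrdcp" access mode; the redirector, paths and the conversion
# script name are hypothetical placeholders.
import os
import subprocess

REDIRECTOR = "root://redirector.example.edu/"

def run_xrdcp_job(input_lfn, output_lfn, workdir="/tmp"):
    local_in = os.path.join(workdir, os.path.basename(input_lfn))
    local_out = os.path.join(workdir, "cbntaa.root")

    # 1. Copy the ESD input from the Xrootd pool to the local scratch area.
    subprocess.call(["xrdcp", REDIRECTOR + input_lfn, local_in])

    # 2. Convert ESD -> CBNTAA (placeholder for the actual Athena job).
    subprocess.call(["esd_to_cbntaa.sh", local_in, local_out])

    # 3. Copy the output back into the Xrootd pool.
    subprocess.call(["xrdcp", local_out, REDIRECTOR + output_lfn])
```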

  29. I/O Queue Test Configuration • Input file (ESD) size: ~700 MB • Output file (CBNTAA) size: ~35 MB • Each machine has ~10 ESD files • 42 running nodes • 168 CPU cores

  30. I/O Queue Test Results. [Plot versus number of jobs.] Time saved per job: ~230 sec.

  31. I/O Queue Test Results. [Plot versus number of jobs.]

  32. Data Redistribution in Xrootd [Diagram annotations: "This one is down"; "New machine to replace the bad one".] • When and why do we need data redistribution? • Case 1: One of the data servers is dead and all the data on it got lost; replace it with a new data server. • Case 2: When we extend the Xrootd pool, we add new data servers into the pool. When new data comes, all the new data will go to the new server because of the load-balancing function of Xrootd. The problem is that if we run PROOF jobs on the new data, all the PROOF jobs will read from this new server.

  33. An Example of Xrootd File Distribution. [Histogram of the number of files per computer node in the Xrootd pool.] All the files were copied through the Xrootd redirector. One node happened to be filled with most of the files in one dataset; another machine was down.

  34. PROOF Performance on this Dataset. [Performance plot.] Here is the problem.

  35. After File Redistribution. [Histogram of the number of files per computer node in the Xrootd pool.]

  36. Number of Workers Accessing Files. [Plots of the number of workers accessing files versus running time, before and after file redistribution.]

  37. PROOF Performance after Redistribution

  38. The Implementation of DBFR • We are working on a MySQL + Python based system • We are trying to integrate this system into the LRC database • Hopefully, this system can be implemented at the PROOF level, because PROOF already works with datasets

  39. Summary • Xrootd/PROOF is an attractive technology for ATLAS physics analysis, especially for the post-AOD phase • The work of understanding this technology is in progress at BNL and Wisconsin • Significant experience has been gained • Several ATLAS analysis scenarios were tested, with good results • Tested the machinery on HighPtView, CBNT, and EV for Higgs • Integration with DDM was tested • Monitoring and farm-management prototypes were tested • Scaling performance is under test • We think PROOF is a viable technology for a Tier3 • Testing Condor's multi-layer system and COD, Xrootd file tracking and data redistribution, and the I/O queue • Need to integrate the developed DB with the LRC • Need to resolve the issue of multi-user utilization of PROOF

  40. Additional Slides

  41. The Basic Idea of DBFR • Register the location of all the files in every dataset in the database (MySQL) • With this information, we can easily get the file distribution of each dataset • Calculate the average number of files each data server should hold • Get a list of files which need to move out • Get a list of machines which have fewer files than the average • Match these two lists and move the files • Register the new location of those files (A sketch of this logic follows below.)
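A hedged sketch of this rebalancing logic, assuming the same hypothetical MySQL tracking table as in the earlier sketches; the actual DBFR implementation may organize the queries and the file moves differently.

```python
# Sketch of the DBFR rebalancing idea; schema, credentials and the mover
# script are illustrative assumptions.
import subprocess
import MySQLdb

def rebalance(dataset):
    db = MySQLdb.connect(host="dbserver.example.edu", user="dbfr",
                         passwd="secret", db="file_tracking")
    cur = db.cursor()
    cur.execute("SELECT lfn, server FROM files WHERE dataset = %s", (dataset,))
    rows = cur.fetchall()

    # Group the files of the dataset by data server and compute the average.
    per_server = {}
    for lfn, server in rows:
        per_server.setdefault(server, []).append(lfn)
    average = len(rows) / float(len(per_server))

    # Servers above the average give up their surplus files;
    # servers below the average are the candidate destinations.
    surplus, targets = [], []
    for server, files in per_server.items():
        if len(files) > average:
            surplus.extend(files[int(average):])
        else:
            targets.append(server)

    # Match the two lists, move each file and register its new location.
    for i, lfn in enumerate(surplus):
        dest = targets[i % len(targets)]
        subprocess.call(["move_xrd_file.sh", lfn, dest])  # placeholder mover
        cur.execute("UPDATE files SET server = %s WHERE lfn = %s", (dest, lfn))
    db.commit()
    db.close()
```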

  42. Benchmark files, big size, read all the data. [Plots of cached memory (MB), disk I/O (KB/s), CPU usage (%) and memory usage (%) versus number of workers: 1, 2, 4, 5, 6, 7, 8, 9.] The jobs were running on a machine with an 8-core Intel CPU at 2.66 GHz, 16 GB DDR2 memory and 8 disks on RAID 5, with Xrootd preload. 11/29/2007

  43. Performance Rate. [Plot of average processing speed (events/sec) versus number of workers.] • Xrootd preloading doesn't change the disk throughput much • Xrootd preloading helps to increase the top speed by ~12.5% • When we use Xrootd preload, disk I/O reaches ~60 MB/sec • CPU usage reached 60% • The best performance is achieved when the number of workers is less than the number of CPUs (6 workers give the best performance)

  44. Benchmark files, big size, read 25% of the data. [Plots of cached memory (MB), disk I/O (KB/s), CPU usage (%) and memory usage (%) versus number of workers: 1, 2, 4, 6, 7, 8, 9.] The jobs were running on a machine with an 8-core Intel CPU at 2.66 GHz, 16 GB DDR2 memory and 8 disks on RAID 5.

  45. Performance Rate. [Plot of average processing speed (events/sec) versus number of workers.] • Disk I/O reaches ~60 MB/sec, which is 5 MB/sec more than when reading all the data • CPU usage reached 65% • 8 workers give the best performance
