
Increasing Tape Efficiency

HEPiX Fall 2008, Taipei
Nicola Bessone, German Cancio, Steven Murray (presenter), Giulia Taurelli


Presentation Transcript


Slide 1: Increasing Tape Efficiency
Steven Murray, October 2008
HEPiX Fall 2008, Taipei
Nicola Bessone, German Cancio, Steven Murray (presenter), Giulia Taurelli

Slide 2: Contents
• Tape efficiency project
• Problem areas
• What has been done
• What is under development
• Roadmap
• Summary

Slide 3: Tape Efficiency Project
• All functionality dealing directly with storage on, and management of, tapes:
  • Volume database
  • Migrations/recalls
  • Tape drive scheduling
  • Low-level tape positioning and read/write
• Team is from IT/DM
• Contributions from IT/FIO

Slide 4: Problem Areas
• Read/write more data per tape mount (work done)
  • Random-access users have produced an average of only 1.5 files accessed per mount
• Use a more efficient tape format (current work)
  • The current tape format does not deal efficiently with small files
• More efficient tape positioning (more ideas)
• More efficient tape-to-drive mappings (more ideas)

Slide 5: What has been done

Slide 6: Read/Write More Per Mount
• Recall/migration policies
  • "Freight train" approach: hold back requests based on the amount of data accumulated and the elapsed time
• Production managers rule
  • Production managers plan relatively large workloads for CASTOR
  • Access control lists give production managers a relatively larger share of resources
  • User- and group-based priorities encourage users to work with their production managers
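The "freight train" hold-back policy can be sketched as a simple request batcher: recall requests accumulate until either a data-volume threshold or an elapsed-time threshold is reached, and only then is a tape mount triggered. The class name, fields and threshold defaults below are illustrative assumptions, not CASTOR's actual policy code.

```python
import time

class RecallBatcher:
    """Hold back tape recall requests until enough data has accumulated
    or enough time has passed (the 'freight train' approach)."""

    def __init__(self, min_bytes=200 * 1024**3, max_wait_s=3600):
        self.min_bytes = min_bytes      # e.g. mount only for >= 200 GB
        self.max_wait_s = max_wait_s    # e.g. but never wait over an hour
        self.pending = []               # list of (file_id, size_bytes)
        self.first_request_at = None

    def add(self, file_id, size_bytes):
        """Queue a recall request; start the wait clock on the first one."""
        if self.first_request_at is None:
            self.first_request_at = time.monotonic()
        self.pending.append((file_id, size_bytes))

    def should_mount(self):
        """True once either threshold (volume or elapsed time) is crossed."""
        if not self.pending:
            return False
        total = sum(size for _, size in self.pending)
        waited = time.monotonic() - self.first_request_at
        return total >= self.min_bytes or waited >= self.max_wait_s

    def flush(self):
        """Hand the whole batch to one tape mount and reset."""
        batch, self.pending = self.pending, []
        self.first_request_at = None
        return batch
```

Either threshold alone would misbehave: a pure volume threshold could starve a small request forever, while a pure time threshold still allows near-empty mounts; combining them bounds both the wasted mounts and the worst-case latency.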

Slide 7: Efficiency and Repack
• Reading the current ANSI AUL format is approximately twice as fast as writing it
• Repack uses CASTOR as a cache
  • The cache supports asymmetric read/write drive allocation
• Repack generates load equivalent to one LHC experiment, and as such is a good test run for CASTOR

Slide 8: Repack Measurements
• 4 drives reading
• 7 drives writing
• 400 MB/s

Slide 9: What is under development

Slide 10: Writing Small Files is Slow
Graphs copied from the "Repack at CERN Usage and Outlook" presentation prepared by Tim Bell and Gordon Lee for the CASTOR external operations face-to-face meeting, 10-11 June 2008.

Slide 11: Why Small Files are Slow
• ANSI AUL format: header, 1 data file, trailer
  • Layout on tape: hdr1 hdr2 uh1 [tm] data file [tm] eof1 eof2 utl1 [tm]
• There are 3 tape marks per file
• 2 to 3 seconds per tape mark
• Assuming network bandwidth = 100 MB/s:
  • Loss per file = 3 × 3 s × 100 MB/s = 900 MB
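The loss figure above is dead-time arithmetic: while the drive writes tape marks it streams no data, so each file forfeits the data that could have been transferred during those seconds. A minimal sketch of the calculation (the function name and default values mirror the slide's assumptions):

```python
def tape_mark_loss_mb(tape_marks_per_file=3,
                      seconds_per_mark=3.0,
                      bandwidth_mb_per_s=100.0):
    """Megabytes of streaming capacity lost to tape-mark dead time
    for a single file: marks/file x seconds/mark x MB/second."""
    return tape_marks_per_file * seconds_per_mark * bandwidth_mb_per_s

# 3 marks x 3 s x 100 MB/s = 900 MB of lost transfer per file,
# which dwarfs the payload when files are only a few MB each.
```

Note the units: seconds multiplied by MB/s yields megabytes, which is why the loss is a fixed per-file cost rather than a rate; it is amortised only by making each file (or file set) larger.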

Slide 12: New Tape Format
• Multi-file block format within the ANSI AUL format: header, N data files, trailer
  • Layout on tape: hdr1 hdr2 uh1 [tm] data file 1 … data file n [tm] eof1 eof2 utl1 [tm]
• A header per block for "self description"
  • Each 256 KB data block written to tape includes a 1 KB header
• 3 tape marks per N files
• N will take into account:
  • A configurable maximum number of files
  • A configurable maximum size
  • A configurable maximum amount of time to wait
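The choice of N can be sketched as a planning step that groups queued files into multi-file sets, cutting a new set whenever a file-count or byte-size limit would be exceeded (the time-based limit is omitted here for brevity). Function and parameter names are illustrative, not CASTOR's real code.

```python
def plan_multifile_sets(files, max_files, max_bytes):
    """Group files into multi-file sets for the new tape format.

    Each set is written with one header/trailer and 3 tape marks,
    instead of 3 tape marks per file as in plain ANSI AUL.
    'files' is a list of (name, size_bytes) tuples.
    """
    sets, current, current_bytes = [], [], 0
    for name, size in files:
        # Cut a new set if adding this file would break either limit.
        if current and (len(current) >= max_files
                        or current_bytes + size > max_bytes):
            sets.append(current)
            current, current_bytes = [], 0
        current.append((name, size))
        current_bytes += size
    if current:
        sets.append(current)
    return sets
```

With small files the sets fill up to `max_files`, so the 3-tape-mark penalty is paid once per N files; with large files the byte limit dominates and behaviour degrades gracefully toward one file per set, i.e. the old format.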

Slide 13: Block Header Format

Slide 14: Predicted Tape Format Performance
Graph copied from "Repack and Tape Label Options" by Tim Bell, Charles Curran and Gordon Lee; presentation given at CERN on 27 June 2008. http://indico.cern.ch/conferenceDisplay.py?confId=37127

Slide 15: Current Architecture
[Diagram: stagers with their disk servers, VDQM, RTCPClientD and the Tape Gateway, and RTCPD on the tape server; arrows distinguish control messages from data]
1. RTCPClientD asks VDQM to schedule a drive
2. VDQM allocates a drive
3. VDQM submits the data transfer job to RTCPD
4. RTCPD transfers data to/from disk/tape based on the file list given by RTCPClientD
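The four numbered steps can be mimicked with a toy scheduler: requests queue at VDQM, which matches them to free drives and submits transfer jobs to RTCPD. Class and method names echo the component names on the slide but are purely illustrative; none of this is CASTOR's real interface.

```python
class ToyRTCPD:
    """Stand-in for RTCPD: performs the transfer (step 4)."""
    def submit(self, drive, volume_id):
        # In reality RTCPD moves data between disk servers and the tape
        # drive according to a file list from RTCPClientD.
        return f"transfer {volume_id} on {drive}"

class ToyVDQM:
    """Stand-in for VDQM: queues requests and allocates drives."""
    def __init__(self, free_drives):
        self.free_drives = list(free_drives)
        self.queue = []

    def request_drive(self, volume_id):
        # Step 1: RTCPClientD asks VDQM to schedule a drive.
        self.queue.append(volume_id)

    def schedule(self, rtcpd):
        # Steps 2-3: allocate free drives and submit jobs to RTCPD.
        jobs = []
        while self.queue and self.free_drives:
            volume = self.queue.pop(0)
            drive = self.free_drives.pop(0)
            jobs.append(rtcpd.submit(drive, volume))
        return jobs
```

The point of the sketch is the separation of concerns: scheduling (VDQM) is decoupled from data movement (RTCPD), which is what later allows the Tape Gateway and Tape Aggregator to slot in without changing the drive-allocation logic.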

Slide 16: New Architecture
[Diagram: stagers with their disk servers, VDQM, Tape Gateways, and a Tape Aggregator in front of RTCPD on the tape server]
• Tape Gateway replaces RTCPClientD
• Tape Aggregator wraps RTCPD
• Tape Aggregator generates the multi-file block format stream for RTCPD
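The stream the aggregator produces consists of fixed-size tape blocks, each carrying a 1 KB self-describing header ahead of the payload, as introduced on slide 12. The sketch below builds one such block; the header field names and text encoding are my own placeholders, since the real layout is the one defined on the "Block Header Format" slide.

```python
BLOCK_SIZE = 256 * 1024            # one tape block, per slide 12
HEADER_SIZE = 1024                 # 1 KB self-describing header
PAYLOAD_SIZE = BLOCK_SIZE - HEADER_SIZE

def make_block(payload, file_seq, block_seq):
    """Build one 256 KB tape block: 1 KB header + zero-padded payload.

    Hypothetical header fields: format version, file sequence number
    within the multi-file set, block sequence number within the file,
    and the payload length (needed to strip padding on read-back).
    """
    if len(payload) > PAYLOAD_SIZE:
        raise ValueError("payload exceeds one block")
    header = f"VER=1;FSEQ={file_seq};BSEQ={block_seq};LEN={len(payload)}"
    header_bytes = header.encode("ascii").ljust(HEADER_SIZE, b"\0")
    return header_bytes + payload.ljust(PAYLOAD_SIZE, b"\0")
```

Because every block identifies its own file and position, a reader (or a recovery tool scanning a damaged tape) can locate file boundaries without consulting the trailer labels, which is the "self description" the slide refers to.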

Slide 17: Roadmap
• End Q3 / beginning Q4 2008:
  • Repack goes into full production; at least 20 drives will be dedicated to repack, expecting 700 MB/s
  • Feasibility study and draft architectural design for the new tape aggregator, tape gateway, and the protocol and/or DB between them
  • Proposal for the new tape and aggregation format
• End Q1 2009:
  • First functional prototype of the new tape aggregator, gateway and tape DB
• End Q2 2009:
  • First limited production (write) usage, for repacking tapes only (with extended verification of repacked data)
  • Full production read support
• Q3 2009:
  • Full production usage for repack; first limited/controlled production (write) usage for non-repack migrations
  • First integrated prototype (replacing RTCPD)
• Q4 2009 / Q1 2010:
  • Integrated version in production; RTCPD dropped

Slide 18: Summary
• We have improved tape efficiency by increasing the amount of data read/written per mount
• Repack uses CASTOR as a cache to support asymmetric read/write drive allocation
• We are currently developing a new tape format to increase write performance
• We can still do more…
