
Update on H1 and ZEUS Computing for HERA-II


Presentation Transcript


  1. Update on H1 and ZEUS Computing for HERA-II. Rainer Mankel, DESY Hamburg. HTASC Meeting, CERN, 2-Oct-2003

  2. Outline • HERA-II Computing Challenges • Key Numbers • Data Processing • Mass Storage • Glance at Application Software

  3. HERA Collision Experiments: H1 & ZEUS • HERA is currently the only operational collider in Europe • H1 & ZEUS are general purpose experiments for ep collisions • HERMES has much smaller computing requirements, at least until 2005 • HERA-B has finished data taking • About 400 physicists per experiment • The HERA-II run targets a 4-5 fold increase in luminosity compared to HERA-I [Figure: neutral current event from the 2002 run; e 27 GeV, p 920 GeV]

  4. HERA-II Luminosity • Luminosity upgrade by increasing the specific luminosity at similar currents • Startup of machine and experiments has been slow because of unexpected background problems • proper modifications are in place now • HERA has already demonstrated a factor of three increase in specific luminosity, as well as positron polarization • Long runs planned, e.g. 10 months in 2004 • Considerable new challenges to HERA computing [Figure: delivered luminosity expectations for 2004-2006]

  5. Changing Paradigms • HERA has never had a multi-year shutdown • The transition from HERA-I to HERA-II (luminosity upgrade) took place during a 9-month break • HERA is worldwide the only ep collider, a unique machine; data taken during the HERA-II run will provide the last statement on many physics questions, at least for a very long time • Major paradigm shifts in computing in the last three years • transition from SMP to Intel-based farms • transition to commodity storage [Note: ZEUS-related numbers are very fresh and highly preliminary]

  6. Computing Challenge of a HERA Experiment • O(1 M) detector channels • 50 M → 250 M delivered events/year • tape storage increase of 30-60 TB/year • disk storage increase of 3-16 TB/year • ~450 users • workloads: MC production, data processing/reprocessing, data mining, interactive data analysis

  7. Event Samples • Increased luminosity will require a more specific online event selection • Both H1 and ZEUS refine their trigger systems to be more selective • aim: reduction of the trigger cross section by at least 60% • Both H1 and ZEUS aim for 100 million events per year on tape [Figure: event sample projection (ZEUS)]

  8. Transition to Commodity Components • Lesson learned from HERA-I: the use of commodity hardware and software gives access to enormous computing power, but much effort is required to build reliable systems • In large systems, there will always be a certain fraction of servers which are down or unreachable, disks which are broken, and files which are corrupt • it is impossible to operate a complex system on the assumption that normally all systems are working • this is even more true for commodity hardware • Ideally, the user should not even notice that a certain disk has died, etc.; jobs should continue • Need redundancy at all levels
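
To make the "jobs should continue" point concrete, here is a minimal sketch (not from the original slides) of a file-open helper that retries and fails over between replica locations, so that an analysis job survives a dead cache server or a broken disk; the replica paths and retry policy are purely illustrative assumptions.

```python
import time

# Hypothetical replica locations of the same cached file on different servers.
REPLICAS = [
    "/cache/server01/run2002/evt_000123.root",
    "/cache/server02/run2002/evt_000123.root",
    "/cache/server03/run2002/evt_000123.root",
]

def open_with_failover(replicas, retries=3, wait=5.0):
    """Try each replica in turn; sweep the list a few times before
    giving up, so a single unreachable server does not kill the job."""
    for _ in range(retries):
        for path in replicas:
            try:
                return open(path, "rb")   # first healthy copy wins
            except OSError:
                continue                  # server down, disk broken, file corrupt ...
        time.sleep(wait)                  # back off, then sweep again
    raise RuntimeError("no replica of the requested file is reachable")
```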

  9. Processing Power for Reconstruction & Batch Analysis • Linux/Intel is the strategic platform; dual-processor systems are standard • Regular yearly upgrades planned: 10 (H1) and 14 dual units (ZEUS) • in addition, an upgrade of the ZEUS reconstruction farm is envisaged for 2004 • Batch systems in use: OpenPBS (H1) and LSF 4.1 (ZEUS), being upgraded to 5.0
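
For illustration only, a hedged sketch of how jobs might be handed to the two batch systems from a small wrapper, using their standard command-line clients (`qsub` for OpenPBS, `bsub` for LSF); the queue name and job script are invented.

```python
import subprocess

JOB_SCRIPT = "run_reconstruction.sh"   # hypothetical job script

def submit(batch_system, queue="medium"):
    """Submit one job via the OpenPBS (H1) or LSF (ZEUS) command-line client."""
    if batch_system == "pbs":
        cmd = ["qsub", "-q", queue, JOB_SCRIPT]
    elif batch_system == "lsf":
        cmd = ["bsub", "-q", queue, "./" + JOB_SCRIPT]
    else:
        raise ValueError("unknown batch system: " + batch_system)
    subprocess.run(cmd, check=True)

# submit("pbs")   # H1-style submission
# submit("lsf")   # ZEUS-style submission
```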

  10. Compute Server Nodes

  11. Snapshot of the ZEUS Analysis Farm (Ganglia monitor) [plot: job spool free space, 9:00-12:00 am]

  12. Farm Hardware • Time for ATX towers is running out • New farm node standard: 1U dual-Xeon servers • Supermicro barebone servers in production • very dense concentration of CPU power (2 x 2.4 GHz per node) • installation by memory stick • Issues to watch: power consumption, heat / cooling [Photo: H1 simulation farm]

  13. Tape Storage Requirements • Main reasons for using tapes: • data volumes too large to be stored entirely on disk • media cost ratio of about 10:1 • data safety • integral part of a powerful mass storage concept • Expect about 120 TB yearly increase (H1: 34 TB, ZEUS: 52 TB) • not including duplication for backup purposes (e.g. raw data) • approaching the Petabyte scale by the end of HERA-II

  14. Mass Storage • DESY-HH uses 4 STK Powderhorn tape libraries (connected to each other) • in transition to the new 9940 type cartridges, which offer 200 GB (instead of 20 GB) • much more economical, but loading times increase • Need a caching disk layer to shield the user from tape handling effects • Middleware is important

  15. Cache Within a Cache Within a Cache: the Classical Picture • access time hierarchy: CPU (~10^-9 s) → primary cache → 2nd level cache → memory cache → disk controller cache → disk (~10^-3 s) → tape library (~10^2 s)

  16. dCache Picture • Disk files are only cached images of files in the tape library • Files are accessed via a unique path, regardless of server name etc. • optimized I/O protocols • [Diagram: CPU and main board (cache, 2nd level cache, memory cache, disk controller cache) → fabric → disk cache → tape library]

  17. dCache • Mass storage middleware, used by all DESY experiments • has replaced the ZEUS tpfs (1997) and SSF (2000) • joint development of DESY and FNAL (also used by CDF, MINOS, SDSS, CMS) • Optimised usage of the tape robot by coordinating read and write requests (read ahead, deferred writes) • allows building large, distributed cache server systems • Particularly intriguing features: • retry feature during read access: the job does not crash even if a file or server becomes unavailable (as already in ZEUS-SSF) • “write pool” used by the online chain (reduces the number of tape writes) • reconstruction reads RAW data directly from the disk pool (no staging) • grid-enabled; ROOT can also open dCache files directly • randomized distribution of files across the distributed file servers • avoids hot spots and congestion • scalable
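
Since the slide notes that ROOT can open dCache files directly, here is a minimal PyROOT sketch of what such an access looks like; it assumes a ROOT build with dcap support, and the door host and pnfs path are hypothetical.

```python
import ROOT

# Hypothetical dCache door and pnfs path: the file is addressed by its
# logical path, independent of which cache server actually holds it.
url = "dcap://dcache-door.example.desy.de:22125/pnfs/example.de/zeus/run2002/evt.root"

f = ROOT.TFile.Open(url)      # handled by ROOT's dCache plugin when dcap support is built in
if f and not f.IsZombie():
    f.ls()                    # quick check: list the file contents
    f.Close()
```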

  18. Disk Access Scaling Issues • The classical picture does not scale: server congestion, disk congestion • a sophisticated mass storage concept is needed to avoid bottlenecks • the dCache solves this problem elegantly [Figure: access patterns to the 1997-2002 data samples]

  19. [Diagram: dCache architecture; tape library with HSM → dCache layer of distributed cache servers → analysis platform → physicists]

  20. [Same diagram as the previous slide] • Built-in redundancy at all levels • On failure of a cache server, the requested file is staged automatically to another server

  21. Commodity File Servers • Affordable disk storage decreases the number of accesses to the tape library • ZEUS disk space (event data): begin 2000: 3 TB (FC+SCSI fraction: 100%); mid 2001: 9 TB (67%); mid 2002: 18 TB (47%) • the necessary growth is only possible with commodity components • commodity components need a “fabric” to cope with failures → dCache [Photos: commodity file servers; 22 disks, up to 2.4 TB; 12 disks, up to 2 TB; DELFI3 (2000-2002), “invented” by F. Collin (CERN); DELFI2 (2002- )]

  22. Commodity File Servers (cont’d) • New dCache file server hardware • 4 Linux front ends connected to 2 SCSI-to-IDE disk arrays • SCSI connection to the hosts • 12 drive bays with 200 GB disks • RAID5, leaving roughly one disk’s worth of capacity for parity: 11 x 200 GB ≈ 2.2 TB • 2 TB net capacity per array • copper gigabit ethernet • Better modularity than a PC solution with a built-in RAID controller • The first system went into production last Monday [Photo: ZEUS file server]

  23. I/O Performance with Distributed Cache Servers • [Ganglia plots: staging input and analysis output rates for the cache servers doener and bigos01-bigos07] • mean cache hit rate: 98%

  24. Simulation in H1: Mainly on 2 Sites (based on numbers from D. Lüke)

  25. Simulation in ZEUS: the Funnel System • Automated distributed production • production reached 240 M events in 2002 • funnel is an early computing grid

  26. Future Need for Monte Carlo • Modeling is based on the 1998-2002 simulation statistics in relation to the real data taken • on average, 5-10 Monte Carlo events are needed for each real-data event • with about 100 million real-data events per year on tape, this implies of order 0.5-1 billion simulated events per year • additional effects are expected from new detector components • considerable increase in simulation demand during HERA-II [Figure: ZEUS projection]

  27. Future MC Production Concepts • Both H1 & ZEUS face simulation demands that cannot be met with their present production concepts • exploration of grid-based production • a grid test bed based on the EDG kit is already running at DESY • collaboration with Queen Mary College in London (H1) • other partners are likely to join • remote job entry has been demonstrated

  28. A Glimpse at Software Development • Just a casual glance at two examples... • H1: ROOT-based object-oriented analysis framework • new object-oriented data model, based on ROOT trees • required a redesign of large parts of the analysis code • in production for normal data analysis since 2002 • ZEUS: ROOT-based client-server event display • purely object-oriented event display client • retrieves events on demand from a central server • technology chosen after building prototypes with Wired (Java) and ROOT (C++)
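
To give a flavour of ROOT-tree based analysis (the H1 framework's actual classes and data model are not reproduced here), a minimal PyROOT sketch with invented file, tree and branch names.

```python
import ROOT

# Hypothetical file, tree and branch names; events are stored as entries of a ROOT tree.
f = ROOT.TFile.Open("h1_dst_sample.root")
tree = f.Get("EventTree")

hist = ROOT.TH1F("q2", "Q^{2} of selected events", 100, 0.0, 5000.0)

for event in tree:                # PyROOT iterates over the tree entries
    if event.nElectrons > 0:      # hypothetical branches
        hist.Fill(event.Q2)

hist.Draw()
```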

  29. H1 [figure from Andreas Meyer (H1)]

  30. [figure from Andreas Meyer (H1)]

  31. New Client-Server Event Display (ZEUS)

  32. Integration of 2D/3D Views • Perspective views with hidden surface removal are useful for understanding the geometry, but do not show the event well • Straightforward “projections” of the 3D representation can be complex and confusing • Layered projections with proper ordering allow one to really analyze the event

  33. Client-Server Motivation • The event display needs access to the experiment’s event store (tag DB, event store, dCache) and online system • Idea: separate a portable lightweight client, which can run wherever ROOT can run, from a central geometry & event server on the DESY site, connected via the HTTP protocol (ROOT’s TWebFile) to pass the firewall • [Diagram: user workstations in the control room LAN, the on-site office LAN, and off-site/home WAN connect via HTTP to the central ZeVis server (1 dual-Xeon 2 GHz) in the DESY computer center, which accesses mass storage]
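
A minimal sketch of the client side of this idea: the display client only has to open a ROOT file over HTTP (TFile::Open with an http:// URL goes through ROOT's TWebFile), so a single outgoing HTTP connection suffices to pass the firewall. The server URL below is invented.

```python
import ROOT

# Hypothetical ZeVis event URL; an http:// URL makes TFile.Open use ROOT's TWebFile.
url = "http://zevis-server.example.desy.de/events/run45123_evt1047.root"

f = ROOT.TFile.Open(url)
if f and not f.IsZombie():
    f.ls()        # the event objects could now be read and drawn by the display client
    f.Close()
```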

  34. Internal Structure of the ZeVis Server • [Diagram] server built from Apache + ROOT module + PHP4, with a dispatcher (PHP4) and several event agents • geometry requests are served from geometry files produced by a geometry builder from the constants DB • single event requests go via the tag DB and a file request to the event store, followed by ROOT conversion • online event requests are served from the online reconstruction (online events) • EOTW requests serve the events of the week (PR)

  35. Server Performance • Less than 2 s average response time (without the server cache) • even faster for cached events • this includes event analysis and conversion to ROOT format, but excludes the network • Stress test: in reality, simultaneous requests will hardly occur at all • the server can sustain at least 4 requests in parallel without noticeable performance loss
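
A hedged sketch of this kind of stress test: a handful of single-event requests are fired at the server in parallel and the response times are averaged; the URL and request parameters are invented.

```python
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

# Hypothetical single-event request URL of the ZeVis server.
URL = "http://zevis-server.example.desy.de/zevis.php?run=45123&event={}"

def fetch(event_number):
    start = time.time()
    with urllib.request.urlopen(URL.format(event_number)) as response:
        response.read()
    return time.time() - start        # response time in seconds

# Issue 4 requests in parallel, as in the stress test quoted on this slide.
with ThreadPoolExecutor(max_workers=4) as pool:
    times = list(pool.map(fetch, range(1000, 1004)))
print("average response time: %.2f s" % (sum(times) / len(times)))
```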

  36. Graphical User Interface • [Annotated screenshot: input modes, zoom controls, option tabs, status information, canvas, pads, object information]

  37. GUI: Single Event Server

  38. Drift Chamber Raw Hits • Normally, the CTD display shows assigned hits, taking the drift distance into account • a special mode shows the cell structure and raw hits

  39. Visualizing the Micro-Vertex Detector • [Event displays: charged current, Q² ≈ 1200 GeV²; charged current, Q² ≈ 4500 GeV²; neutral current, Q² ≈ 2800 GeV²]

  40. Summary • HERA-II poses sizable demands on computing, and they are immediate • approaching the Petabyte scale • vast increase in the standards for turnaround speed and reliability • Commodity computing gives unprecedented computing power, but requires a dedicated fabric to work reliably • redundant farm setups • redundant disk technology • Sophisticated mass storage middleware (dCache) is essential • New developments in the experiments’ application software • e.g. the H1 ROOT analysis environment and the ZEUS client-server event display
