
Current Monte Carlo calculation activities in ATLAS (ATLAS Data Challenges)

Presentation Transcript


  1. Current Monte Carlo calculation activities in ATLAS (ATLAS Data Challenges)
     Oxana Smirnova, LCG/ATLAS, Lund University
     SWEGRID Seminar (April 9, 2003, Uppsala)

  2. ATLAS: preparing for data taking

  3. Currently @ Data Challenge 1 (DC1)
     • Event generation completed during DC0
     • Main goals of DC1:
       • Need to produce simulated data for the High Level Trigger & physics groups
       • Reconstruction & analysis on a large scale
         • Learn about the data model and I/O performance; identify bottlenecks, etc.
       • Data management
         • Use/evaluate persistency technology
         • Learn about distributed analysis
       • Involvement of sites outside CERN
       • Use of Grid as and when possible and appropriate

  4. DC1, Phase 1: Task Flow
     • Example: one sample of di-jet events
     • PYTHIA event generation: 1.5 × 10^7 events split into partitions (read: ROOT files)
     • Detector simulation: 20 jobs per partition, ZEBRA output
     [Diagram: task flow from event generation to detector simulation. Pythia6 with a di-jet filter produces HepMC partitions (labels: 10^5 events, 5000 evts, ~450 evts); each partition feeds Atlsim/Geant3 + Filter jobs producing Hits/Digits/MCTruth in ZEBRA format, read back via Athena-Root I/O.]
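
     The flow on this slide is an embarrassingly parallel fan-out: one generated sample is split into partitions, and each partition feeds a fixed number of independent simulation jobs. Below is a minimal Python sketch of that bookkeeping only; the SimJob/plan_jobs helpers and the file-name patterns are illustrative assumptions, not actual ATLAS production tools.

    # Minimal sketch of the DC1 Phase 1 fan-out: split a generated sample
    # into partitions, then plan N independent detector-simulation jobs per
    # partition. All names and file patterns are illustrative placeholders,
    # not actual ATLAS production tools.
    from dataclasses import dataclass

    @dataclass
    class SimJob:
        partition: int    # which generated partition the job reads
        job_index: int    # 0..jobs_per_partition-1 within that partition
        input_file: str   # HepMC input from event generation
        output_file: str  # ZEBRA output from detector simulation

    def plan_jobs(n_partitions: int, jobs_per_partition: int = 20) -> list:
        """Build the list of detector-simulation jobs for one di-jet sample."""
        jobs = []
        for p in range(n_partitions):
            hepmc = f"dijet.{p:04d}.hepmc.root"   # event-generation output file
            for j in range(jobs_per_partition):
                jobs.append(SimJob(
                    partition=p,
                    job_index=j,
                    input_file=hepmc,
                    output_file=f"dijet.{p:04d}.{j:02d}.zebra",
                ))
        return jobs

    if __name__ == "__main__":
        jobs = plan_jobs(n_partitions=3)
        print(f"{len(jobs)} simulation jobs planned")  # 3 partitions × 20 jobs = 60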

  5. Piling up events

  6. Future: DC2-3-4-…
     • DC2:
       • Originally Q3/2003 – Q2/2004; will be delayed
       • Goals:
         • Full deployment of the Event Data Model & Detector Description
         • Transition to the new generation of software tools and utilities
         • Test the calibration and alignment procedures
         • Perform large-scale physics analysis
         • Further tests of the computing model
       • Scale: as for DC1, ~10^7 fully simulated events
     • DC3: Q3/2004 – Q2/2005; goals to be defined; scale: 5 × DC2
     • DC4: Q3/2005 – Q2/2006; goals to be defined; scale: 2 × DC3
     Sweden can try to provide a ca. 3-5% contribution (?)
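
     The quoted scales compound: DC3 is five times DC2 and DC4 twice DC3, starting from ~10^7 fully simulated events. A short back-of-the-envelope check in Python follows; the 3-5% share is the tentative Swedish figure from this slide, not a commitment.

    # Rough scale projection from the figures on this slide
    # (order-of-magnitude estimates, not commitments).
    dc2_events = 1e7             # "as for DC1: ~10^7 fully simulated events"
    dc3_events = 5 * dc2_events  # "scale: 5 × DC2"
    dc4_events = 2 * dc3_events  # "scale: 2 × DC3"

    for name, n in [("DC2", dc2_events), ("DC3", dc3_events), ("DC4", dc4_events)]:
        # A 3-5% national share, as floated for Sweden above (hypothetical).
        print(f"{name}: {n:.0e} events, 3-5% share = {0.03 * n:.0e} to {0.05 * n:.0e}")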

  7. DC requirements so far
     • Integrated DC1 numbers:
       • 50+ institutes in 20+ countries
         • Sweden enters with the other Nordic countries via NorduGrid
       • 3500 “normalized CPUs” (80,000 CPU-days)
         • Nordic share: equivalent of 320 “normalized CPUs” (ca. 80 in real life)
       • 5 × 10^7 events generated
         • No Nordic participation
       • 1 × 10^7 events simulated
         • Nordic: ca. 3 × 10^5
       • 100 TB produced (135,000 files of output)
         • Nordic: ca. 2 TB, 4600 files
     • More precise quantification is VERY difficult because of orders-of-magnitude differences in complexity between physics channels and processing steps
     • CPU time consumption: largely unpredictable, VERY irregular
     • OS: GNU/Linux, 32-bit architecture
     • Inter-processor communication: has never been a concern so far (no MPI needed)
     • Memory consumption: depends on the processing step/data set; so far 512 MB have been enough
     • Data volumes: vary from KB to GB per job
     • Data access pattern: mostly unpredictable, irregular
     • Databases: each worker node is expected to be able to access a remote database
     • Software: under constant development, will certainly exceed 1 GB, includes multiple dependencies on HEP-specific software, sometimes licensed
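
     For perspective, the Nordic fractions implied by these integrated DC1 numbers can be checked directly; the sketch below is a back-of-the-envelope calculation using only the figures quoted on this slide.

    # Back-of-the-envelope Nordic shares implied by the DC1 numbers above.
    totals = {
        "normalized CPUs": (3500, 320),   # (total, Nordic)
        "simulated events": (1e7, 3e5),
        "data volume (TB)": (100, 2),
        "output files": (135_000, 4600),
    }

    for quantity, (total, nordic) in totals.items():
        # Roughly 9%, 3%, 2% and 3.4% respectively.
        print(f"{quantity:17s}: Nordic share ~ {100 * nordic / total:.1f}%")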

  8. And a bit about Grid
     • ATLAS DC has run on the Grid since summer 2002 (NorduGrid, US Grid)
     • Future DCs will be to a large extent (if not entirely) gridified
       • Allocated computing facilities must have all the necessary Grid middleware (but ATLAS will not provide support)
     • Grids that we tried:
       • NorduGrid – a Globus-based solution developed in the Nordic countries; provides a stable and reliable facility and executes the entire Nordic share of the DCs
       • US Grid (iVDGL) – basically Globus tools, hence missing high-level services, but still serves ATLAS well, executing ca. 10% of the US DC share
       • EU DataGrid (EDG) – a considerably more complex solution (but Globus-based, too), still in development and not yet suitable for production, though it can perform simple tasks; did not contribute to the DCs
     • Grids that are coming:
       • LCG: will initially be strongly based on EDG, hence may not be reliable before 2004
       • EGEE: another continuation of EDG, still in the proposal preparation stage
       • Globus is moving towards a Grid Services architecture, which may imply major changes both in existing solutions and in planning
