
The ALICE Framework at GSI


Presentation Transcript


  1. The ALICE Framework at GSI Kilian Schwarz ALICE Meeting August 1, 2005

  2. Overview • ALICE framework • What part of ALICE framework is installed where at GSI and how can it be accessed/used • ALICE Computing model (Tier architecture) • Resource consumption of individual tasks • Resources at GSI and GridKa

  3. ALICE Framework (architecture diagram, F. Carminati, CERN): AliRoot is built on ROOT and steered by STEER; the transport engines G3, G4 and FLUKA are accessed through the Virtual MC; event generation (EVGEN) uses PYTHIA6, HIJING, ISAJET, MEVSIM and PDF; detector modules include ITS, TPC, TRD, TOF, PHOS, EMCAL, PMD, MUON, RICH, ZDC, FMD, CRT, START and STRUCT; further packages: HBTAN, HBTP, RALICE; Grid access via AliEn.

  4. Software installed at GSI: AliRoot • Installed at: /d/alice04/PPR/AliRoot • Newest version: AliRoot v4-03-03 • Environment setup via: > . gcc32login > . alilogin dev/new/pro/version-number (gcc295-04 is not supported anymore; the corresponding ROOT version is initialized, too) • Responsible person: Kilian Schwarz
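Once the environment is set up, AliRoot is typically driven from ROOT macros. The following is a minimal sketch, assuming the AliSimulation/AliReconstruction steering classes of this AliRoot generation and a Config.C in the working directory; the event count is illustrative.

    // sim.C: run a short simulation and reconstruction with AliRoot (sketch)
    void sim(Int_t nEvents = 1)
    {
      AliSimulation simulator;       // picks up Config.C from the current directory
      simulator.Run(nEvents);        // generate, transport and digitize nEvents

      AliReconstruction reconstructor;
      reconstructor.Run();           // reconstruct the events just produced
    }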

  5. Software installed at GSI: ROOT (AliRoot is heavily based on ROOT) • Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr • Newest version: 502-00 • Environment setup via: > . gcc32login / alilogin or rootlogin • Responsible persons: Joern Adamczewski / Kilian Schwarz • See also: http://www-w2k.gsi.de/root

  6. Software installed at GSI: geant3 (needed for simulation; accessed via the VMC) • Installed at: /d/alice04/alisoft/PPR/geant3 • Newest version: v1-3 • Environment setup via: gcc32login/alilogin • Responsible person: Kilian Schwarz
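As an illustration of how geant3 enters through the Virtual MC: a Config.C of this era typically instantiates the Geant3 VMC implementation, which sets the global gMC transport pointer. This is only a sketch; the constructor title string is the conventional one, but exact arguments may differ between versions.

    // Config.C fragment: select Geant3 as the transport engine via the VMC (sketch)
    new TGeant3("C++ Interface to Geant3");   // sets gMC to the Geant3 implementation
    // switching the engine (e.g. to the Geant4 VMC) replaces only this line;
    // the rest of the detector configuration stays untouched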

  7. Software at GSI: geant4/Fluka (simulation; accessed via the VMC) • Both are so far not heavily used by ALICE • Geant4: standalone versions up to G4.7.1 • Newest VMC version: geant4_vmc_1.3 • Fluka: not installed so far • Environment setup via: > . gsisimlogin [-vmc] dev/new/prod/version • See also: http://www-linux.gsi.de/~gsisim/g4vmc.html • Responsible person: Kilian Schwarz

  8. Software at GSI: event generators (task: simulation) • Installed at: /d/alice04/alisoft/PPR/evgen • Available: Pythia5, Pythia6, Venus • Responsible person: Kilian Schwarz
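As an illustration of how one of these generators is plugged into a simulation, a Config.C fragment of this era might attach Pythia6 through its AliRoot wrapper. This is a sketch: the process choice, multiplicity and energy below are illustrative values, not a recommended setup.

    // Config.C fragment: use Pythia6 (via AliGenPythia) as the primary generator (sketch)
    AliGenPythia *generator = new AliGenPythia(100);  // 100 primaries per event (illustrative)
    generator->SetProcess(kPyMb);                     // minimum-bias pp events
    generator->SetEnergyCMS(14000.);                  // centre-of-mass energy in GeV (illustrative)
    generator->Init();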

  9. Software at GSI: AliEn (the ALICE Grid Environment) • Currently being set up in version 2 (AliEn2) • Installed at: /u/aliprod/alien • Idea: global production and analysis • Environment setup via: . .alienlogin • Copy certs from /u/aliprod/.globus or register your own certs • Usage: /u/aliprod/bin/alien (proxy-init/login) • Then: register files and submit Grid jobs • Or: directly from ROOT !!! • Status: a global AliEn2 production testbed is currently being set up; it will be used for LCG SC3 in September • Individual analysis of globally distributed Grid data at the latest during LCG SC4 in 2006 via AliEn/LCG/PROOF • Non-published analysis is possible already now: create an AliEn-ROOT collection (an xml file readable via AliEn) and analyse it via ROOT/PROOF (TFile::Open(“alien://alice/cern.ch/production/…”)); a web frontend is being created via ROOT/QT • Responsible person: Kilian Schwarz
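A minimal sketch of what "directly from ROOT" can look like, assuming an AliEn2 client environment and a ROOT build with AliEn support; the catalogue path below is a hypothetical placeholder, not a real production file.

    // sketch: open a Grid-resident file from a ROOT session via the alien:// protocol
    TGrid::Connect("alien://");                // authenticate against the AliEn API services
    TFile *file = TFile::Open("alien:///alice/some/path/example.root");  // hypothetical path
    if (file && !file->IsZombie())
      file->ls();                              // inspect the file contents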

  10. AliEn2 services (diagram; see http://alien.cern.ch) • ALICE VO – central services: user authentication, file catalogue, workload management, job submission, configuration, job monitoring, central task queue, accounting, storage element(s), DB • AliEn site services: computing element, storage element, cluster monitor, data transfer, on top of the existing site components (local scheduler, disk and MSS) • ALICE VO – site services integration

  11. Software at GSI: Globus • Installed at: /usr/local/globus2.0 and /usr/local/grid/globus • Versions: globus2.0 and 2.4 • Idea: can be used to send batch jobs to GridKa (far more resources available there than at GSI) • Environment setup via: . globuslogin • Usage: > grid-proxy-init (Grid certificate needed !!!) > globus-job-run/submit alice.fzk.de Grid/Batch job • Responsible persons: Victor Penso / Kilian Schwarz

  12. GermanGrid CA How to get a certificate in detail: See http://wiki.gsi.de/Grid/DigitalCertificates

  13. Software at GSI: LCG • Installed at: /usr/local/grid/lcg • Newest version: LCG2.5 • Idea: global batch farm • Environment setup via: . lcglogin • Usage: > grid-proxy-init (Grid certificate needed !!!) > edg-job-submit batch/grid job (jdl-file) • See also: http://wiki.gsi.de/Grid • Responsible persons: Victor Penso, Anar Manafov, Kilian Schwarz

  14. LCG: the LHC Computing Grid project (with ca. 11k CPUs the world’s largest Grid testbed)

  15. Software at GSI: PROOF • Installed at: /usr/local/pub/debian3.0/gcc323-00/rootmgr • Newest version: ROOT 502-00 • Idea: parallel analysis of larger data sets for quick/interactive results • A personal PROOF cluster at GSI, integrated in the batch farm, can be set up via > prooflogin <parameters> (e.g. number of slaves, data to be analysed, -h (help)); a session sketch follows below • See also: http://wiki.gsi.de/Grid/TheParallelRootFacility • Later a personal PROOF cluster spanning GSI and GridKa via Globus will be possible • Later a global PROOF cluster via AliEn/D-Grid will be possible • Responsible persons: Carsten Preuss, Robert Manteufel, Kilian Schwarz
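A minimal sketch of such a session, assuming a PROOF master started by prooflogin; the host name, data-set files and tree name are hypothetical, and ana.C stands for a user TSelector macro as on the next slide.

    // sketch: attach to a personal PROOF cluster and process a data set
    gROOT->Proof("proofmaster.gsi.de");              // hypothetical master host

    TDSet *dataset = new TDSet("TTree", "esdTree");  // tree name is an assumption
    dataset->Add("root://proofmaster.gsi.de//data/run1.root");   // hypothetical files
    dataset->Add("root://proofmaster.gsi.de//data/run2.root");
    dataset->Process("ana.C");                       // run the selector on all slaves in parallel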

  16. Parallel Analysis of Event Data (schematic): a local PC runs a ROOT session and connects to a remote PROOF cluster consisting of one master server and several slave servers (node1…node4, each holding *.root files, listed in #proof.conf); stdout and result objects are returned to the local session. The session on the local PC proceeds as: $ root; root [0] tree.Process(“ana.C”) (local analysis); root [1] gROOT->Proof(“remote”) (connect to the cluster); root [2] dset->Process(“ana.C”) (process the distributed data set on the cluster).

  17. LHC Computing Model (Monarc and Cloud) • One Tier 0 site at CERN for data taking; ALICE (Tier 0+1) in 2008: 500 TB disk (8%), 2 PB tape, 5.6 MSI2K (26%) • Multiple Tier 1 sites for reconstruction and scheduled analysis: 3 PB disk (46%), 3.3 PB tape, 9.1 MSI2K (42%) • Tier 2 sites for simulation and user analysis: 3 PB disk (46%), 7.2 MSI2K (33%)

  18. ALICE Computing model in more detail: • T0 (CERN): long-term storage of raw data, calibration and first reconstruction • T1 (5, in Germany GridKa): long-term storage of a second copy of the raw data, 2 subsequent reconstructions, scheduled analysis tasks, reconstruction of MC Pb-Pb data, long-term storage of data processed at T1s and T2s • T2 (many, in Germany GSI): generate and reconstruct simulated MC data, chaotic analysis • T0/T1/T2: short-term storage of active data in multiple copies • T3 (many, in Germany Münster, Frankfurt, Heidelberg, GSI): chaotic analysis

  19. CPU requirements and Event size

  20. ALICE Tier resources

  21. GridKa (1 of 5 T1s): IN2P3, CNAF, GridKa, NIKHEF, (RAL), Nordic, USA (effectively ~5). Ramp-up time: due to shorter runs and reduced luminosity at the beginning, the full resources are not needed at once: 20% in 2007, 40% in 2008, 100% by the end of 2008

  22. GSI + T3 (support for the 10% of ALICE members who are German) • T3: Münster, Frankfurt, Heidelberg, GSI
