
AECC (Pierre Auger European Computing Center) @ Lyon


Presentation Transcript


  1. AECC (Pierre Auger European Computing Center) @ Lyon
Connecting to CCIN2P3
The Computer Center supports 3 Unix platforms:

Platform name      Operating system     Hardware
ccars.in2p3.fr     IBM AIX 4.3.2        Risc 6000
ccasn.in2p3.fr     SUN Solaris 7        Sparc
ccali.in2p3.fr     RedHat Linux 6.x     Intel

To connect to these machines, SSH is the only available service. The SSH implementation used at CCIN2P3 is OpenSSH.
Valencia meeting – 20 October 2003, Carla Aramo
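
As a minimal, hypothetical sketch of the connection step, assuming only a standard OpenSSH client on the local machine; the login name below is a placeholder and is not taken from the slides:

```python
# Hypothetical sketch: reaching one of the CCIN2P3 platforms listed above
# by invoking the local OpenSSH client. "auger_user" is a placeholder login.
import subprocess

host = "ccali.in2p3.fr"   # any of ccars / ccasn / ccali
user = "auger_user"

# Run a simple command remotely; an interactive login would just be
# `ssh user@host` without the trailing command.
subprocess.run(["ssh", f"{user}@{host}", "uname", "-a"], check=True)
```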

  2. Overview: simulation status
• Since January 2001:
• 58,870 showers
• 15.7 TB data
• 300-400 GB/week
• 59 CPU years (units of 750 MHz Pentium PC)
• CORSIKA 6.0; QGSJET 01 (default), partly SIBYLL 2.1
• fixed E, Theta (opt. 1e-6 thinning): 36000 showers
• fixed E, Theta (opt. 1e-5 thinning): 4400 showers
• E spectrum, Theta distribution (opt. 1e-5 thinning): 13850 showers
• Others (special settings): 4620 showers
• Note: Currently, a reprocessing of the .root files is carried out with a considerably improved CorsToRoot version.
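
As a rough cross-check (not part of the original slide), the averages implied by these figures can be worked out directly:

```python
# Back-of-the-envelope averages implied by the figures above.
n_showers = 58_870
total_tb = 15.7
cpu_years = 59              # in units of a 750 MHz Pentium PC

gb_per_shower = total_tb * 1024 / n_showers                  # ~0.27 GB per shower
cpu_h_per_shower = cpu_years * 365.25 * 24 / n_showers       # ~8.8 CPU hours per shower
print(f"{gb_per_shower:.2f} GB and {cpu_h_per_shower:.1f} CPU hours per shower")
```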

  3. CPU time and Size (1): E0 = 10^19 eV, Ee,γ = 100 keV, thin = 10^-6

  4. Thinning (1)

  5. Thinning (2)
• Thinning parameters:
• Optimum thinning means (E = primary energy, thl = thinning level, wlim = weight limit): wlim = E(GeV)*thl for (e+-/gamma) and a factor 100 less for (mu+-/hadrons)
• Usually: 'High Quality' = opt. 1e-6, 'High Statistics' = opt. 1e-5
• Example (see the sketch below): E = 1e19 eV = 1e10 GeV, thl = 1e-6 => opt 1e-6 thinning: wlim (e+-/gamma) = 1e4, wlim (mu+-/hadrons) = 1e2
• opt 1e-5 thinning is comparable to/somewhat better than 1e-7 thinning without weight limitation
• with opt 1e-6 thinning, artificial fluctuations are reduced by a factor 3-4 compared to opt 1e-5 thinning
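
The weight-limit rule quoted above can be written as a small helper; this is only an illustration of the arithmetic on the slide, not a CORSIKA interface:

```python
# Optimum-thinning weight limits as defined on the slide:
# wlim = E(GeV) * thl for e+-/gamma, and a factor 100 less for mu+-/hadrons.
def weight_limits(e_primary_gev: float, thinning_level: float):
    wlim_em = e_primary_gev * thinning_level
    wlim_mu_had = wlim_em / 100.0
    return wlim_em, wlim_mu_had

# Slide example: E = 1e19 eV = 1e10 GeV, thl = 1e-6
print(weight_limits(1e10, 1e-6))   # -> (10000.0, 100.0)
```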

  6. Data availability:
* Detailed particle files (extensions "part" and "root") retrievable from HPSS (mass storage);
* Extracted information ("small files", extensions "hbook", "info", "long", "lst", "tab") on AFS disk, directory /afs/in2p3.fr/group/pauger/corsika

  7. AFS
A set of machines on a network can share files in a coherent way by using AFS (Andrew File System), a distributed file system with a hierarchical structure. The root is unique around the world and is named /afs. Each organization which asks for an AFS user license gets a subdirectory called a cell. The IN2P3 cell is in2p3.fr. Users of the IN2P3 Computing Center who log in to machines where AFS is installed will find their files in the directory /afs/in2p3.fr. The files on the CERN machines are in /afs/cern.ch/ (/afs/cern.ch/ being the name of the CERN cell). You can see the big advantage of this file system: to transfer your files from one site to another, you just have to copy them from one AFS directory to another.
To summarize (type / management / reliability):
• HOME: managed by individual users; RAID, backed up on tapes and on a disk directory you can reach via the variable $HOME_BACKUP
• THRONG: managed by the members of the throng (experiment); RAID, backed up
• GROUP: managed by the people responsible for group space; no backup
We recommend using the following environment variables for access to all these disk spaces: $HOME, $THRONG_DIR, $GROUP_DIR (see the sketch below).
This will be necessary if the system administrators change the absolute paths of these zones; with these variables, your procedures to reach your directories will stay the same.
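
A small sketch of the recommended practice, assuming a Python session on a CCIN2P3 machine; the subdirectory name is a placeholder, only the variable names come from the slide:

```python
# Build paths from the recommended environment variables instead of
# hard-coding absolute AFS paths that the administrators may change.
import os

home = os.environ["HOME"]                        # backed up (RAID + tapes)
throng_dir = os.environ.get("THRONG_DIR", "")    # experiment space, backed up
group_dir = os.environ.get("GROUP_DIR", "")      # group space, no backup

# "my_output" is a placeholder subdirectory, not from the slides.
work_dir = os.path.join(group_dir, "my_output") if group_dir else home
print("Working directory:", work_dir)
```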

  8. HPSS (High Performance Storage System)
• Configuration: 21 machines: 1 core server, 20 movers
• Core server: DCE master, Encina SFS server, HPSS core services
• 20 tape movers
• Disk movers (5): 9.5 TB staging space
• 3-silo Storagetek 4400 Nearline Automated Tape Library; total capacity ~18,000 cartridges
• 10 Mbps Ethernet control network
• Switched Gigabit Ethernet data network

  9. Longitudinal development

  10. Lateral development

  11. Today
Date: Fri, 10 Oct 2003 11:34:59 +0200 (CEST)
From: Markus Risse <markus.risse@ik.fzk.de>
To: Carla Aramo <Carla.Aramo@na.infn.it>
Subject: Re: Lyon simulations

Dear Carla,

general simulation situation in Lyon: At this moment there are more than 2000 jobs of the Auger group running or queued. No CORSIKA library job is running at the moment, but I noticed in the past 2 months growing interest.

More general, it is fine for me to supervise the CORSIKA library simulations, i.e. data of general Auger interest -- after all we can use the Lyon facilities as Auger collaboration -- and I offer my help also in recommending specific simulation settings as most people at the moment are simulation newcomers with little experience e.g. in thinning parameter settings. For this it is helpful to know as good as possible which analysis is planned. For simulations with settings which are very specific and (partly) not according to Auger conditions, my personal feeling is that - as long as the general analysis results are of relevance - it should be OK to use the Lyon facilities if 'official' simulations (to which I would also count detector simulation studies) are not limited too much by this (this would seem natural to me). Thus (my feeling): No problem if it is not a really massive production.

Please tell me in case of any questions, I will try to help.

Ciao, Markus.

  12. Specific simulations requested for the FNN and NNA analysis
For the mass-composition analysis using the FNN and NNA methods, specific simulations were produced at Lyon at our explicit request:
• 1000 proton showers
• 1000 helium showers
• 1000 oxygen showers
• 1000 iron showers
The primary energy is fixed at 10^18 eV, the incidence angle θ = 0°, the thinning at 10^-6 and the observation level at 870 g cm^-2 (the Auger level). These simulations were produced in August.

  13. Our experience
To analyse the simulations produced, we decided to copy the .long files (longitudinal developments) to a local disk in Naples. Since these files are on disk, the klog (AFS) connection was used and the copy was immediate.
For this analysis we did not use the .part files, which contain the information at the Auger observation level, because at the moment we are only interested in the longitudinal development. In the future, to also take the array information into account, these files will have to be analysed.
The .part files are very large (500 MB) and are not on disk but on HPSS. To be copied to local disks they must therefore first be moved to AFS disk, where the average user does not have enough disk space (50 MB). An alternative could be to analyse the files directly on HPSS. This would mean not having the files available locally, but depending completely on Lyon; moreover, access to HPSS from the analysis programs is much slower. The best strategy to follow therefore still has to be defined.
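
A hedged sketch of the copy step described above, assuming an AFS token has already been obtained with klog; the AFS path is the one given on slide 6, while the local destination directory is a placeholder:

```python
# Copy the .long files from the Auger AFS area to a local disk.
# Run `klog` first to obtain an AFS token; after that the AFS files
# behave like ordinary local files.
import glob
import os
import shutil

afs_dir = "/afs/in2p3.fr/group/pauger/corsika"
local_dir = "/data/auger/long_files"      # placeholder local destination
os.makedirs(local_dir, exist_ok=True)

for path in glob.glob(os.path.join(afs_dir, "*.long")):
    shutil.copy(path, local_dir)
```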
