ATLAS TIER3 in Valencia


Presentation Transcript


  1. ATLAS TIER3 in Valencia
  Santiago González de la Hoz, IFIC – Instituto de Física Corpuscular (Valencia)
  ATLAS software workshop, 24 October 2007

  2. Tier 3 prototype at IFIC
  • Desktop or laptop (I)
  • ATLAS Collaboration Tier2 resources (Spanish T2) (II)
  • ATLAS Tier3 resources (institute) (II)
  • Special requirements: a PC farm to perform interactive analysis (III)

  3. Individual desktop or laptop
  • Access to the ATLAS software via AFS (Athena, ROOT, Atlantis, etc.):
    /afs/ific.uv.es/project/atlas/software/releases
    • This is not easy: we have used the installation kit, but it does not work for development or nightly releases.
    • Is everything needed for detector software development inside the kit?
    • Installing the kit directly in AFS is not very practical; at least the AFS volumes have to be created beforehand.
  • Local checks, to develop analysis code before submitting larger jobs to the Tier1/Tier2 via the Grid.
  • Use of the ATLAS Grid resources (UI):
    /afs/ific.uv.es/sw/LCG-share/sl3/etc/profile.d/grid_env.sh
  • DQ2 installed on the IFIC AFS (same environment script):
    /afs/ific.uv.es/sw/LCG-share/sl3/etc/profile.d/grid_env.sh
    • Users can search for data and copy them to the local SE.
  • Ganga client installed:
    /afs/ific.uv.es/project/atlas/software/ganga/install/etc/setup-atlas.sh
    • Users can send jobs to the Grid for their analysis or production.
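  As an illustration of this setup, a minimal login sketch using the AFS paths quoted above; the proxy and DQ2 command names and the dataset pattern are assumptions (they varied between middleware and DQ2 client versions), not the documented IFIC procedure.

    # Set up the Grid UI / DQ2 and Ganga environments from the IFIC AFS paths above
    source /afs/ific.uv.es/sw/LCG-share/sl3/etc/profile.d/grid_env.sh
    source /afs/ific.uv.es/project/atlas/software/ganga/install/etc/setup-atlas.sh
    # Create and check a VOMS proxy for the ATLAS VO
    voms-proxy-init -voms atlas
    voms-proxy-info --all
    # Search for datasets with the DQ2 end-user tools (command name and the
    # pattern below are assumptions; newer clients use dq2-ls instead)
    dq2_ls "trig1_misal1_*AOD*"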

  4. Phase (I) done
  • From any PC at IFIC with AFS (i.e. 12.0.6).
  • Requirements file at cmthome:
    #---------------------------------------------------------------------
    set CMTSITE STANDALONE
    set SITEROOT /afs/ific.uv.es/project/atlas/software/releases
    macro ATLAS_DIST_AREA /afs/ific.uv.es/project/atlas/software/releases
    macro ATLAS_TEST_AREA ${HOME}/testarea
    apply_tag projectArea
    macro SITE_PROJECT_AREA ${SITEROOT}
    macro EXTERNAL_PROJECT_AREA ${SITEROOT}
    apply_tag setup
    apply_tag simpleTest
    use AtlasLogin AtlasLogin-* $(ATLAS_DIST_AREA)
    set CMTCONFIG i686-slc3-gcc323-opt
    set DBRELEASE_INSTALLED 3.1.1
    #---------------------------------------------------------------------
  • Release test:
    source /afs/ific.uv.es/project/atlas/software/releases/CMT/v1r19/mgr/setup.sh
    cd $HOME/cmthome/
    cmt config
    /usr/kerberos/bin/kinit -4 sgonzale@CERN.CH
    source ~/cmthome/setup.sh -tag=12.0.6,32
    cd $HOME/testarea/12.0.6/
    cmt co -r UserAnalysis-00-09-10 PhysicsAnalysis/AnalysisCommon/UserAnalysis
    cd PhysicsAnalysis/AnalysisCommon/UserAnalysis/cmt
    source setup.sh
    gmake
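  As a hedged follow-up check (not shown on the slide), one would typically run the skeleton job options shipped with the UserAnalysis package; the job-options name and the get_files step are assumptions that may differ between UserAnalysis tags, and the input AOD collection has to be edited first.

    # Smoke-test the freshly built package from a run directory
    mkdir -p ../run && cd ../run
    # Assumption: AnalysisSkeleton_topOptions.py is the name shipped with this
    # tag; edit its input AOD collection before running
    get_files -jo AnalysisSkeleton_topOptions.py
    athena.py AnalysisSkeleton_topOptions.py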

  5. [Diagram: local SE (StoRM & Lustre) serving GANGA/Athena AOD analysis and ROOT/PROOF analysis of DPDs or ntuples]
  • Work with the ATLAS software.
  • Use of final analysis tools (i.e. ROOT).
  • User disk space.
  • Ways to submit our jobs to other Grid sites.
  • Tools to transfer data.
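  For the data-transfer item, a sketch of pulling an AOD dataset from the Grid onto the local SE with the DQ2 end-user tools; the destination path is hypothetical, <dataset> is a placeholder, and the exact command name and behaviour depend on the installed DQ2 client version.

    source /afs/ific.uv.es/sw/LCG-share/sl3/etc/profile.d/grid_env.sh
    voms-proxy-init -voms atlas
    # Hypothetical user area on the Lustre-backed SE
    cd /lustre/ific.uv.es/grid/atlas/users/$USER/
    # Download the dataset into the current directory (dq2_get in the old
    # client, dq2-get in newer ones)
    dq2_get <dataset>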

  6. Phase (II): resources coupled to the Spanish ATLAS Tier2 (in progress)
  • Nominal: ATLAS Collaboration resources, TB (SE) and CPU (WN).
  • Tier2 extra resources: WNs and SEs used only by Tier3 users, through different/shared queues.
  • To run local and private production: MC samples of special interest for our institute, with the AODs kept for further analysis.
  • To analyze AODs using Grid resources (AOD analysis on millions of events).
  • To store interesting data for analysis.
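  As an illustration of local production through a dedicated queue, a minimal batch-job sketch; the PBS/Torque syntax, the queue name and the job options are assumptions, not the actual IFIC configuration.

    #!/bin/bash
    #PBS -q atlas-t3                 # assumed name of the Tier3-only queue
    #PBS -l walltime=24:00:00
    # Set up the release and run a (hypothetical) private production job
    source $HOME/cmthome/setup.sh -tag=12.0.6,32
    cd $HOME/testarea/12.0.6/PhysicsAnalysis/AnalysisCommon/UserAnalysis/run
    athena.py MyLocalProduction_jobOptions.py

  It would be submitted with "qsub submit_local_prod.sh", where the script name is likewise illustrative.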

  7. Phase (III): a PC farm to perform interactive analysis outside the Grid (to be deployed)
  • Interactive analysis: DPD analysis (e.g. HighPtView, SAN or AODroot ntuples).
  • Install PROOF on a PC farm:
    • Parallel ROOT Facility: a system for interactive analysis of very large sets of ROOT data files.
    • Outside the Grid, 8-10 nodes.
  • Fast access to the data: Lustre and/or xrootd (file systems).
    • StoRM and Lustre under evaluation at IFIC.
    • xrootd.
  • The Tier3 Grid and non-Grid resources are going to use the same SE.
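  A minimal sketch of what opening a session on such a farm looks like from the ROOT prompt; the master host name is hypothetical, while TProof::Open and Print are standard ROOT calls.

    # Interactive ROOT session on a desktop/UI node
    root -b -l
    root [0] TProof *p = TProof::Open("proof-master.ific.uv.es")   // hypothetical master
    root [1] p->Print()                                            // list session and worker info
    root [2] .q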

  8. StoRM
  • POSIX SRM v2.
  • Under testing; being used in the preproduction farm.
  • Temporarily using UnixfsSRM (dCache SRM v1) for production in the Tier2.
  Lustre (in production in our Tier2)
  • High-performance file system.
  • Standard file system, easy to use.
  • Higher I/O capacity due to the cluster file system.
  • Used in supercomputer centers.
  • Free version available: www.lustre.org
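  To illustrate the "standard file system, easy to use" point: on a node where the Lustre file system is mounted, SE data are handled with ordinary POSIX commands. The mount point and paths below are assumptions.

    # Hypothetical Lustre mount point on an interactive node; files written
    # through the SRM/GridFTP layer show up here as ordinary files
    df -h /lustre/ific.uv.es
    ls -lh /lustre/ific.uv.es/grid/atlas/
    cp /lustre/ific.uv.es/grid/atlas/users/$USER/some_aod.root $HOME/scratch/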

  9. ATLAS Spanish Tier2
  • Distributed Tier2: UAM (25%), IFAE (25%) and IFIC (50%).
  • Inside our Tier2 two SE options are used; in case Lustre does not work as expected, we will switch to dCache.

  10. Hardware
  • Disk servers:
    • 2 SUN X4500 (two more in place, to be installed in the near future; used for testing).
    • 34 TB net capacity.
  • Connectivity: Gigabit switch, Cisco Catalyst 4500.
  • Grid access:
    • 1 SRM server (P4 2.7 GHz, GbE).
    • 1 GridFTP server (P4 2.7 GHz, GbE).
  • Lustre server: 1 MDS (Pentium D 3.2 GHz, RAID-1 disk).

  11. Plans
  • Put StoRM in production (SRM v2).
  • Add more GridFTP servers as demand increases.
  • Move the Lustre server to high-availability hardware.
  • Add more disk to cope with ATLAS requirements and use.
  • Give direct access to Lustre from the WNs. Scientific Linux 4 is required; some testing is under way to install the Lustre kernel on SL3.
  • Performance tuning.
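  For the direct-access item, a sketch of a Lustre 1.6-style client mount on a WN; the MDS/MGS host name, file-system name and mount point are assumptions, and the Lustre client kernel modules must already be installed (hence the SL4 / patched-SL3 requirement above).

    # Load the Lustre client modules and mount the file system on a WN
    modprobe lustre
    mount -t lustre mds01.ific.uv.es@tcp0:/ific /lustre/ific.uv.es
    df -h /lustre/ific.uv.es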

  12. Tests: RFIO without Castor
  • Athena (12.0.6) analysis on AODs: 4 MB/s, limited by the CPU and by Athena.

  13. The same test with DPDs
  • In both cases Lustre was used and the data were in the cache:
    • 1.8 MB/s with ROOT.
    • 340 MB/s with a simple "cat".
  • Test machine: 2 x Intel Xeon 3.06 GHz, 4 GB RAM, 1 Gigabit Ethernet NIC, HD ST3200822AS (on a 3Ware 8006-2LP card).
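  A minimal sketch of the kind of raw sequential-read check behind the "cat" number; the file path is an assumption, and as noted above the quoted figures were obtained with the data already in the cache.

    # Time a plain sequential read of a DPD file on the Lustre mount
    FILE=/lustre/ific.uv.es/grid/atlas/users/$USER/my_dpd.root   # hypothetical path
    time cat "$FILE" > /dev/null
    dd if="$FILE" of=/dev/null bs=1M    # dd reports the throughput directly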

  14. Summary
  • From her/his desktop or laptop, an individual physicist can get access to:
    • IFIC Tier2-Tier3 resources.
    • The ATLAS software (Athena, Atlantis, etc.), DDM/dq2 and the Ganga tools.
  • The IFIC Tier3 resources will be split in two parts:
    • Some resources coupled to the IFIC Tier2 (ATLAS Spanish T2) in a Grid environment, for AOD analysis on millions of geographically distributed events.
    • A PC farm to perform interactive analysis outside the Grid, to check and validate major analysis tasks before submitting them to large computer farms. A PROOF farm will be installed to do this interactive analysis.
