LHCb & GRID in the Netherlands

Presentation Transcript


1. LHCb & GRID in the Netherlands
• NIKHEF/VU Amsterdam
  • Kors Bos
  • Jo van den Brand
  • Henk Jan Bulten
  • David Groep
  • Sander Klous
  • Jeff Templon
  • NIKHEF/VU ICT groups

2. Current Status (hardware)
• 2 × 20 CPUs for LHCb testbed 1
  • dual Pentium III, 933 MHz
  • 10 nodes at NIKHEF, 10 at VU
  • 1 GB RAM, 45 GB IDE disk per node
  • 2 Dell servers, 90 GB IDE disk, Fast Ethernet switches
  • the nodes are currently being assembled at NIKHEF; the switches have been ordered
• Network
  • SURFnet NIKHEF-VU: 1 Gbit/s, upgrade to 10 Gbit/s (2002)
  • NIKHEF-Switzerland: 155 Mbit/s, upgrade to 1 Gbit/s (2004)
[Diagram: NIKHEF LAN - SURFnet - VU LAN]

3. Software Status
• On a Linux system we installed and tested:
  • Globus toolkit
  • LHCb toolkit (straightforward)
  • Gaudi
    • not necessary for MC production
    • with AFS the installation is easy (but a lot of redundant material gets installed); AFS-Globus integration is not yet clear because of Kerberos
    • without AFS (package installation) it takes a few weeks of work: unresolved links (NA48, ATLAS, ALICE); Objectivity and the HTL package are required
  • Apache web server + Tomcat servlet engine
    • servlets are required to start Monte Carlo jobs and to update the CERN-based database (a minimal sketch of such a servlet follows this slide)
  • PBS batch system
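
To make the servlet approach concrete, below is a minimal sketch of a submission servlet that hands one Monte Carlo production job to PBS via qsub. The class name, wrapper-script path, queue name and request parameters are illustrative assumptions, not the actual NIKHEF code.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical servlet that hands a Monte Carlo production job to PBS.
// The wrapper-script path and queue name are illustrative, not the real
// NIKHEF configuration.
public class McSubmitServlet extends HttpServlet {

    private static final String JOB_SCRIPT = "/opt/lhcb/bin/run_mc.sh"; // assumed wrapper script
    private static final String QUEUE = "lhcb";                         // assumed PBS queue

    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        String runId = req.getParameter("run");            // e.g. ?run=00042
        String events = req.getParameter("events");        // e.g. &events=500
        int nEvents = (events != null) ? Integer.parseInt(events) : 500;

        // Build the qsub command; run parameters are passed to the job
        // script as environment variables via -v.
        String[] cmd = {
            "qsub", "-q", QUEUE,
            "-v", "RUN=" + runId + ",NEVENTS=" + nEvents,
            JOB_SCRIPT
        };

        Process p = Runtime.getRuntime().exec(cmd);
        BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String pbsJobId = out.readLine(); // qsub prints the job identifier on stdout

        resp.setContentType("text/plain");
        PrintWriter w = resp.getWriter();
        w.println("Submitted run " + runId + " as PBS job " + pbsJobId);
    }
}
```

A request such as http://server:8080/mc/submit?run=00042&events=500 would then queue one production job on the local PBS cluster.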

4. Current developments
• To automate massive Monte Carlo data production we developed:
  • integration of the job-submission servlet with PBS
  • a servlet to copy data to CASTOR
    • runs automatically every night (cron job)
    • guaranteed delivery by checking the file size after the transfer (see the sketch after this slide)
    • a generic tool usable from anywhere, in line with grid-ftp
  • most of the effort went into creating a robust and reliable environment
• We are currently starting work on data-quality verification tools
• Outlook: the clusters should be ready for month 9 and testbed 1
  • the software environment is well suited for tests in testbed 1
  • data processing both from tape and from disk, to test the efficiency of the Event Data Service
• Future: NIKHEF wants to be a Tier-1 centre (together with SARA)
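
The nightly CASTOR copy with size verification could look roughly like the sketch below. The spool directory, the CASTOR path and the use of the rfcp/nsls client commands are assumptions about the local setup; the real tool was servlet-based and usable from anywhere, which this standalone sketch does not reproduce, it only illustrates the size-check idea.

```java
import java.io.BufferedReader;
import java.io.File;
import java.io.InputStreamReader;

// Hypothetical nightly copy job (run from cron): copy Monte Carlo output files
// to CASTOR with rfcp and verify the file size afterwards.  The spool directory,
// the CASTOR path and the rfcp/nsls commands are assumptions about the setup.
public class CastorCopy {

    private static final String SPOOL_DIR  = "/data/mc/outgoing";        // assumed local spool
    private static final String CASTOR_DIR = "/castor/cern.ch/lhcb/mc";  // assumed remote dir

    public static void main(String[] args) throws Exception {
        File[] files = new File(SPOOL_DIR).listFiles();
        if (files == null) return;

        for (File f : files) {
            String remote = CASTOR_DIR + "/" + f.getName();

            // 1. Transfer the file with the CASTOR remote copy client.
            run("rfcp", f.getAbsolutePath(), remote);

            // 2. Guaranteed delivery: compare local and remote sizes.
            if (remoteSize(remote) == f.length()) {
                System.out.println("Verified transfer of " + f.getName());
            } else {
                System.err.println("Size mismatch for " + f.getName()
                        + ", will retry next night");
            }
        }
    }

    // Query the size of a CASTOR file via the name-server listing (nsls -l);
    // the output is assumed to resemble 'ls -l', with the size in column 5.
    private static long remoteSize(String path) throws Exception {
        Process p = new ProcessBuilder("nsls", "-l", path).start();
        BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line = r.readLine();
        if (line == null) return -1;
        String[] cols = line.trim().split("\\s+");
        return Long.parseLong(cols[4]);
    }

    private static void run(String... cmd) throws Exception {
        Process p = new ProcessBuilder(cmd).inheritIO().start();
        if (p.waitFor() != 0) {
            throw new RuntimeException("command failed: " + String.join(" ", cmd));
        }
    }
}
```

Run from a nightly cron entry, this gives the guaranteed-delivery behaviour described above: a file only counts as delivered when the remote size matches the local one.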

5. Grid philosophy
• Our personal viewpoints:
  • Monte Carlo data distributed over the Tier-1s
  • jobs are moved to the data
• Minimal grid requirements:
  • grid authentication and authorization on all Tier-1s for job submission
  • the possibility to copy data between Tier-1s (via grid tools)
  • alternatively: the servlet strategy (same functionality, but no grid involved)
