
Moving the LHCb Monte Carlo production system to the GRID


Presentation Transcript


  1. Moving the LHCb Monte Carlo production system to the GRID
  D. Galli, U. Marconi, V. Vagnoni (INFN Bologna); N. Brook (Bristol); E. van Herwijnen, P. Mato (CERN); A. Khan (Edinburgh); M. McCubbin, G. D. Patel (Liverpool); A. Tsaregorodtsev (Marseille); H. Bulten, S. Klous (Nikhef); F. Harris (Oxford); G. N. Patrick, R. A. Sansum (RAL)
  Presented by F. Harris, CHEP, Beijing

  2. Overview of presentation
  • Functionality and distribution of the current system
  • Experience with the use of Globus in tests and production
  • Requirements and planning for the use of DataGrid middleware and the security system
  • Planning for interfacing the GAUDI software framework to GRID services
  • Conclusions

  3. LHCb distributed computing environment (15 countries: 13 European + Brazil and China; ~50 institutes)
  • Tier-0: CERN
  • Tier-1: RAL (UK), IN2P3 (Lyon), INFN (Bologna), Nikhef, CERN + ?
  • Tier-2: Liverpool, Edinburgh/Glasgow, Switzerland + ? (may grow to ~10)
  • Tier-3: ~50 throughout the collaboration
  • Ongoing negotiations for centres (Tier-1/2/3): Germany, Russia, Poland, Spain, Brazil
  • Current GRID involvement: DataGrid (and national GRID efforts in the UK, Italy, ...); active in WP8 (HEP Applications) of DataGrid
  • Will use middleware (WP1-5) + Testbed (WP6) + Network (WP7) + security tools

  4. Current MC production facilities
  • The maximum number of CPUs used simultaneously is usually less than the capacity of the farm.
  • Will soon extend to Nikhef, Edinburgh and Bristol.

  5. Distributed MC production, today
  • Submit jobs remotely via Web
  • Execute on farm
  • Transfer data to the CASTOR mass store at CERN
  • Update the bookkeeping database (Oracle at CERN)
  • Data quality check on data stored at CERN
  • Monitor performance of farm via Web
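A minimal sketch of the chain above as it would run from a production script; the simulation binary, CASTOR path and bookkeeping helper are hypothetical stand-ins for the actual LHCb tools (rfcp is the CASTOR remote copy command):

```python
"""Illustrative sketch (not the actual LHCb scripts) of today's chain:
run the simulation on a farm node, copy the output to the CASTOR mass
store at CERN and update the Oracle bookkeeping database."""
import subprocess

def run_production_job(job_id, n_events, output_file):
    # 1. Execute the simulation on the farm (hypothetical binary name).
    subprocess.run(["sicbmc.exe", "-nevents", str(n_events), "-o", output_file],
                   check=True)
    # 2. Transfer the output to the CASTOR mass store at CERN.
    subprocess.run(["rfcp", output_file,
                    f"/castor/cern.ch/lhcb/mc/{output_file}"], check=True)
    # 3. Update the central bookkeeping database; placeholder for the real
    #    SQL update against the Oracle server at CERN.
    update_bookkeeping(job_id, output_file)

def update_bookkeeping(job_id, output_file):
    print(f"registering {output_file} for job {job_id} in the bookkeeping DB")

# example usage on a farm node:
# run_production_job(1042, 500, "mc_bd0_jpsiks_0001.sim")
```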

  6. Distributed MC production in future (using DataGrid middleware)
  • Submit jobs remotely via Web (WP1 job submission tools, WP4 environment)
  • Execute on farm (WP1 job submission tools)
  • Transfer data to CASTOR (and HPSS, RAL Datastore) (WP2 data replication, WP5 API for mass storage)
  • Update bookkeeping database (WP2 metadata tools)
  • Online histogram production using GRID pipes (WP1 tools)
  • Data quality check 'online' (WP3 monitoring tools)
  • Monitor performance of farm via Web (WP3 monitoring tools)

  7. Use of Globus in tests and production
  • Use of Globus simplifies remote production: jobs are submitted through local Globus commands rather than by remote logon
  • Some teething problems in tests (some due to the learning curve)
  • Some limitations of the system (e.g. need for large temporary space for running jobs)
  • Some mismatches between Globus and the PBS batch system (job parameters ignored; submitting >100 jobs gives problems)
  • DataGrid testbed organisation will ensure synchronisation of versions at sites + Globus support
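As an illustration of the first point, a minimal sketch of submitting through local Globus commands rather than remote logon; the gatekeeper contact string, jobmanager and script path are assumptions:

```python
"""Sketch of submitting a farm job through the local Globus client.
The gatekeeper contact string and script path are illustrative only."""
import subprocess

GATEKEEPER = "ccgridli.in2p3.fr/jobmanager-pbs"   # hypothetical contact string

def submit_via_globus(script, *args):
    # globus-job-submit returns a job contact URL that can later be passed
    # to globus-job-status / globus-job-get-output.
    result = subprocess.run(
        ["globus-job-submit", GATEKEEPER, script, *args],
        capture_output=True, text=True, check=True)
    return result.stdout.strip()

job_contact = submit_via_globus("/grid/lhcb/scripts/runmc.sh", "500")
print("submitted:", job_contact)
subprocess.run(["globus-job-status", job_contact], check=True)
```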

  8. Security
  • M9 (October 2001): Authorisation group working towards a tool providing single log-on and a single role per individual. Individuals will get certificates from their national CA. Must work out the administration for this at the start for the experiment VO. Probably ~10 users for LHCb.
  • M21 (October 2002): Single log-on firmly in place. Moved to a structured VO with (group, individual) authorisation and multiple roles. Maybe up to ~50 users.
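The single log-on with a certificate from a national CA corresponds, in the Globus security infrastructure used by DataGrid, to creating a short-lived proxy once per session; a minimal sketch, with the proxy lifetime as an arbitrary example value:

```python
"""Sketch of the single log-on step: the user holds a certificate issued
by the national CA and creates a short-lived proxy once per session; all
later grid commands authenticate with that proxy."""
import subprocess

def grid_login(hours=12):
    # grid-proxy-init reads the user certificate/key and writes a proxy;
    # it prompts for the private-key pass phrase on the terminal.
    subprocess.run(["grid-proxy-init", "-hours", str(hours)], check=True)
    # grid-proxy-info reports the identity and remaining lifetime of the proxy.
    subprocess.run(["grid-proxy-info"], check=True)

grid_login()
```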

  9. Job Submission
  • M9: Use the command-line interface to WP1 JDL. 'Static' file specification. Use the environment specification as agreed with WP1 and WP4 (no cloning).
  • M21: Interface to WP1 Job Options via the LHCb application (GANGA). Dynamic 'file' environment according to application navigation; may require access to query-language tools for metadata. More comprehensive environment specification.
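The 'static' file specification at M9 could look like the following minimal sketch: a hand-written JDL passed to the WP1 command-line submission tool (dg-job-submit in the DataGrid testbed releases). The executable, sandbox contents and output names are illustrative assumptions, not the real production values:

```python
"""Write a small, static JDL and submit it through the WP1 command-line
interface.  All file names below are assumptions for illustration."""
import subprocess, textwrap

jdl = textwrap.dedent("""\
    Executable    = "runmc.sh";
    Arguments     = "500";
    StdOutput     = "mc.out";
    StdError      = "mc.err";
    InputSandbox  = {"runmc.sh", "mcjob.opts"};
    OutputSandbox = {"mc.out", "mc.err", "joblog.txt"};
""")

with open("mcjob.jdl", "w") as f:
    f.write(jdl)

# submit through the WP1 tool on a DataGrid user-interface machine
subprocess.run(["dg-job-submit", "mcjob.jdl"], check=True)
```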

  10. Job Execution
  • M9: Will run on farms at CERN, Lyon and RAL for first tests; extend to Nikhef, Bologna and Edinburgh once we get stability. Will use a very simple environment (binaries), with a 'production' flavour for the work.
  • M21: Should be running on many sites (~20?). Complete LHCb environment for production and development, without AFS (use WP1 'sandboxes'). Should be testing user analysis via the GRID, as well as performing production (~50).
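A minimal sketch of the kind of "very simple environment (binaries)" wrapper that could be shipped in a WP1 sandbox, so no AFS is needed on the worker node; the tarball and executable names are assumptions:

```python
"""Illustrative sandbox wrapper: unpack a tarball of pre-built binaries
into the job directory and run the simulation with a hand-set environment."""
import os, subprocess, tarfile

def run_from_sandbox(tarball="lhcb-mc-binaries.tar.gz",
                     executable="./sicbmc.exe", nevents="500"):
    with tarfile.open(tarball) as tar:
        tar.extractall()                   # unpack binaries locally, no AFS
    os.environ["LHCB_DATA"] = os.getcwd()  # minimal environment, set by hand
    subprocess.run([executable, "-nevents", nevents], check=True)

if __name__ == "__main__":
    run_from_sandbox()
```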

  11. Job Monitoring and data quality checking
  • M9: Monitor farms with home-grown tools via the Web. Use home-grown data-histogramming tools for data monitoring.
  • M21: Integrate WP3 tools for farm performance (status of jobs). Combine LHCb ideas on state management and data quality checking with DataGrid software.
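A minimal sketch of the flavour of home-grown Web monitoring referred to for M9: count job states from a simple status file and write a small HTML summary page; the file format and state names are invented for illustration:

```python
"""Summarise job states from a plain status file and emit a small HTML
table for the Web farm monitor.  Input format: one line per job,
"<job_id> <state>", e.g. "1042 RUNNING"."""
from collections import Counter

def summarise(status_file="jobs.status"):
    with open(status_file) as f:
        states = Counter(line.split()[1] for line in f if line.strip())
    rows = "".join(f"<tr><td>{s}</td><td>{n}</td></tr>"
                   for s, n in states.items())
    with open("farm_status.html", "w") as out:
        out.write(f"<table><tr><th>state</th><th>jobs</th></tr>{rows}</table>")

# summarise()  # expects a jobs.status file in the format described above
```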

  12. Bookkeeping database
  • M9: Use the current CERN-centric Oracle-based system.
  • M21: Move to WP2 metadata-handling tools? (use of LDAP, Oracle?). This will be distributed database handling, using the facilities of the replica catalogue and replica management. LHCb must interface the applications' view (metadata) to the GRID tools. Availability of query tools?
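A minimal sketch of the bookkeeping update made per produced file; sqlite3 stands in for the Oracle server at CERN so that the example is self-contained, and the table and column names are assumptions:

```python
"""Register each produced dataset file in the bookkeeping database.
sqlite3 is only a stand-in DB-API driver; production talks to Oracle at CERN."""
import sqlite3

conn = sqlite3.connect("bookkeeping.db")
conn.execute("""CREATE TABLE IF NOT EXISTS mc_files
                (file_name TEXT, job_id INTEGER, n_events INTEGER,
                 site TEXT, castor_path TEXT)""")

def register_file(file_name, job_id, n_events, site, castor_path):
    # one row per produced file; queried later for event selection
    conn.execute("INSERT INTO mc_files VALUES (?, ?, ?, ?, ?)",
                 (file_name, job_id, n_events, site, castor_path))
    conn.commit()

register_file("mc_bd0_jpsiks_0001.dst", 1042, 500, "CERN",
              "/castor/cern.ch/lhcb/mc/mc_bd0_jpsiks_0001.dst")
```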

  13. Data copying and mass storage handling
  • M9: WP2 GDMP tool via a command-line interface to transfer Zebra-format files (control from LHCb scripts). WP5 interface to CASTOR.
  • M21: GDMP will be replaced by smaller tools with an API interface. Copy Zebra + Root + ? Tests of strategy-driven copying via the replica catalogue and replica management. WP5 interfaces to more mass storage devices (HPSS + RAL Datastore).
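A minimal sketch of driving GDMP from an LHCb production script as described for M9: register the new Zebra file in the local catalogue, publish the catalogue to subscribed sites, and pull new replicas on the receiving site. Exact GDMP options vary between releases, so flags are kept to a minimum and the file path is an assumption:

```python
"""Control GDMP from a production script (command-line interface)."""
import subprocess

def publish_output(local_path):
    # make the file known to the local GDMP file catalogue ...
    subprocess.run(["gdmp_register_local_file", local_path], check=True)
    # ... and publish the updated catalogue to subscribed sites
    subprocess.run(["gdmp_publish_catalogue"], check=True)

def fetch_new_replicas():
    # run at the destination site to copy newly published files
    subprocess.run(["gdmp_replicate_get"], check=True)

publish_output("/data/mc/mc_bd0_jpsiks_0001.sim")
```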

  14. Gaudi Architecture
  [Diagram: the Application Manager drives Algorithms, which access data through the Event Data Service (Transient Event Store), Detector Data Service (Transient Detector Store) and Histogram Service (Transient Histogram Store); Persistency Services and Converters connect each transient store to data files; other services include the Event Selector, JobOptions Service, Message Service and Particle Properties Service.]
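A toy sketch (in Python, not the real GAUDI C++ API) of the pattern in this diagram: the Application Manager drives algorithms, which exchange data through a transient event store:

```python
"""Toy illustration of the GAUDI pattern: an application manager owns
algorithms, each event gets a fresh transient store, and algorithms read
and write that store rather than talking to each other directly."""

class TransientEventStore(dict):
    """Event data lives here between algorithms; new instance per event."""

class Algorithm:
    def initialize(self): pass
    def execute(self, event_store): raise NotImplementedError
    def finalize(self): pass

class GenerateTracks(Algorithm):
    def execute(self, event_store):
        event_store["/Event/Tracks"] = ["track1", "track2"]   # toy data

class CountTracks(Algorithm):
    def execute(self, event_store):
        print(len(event_store["/Event/Tracks"]), "tracks")

class ApplicationManager:
    def __init__(self, algorithms): self.algorithms = algorithms
    def run(self, n_events):
        for alg in self.algorithms: alg.initialize()
        for _ in range(n_events):
            store = TransientEventStore()
            for alg in self.algorithms: alg.execute(store)
        for alg in self.algorithms: alg.finalize()

ApplicationManager([GenerateTracks(), CountTracks()]).run(2)
```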

  15. GAUDI services linking to external services
  [Diagram: the same GAUDI components as in slide 14, with the transient stores and services linked to external services: Config. Service, Job Service, Monitoring Service, DataSet DB, OS, Mass Storage, Event Database, PDG Database, Histogram Presenter and other analysis programs.]

  16. Another View
  [Diagram: Algorithms sit in the Gaudi domain and use the Gaudi services through the Gaudi API; the application reaches external services in the Grid domain through a separate API.]

  17. GANGA: Gaudi ANd Grid Alliance
  [Diagram: the GANGA GUI sits between the GAUDI program (JobOptions, Algorithms) and the collective & resource Grid services, returning histograms, monitoring information and results to the user.]
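A hedged sketch of the user-level view GANGA is intended to provide: describe the GAUDI job once and hand it to the grid layer, which builds the submission and returns results; all class and method names are illustrative assumptions, not the GANGA interface:

```python
"""Illustrative user view: a GAUDI job description handed to a grid
backend that would, in the real system, build the WP1 JDL, fill the input
sandbox with the job options and query the resource broker."""

class GaudiJob:
    def __init__(self, options_file, input_data):
        self.options_file = options_file
        self.input_data = input_data

class GridBackend:
    def submit(self, job):
        # placeholder for JDL generation, sandbox packing and brokering
        print(f"submitting {job.options_file} over {len(job.input_data)} files")
        return "job-0001"

job = GaudiJob("bd0_jpsiks.opts", ["LFN:mc_bd0_jpsiks_0001.dst"])
print("grid id:", GridBackend().submit(job))
```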

  18. Conclusions
  • LHCb already has distributed MC production using GRID facilities for job submission
  • Will test the DataGrid M9 (Testbed 1) deliverables in an incremental manner from October 15, using tools from WP1-5
  • Have commenced defining projects to interface software framework (GAUDI) services (Event Persistency, Event Selection, Job Options) to GRID services
  • Within the WP8 structure we will work closely with the other work packages (middleware, testbed, network) in a cycle of requirements analysis, design, implementation and testing
  • http://lhcb-comp.web.cern.ch/lhcb-comp/
  • http://datagrid-wp8.web.cern.ch/DataGrid-WP8/
