
Grid for CBM



Presentation Transcript


  1. Kilian Schwarz, GSI Grid for CBM

  2. What is Grid? • Sharing of distributed resources within one Virtual Organisation

  3. LHC scientists worldwide (world map with the number of scientists per region) • Europe: 267 institutes, 4603 users • other regions: 208 institutes, 1632 users

  4. Start of CBM Grid • There are considerations to start a CBM Grid • Task: distributed MC production • Potential sites: 3 (Bergen, Dubna, GSI) • After positive experience, the Grid can be extended to more sites and tasks, such as distributed analysis

  5. Requirements • Globus-style X.509 user certificates issued for CBM by the GermanGrid CA (http://www.gridka.de) • How to get a certificate at GSI: > . globuslogin > grid-cert-request -cn "<surname> <name>" • The certificate request file and the private key will be stored in $HOME/.globus • The request file has to be signed (openssl) by the person responsible at the CA and mailed to the GermanGrid CA; the certificate will be mailed back via e-mail
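A minimal sketch of the user-side steps at GSI, assuming the default Globus file names under $HOME/.globus (usercert_request.pem, userkey.pem, usercert.pem):

  # load the Globus environment at GSI
  . globuslogin

  # create the certificate request and the private key in $HOME/.globus
  grid-cert-request -cn "<surname> <name>"

  # the request file is sent to the GermanGrid CA for signing;
  # the signed certificate comes back by e-mail and is stored next to the key
  ls $HOME/.globus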

  6. GermanGrid CA How to get a certificate in detail: See http://wiki.gsi.de/Grid/DigitalCertificates

  7. Requirements: CBM VO server (one per VO) • additional sites: Bergen, Dubna • additional users: to be added
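The CBM VO server in this setup is an LDAP server; host, port and base DN below are taken from the glite-mkgridmap configuration on the next slide, and an anonymous bind is assumed. A quick way to inspect the registered VO members:

  # list the entries of the CBM VO on the GSI VO server
  ldapsearch -x -H ldap://glite001.gsi.de:8389 -b "o=cbm,dc=de,dc=de"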

  8. Globus/LCG: creation of a grid-mapfile is necessary for each site • e.g. with the gLite security tools: adjust $GLITE_LOCATION/etc/glite-mkgridmap.conf and add: "group ldap://glite001.gsi.de:8389/o=cbm,dc=de,dc=de" • create the grid-mapfile: $GLITE_LOCATION/sbin/glite-mkgridmap --output=/etc/grid-security/grid-mapfile
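The generated grid-mapfile maps certificate subjects (DNs) to local accounts. The entries below are purely illustrative (made-up DNs); the leading dot is the usual LCG notation for mapping to a pool account:

  # /etc/grid-security/grid-mapfile (illustrative entries)
  "/O=GermanGrid/OU=GSI/CN=Some User"   .cbmvo     # mapped to one of the pool accounts cbmvo00-cbmvo10
  "/O=GermanGrid/OU=JINR/CN=Other User" cbmprod    # mapped to the single production account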

  9. User creation on each site (support of the CBM VO) • Each site has to create CBM user IDs onto which the Grid users will be mapped: • EGEE/LCG: a certain number of pool accounts, e.g. cbmvo00 - cbmvo10 • Globus & AliEn: one production user, e.g. cbmprod, via whose user ID the jobs will be submitted
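A minimal sketch of how a site administrator might create these accounts; the group name, the use of useradd and the home-directory handling are assumptions, not part of the talk:

  # EGEE/LCG style: eleven pool accounts cbmvo00 ... cbmvo10
  groupadd cbm
  for i in $(seq -w 0 10); do
      useradd -m -g cbm "cbmvo$i"
  done

  # Globus/AliEn style: a single production account
  useradd -m -g cbm cbmprod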

  10. CBM software environment • To be able to send real CBM jobs to the Grid, either the participating sites have to install the CBM software and prepare the environment, or the job has to bring its own environment (static links)
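A sketch of the second option: a wrapper script shipped with the job that unpacks a self-contained, statically linked CBM executable instead of relying on a site-wide installation. All file names here are hypothetical:

  #!/bin/sh
  # job wrapper shipped in the input sandbox (hypothetical file names)
  tar xzf cbm_env.tar.gz           # unpack the environment the job brings along
  chmod +x cbmsim.static           # statically linked simulation binary
  ./cbmsim.static run.cfg          # run the simulation with the shipped configuration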

  11. Agreement on common Grid middleware • Basically, the possibilities are: • Globus • NorduGrid • LCG-2 • AliEn • gLite (EGEE) • gLite (AliEn)

  12. LHC Computing Grid Project • Fundamental goal of the LCG: to help the experiments' computing projects • Phase 1 (2002-05): prepare and deploy the environment for LHC computing • Phase 2 (2006-08): acquire, build and operate the LHC computing service • SC2 - Software & Computing Committee: includes the four experiments and the Tier 1 Regional Centres; identifies common solutions and sets requirements for the project • PEB - Project Execution Board: manages the implementation, organising projects and work packages, coordinating between the Regional Centres

  13. EDG Middleware Architecture (layer diagram) • Local computing: Local Application, Local Database • Grid Application Layer: Data Management, Metadata Management, Job Management • Collective Services: Grid Scheduler, Information & Monitoring, Replica Manager • Underlying Grid Services: Computing Element Services, Storage Element Services, Replica Catalog, Authorization Authentication and Accounting, Service Index, SQL Database Services • Grid middleware based on Globus and CondorG (via VDT) • Fabric services: Monitoring and Fault Tolerance, Node Installation & Management, Fabric Storage Management, Resource Management, Configuration Management

  14. Dubna (JINR): LCG-2 site

  15. Dubna (JINR): LCG-2 site; LCG test mostly successful

  16. JINR (LCG-2 site): job submission
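The slide shows a screenshot of a job submission at JINR. For reference, a minimal sketch of how a job is submitted on an LCG-2 site with the standard edg-job-* tools; the JDL attributes are the usual ones, while the file names are placeholders:

  # cbm_test.jdl (placeholder job description)
  #   Executable    = "/bin/hostname";
  #   StdOutput     = "std.out";
  #   StdError      = "std.err";
  #   OutputSandbox = {"std.out", "std.err"};

  grid-proxy-init                    # create a short-lived proxy from the user certificate
  edg-job-submit cbm_test.jdl        # hand the job to the resource broker
  edg-job-status <jobId>             # follow the job
  edg-job-get-output <jobId>         # retrieve the output sandbox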

  17. Timeline (2001-2005): first production (distributed simulation), 10% data challenge (analysis) • After only 2 years of development, we have deployed a distributed computing environment which meets the needs of the ALICE experiment • Simulation & reconstruction • Event mixing • Analysis • Using Open Source components (representing 99% of the code), internet standards (SOAP, XML, PKI, ...) and a scripting language (Perl) was the key element that allowed quick prototyping and very fast development cycles (P. Buncic, CERN)

  18. Building AliEn P. Saiz, CERN

  19. AliEn Grid (ALICE VO): • 77 configured sites worldwide

  20. DC monitoring: http://alien.cern.ch • MonALISA: http://aliens3.cern.ch:8080

  21. lxts05.gsi.de: AliEn client (PANDA VO)

  22. JINR and Bergen: AliEn sites

  23. JINR and Bergen: AliEn sites

  24. Grids and Open Standards (evolution diagram: increased functionality and standardization over time) • Custom solutions • Globus Toolkit as de-facto standard (GGF: GridFTP, GSI; X.509, LDAP, FTP, ...) • Open Grid Services Architecture on top of Web services (GGF: OGSI, ... plus OASIS, W3C), with multiple implementations including the Globus Toolkit • App-specific services

  25. Architecture Guiding Principles • Lightweight (existing) services • Easily and quickly deployable • Use existing services where possible as basis for re-engineering • Interoperability • Allow for multiple implementations • Resilience and fault tolerance • Co-existence with deployed infrastructure • Run as an application (e.g. on LCG-2, Grid3) • Reduce requirements on site components • Basically Globus and SRM • Co-existence (and convergence) with LCG-2 and Grid3 is essential for the EGEE Grid service • Service-oriented approach • WSRF is still being standardized; no mature WSRF implementations exist to date and there is no clear picture of the impact of WSRF, hence: start with plain WS • WSRF compliance is not an immediate goal, but we follow the WSRF evolution • WS-I compliance is important

  26. EGEE approach (building on VDT, EDG, AliEn, LCG, ...) • Exploit experience and components from existing projects: AliEn, VDT, EDG, LCG, and others • The design team works out architecture and design • Architecture: https://edms.cern.ch/document/476451 • Design: https://edms.cern.ch/document/487871/ • Components are initially deployed on a prototype infrastructure • Small scale (CERN & Univ. Wisconsin) • Get user feedback on service semantics and interfaces • After internal integration and testing, components are delivered to SA1 and deployed on the pre-production service

  27. gLite (AliEn) • From now on used by ALICE for globally distributed analysis in connection with PROOF (at GSI: http://www-w2k.gsi.de/root/, "PROOF at GSI")

  28. gLite (EGEE) • Will replace LCG-2.X in the near(?) future, but nobody has real experience with it yet

  29. Summary (middleware) • LCG-2: GSI and Dubna; pro: large distribution, support; contra: difficult to set up, no distributed analysis • AliEn: GSI, Dubna, Bergen; pro: in production since 2001; contra: uncertain future, no support • Globus 2: GSI, Dubna, Bergen?; pro/contra: simple but functioning (no resource broker, no file catalogue, no support) • gLite/GT4: new on the market; pro/contra: nobody has production experience (gLite)

  30. lxg01-05.gsi.de • LCG test installation, visible in the LCG pre-production testbed • Trying to port LCG to Debian Linux
