
Workload Management Workpackage




  1. Workload Management Workpackage Massimo Sgaravatto INFN Padova

  2. Overview
  • Goal: define and implement a suitable architecture for distributed scheduling and resource management in a GRID environment
  • Large heterogeneous environment
    • PC farms, not supercomputers, used in HEP
    • Large numbers (thousands) of independent users in many different sites
  • Different applications with different requirements
    • HEP Monte Carlo productions, reconstructions and production analyses
      • “Scheduled” activities
      • Goal: throughput maximization
    • HEP individual physics analyses
      • “Chaotic”, non-predictable activities
      • Goal: latency minimization
    • …

  3. Overview
  • Many challenging issues:
    • Optimizing the choice of execution location based on the availability of data, computation and network resources
    • Optimal co-allocation and advance reservation of CPU, data, network
    • Uniform interface to possibly different local resource management systems
    • Priorities, policies on resource usage
    • Reliability
    • Fault tolerance
    • Scalability
    • …
  • INFN responsibility in DataGrid

  4. Tasks
  • Job resource specification and job description
    • Method to define and publish the resources required by a job (a sketch follows below)
    • Job control language (submission language, API, GUI)
  • Partitioning programs for parallel execution
    • “Decomposition” of single jobs into multiple, “smaller” jobs that can be executed in parallel
    • Exploitation of task and data parallelism
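
  To make the job-description task concrete, the sketch below shows how a job's resource requirements could be expressed in the Condor ClassAds notation that the architecture slide later refers to; the executable name and all attribute values are illustrative assumptions, not something specified in these slides.

      [
        Executable   = "cmsim_prod.sh";           // application to run (hypothetical name)
        Arguments    = "run0042.cfg";
        Requirements = other.OpSys == "LINUX"     // only Linux farm nodes are acceptable
                       && other.Memory >= 256;    // need at least 256 MB per node
        Rank         = other.Mips;                // among acceptable nodes, prefer the fastest
      ]

  Requirements states what a resource must provide for the job to run at all, while Rank orders the acceptable resources; this is the extra expressiveness discussed later in the RSL evaluation.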

  5. Tasks
  • Scheduling
    • Definition and implementation of scheduling policies to find the best match between job requirements and available resources (see the matchmaking sketch below)
    • Co-allocation and advance reservation
  • Resource management
  • Services
    • Bookkeeping, accounting, logging, authentication, authorization
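
  As a complement to the job ad above, a resource-side advertisement of the kind a farm could publish might look like the sketch below (attribute names and values are again made up for illustration); scheduling then amounts to evaluating each job's Requirements against ads like this and choosing the candidate that gives the highest Rank.

      [
        Type     = "Machine";
        OpSys    = "LINUX";
        Memory   = 512;        // MB of memory per node
        Mips     = 400;        // relative CPU speed
        FreeCPUs = 24;         // idle slots in the local batch system
      ]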

  6. Effort breakdown (man-months)

  7. Workload Management in the INFN-GRID project
  • Integration, adaptation and deployment of middleware developed within the DataGrid project
  • GRID software must enable physicists to run their jobs using all the available GRID resources in a “transparent” way
  • HEP applications classified in 3 different “classes”, with increasing levels of complexity
    • Workload management system for Monte Carlo productions
      • Goal: throughput maximization
      • Implementation strategy: code migration (moving the application where the processing will be performed)
    • Workload management system for data reconstruction and production analysis
      • Goal: throughput maximization
      • Implementation strategy: code migration + data migration (moving the data where the processing will be performed, and collecting the outputs in a central repository)
    • Workload management system for individual physics analysis
      • “Chaotic” processing
      • Goal: latency minimization
      • Implementation strategy: code migration + data migration + remote data access (accessing data remotely) for client/server applications

  8. First Activities and Results
  • CMS-HLT use case (Monte Carlo production) analyzed in terms of GRID requirements and availability of GRID tools
  • Discussions with the Globus team and the Condor team
    • Good and productive collaborations already in place
  • Definition of a possible high-throughput workload management system architecture
    • Use of Globus and Condor mechanisms
    • But major developments are needed

  9. High throughput workload management system architecture (simplified design)
  [Architecture diagram] Jobs are submitted to a Master using Class-Ads. The Master performs resource discovery against the Grid Information Service (GIS), which publishes information on the characteristics and status of the local resources (plus other info), and chooses on which Globus resources the jobs must be submitted. Submission then goes through Condor-G (condor_submit, Globus Universe), which is able to provide reliability and allows the use of Condor tools for job monitoring, logging, etc.; via globusrun the jobs reach Globus GRAM, which acts as the uniform interface to the different local resource management systems (Condor, LSF, PBS) running on the farms at Site1, Site2 and Site3. (A submission sketch follows below.)
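
  A submission through this chain could look roughly like the following Condor-G submit description; the GRAM contact string uses a made-up host name and the file names are illustrative only.

      # hypothetical Condor-G submit file for one production job
      universe        = globus
      globusscheduler = grid001.pd.infn.it/jobmanager-lsf   # GRAM contact of the target farm (made-up host)
      executable      = cmsim_prod.sh
      arguments       = run0042.cfg
      output          = run0042.out
      error           = run0042.err
      log             = run0042.log    # Condor user log, used for job monitoring and logging
      queue

  After condor_submit, Condor-G keeps track of the job on the submitting side while GRAM translates the request for the local LSF, PBS or Condor system; condor_q can then be used to follow the job's status.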

  10. First Activities and Results
  • Ongoing activities in putting together the various building blocks
  • Globus deployment
    • Installed on ~35 hosts in 11 different sites
    • INFNGRID distribution toolkit to make Globus deployment easier and more automatic
    • INFN customizations
      • User and host certificates signed by the INFN CA (see the certificate-request sketch below)
  • Preliminary architecture for the GIS
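
  For the certificate step, requesting a user certificate with the standard Globus GSI tools is sketched below; the paths are the conventional defaults, and the signing itself is of course performed by the INFN CA, not by the user.

      grid-cert-request        # creates ~/.globus/userkey.pem and a certificate request
      # the request (usercert_request.pem) is sent to the INFN CA; the signed certificate
      # is then installed as ~/.globus/usercert.pem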

  11. GIS Architecture (test phase)
  [Hierarchy diagram] A top-level INFN GIIS (dc=infn, dc=it, o=grid) sits above the per-site GIISes, e.g. Bologna (dc=bo, dc=infn, dc=it, o=grid) and Milano (dc=mi, dc=infn, dc=it, o=grid), which in turn aggregate the GRISes of the individual hosts; an experiment-level INFN ATLAS GIIS (exp=atlas, o=grid) is also part of the design. The diagram's legend distinguishes components that are already implemented, components implemented using the INFNGRID distribution, and components still to be implemented. (A query sketch follows below.)
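
  Since a GIIS/GRIS is an LDAP server, it can be inspected with a plain LDAP client; the query below is only an illustration, assuming a made-up host name, the conventional MDS port 2135, current OpenLDAP client syntax and a catch-all filter.

      ldapsearch -x -h giis.infn.it -p 2135 \
          -b "dc=infn, dc=it, o=grid" "(objectclass=*)"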

  12. First Activities and Results
  • Evaluation of Globus GRAM
    • Tests with job submissions on remote resources
    • Globus GRAM as a uniform interface to different underlying resource management systems (LSF, Condor, PBS)
    • Problems related to scalability and fault tolerance (Globus jobmanager)
  • Evaluation of Globus RSL as a uniform language to describe resources (see the sketch below)
    • More flexibility is required (Condor ClassAds model)
  • “Cooperation” between GRAM and GIS
    • The information on the characteristics and status of local resources and on jobs is not enough
    • The local resources to be considered are farms, not single workstations
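
  For comparison with the ClassAds examples above, an RSL-based submission through globusrun might look like the sketch below (host name and attribute values are made up); RSL essentially fixes a flat list of attribute=value pairs, which is why the slide asks for the extra flexibility of the ClassAds model.

      globusrun -o -r grid001.pd.infn.it/jobmanager-pbs \
          '&(executable=/home/cms/cmsim_prod.sh)(arguments=run0042.cfg)(count=1)(queue=short)(maxTime=120)'

  Here -r selects the GRAM contact of the remote resource and -o streams the job's output back to the submitting machine.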

  13. First Activities and Results
  • Evaluation of Condor-G
    • It works, but some problems must be fixed:
      • Errors are very difficult to understand and diagnose
      • Problems with log files
      • Scalability problems on the submitting machine
    • Condor-G is not able to provide fault tolerance and robustness (because Globus doesn’t provide these features)
      • Fault tolerance only on the submitting side
    • The Condor team is already working to fix some of these problems

  14. Some next steps
  • Tests with real applications and real environments
    • CMS fall production
  • Evaluation of the new Condor-G implementation (when ready)
  • GIS – ClassAds translator (Globus team?)
  • Face the problems!!
  • Master development!!!

  15. Other info • http://www.infn.it/grid
