
Introduction to SHIWA Technology





Presentation Transcript


  1. Introduction to SHIWA Technology. Peter Kacsuk, MTA SZTAKI and Univ. of Westminster, kacsuk@sztaki.hu

  2. What is a multi-workflow simulation? • A simulation workflow where the nodes of the simulation are themselves workflows, potentially based on different workflow languages • A practical example: the LINGA application (outGRID project) • Combining several workflows (2 CIVETs + FreeSurfer + STATS) • Heterogeneous workflow systems (LONI Pipeline / MOTEUR) • It should also enable execution in different DCIs. The LINGA application uses: • LONI Cluster (USA) • gLite-based neuGRID infrastructure (Europe) • CBRAIN HPC infrastructure (Canada)
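The nesting described above can be sketched in code. This is a minimal illustrative model, not SHIWA's actual data model: the class and field names are assumptions, and only the LINGA components named on the slide are used.

```python
from dataclasses import dataclass, field
from typing import List, Union

@dataclass
class Task:
    """A native task node of a workflow."""
    name: str

@dataclass
class Workflow:
    """A workflow whose nodes may themselves be workflows (a multi-workflow)."""
    name: str
    language: str  # e.g. "LONI Pipeline" or "MOTEUR"
    nodes: List[Union[Task, "Workflow"]] = field(default_factory=list)

# LINGA-style composition: two CIVET runs and FreeSurfer feed a STATS workflow
civet1 = Workflow("CIVET@CBRAIN", "LONI Pipeline", [Task("civet")])
civet2 = Workflow("CIVET@neuGRID", "LONI Pipeline", [Task("civet")])
freesurfer = Workflow("FreeSurfer@CRANIUM", "LONI Pipeline", [Task("recon-all")])
stats = Workflow("STATS@EGI", "MOTEUR", [Task("stats")])
linga = Workflow("LINGA", "WS-PGRADE", [civet1, civet2, freesurfer, stats])

# The meta-workflow mixes workflow languages among its sub-workflow nodes
languages = {n.language for n in linga.nodes if isinstance(n, Workflow)}
print(sorted(languages))  # ['LONI Pipeline', 'MOTEUR']
```

The point of the sketch is that the outer workflow never needs to understand the inner languages; it only records which engine each sub-workflow node belongs to.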

  3. Experiment setup • CIVET @ CBRAIN (LONI Pipeline): 151 input data items • CIVET @ neuGRID (LONI Pipeline): 146 input data items • FreeSurfer @ CRANIUM (LONI Pipeline) • STATS @ EGI (MOTEUR): processes the outputs of both CIVETs

  4. SHIWA solution for LINGA: multi-workflow management of the sub-workflows

  5. SHIWA (SHaring Interoperable Workflows for Large-Scale Scientific Simulations on Available DCIs) project • Start date: 01/07/2010 • Duration: 27 months • Total budget: 2,101,980 € • Funding from the EC: 1,800,000 € • Total funded effort in person-months: 231 • Web site: www.shiwa-workflow.eu • Coordinator: Prof. Peter Kacsuk, email: Kacsuk@sztaki.hu

  6. Motivations 1 • In many cases large simulations are organized as scientific workflows that run on DCIs • However, there are too many different WF formalisms, WF languages and WF engines • If a community has selected a WF system, it is locked into that system: • They cannot share their WFs with other communities (even in the same scientific field) • They cannot utilize WFs developed by other communities

  7. WF Ecosystem

  8. Who are the members of an e-science community from the WF applications point of view? • End-users (e-scientists) (5,000-50,000) • Execute the published WF applications with custom input parameters, creating application instances that use the published WF applications as templates • WF System Developers (50-100) • Develop WF systems • Write technical, user and installation manuals • WF Application Developers (500-1,000) • Develop WF applications • Publish the completed WF applications for end-users

  9. What does a WF developer need? • Access to a large set of ready-to-run scientific WF applications (WF App. Repository) • A portal/desktop to parameterize and run these applications, and to further develop them • Access to a large e-science infrastructure of various DCIs to make these WF applications run: cluster-based service grids (SGs) (EGEE, OSG, etc.), supercomputer-based SGs (DEISA, TeraGrid), desktop grids (DGs) (BOINC, Condor, etc.), clouds, local clusters

  10. In the past: WF developers worked in an isolated way, on a single DCI • Using a portal/desktop to develop WF applications accessing a single DCI (e.g. a cluster-based service grid such as ARC) to make these WF applications run • As a result, if a community selected a WF system, it was locked into that DCI • Porting the WF to another DCI required large effort • Parallel execution of the same WF in several DCIs was usually not possible

  11. After SHIWA: collaboration between WF application developers • Application developers publish WF applications in the SHIWA App. Repository to be continued by other application developers • Application developers use the SSP Portal/desktop to develop complex applications for various end-user communities, executable on various DCIs: cluster-based service grids (SGs) (EGEE, OSG, etc.), supercomputer-based SGs (DEISA, TeraGrid), desktop grids (DGs) (BOINC, Condor, etc.), clouds, local clusters, supercomputers

  12. Project objectives • Enable user communities to share their WFs: • Publish the developed WFs • Access and re-use the published WFs • Build multi-workflows from the published WFs • Toolset: the SHIWA Simulation Platform • WF Repository (production) • SHIWA Portal (production) • SHIWA Desktop (prototype)

  13. Coarse-grained interoperability (CGI) • CGI = nesting of different workflow systems into a multi-workflow to achieve interoperability of WF execution frameworks
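The CGI idea can be sketched as follows. This is a hypothetical sketch, not SHIWA's implementation: the master engine treats each foreign sub-workflow as a black box and delegates it to whichever engine is registered for that workflow language. All function and variable names here are illustrative.

```python
# Stand-ins for invoking foreign workflow engines (illustrative only)
def run_moteur(wf):
    return f"MOTEUR ran {wf}"

def run_loni(wf):
    return f"LONI Pipeline ran {wf}"

# Registry mapping a workflow language to the engine that can enact it
ENGINES = {"MOTEUR": run_moteur, "LONI Pipeline": run_loni}

def enact(node):
    """Execute one node of the master workflow.

    A node is either a native task (a callable) or a (language, workflow)
    pair naming a sub-workflow to be nested under a foreign engine.
    """
    if callable(node):
        return node()
    language, wf = node
    return ENGINES[language](wf)  # nesting: hand the whole WF to its engine

master = [("LONI Pipeline", "CIVET"), ("MOTEUR", "STATS")]
results = [enact(n) for n in master]
print(results)
```

The design point is that the master engine needs no knowledge of the foreign workflow languages, only an invocation interface per engine, which is exactly why CGI is "coarse-grained".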

  14. Fine-grained interoperability (FGI) • Based on IWIR, the Interoperable Workflow Intermediate Representation • Workflow system A exports a workflow to IWIR; workflow system B imports it from IWIR
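The export/import round trip can be sketched like this. Note this is an illustrative simplification only: real IWIR is an XML-based workflow language, whereas here a workflow is reduced to a neutral Python dict so that "export to IWIR" from system A composed with "import from IWIR" into system B yields a translation. The function names and formats are assumptions.

```python
def export_to_iwir(wfa_tasks):
    """Export a (hypothetical) system-A task list to a neutral form."""
    return {"tasks": [{"name": t} for t in wfa_tasks]}

def import_from_iwir(iwir):
    """Import the neutral form into a (hypothetical) system-B format,
    which here just represents task names in upper case."""
    return [t["name"].upper() for t in iwir["tasks"]]

# WFA -> IWIR -> WFB: neither system needs a direct translator to the other
wfa = ["align", "segment", "stats"]
wfb = import_from_iwir(export_to_iwir(wfa))
print(wfb)  # ['ALIGN', 'SEGMENT', 'STATS']
```

With an intermediate representation, n workflow systems need 2n translators (one exporter and one importer each) instead of n(n-1) pairwise ones.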

  15. Tools for CGI: SHIWA services • SHIWA Repository to: • Describe workflows • Share workflows • SHIWA Portal to: • Access and enact registered workflows • Compose and enact multi-workflows • Monitor workflow and multi-workflow execution in various DCIs • Retrieve the results of the execution

  16. SHIWA Repository facilitates publishing and sharing workflows. Supports: • Abstract workflows with multiple implementations for over 10 workflow systems • Storing execution-specific data. Available: • from the SHIWA Portal • as a standalone service at repo.shiwa-workflow.eu
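The abstract/concrete split the repository supports can be sketched as a lookup table. This is a hedged toy model, not the repository's actual schema or API; the workflow names and file names are invented for illustration.

```python
# One abstract workflow, several concrete implementations, keyed by the
# WF system that can execute each implementation (all names illustrative).
REPO = {
    "CIVET": {
        "LONI Pipeline": "civet.loni",
        "MOTEUR": "civet.gwendia",
    }
}

def find_implementation(abstract_name, engine):
    """Return the stored implementation of an abstract WF for one engine,
    or None when no implementation exists for that engine."""
    return REPO.get(abstract_name, {}).get(engine)

print(find_implementation("CIVET", "MOTEUR"))  # civet.gwendia
```

A portal enacting a multi-workflow would resolve each sub-workflow node through a lookup like this, picking the implementation that matches an engine available on the target DCI.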

  17. Scenario: find and test WFs • SHIWA Repository: analyze the description, inputs and outputs of published WFs • SHIWA Portal: instantiate the WF from the repository and execute it with given sample data (inside a WS-PGRADE workflow used as the master WF system) EGI Community Forum, Munich, March 28, 2012

  18. SHIWA Portal: Workflow Editor (Title: Work Package SA1, Author: G. Terstyanszky, v. 1.0)

  19. SHIWA Portal: Configuring Workflow

  20. SHIWA Portal: Executing Workflow

  21. CGI user scenario with WS-PGRADE as master [Architecture diagram: a researcher uses the WS-PGRADE workflow editor of the SHIWA Portal to search the SHIWA Repository for workflows (WF1…WFn) and edit them (steps s1-s3); the WS-PGRADE workflow engine submits the multi-workflow via GEMLCA (with GIB), the GEMLCA Repository and the SHIWA/GEMLCA Service Proxy Servers to the SHIWA VO (local cluster, Globus DCI, gLite DCI), invoking pre-deployed workflow engines such as Taverna, Triana, MOTEUR, ASKALON, Kepler and GWES (steps s4-s6).]

  22. Advantages for the various types of user communities using SHIWA • WF system developers • Better visibility: many more WF developers can access and use their WF system than before (through the applications stored in the SHIWA repository) • The joint impact is much bigger than what the individual WF systems can achieve • WF developers • They can collaborate: share and re-use existing WF applications • WF application development can be accelerated • More complex WFs can be created in a shorter time • They can access many different DCIs (so their WFs will be more popular) • End-users • A much bigger set of usable and more sophisticated WF applications • These applications can run on various DCIs

  23. Conclusions • SHIWA brings advantages for all 3 kinds of user communities: • WF system developers • WF developers • End-users • With relatively little effort: • WF systems can join the SSP • WF system developers can adapt SHIWA technology • Further information: www.shiwa-workflow.eu
