
Coupling Climate and Hydrological Models: Interoperability Through Web Services

Presentation Transcript


  1. Coupling Climate and Hydrological Models: Interoperability Through Web Services

  2. Outline
  • Project Objective
  • Motivation
  • System Description
  • Components
  • Frameworks
  • System Driver
  • Logical Workflow
  • Data Flow
  • Architecture
  • Future Directions

  3. Project Objective: The development of an end-to-end workflow that executes, in a loosely coupled mode, a distributed modeling system composed of an atmospheric climate model using ESMF and a hydrological model using OpenMI.

  4. Motivation
  • Hydrological impact studies can be improved when forced with data from climate models [Zeng et al., 2003; Yong et al., 2009]
  • A technology gap exists: many hydrological models run on personal computers, while most climate models run on high-performance supercomputers
  • Leveraging ESMF and OpenMI can mitigate the communication difficulties between these modeling types
  • ESMF contains web services interfaces that can be used to communicate across a distributed network
  • Both ESMF and OpenMI are widely used within their respective communities

  5. System Description
  • SWAT (hydrology model) runs on a PC
  • CAM (climate model) runs on an HPC system
  • Wrappers for both SWAT and CAM provide an OpenMI interface to each model
  • The driver (OpenMI Configuration Editor) uses the OpenMI interface to timestep through the models via the wrappers
  • Access to CAM across the network is provided by ESMF Web Services
  • CAM output data is streamed to the CAM wrapper via ESMF Web Services
  (Diagram: Personal Computer hosting the Driver, OpenMI, SWAT, and the CAM OpenMI Wrapper; High Performance Computer hosting ESMF Web Services and the ESMF CAM Component)
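  As a rough illustration of the driver's role described above, the following Python sketch links a CAM wrapper's output to SWAT's input and steps both models through time with GetValues-style pulls. The class and method names (CamWrapper, SwatModel, get_values, run_timestep) are hypothetical stand-ins, not the actual OpenMI .NET API.

    # Illustrative sketch of the driver's role: link the CAM wrapper's output to
    # SWAT's input and step both models through time via GetValues-style pulls.
    # Class and method names are hypothetical stand-ins for the OpenMI .NET API.
    from datetime import date, timedelta

    class CamWrapper:
        """Stand-in for the CAM OpenMI wrapper; would call ESMF Web Services."""
        def get_values(self, quantity, when):
            # In the real system this triggers a CAM timestep on the HPC side
            # and streams the resulting field back over the network.
            return {"surface_air_temperature": 285.0}[quantity]

    class SwatModel:
        """Stand-in for the OpenMI-wrapped SWAT model running on the PC."""
        def __init__(self, forcing):
            self.forcing = forcing          # upstream (CAM) component
        def run_timestep(self, when):
            temp = self.forcing.get_values("surface_air_temperature", when)
            print(f"{when}: forcing SWAT with T = {temp} K")

    # Driver loop (played by the OpenMI Configuration Editor in the real system)
    cam = CamWrapper()
    swat = SwatModel(forcing=cam)
    day = date(1977, 1, 1)
    for _ in range(3):                      # three daily timesteps as a demo
        swat.run_timestep(day)
        day += timedelta(days=1)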

  6. Components: SWAT
  • The hydrological model chosen for this project is the Soil and Water Assessment Tool (SWAT)
  • It is a river-basin-scale model developed to quantify the impact of land management practices in large, complex watersheds
  • It was chosen for this project because it is widely used, is open source, and runs on a Windows platform

  7. Components: CAM
  • The atmospheric model chosen for this system is the Community Atmosphere Model (CAM5), part of the Community Earth System Model (CESM 1.0.3)
  • It was chosen because it has ESMF component interfaces, our group has an ongoing collaboration with CESM, and it is open source

  8. Frameworks: Earth System Modeling Framework (ESMF)
  • A high-performance, flexible software infrastructure that increases the ease of use, performance portability, interoperability, and reuse of Earth science applications
  • Provides an architecture for composing complex, coupled modeling systems and includes array-based, multi-dimensional data structures
  • Has utilities for developing individual models, including utilities to make models self-describing
  • Web services included in the ESMF distribution allow any networked ESMF component to be made available as a web service

  9. Frameworks: OpenMI
  • The OpenMI Software Development Kit (SDK) is a software library that provides a standardized interface focused on time-dependent data transfer
  • It is primarily designed to work with systems that run simultaneously, but in a single-threaded environment [Gregersen et al., 2007]
  • The primary data structure in OpenMI is the ExchangeItem, which comes in the form of an InputExchangeItem and an OutputExchangeItem (single point, single timestep)
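  OpenMI itself is a .NET standard; the Python sketch below only mirrors the idea of input and output exchange items describing one quantity at one point, linked so the accepter can pull values at run time. All names are illustrative, not the OpenMI SDK's actual types.

    # Schematic Python equivalents of OpenMI exchange items (the real SDK is .NET).
    # Each item describes one quantity at one element set (here, a single point).
    from dataclasses import dataclass

    @dataclass
    class OutputExchangeItem:
        quantity: str          # e.g. "precipitation"
        unit: str              # e.g. "mm/day"
        location: tuple        # (lat, lon) of the point the value applies to

    @dataclass
    class InputExchangeItem:
        quantity: str
        unit: str
        location: tuple

    # A link connects a provider's output item to an accepter's input item;
    # at run time the accepter pulls values through GetValues(time, link).
    cam_out = OutputExchangeItem("precipitation", "mm/day", (33.03, -95.92))
    swat_in = InputExchangeItem("precipitation", "mm/day", (33.03, -95.92))
    print(cam_out.quantity == swat_in.quantity)   # True: quantities must match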

  10. The System Driver
  • Controls the application flow
  • Implemented using OpenMI's Configuration Editor
  • A convenient tool for testing the OpenMI implementations and model interactions

  11. Hardware Architecture
  • The atmospheric model runs on an HPC platform
  • Access to the HPC compute nodes must be through the login nodes
  • Access to the login nodes is through the virtual server (web services)
  • The client contains the OpenMI and SWAT software, which run on a Windows platform
  (Diagram: Personal Computer (Windows); High Performance Computer comprising a Virtual Linux Server, Login Nodes (kraken), and Compute Nodes (kraken))

  12. Software Architecture: Client
  • The Configuration Editor is the driver; it is used to link the models and trigger the start of the run
  • The hydrological model (SWAT 2005) is a version modified to work with OpenMI
  • Access to the atmospheric model (CAM) is through "wrapper" code that accesses ESMF Web Services via an OpenMI interface
  (Diagram: Personal Computer (Windows) containing the OpenMI Configuration Editor, OpenMI, SWAT 2005, and the CAM OpenMI Wrapper, which connects out to the Web Services)

  13. Software Architecture: Server
  • In some HPC systems, access to nodes is restricted; in XSEDE, only the login nodes can communicate with the compute nodes
  • Access to and from external systems can be controlled via "gateway" systems using web services
  • Running applications (such as the CAM Component Service) on compute nodes must be handled by a job scheduler
  (Diagram: Linux web server running Tomcat/Axis2 and SOAP services; HPC login nodes hosting the Process Controller, Registrar, and Job Scheduler; HPC compute nodes running multiple Component Services, each wrapping a CAM instance)
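  A simplified Python sketch of the gateway pattern on the server side: a process controller records each client in a registrar and hands the actual launch to the batch scheduler. The scheduler command line, script name, and status strings are placeholders for illustration, not the system's actual interfaces.

    # Simplified sketch of the login-node gateway: a Process Controller that
    # registers clients and submits CAM component services through the batch
    # scheduler. The scheduler command and status strings are illustrative.
    import subprocess
    import uuid

    class Registrar:
        """Keeps the status of every CAM component service, keyed by client id."""
        def __init__(self):
            self._status = {}
        def set_status(self, client_id, status):
            self._status[client_id] = status
        def get_status(self, client_id):
            return self._status.get(client_id, "UNKNOWN")

    class ProcessController:
        def __init__(self, registrar):
            self.registrar = registrar
        def new_client(self):
            client_id = str(uuid.uuid4())
            # Hand the launch to the job scheduler; a PBS-style submission is
            # assumed here, with a placeholder job script name.
            try:
                subprocess.run(["qsub", "run_cam_component.sh", client_id], check=False)
            except FileNotFoundError:
                print("qsub not available here; submission shown for illustration only")
            self.registrar.set_status(client_id, "SUBMITTED")
            return client_id

    registrar = Registrar()
    controller = ProcessController(registrar)
    cid = controller.new_client()
    print(cid, registrar.get_status(cid))   # e.g. "<uuid> SUBMITTED"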

  14. Logical Workflow: One-Way Coupling
  (Sequence diagram across Driver, SWAT/OpenMI, ATM/OpenMI Wrapper, ESMF Web Services, and ESMF Component: Initialize and Prepare propagate from the Driver through the wrapper, creating a NewClient and calling Initialize / ESMF_GridCompInitialize; each GetValues call triggers RunTimestep / ESMF_GridCompRun and a GetData that returns a ValueSet; Finish, Finalize / ESMF_GridCompFinalize, Dispose, and EndClient close out the run)
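  To make the call chain above concrete, here is a hedged Python sketch of the one-way sequence from the wrapper's side. The object `svc` is an assumed client whose method names simply mirror the labels on the slide; it is not a published ESMF Web Services API.

    # One-way coupling sequence from the CAM wrapper's point of view.
    # `svc` stands in for a client of the ESMF Web Services endpoints; the method
    # names mirror the slide's call chain, not a published API.

    def run_one_way(svc, n_timesteps):
        client_id = svc.new_client()                # Driver: Initialize
        svc.initialize(client_id)                   # -> ESMF_GridCompInitialize
        values = []
        for _ in range(n_timesteps):                # Driver: GetValues per timestep
            svc.run_timestep(client_id)             # -> ESMF_GridCompRun
            values.append(svc.get_data(client_id))  # output streamed back as a ValueSet
        svc.finalize(client_id)                     # -> ESMF_GridCompFinalize
        svc.end_client(client_id)                   # Driver: Dispose
        return values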

  15. Logical Workflow: Two-Way Coupling
  (Sequence diagram as for one-way coupling, with an additional GetValues back from the ATM/OpenMI wrapper to SWAT within each timestep; extrapolation is used to return a ValueSet for that request before RunTimestep / ESMF_GridCompRun, GetData, and the returned ValueSet complete the step)
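  The deadlock-breaking step can be sketched in Python: when one model's request arrives before the other model has any values to give, the callee answers with an extrapolated value instead of calling back. The persistence extrapolation (reusing the last known value) and the class names are assumptions made purely to illustrate the idea on the slide.

    # Sketch of the two-way deadlock break: on the first exchange, one side must
    # answer with an extrapolated value rather than asking the other side first.
    # Persistence extrapolation (reuse the last known value) is illustrative only.

    class TwoWayWrapper:
        def __init__(self, name, partner=None, initial_value=0.0):
            self.name = name
            self.partner = partner
            self.last_value = initial_value
            self.step = 0

        def get_values(self, requested_by=None):
            if self.step == 0 and requested_by is not None:
                # First timestep and the request came from the partner:
                # extrapolate instead of calling back (which would deadlock).
                return self.last_value
            # Normal case: pull the partner's value, then produce our own.
            forcing = self.partner.get_values(requested_by=self.name)
            self.step += 1
            self.last_value = forcing + 1.0     # placeholder "model physics"
            return self.last_value

    cam = TwoWayWrapper("CAM", initial_value=280.0)
    swat = TwoWayWrapper("SWAT", partner=cam, initial_value=5.0)
    cam.partner = swat
    print(swat.get_values())   # SWAT pulls from CAM; CAM extrapolates, no deadlock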

  16. Data Flow: One-Way Coupling
  • Data is pulled from the CAM component to SWAT via the wrapper, initiated by the OpenMI GetValues call; this call is made once per timestep
  • Data is exchanged between CAM and SWAT using the OpenMI Exchange Item structures, which handle the translation from grid to point values
  (Diagram: on the High Performance Computer, the ESMF Component/CAM fills an ESMF State via GetDataValues; on the Personal Computer, the CAM/OpenMI wrapper's Output Exchange Item feeds SWAT/OpenMI's Input Exchange Item through GetValues)
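  A toy sketch of the grid-to-point translation that the exchange items must perform: sample a CAM-like latitude/longitude grid at the SWAT weather-station locations. Nearest-neighbour lookup, the grid values, and the coordinates are assumptions for illustration; the actual translation method may differ.

    # Toy grid-to-point translation: sample a CAM-like lat/lon grid at the SWAT
    # weather-station locations. Nearest-neighbour lookup is used purely to
    # illustrate what the exchange items must do; the real mapping may differ.
    import math

    def nearest_cell_value(grid, lats, lons, point):
        """Return the grid value at the cell centre closest to `point`."""
        plat, plon = point
        best = min(
            ((i, j) for i in range(len(lats)) for j in range(len(lons))),
            key=lambda ij: math.hypot(lats[ij[0]] - plat, lons[ij[1]] - plon),
        )
        return grid[best[0]][best[1]]

    # A coarse 3x3 toy "CAM" temperature field (K) with its cell-centre coordinates
    lats = [23.0, 33.0, 43.0]
    lons = [-105.0, -95.0, -85.0]
    grid = [[280.0, 282.0, 284.0],
            [285.0, 287.0, 289.0],
            [290.0, 292.0, 294.0]]

    # SWAT weather stations from the Lake Fork configuration
    for station in [(33.03, -95.92), (33.25, -95.78)]:
        print(station, nearest_cell_value(grid, lats, lons, station))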

  17. Data Flow: Two-Way Coupling
  • In two-way coupling, each model pulls data from the other model using the OpenMI GetValues method; extrapolation is used on the first timestep to break the deadlock between the two model requests
  • OpenMI Input and Output Exchange Items are again used to exchange and translate the data
  (Diagram: the ESMF Component/CAM exchanges ESMF Import and Export States with the CAM/OpenMI wrapper, whose Input and Output Exchange Items connect to SWAT/OpenMI's Output and Input Exchange Items via GetValues and SetInputData)

  18. Model Configurations
  • SWAT
    • Hydrology science information provided by Jon Goodall of the University of South Carolina
    • Lake Fork Watershed (TX); watershed area: 486.830 km²
    • Model run: 2 years, 1977–1978; timestep = 1 day
    • Weather stations: wea62 (33.03 N, 95.92 W), wea43 (33.25 N, 95.78 W)
  • CAM
    • Global atmospheric model
    • Model run: 1 day; timestep: 1800 s
    • Dynamical core: finite volume; horizontal grid: 10x15
    • Export data variables: surface air temperature, precipitation, wind speed, relative humidity, solar radiation

  19. Scaling Analysis
  • 4 areas of increasing size
  • 3 variations of CAM resolution (0.25, 0.5, and 1 degree)
  • CAM is almost always the gating factor in run times
  • Data transfer rates are minimal: 5 data values from CAM to SWAT, 1 data value from SWAT to CAM

  20. Future Tasks
  • Additional SWAT configurations for larger scales
  • Possible integration with other models; currently working on replacement of CAM with WRF
  • Abstraction of data exchange within the ESMF wrapper code to accommodate configuration of different variables for different model implementations

  21. Logical Flow: Startup
  When models are loaded into the Configuration Editor, each model is initialized. For CAM, this involves starting a "New Client" in the Process Controller, which submits a new CAM Component Service using the Job Scheduler.
  (Diagram: numbered steps flow from the Config Editor through the CAM Wrapper and Web Services to the Process Controller and Job Scheduler on the login nodes: Initialize → New Client → New Client → Submit Job → Status = SUBMITTED → Instantiate Job (Comp Svc) on the compute nodes → Status = READY, with statuses recorded in the Registrar)
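  A hedged sketch of what the client-side "New Client" step could look like: the CAM wrapper asks the web-service gateway to create a CAM Component Service. The URL, path, and plain HTTP POST are placeholders only; the real system exposes SOAP services via Axis2, whose operation signatures are not reproduced here.

    # Hedged sketch of the client-side "New Client" step: the CAM wrapper asks the
    # web-service gateway to create a CAM component service. The URL, path, and
    # payload are placeholders; the real system uses SOAP services via Axis2.
    import urllib.request

    GATEWAY = "http://gateway.example.org/esmf-webservices"   # placeholder URL

    def new_client(gateway=GATEWAY):
        """POST a NewClient request and return the client id from the response."""
        req = urllib.request.Request(f"{gateway}/NewClient", data=b"", method="POST")
        with urllib.request.urlopen(req, timeout=10) as resp:
            return resp.read().decode().strip()    # e.g. a client/job identifier

    # After NewClient, the Process Controller submits the CAM Component Service to
    # the Job Scheduler and the Registrar records SUBMITTED, then READY.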

  22. Logical Flow: Status
  The status of the CAM Component Service is checked often throughout the workflow. The status is stored in the Registrar, so it can be retrieved via the Process Controller.
  (Diagram: Get Status flows from the CAM Wrapper through Web Services to the Process Controller, which performs a Get State on the Registrar)
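  A small sketch of how the wrapper could poll that status until the component reaches a desired state. Here `get_status` is a hypothetical callable standing in for the Get Status web-service call; the status strings match those shown on the slides.

    # Sketch of status polling: the wrapper repeatedly asks the process controller
    # (which reads the registrar) until the component reaches the desired state.
    # `get_status` is a hypothetical callable returning strings like those on the
    # slides (SUBMITTED, READY, RUNNING, TIMESTEP_DONE, ...).
    import time

    def wait_for_status(get_status, wanted, poll_seconds=2.0, timeout=600.0):
        waited = 0.0
        while waited < timeout:
            status = get_status()
            if status == wanted:
                return status
            time.sleep(poll_seconds)
            waited += poll_seconds
        raise TimeoutError(f"component never reached status {wanted!r}")

    # Example with a canned status sequence standing in for the remote registrar:
    sequence = iter(["SUBMITTED", "SUBMITTED", "READY"])
    print(wait_for_status(lambda: next(sequence), "READY", poll_seconds=0.0))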

  23. Logical Flow: Initialize
  Before the models can be run, they need to be initialized. For CAM, the Initialize call is sent to the CAM Component Service via the Web Services and the Process Controller. The CAM Component Service updates its status in the Registrar prior to and after initialization.
  (Diagram: Prepare → Initialize → Initialize → Initialize flows from the Config Editor through the CAM Wrapper, Web Services, and Process Controller to the Component Service, which sets Status = INITIALIZING and then Status = INIT_DONE in the Registrar)

  24. Logical Flow: Timestep (Run)
  For each timestep in SWAT, the trigger to run a timestep in CAM is a GetValues request in the OpenMI interface. The Run Timestep request is passed to the CAM Component Service, and the Component Service sets the output data, making it available for later retrieval (see Get Data).
  (Diagram: Get Values → Get Values → Run Timestep → Run Timestep → Run Timestep flows from SWAT/OpenMI through the CAM Wrapper, Web Services, and Process Controller to the Component Service, which sets Status = RUNNING, sets the output data, and then sets Status = TIMESTEP_DONE in the Registrar)
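  A hedged sketch of one GetValues-triggered timestep from the wrapper's side: request the timestep, then wait for the TIMESTEP_DONE status before fetching data. Both `run_timestep` and `get_status` are hypothetical callables standing in for the web-service operations named on the slide.

    # Sketch of one GetValues-triggered CAM timestep from the wrapper's side:
    # request the timestep, then block until the registrar reports TIMESTEP_DONE.
    import time

    def advance_one_timestep(run_timestep, get_status, poll_seconds=1.0):
        run_timestep()                              # Component sets RUNNING
        while get_status() != "TIMESTEP_DONE":      # Component sets output data,
            time.sleep(poll_seconds)                # then flags TIMESTEP_DONE
        # Output data is now available for retrieval via Get Data (next slide).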

  25. Logical Flow: Timestep (Get Data)
  After each timestep run, the output data is fetched from the CAM Component Service via the Web Services and Process Controller. The first time data is fetched, a description of the data structure is requested; this description is then reused for the remaining timesteps.
  (Diagram: Get Data Desc (one time only) and Get Data flow from the CAM Wrapper through Web Services and the Process Controller to the Component Service)
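  A sketch of that one-time data-description fetch followed by per-timestep data fetches, with the description cached after first use. The callables `get_data_desc` and `get_data`, and the canned example values, are illustrative stand-ins for the web-service calls on the slide.

    # Sketch of output retrieval: fetch the data description once, then reuse it
    # to interpret the raw values returned at every timestep.

    class CamOutputReader:
        def __init__(self, get_data_desc, get_data):
            self._get_data_desc = get_data_desc
            self._get_data = get_data
            self._desc = None                       # cached after first use

        def read_timestep(self):
            if self._desc is None:
                self._desc = self._get_data_desc()  # e.g. variable names and order
            raw = self._get_data()
            return dict(zip(self._desc, raw))       # map names onto raw values

    # Example with canned responses in place of the remote service:
    reader = CamOutputReader(
        get_data_desc=lambda: ["temperature", "precipitation"],
        get_data=lambda: [286.4, 1.2],
    )
    print(reader.read_timestep())   # {'temperature': 286.4, 'precipitation': 1.2}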

  26. Logical Flow: Finalize
  After all timesteps have completed, the models need to be finalized. For CAM, the Finalize call is sent to the CAM Component Service via the Web Services and the Process Controller. The CAM Component Service updates its status in the Registrar prior to and after finalization.
  (Diagram: Finish → Finalize → Finalize → Finalize flows from the Config Editor through the CAM Wrapper, Web Services, and Process Controller to the Component Service, which sets Status = FINALIZING and then Status = FINAL_DONE, followed by End Client (next slide))

  27. Logical Flow: End Client
  After the Finalize call, the CAM Component Service is done, so the CAM Wrapper closes it out by calling End Client. This call causes the CAM Component Service to complete its loop and exit, and the Process Controller removes all references to the client.
  (Diagram: End Client → End Client → Kill Server flows from the CAM Wrapper through Web Services to the Process Controller and Component Service, which exits its service loop; Status = COMPLETED is recorded in the Registrar)
