
Introduction to the Earth System Modeling Framework



  1. Introduction to the Earth System Modeling Framework. Climate. Data Assimilation. Weather. Nancy Collins, nancy@ucar.edu. July 22, 2005.

  2. Goals of this Tutorial
• To give future ESMF users an understanding of the background, goals, and scope of the ESMF project
• To review the status of the ESMF software implementation and current application adoption efforts
• To outline the overall design and principles underlying the ESMF software
• To describe the major classes and functions of ESMF in sufficient detail to give future users an understanding of how ESMF could be used in their own codes
• To describe, step by step, how a user code prepares for ESMF, incorporates it, and runs under it
• To identify ESMF resources available to users, such as documentation, mailing lists, and support staff

  3. For More Basic Information … ESMF Website http://www.esmf.ucar.edu See this site for downloads, documentation, references, repositories, meeting schedules, test archives, and just about anything else you need to know about ESMF. References to ESMF source code and documentation in this tutorial correspond to ESMF Version 2.2.0.

  4. 1 BACKGROUND, GOALS, AND SCOPE • Overview • ESMF and the Community • Development Status • Exercises

  5. Motivation and Context
• In climate research and NWP: increased emphasis on detailed representation of individual physical processes, which requires many teams of specialists to contribute components to an overall modeling system
• In computing technology: increasing hardware and software complexity in high-performance computing, as the field shifts toward scalable computing architectures
• In software: development of first-generation frameworks, such as FMS, GEMS, CCA, and WRF, that encourage software reuse and interoperability

  6. What is ESMF?
• ESMF provides tools for turning model codes into components with standard interfaces and standard drivers.
• ESMF provides data structures and common utilities that components use for routine services such as data communication, regridding, time management, and message logging.
ESMF GOALS
• Increase scientific productivity by making model components much easier to build, combine, and exchange, and by enabling modelers to take full advantage of high-end computers.
• Promote new scientific opportunities and services through community building and increased interoperability of codes (with impacts on collaboration, code validation and tuning, teaching, and migration from research to operations).
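
As a concrete illustration of the "standard interfaces" above, the sketch below shows in outline how a user model is wrapped as an ESMF Gridded Component. The module and routine names (atmMod, atm_setservices, atm_init, atm_run, atm_final) are hypothetical, and the call forms approximate the ESMF v2-era Fortran API; treat this as a sketch, not production code.

    module atmMod
      use ESMF_Mod
      implicit none
      public atm_setservices
    contains
      subroutine atm_setservices(comp, rc)
        type(ESMF_GridComp)  :: comp
        integer, intent(out) :: rc
        ! Register the standard Initialize/Run/Finalize entry points that
        ! make this component swappable with any other Gridded Component.
        call ESMF_GridCompSetEntryPoint(comp, ESMF_SETINIT,  atm_init,  ESMF_SINGLEPHASE, rc)
        call ESMF_GridCompSetEntryPoint(comp, ESMF_SETRUN,   atm_run,   ESMF_SINGLEPHASE, rc)
        call ESMF_GridCompSetEntryPoint(comp, ESMF_SETFINAL, atm_final, ESMF_SINGLEPHASE, rc)
      end subroutine

      ! All three methods share the standard interface: data enters and
      ! leaves only through the import and export States.
      subroutine atm_init(comp, importState, exportState, clock, rc)
        type(ESMF_GridComp)  :: comp
        type(ESMF_State)     :: importState, exportState
        type(ESMF_Clock)     :: clock
        integer, intent(out) :: rc
        rc = ESMF_SUCCESS        ! grid, field, and model setup would go here
      end subroutine

      subroutine atm_run(comp, importState, exportState, clock, rc)
        type(ESMF_GridComp)  :: comp
        type(ESMF_State)     :: importState, exportState
        type(ESMF_Clock)     :: clock
        integer, intent(out) :: rc
        rc = ESMF_SUCCESS        ! one timestep (or run segment) of the model
      end subroutine

      subroutine atm_final(comp, importState, exportState, clock, rc)
        type(ESMF_GridComp)  :: comp
        type(ESMF_State)     :: importState, exportState
        type(ESMF_Clock)     :: clock
        integer, intent(out) :: rc
        rc = ESMF_SUCCESS        ! cleanup
      end subroutine
    end module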

  7. Application Example: GEOS-5 AGCM
• Each box is an ESMF component
• Every component has a standard interface so that it is swappable
• Data in and out of components are packaged as State types with user-defined fields
• New components can easily be added to the hierarchical system
• Coupling tools include regridding and redistribution methods

  8. Why Should I Adopt ESMF If I Already Have a Working Model?
• There is an emerging pool of other ESMF-based science components that you will be able to interoperate with to create applications. A framework for interoperability is only as valuable as the set of groups that use it.
• It will reduce the amount of infrastructure code that you need to write and maintain, allowing you to focus more resources on science development.
• ESMF provides solutions to two of the hardest problems in model development: structuring large, multi-component applications so that they are easy to use and extend, and achieving performance portability on a wide variety of parallel architectures.
• It may be better software (better features, performance portability, testing, documentation, and future funding) than the infrastructure software that you are currently using.
• Community development and use means that the ESMF software is widely reviewed and tested, and that you can leverage contributions from other groups.

  9. 1 BACKGROUND, GOALS, AND SCOPE • Overview • ESMF and the Community • Development Status • Exercises

  10. Growing ESMF Customer Base
• New applications coming in during FY05 through the newly funded, ESMF-based DoD Battlespace Environments Institute (BEI): DoD Navy HYCOM ocean, DoD Navy NOGAPS atmosphere, DoD Navy COAMPS coupled atm-ocean, DoD Air Force GAIM ionosphere, DoD Air Force HAF solar wind, DoD Army ERDC WASH123 watershed
• More new applications will begin adopting ESMF during FY06 through the ESMF-based NASA Modeling Analysis and Prediction (MAP) Climate Variability and Change program.
• Further growth of the customer base is anticipated through development of an ESMF-based space weather computational environment.
• Original ESMF applications: NOAA GFDL atmospheres, NOAA GFDL MOM4 ocean, NOAA NCEP atmospheres and analyses, NASA GMAO models and GEOS-5, NASA/COLA Poseidon ocean, LANL POP ocean, NCAR WRF, NCAR CCSM, MITgcm atmosphere and ocean
• Other groups using ESMF: NASA GISS, UCLA, CSU, the NASA Land Information Systems (LIS) project, the NOAA Integrated Dynamics in Earth's Atmosphere (IDEA) project, and more

  11. ESMF Impacts
ESMF impacts a very broad set of research and operational areas that require high-performance, multi-component modeling and data assimilation systems, including:
• Climate prediction
• Weather forecasting
• Seasonal prediction
• Basic Earth and planetary system research at various time and spatial scales
• Emergency response
• Ecosystem modeling
• Battlespace simulation and integrated Earth/space forecasting
• Space weather (through coordination with related space weather frameworks)
• Other HPC domains, through migration of non-domain-specific capabilities from ESMF, facilitated by ESMF interoperability with generic frameworks such as CCA

  12. Open Source Development
• Open source license (GPL)
• Open source environment (SourceForge)
• Open repositories: web-browsable CVS repositories accessible from the ESMF website, for source code and for contributions (currently porting contributions and performance testing)
• Open development priorities and schedule: priorities set based on user meetings, telecons, and mailing list discussions; web-browsable task lists
• Open testing: 1000+ tests are bundled with the ESMF distribution and can be run by users
• Open port status: results of nightly tests on many platforms are web-browsable
• Open metrics: test coverage, lines of code, and requirements status are updated regularly and are web-browsable

  13. Open Source Constraints
• ESMF does not allow unmoderated check-ins to its main source CVS repository (though there is minimal check-in oversight for the contributions repository)
• ESMF has a co-located, line-managed Core Team whose members are dedicated to framework implementation and support; it does not rely on volunteer labor
• ESMF actively sets priorities based on user needs and feedback
• ESMF requires that contributions follow project conventions and standards for code and documentation
• ESMF schedules regular releases and meetings
These constraints are necessary for development to proceed at the pace desired by sponsors and users, and to provide the level of quality and customer support required for codes in this domain.

  14. 1 BACKGROUND, GOALS, AND SCOPE • Overview • ESMF and the Community • Development Status • Exercises

  15. Latest Information
For scheduling and release information, see http://www.esmf.ucar.edu > Development. This includes the latest releases, known bugs, and supported platforms. Task lists, bug reports, and support requests are tracked on the ESMF SourceForge site: http://sourceforge.net/projects/esmf

  16. ESMF Development Status
• Overall architecture is well defined and well accepted
• Components and low-level communications are stable
• Logically rectangular grids with regular and arbitrary distributions are implemented
• On-line parallel regridding (bilinear, first-order conservative) is completed and optimized
• Other parallel methods (e.g., halo and redistribution) and low-level comms are implemented
• Utilities such as the time manager, logging, and configuration manager are usable and gaining features
• Virtual machine with an interface to shared/distributed memory is implemented, with hooks for load balancing

  17. ESMF Platform Support
• IBM AIX (32- and 64-bit addressing)
• SGI IRIX64 (32- and 64-bit addressing)
• SGI Altix (64-bit addressing)
• Cray X1 (64-bit addressing)
• Compaq OSF1 (64-bit addressing)
• Linux Intel (32- and 64-bit addressing, with mpich and lam)
• Linux PGI (32- and 64-bit addressing, with mpich)
• Linux NAG (32-bit addressing, with mpich)
• Linux Absoft (32-bit addressing, with mpich)
• Linux Lahey (32-bit addressing, with mpich)
• Mac OS X with xlf (32-bit addressing, with lam)
• Mac OS X with Absoft (32-bit addressing, with lam)
• Mac OS X with NAG (32-bit addressing, with lam)
• User-contributed g95 support

  18. ESMF Distribution Summary
• Fortran interfaces and complete documentation
• Many C++ interfaces, no manuals yet
• Serial or parallel execution (mpiuni stub library)
• Sequential or concurrent execution
• Single executable (SPMD) support

  19. Some Metrics …
• The test suite currently consists of ~1200 unit tests, ~15 system tests, and ~35 examples, and runs every night on ~12 platforms
• ~289 ESMF interfaces implemented; ~276 (~95%) fully or partially tested
• ~160,000 source lines of code (SLOC)
• ~1000 downloads

  20. ESMF Near-Term Priorities, FY05/06
• Reworked design and implementation of array/grid/field interfaces and array-level communications
• Optimized regridding and low-level communications
• Grid masks and merges
• Unstructured grids
• Read/write of interpolation weights and grid specifications

  21. Planned ESMF Extensions
• Looser couplings: support for multiple-executable and Grid-enabled versions of ESMF
• Support for representing, partitioning, communicating with, and regridding unstructured and semi-structured grids
• Support for advanced I/O, including asynchronous I/O, checkpoint/restart, and multiple archival formats (e.g., NetCDF, HDF5, binary)
• Advanced support for data assimilation systems, including data structures for observational data and adjoints for ESMF methods
• Support for nested, moving, and adaptive grids
• Support for regridding in three dimensions and between different coordinate systems
• Advanced optimization and load balancing

  22. 1 BACKGROUND, GOALS, AND SCOPE • Overview • ESMF and the Community • Development Status • Exercises

  23. Exercises
• Sketch a diagram of the major components in your application and how they are connected.
• Introduction of tutorial participants.

  24. Application Diagram

  25. 3 DESIGN AND PRINCIPLES OF ESMF • Computational Characteristics of Weather and Climate • Design Strategies • Parallel Computing Definitions • Framework-Wide Behavior • Class Structure • Exercises

  26. Computational Characteristics of Weather/Climate Platforms
• Mix of global transforms and local communications
• Load balancing for the diurnal cycle and event (e.g., storm) tracking
• Applications typically require 10s of GFLOPS and 100s of PEs, but can go to 10s of TFLOPS and 1000s of PEs
• Required Unix/Linux platforms span laptop to Earth Simulator
• Multi-component applications: component hierarchies, ensembles, and exchanges; components in multiple contexts
• Data and grid transformations between components
• Applications may be MPMD/SPMD, concurrent/sequential, or combinations
• Parallelization via MPI, OpenMP, shmem, or combinations
• Large applications (typically 100,000+ lines of source code)
[Diagram: a Seasonal Forecast application hierarchy: a coupler linking ocean, sea ice, and assim_atm; assim_atm contains assim and atm; atm contains land, physics, and dycore]

  27. 3 DESIGN AND PRINCIPLES OF ESMF • Computational Characteristics of Weather and Climate • Design Strategies • Parallel Computing Definitions • Framework-Wide Behavior • Class Structure • Exercises

  28. Design Strategy: Hierarchical Applications
Since each ESMF application is also a Gridded Component, entire ESMF applications can be nested within larger applications. This strategy can be used to systematically compose very large, multi-component codes, as sketched below.
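
A rough sketch of the nesting idea, assuming hypothetical child modules physicsMod and dycoreMod: a parent Gridded Component creates and initializes its own children inside its Initialize method, just as a top-level driver created the parent. Call forms approximate the v2-era API.

    subroutine atm_init(comp, importState, exportState, clock, rc)
      use ESMF_Mod
      use physicsMod, only: physics_setservices   ! hypothetical child components
      use dycoreMod,  only: dycore_setservices
      implicit none
      type(ESMF_GridComp)  :: comp
      type(ESMF_State)     :: importState, exportState
      type(ESMF_Clock)     :: clock
      integer, intent(out) :: rc
      type(ESMF_GridComp), save :: physComp, dycoreComp   ! children persist across calls

      ! The atm component composes its own children; an entire ESMF
      ! application could be nested inside a larger one the same way.
      physComp   = ESMF_GridCompCreate(name="physics", rc=rc)
      dycoreComp = ESMF_GridCompCreate(name="dycore",  rc=rc)
      call ESMF_GridCompSetServices(physComp,   physics_setservices, rc)
      call ESMF_GridCompSetServices(dycoreComp, dycore_setservices,  rc)
      call ESMF_GridCompInitialize(physComp,   importState, exportState, clock, rc=rc)
      call ESMF_GridCompInitialize(dycoreComp, importState, exportState, clock, rc=rc)
    end subroutine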

  29. Design Strategy: Modularity
Gridded Components don't have access to the internals of other Gridded Components, and don't store any coupling information. Gridded Components pass their States to other components through their argument list. Since components are not hard-wired into particular configurations and do not carry coupling information, they can be used more easily in multiple contexts.
[Diagram: the same atm_comp used unchanged in an NWP application, in seasonal prediction, and standalone for basic research]

  30. Design Strategy: Flexibility
• Users write their own drivers as well as their own Gridded Components and Coupler Components
• Users decide on their own control flow, e.g., pairwise coupling or hub-and-spokes coupling (see the driver sketch below)
[Diagram: pairwise coupling vs. hub-and-spokes coupling]
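
A sketch of a user-written driver implementing one possible control flow, a simple pairwise sequential coupling loop. The component modules (atmMod, ocnMod, cplMod) and their SetServices routines are hypothetical, the clock and State setup is elided, and call forms approximate the v2-era API.

    program user_driver
      use ESMF_Mod
      use atmMod, only: atm_setservices    ! hypothetical user components
      use ocnMod, only: ocn_setservices
      use cplMod, only: cpl_setservices
      implicit none
      type(ESMF_GridComp) :: atm, ocn
      type(ESMF_CplComp)  :: cpl
      type(ESMF_State)    :: atmImp, atmExp, ocnImp, ocnExp
      type(ESMF_Clock)    :: clock
      integer :: rc

      call ESMF_Initialize(rc=rc)

      atm = ESMF_GridCompCreate(name="atm", rc=rc)
      ocn = ESMF_GridCompCreate(name="ocn", rc=rc)
      cpl = ESMF_CplCompCreate(name="atm2ocn", rc=rc)
      call ESMF_GridCompSetServices(atm, atm_setservices, rc)
      call ESMF_GridCompSetServices(ocn, ocn_setservices, rc)
      call ESMF_CplCompSetServices(cpl, cpl_setservices, rc)

      ! ... create the clock and the four States, then call the
      ! components' Initialize methods (elided) ...

      ! The control flow below is the user's choice: run atm, transform
      ! its exports into ocn's imports via the coupler, run ocn, advance.
      do while (.not. ESMF_ClockIsStopTime(clock, rc))
        call ESMF_GridCompRun(atm, atmImp, atmExp, clock, rc=rc)
        call ESMF_CplCompRun(cpl, atmExp, ocnImp, clock, rc=rc)
        call ESMF_GridCompRun(ocn, ocnImp, ocnExp, clock, rc=rc)
        call ESMF_ClockAdvance(clock, rc=rc)
      end do

      call ESMF_Finalize(rc=rc)
    end program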

  31. Design Strategy: Communication Within Components
All communication in ESMF is handled within components. This means that if an atmosphere is coupled to an ocean, the Coupler Component is defined on both the atmosphere and ocean processors.
[Diagram: atm2ocn_coupler spanning the processors of both atm_comp and ocn_comp]

  32. Design Strategy: Uniform Communication API
• The same programming interface is used for shared memory, distributed memory, and combinations thereof. This buffers the user from variations and changes in the underlying platforms.
• The idea is to create interfaces that are performance-sensitive to machine architectures without being discouragingly complicated.
• Users can combine their own OpenMP and MPI directives with ESMF communications.
ESMF sets up communications in a way that is sensitive to the computing platform and the application structure.

  33. 3 DESIGN AND PRINCIPLES OF ESMF • Computational Characteristics of Weather and Climate • Design Strategies • Parallel Computing Definitions • Framework-Wide Behavior • Class Structure • Exercises

  34. Elements of Parallelism: Serial vs. Parallel
• Computing platforms may possess multiple processors, some or all of which may share the same memory pools
• There can be multiple threads of execution, and multiple threads of execution per processor
• Software like MPI and OpenMP is commonly used for parallelization
• Programs can run serially, with one thread of execution, or in parallel on multiple threads of execution
• Because of these and other complexities, terms are needed for units of parallel execution

  35. Elements of Parallelism: PETs
Persistent Execution Thread (PET)
• A path for executing an instruction sequence
• For many applications, a PET can be thought of as a processor
• Sets of PETs are represented by the Virtual Machine (VM) class
• Serial applications run on one PET; parallel applications run on multiple PETs (see the sketch below)
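
A small sketch of querying the VM for this PET's identity and the total PET count, roughly the ESMF analogue of MPI rank and size; call forms approximate the v2-era API.

    program pet_query
      use ESMF_Mod
      implicit none
      type(ESMF_VM) :: vm
      integer :: localPet, petCount, rc

      call ESMF_Initialize(rc=rc)
      call ESMF_VMGetGlobal(vm, rc=rc)    ! the VM spanning the whole application
      call ESMF_VMGet(vm, localPet=localPet, petCount=petCount, rc=rc)
      print *, "running on PET", localPet, "of", petCount
      call ESMF_Finalize(rc=rc)
    end program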

  36. Elements of Parallelism: Sequential vs. Concurrent In sequential mode components run one after the other on the same set of PETs.

  37. Elements of Parallelism: Sequential vs. Concurrent
In concurrent mode components run at the same time on different sets of PETs, as in the sketch below.
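
A sketch of how concurrent execution is typically set up, assuming the v2-era petList argument to GridCompCreate: the two components are given disjoint PET sets, so they can run at the same time. Omitting petList gives a component all of the parent's PETs, which is the sequential mode of the previous slide.

    program concurrent_setup
      use ESMF_Mod
      implicit none
      type(ESMF_GridComp) :: atm, ocn
      integer :: rc

      call ESMF_Initialize(rc=rc)
      ! Disjoint PET lists: atm on PETs 0-3, ocn on PETs 4-7, so the two
      ! components can execute concurrently in an 8-PET application.
      atm = ESMF_GridCompCreate(name="atm", petList=(/0,1,2,3/), rc=rc)
      ocn = ESMF_GridCompCreate(name="ocn", petList=(/4,5,6,7/), rc=rc)
      call ESMF_Finalize(rc=rc)
    end program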

  38. Elements of Parallelism: DEs Decomposition Element (DE) • In ESMF a data decomposition is represented as a set of Decomposition Elements (DEs). • Sets of DEs are represented by the DELayout class. • DELayouts define how data is mapped to PETs. • In many applications there is one DE per PET.

  39. Elements of Parallelism: DEs
More complex DELayouts:
• Users can define more than one DE per PET for cache blocking and chunking
• DELayouts can define a topology of decomposition (i.e., decompose in both x and y), as in the sketch below
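
A sketch of a two-dimensional DELayout, assuming the v2-era DELayoutCreate interface and its deCountList argument: six DEs arranged 2 x 3, decomposing in both x and y. On six PETs this is one DE per PET; on fewer PETs some PETs hold several DEs (e.g., for cache blocking).

    program delayout_2d
      use ESMF_Mod
      implicit none
      type(ESMF_VM)       :: vm
      type(ESMF_DELayout) :: layout
      integer :: rc

      call ESMF_Initialize(rc=rc)
      call ESMF_VMGetGlobal(vm, rc=rc)
      ! 2 x 3 topology of Decomposition Elements mapped onto the VM's PETs.
      layout = ESMF_DELayoutCreate(vm, deCountList=(/2,3/), rc=rc)
      call ESMF_Finalize(rc=rc)
    end program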

  40. Modes of Parallelism: Single vs. Multiple Executable
• In Single Program Multiple Datastream (SPMD) mode the same program runs across all PETs in the application; components may run sequentially or concurrently.
• In Multiple Program Multiple Datastream (MPMD) mode the application consists of separate programs launched as separate executables; components may run concurrently or sequentially, but in this mode they almost always run concurrently.

  41. 3 DESIGN AND PRINCIPLES OF ESMF • Computational Characteristics of Weather and Climate • Design Strategies • Parallel Computing Definitions • Framework-Wide Behavior • Class Structure • Exercises

  42. Framework-Wide Behavior
ESMF has a set of interfaces and behaviors that hold across the entire framework, such as the return-code convention sketched below. This consistency helps make the framework easier to learn and understand. For more information, see Sections 6-8 in the Reference Manual.
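
One framework-wide behavior worth calling out is error handling: nearly every ESMF call accepts an integer return code that can be compared against ESMF_SUCCESS. A minimal sketch:

    program rc_convention
      use ESMF_Mod
      implicit none
      integer :: rc

      call ESMF_Initialize(rc=rc)
      ! The same check works after any ESMF call that returns rc.
      if (rc /= ESMF_SUCCESS) then
        print *, "ESMF_Initialize failed, rc =", rc
        stop
      end if
      call ESMF_Finalize(rc=rc)
    end program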

  43. Classes and Objects in ESMF
• The ESMF Application Programming Interface (API) is based on the object-oriented programming notion of a class. A class is a software construct used for grouping a set of related variables together with the subroutines and functions that operate on them. We use classes in ESMF because they help to organize the code, and often make it easier to maintain and understand.
• A particular instance of a class is called an object. For example, Field is an ESMF class; an actual Field called temperature is an object (see the sketch below).
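
A sketch of the class/object distinction in code, assuming a grid built elsewhere by the caller; the ArraySpecSet and FieldCreate argument lists here are abbreviated approximations of the v2-era interfaces. Field is the class; temperature and pressure are two objects created from it.

    subroutine make_fields(grid, rc)
      use ESMF_Mod
      implicit none
      type(ESMF_Grid), intent(in) :: grid   ! assumed created by the caller
      integer, intent(out)        :: rc
      type(ESMF_ArraySpec) :: arrayspec
      type(ESMF_Field)     :: temperature, pressure

      ! Describe the data: 2D double-precision arrays.
      call ESMF_ArraySpecSet(arrayspec, rank=2, type=ESMF_DATA_REAL, kind=ESMF_R8, rc=rc)
      ! Two objects (instances) of the single Field class:
      temperature = ESMF_FieldCreate(grid, arrayspec, name="temperature", rc=rc)
      pressure    = ESMF_FieldCreate(grid, arrayspec, name="pressure",    rc=rc)
    end subroutine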

  44. Classes and Fortran
• In Fortran, the variables associated with a class are stored in a derived type. For example, an ESMF_Field derived type stores the data array, grid information, and metadata associated with a physical field.
• The derived type for each class is stored in a Fortran module, and the operations associated with each class are defined as module procedures. We use the Fortran features of generic functions and optional arguments extensively to simplify our interfaces. The sketch below illustrates the pattern.
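
The pattern itself can be shown in plain Fortran (illustrative only, not actual ESMF source): a derived type holds the class's variables, the module holds the procedures that operate on it, and a generic interface plus optional arguments keep the public API small.

    module field_class
      implicit none

      type my_field                      ! the "class's" variables
        character(len=32) :: name
        real, pointer     :: data(:,:)
      end type

      interface my_field_create          ! generic function: one public name,
        module procedure create_2d       ! several specific implementations
      end interface

    contains
      function create_2d(name, nx, ny, initval) result(f)
        character(len=*), intent(in) :: name
        integer, intent(in)          :: nx, ny
        real, intent(in), optional   :: initval   ! optional argument
        type(my_field) :: f
        f%name = name
        allocate(f%data(nx, ny))
        f%data = 0.0
        if (present(initval)) f%data = initval
      end function
    end module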

  45. 3 DESIGN AND PRINCIPLES OF ESMF • Computational Characteristics of Weather and Climate • Design Strategies • Parallel Computing Definitions • Framework-Wide Behavior • Class Structure • Exercises

  46. ESMF Class Structure
Superstructure:
• GridComp: land, ocean, atm, … models
• CplComp: transfers between GridComps
• State: data imported or exported
Infrastructure, data classes:
• Bundle: collection of Fields
• Field: physical field, e.g., pressure
• Grid: LogRect, Unstruct, etc. (PhysGrid: math description; DistGrid: grid decomposition)
• Array: hybrid F90/C++ arrays
Infrastructure, communication classes:
• Regrid: computes interp weights
• Route: stores comm paths
• DELayout: maps decomposition to PETs
Infrastructure, utilities (C++):
• Virtual Machine, TimeMgr, LogErr, IO, Config/Attr, Base, etc.

  47. 3 DESIGN AND PRINCIPLES OF ESMF • Computational Characteristics of Weather and Climate • Design Strategies • Parallel Computing Definitions • Framework-Wide Behavior • Class Structure • Exercises

  48. Exercises
Following instructions given during class:
• ssh to log in to the Linux cluster.
• Find the ESMF distribution directory.
• See which ESMF environment variables are set.
• Browse the source tree.

  49. 4 CLASSES AND FUNCTIONS • ESMF Superstructure Classes • ESMF Infrastructure Classes: Data Structures • ESMF Infrastructure Classes: Utilities • Exercises

  50. ESMF Class Structure
Superstructure:
• GridComp: land, ocean, atm, … models
• CplComp: transfers between GridComps
• State: data imported or exported
Infrastructure, data classes:
• Bundle: collection of Fields
• Field: physical field, e.g., pressure
• Grid: LogRect, Unstruct, etc. (PhysGrid: math description; DistGrid: grid decomposition)
• Array: hybrid F90/C++ arrays
Infrastructure, communication classes:
• Regrid: computes interp weights
• Route: stores comm paths
• DELayout: maps decomposition to PETs
Infrastructure, utilities (C++):
• Virtual Machine, TimeMgr, LogErr, IO, Config/Attr, Base, etc.
