Earth System Modeling Framework Overview
GFDL FMS Suite, MITgcm, NASA GSFC PSAS, NCEP Forecast, NSIPP Seasonal Forecast, NCAR/LANL CCSM
Chris Hill, MIT (cnh@plume.mit.edu)
TOMS, 2003, Boulder
Talk Outline • Project Overview • Architecture and Current Status • Superstructure layer • Design • Adoption • Infrastructure layer • What is it for • What it contains • Next steps…. • Open Discussion
Technological Trends
In climate research and NWP... increased emphasis on detailed representation of individual physical processes, which requires many teams of specialists to contribute components to an overall modeling system.
In computing technology... increasing hardware and software complexity in high-performance computing as we shift toward the use of scalable computing architectures.
[Figure: time-mean air-sea CO2 flux; MITgcm constrained by observations + ocean carbon (MIT, Scripps, JPL).]
Community Response
• Modernization of modeling software
  – Abstraction of underlying hardware to provide a uniform programming model that runs efficiently across vector, single-microprocessor, and multiple-microprocessor architectures.
  – Distributed software development model characterized by many contributing authors; use of high-level language features for abstraction, to facilitate the development process and software sharing.
  – Modular design for interchangeable dynamical cores and physical parameterizations; development of community-wide standards for components.
• Development of prototype infrastructures
  – GFDL (FMS), NASA/GSFC (GEMS), NCAR/NCEP (WRF), NCAR/DOE (MCT), MIT (Wrapper), ROMS/TOMS, etc. ESMF aims to unify and extend these efforts.
ESMF Goals and Products
STATED GOAL: To increase software reuse, interoperability, ease of use, and performance portability in climate, weather, and data assimilation applications; this implies unified "standards".
PRODUCTS:
• Coupling superstructure and utility infrastructure software
• Synthetic code suite for validation and demonstration
• Set of 15 ESMF-compliant applications (including CCSM, WRF, and GFDL models; MITgcm; NCEP and NASA data assimilation systems)
• Set of 8 interoperability experiments
Talk Outline • Project Overview • Architecture and Current Status • Component based approach • Superstructure layer • Design • Adoption • Infrastructure layer • What is it for • What it contains • Next steps…. • Open Discussion
ESMF overall structure • ESMF uses a component-based approach • The framework provides an upper "superstructure" layer and a lower "infrastructure" layer • User-written code (simulation algorithms, DA algorithms …) is sandwiched between the two layers. • User code provides standard interfaces that are called from the superstructure layer • User code uses facilities in the infrastructure for parallelism, I/O, and interpolation
ESMF Programming Model
1. ESMF provides an environment for assembling components (superstructure layer: Application Components, Gridded Components, Coupler Components).
2. ESMF provides a toolkit (infrastructure layer) that components use to ensure interoperability and abstract common services:
• Component: run(), checkpoint()
• Grid: regrid(), transpose() + metrics
• Field: halo(), import(), export() + I/O
• Layout, PEList, Machine Model
3. Gridded components, coupler components, and application components are user written.
Superstructure Layer: Assembles and connects components
Since each ESMF application is also a component, entire ESMF applications may be treated as Gridded Components and nested within larger applications.
Example: an atmospheric application (atm_phys and atm_dyn joined by phys2dyn_coupler inside atm_comp) coupled to ocn_comp through ocn2atm_coupler, all nested within a larger climate_comp application.
[Diagram: the component hierarchy laid out across the PEs.]
Superstructure Layer: Controlling subcomponents
Components must provide a single externally visible entry point, which registers the other entry points with the Framework. Components can:
• Register one or more Initialization, Run, Finalize, and Checkpoint entry points.
• Register a private data block that can contain all data associated with this instantiation of the Component; particularly useful when running ensembles.
[Diagram: a higher-level component reaches the public cmp_register() routine through ESMF Framework Services; cmp_init(), cmp_run(), and cmp_final() remain private subroutines.]
A minimal registration sketch follows below.
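To make the registration pattern concrete, here is a minimal sketch. The derived-type name, the ESMF_GridCompSetEntryPoint-style registration call, and the ESMF_SETINIT/ESMF_SETRUN/ESMF_SETFINAL flags are assumptions added for illustration, not taken from these slides; ocn_init, ocn_run, and ocn_final stand for the component's private routines.

subroutine ocn_register(comp, rc)
  use ESMF_Mod                               ! module name assumed
  type(ESMF_GridComp), intent(inout) :: comp
  integer,             intent(out)   :: rc

  ! Public entry point: hand the Framework pointers to the private routines.
  call ESMF_GridCompSetEntryPoint(comp, ESMF_SETINIT,  ocn_init,  rc=rc)
  call ESMF_GridCompSetEntryPoint(comp, ESMF_SETRUN,   ocn_run,   rc=rc)
  call ESMF_GridCompSetEntryPoint(comp, ESMF_SETFINAL, ocn_final, rc=rc)
end subroutine ocn_register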
Superstructure Layer: Passing data between components I
Gridded Components do not have access to the internals of other Gridded Components. They have two options for exchanging data with other Components; the first is to receive Import and Export States as arguments. States contain flags for "is required", "is valid", "is ready", etc.

subroutine ocn_run(comp, &
   ImportState, ExportState, Clock, rc)

subroutine atm_run(comp, &
   ImportState, ExportState, Clock, rc)

[Diagram: a coupler connects ocn_component and atm_component through their States.]
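To show the resulting control flow for this first option, a driver might sequence the calls roughly as follows. This is a sketch only: the run-call name and argument order are assumptions based on the signatures above, and cpl_comp, ocn_import, ocn_export, atm_import, and atm_export are illustrative names.

! Sketch of the control flow, not the exact ESMF interface
call ESMF_CompRun(ocn_comp, ocn_import, ocn_export, clock, rc=rc)   ! ocean fills its Export State
call ESMF_CompRun(cpl_comp, ocn_export, atm_import, clock, rc=rc)   ! coupler maps ocean export into atm import
call ESMF_CompRun(atm_comp, atm_import, atm_export, clock, rc=rc)   ! atmosphere consumes its Import State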
Superstructure Layer: Passing data between components II
Gridded Components using Transforms do not have to return control to a higher-level component to send or receive State data from another component. They can receive function pointers, which are methods that can be called on the States.

call ESMF_CompRun(atm, xform)
call ESMF_CompRun(ocn, xform)

! inside ocn_component:
call ESMF_StateXform(ex_state, xform)
! inside atm_component:
call ESMF_StateXform(xform, im_state)

[Diagram: a transform created by the coupler is passed to both ocn_component and atm_component.]
Superstructure Layer: Parallel Communication
All inter-component communication within ESMF is local. This means Coupler Components must be defined on the union of the PEs of all the components that they couple.
In this example, in order to send data from the ocean component to the atmosphere, the Coupler mediates the send.
[Diagram: climate_comp containing ocn_comp, atm_comp (with atm_phys, atm_dyn, and phys2dyn_coupler), and atm2ocn_coupler spanning the PEs of both.]
Superstructure Layer: Summary
Provides a means to connect components together
• Components can be connected in a hierarchy
Provides a general-purpose mechanism for passing data between components
• Data is self-describing
Provides a general-purpose mechanism for "parent" components to control "child" components
• Stepping forward, stepping backward, initializing state, etc.
Infrastructure Layer
• A standard software platform for enabling interoperability (developing couplers, ensuring performance portability).
• A set of reusable software for Earth science applications; streamlined development for researchers.
[Diagram: components such as NCAR Atmosphere, GFDL Ocean, NSIPP Land, and "My Sea Ice" assembled on the common infrastructure.]
Infrastructure Layer Scope
Support for:
• Physical Grids
• Regridding
• Decomposition/composition
• Communication
• Calendar and Time
• I/O
• Logging and Profiling
[Diagram: User Code sandwiched between the ESMF Superstructure and the ESMF Infrastructure.]
Field and Grid

grid    = ESMF_GridCreate(…, layout, …)
field_u = ESMF_FieldCreate(grid, array)

• Creates a field distributed over a set of decomposition elements (DEs).
• Domain decomposition is determined by the DELayout, layout.
• Each object (grid and field_u) has an internal representation (an ESMF_Field holds metadata, an ESMF_Grid, and ESMF_Array information).
• Other parts of the infrastructure layer use the internal representation, e.g.
  – Regrid(): interpolation/extrapolation + redistribution over DEs
  – Redistribution(): general data rearrangement over DEs
  – Halo(): specialized redistribution
A fuller create sequence is sketched below.
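A slightly fuller sketch of the create sequence, showing where the layout fits in. The ESMF_DELayoutCreate call is an assumption added for illustration; the grid, field, and halo calls mirror the fragments on this and later slides.

layout  = ESMF_DELayoutCreate(…)          ! decomposition over a set of DEs; call name assumed
grid    = ESMF_GridCreate(…, layout, …)   ! physical grid + distribution over that layout
field_u = ESMF_FieldCreate(grid, array)   ! field data attached to the grid
call ESMF_FieldHalo(field_u, rc)          ! later operations use the internal representation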
Regrid
• A function mapping a field's array to a different physical and distributed grid.
• RegridCreate() creates a Regrid structure to be used/re-used:

regrid = ESMF_RegridCreate(src_field, dst_field, method, [name], [rc])

• Source and destination fields can be empty of field data (RegridCreate() uses grid metrics).
• PhysGrid and DistGrid info are used for setting up the regrid.
• The resulting regrid can be used for other fields sharing the same Grid.
• Method specifies the interpolation algorithm, for example bilinear, b-spline, etc.
Regrid Interface (cont)
• RegridRun() performs the actual regridding:

call ESMF_RegridRun(src_field, dst_field, regrid, [rc])

• Communication and interpolation are handled transparently.
• RegridDestroy() frees up memory:

call ESMF_RegridDestroy(regrid, [rc])

[Diagram: src_field interpolated onto dst_field.]
A combined lifecycle sketch follows below.
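Sequencing the calls from these two Regrid slides, a typical lifecycle looks roughly like this; the reuse on a second field pair assumes both pairs share the same source and destination Grids, as noted above.

regrid = ESMF_RegridCreate(src_field, dst_field, method)   ! precompute using grid metrics
call ESMF_RegridRun(src_field,  dst_field,  regrid)        ! apply each time step
call ESMF_RegridRun(src_field2, dst_field2, regrid)        ! reuse for other fields on the same Grids
call ESMF_RegridDestroy(regrid)                            ! free the precomputed structures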
Redistribution
• No interpolation or extrapolation.
• Maps between distributed grids; the field's array and physical grid stay the same.
• Example: layout() creates two distributed grids, one decomposed in X and one decomposed in Y.
• The Redistribution() function maps array data between the distributions (a transpose/corner turn).
• Communication is handled transparently.
[Diagram: src_field decomposed in one direction redistributed into dst_field decomposed in the other.]
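No redistribution call appears on this slide, so the lines below are purely illustrative: hypothetical field, grid, and call names that show the X-to-Y corner turn described above.

field_x = ESMF_FieldCreate(grid_x, array_x)        ! decomposed in X (hypothetical names)
field_y = ESMF_FieldCreate(grid_y, array_y)        ! same physical grid, decomposed in Y
call ESMF_FieldRedistribute(field_x, field_y, rc)  ! hypothetical call: transpose only, no interpolation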
Halo
• Fields have distributed index spaces consisting of exclusive {E}, compute {C}, and local {L} regions, with {E} contained in {C} contained in {L}.
• Halo() fills points not in {E} or {C} from remote {E}, e.g.

call ESMF_FieldHalo(field_foo, status)

[Diagram: nested {E}, {C}, {L} regions on neighboring DEs (DE3, DE4).]
Bundle and Location Stream • ESMF_Bundle • collection of fields on the same grid • ESMF_LocationStream • Like a field but….. • unstructured index space with an associated physical grid space • useful for observations e.g. radiosonde, floats • Functions for create(), regrid(), redistribute(), halo() etc…
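An indicative sketch only: the bundle create call and the halo variant below are assumed names, used just to show that a bundle groups fields on one grid so that collective operations can be applied to all members at once.

bundle = ESMF_BundleCreate((/ field_u, field_v, field_t /), rc=rc)   ! assumed signature
call ESMF_BundleHalo(bundle, rc)                                     ! assumed name: halo every member field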
ESMF Infrastructure Utilities
• Clock, Alarm, Calendar: ensure consistent time between components
• I/O: field-level I/O in standard forms (netCDF, binary, HDF, GRIB, BUFR)
• Logging, Profiling: consistent monitoring and messaging
• Attribute: consistent parameter handling
• Machine model and comms: hardware and system software hiding; platform customizable
Time
• Standard type for any component
• Calendar (support for a range of calendars)

call ESMF_CalendarInit(gregorianCalendar, ESMF_CAL_GREGORIAN, rc)

! initialize stop time to 13 May 2003, 2:00 pm
call ESMF_TimeInit(inject_stop_time, &
     YR=int(2003,kind=ESMF_IKIND_I8), &
     MM=off_month, DD=off_day, H=off_hour, M=off_min, &
     S=int(0,kind=ESMF_IKIND_I8), &
     cal=gregorianCalendar, rc=rc)

do while (currTime .le. inject_stop_time)
   :
   call ESMF_ClockAdvance(localclock, rc=rc)
   call ESMF_ClockGetCurrTime(localclock, currTime, rc)
enddo
Time Representations

type(ESMF_Calendar) :: calendar1
call ESMF_CalendarInit(calendar1, ESMF_CAL_GREGORIAN, rc)

- ESMF_CAL_GREGORIAN (3/1/-4800 to 10/29/292,277,019,914)
- ESMF_CAL_JULIAN (+/- 106,751,991,167,300 days)
- ESMF_CAL_NOLEAP
- ESMF_CAL_360DAY
- ESMF_CAL_USER
I/O
• Field-level binary, netCDF, HDF, GRIB, BUFR, extensible…
• Currently I/O is piped through 1 PE

! gather the distributed field onto one array, then write from DE 0
call ESMF_FieldAllGather(field_u, outarray, status)
if (de_id .eq. 0) then
   write(filename, 20) "U_velocity", file_no   ! format label 20 defined elsewhere
   call ESMF_ArrayWrite(outarray, filename=filename, rc=status)
endif
call ESMF_ArrayDestroy(outarray, status)
Internal Classes
• Machine model
  – Captures system attributes: CPU, memory, connectivity graph
  – Useful for defining decomposition, load balance, performance predictions
• Comms
  – Communication driver; allows bindings to MPI, shared memory, vendor system libraries
Comms Performance Test
[Figure: choosing the right mix of communication mechanisms (green curve) on a Compaq system gives roughly 2x realized bandwidth in the large-message limit.]
Talk Outline • Project Overview • Architecture and Current Status • Superstructure layer • Design • Adoption • Infrastructure layer • What is it for • What it contains • Next steps…. • Open Discussion
May 2003 Release The focus of the May 2003 ESMF release was on developing sufficient infrastructure and superstructure to achieve the initial set of interoperability experiments. These are: • FMS B-grid atmosphere coupled to MITgcm ocean • CAM atmosphere coupled to NCEP analysis • NSIPP atmosphere coupled to DAO analysis
Regrid Next Steps
• Support for all ESMF Grids
• Support for regridding methods including:
  – Bilinear
  – Bicubic
  – 1st-order conservative
  – 2nd-order conservative
  – Rasterized conservative
  – Nearest-neighbor distance-weighted average
  – Spectral transforms
  – 1-d interpolations (splines)
  – Index-space (shifts, stencils)
  – Adjoints of many of the above
Distributed Grid
• Regular 2-d decomposition already supported
• Next steps:
  – Generalized 1-d decomposition
  – Extended support for 2-d and quasi-regular decomposition
  – Spectral grid decompositions
  – Composite grids
Physical Grid Next Steps
• Larger set of metrics and grids
• High-level routines for rapid definition of common grids
I/O Next Steps
• Broaden format set: binary, netCDF, HDF, GRIB, BUFR
• Improve parallelization
Time/Logging/Profiling Next Steps
• Full support for alarms
• Broader functionality
Summary • ESMF Current Status • Comprehensive class structure available in version 1.0 • Over the coming year, significant extension of functionality will take place. • Feedback and comments on version 1.0 are welcome: http://www.esmf.ucar.edu
Talk Outline • Project Overview • Architecture and Current Status • Superstructure layer • Design • Adoption • Infrastructure layer • What is it for • What it contains • Next steps…. • Open Discussion
Last but not least: an interesting potential benefit of "component"-based approaches Component-based approaches could provide a foundation for driving high-end applications from "desktop" productivity environments. For example, driving a parallel ensemble GCM run from Matlab becomes conceivable! To learn more, visit us at MIT!
Time Manager: ESMF Infrastructure Utility Detailed Example
Earl Schwab, ESMF Core Team, NCAR
What is Time Manager? • Clock for time simulation • Time representation • Time calculator • Time comparisons • Time queries • F90 API, C++ implementation