
A High-Performance Framework for Earth Science Modeling and Data Assimilation


Presentation Transcript


  1. ESMF: A High-Performance Framework for Earth Science Modeling and Data Assimilation. V. Balaji (vb@gfdl.gov), SGI/GFDL, Princeton University. First PRISM Project Meeting, Toulouse, 22 May 2002. NASA/GSFC.

  2. Who are we?
  • NSF NCAR: Tim Killeen (PI), Byron Boville, Cecelia DeLuca, Roberta Johnson, John Michalakes
  • MIT: John Marshall (PI), Chris Hill
  • NASA DAO: Arlindo da Silva (PI), Leonid Zaslavsky, Will Sawyer
  • NASA NSIPP: Max Suarez, Michele Rienecker, Christian Keppenne
  • NOAA GFDL: Ants Leetmaa, V. Balaji, Robert Hallberg, Jeff Anderson
  • NOAA NCEP: Stephen Lord, Mark Iredell, Mike Young, John Derber
  • DOE Los Alamos National Lab: Phil Jones
  • DOE Argonne National Lab: Jay Larson, Rob Jacob
  • University of Michigan: Quentin Stout

  3. Project Organization [organization chart]
  • Part I, Core Framework Development: NSF NCAR PI, with Part I proposal-specific milestones (NASA ESS)
  • Part II, Data Assimilation Deployment: NASA DAO PI, with Part II proposal-specific milestones
  • Part III, Prognostic Model Deployment: MIT PI, with Part III proposal-specific milestones
  • Joint milestones shared across all three parts
  • Joint Specification Team: requirements analysis, system architecture, API specification

  4. Outline • Background • ESMF Objectives and Scientific Benefits • ESMF Overview • Final Milestones • Development Plan • Beyond 2004: ESMF Evolution

  5. Technological Trends • In climate research and NWP: increased emphasis on detailed representation of individual physical processes; requires many teams of specialists to contribute components to an overall coupled system • In computing technology: increase in hardware and software complexity in high-performance computing, as we shift toward the use of scalable computing architectures

  6. Community Response
  • Modernization of modeling software:
    • Abstraction of underlying hardware to provide a uniform programming model across vector, uniprocessor and scalable architectures
    • Distributed development model characterized by many contributing authors; use of high-level language features for abstraction to facilitate the development process
    • Modular design for interchangeable dynamical cores and physical parameterizations; development of community-wide standards for components
  • Development of prototype frameworks: GFDL (FMS), NASA/GSFC (GEMS)
  • Other framework-ready packages: NCAR/NCEP (WRF), NCAR/DOE (MCT)
  The ESMF aims to unify and extend these efforts.

  7. Objectives of the ESMF • Facilitate the exchange of scientific codes (interoperability) • Promote the reuse of standardized technical software while preserving computational efficiency • Focus community resources to deal with changes in computer architecture • Present the computer industry and computer scientists with a unified and well defined task • Share overhead costs of the housekeeping aspects of software development • Provide greater institutional continuity to model development efforts

  8. Scientific Benefits: ESMF accelerates advances in Earth System Science • Eliminates software barriers to collaboration among organizations • Easy exchange of model components accelerates progress in NWP and climate modeling • Independently developed models and data assimilation methods can be combined and tested • Coupled model development becomes a truly distributed process • Advances from smaller academic groups are easily adopted by large modeling centers

  9. Scientific Benefits, cont.: ESMF accelerates advances in Earth System Science • Facilitates development of new interdisciplinary collaborations • Simplifies extension of climate models to the upper atmosphere • Accelerates inclusion of advanced biogeochemical components into climate models • Develops a clear path for many other communities to use, improve, and extend climate models • Many new model components gain easy access to the power of data assimilation

  10. Outline • Background • ESMF Objectives and Scientific Benefits • ESMF Overview • Final Milestones • Development Plan • Beyond 2004: ESMF Evolution

  11. Design Principles
  • Modularity: data-hiding, encapsulation, self-sufficiency
  • Portability: adhere to official language standards, use community-standard software packages, comply with internal standards
  • Performance: minimize abstraction penalties of using a framework
  • Flexibility: address a wide variety of climate issues by configuring particular models out of a wide choice of available components and modules
  • Extensibility: design to anticipate and accommodate future needs
  • Community: encourage users to contribute components, develop in an open-source environment

  12. Framework Architecture

  13. Application Architecture [layer diagram, top to bottom]
  • ESMF Superstructure: Coupling Layer
  • User Code: Model Layer
  • ESMF Infrastructure: Fields and Grids Layer; Low-Level Utilities
  • External Libraries: BLAS, MPI, NetCDF, …
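  This layering is the heart of the design: user model code is "sandwiched" between the superstructure that drives and couples it and the infrastructure it builds on. Below is a minimal sketch of that pattern, written in Python for brevity; every class and method name is a hypothetical illustration, not the ESMF API.

```python
# Hypothetical sketch of the sandwich architecture (not the ESMF API).

class Field:
    """Infrastructure layer: a named data array, nominally laid out on a grid."""
    def __init__(self, name, data):
        self.name = name
        self.data = data

class Component:
    """Superstructure layer: the interface every pluggable model exposes."""
    def initialize(self, imports, exports): pass
    def run(self, imports, exports): pass
    def finalize(self): pass

class ToyOcean(Component):
    """User code layer: a trivial 'model' implementing the required interface."""
    def run(self, imports, exports):
        flux = imports["heat_flux"].data          # forcing from the coupler
        sst = [25.0 + 0.01 * f for f in flux]     # stand-in physics
        exports["sst"] = Field("sst", sst)        # published for other models

# The driver (superstructure) schedules components and carries state
# between them without knowing anything about their internals.
ocean = ToyOcean()
exports = {}
ocean.run({"heat_flux": Field("heat_flux", [100.0, 120.0, 90.0])}, exports)
print(exports["sst"].data)   # [26.0, 26.2, 25.9]
```

  The point of the pattern is interoperability: any model that exposes the standard initialize/run/finalize interface and exchanges data through shared field objects can be swapped into a coupled system without changes to the driver.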

  14. Functionality Classes
  ESMF Infrastructure:
  • DISTGRID: distributed grid operations (transpose, halo, etc.)
  • PHYSGRID: physical grid specification, metric operations
  • REGRID: interpolation of data between grids; ungridded data
  • IO: I/O on distributed data
  • KERN: management of distributed memory; data-sharing for shared and distributed memory
  • TMGR: time management, alarms, time and calendar utilities
  • PROF: performance profiling and logging; adaptive load-balancing
  • ERROR: error handling
  ESMF Superstructure:
  • CONTROL: assignment of components to processor sets; scheduling of components and inter-component exchange; inter-component signals, including checkpointing of complete model configurations
  • COUPLER: validation of exchange packets; blocking and non-blocking transfer of boundary data between component models; conservation verification
  • COMPONENT: specification of required interfaces for components
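  Of these classes, DISTGRID's halo operation is the primitive most grid-point models lean on: each processor's subdomain carries extra boundary ("halo") points that mirror its neighbors' interior edges, and these are refreshed before each stencil computation. The following is a single-process sketch of the idea on a periodic 1-D grid; it is hypothetical illustration code, not ESMF, and a real implementation would move the halo data between processors with MPI.

```python
# Hypothetical halo update on a periodic 1-D decomposition (not ESMF code).

def halo_update(subdomains, halo=1):
    """Refresh each subdomain's halo points from its neighbors' interiors.

    Each subdomain is a list laid out as [halo | interior | halo].
    """
    n = len(subdomains)
    for i, sub in enumerate(subdomains):
        left = subdomains[(i - 1) % n]       # periodic neighbors
        right = subdomains[(i + 1) % n]
        sub[:halo] = left[-2 * halo:-halo]   # neighbor's rightmost interior points
        sub[-halo:] = right[halo:2 * halo]   # neighbor's leftmost interior points

# Two subdomains of the periodic grid [10, 20, 30, 40], halo width 1:
a = [0, 10, 20, 0]   # interior: 10, 20 (zeros are stale halos)
b = [0, 30, 40, 0]   # interior: 30, 40
halo_update([a, b])
print(a, b)          # [40, 10, 20, 30] [20, 30, 40, 10]
```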

  15. Other features • ESMF will be usable by models written in F90/C/C++. • ESMF will be usable by models requiring adjoint capability. • ESMF will support SPMD and MPMD coupling. • ESMF will support several I/O formats (principally netCDF). • ESMF will have uniform syntax across platforms.
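  On the SPMD/MPMD point: in SPMD coupling a single executable contains every component and the same processor set runs them in turn, while in MPMD the components run concurrently on disjoint processor sets and exchange boundary data through a coupler. The sketch below shows how the two modes differ at the communicator level, assuming mpi4py; it is an illustration of the concept, not ESMF's coupling interface.

```python
# Hypothetical SPMD vs. MPMD layout using MPI communicators (not ESMF code).
from mpi4py import MPI

world = MPI.COMM_WORLD
rank = world.Get_rank()

# SPMD style: one executable; every rank steps each component in sequence.
#   atmosphere.run(); ocean.run()   # same processor set, run back to back

# MPMD style: carve the world into disjoint processor sets, one per
# component, so the components time-step concurrently.
color = 0 if rank < world.Get_size() // 2 else 1   # 0 = atmosphere, 1 = ocean
comm = world.Split(color, rank)                    # per-component communicator
# Boundary data would flow between the two sets through a coupler on `world`.
print(f"world rank {rank} -> component {color}, local rank {comm.Get_rank()}")
```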

  16. Target Platforms ESMF will target a broad range of platforms • Major center hardware, e.g. SMP nodes (SP, SGI O3K, Alpha), 1000+ processors • Commodity hardware, e.g. Linux clusters, desktops; x86 (P4, Athlon) + interconnect; 64 processors ≈ $140K, 10-30 GFlop/s

  17. Outline • Background • ESMF Objectives and Scientific Benefits • ESMF Overview • Final Milestones • Development Plan • Beyond 2004: ESMF Evolution

  18. Joint Milestone Codeset I

  19. Joint Milestone Codeset II

  20. Joint Milestone Codeset III

  21. Final Milestones
  • Tested, optimized core ESMF software
  • Many platforms, including commodity clusters
  • All JMC codes will achieve full ESMF compliance
  • Major modeling efforts: CCSM, FMS, MIT, NCEP, GEMS, WRF
  • Major data assimilation efforts: atmospheric (NCEP, DAO), oceanic (NSIPP, MIT)
  • ESMF interoperability demonstrations: demonstrated by running JMC codes using ESMF coupling services, including models that have never been coupled before

  22. Interoperability Demo: 3 interoperability experiments completed at Milestone I, 5 more at Milestone J

  23. Outline • Background • ESMF Objectives and Scientific Benefits • ESMF Overview • Final Milestones • Development Plan • Beyond 2004: ESMF Evolution

  24. Development Strategy: collaborative, organized, efficient • Modular, object-oriented software design • Open, collaborative web development environment based on SourceForge • Communication via teleconferences, mailing lists and the ESMF website • Reviews for requirements, design, code and documentation • ProTeX, LaTeX and LaTeX2HTML tools to support integrated and easy-to-maintain documentation • Defect tracking • CVS source code control

  25. ESMF Management and Coordination [organization chart: Management Structure]
  • Advisory Board and Executive Committee, with an E/PO Director
  • Three development arms, each with a PI: Core Software Development (NSF NCAR PI), Prognostic Model Deployment (MIT PI), Data Assimilation Deployment (NASA DAO PI)
  • Joint Specification Team: requirements analysis, architecture, API specification
  • Technical Oversight Teams, each with a Technical Lead: Low-Level Infrastructure, Fields and Grids, Coupling Superstructure
  • Implementation teams: Core Software Development Team, Modeling Applications Team, Data Assimilation Applications Team

  26. Community Involvement
  • Review by the Earth science community twice during framework development
  • More frequent Requests for Comments
  • Closer and more frequent interaction with other ESS-funded projects:
    • University of Michigan: A High-Performance Adaptive Simulation Framework for Space-Weather Modeling (SWMF)
    • UCLA / LANL: Increasing Interoperability of an Earth System Model: Atmosphere-Ocean Dynamics and Tracer Transports
    • GSFC HSB/COLA: Land Information System (LIS)

  27. Beyond 2004: ESMF Evolution
  • Maintenance, support and management:
    • NCAR commitment to maintain and support core ESMF software
    • NCAR commitment to develop ESMF, with focus and level contingent on outside funding
    • Persistence of Executive Committee and Advisory Board
  • Technical evolution:
    • Functional extension: support for advanced data assimilation algorithms (error covariance operators, infrastructure for generic variational algorithms, etc.); additional grids, new domains
    • Earth System Modeling Environment, including a web/GUI interface, databases of components and experiments, and links to GRID services
