
Improved Scalability of the Finite-Volume Dynamical Core


Presentation Transcript


  1. Improved Scalability of the Finite-Volume Dynamical Core The Cubed-Sphere and 2-dimensional (XY) domain decomposition William M. Putman Special thanks to Shian-Jiann Lin (GFDL), V. Balaji (GFDL), Tom Clune (SIVO, NASA/GSFC) NASA/GSFC Software Integration and Visualization Office

  2. Objective: A highly scalable fvcore • Scalability of the Finite-Volume dynamical core to 10,000s - 100,000s of processors • 2-dimensional horizontal (XY) domain decomposition • A quasi-uniform grid allows a purely flux-form dycore • No semi-Lagrangian extension • No polar filters • A stepping stone toward a global non-hydrostatic cloud-resolving atmospheric model at 4-km to 1-km horizontal resolution (Lin, GFDL)

  3. Introduction: The fvcore and the cubed-sphere • The FV dycore • Developed at NASA during the 1990s (Lin & Rood) • An integral component of the GEOS atmospheric model line at NASA/GSFC • Lat-lon implementations exist within the global modeling efforts at NASA, NCAR, and NOAA-GFDL • GEOS-4 and GFDL implementations • 1-D domain decomposition: MPI in the latitudinal direction, OpenMP in the vertical • GEOS-5 and NCAR CAM implementations • 2-D domain decomposition in the latitude and vertical directions • The cubed-sphere • Potential scalability to 100,000s of processors • Ideal for 2-dimensional XY domain decomposition • Quasi-uniform mappings of the cube to the sphere • Gnomonic (Sadourny, 1972) • Conformal (Rancic & Purser, 1996) • Elliptic solvers • Can share a common code base with the lat-lon implementation

  4. Motivation: Scaling to 100,000s of processors • Why? • Climate (capacity computing) • 100-km to 50-km resolution • Long integrations (100+ years) • 100s of chemical constituents • Many scenarios - faster turnaround • Advanced physics and super-parameterization • A high-resolution cloud-resolving model within each grid cell • Thirst for more processors • Weather (capability computing) • High-resolution NWP • 25-km to 50-km at present • 5-km to 10-km within a decade • Global cloud resolving • Non-hydrostatic, convective-parameterization free (explicit cloud microphysics)

  5. Motivation: Scaling to 100,000s of processors • Current limitations • Finite-Volume dycore • The FV algorithm of Lin and Rood (1990s) • Built on a lat-lon grid • Convergence of longitudes at the poles • Polar filters • Flux form with a semi-Lagrangian extension • Non-local communication for XY domain decomposition • Limited scalability due to the 1-D domain decomposition • YZ-decomposition option • Requires transposes between the XY and YZ decompositions for column physics

  6. Motivation: Scaling to 100,000s of processors • How do we get there? • Quasi-uniform grids • Cubed-sphere grid • Geodesic grids • 2D XY domain decomposition • Exposes additional parallelism over lat-lon (see the sketch below) • Potential scalability of the cubed-sphere • Scalable communication • Barriers within the FV algorithm • Orthogonality requirements • Vector interpolation for the C-D grid scheme • Vector components on the corners • The algorithm needs them • Special treatment on the cubed-sphere • Vorticity at corner points • Other efforts • The high-order Discontinuous Galerkin method on the cubed-sphere at NCAR (HOMME) • Finite-volume methods on geodesic grids at CSU
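To make the parallelism comparison concrete, the sketch below estimates the maximum MPI rank counts available to a 1-D latitudinal decomposition of a lat-lon grid versus a 2-D XY block decomposition of a cubed sphere. The grid sizes and the minimum block width are illustrative assumptions, not figures from the presentation.

```python
# Back-of-the-envelope comparison (illustrative only; grid sizes and block
# width are assumptions, not taken from the presentation).

def latlon_1d_max_ranks(nlat):
    # A 1-D latitudinal decomposition can use at most one MPI rank per
    # latitude row (in practice several rows per rank are needed for halos).
    return nlat

def cubed_sphere_2d_max_ranks(n, block=4):
    # A 2-D XY decomposition of each cube face into block x block cell
    # subdomains gives 6 * (n // block)**2 ranks for an n x n-cells-per-face grid.
    return 6 * (n // block) ** 2

# Roughly comparable ~25-km grids: a 0.25-degree lat-lon grid (~721 latitudes)
# versus a c360 cubed sphere (360 x 360 cells per face).
print(latlon_1d_max_ranks(721))                  # O(10^2 - 10^3) ranks
print(cubed_sphere_2d_max_ranks(360, block=4))   # O(10^4 - 10^5) ranks
```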

  7. Approach: The FV dycore on the cubed-sphere • The choice of grid • Gnomonic, conformal, elliptic solvers… • Characteristics of computationally efficient grids • Minimum grid length • Orthogonal • Conformal • preserves the angles between intersecting curves on adjoining faces • Differentiable to any order everywhere except at singularities • Aspect ratio (smoothness) • Conformal versus elliptic solvers • Orthogonality versus minimum grid length
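As one way to quantify the grid characteristics listed above, the sketch below constructs a single equiangular gnomonic cube face and computes its minimum-to-maximum grid-length ratio and a rough cell aspect ratio. The equiangular variant, the c48 size, and the metric definitions are assumptions made for the example, not necessarily those used in the presentation.

```python
import numpy as np

def gnomonic_face_corners(n):
    """Cell-corner positions of one equiangular gnomonic cube face,
    returned as unit vectors on the sphere, shape (n+1, n+1, 3)."""
    alpha = np.linspace(-np.pi / 4, np.pi / 4, n + 1)      # equally spaced angles
    x, y = np.meshgrid(np.tan(alpha), np.tan(alpha), indexing="ij")
    p = np.stack([np.ones_like(x), x, y], axis=-1)          # face centered on (1, 0, 0)
    return p / np.linalg.norm(p, axis=-1, keepdims=True)    # project cube face to sphere

def edge_lengths(p):
    """Great-circle edge lengths (radians) along the two grid directions."""
    di = np.arccos(np.clip(np.sum(p[1:, :] * p[:-1, :], axis=-1), -1.0, 1.0))
    dj = np.arccos(np.clip(np.sum(p[:, 1:] * p[:, :-1], axis=-1), -1.0, 1.0))
    return di, dj

n = 48                                # c48: 48 x 48 cells per face (~2 degrees)
di, dj = edge_lengths(gnomonic_face_corners(n))
ratio = min(di.min(), dj.min()) / max(di.max(), dj.max())
aspect = np.maximum(di[:, :-1] / dj[:-1, :], dj[:-1, :] / di[:, :-1]).max()
print(f"min/max grid length: {ratio:.3f}   worst cell aspect ratio: {aspect:.3f}")
```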

  8. Approach: The FV dycore on the cubed-sphere • FV algorithm modifications • Generalized software • Works on a generic Cartesian patch • Replace lat-lon geometric factors with areas and grid lengths • Special handling of the poles and cyclic BCs (lat-lon) • Special handling of the corners (cubed-sphere) • Vector interpolation • Divergence conserving • Adjustment factor around corners • D-grid kinetic energy formulation • Upwind biased • Advection of D-winds using B-winds • B-winds are ambiguously defined at corners • Vorticity at corner points • Circulation theorem • C-grid winds • 2-D XY domain decomposition • Uses mosaics within MPP from GFDL • Block decomposition within each face (see the sketch below)
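A minimal sketch of the kind of per-face block decomposition described above, assuming a simple regular px-by-py rank layout on each face; the actual implementation uses GFDL's MPP mosaic infrastructure, so the function names and layout rules here are illustrative only.

```python
# Minimal sketch of a 2-D XY block decomposition of one cube face.
# Assumes an n x n face split over a px x py rank layout; the real code
# uses GFDL's MPP "mosaic" machinery, so names and conventions here are
# purely illustrative.

def local_range(n, nblocks, iblock):
    """Global index range [start, end] (1-based) of block `iblock` when
    n cells are split as evenly as possible into nblocks pieces."""
    base, extra = divmod(n, nblocks)
    start = iblock * base + min(iblock, extra) + 1
    end = start + base - 1 + (1 if iblock < extra else 0)
    return start, end

def face_decomposition(n, px, py):
    """Return {rank: (is, ie, js, je)} index bounds for one n x n cube face."""
    domains = {}
    for rank in range(px * py):
        i, j = rank % px, rank // px
        is_, ie = local_range(n, px, i)
        js, je = local_range(n, py, j)
        domains[rank] = (is_, ie, js, je)
    return domains

# Example: a c48 face (48 x 48 cells) over a 4 x 3 rank layout,
# i.e. 6 faces x 12 ranks = 72 ranks for the whole cubed sphere.
for rank, dom in face_decomposition(48, 4, 3).items():
    print(rank, dom)
```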

  9. Results: Shallow water dynamics and scalability • The shallow water equations • What are they? • Mass conservation • Vector-invariant momentum equation • Why do we use them? • Standard test cases (Williamson et al., 1992) • Comprehensive evaluation of the numerical implementation of the dynamical core • Widely accepted as an initial evaluation of numerical schemes prior to a full three-dimensional baroclinic implementation • Initial scaling results • 2D XY domain decomposition • SGI Altix (512 processors)
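For reference, the shallow water system in the vector-invariant form used by flux-form schemes of this type is usually written as below (a standard statement of the equations, not copied from the slide):

```latex
\frac{\partial h}{\partial t} + \nabla \cdot (h\,\mathbf{V}) = 0, \qquad
\frac{\partial \mathbf{V}}{\partial t} =
  -(\zeta + f)\,\hat{\mathbf{k}} \times \mathbf{V}
  - \nabla\!\left(\tfrac{1}{2}\lvert\mathbf{V}\rvert^{2} + g\,(h + h_s)\right),
```

where h is the fluid depth, V the horizontal velocity, ζ the relative vorticity, f the Coriolis parameter, and h_s the surface height. The first equation is the mass conservation law and the second is the vector-invariant momentum equation referred to on the slide.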

  10. Test Case 1 • 2D advection of a cosine bell • Advection oriented over the corners • Height error after 1 revolution (12 days) • Mass conservation
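The cosine-bell initial condition and the solid-body advecting wind are those of Williamson et al. (1992) test case 1; the parameter values below are quoted from that paper rather than from the slide:

```latex
h(\lambda,\theta) =
\begin{cases}
  \dfrac{h_0}{2}\left(1 + \cos\dfrac{\pi r}{R}\right), & r < R, \\
  0, & r \ge R,
\end{cases}
\qquad h_0 = 1000~\mathrm{m}, \quad R = a/3,
```

```latex
u = u_0\,(\cos\theta\cos\alpha + \cos\lambda\sin\theta\sin\alpha), \qquad
v = -u_0\,\sin\lambda\sin\alpha, \qquad
u_0 = \frac{2\pi a}{12~\mathrm{days}},
```

where r is the great-circle distance from the bell center, a is the Earth's radius, and the rotation angle α orients the flow so that the bell passes over the cube corners.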

  11. Test Case 2 • Zonal geostrophically balanced flow • Steady-state solution • Flow oriented over the corners
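Test case 2 of Williamson et al. (1992) uses the same solid-body wind field as test case 1, with a geostrophically balanced height field so that the exact solution is steady; the standard form (not from the slide) is:

```latex
g\,h = g\,h_0 - \left(a\,\Omega\,u_0 + \tfrac{1}{2}u_0^{2}\right)
       \left(-\cos\lambda\cos\theta\sin\alpha + \sin\theta\cos\alpha\right)^{2},
\qquad g\,h_0 = 2.94 \times 10^{4}~\mathrm{m^2\,s^{-2}},
```

with Ω the Earth's rotation rate and α again chosen to orient the flow over the cube corners.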

  12. Test Case 2 • Zonal geostrophically balanced flow • Reveals excitation of gravity waves around the corners due to geostrophic imbalances in the initial state and the D- to C-grid vector interpolation • A Rossby wave mode (zonal wavenumber 4) is excited by the variation in resolution near the corners • Error convergence with increased resolution is displayed (2nd-order convergence)
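The quoted 2nd-order convergence is the usual estimate obtained from error norms at successive resolutions; a standard formula (not shown on the slide):

```latex
p \approx \log_2 \frac{E(\Delta x)}{E(\Delta x / 2)},
```

so second-order convergence corresponds to the error norm dropping by roughly a factor of four each time the resolution is doubled.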

  13. Test Case 5 • Zonal geostrophically balanced flow over an isolated mountain • Excitation of multiple waves due to the presence of the mountain • Tests energy properties during the conversion between potential and kinetic energy
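Test case 5 of Williamson et al. (1992) takes the balanced flow of test case 2 (with α = 0 and u0 = 20 m/s) and adds a conical mountain; the parameters below are quoted from that paper, not from the slide:

```latex
h_s = h_{s0}\left(1 - \frac{r}{R}\right), \qquad
h_{s0} = 2000~\mathrm{m}, \quad R = \frac{\pi}{9}, \quad
r = \min\!\left(R, \sqrt{(\lambda - \lambda_c)^2 + (\theta - \theta_c)^2}\right),
```

with the mountain centered at (λ_c, θ_c) = (3π/2, π/6).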

  14. Test Case 6 • Rossby-Haurwitz wave (wavenumber 4) • 60-day integration

  15. Cubed-Sphere Scalability • Strong scalability -- the problem size is fixed while the processor count grows • 2-dimensional XY domain decomposition • Pure MPI implementation (can also be hybrid MPI-OpenMP in the 3D dycore) • Run on an SGI Altix
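In the strong-scaling sense used here, speedup and parallel efficiency relative to a reference processor count p_ref are the standard measures (the definitions are not given on the slide):

```latex
S(p) = \frac{T(p_{\mathrm{ref}})}{T(p)}, \qquad
E_{\mathrm{strong}}(p) = \frac{p_{\mathrm{ref}}}{p}\,\frac{T(p_{\mathrm{ref}})}{T(p)},
```

where T(p) is the wall-clock time for the fixed-size problem on p processors; ideal strong scaling keeps E_strong near 1.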

  16. Cubed-Sphere Scalability • Weak scalability -- the problem size and the processor count grow together • 2-dimensional XY domain decomposition • Pure MPI implementation (can also be hybrid MPI-OpenMP in the 3D dycore) • Run on an SGI Altix
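For weak scaling, where the work per processor is held fixed as the processor count grows, the usual efficiency measure (again not defined on the slide) is:

```latex
E_{\mathrm{weak}}(p) = \frac{T(p_{\mathrm{ref}})}{T(p)},
```

with T(p) the time on p processors for a problem whose size has grown in proportion to p; ideal weak scaling keeps the runtime, and hence E_weak, constant.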

  17. Summary and Future Work • Targeting a global atmospheric model for climate and weather, scalable to 100,000s of processors • Current status • A 2-dimensional shallow water model with the finite-volume dynamics on the cubed-sphere • 2D XY domain decomposition • Demonstrating scalability beyond the limits of the 1D domain decomposition on the lat-lon grid • What's next? • Implementation within the 3-dimensional baroclinic framework • Thermodynamic equation • Lagrangian vertical coordinate • Pressure gradient calculations • Full physics implementation • Scaling studies at 1,000s - 10,000s of CPUs
