
Introduction to PETSc

Presentation Transcript


  1. Introduction to PETSc Numerical Software Libraries for the Scalable Solution of PDEs Satish Balay, Bill Gropp, Lois C. McInnes, Barry Smith Mathematics and Computer Science Division Argonne National Laboratory full tutorial as presented at the SIAM Parallel Processing Conference, March 1999, available via http://www.mcs.anl.gov/petsc

  2. Philosophy • Writing hand-parallelized application codes from scratch is extremely difficult and time consuming. • Scalable parallelizing compilers for real application codes are very far in the future. • We can ease the development of parallel application codes by developing general-purpose, parallel numerical PDE libraries. • Caveats • Developing parallel, non-trivial PDE solvers that deliver high performance is still difficult, and requires months (or even years) of concentrated effort. • PETSc is a toolkit that can reduce the development time, but it is not a black-box PDE solver nor a silver bullet.

  3. PETSc Concepts As illustrated via a complete nonlinear PDE example: flow in a driven cavity • How to specify the mathematics of the problem • Data objects • vectors, matrices • How to solve the problem • Solvers • linear, nonlinear, and timestepping (ODE) solvers • Parallel computing complications • Parallel data layout • structured and unstructured meshes • Debugging, profiling, extensibility

  4. Tutorial Approach From the perspective of an application programmer, material is marked by level (1 = beginner, 2 = intermediate, 3 = advanced, 4 = developer): • Beginner • basic functionality, intended for use by most programmers • Intermediate • selecting options, performance evaluation and tuning • Advanced • user-defined customization of algorithms and data structures • Developer • advanced customizations, intended primarily for use by library developers

  5. Incremental Application Improvement • Beginner • Get the application “up and walking” • Intermediate • Experiment with options • Determine opportunities for improvement • Advanced • Extend algorithms and/or data structures as needed • Developer • Consider interface and efficiency issues for integration and interoperability of multiple toolkits

  6. Component Interactions for Numerical PDEs (diagram): grids, PDE discretization, algebraic solvers, derivative computation, optimization, visualization, steering. PETSc emphasis: the algebraic solvers.

  7. CFD on an Unstructured Grid • 3D incompressible Euler • Tetrahedral grid • Up to 11 million unknowns • Based on a legacy NASA code, FUN3d, developed by W. K. Anderson • Fully implicit steady-state • Primary PETSc tools: nonlinear solvers (SNES) and vector scatters (VecScatter) Results courtesy of collaborators Dinesh Kaushik and David Keyes, Old Dominion Univ., partially funded by NSF and ASCI level 2 grant

  8. Scalability for Fixed-Size Problem: 11,047,096 unknowns (plot; one data point missing).

  9. Scalability Study 600 MHz T3E, 2.8M vertices

  10. Multiphase Flow • Oil reservoir simulation: fully implicit, time-dependent • First, and only, fully implicit, parallel compositional simulator • 3D EOS model (8 DoF per cell) • Structured Cartesian mesh • Over 4 million cell blocks, 32 million DoF • Primary PETSc tools: linear solvers (SLES) • restarted GMRES with Block Jacobi preconditioning • Point-block ILU(0) on each processor • Over 10.6 gigaflops sustained performance on 128 nodes of an IBM SP. 90+ percent parallel efficiency Results courtesy of collaborators Peng Wang and Jason Abate, Univ. of Texas at Austin, partially funded by DOE ER FE/MICS

  11. PC and SP Comparison 179,000 unknowns (22,375 cell blocks) • PC: Fast Ethernet (100 megabits/second) network of 300 MHz Pentium PCs with 66 MHz bus • SP: 128-node IBM SP with 160 MHz Power2 Super processors and 2 memory cards

  12. Speedup Comparison

  13. The PETSc Programming Model • Goals • Portable, runs everywhere • Performance • Scalable parallelism • Approach • Distributed memory, “shared-nothing” • Requires only a compiler (single node or processor) • Access to data on remote machines through MPI • Can still exploit “compiler discovered” parallelism on each node (e.g., SMP) • Hide within parallel objects the details of the communication • User orchestrates communication at a higher abstract level than message passing

  14. Layered organization (diagram): PDE application codes sit on top of PETSc, which provides ODE integrators; nonlinear solvers and unconstrained minimization; linear solvers (preconditioners + Krylov methods); object-oriented matrices, vectors, and indices; grid management; and the visualization and profiling interfaces. Underneath lie the computation and communication kernels (MPI, MPI-IO, BLAS, LAPACK).

  15. PETSc Numerical Components • Nonlinear solvers: Newton-based methods (line search, trust region), other • Time steppers: Euler, backward Euler, pseudo time stepping, other • Krylov subspace methods: GMRES, CG, CGS, Bi-CG-STAB, TFQMR, Richardson, Chebychev, other • Preconditioners: additive Schwarz, block Jacobi, Jacobi, ILU, ICC, LU (sequential only), others • Matrices: compressed sparse row (AIJ), blocked compressed sparse row (BAIJ), block diagonal (BDIAG), dense, other • Vectors • Index sets: indices, block indices, stride, other

  16. Mesh Definitions: For Our Purposes • Structured • Determine neighbor relationships purely from logical I, J, K coordinates • PETSc support via DA • Semi-Structured • No direct PETSc support; could work with tools such as SAMRAI, Overture, Kelp • Unstructured • Do not explicitly use logical I, J, K coordinates • PETSc support via lower-level VecScatter utilities One is always free to manage the mesh data as if it is unstructured.

  17. A Freely Available and Supported Research Code • Available via http://www.mcs.anl.gov/petsc • Usable in C, C++, and Fortran 77/90 (with minor limitations in Fortran due to its syntax) • Users manual in PostScript and HTML formats • Hyperlinked manual pages for all routines • Many tutorial-style examples • Support via email: petsc-maint@mcs.anl.gov

  18. True Portability • Tightly coupled systems • Cray T3D/T3E • SGI/Origin • IBM SP • Convex Exemplar • Loosely coupled systems, e.g., networks of workstations • SunOS, Solaris 2.5 • IBM AIX • DEC Alpha • HP • Linux • FreeBSD • Windows NT/95

  19. PETSc History • September 1991: begun by Bill and Barry • May 1992: first release of PETSc 1.0 • January 1993: joined by Lois • September 1994: begun PETSc 2.0 • June 1995: first release of version 2.0 • October 1995: joined by Satish • Now: over 4,000 downloads of version 2.0 PETSc Funding and Support • Department of Energy, MICS Program • National Science Foundation, Multidisciplinary Challenge Program, CISE

  20. A Complete Example: Driven Cavity Model • Velocity-vorticity formulation, with flow driven by lid and/or buoyancy • Finite difference discretization with 4 DoF per mesh point • Solution components [u, v, z, T]: velocities u and v, vorticity z, temperature T • Primary PETSc tools: SNES, DA • Example code: petsc/src/snes/examples/tutorials/ex8.c • Application code author: D. E. Keyes

  21. Driven Cavity Solution Approach (diagram): the main routine (user code) calls the nonlinear solvers (SNES), which solve F(u) = 0 using the linear solvers (SLES) with PC and KSP (PETSc code); the user code supplies application initialization, function evaluation, Jacobian evaluation, and post-processing. (The letters A-D in the diagram mark the program parts detailed on the next slide.)

  22. Driven Cavity Program • Part A: Data objects (Vec and Mat), Solvers (SNES) • Part B: Parallel data layout (DA) • Part C: Nonlinear function evaluation • ghost point updates • local function computation • Part D: Jacobian evaluation • default colored finite differencing approximation • Experimentation

  23. Collectivity • MPI communicators (MPI_Comm) specify collectivity (processors involved in a computation) • All PETSc creation routines for solver and data objects are collective with respect to a communicator, e.g., VecCreate(MPI_Comm comm, int m, int M, Vec *x) • Some operations are collective, while others are not, e.g., • collective: VecNorm( ) • not collective: VecGetLocalSize() • If a sequence of collective routines is used, they must be called in the same order on each processor

  24. Vectors • Fundamental objects for storing field solutions, right-hand sides, etc. • VecCreateMPI(...,Vec *) • MPI_Comm - processors that share the vector • number of elements local to this processor • total number of elements • Each process locally owns a subvector of contiguously numbered global indices (diagram: vector partitioned across proc 0 through proc 4)
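
A minimal sketch of this calling sequence (following the PETSc 2.0-era argument order shown in these slides; names and defaults may differ in other releases):

    Vec x;
    int nlocal = 100, nl;
    /* collective call: every process in MPI_COMM_WORLD must participate */
    VecCreateMPI(MPI_COMM_WORLD, nlocal, PETSC_DECIDE, &x);   /* local size given; global size computed */
    VecGetLocalSize(x, &nl);   /* not collective; returns this process's share */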

  25. Sparse Matrices • Fundamental objects for storing linear operators (e.g., Jacobians) • MatCreateMPIAIJ(…,Mat *) • MPI_Comm - processors that share the matrix • number of local rows and columns • number of global rows and columns • optional storage pre-allocation information • Each process locally owns a submatrix of contiguously numbered global rows (diagram: row blocks assigned to proc 0 through proc 4)
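
A hedged sketch of the creation call (per-row nonzero estimates for the diagonal and off-diagonal blocks are passed as preallocation hints, following the usual MPIAIJ convention; check the manual page for the exact argument list in your release):

    Mat A;
    int mlocal = 100, nlocal = 100;   /* local rows and columns owned by this process */
    /* global sizes left for PETSc to compute; about 5 nonzeros/row on-process, 2 off-process */
    MatCreateMPIAIJ(MPI_COMM_WORLD, mlocal, nlocal, PETSC_DECIDE, PETSC_DECIDE,
                    5, PETSC_NULL, 2, PETSC_NULL, &A);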

  26. Parallel Matrix and Vector Assembly • Processes may generate any entries in vectors and matrices • Entries need not be generated by the process on which they ultimately will be stored • PETSc automatically moves data during assembly if necessary • Vector example: • VecSetValues(Vec,…) • number of entries to insert/add • indices of entries • values to add • mode: [INSERT_VALUES,ADD_VALUES] • VecAssemblyBegin(Vec) • VecAssemblyEnd(Vec)
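
Reusing the Vec x from the sketch above, a small example of this pattern (illustrative indices and values; Scalar is the PETSc 2.0 type name used elsewhere in these slides, later renamed PetscScalar):

    int ix[2] = {0, 427};          /* global indices; entry 427 may be owned by another process */
    Scalar y[2] = {1.0, 2.5};
    VecSetValues(x, 2, ix, y, ADD_VALUES);
    /* ... any number of further VecSetValues() calls ... */
    VecAssemblyBegin(x);           /* off-process contributions are communicated here */
    VecAssemblyEnd(x);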

  27. Matrix Assembly • MatSetValues(Mat,…) • number of rows to insert/add • indices of rows and columns • number of columns to insert/add • values to add • mode: [INSERT_VALUES,ADD_VALUES] • MatAssemblyBegin(Mat) • MatAssemblyEnd(Mat)
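
Reusing the Mat A from the sketch after slide 25, a hedged example that inserts one row of a tridiagonal operator (the MAT_FINAL_ASSEMBLY flag belongs to the full calling sequence, which the slide abbreviates; verify the exact argument lists against the manual pages):

    int row = 10;
    int cols[3] = {9, 10, 11};
    Scalar vals[3] = {-1.0, 2.0, -1.0};
    /* one row, three columns; global indices, regardless of which process owns them */
    MatSetValues(A, 1, &row, 3, cols, vals, INSERT_VALUES);
    MatAssemblyBegin(A, MAT_FINAL_ASSEMBLY);
    MatAssemblyEnd(A, MAT_FINAL_ASSEMBLY);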

  28. Blocked Sparse Matrices • For multi-component PDEs • MatCreateMPIBAIJ(…,Mat *) • MPI_Comm - processes that share the matrix • block size • number of local rows and columns • number of global rows and columns • optional storage pre-allocation information (Performance plot: 3D compressible Euler code, block size 5, IBM Power2.)

  29. Solvers: Usage Concepts (important concepts) • Solver classes: linear (SLES), nonlinear (SNES), timestepping (TS) • Usage concepts: context variables, solver options, callback routines, customization

  30. Nonlinear Solvers (SNES) Goal: For problems arising from PDEs, support the general solution of F(u) = 0 • Newton-based methods, including • Line search strategies • Trust region approaches • Pseudo-transient continuation • Matrix-free variants • User provides: • Code to evaluate F(u) • Code to evaluate Jacobian of F(u) (optional) • or use sparse finite difference approximation • or use automatic differentiation (coming soon!) • User can customize all phases of solution

  31. Context Variables • Are the key to solver organization • Contain the complete state of an algorithm, including • parameters (e.g., convergence tolerance) • functions that run the algorithm (e.g., convergence monitoring routine) • information about the current state (e.g., iteration number)

  32. Creating the SNES Context • C/C++ version: ierr = SNESCreate(MPI_COMM_WORLD,SNES_NONLINEAR_EQUATIONS,&snes); • Fortran version: call SNESCreate(MPI_COMM_WORLD,SNES_NONLINEAR_EQUATIONS,snes,ierr) • Provides an identical user interface for all nonlinear solvers • uniprocessor and parallel • real and complex numbers

  33. Basic Nonlinear Solver Code (C/C++)
    SNES snes;                   /* nonlinear solver context */
    Mat J;                       /* Jacobian matrix */
    Vec x, F;                    /* solution, residual vectors */
    int n, its;                  /* problem dimension, number of iterations */
    ApplicationCtx usercontext;  /* user-defined application context */
    ...
    MatCreate(MPI_COMM_WORLD,n,n,&J);
    VecCreate(MPI_COMM_WORLD,n,&x);
    VecDuplicate(x,&F);
    SNESCreate(MPI_COMM_WORLD,SNES_NONLINEAR_EQUATIONS,&snes);
    SNESSetFunction(snes,F,EvaluateFunction,usercontext);
    SNESSetJacobian(snes,J,EvaluateJacobian,usercontext);
    SNESSetFromOptions(snes);
    SNESSolve(snes,x,&its);

  34. Basic Nonlinear Solver Code (Fortran)
    SNES snes
    Mat J
    Vec x, F
    integer n, its
    ...
    call MatCreate(MPI_COMM_WORLD,n,n,J,ierr)
    call VecCreate(MPI_COMM_WORLD,n,x,ierr)
    call VecDuplicate(x,F,ierr)
    call SNESCreate(MPI_COMM_WORLD,SNES_NONLINEAR_EQUATIONS,snes,ierr)
    call SNESSetFunction(snes,F,EvaluateFunction,PETSC_NULL,ierr)
    call SNESSetJacobian(snes,J,EvaluateJacobian,PETSC_NULL,ierr)
    call SNESSetFromOptions(snes,ierr)
    call SNESSolve(snes,x,its,ierr)

  35. Solvers Based on Callbacks (important concept) • User provides routines to perform actions that the library requires. For example, • SNESSetFunction(SNES,...) • uservector - vector to store function values • userfunction - name of the user’s function • usercontext - pointer to private data for the user’s function • Now, whenever the library needs to evaluate the user’s nonlinear function, the solver may call the application code directly with its own local state. • usercontext: serves as an application context object. Data are handled through such opaque objects; the library never sees irrelevant application data

  36. Sample Application Context: Driven Cavity Problem
    typedef struct {
       /* - - - - - basic application data - - - - - */
       double lid_velocity, prandtl, grashof;   /* problem parameters */
       int    mx, my;                           /* discretization parameters */
       int    mc;                               /* number of DoF per node */
       int    draw_contours;                    /* flag - drawing contours */
       /* - - - - - parallel data - - - - - */
       MPI_Comm comm;                           /* communicator */
       DA       da;                             /* distributed array */
       Vec      localF, localX;                 /* local ghosted vectors */
    } AppCtx;

  37. Sample Function Evaluation Code: Driven Cavity Problem
    UserComputeFunction(SNES snes, Vec X, Vec F, void *ptr)
    {
       AppCtx *user = (AppCtx *) ptr;   /* user-defined application context */
       int istart, iend, jstart, jend;  /* local starting and ending grid points */
       Scalar *f;                       /* local vector data */
       ....
       /* Communicate nonlocal ghost point data */
       VecGetArray( F, &f );
       /* Compute local function components; insert into f[ ] */
       VecRestoreArray( F, &f );
       ....
       return 0;
    }

  38. Sample Local Computational Loops: Driven Cavity Problem
    #define U(i)     4*(i)
    #define V(i)     4*(i)+1
    #define Omega(i) 4*(i)+2
    #define Temp(i)  4*(i)+3
    ....
    for ( j = jstart; j < jend; j++ ) {
       row = (j - gys) * gxm + istart - gxs - 1;
       for ( i = istart; i < iend; i++ ) {
          row++;
          u   = x[U(row)];
          uxx = (two * u - x[U(row-1)] - x[U(row+1)]) * hydhx;
          uyy = (two * u - x[U(row-gxm)] - x[U(row+gxm)]) * hxdhy;
          f[U(row)] = uxx + uyy - p5 * (x[Omega(row+gxm)] - x[Omega(row-gxm)]) * hx;
       }
    }
    ....

  39. Customization Options • Procedural Interface • Provides a great deal of control on a usage-by-usage basis inside a single code • Gives full flexibility inside an application • Command Line Interface • Applies same rule to all queries via a database • Enables the user to have complete control at runtime, with no extra coding

  40. Setting Solver Options within Code • SNESSetTolerances(SNES snes, double atol, double rtol, double stol, int maxits, int maxf) • SNESSetType(SNES snes,SNESType type) • etc.
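
An illustrative snippet for the snes context created on slide 33 (the tolerance values are arbitrary, and SNES_EQ_LS is an assumed identifier for the line-search Newton type in this PETSc generation; check the SNESSetType manual page for the names in your release):

    /* absolute tol, relative tol, step tol, max Newton iterations, max function evals */
    SNESSetTolerances(snes, 1.e-10, 1.e-8, 1.e-8, 50, 1000);
    SNESSetType(snes, SNES_EQ_LS);   /* assumed name for Newton with line search */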

  41. Uniform access to all linear and nonlinear solvers • -ksp_type [cg,gmres,bcgs,tfqmr,…] • -pc_type [lu,ilu,jacobi,sor,asm,…] • -snes_type [ls,tr,…] • -snes_line_search <line search method> • -sles_ls <parameters> • -snes_rtol <relative_tol> • etc...
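
For example (illustrative only; the launcher name depends on the local MPI installation), the driven cavity example might be run as: mpirun -np 4 ex8 -snes_type ls -ksp_type gmres -pc_type ilu -snes_rtol 1.e-6 -snes_view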

  42. Runtime Script Example

  43. Customization via Callbacks:Setting a user-defined line search routine SNESSetLineSearch(SNES snes,int(*ls)(…),void *lsctx) where available line search routines ls( ) include: • SNESCubicLineSearch( ) - cubic line search • SNESQuadraticLineSearch( ) - quadratic line search • SNESNoLineSearch( ) - full Newton step • SNESNoLineSearchNoNorms( ) - full Newton step but calculates no norms (faster in parallel, useful when using a fixed number of Newton iterations instead of usual convergence testing) • YourOwnFavoriteLineSearchRoutine( ) - whatever you like
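
A minimal usage sketch for the snes context from slide 33, based on the prototype above (the built-in routine needs no line search context, so PETSC_NULL is passed):

    /* take full Newton steps and skip norm computation */
    SNESSetLineSearch(snes, SNESNoLineSearchNoNorms, PETSC_NULL);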

  44. SNES: Review of Basic Usage • SNESCreate( ) - Create SNES context • SNESSetFunction( ) - Set function eval. routine • SNESSetJacobian( ) - Set Jacobian eval. routine • SNESSetFromOptions( ) - Set runtime solver options for [SNES,SLES,KSP,PC] • SNESSolve( ) - Run nonlinear solver • SNESView( ) - View solver options actually used at runtime (alternative: -snes_view) • SNESDestroy( ) - Destroy solver

  45. SNES: Review of Selected Options (table of runtime options) And many more options...

  46. SNES: Example Programs Location: petsc/src/snes/examples/tutorials/ • ex1.c, ex1f.F - basic uniprocessor codes • ex4.c, ex4f.F - uniprocessor nonlinear PDE (1 DoF per node) • ex5.c, ex5f.F - parallel nonlinear PDE (1 DoF per node) • ex8.c - parallel nonlinear PDE (multiple DoF per node, driven cavity problem) • And many more examples ...

  47. Data Layout and Ghost Values: Usage Concepts (important concepts) Managing field data layout and required ghost values is the key to high performance of most PDE-based parallel programs. • Mesh types: structured (DA objects), unstructured (VecScatter objects) • Usage concepts: geometric data, data structure creation, ghost point updates, local numerical computation

  48. Ghost Values To evaluate a local function f(x), each process requires its local portion of the vector x as well as its ghost values -- that is, bordering portions of x that are owned by neighboring processes. (Diagram: local nodes and ghost nodes.)

  49. Communication and Physical Discretization • Structured meshes: geometric data = stencil [implicit]; data structure creation = DACreate( ); ghost point data structures = DA, AO; ghost point updates = DAGlobalToLocal( ); local numerical computation = loops over I, J, K indices • Unstructured meshes: geometric data = elements, edges, vertices; data structure creation = VecScatterCreate( ); ghost point data structures = VecScatter, AO; ghost point updates = VecScatter( ); local numerical computation = loops over entities
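
For the structured-mesh path, a sketch of the ghost point update used in the driven cavity function evaluation (assuming the da and localX members of the AppCtx shown on slide 36 and the global solution vector X from slide 37; verify the exact calling sequences against the DAGlobalToLocal manual pages):

    /* scatter the global (non-ghosted) vector X into the ghosted local vector */
    DAGlobalToLocalBegin(user->da, X, INSERT_VALUES, user->localX);
    DAGlobalToLocalEnd(user->da, X, INSERT_VALUES, user->localX);
    /* the local function computation can now read ghost values from user->localX */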

  50. Global and Local Representations • Global: each process stores a unique local set of vertices (and each vertex is owned by exactly one process) • Local: each process stores a unique local set of vertices as well as ghost nodes from neighboring processes (Diagram: local nodes and ghost nodes.)
