
HPCN modeling at Vah river site



  1. HPCN modeling at Vah river site. Institute of Informatics, SAS; Water Research Institute; Vah River Authority

  2. Outline • Input data available at the Vah pilot site • Modeling and simulation at the Vah pilot site using SMS/FESWMS • HPCN approach for SMS/FESWMS

  3. Data available at Vah pilot site

  4. What we have so far:

  Two sets (43 and 141) of cross-sections
  • positives: high-precision data, two independent sources
  • negatives: problems with fitting to the coordinate system
  • usage: modeling, calibrating models

  Vector 1:50 000 ARC GIS
  • positives: global information, base database
  • negatives: accuracy only 1:50 000
  • usage: helps interpret the area from an economic viewpoint

  Raster 1:10 000
  • positives: good accuracy for interpretation of the area
  • negatives: raster format, so manual processing is needed
  • usage: interpret the area from an economic viewpoint and provide some parameters for modeling

  LANDSAT images
  • positives: recent (1999) multi-spectral data
  • negatives: low accuracy
  • usage: interpret the area from an economic viewpoint and provide some parameters for modeling of landcover features

  Orthophotomap
  • positives: recent (2000) imagery of the area
  • negatives: 2D only (no elevation information)
  • usage: interpret the area from an economic viewpoint and provide parameters for modeling

  5. What we are waiting for: LIDAR data
  • The contract has been signed; data will be available by the end of June.
  • Digital Surface Model (DSM) with the following features: grid width 2 m; height accuracy RMS = +/- 0.15 m; data coding 16-bit or 8-bit raster data; data format ASCII.
  • Digital Terrain Model (DTM, describing the bare ground surface) with the same features: grid width 2 m; height accuracy RMS = +/- 0.15 m; data coding 16-bit or 8-bit raster data; data format ASCII.
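
  Once delivered, such an ASCII raster is easy to load. The following minimal Fortran sketch (illustration only, not project code) assumes a plain whitespace-separated grid of heights with no header; the file name dtm.asc and the grid dimensions are hypothetical, and real deliveries may carry a header describing the grid:

  ! Minimal sketch: read an ASCII raster DTM (2 m grid) into memory.
  ! Hypothetical assumptions: the file "dtm.asc" holds nrows x ncols
  ! whitespace-separated heights in metres, one raster row per record.
  program read_dtm
    implicit none
    integer, parameter :: nrows = 1000, ncols = 1000  ! assumed grid size
    real, allocatable :: z(:,:)
    integer :: i, ios

    allocate(z(nrows, ncols))
    open(unit=10, file='dtm.asc', status='old', action='read', iostat=ios)
    if (ios /= 0) stop 'cannot open dtm.asc'
    do i = 1, nrows
       read(10, *) z(i, :)        ! list-directed read of one raster row
    end do
    close(10)
    print *, 'min/max elevation [m]:', minval(z), maxval(z)
  end program read_dtm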

  6. GIS vector data in 1:50 000 scale, ArcView format (with a single database)

  7. Focus on the downstream part of the pilot site (Povazska Bystrica town)

  8. The same place in raster maps (1:10 000 scale)

  9. … and how it really looks (orthophotomap)

  10. Set of 43 cross-sections (figure labels: power canal cross-sections, Vah river channel, railway, road)

  11. 43 cross-sections placed on a satellite image

  12. Modeling and simulation using SMS at the Vah river

  13. Input data: 6 cross-sections from the Vah river

  14. Mesh generated by SMS. The right bank of the river has a denser mesh because there are larger changes in elevation and roughness there.

  15. Mesh in detail. The black points represent the nodes of the mesh; the red points represent the measured cross-sections. At this stage, the elevations of the nodes are interpolated from the measured cross-sections (not very accurate); the LIDAR data will give more accurate elevations for every node.

  16. Map elevations (interpolated from cross-sections)

  17. Material (roughness). The main river channel has lower roughness; the floodplains have higher roughness.

  18. Experimental parameters
  • Inflow: 3000 m³s⁻¹ (return period of more than 100 years), steady state
  • Number of elements: 7800
  • Number of nodes: 15750
  • Average distance between two neighboring nodes: ~5 m
  • Number of equations: 35500
  • Computation time: 0:30:56 (on a Pentium III, 550 MHz, 512 MB RAM)

  19. Results: water surface elevations

  20. Results: water depths

  21. Results: flow velocity magnitudes and vectors

  22. Results: trace flow animation

  23. HPCN approach for SMS

  24. Complexity of SMS/FESWMS
  If the simulated area increases 2 times in every dimension (or the distance between two neighboring nodes decreases 2 times for better accuracy), then:
  • the number of nodes increases 4 times (O(N²))
  • the number of equations increases 4 times (O(N²))
  • the length of the fronts in FESWMS increases 2 times (O(N))
  • the total memory requirement increases 8 times (O(N³))
  • the computation time increases 16 times (O(N⁴)) !!!
  Computation time therefore grows very fast with data size: refining the mesh from slide 18 by a factor of 2 would turn the 31-minute run into roughly 8 hours. Without high performance platforms, users cannot achieve reliable simulation results for large areas in reasonable time, so an HPCN implementation is necessary (the sketch below prints these growth factors).
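
  A minimal Fortran sketch (illustration only, not FESWMS code) that prints the growth factors above for an arbitrary refinement factor k:

  ! Growth factors for a 2-D frontal-method FEM run when the mesh is
  ! refined by a factor k in each dimension (k = 2 reproduces the
  ! numbers on slide 24).
  program scaling
    implicit none
    real :: k
    k = 2.0
    print '(a,f8.1)', 'nodes and equations grow by  x', k**2   ! O(N^2)
    print '(a,f8.1)', 'front length grows by        x', k      ! O(N)
    print '(a,f8.1)', 'total memory grows by        x', k**3   ! O(N^3)
    print '(a,f8.1)', 'computation time grows by    x', k**4   ! O(N^4)
  end program scaling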

  25. Program structure

  Graphical user interface (SMS) for pre- and post-processing
  • not interesting for research, but absolutely necessary for end-users
  • pre-processing: import terrain maps (TIFF, XYZ, ArcView shapefile, …); define the mesh, boundary conditions, …; define simulation parameters; generate input files
  • post-processing: read solutions from the output files; visualization, animation, analysis, statistics, …; export solutions

  Computational module (FESWMS)
  • the focus of research, but not interesting for end-users
  • reads the input data and checks that the input parameters are valid
  • numerical simulation (FEM, FDM): the computational kernel and the main focus of research
  • saves the solutions to the output files

  Excerpt from the computational kernel (frontal solver, subroutine FRONT of FESWMS):

  140 NELL = NELL + 1
      IF (NELL .GT. NE) GO TO 470
      N = NFIXH(NELL)
C
C
      IF (IMAT(N) .LE. 0) GO TO 140
      NM = IMAT(N)
      IF (NM .LT. 901) THEN
         IF (ORT(NM,1) .EQ. 0.) GO TO 140
      END IF
C =BPD=
      CALL SECOND(TA1)
      IF (NCORN(N) .LT. 6 .OR. NM .GT. 900) THEN
         CALL COEF1(N, 1)
      ELSE
         OMEGA = OMEGAS(N)
         CALL COEFS(N, 1)
      END IF
C =BPD=
      CALL SECOND(TA2)
C =BPD=
      SELT = SELT + TA2 - TA1
      NBN = NCN * NDF
      DO 150 LK = 1, NBN
         LDEST(LK) = 0
         NK(LK) = 0
  150 CONTINUE
      KC = 0
      DO 170 J = 1, NCN
         I = NOP(N,J)
         DO 160 L = 1, NDF
            LL = NBC(I,L)
            KC = KC + 1
            NK(KC) = LL
            IF (LL .EQ. 0) GO TO 160
            IF (NLSTEL(LL) .EQ. N) NK(KC) = -LL
  160    CONTINUE
  170 CONTINUE
C
C ... SET UP HEADING VECTORS
C
      DO 220 LK = 1, NBN
         NODE = NK(LK)
         IF (NODE .EQ. 0) GO TO 220
         IF (LCOL .EQ. 0) GO TO 190
         DO 180 L = 1, LCOL
            LL = L
            IF (IABS(NODE) .EQ. IABS(LHED(L))) GO TO 200
  180    CONTINUE
  190    LCOL = LCOL + 1
         LDEST(LK) = LCOL
         LHED(LCOL) = NODE
         GO TO 210
  200    LDEST(LK) = LL
         LHED(LL) = NODE
  210    CONTINUE
  220 CONTINUE
      IF (LCOL .GT. LCMAX) LCMAX = LCOL
      IF (LCOL .LE. MFWX) GO TO 240
      PRINT 225, MFWX
      PRINT 230, NFIXH(NELL)
      IF (IOUT .GT. 0) WRITE (IOUT,225) MFWX
  225 FORMAT (/ ' Fatal ERROR in Subroutine FRONT.', //,
     *        ' The Parameter variable MFW is presently=',I6,/

  26. HPCN approach
  • No modification is required for the GUI environment: users will not notice any changes and will use the program as usual. The GUI (SMS) will run on PC terminals.
  • The existing code in the I/O parts of the computational module is reused, which guarantees compatibility with the existing module and saves development time.
  • The main work of the HPCN solution is to implement a parallel computational kernel.
  • Communication between the GUI (SMS) and the computational module (FESWMS) can be done via any standard protocol (FTP, CORBA, HTTP, …).
  • The parallel computational module will run on an HPCN platform (supercomputers, clusters of workstations).
  (Diagram: pre-processing → input files → processing input data → parallel computational kernel → save solutions → output files → post-processing.)

  27. Detailed FESWMS structure
  Solution schema: input files → preliminary computations → nonlinear solver (linear solver, prepare next solution, check solution) → when the solution is OK, write it to the solution file.
  • Nonlinear solver: Newton iteration is used to solve the nonlinear equations (a toy sketch follows below).
  • Linear solver: the computational kernel of FESWMS and the most CPU-time-consuming part; therefore, it is the focus of parallelization.
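
  To make the schema concrete, here is a minimal Newton-iteration sketch in Fortran (a toy 2x2 system, not FESWMS code). The inner linear solve is done by Cramer's rule here; in FESWMS that step is the frontal solver, which dominates the CPU time and is therefore the part being parallelized:

  ! Minimal Newton-iteration sketch (toy example, not FESWMS code):
  ! solve the nonlinear system  x**2 + y**2 = 4  and  x*y = 1.
  program newton_demo
    implicit none
    real(8) :: x(2), f(2), j(2,2), dx(2), det
    integer :: it

    x = (/ 2.0d0, 1.0d0 /)                       ! initial guess
    do it = 1, 20
       f(1) = x(1)**2 + x(2)**2 - 4.0d0          ! residual F(x)
       f(2) = x(1)*x(2) - 1.0d0
       if (maxval(abs(f)) < 1.0d-12) exit        ! converged
       j(1,1) = 2.0d0*x(1); j(1,2) = 2.0d0*x(2)  ! Jacobian dF/dx
       j(2,1) = x(2);       j(2,2) = x(1)
       ! Linear solve J*dx = -F (Cramer's rule on this toy system;
       ! the frontal solver plays this role in FESWMS).
       det = j(1,1)*j(2,2) - j(1,2)*j(2,1)
       dx(1) = (-f(1)*j(2,2) + f(2)*j(1,2)) / det
       dx(2) = (-f(2)*j(1,1) + f(1)*j(2,1)) / det
       x = x + dx                                ! Newton update
    end do
    print *, 'solution:', x
  end program newton_demo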

  28. Parallel direct solvers
  • Frontal method: the main direct solver, specially designed for FEM; SMS uses this algorithm in the FESWMS computational module.
  • Multi-frontal method: the parallel version of the frontal method; it is based on partitioning the finite-element domain into sub-domains and applying the frontal method to each sub-domain.
  • Existing libraries with direct solvers: MUMPS/PARASOL (developed in ESPRIT IV projects), SuperLU, SPARSE, …

  29. Parallel iterative solvers
  • Conjugate Gradient (CG): one of the most powerful iterative solvers; it contains only vector and matrix operations and is therefore trivially parallelized (see the sketch below).
  • Existing libraries with iterative solvers: PINEAPL (developed in ESPRIT IV projects), PETSC, Aztec, …
  • Advantages (in comparison with direct solvers): less expensive in terms of memory and CPU time; higher parallelism, easier to parallelize.
  • Disadvantages: convergence is not guaranteed (direct solvers always terminate).
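
  A textbook CG loop in Fortran (illustration only, not the PETSC code used in the project). Note that plain CG requires a symmetric positive definite matrix; the nonsymmetric FESWMS systems are actually solved with the related BiCGSTAB method mentioned on the next slide:

  ! Minimal conjugate-gradient sketch for A*x = b with a small
  ! symmetric positive definite matrix.  Every step is a dot product,
  ! a matrix-vector product or a vector update, all of which
  ! parallelize naturally.
  program cg_demo
    implicit none
    integer, parameter :: n = 3
    real(8) :: a(n,n), b(n), x(n), r(n), p(n), ap(n)
    real(8) :: alpha, beta, rr_old, rr_new
    integer :: it

    a = reshape((/ 4d0, 1d0, 0d0,  1d0, 3d0, 1d0,  0d0, 1d0, 2d0 /), (/ n, n /))
    b = (/ 1d0, 2d0, 3d0 /)
    x = 0d0
    r = b                        ! residual r = b - A*x with x = 0
    p = r
    rr_old = dot_product(r, r)
    do it = 1, n                 ! exact CG converges in at most n steps
       ap = matmul(a, p)
       alpha = rr_old / dot_product(p, ap)
       x = x + alpha * p
       r = r - alpha * ap
       rr_new = dot_product(r, r)
       if (sqrt(rr_new) < 1.0d-12) exit
       beta = rr_new / rr_old
       p = r + beta * p
       rr_old = rr_new
    end do
    print *, 'solution:', x
  end program cg_demo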

  30. Preliminary experimental results
  The original version is too slow because direct solvers like the frontal method need a lot of memory; as there is not enough physical memory on a single computer, part of the data has to be swapped to hard disk.
  • Computation time of the original sequential version (using the frontal method): 0:30:56
  • Computation time of the current version (using the BiCGSTAB iterative solver from PETSC) on a single processor: 0:02:57
  • Computation time of the current version on a PC cluster of eight processors connected by 100 Mb Ethernet: 0:01:13
  • Speedup on eight processors: 177 s / 73 s ≈ 2.4
  The WP 3.5 HPCN implementation started 01/01/2001 and will last 24 months; the speedup will be improved in the final version.

  31. Conclusion
  • Several data sets of the Vah river are ready for modeling; the LIDAR data will improve the accuracy of the input data.
  • SMS/FESWMS is a good environment for modeling and simulation. Experiments have been done at the Vah river with real input data.
  • The HPCN version of FESWMS not only reduces computation times but also allows the simulation of large-scale problems and consequently provides more reliable results.
  • Using HPCN will affect only the computational kernel; end-users will not notice any changes except the performance.
  • Parallel solvers are available. The preliminary experimental results show a good speedup achieved on PC clusters.
