
Results from the 3rd Drag Prediction Workshop using the NSU3D Unstructured Mesh Solver


  1. Results from the 3rd Drag Prediction Workshop using the NSU3D Unstructured Mesh Solver Dimitri J. Mavriplis University of Wyoming

  2. Overview • Description of Meshes • Description of NSU3D Solver • Sample performance • Preliminary Sensitivity Evaluations • WB and WBF Results • W1 and W2 Results • Including runs performed at Cessna on 2nd family of grids • Conclusions

  3. General Gridding Guidelines • Grid Convergence Cases: • DLR F6 WBF • 3 grid levels required • DLR F6 WB • Medium grid required, coarse/fine optional • Wing1 and Wing2 • Four grid levels required

  4. General Gridding Guidelines • Grid Resolution Guidelines • BL Region • Y+ < 1.0, 2/3, 4/9, 8/27 (Coarse, Medium, Fine, VeryFine) • 2 cell layers of constant spacing at the wall • Growth rates < 1.25 • Far field: 100 chords • Local spacings (medium grid) • Chordwise: 0.1% chord at LE/TE • Spanwise spacing: 0.1% semispan at root/tip • Cell size on fuselage nose, tail: 2.0% chord • Trailing edge base: 8, 12, 16, 24 cells across TE base (Coarse, Medium, Fine, VeryFine); a first-cell-height sizing sketch follows below
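
The Y+ targets above translate into a first-cell height once a wall-shear estimate is chosen. A minimal sizing sketch, assuming a flat-plate skin-friction correlation; the function name and the flow conditions are illustrative, not taken from the workshop:

```python
import math

def first_cell_height(u_inf, l_ref, nu, y_plus_target=1.0):
    """Estimate the wall-normal height of the first grid cell for a
    target y+, using a flat-plate turbulent skin-friction correlation.
    Illustrative sizing only; correlation and inputs are assumptions."""
    re_l = u_inf * l_ref / nu
    cf = 0.026 / re_l ** (1.0 / 7.0)     # flat-plate Cf estimate
    u_tau = math.sqrt(0.5 * cf) * u_inf  # friction velocity: tau_w/rho = Cf*U^2/2
    return y_plus_target * nu / u_tau    # from y+ = y * u_tau / nu

# Medium-grid target from the guidelines: y+ < 2/3 (numbers illustrative)
print(first_cell_height(u_inf=265.0, l_ref=0.1412, nu=1.5e-5,
                        y_plus_target=2.0 / 3.0))
```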

  5. General Gridding Guidelines • Grid Convergence Sequence • Grid size to grow ~3X for each level of refinement • 1.5X in each coordinate direction (structured) • Maintain same family of grids in sequence • Same relative resolution/topology/growth factors • Sample sizes (DLR F6 WBF): 2.7M, 8M, 24M pts (structured grids) • Unstructured grids should be similar • Cell-based vs. node-based unstructured solvers: 5 to 6 times more tetrahedra than nodes, 2 times more prisms than nodes • An extrapolation sketch over such a sequence follows below
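
A self-similar grid family with a fixed refinement ratio is exactly what Richardson extrapolation of the forces assumes. A minimal sketch under those assumptions; the sample drag values are invented, not workshop results:

```python
import math

def richardson_extrapolate(f_coarse, f_medium, f_fine, r=1.5):
    """Estimate the grid-converged value of a force coefficient from
    three grids in a self-similar family. r is the linear refinement
    ratio; ~1.5 per direction gives the ~3X total growth quoted above
    (1.5**3 = 3.375). Returns (observed order p, extrapolated value)."""
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    f_inf = f_fine + (f_fine - f_medium) / (r ** p - 1.0)
    return p, f_inf

# Illustrative drag counts only, not workshop data:
p, cd_inf = richardson_extrapolate(0.0300, 0.0290, 0.0286)
print(f"observed order {p:.2f}, extrapolated CD {cd_inf:.5f}")
```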

  6. Available (Posted) Unstructured Grids • VGRID (NASA Langley) • Node-Based grids NASA(W1,W2,WB,WBF) • Node-Based grids Cessna (W1,W2) • Cell Centered Grids Raytheon (WB,WBF) • ANSYS Hybrid Meshes • Centaur (DLR, adapted) (Node Based) • AFLR3 (Boeing) (Cell Centered) • TAS (JAXA) (Node Based) • GridPro (Block-Structured/Unstructured)

  7. VGRID NASA (Node Based) • WB: • Coarse : 5.3M pts • Medium: 14.3M pts • Fine: 40.0M pts (> 200M cells) • WBF: • Coarse: 5.6M pts • Medium: 14.6M pts • Fine: 41.1M pts ( > 200M cells)
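
The "> 200M cells" figures are consistent with the cell-to-node ratios quoted on slide 5; a quick arithmetic check (ratios assumed from that slide, not measured on these grids):

```python
# 5-6 tetrahedra per node (slide 5) applied to the WBF fine grid:
n_nodes = 41.1e6
for ratio in (5.0, 6.0):
    print(f"{ratio:.0f} cells/node -> {n_nodes * ratio / 1e6:.0f}M cells")
# prints ~206M and ~247M, matching "> 200M cells"
```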

  8.-10. VGRID Node Centered (NASA) (three mesh-view figure slides)

  11. NSU3D Description • Unstructured Reynolds-averaged Navier-Stokes solver • Vertex-based discretization • Mixed elements (prisms in boundary layer) • Edge data structure (residual-loop sketch below) • Matrix artificial dissipation • Option for upwind scheme with gradient reconstruction • No cross-derivative viscous terms • Thin-layer in all 3 directions • Option for full Navier-Stokes terms
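
NSU3D itself is not reproduced here, but the edge-data-structure bullet refers to a standard vertex-centered pattern: one flux evaluation per edge, accumulated with opposite signs at the two endpoint control volumes. A generic sketch with invented names, not the solver's actual code:

```python
import numpy as np

def accumulate_residuals(edges, normals, q, flux):
    """Generic vertex-centered, edge-based residual loop (illustrative).

    edges   : (n_edges, 2) int array of endpoint node indices
    normals : (n_edges, 3) area-weighted dual-face normal per edge
    q       : (n_nodes, n_vars) flow state at the vertices
    flux    : callable (q_i, q_j, normal) -> numerical flux vector
    """
    res = np.zeros_like(q)
    for (i, j), n in zip(edges, normals):
        f = flux(q[i], q[j], n)  # one flux evaluation per edge
        res[i] += f              # flux leaves node i's control volume...
        res[j] -= f              # ...and enters node j's
    return res
```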

  12. Solver Description (cont’d) • Spalart-Allmaras turbulence model • (original published form) • Optional k-omega model

  13. Solution Strategy • Jacobi/Line Preconditioning • Line solves in boundary layer regions • Relieves aspect ratio stiffness • Agglomeration multigrid • Fast grid independent convergence rates • Parallel implementation • MPI/OpenMP hybrid model • DPW runs: MPI on local cluster and on NASA Columbia Supercomputer
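
The line solves in the boundary layer amount to tridiagonal systems along wall-normal grid lines, typically handled with the Thomas algorithm; coupling the nodes along a line is what removes the aspect-ratio stiffness that pointwise Jacobi cannot. A minimal sketch of that step (assumed implementation, not NSU3D's source):

```python
import numpy as np

def thomas_solve(a, b, c, d):
    """Solve one tridiagonal line system.

    a: sub-diagonal (a[0] unused), b: diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(d)
    c_, d_ = np.empty(n), np.empty(n)
    c_[0], d_[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * c_[i - 1]
        c_[i] = c[i] / m
        d_[i] = (d[i] - a[i] * d_[i - 1]) / m
    x = np.empty(n)
    x[-1] = d_[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = d_[i] - c_[i] * x[i + 1]
    return x
```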

  14. Grid Generation • Runs based on NASA Langley supplied VGRIDns unstructured grids • Tetrahedra in Boundary Layer merged into prismatic elements • Grid sizes up to 41M pts, 240M elements

  15. Sample Run Times • All runs performed on NASA Columbia Supercomputer • SGI Altix 512-cpu machines • Coarse/medium (~15M pts) grids used 96 cpus • Using 500 to 800 multigrid cycles • 30 minutes for coarse grid • 1.5 hrs for medium grid • Fine grids (~40M pts) used 248 cpus • Using 500 to 800 multigrid cycles • 1.5 to 2 hrs for fine grid • CL driver and constant-incidence convergence similar (driver sketch below) • WB cases hard to converge (not entirely steady)
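
A CL driver adjusts the incidence between (or during) solver runs until the target lift is met. The slides do not specify NSU3D's exact scheme; a common secant-style update, sketched under that assumption:

```python
def drive_to_cl(solve, alpha0, cl_target, cl_alpha=0.1, tol=1e-5, max_iter=10):
    """Adjust incidence until the computed lift matches cl_target.

    solve(alpha) -> CL runs the flow solver at fixed alpha (expensive).
    cl_alpha is an initial lift-curve-slope guess per degree, refined
    by secant updates as solutions accumulate. Illustrative only."""
    alpha, cl = alpha0, solve(alpha0)
    for _ in range(max_iter):
        if abs(cl - cl_target) < tol:
            break
        alpha_new = alpha + (cl_target - cl) / cl_alpha      # Newton-style step
        cl_new = solve(alpha_new)
        if abs(alpha_new - alpha) > 1e-12:
            cl_alpha = (cl_new - cl) / (alpha_new - alpha)   # secant update
        alpha, cl = alpha_new, cl_new
    return alpha, cl
```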

  16. Scalability • Near ideal speedup for 72M pt grid on 2008 cpus of NASA Columbia Machine (~10 minutes for steady-state solution)

  17. NSU3D Sensitivity Studies • Sensitivity to Distance Function Calculation Method • Effect of Multi-Dimensional Thin-Layer versus Full Navier-Stokes Terms • Sensitivity to Levels of Artificial Dissipation

  18. Sensitivity to Distance Function • All DPW3 Calculations done with Eikonal equation distance function
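
For context, the baseline an Eikonal-equation solve replaces is an exact nearest-wall-point search, which scales poorly on large grids. A brute-force sketch of that baseline (illustrative, not the solver's method):

```python
import numpy as np

def wall_distance_brute_force(field_pts, wall_pts):
    """Exact nearest-wall distance by exhaustive search, O(n_field * n_wall).
    An Eikonal solve (|grad d| = 1 with d = 0 on walls) approximates this
    far more cheaply, which is why it was used for all DPW3 runs."""
    d = np.empty(len(field_pts))
    for k, p in enumerate(field_pts):
        d[k] = np.sqrt(((wall_pts - p) ** 2).sum(axis=1)).min()
    return d
```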

  19. Sensitivity to Navier-Stokes Terms • All DPW3 Calculations done with Multidimensional Thin-Layer Formulation

  20. Sensitivity to Dissipation Levels • Drag converges under grid refinement • Sensitivity to dissipation decreases with refinement, as expected • All calculations done with the low dissipation level

  21. WBF Convergence (fixed alpha) • “Similar” convergence for all grids • Force coefficients well converged in < 500 MG cycles (a convergence-check sketch follows below)
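
One way to quantify "well converged in < 500 MG cycles" is to find the first cycle after which the coefficient stays inside a tolerance band around its final value. A minimal sketch; the tolerance is an illustrative choice, not a workshop criterion:

```python
import numpy as np

def converged_at(history, tol=1e-4):
    """Return the first multigrid cycle after which a force-coefficient
    history stays within +/- tol of its final value."""
    h = np.asarray(history, dtype=float)
    dev = np.abs(h - h[-1])
    for i in range(len(h)):
        if dev[i:].max() < tol:
            return i
    return None
```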

  22. WBF Convergence • Medium Grid (15M pts): Fixed alpha

  23. WBF Convergence • Medium Grid (15M pts): Fixed CL

  24. WBF Convergence • Similar convergence (Fixed CL or alpha)

  25.-27. WBF: Grid Convergence Study • CP at wing break station (y/b=0.411) (three figure slides)

  28. WBF: Grid Convergence Study • CF at wing break station (y/b=0.411)

  29. WBF: Grid Convergence Study • Good fairing design (coarse grid: 5M pts)

  30. WBF: Grid Convergence Study • Good fairing design (medium grid: 15M pts)

  31. WBF: Grid Convergence Study • Good fairing design (fine grid: 40M pts)

  32. WBF: TE Separation • Coarse grid: 5M pts

  33.-41. WBF: Drag Polar • CP at wing break station (y/b=0.411) (nine figure slides spanning the polar)

  42. WBF: Drag Polar • CFX at wing break station (y/b=0.411)

  43. WBF: Drag Polar • Full Polar run on all 3 grids (5, 15, 40M pts)

  44. WBF: Drag Polar • Full Polar run on all 3 grids (5, 15, 40M pts)

  45. WBF: Moment • Full Polar run on all 3 grids (5, 15, 40M pts)

  46. WBF: Moment • Full Polar run on all 3 grids (5, 15, 40M pts)

  47. WB Convergence (fixed alpha) • Separated Flow, unsteady shedding pattern • Smaller residual excursions with fewer MG levels • Moderate CL variations

  48. WB Medium Grid • Plot Min and Max unsteady CL values
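
Reporting a single force value from a not-entirely-steady case requires windowing the history. A minimal sketch of the min/max/mean extraction implied by these plots; the tail fraction is an assumed choice, not the workshop's procedure:

```python
import numpy as np

def cl_band(history, tail_fraction=0.25):
    """Summarize an oscillating CL history as (mean, min, max) over the
    last tail_fraction of the record, discarding the initial transient."""
    tail = np.asarray(history, dtype=float)
    tail = tail[int(len(tail) * (1.0 - tail_fraction)):]
    return tail.mean(), tail.min(), tail.max()
```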

  49. WB Medium Grid • Plot Min and Max unsteady CL values • Good overlap in the polar: suitable drag values

  50. WB Medium Grid • Plot Min and Max unsteady CL values • Less overlap in CM
