
Uintah Component Developer API Tutorial



  1. www.uintah.utah.edu
  Uintah Component Developer API Tutorial
  • Outline of Uintah Component API
  • Burgers Equation Example Code
  • Task Graph Generation
  • MPM Example
  • ICE Example
  • RMCRT Radiation
  • Arches code
  Jeremy Thornock, Todd Harman, John Schmidt
  Message: Coding for an abstract graph interface provides portability.

  2. John Schmidt
  • Outline of Uintah Component API
  • Burgers Equation Example Code
  • Task Graph Generation

  3. Example Uintah Applications
  • Explosions
  • Angiogenesis
  • Industrial Flares
  • Micropin Flow
  • Sandstone Compaction
  • Foam Compaction
  • Shaped Charges
  • Carbon capture and cleanup

  4. Achieving Scalability & Portability
  • Get the Abstractions Right
  • Abstract Task Graph
  • Encapsulates computation and communication
  • Data Storage – DataWarehouse
  • Encapsulates the model for describing the global computational space
  • Data moves between nodes/processors via MPI under the covers as needed – hidden from the developer
  • Programming for a Patch (see the sketch below)
  • Multiple patches combine to form the global Grid
  • Unit work space – a block-structured collection of cells with an I,J,K indexing scheme
  • Single or multiple patches per core/processor
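
  To make "programming for a patch" concrete, here is a minimal sketch of a per-patch cell loop, patterned on the NodeIterator usage in the Burgers example later in this tutorial. The CellIterator/Patch accessors and variable names here are assumptions for illustration, not code from the slides.

  // Schematic per-patch loop: each task is handed a set of patches and
  // iterates the cells of each one with I,J,K indexing.
  for (int p = 0; p < patches->size(); p++) {
    const Patch* patch = patches->get(p);
    for (CellIterator iter = patch->getCellIterator(); !iter.done(); iter++) {
      IntVector c = *iter;   // c.x(), c.y(), c.z() are the I,J,K indices
      // ... operate on per-cell data fetched from the DataWarehouse ...
    }
  }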

  5. Abstract Task Graph
  • Convenient abstraction that provides a mechanism for achieving parallelism
  • Explicit representation of computation and communication
  • Components delegate decisions about parallelism to a scheduler component, which uses variable dependencies and computational workloads for global resource optimization (load balancing)
  • Efficient fine-grained coupling of multi-physics components
  • Flexible load balancing
  • Separation of application development from parallelism: component developers don't have to be parallelism experts
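
  As an illustration of how variable dependencies induce graph edges, here is a minimal sketch of two tasks chained through a shared variable, using the Task API shown on slide 7. The Example class, callbacks, and labels are hypothetical.

  // Task A produces variable1; Task B consumes it. The scheduler infers the
  // edge A -> B (and any needed MPI communication) from these declarations.
  Task* taskA = new Task( "Example::computeA", this, &Example::computeA );
  taskA->computes( variable1_label );
  sched->addTask( taskA, level->eachPatch(), sharedState_->allMaterials() );

  Task* taskB = new Task( "Example::computeB", this, &Example::computeB );
  taskB->requires( Task::NewDW, variable1_label, Ghost::AroundNodes, 1 );
  taskB->computes( variable2_label );
  sched->addTask( taskB, level->eachPatch(), sharedState_->allMaterials() );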

  6. Uintah Patch and Variables
  • ICE is a cell-centered finite volume method for the Navier-Stokes equations
  • Structured grid variables (for flows) are cell-centered, node-centered, or face-centered
  • Unstructured points (for solids) are particles
  • MPM is a novel method that uses particles and nodes; it exchanges data with ICE directly, not just through boundary conditions
  • ARCHES is a combustion code using several different radiation models and linear solvers
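
  The variable types named above all appear in the task code later in this tutorial; a minimal sketch of their declarations (the types are from the slides, the names are illustrative):

  CCVariable<double>       press_CC;   // cell-centered (ICE flow variables)
  NCVariable<Vector>       gVelocity;  // node-centered (grid velocities in MPM)
  SFCXVariable<double>     pressX_FC;  // face-centered on X faces (SFCY/SFCZ likewise)
  ParticleVariable<Point>  pX;         // unstructured particle data (MPM)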

  7. What is a Task?
  Two features:
  • A pointer to a function that performs the actual work
  • A specification of the inputs & outputs

  Task* task = new Task( "Example::taskexample", this, &Example::taskexample );
  task->requires( Task::OldDW, variable1_label, Ghost::AroundNodes, 1 );
  task->computes( variable1_label );
  task->computes( variable2_label );
  sched->addTask( task, level->eachPatch(), sharedState_->allMaterials() );
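
  The function the task points to must match the scheduler's callback signature; a minimal sketch, following the signature the Burgers example on slide 10 uses (the body here is only a placeholder):

  void Example::taskexample( const ProcessorGroup* pg,
                             const PatchSubset*    patches,
                             const MaterialSubset* matls,
                             DataWarehouse*        old_dw,
                             DataWarehouse*        new_dw )
  {
    // Loop over the patches assigned to this processor and do the work,
    // reading inputs from old_dw and writing results to new_dw.
  }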

  8. Component Basics
  class Example : public UintahParallelComponent, public SimulationInterface {
  public:
    virtual void problemSetup( . . . );
    virtual void scheduleInitialize( . . . );
    virtual void scheduleComputeStableTimestep( . . . );
    virtual void scheduleTimeAdvance( . . . );
  private:
    void initialize( . . . );
    void computeStableTimestep( . . . );
    void timeAdvance( . . . );
  };
  The private methods are the algorithmic implementation.

  9. Burgers Example
  <Grid>
    <Level>
      <Box label = "1">
        <lower>      [0,0,0]       </lower>
        <upper>      [1.0,1.0,1.0] </upper>
        <resolution> [50,50,50]    </resolution>
        <patches>    [2,2,2]       </patches>
        <extraCells> [1,1,1]       </extraCells>
      </Box>
    </Level>
  </Grid>
  8 patches of 25 cubed cells each (50 cubed cells split 2x2x2); extraCells gives one level of halos.

  void Burger::scheduleTimeAdvance( const LevelP& level, SchedulerP& sched )
  {
    ...
    // Get the old solution from the old data warehouse, with one level of halos
    task->requires( Task::OldDW, u_label, Ghost::AroundNodes, 1 );
    task->requires( Task::OldDW, sharedState_->get_delt_label() );
    // Compute the new solution
    task->computes( u_label );
    sched->addTask( task, level->eachPatch(), sharedState_->allMaterials() );
  }

  10. Burgers Equation Code
  void Burger::timeAdvance( const ProcessorGroup*,
                            const PatchSubset* patches,
                            const MaterialSubset* matls,
                            DataWarehouse* old_dw,
                            DataWarehouse* new_dw )
  {
    for( int p = 0; p < patches->size(); p++ ){  // loop over all patches on this processor
      // Get data from the data warehouse, including 1 layer of "ghost" nodes
      // from surrounding patches
      old_dw->get( u, lb_->u, matl, patch, Ghost::AroundNodes, 1 );
      Vector dx = patch->getLevel()->dCell();             // space increment
      old_dw->get( dt, sharedState_->get_delt_label() );  // time increment
      new_dw->allocateAndPut( new_u, lb_->u, matl, patch ); // allocate memory for result: new_u
      // define iterator ranges l and h, etc.
      // A lot of code has been pruned...
      for( NodeIterator iter(l, h); !iter.done(); iter++ ){ // iterate over all the nodes
        IntVector n = *iter;
        double dudx = ( u[n+IntVector(1,0,0)] - u[n-IntVector(1,0,0)] ) / (2.0 * dx.x());
        double du   = -u[n] * dt * dudx;
        new_u[n] = u[n] + du;
      }
    }
  }
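
  The pruned "iterator ranges" step typically derives l and h from the patch. A minimal sketch, assuming the standard Patch node-index accessors (an assumption; the original code is elided here):

  // Assumed accessors: the node-index bounds of this patch.
  IntVector l = patch->getNodeLowIndex();
  IntVector h = patch->getNodeHighIndex();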

  11. Uintah Architecture
  [Diagram: a Uintah component supplies an XML problem specification and tasks at compile time; at run time (each timestep) the runtime system executes the task graph, solving equations, calculating residuals, and performing parallel I/O to Visus/PIDX and VisIt.]

  12. Uintah Distributed Task Graph
  • 2 million tasks per timestep globally on 98K cores
  • Tasks on local and neighboring patches
  • Same code is called back for each patch
  • Variables live in the data warehouse (DW): read with get() from OldDW and NewDW, write with put() to NewDW
  • Communication along edges
  • Although individual task graphs are quite "linear" per mesh patch, they are offset when multiple patches execute per core
  [Figure: 4 patches, single-level ICE task graph]

  13. Todd Harman
  • ICE Example
  • RMCRT (Reverse Monte Carlo Ray Tracing) Radiation
  • MPM Example

  14. Uintah Patch and Variables

  15. Task from the ICE Algorithm
  [Diagram: the "Compute Press_FC" task computes face-centered pressures.
  Input variables: press_CC, rho_CC. Output variable: press_FC.]

  16. Scheduling an ICE Algorithm Task: the "Hello World" task in Uintah
  void ICE::scheduleComputePressFC( SchedulerP& sched,
                                    const PatchSet* patches,
                                    const MaterialSubset* press_matl,
                                    const MaterialSet* matls )
  {
    Task* task = scinew Task( "ICE::computePressFC", this, &ICE::computePressFC );

    Ghost::GhostType gac = Ghost::AroundCells;
    Task::MaterialDomainSpec oims = Task::OutOfDomain;  // outside of the ICE matlSet

    task->requires( Task::NewDW, lb->press_CCLabel,   press_matl, oims, gac, 1 );
    task->requires( Task::NewDW, lb->sum_rho_CCLabel, press_matl, oims, gac, 1 );

    task->computes( lb->pressX_FCLabel, press_matl, oims );
    task->computes( lb->pressY_FCLabel, press_matl, oims );
    task->computes( lb->pressZ_FCLabel, press_matl, oims );

    sched->addTask( task, patches, matls );
  }

  17. Task from the ICE Algorithm (1/3)
  void ICE::computePressFC( const ProcessorGroup*,
                            const PatchSubset* patches,
                            const MaterialSubset* /*matls*/,
                            DataWarehouse* /*old_dw*/,
                            DataWarehouse* new_dw )
  {
    for( int p = 0; p < patches->size(); p++ ){
      const Patch* patch = patches->get(p);
      Ghost::GhostType gac = Ghost::AroundCells;

      // Get data from the new data warehouse
      constCCVariable<double> press_CC;
      constCCVariable<double> sum_rho_CC;
      const int pressMatl = 0;
      new_dw->get( press_CC,   lb->press_CCLabel,   pressMatl, patch, gac, 1 );
      new_dw->get( sum_rho_CC, lb->sum_rho_CCLabel, pressMatl, patch, gac, 1 );

  18. Task from the ICE Algorithm (2/3)
  ...continued
      // Allocate memory and "put" it into the new data warehouse
      SFCXVariable<double> pressX_FC;
      SFCYVariable<double> pressY_FC;
      SFCZVariable<double> pressZ_FC;
      new_dw->allocateAndPut( pressX_FC, lb->pressX_FCLabel, pressMatl, patch );
      new_dw->allocateAndPut( pressY_FC, lb->pressY_FCLabel, pressMatl, patch );
      new_dw->allocateAndPut( pressZ_FC, lb->pressZ_FCLabel, pressMatl, patch );

  19. Task from the ICE Algorithm (3/3)
  ...continued
      //__________________________________
      // Templated function that performs the calculation:
      // for each face, compute the pressure
      computePressFace< SFCXVariable<double> >( [snip] );
      computePressFace< SFCYVariable<double> >( [snip] );
      computePressFace< SFCZVariable<double> >( [snip] );
    }  // patch loop
  }

  20. Radiation: Reverse Monte Carlo Ray Tracing (Single Level)
  • All-to-all: every cell potentially needs to "see" all cells in the domain
  • Ideal for GPUs

  21. Radiation: Reverse Monte Carlo Ray Tracing (Single Level)
  [Diagram: the rayTrace task computes divQ.
  Input variables: Temp_CC, abskg_CC, cellType_CC. Output variable: divQ_CC.]

  22. RMCRT Single Level Scheduling
  void Ray::sched_rayTrace( const LevelP& level,
                            SchedulerP& sched,
                            Task::WhichDW abskg_dw,
                            Task::WhichDW sigma_dw,
                            Task::WhichDW celltype_dw,
                            bool modifies_divQ,
                            const int radCalc_freq )
  {
    Task* task;
    if ( Parallel::usingDevice() ) {
      // GPU task implementation
      task = scinew Task( "Ray::rayTraceGPU", this, &Ray::rayTraceGPU,
                          modifies_divQ, abskg_dw, sigma_dw, celltype_dw, ... );
      task->usesDevice( true );
    } else {
      // CPU task implementation
      task = scinew Task( "Ray::rayTrace", this, &Ray::rayTrace,
                          modifies_divQ, abskg_dw, sigma_dw, celltype_dw, ... );
    }

  23. RMCRT Single Level Scheduling
  ...continued
    // Require an "infinite" number of ghost cells to access the entire domain.
    Ghost::GhostType gac = Ghost::AroundCells;
    task->requires( abskg_dw,    d_abskgLabel,    gac, SHRT_MAX );
    task->requires( sigma_dw,    d_sigmaT4_label, gac, SHRT_MAX );
    task->requires( celltype_dw, d_cellTypeLabel, gac, SHRT_MAX );

  24. RMCRT Single Level Scheduling
  ...continued
    if( modifies_divQ ){
      task->modifies( d_divQLabel );
    } else {
      task->computes( d_divQLabel );
    }
    sched->addTask( task, level->eachPatch(), d_matlSet );
  }

  25. RMCRT Single Level Task
  void Ray::rayTrace( const ProcessorGroup* pc,
                      const PatchSubset* patches,
                      const MaterialSubset* matls,
                      DataWarehouse* old_dw,
                      DataWarehouse* new_dw,
                      bool modifies_divQ,
                      Task::WhichDW which_abskg_dw,
                      Task::WhichDW which_sigmaT4_dw,
                      Task::WhichDW which_celltype_dw,
                      const int radCalc_freq )
  {
    const Level* level = getLevel( patches );

    DataWarehouse* abskg_dw    = new_dw->getOtherDataWarehouse( which_abskg_dw );
    DataWarehouse* sigmaT4_dw  = new_dw->getOtherDataWarehouse( which_sigmaT4_dw );
    DataWarehouse* celltype_dw = new_dw->getOtherDataWarehouse( which_celltype_dw );

  26. RMCRT Single Level Task
  ...continued
    constCCVariable<double> sigmaT4OverPi;
    constCCVariable<double> abskg;
    constCCVariable<int>    celltype;

    //__________________________________
    // Determine the size of the domain.
    IntVector cellLo, cellHi;
    level->findCellIndexRange( cellLo, cellHi );  // including extraCells

    // Get the whole-domain data from the data warehouses
    abskg_dw->getRegion(    abskg,         d_abskgLabel,    d_matl, level, cellLo, cellHi );
    sigmaT4_dw->getRegion(  sigmaT4OverPi, d_sigmaT4_label, d_matl, level, cellLo, cellHi );
    celltype_dw->getRegion( celltype,      d_cellTypeLabel, d_matl, level, cellLo, cellHi );

  27. RMCRT Single Level Task
  ...continued
    // patch loop
    for( int p = 0; p < patches->size(); p++ ){
      const Patch* patch = patches->get(p);

      // Allocate memory and "put" it into the new data warehouse
      CCVariable<double> divQ;
      if( modifies_divQ ){
        new_dw->getModifiable( divQ, d_divQLabel, d_matl, patch );
      } else {
        new_dw->allocateAndPut( divQ, d_divQLabel, d_matl, patch );
      }

      < Compute divQ >

  28. Material Point Method (MPM)
  1. Particles have properties (velocity, mass, etc.)
  2. Properties are mapped onto the mesh (see the sketch below)
  3. Forces, accelerations, and velocities are calculated on grid nodes
  4. Grid node motion is calculated
  5. Only the particles are moved, by mapping velocities back to the particles
  MPM handles deformation, contact, high strain, and fragmentation, and models solids.
  [Figure: MPM DAG]
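
  To illustrate step 2 (and the reverse mapping of step 5), here is a schematic, self-contained sketch of mapping particle mass and velocity onto grid nodes with linear shape-function weights in 1D. This is a generic MPM illustration under simplifying assumptions, not Uintah code; all names and values are hypothetical.

  #include <cmath>
  #include <cstdio>
  #include <vector>

  int main() {
    const int    nNodes = 6;
    const double dx     = 1.0;                   // node spacing
    std::vector<double> xp = {1.3, 2.7, 3.1};    // particle positions
    std::vector<double> vp = {0.5, 1.0, -0.2};   // particle velocities
    std::vector<double> mp = {2.0, 1.0, 1.5};    // particle masses

    std::vector<double> nodeMass(nNodes, 0.0), nodeMom(nNodes, 0.0);

    // Step 2: each particle contributes mass and momentum to its two
    // surrounding nodes, weighted by linear (tent) shape functions.
    for (size_t p = 0; p < xp.size(); p++) {
      int    i = static_cast<int>(std::floor(xp[p] / dx)); // left node index
      double w = xp[p] / dx - i;                           // weight toward node i+1
      nodeMass[i]     += (1.0 - w) * mp[p];
      nodeMass[i + 1] += w         * mp[p];
      nodeMom[i]      += (1.0 - w) * mp[p] * vp[p];
      nodeMom[i + 1]  += w         * mp[p] * vp[p];
    }

    // Node velocity = momentum / mass (steps 3-4 would update these on the grid).
    for (int i = 0; i < nNodes; i++) {
      double v = nodeMass[i] > 0.0 ? nodeMom[i] / nodeMass[i] : 0.0;
      std::printf("node %d: mass %.3f velocity %.3f\n", i, nodeMass[i], v);
    }
    return 0;
  }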

  29. MPM Task
  [Diagram: the updateParticles task updates the particle position p.x.
  Input variables: p.x, delT, gVelocity. Output variable: p.x (updated).]

  30. MPM Task Scheduling
  void MPM::scheduleInterpolateToParticlesAndUpdate( SchedulerP& sched,
                                                     const PatchSet* patches,
                                                     const MaterialSet* mpm_matls )
  {
    // Only schedule MPM on the finest level
    if ( !flags->doMPMOnLevel( getLevelIndex(), numLevels() ) ) {
      return;
    }

    Task* task = scinew Task( "MPM::interpolateToParticlesAndUpdate", this,
                              &MPM::interpolateToParticlesAndUpdate );

    Ghost::GhostType gac   = Ghost::AroundCells;
    Ghost::GhostType gnone = Ghost::None;

    task->requires( Task::OldDW, d_sharedState->get_delt_label() );
    task->requires( Task::NewDW, lb->gVelocityStarLabel, gac, NGN );
    task->requires( Task::OldDW, lb->pXLabel, gnone );
    task->computes( lb->pXLabel_preReloc );

    sched->addTask( task, patches, mpm_matls );
  }

  31. MPM Task
  void MPM::interpolateToParticlesAndUpdate( const ProcessorGroup*,
                                             const PatchSubset* patches,
                                             const MaterialSubset*,
                                             DataWarehouse* old_dw,
                                             DataWarehouse* new_dw )
  {
    for( int p = 0; p < patches->size(); p++ ){
      const Patch* patch = patches->get(p);

      ParticleInterpolator* interpolator = flags->d_interpolator->clone( patch );
      vector<IntVector> ni( interpolator->size() );
      vector<double>    S( interpolator->size() );

      int numMPMMatls = d_sharedState->getNumMPMMatls();

      delt_vartype delT;
      old_dw->get( delT, delt_label, getLevel( patches ) );

  32. MPM Task
  ...continued
      for( int m = 0; m < numMPMMatls; m++ ){
        MPMMaterial* mpm_matl = d_sharedState->getMPMMaterial( m );
        int matl = mpm_matl->getMatlIndex();

        //______________________________
        //
        ParticleSubset* pset = old_dw->getParticleSubset( matl, patch );

        constParticleVariable<Point> px;
        ParticleVariable<Point>      pxnew;
        old_dw->get( px, lb->pXLabel, pset );
        new_dw->allocateAndPut( pxnew, lb->pXLabel_preReloc, pset );

        Ghost::GhostType gac = Ghost::AroundCells;
        constNCVariable<Vector> gvelocity_star;
        new_dw->get( gvelocity_star, lb->gVelocityStarLabel, matl, patch, gac, NGP );

  33. MPM Task
  ...continued
        //_______________________________
        // Loop over particles
        for( iter = pset->begin(); iter != pset->end(); iter++ ){
          particleIndex idx = *iter;

          // Get the node indices that surround the particle
          interpolator->findCellAndWeights( px[idx], ni, S, psize[idx], pFOld[idx] );

          // Accumulate the contribution from each surrounding vertex
          Vector vel( 0.0, 0.0, 0.0 );
          for( int k = 0; k < flags->d_8or27; k++ ){
            IntVector node = ni[k];
            vel += gvelocity_star[node] * S[k];
          }

          // Update the particle's position
          pxnew[idx] = px[idx] + vel * delT;
        }
      }  // for materials

      delete interpolator;
    }
  }

  34. MPMICE Fluid-Structure Interaction Algorithm
  (Tasks come from the ICE, MPM, and MPMICE components.)
  • Interpolate particle state to the grid
  • Compute the equilibrium pressure
  • Compute face-centered velocities
  • Compute source terms resulting from solid chemical reactions
  • Compute an estimate of the time-advanced pressure
  • Calculate the face-centered pressure
  • Calculate material stresses
  • Compute Lagrangian-phase quantities at cell centers
  • Calculate momentum and heat exchange between materials

  35. MPMICE Fluid-Structure Interaction Algorithm
  ...continued
  • Update specific volume due to the changes in temperature, pressure, and reactions
  • Advect fluids
  • Update the particles' state
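
  A minimal sketch of how a component strings such a task sequence together each timestep, patterned on the scheduleTimeAdvance methods shown earlier; the method and variable names here are hypothetical, not the actual MPMICE API.

  void MPMICE::scheduleTimeAdvance( const LevelP& level, SchedulerP& sched )
  {
    // Each schedule* call adds one task; the task graph orders them through
    // their requires/computes declarations, not through this call order.
    scheduleInterpolateParticlesToGrid(   sched, patches, mpm_matls );
    scheduleComputeEquilibrationPressure( sched, patches, press_matl, all_matls );
    scheduleComputeFaceCenteredVelocities( sched, patches, all_matls );
    // ... remaining tasks from the list above ...
    scheduleAdvectAndAdvanceInTime(       sched, patches, all_matls );
  }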

  36. Jeremy Thornock
  • Arches Component
