Ghost Elements: Overview
• Most FEM programs communicate via shared nodes, using FEM_Update_field
• Some computations require read-only copies of remote elements, called "ghosts":
  • Stencil-type finite volume computations
  • The push form of matrix-vector product
  • Many kinds of mesh modification
• Ghosts are a recent addition to the FEM framework
Ghosts: 2D Example
[Figure: a serial mesh of elements 1-4 is split into a Left chunk (elements 1, 2) and a Right chunk (elements 3, 4); each chunk also holds a ghost copy of the neighboring chunk's boundary element (ghost of 3 on the left, ghost of 2 on the right).]
Building Ghosts
• Add ghost elements layer-by-layer from init
• A chunk will include ghosts of all the elements it is connected to by "tuples" (sets of nodes)
  • For 2D, a tuple might be a 2-node edge
  • For 3D, a tuple might be a 4-node face (a 3D sketch follows the edge-adjacency example below)
• You specify a ghost layer with FEM_Add_ghost_layer(tupleSize,ghostNodes)
  • ghostNodes indicates whether to add ghost nodes as well as ghost elements
Building Ghosts (continued)
• FEM_Add_ghost_elem(e,t,elem2tuple)
  • e is the element type
  • t is the number of tuples per element
  • elem2tuple maps an element to its tuples:
    • A tupleSize by t array of integers
    • Contains element-local node numbers
• Repeat this call for each ghost element type
Ghosts: Node adjacency

/* Node-adjacency: triangles have 3 nodes */
FEM_Add_ghost_layer(1,0); /* 1 node per tuple */
const static int tri2node[]={0,1,2};
FEM_Add_ghost_elem(0,3,tri2node);
Ghosts: Edge adjacency

/* Edge-adjacency: triangles have 3 edges */
FEM_Add_ghost_layer(2,0); /* 2 nodes per tuple */
const static int tri2edge[]={0,1, 1,2, 2,0};
FEM_Add_ghost_elem(0,3,tri2edge);
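The same pattern extends to 3D. The sketch below is only illustrative: it declares face adjacency for 8-node hexahedra (six 4-node faces), and the face table assumes a particular local node ordering (nodes 0-3 on the bottom, 4-7 on the top); adapt the table to match your own element numbering.

/* Face-adjacency: hexahedra have 6 four-node faces */
FEM_Add_ghost_layer(4,0); /* 4 nodes per tuple */
const static int hex2face[]={
  0,1,2,3,   4,5,6,7,   /* bottom and top faces (assumed node ordering) */
  0,1,5,4,   1,2,6,5,   /* side faces */
  2,3,7,6,   3,0,4,7
};
FEM_Add_ghost_elem(0,6,hex2face); /* 6 tuples per hexahedron */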
Extracting and Using Ghosts
• Ghosts are always given larger numbers than non-ghosts; that is, ghosts come at the end of the index range
• FEM_Get_node_ghost() and FEM_Get_elem_ghost(e)
  • Return the index of the first ghost node or element
• FEM_Update_ghost_field(fid,e,data)
  • Obtains another processor's data (formatted like fid) for each ghost element of type e (see the sketch below)
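A minimal usage sketch of these calls: the field ID fid (created earlier, e.g. with FEM_Create_field), the per-element array elemData, the total count nElemTotal, and the helper useNeighborValue() are assumptions for illustration; only the FEM_* calls come from the slide above.

int firstGhost = FEM_Get_elem_ghost(0);    /* index of first ghost of element type 0 */
FEM_Update_ghost_field(fid, 0, elemData);  /* fill ghost entries with the owners' values */
for (int g = firstGhost; g < nElemTotal; g++) {
  useNeighborValue(elemData[g]);           /* ghosts are read-only remote copies */
}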
Ghosts and Symmetries
• In addition to cross-processor ghosts, the framework can build ghosts to model problem symmetries:
  • Translational and rotational periodicities
  • Mirror symmetry
• FEM_Add_linear_periodicity(nFaces,nPer, facesA,facesB, nNodes,nodeLocs)
  • Identifies these two lists of faces under linear periodicity, and builds ghosts to match (see the sketch below)
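As a hedged sketch of how the call might be used for the horizontally periodic mesh on the next slide: facesA and facesB list the node numbers of the matching boundary faces (here 2-node edges), and nodeLocs supplies the coordinates used to pair them up. The counts, array names, and fill-in code are placeholders.

/* Identify the left and right boundary edges of an x-periodic mesh.
   nNodes and nodeLocs are assumed to hold the chunk's node count and
   node coordinates; the edge lists below are placeholders. */
int nFaces = 10, nPer = 2;        /* 10 matching 2-node edges (assumed) */
int facesA[10*2], facesB[10*2];   /* node numbers of the left / right edges */
/* ... fill facesA with left-boundary edges, facesB with right-boundary edges ... */
FEM_Add_linear_periodicity(nFaces, nPer, facesA, facesB, nNodes, nodeLocs);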
Symmetry Ghosts: 2D Example
[Figure: the same serial mesh of elements 1-4 with horizontal periodicity; besides the cross-chunk ghosts (ghost of 3 on the left, ghost of 2 on the right), the Left chunk gains a symmetry ghost of element 4 and the Right chunk gains a symmetry ghost of element 1.]
NetFEM Client: Pretty pictures of wave dispersion around a crack
NetFEM Server Side: Overview
• To allow the NetFEM client to connect, you add NetFEM registration calls to your server
  • Register nodes and element types
  • Register data items: scalars or spatial vectors associated with each node or element
  • You provide the display name and units for each data item
• Link your program with "-module netfem"
• Run with "++server", and connect!
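For concreteness, a build-and-run sketch; the program name, processor count, and port number are placeholders, and only the -module netfem and ++server options come from the slide above.

charmc -o myfem myfem.o -module netfem
./charmrun ./myfem +p4 ++server ++server-port 1234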
NetFEM Server Side: Setup
• n=NetFEM_Begin(FEM_My_partition(),timestep, dim,NetFEM_POINTAT)
  • Call this each time through your timeloop, or skip steps to reduce overhead (see the sketch below)
  • timestep identifies this data update
  • dim is the spatial dimension (must be 2 or 3)
  • Returns a NetFEM handle n used by everything else
• NetFEM_End(n)
  • Finishes update n
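A sketch of the intended usage pattern: the step counter, the update interval, and computeOneStep() are placeholders, and the registration calls from the following slides go where the comment indicates.

for (int timestep = 0; timestep < nSteps; timestep++) {
  computeOneStep();                       /* application work (placeholder) */
  if (timestep % 16 == 0) {               /* skip most steps to limit overhead */
    int n = NetFEM_Begin(FEM_My_partition(), timestep, 2, NetFEM_POINTAT);
    /* ... NetFEM_Nodes / NetFEM_Elements / NetFEM_Vector / NetFEM_Scalar ... */
    NetFEM_End(n);
  }
}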
NetFEM Server Side: Nodes
• NetFEM_Nodes(n,nnodes,coord,"Position (m)")
  • Registers node locations with NetFEM; future vectors and scalars will be associated with nodes
  • n is the handle returned by NetFEM_Begin
  • nnodes is the number of nodes
  • coord is a dim by nnodes array of doubles
  • The string describes the coordinate system and meaning of the nodes
  • Currently, there can be only one call to NetFEM_Nodes per update
NetFEM Server Side: Elements
• NetFEM_Elements(n,nelem,nodeper, conn,"Triangles")
  • Registers elements with NetFEM; future vectors and scalars will be associated with these elements
  • n is the handle returned by NetFEM_Begin
  • nelem is the number of elements
  • nodeper is the number of nodes per element
  • conn is a nodeper by nelem array of node indices
  • The string describes the kind of element
• Repeat to register several kinds of element
  • Perhaps: triangles, squares, pentagons, …
NetFEM Server Side: Vectors
• NetFEM_Vector(n,val,"Displacement (m)")
  • Registers a spatial vector with each node or element
    • Whichever kind was registered last
  • n is the handle returned by NetFEM_Begin
  • val is a dim by nitems array of doubles
    • There is also a more general NetFEM_Vector_field in the manual
  • The string describes the meaning and units of the vectors
• Repeat to register multiple sets of vectors
  • Perhaps: displacement, velocity, acceleration, rotation, …
NetFEM Server Side: Scalars
• NetFEM_Scalar(n,val,s,"Stress (pure)")
  • Registers s scalars with each node or element
    • Whichever kind was registered last
  • n is the handle returned by NetFEM_Begin
  • val is an s by nitems array of doubles
    • There is also a more general NetFEM_Scalar_field in the manual
  • s is the number of doubles for each node or element
  • The string describes the meaning and units of the scalars
• Repeat to register multiple sets of scalars
  • Perhaps: stress, plasticity, node type, damage, …
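Because vectors and scalars attach to whichever nodes or elements were registered most recently, registering several element kinds means interleaving the calls. A hedged sketch, where the counts, connectivity arrays, and data arrays are placeholders:

NetFEM_Elements(n, nTri, 3, triConn, "Triangles");
NetFEM_Scalar(n, triStress, 1, "Stress (Pa)");    /* attached to the triangles */
NetFEM_Elements(n, nQuad, 4, quadConn, "Quads");
NetFEM_Scalar(n, quadStress, 1, "Stress (Pa)");   /* attached to the quads */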
NetFEM Server Side: 2D Example

integer :: t, n, numnp, numel
real*8, dimension(2,numnp) :: coor, d, v, a
real*8, dimension(numel) :: stress
integer, dimension(3,numel) :: conn

n=NetFEM_Begin(FEM_My_partition(),t,2,NetFEM_POINTAT)
CALL NetFEM_Nodes(n,numnp,coor,'Position (m)')
CALL NetFEM_Vector(n,d,'Displacement (m)')
CALL NetFEM_Vector(n,v,'Velocity (m/s)')
CALL NetFEM_Vector(n,a,'Acceleration (m/s^2)')
CALL NetFEM_Elements(n,numel,3,conn,'Triangles')
CALL NetFEM_Scalar(n,stress,1,'Stress (pure)')
CALL NetFEM_End(n)
NetFEM: Conclusion
• Easy, general way to get output from an FEM computation
• The client configures itself based on the server
• The client can be run anywhere (even from home!)
• Server performance impact is minimal (~1 s!)
• Future work:
  • Support multiple chunks per processor
  • Non-network, file-based version
  • Movie mode
Multiple Modules
• Use of two or more Charm++ frameworks in the same program:
  • FEM: multiple unstructured mesh chunks
  • MBLOCK: multiple structured mesh blocks
  • AMPI: Adaptive MPI on Charm++
• All are based on the Threaded Charm++ framework (TCHARM)
• For example, we may want to use AMPI in our FEM program for exchanging information between FEM chunks
Details
• You can compose FEM programs with other modules by simply calling that module's attach routine from init()
• For example:

void init(void) {
  //Start AMPI, to allow drivers to use MPI calls:
  MPI_Attach("myAMPIFEM");
  // ... use FEM_Set() calls as usual ...
}
Example

#include "fem.h"
#include "mpi.h"

void driver(void) {
  MPI_Status status;
  // ... use FEM_Get calls as usual ...

  //Broadcast "data" from chunk 0:
  MPI_Bcast(&data,1,MPI_DOUBLE,0,MPI_COMM_WORLD);

  // ... timeloop: use FEM_Update_field calls as usual ...
  if (dataToSend)
    MPI_Send(&data,4,MPI_INT,dest,tag,MPI_COMM_WORLD);
  else
    MPI_Recv(&data,4,MPI_INT,src,tag,MPI_COMM_WORLD,&status);
}
Multiple Modules: Conclusion
• It is easy to use other modules from the FEM framework
  • Just call MPI_Attach from init, and link with "-module ampi"
• We could also have specified how to combine frameworks by writing a special startup routine named TCHARM_User_setup()
  • Not FEM-centric: overrides the normal call to init()
  • Allows you to name the main computation routine something other than driver()
  • See the TCHARM manual for details