
Four Variations of Matrix Multiplication



Presentation Transcript


  1. Four Variations of Matrix Multiplication. About 30 minutes – by Mike.

  2. Matrix Multiplication Challenge. See http://www.cs.utah.edu/formal_verification: look under Concurrency Education, then under the MPI teaching resources. Many of these resources are due to Simone Atzeni, and many to Geof Sawaya. They include all the examples from Pacheco's MPI book, and they will soon be available as projects within our ISP Eclipse GUI. The Matrix Multiplication Challenge is due to Steve Siegel (see http://www.cs.utah.edu/ec2); it is based on an example from the book Using MPI: Portable Parallel Programming with the Message-Passing Interface by William Gropp, Ewing Lusk, and Anthony Skjellum.

  3. Matrix Multiplication Illustration. For this tutorial we include four variants that try different versions of matrix multiplication, including one buggy version. The variants also reveal one definite future-work item: detecting symmetries in MPI programs so that redundant searches are avoided. This becomes very apparent when you run our fourth version.

  4. Example of MPI Code (Mat-Mat Mult). [Slide diagram: the master's MPI_Send calls paired with the workers' MPI_Recv calls for computing the product X.]

  5. Salient Code Features
     if (myid == master) {
       ...
       MPI_Bcast(b, brows*bcols, MPI_FLOAT, master, …);
       ...
     } else {  // All slaves do this
       ...
       MPI_Bcast(b, brows*bcols, MPI_FLOAT, master, …);
       ...
     }
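To make the broadcast pattern concrete, here is a minimal, self-contained sketch; the sizes and initialization are assumptions for illustration, not the tutorial's own code. Every rank, master and slaves alike, issues the identical MPI_Bcast call, and the root argument decides whose buffer is the source.

#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[]) {
    int myid, numprocs;
    const int master = 0;
    const int brows = 4, bcols = 4;              /* assumed sizes */
    float *b = malloc(brows * bcols * sizeof(float));

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid == master) {
        for (int i = 0; i < brows * bcols; i++)  /* master fills b */
            b[i] = (float)i;
    }
    /* Same call on every rank: the root's buffer is broadcast to all. */
    MPI_Bcast(b, brows * bcols, MPI_FLOAT, master, MPI_COMM_WORLD);

    MPI_Finalize();
    free(b);
    return 0;
}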

  6. Salient Code Features
     if (myid == master) {
       ...
       for (i = 0; i < numprocs-1; i++) {
         for (j = 0; j < acols; j++) { buffer[j] = a[i*acols+j]; }
         MPI_Send(buffer, acols, MPI_FLOAT, i+1, …);  // blocks till buffer is copied into a system buffer
         numsent++;
       }
     } else {  // slaves
       ...
       while (1) {
         ...
         MPI_Recv(buffer, acols, MPI_FLOAT, master, …);
         ...
       }
     }
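A runnable sketch of just this hand-out step, with assumed sizes and tag values (not the tutorial's own): the master copies one row of a at a time into buffer and sends it with a blocking MPI_Send; each slave blocks in MPI_Recv until its row arrives. Because MPI_Send may return as soon as the data has been copied into a system buffer, the master can immediately reuse buffer for the next row. Run with at least 2 and at most AROWS+1 ranks.

#include <mpi.h>
#include <stdio.h>

#define AROWS 8
#define ACOLS 4

int main(int argc, char *argv[]) {
    int myid, numprocs;
    const int master = 0, row_tag = 1;           /* assumed tag */
    float a[AROWS][ACOLS], buffer[ACOLS];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid == master) {
        for (int i = 0; i < AROWS; i++)
            for (int j = 0; j < ACOLS; j++)
                a[i][j] = (float)(i + j);
        /* One row per slave; blocking send, then safely reuse buffer. */
        for (int i = 0; i < numprocs - 1; i++) {
            for (int j = 0; j < ACOLS; j++)
                buffer[j] = a[i][j];
            MPI_Send(buffer, ACOLS, MPI_FLOAT, i + 1, row_tag,
                     MPI_COMM_WORLD);
        }
    } else {
        /* Each slave blocks here until its row arrives. */
        MPI_Recv(buffer, ACOLS, MPI_FLOAT, master, row_tag,
                 MPI_COMM_WORLD, &status);
        printf("rank %d got a row starting with %g\n", myid, buffer[0]);
    }

    MPI_Finalize();
    return 0;
}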

  7. Handling Rows >> Processors. [Slide diagram: after the initial round of MPI_Send/MPI_Recv, send the next row to the first slave, which by now must be free.]

  8. Handling Rows >> Processors. [Slide diagram: OR, send the next row to the first slave that returns an answer.]

  9. Optimization
     if (myid == master) {
       ...
       for (i = 0; i < crows; i++) {
         MPI_Recv(ans, ccols, MPI_FLOAT, FROM ANYBODY, ...);
         ...
         if (numsent < arows) {
           for (j = 0; j < acols; j++) { buffer[j] = a[numsent*acols+j]; }
           MPI_Send(buffer, acols, MPI_FLOAT, BACK TO THAT BODY, ...);
           numsent++;
           ...
         }
       }
     }
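Below is a complete, minimal sketch of this load-balancing variant; the matrix sizes, tags, and termination convention are assumptions for illustration, not the tutorial's actual code. The master broadcasts b, primes each slave with one row of a, then sits in a wildcard MPI_Recv: whichever slave answers first is handed the next unsent row, and slaves are told to stop with a distinguished tag once all rows are out. Run with at least two ranks, e.g. mpiexec -n 4.

#include <mpi.h>
#include <stdio.h>

#define AROWS 8
#define ACOLS 4
#define BCOLS 3
#define DONE_TAG 0                      /* tag 0 tells a slave to stop  */
                                        /* tag t > 0 carries row t-1    */
int main(int argc, char *argv[]) {
    int myid, numprocs;
    const int master = 0;
    float a[AROWS][ACOLS], b[ACOLS][BCOLS], c[AROWS][BCOLS];
    float buffer[ACOLS], ans[BCOLS];
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid == master) {
        for (int i = 0; i < AROWS; i++)
            for (int j = 0; j < ACOLS; j++) a[i][j] = (float)(i + j);
        for (int i = 0; i < ACOLS; i++)
            for (int j = 0; j < BCOLS; j++) b[i][j] = (float)(i * j + 1);
    }
    MPI_Bcast(b, ACOLS * BCOLS, MPI_FLOAT, master, MPI_COMM_WORLD);

    if (myid == master) {
        int numsent = 0;
        /* Prime the pump: one row per slave; idle slaves get DONE now. */
        for (int i = 1; i < numprocs; i++) {
            if (numsent < AROWS) {
                for (int j = 0; j < ACOLS; j++) buffer[j] = a[numsent][j];
                MPI_Send(buffer, ACOLS, MPI_FLOAT, i, numsent + 1,
                         MPI_COMM_WORLD);
                numsent++;
            } else {
                MPI_Send(buffer, 0, MPI_FLOAT, i, DONE_TAG, MPI_COMM_WORLD);
            }
        }
        /* Collect every row of c; reply to whichever slave answered. */
        for (int i = 0; i < AROWS; i++) {
            MPI_Recv(ans, BCOLS, MPI_FLOAT, MPI_ANY_SOURCE, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            int row = status.MPI_TAG - 1;          /* which row arrived   */
            int sender = status.MPI_SOURCE;        /* who computed it     */
            for (int j = 0; j < BCOLS; j++) c[row][j] = ans[j];
            if (numsent < AROWS) {                 /* more work: same slave */
                for (int j = 0; j < ACOLS; j++) buffer[j] = a[numsent][j];
                MPI_Send(buffer, ACOLS, MPI_FLOAT, sender, numsent + 1,
                         MPI_COMM_WORLD);
                numsent++;
            } else {                               /* no more rows: stop it */
                MPI_Send(buffer, 0, MPI_FLOAT, sender, DONE_TAG,
                         MPI_COMM_WORLD);
            }
        }
        printf("c[0][0] = %g, c[%d][%d] = %g\n",
               c[0][0], AROWS - 1, BCOLS - 1, c[AROWS - 1][BCOLS - 1]);
    } else {
        /* Slave: keep taking rows until the DONE tag arrives. */
        while (1) {
            MPI_Recv(buffer, ACOLS, MPI_FLOAT, master, MPI_ANY_TAG,
                     MPI_COMM_WORLD, &status);
            if (status.MPI_TAG == DONE_TAG) break;
            for (int j = 0; j < BCOLS; j++) {
                ans[j] = 0.0f;
                for (int k = 0; k < ACOLS; k++) ans[j] += buffer[k] * b[k][j];
            }
            MPI_Send(ans, BCOLS, MPI_FLOAT, master, status.MPI_TAG,
                     MPI_COMM_WORLD);
        }
    }

    MPI_Finalize();
    return 0;
}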

  10. Optimization. Shows that wildcard receives can arise quite naturally. (Same master loop as on the previous slide: MPI_Recv from anybody, then MPI_Send the next row back to that sender.)

  11. Further Optimization
     if (myid == master) {
       ...
       for (i = 0; i < crows; i++) {
         MPI_Recv(ans, ccols, MPI_FLOAT, FROM ANYBODY, ...);
         ...
         if (numsent < arows) {
           for (j = 0; j < acols; j++) { buffer[j] = a[numsent*acols+j]; }
           // … here, WAIT for the previous Isend to finish (software pipelining) …
           MPI_Isend(buffer, acols, MPI_FLOAT, BACK TO THAT BODY, ...);
           numsent++;
           ...
         }
       }
     }
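Here is a small, self-contained sketch of the Isend idea on this slide (a two-rank toy with assumed sizes, not the tutorial's program). The point of the software pipelining is that the buffer handed to MPI_Isend may not be reused until that send completes, so the sender alternates between two buffers and calls MPI_Wait on the request that last used a buffer before refilling it. Run with exactly 2 ranks.

#include <mpi.h>
#include <stdio.h>

#define NROWS 6
#define ACOLS 4

int main(int argc, char *argv[]) {
    int myid;
    float buf[2][ACOLS];                 /* double buffer on the sender */
    MPI_Request req[2] = { MPI_REQUEST_NULL, MPI_REQUEST_NULL };

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    if (myid == 0) {
        for (int row = 0; row < NROWS; row++) {
            int slot = row % 2;
            /* Wait for the Isend that last used this buffer, if any,
             * before overwriting it (this is the pipelining step). */
            MPI_Wait(&req[slot], MPI_STATUS_IGNORE);
            for (int j = 0; j < ACOLS; j++)
                buf[slot][j] = (float)(row * ACOLS + j);
            MPI_Isend(buf[slot], ACOLS, MPI_FLOAT, 1, row,
                      MPI_COMM_WORLD, &req[slot]);
        }
        MPI_Waitall(2, req, MPI_STATUSES_IGNORE);   /* drain the pipeline */
    } else if (myid == 1) {
        float row[ACOLS];
        for (int r = 0; r < NROWS; r++) {
            MPI_Recv(row, ACOLS, MPI_FLOAT, 0, r, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 got row %d starting with %g\n", r, row[0]);
        }
    }

    MPI_Finalize();
    return 0;
}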

  12. Summary of Some MPI Commands. MPI_Irecv(msg_buf, ..., source, ..., req_struct) is a non-blocking receive call; MPI_Wait(req_struct) awaits its completion. The source can be the wildcard ("*", i.e. MPI_ANY_SOURCE), meaning "receive from any eligible (matching) sender."
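A tiny usage example of these calls (an assumed toy program, not from the slides): rank 0 posts a nonblocking wildcard receive with MPI_Irecv, can overlap other work, and then calls MPI_Wait; the returned status reports which rank actually matched the receive.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[]) {
    int myid, numprocs, value;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    if (myid == 0 && numprocs > 1) {
        MPI_Request req;
        MPI_Status status;
        /* Non-blocking receive from ANY matching sender. */
        MPI_Irecv(&value, 1, MPI_INT, MPI_ANY_SOURCE, 0,
                  MPI_COMM_WORLD, &req);
        /* ... other work could overlap with the receive here ... */
        MPI_Wait(&req, &status);          /* completes the receive */
        printf("got %d from rank %d\n", value, status.MPI_SOURCE);
    } else if (myid == 1) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    }

    MPI_Finalize();
    return 0;
}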

  13. End.
