
Matrix Multiply Methods


Presentation Transcript


  1. Matrix Multiply Methods

  2. Some general facts about matmul
  • A high computation-to-communication ratio hides a multitude of sins
  • Many "symmetries": the basic O(n^3) algorithm can be reordered in many ways
  • As always, one must imagine the data tied to a processor, or distributed in other ways

  3. Think 64x64 processor grid
  >> A=ones(64,1)*(0:63); B=A';
  >> subplot(121), imagesc(A), axis('square')
  >> subplot(122), imagesc(A'), axis('square')
  Colors show the columns of A and the rows of B
  Colors only match on the diagonals

  4. Think 64x64 processor grid
  >> A=ones(64,1)*(1:64); B=A';
  >> subplot(121), imagesc(A), axis('square')
  >> subplot(122), imagesc(A'), axis('square')
  Colors show the columns of A and the rows of B
  Colors only match on the diagonals
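The two MATLAB demos above can be checked numerically. Here is a NumPy sketch (my translation, not part of the slides) of the 0-based version, confirming that the "colors" agree exactly on the diagonal:

```python
import numpy as np

# NumPy translation of the MATLAB demo: each entry of A holds its
# column index, and B = A.T holds its row index.
n = 64
A = np.outer(np.ones(n, dtype=int), np.arange(n))  # A[i, j] = j
B = A.T                                            # B[i, j] = i

# The "colors" (values) match only where the column index of A equals
# the row index of B, i.e. on the diagonal i == j.
match = (A == B)
assert np.array_equal(match, np.eye(n, dtype=bool))
```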

  5. In symbols
  • Consider entries Aij and Bij
  • The column index of the first matches the row index of the second when j = i
  • Using 0-based indexing, let plus/minus denote arithmetic modulo n
  • Cij = Ai,i-j and Dij = Bi-j,j
  • A shifts by columns, B shifts by rows
  • Now the colors match as required for matmul
  • The diagonal of C (of D) now holds the 0th column of A (the 0th row of B)
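The skewing step can be verified directly. This is a NumPy sketch (my reconstruction, assuming 0-based indices mod n) of Cij = Ai,i-j and Dij = Bi-j,j:

```python
import numpy as np

# Skewing step: C[i, j] = A[i, (i - j) % n], D[i, j] = B[(i - j) % n, j].
n = 8
A = np.outer(np.ones(n, dtype=int), np.arange(n))  # A[i, j] = j (column index)
B = A.T                                            # B[i, j] = i (row index)

i, j = np.indices((n, n))
C = A[i, (i - j) % n]   # each row of A shifted by its row index
D = B[(i - j) % n, j]   # each column of B shifted by its column index

# After skewing, the column index stored in C equals the row index stored
# in D everywhere, so every processor holds a matching (A, B) pair.
assert np.array_equal(C, D)
# The diagonal of C now holds the 0th column of A (all zeros), as stated.
assert np.all(np.diag(C) == 0)
```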

  6. In pictures

  7. At the kth timestep, k = 0, 1, …, n-1, let
  • Cij = Ai,i-j-k and Dij = Bi-j-k,j
  • Colors match again
  • What else would work?

  8. Cannon with bitxor (Edelman, Johnson, …)
  • At the kth timestep, k = 0, 1, …, n-1, let
  • Cij = Ai,i-j-k and Dij = Bi-j-k,j
  • Implement by moving C left by 1 and D up by 1 at every cycle
  • Colors match again
  • What else would work? This is the kind of thinking that goes into good parallel algorithm design
  • Cij = Ai,i*j*k and Dij = Bi*j*k,j, where "*" denotes bitxor
  • What do we need? A column/row index match
  • Cycle through all indices
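The shift-based variant can be exercised end to end with one matrix element per "processor". This NumPy sketch is my reconstruction, not the authors' code; it skews by i+j and shifts left/up, which cycles through the same index pairs as the slides' i-j-k (either direction works, as slide 10 notes):

```python
import numpy as np

# Element-per-processor sketch of Cannon's algorithm: skew A and B, then
# at each of n timesteps multiply-accumulate and shift A left / B up.
def cannon(A, B):
    n = A.shape[0]
    i, j = np.indices((n, n))
    Ak = A[i, (i + j) % n]            # skew: row i of A rotated left by i
    Bk = B[(i + j) % n, j]            # skew: column j of B rotated up by j
    C = np.zeros_like(A)
    for _ in range(n):
        C += Ak * Bk                  # local multiply-accumulate
        Ak = np.roll(Ak, -1, axis=1)  # shift A left by 1
        Bk = np.roll(Bk, -1, axis=0)  # shift B up by 1
    return C

rng = np.random.default_rng(0)
A = rng.integers(0, 5, (6, 6))
B = rng.integers(0, 5, (6, 6))
assert np.array_equal(cannon(A, B), A @ B)
```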

  9. Bitxor
  • v=[]; for i=1:8, v(i,:)=bitxor( (0:7), i); end
  v =
   1  0  3  2  5  4  7  6
   2  3  0  1  6  7  4  5
   3  2  1  0  7  6  5  4
   4  5  6  7  0  1  2  3
   5  4  7  6  1  0  3  2
   6  7  4  5  2  3  0  1
   7  6  5  4  3  2  1  0
   8  9 10 11 12 13 14 15
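The same table in Python, where `^` is bitxor (my translation of the MATLAB loop above); note that each of rows 1–7 is a permutation of 0–7, which is exactly the "cycle through all indices" property the previous slide needs:

```python
# Row i is bitxor(0:7, i) for i = 1..8, matching the MATLAB output above.
rows = [[k ^ i for k in range(8)] for i in range(1, 9)]
assert rows[0] == [1, 0, 3, 2, 5, 4, 7, 6]
assert rows[7] == [8, 9, 10, 11, 12, 13, 14, 15]

# Rows 1..7 are each a permutation of 0..7: xor with a fixed i is a
# bijection, so every column/row index pairing is visited exactly once.
for r in rows[:7]:
    assert sorted(r) == list(range(8))
```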

  10. In Cannon one can go left/up or right/down (-k or +k are equally good)
  • Can even go left/up and right/down at the same time
  • Think of this as evens left/up and odds right/down
  • Can thereby use the north, south, east, west links on a processor grid
  • (A left, A right, B up, B down)

  11. On a hypercube
  • Hypercube nodes are connected by bitxor with powers of 2
  • Gray codes connect them into cycles
  • The point is not so much hypercubes and Gray codes, but the underlying structure of matmul
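A short sketch of the Gray-code fact used here (standard binary-reflected Gray code, my example): consecutive codes differ by bitxor with a single power of 2, i.e. by one hypercube link.

```python
# Binary-reflected Gray code g(k) = k ^ (k >> 1) visits all 2^d hypercube
# nodes so that consecutive codes (wrapping around) differ in one bit.
d = 4
codes = [k ^ (k >> 1) for k in range(2 ** d)]

assert sorted(codes) == list(range(2 ** d))      # visits every node once
for a, b in zip(codes, codes[1:] + codes[:1]):   # include the wrap-around
    diff = a ^ b
    assert diff != 0 and diff & (diff - 1) == 0  # exactly one bit flips
```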

  12. Pros and cons of Cannon, according to CS267 Berkeley (guest lecture 1)
  • Local computation is one call to an (optimized) matrix-multiply
  • Hard to generalize for: (AE: I don't think any of these are right. Don't believe everything you read)
    • p not a perfect square
    • A and B not square
    • dimensions of A, B not perfectly divisible by s = sqrt(p)
    • A and B not "aligned" in the way they are stored on processors
    • block-cyclic layouts
  • Memory hog (extra copies of local matrices)

  13. In any event
  • This algorithm is communicate, compute, communicate, compute, …
  • If the hardware allows, one can of course overlap communication with computation
  • In the end these are all owner-computes algorithms, where the blocks of A and B find their way to the owner of the corresponding block of C (lots of people just broadcast them around, at the theoretical cost only of more memory and more bandwidth, though probably not in practice)
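The broadcast alternative mentioned above can be sketched in serial NumPy. This is my own SUMMA-style illustration (the name `summa` and the blocking are assumptions, not from the slides): at each step one block column of A and block row of B are "broadcast", and every owner of a block of C accumulates locally.

```python
import numpy as np

# Broadcast-style owner-computes matmul: at step k, A[:, k:k+nb] is
# broadcast along processor rows and B[k:k+nb, :] along processor
# columns; each owner of C accumulates its own rank-nb update.
def summa(A, B, nb):
    n = A.shape[0]
    C = np.zeros_like(A)
    for k in range(0, n, nb):
        C += A[:, k:k+nb] @ B[k:k+nb, :]  # local accumulate of broadcast panels
    return C

rng = np.random.default_rng(1)
A = rng.integers(0, 4, (8, 8))
B = rng.integers(0, 4, (8, 8))
assert np.array_equal(summa(A, B, 2), A @ B)
```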

  14. What does an n^3 matmul require?
  • One way or another, the products happen somewhere and the adds happen somewhere
  • That's it.
  • Today the bottleneck may be main memory, and nothing else matters.

  15. What if
  • We are allowed to break up a matrix arbitrarily, as in matmul, but the matmul itself costs nothing, as if there were just a "foo" that needed to bring together A(i,j) and B(j,k) and combine them with a commutative and associative operator?
  • What is the best way?
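The "foo" above can be made concrete. This sketch (the names `combine` and `reduce_op` are mine) replaces multiply by an arbitrary combine and the sum by any commutative, associative reduce, which is the only structure the question assumes:

```python
from functools import reduce

# Generalized matmul: C[i][k] = reduce_op over j of combine(A[i][j], B[j][k]).
def generalized_matmul(A, B, combine, reduce_op):
    n, m, p = len(A), len(B), len(B[0])
    return [[reduce(reduce_op, (combine(A[i][j], B[j][k]) for j in range(m)))
             for k in range(p)] for i in range(n)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

# Ordinary matmul: combine = multiply, reduce = add.
assert generalized_matmul(A, B, lambda a, b: a * b, lambda a, b: a + b) == \
    [[19, 22], [43, 50]]

# Min-plus ("tropical") matmul, used for shortest paths: combine = add,
# reduce = min; same data movement, different operator.
assert generalized_matmul(A, B, lambda a, b: a + b, min) == [[6, 7], [8, 9]]
```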
