Scheduling Strategies for Numerical Methods on Large Scale Distributed Platforms

Presentation Transcript


  1. Grand Large (INRIA UR Futur) Scheduling Strategies for Numerical Methods on Large Scale Distributed Platforms. Serge G. Petiton, Guy Bergère, Lamine Aouad, Haiwa He and Benoit Hudzia. In collaboration with Tahar Kechadi (UCD, Ireland) and Isaac Scherson (UCI, USA)

  2. Introduction • Block Gauss-Jordan as an example • Performance Evaluation • Communication Optimisations • Conclusion GRID Explorer, UPMC

  3. Introduction • Block Gauss-Jordan as an example • Performance Evaluation • Communication Optimisations • Conclusion GRID Explorer, UPMC

  4. Large scale P2P computation • Large number of computers (thousands of PCs) • Heterogeneous • Properties unknown to the end-user at programming time • Toward a P2P large scale programming methodology • Compiler, system, floating point arithmetic, … on each peer? • Asynchronous graph of components, with large granularity • What performance can we expect for scientific computing? (Evaluation, Simulation, Experimentation, Emulation, Extrapolation) GRID Explorer, UPMC

  5. Introduction • Block Gauss-Jordan as an example • Performance Evaluation • Communication Optimisations • Conclusion GRID Explorer, UPMC

  6. Block Gauss-Jordan: the N×N matrix is split into a p×p grid of n×n blocks, matrix size N = p·n. [Figure: the identity written in block form, I on the diagonal and 0 elsewhere, with n marking the block size and p the grid size.] Goal: compute B such that A·B = B·A = I. Inverting a matrix costs 2N³ operations. Challenge: N = 10⁶. GRID Explorer, UPMC
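
A minimal Python sketch (ours, not the authors') of the N = p·n block partitioning, using NumPy:

    import numpy as np

    def partition(A, p):
        """Split an N x N matrix into a p x p grid of n x n blocks (N = p * n)."""
        N = A.shape[0]
        n = N // p
        assert p * n == N, "N must be a multiple of p"
        return [[A[i*n:(i+1)*n, j*n:(j+1)*n] for j in range(p)] for i in range(p)]

    N, p = 12, 4                      # toy sizes; the talk targets N = 10**6
    blocks = partition(np.random.rand(N, N), p)
    print(blocks[0][0].shape)         # (3, 3)
    print("inversion cost ~", 2 * N**3, "flops")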

  7. [Figure: the p×p block grid at one step; the pivot block is labeled 1, the pivot row and column 2, and all remaining blocks 3.] Task 1: element Gauss-Jordan on the pivot block (LAPACK), cx = 2n³ + O(n²). Task 2: A = ±A·B (BLAS3), cx = 2n³ − n². Task 3: A = A − B·C (BLAS3), cx = 2n³. Each block holds n² 64-bit floating point numbers.
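
A sketch of the three per-step kernels with the stated complexities; function names are ours, and np.linalg.inv stands in for the LAPACK element Gauss-Jordan routine:

    import numpy as np

    def task1_pivot(A):
        """Task 1: invert the pivot block (element Gauss-Jordan, LAPACK);
        cx = 2n^3 + O(n^2)."""
        return np.linalg.inv(A)

    def task2_mul(A, B, sign=+1):
        """Task 2: A = +/- A.B (BLAS3 GEMM); cx = 2n^3 - n^2."""
        return sign * (A @ B)

    def task3_update(A, B, C):
        """Task 3: A = A - B.C (BLAS3 GEMM); cx = 2n^3."""
        return A - B @ C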

  8. [Same task grid as slide 7.] Each computing task holds 1 to 3 blocks at most, so three blocks must fit on one peer: n² < (memory size of one peer, in words) / 3. Up to (p−1)² peers can compute in parallel. GRID Explorer, UPMC
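
A quick check of that memory bound, assuming 64-bit words and three resident blocks per peer:

    import math

    def max_block_order(mem_bytes, blocks=3, word_bytes=8):
        """Largest n such that `blocks` n x n blocks fit in a peer's memory."""
        return math.isqrt(mem_bytes // (blocks * word_bytes))

    print(max_block_order(256 * 2**20))  # 256 MB peer -> n <= 3344 (slides use n = 3000)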

  9. [Same task grid as slide 7.] Scheduling rules: • compute « new » blocks on the peer that minimizes communications • the « update » of a block at step k runs on the peer that updated that block at step k−1 • data is sent to the dedicated peer ASAP. GRID Explorer, UPMC
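
A toy affinity scheduler illustrating the second rule (all names are ours): keep the update of block (i, j) on the peer that updated it at step k−1, falling back to another peer when the owner has vanished:

    owner = {}  # (i, j) -> peer id from the previous step

    def assign(i, j, idle_peers):
        """Prefer the step k-1 owner of block (i, j); otherwise pick an idle
        peer (in the talk: the one minimizing communications)."""
        peer = owner.get((i, j))
        if peer is None or peer not in idle_peers:
            peer = min(idle_peers)   # placeholder for the real comm-cost choice
        owner[(i, j)] = peer
        return peer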

  10. [Figure: two overlapping task grids, for steps k and k+1.] Nevertheless, peers are not stable, so computations belonging to several steps of the method may run in parallel; we therefore need an inter- and intra-step dependency graph. GRID Explorer, UPMC
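
One plausible encoding of that dependency graph (a sketch under our naming of the three task types from slide 7; every task also reads its own block as left by step k−1):

    def deps(kind, k, i, j):
        """Predecessors of a task on block (i, j) at step k (pivot index k)."""
        if kind == "t1":                       # pivot inversion needs its block
            return [("t3", k - 1, i, j)] if k > 0 else []
        if kind == "t2":                       # row/column update needs the pivot
            return [("t1", k, k, k)]
        return [("t2", k, i, k), ("t2", k, k, j)]  # t3 reads its row/col blocks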

  11. One step of the Block Gauss-Jordan method (p = 4): 1 element Gauss-Jordan task, 2(p−1) A = A·B tasks (BLAS3), and (p−1)² A = A·B − C tasks (BLAS3), covering all p² blocks. Each communicated block is n² double-precision floats = 8n² bytes. GRID Explorer, UPMC
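
A sanity check of the per-step task counts and of the 8n² bytes per block:

    p, n = 4, 3000
    tasks = {"gauss-jordan": 1,             # pivot inversion
             "A = A.B":      2 * (p - 1),   # pivot row and column
             "A = A.B - C":  (p - 1) ** 2}  # remaining blocks
    assert sum(tasks.values()) == p * p     # every block is touched once
    print(tasks, 8 * n * n / 2**20, "MiB per block")  # ~68.7 MiB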

  12. Problems • Scheduling strategies for large P2P scientific computing, • Multicast communications, • Performance evaluations and extrapolations GRID Explorer, UPMC

  13. Problems • Scheduling strategies for large P2P scientific computing, • Multicast communications, • Performance evaluations and extrapolations To be analyzed and emulated on GRID Explorer GRID Explorer, UPMC

  14. Introduction • Block Gauss-Jordan as an example • Performance Evaluation • Communication Optimisations • Conclusion GRID Explorer, UPMC

  15. [Figure: network topology and flow of multicast data among peers of the same multicast group: source, receivers, peers, routers, multicast flow, and networking devices (switch, router, hub, etc.).] Nodes of 90 peers located at the same geographical spot (LAN); 900 nodes → 81,000 peers.

  16. Estimated performance with 256 MB of memory per peer (plot: Teraflops vs matrix size, n = 3000). N = 270,000 (p = 90, 8,100 peers): at 1 Kword/s, ≈0.02 Tflops over ≈19 days, efficiency < 0.5 %; at 8 Kwords/s, ≈0.2 Tflops over ≈2.4 days, efficiency ≈5 %. N = 900,000 (p = 300, 90,000 peers): at 1 Kword/s, ≈0.3 Tflops over ≈63 days, efficiency ≈0.5 %; at 8 Kwords/s, ≈2.1 Tflops over ≈8 days, efficiency ≈5 %. GRID Explorer, UPMC
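
These figures follow from the 2N³ flop count; a rough reconstruction of the model (ours, not the authors' script):

    def runtime_days(N, tflops):
        """Days needed to run 2*N^3 flops at a sustained Tflops rate."""
        return 2 * N**3 / (tflops * 1e12) / 86400

    for N, tf in [(270_000, 0.02), (270_000, 0.2), (900_000, 0.3), (900_000, 2.1)]:
        print(f"N={N} at {tf} Tflops: {runtime_days(N, tf):.1f} days")
    # ~22.8 / 2.3 / 56.2 / 8.0 days, close to the slide's 19 / 2.4 / 63 / 8 day labels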

  17. With 1 Mword/s (64 Mbit/s) links the efficiency rises to ≈30 % in both cases (plot: Teraflops vs matrix size, n = 3000): N = 270,000 (p = 90, 8,100 peers) reaches ≈1.5 Tflops in ≈8 h; N = 900,000 (p = 300, 90,000 peers) reaches ≈15 Tflops in ≈27 h. GRID Explorer, UPMC

  18. Nevertheless, we assumed that peers are stable! GRID Explorer would allow us to obtain more realistic evaluations, with respect to multicast performance and volatility. GRID Explorer, UPMC

  19. Introduction • Block Gauss-Jordan as an example • Performance Evaluation • Communication Optimisations • Conclusion GRID Explorer, UPMC

  20. Compilation of communications. But peers are volatile! Neighborhood of a peer: an open set of peers within a given distance (for an underlying norm), with a threshold on the expected number of active peers in the neighborhood. Compact trace: a path between two peers that can be covered by a finite number of open neighborhoods. 1/ Find a satisfactory path between two peers (+ multicast?) using GRID Explorer; 2/ Find the corresponding compact trace to store. GRID Explorer, UPMC
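
A toy illustration of the neighborhood notion (our formulation, modeling peers as points in a metric space):

    import math

    def neighborhood(p, coords, r):
        """Open ball: active peers strictly within distance r of peer p."""
        return {q for q in coords if q != p and math.dist(coords[p], coords[q]) < r}

    coords = {"a": (0, 0), "b": (1, 0), "c": (3, 0)}
    print(neighborhood("a", coords, 2))  # {'b'}; 'c' lies outside the ball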

  21. Introduction • Block Gauss-Jordan as an example • Performance Evaluation • Communication Optimisations • Conclusion GRID Explorer, UPMC

  22. Conclusion: prepare the use of GRID Explorer. GRID Explorer, UPMC
