
How to make PC Cluster Systems?



Presentation Transcript


  1. How to make PC Cluster Systems? Tomo Hiroyasu, Doshisha University, Kyoto, Japan (tomo@is.doshisha.ac.jp)

  2. Cluster • clus·ter n. • A group of the same or similar elements gathered or occurring closely together; a bunch: “She held out her hand, a small tight cluster of fingers” (Anne Tyler). • Linguistics. Two or more successive consonants in a word, as cl and st in the word cluster. • A cluster is a type of parallel or distributed processing system which consists of a collection of interconnected stand-alone/complete computers cooperatively working together as a single, integrated computing resource.

  3. Why Parallel Processing?

  4. Evolutionary Computation. Features: it simulates the mechanisms of heredity and evolution in living creatures; it can be applied to several types of problems; it requires a huge computational cost. Because a population consists of several individuals, the work can be divided into subtasks, which makes it well suited to High Performance Computing.

  5. Parallel Computers. Top500 Ranking (http://www.top500.org)
     Rank  Name               # Proc  Rmax (Gflops)
     1     ASCI White         8192    4938
     2     ASCI Red           9632    2379
     3     ASCI Blue Pacific  5808    2144
     4     ASCI Blue          6144    1608
     5     SP Power III       1336    1417

  6. Commodity Hardware. Networking: Internet, LAN, WAN, Gigabit, wireless, etc. CPU: Pentium, Alpha, Power, etc. PCs + Networking = PC Clusters.

  7. Why PC Cluster? High performance, low cost, easy to set up, easy to use. Hardware: commodity, off-the-shelf parts you already own. Software: open source, freeware. Peopleware: university students and staff, lab nerds.

  8. PC Clusters in the Top500 Ranking (http://www.top500.org)
     Rank  Name                   # Proc  Rmax (Gflops)
     60    Los Lobos              512     237
     84    CPlant Cluster         580     232.6
     126   CLIC PIII 800 MHz      528     143.3
     215   Kepler PIII 650 MHz    196     96.2
     396   SCore II/PIII 800 MHz  132     64.7

  9. Contents of this tutorial Concept of PC Clusters Small Cluster Advanced Cluster Hardware Software Books, Web sites, … Conclusions

  10. What are cluster computing systems?

  11. Beowulf Cluster http://beowulf.org/ A Beowulf is a collection of personal computers (PCs) interconnected by widely available networking running any one of several open-source Unix-like operating systems. Some Linux clusters are built for reliability instead of speed. These are not Beowulfs. The Beowulf Project was started by Donald Becker when he moved to CESDIS in early 1994. CESDIS was located at NASA's Goddard Space Flight Center, and was operated for NASA by USRA.

  12. Avalon http://cnls.lanl.gov/Frames/avalon-a.html Los Alamos National Laboratory. Alpha (140) + Myrinet. A Beowulf cluster; the first Beowulf to enter the Top500 ranking.

  13. The Berkeley NOW project http://now.cs.berkeley.edu/ The Berkeley NOW project is building system support for using a network of workstations (NOW) to act as a distributed supercomputer on a building-wide scale. April 30, 1997: NOW makes LINPACK Top 500! June 15, 1998: NOW Retreat Finale

  14. Cplant Cluster http://www.cs.sandia.gov/cplant/ Sandia National Laboratory Alpha(580) + Myrinet

  15. RWCP Cluster http://pdswww.rwcp.or.jp/ A typical Japanese cluster. SCore, OpenMP, Myrinet.

  16. Doshisha Cluster http://www.is.doshisha.ac.jp/cluster/index.html Pentium III 0.8 GHz (256) + Fast Ethernet; Pentium III 1.0 GHz (2 x 64) + Myrinet 2000.

  17. Let’s start building a simple cluster system!!

  18. Simple Cluster: 8 nodes + gateway (file server), Fast Ethernet, switching hub; about $10,000.

  19. What do we need? Normal PCs. Hardware: CPU, memory, motherboard, hard disc, case, network card, cable, hub.

  20. Classification of Parallel Computers

  21. Classification of Parallel Computers

  22. What do we need? Software: OS, tools (editor, compiler), parallel library.

  23. Message passing

  24. Message Passing Libraries. PVM (Parallel Virtual Machine) http://www.epm.ornl.gov/pvm/pvm_home.html PVM was developed at Oak Ridge National Laboratory and the University of Tennessee. MPI (Message Passing Interface) http://www-unix.mcs.anl.gov/mpi/index.html MPI is an API specification for message passing. 1992: MPI Forum; 1994: MPI-1; 1997: MPI-2.

  25. Implementations of MPI. Free implementations: MPICH, LAM, WMPI (Windows 95/NT), CHIMP/MPI, MPI Light. Vendor implementations: implementations for specific parallel computers, MPI/PRO.

  26. Procedure of constructing clusters: prepare several PCs; connect the PCs; install the OS and tools; install development tools and a parallel library.

  27. Installing MPICH/LAM
      Red Hat (rpm):
      # rpm -ivh lam-6.3.3b28-1.i386.rpm
      # rpm -ivh mpich-1.2.0-5.i386.rpm
      Debian (dpkg / apt):
      # dpkg -i lam2_6.3.2-3.deb
      # dpkg -i mpich_1.1.2-11.deb
      # apt-get install lam2
      # apt-get install mpich

  28. Parallel programming (MPI). [Diagram: the user submits jobs to the gateway; tasks are distributed to the nodes of the PC cluster, which works like a massively parallel computer]

  29. Programming style sheet: initialization; communicator; acquiring the number of processes; acquiring the rank; termination.
      #include "mpi.h"
      int main(int argc, char **argv)
      {
          int procs, myid;
          MPI_Init(&argc, &argv);                 /* initialization */
          MPI_Comm_size(MPI_COMM_WORLD, &procs);  /* number of processes */
          MPI_Comm_rank(MPI_COMM_WORLD, &myid);   /* rank */
          /* parallel procedure */
          MPI_Finalize();                         /* termination */
          return 0;
      }

  30. Communications: one by one communication; group communication. [Diagram: Process A and Process B send/receive data to each other]

  31. One by one communication [Sending]
      MPI_Send(void *buf, int count, MPI_Datatype datatype, int dest, int tag, MPI_Comm comm)
      void *buf: sending buffer starting address (IN)
      int count: number of data elements (IN)
      MPI_Datatype datatype: data type (IN)
      int dest: destination rank (IN)
      int tag: message tag (IN)
      MPI_Comm comm: communicator (IN)

  32. One by one communication [Receiving]
      MPI_Recv(void *buf, int count, MPI_Datatype datatype, int source, int tag, MPI_Comm comm, MPI_Status *status)
      void *buf: receiving buffer starting address (OUT)
      int source: source rank (IN)
      int tag: message tag (IN)
      MPI_Status *status: status (OUT)

  33. ~Hello.c~
      #include <stdio.h>
      #include "mpi.h"
      int main(int argc, char *argv[])
      {
          int myid, src, dest, tag = 1000, count;
          char inmsg[10], outmsg[] = "hello";
          MPI_Status stat;
          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &myid);
          count = sizeof(outmsg) / sizeof(char);
          if (myid == 0) {
              src = 1; dest = 1;
              MPI_Send(outmsg, count, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
              MPI_Recv(inmsg, count, MPI_CHAR, src, tag, MPI_COMM_WORLD, &stat);
              printf("%s from rank %d\n", inmsg, src);
          } else {
              src = 0; dest = 0;
              MPI_Recv(inmsg, count, MPI_CHAR, src, tag, MPI_COMM_WORLD, &stat);
              MPI_Send(outmsg, count, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
              printf("%s from rank %d\n", inmsg, src);
          }
          MPI_Finalize();
          return 0;
      }

  34. One by one communication: a receive/send pair can be replaced by a single combined call.
      MPI_Recv(inmsg, count, MPI_CHAR, src, tag, MPI_COMM_WORLD, &stat);
      MPI_Send(outmsg, count, MPI_CHAR, dest, tag, MPI_COMM_WORLD);
      is equivalent to:
      MPI_Sendrecv(outmsg, count, MPI_CHAR, dest, tag,
                   inmsg, count, MPI_CHAR, src, tag,
                   MPI_COMM_WORLD, &stat);

  35. Calculation of PI (approximation) - parallel conversion: the integral calculus is divided into subsections; each subsection is allotted to a processor; the results of the calculations are assembled. [Plot: the integrand over 0 <= x <= 1, y from 0 to 4]

  36. Group communication: Broadcast
      MPI_Bcast(void *buf, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
      int root: rank of the sending point
      [Diagram: the root's data is copied to every process]

  37. Group communication: communication and operation (reduce)
      MPI_Reduce(void *sendbuf, void *recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
      MPI_Op op: operation handle (e.g. MPI_SUM, MPI_MAX, MPI_MIN, MPI_PROD)
      int root: rank of the receiving point

  38. Approximation of PI Programming flow
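Combining the MPI_Bcast and MPI_Reduce calls from the preceding slides, the parallel PI program might look like the following sketch. It is modeled on the well-known cpi.c example distributed with MPICH (assumed here, not shown in the slides); it needs an MPI implementation such as the MPICH or LAM installs above, built with mpicc and launched with mpirun:

```c
#include <stdio.h>
#include "mpi.h"

int main(int argc, char **argv)
{
    int n = 100000, myid, procs, i;
    double h, sum = 0.0, mypi, pi;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &procs);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);

    /* Root broadcasts the number of subsections to every process. */
    MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);

    /* Each process integrates every procs-th subsection (midpoint rule). */
    h = 1.0 / n;
    for (i = myid; i < n; i += procs) {
        double x = h * (i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    mypi = h * sum;

    /* Partial results are assembled at rank 0 by summation. */
    MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (myid == 0)
        printf("pi is approximately %.10f\n", pi);
    MPI_Finalize();
    return 0;
}
```

Run with, for example, `mpirun -np 8 ./cpi`; the answer is the same for any number of processes because each one covers a disjoint set of subsections.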

  39. More Cluster systems !!

  40. Hardware - CPU: Intel Pentium III, IV (http://www.intel.com/); AMD Athlon (http://www.amd.com/); Transmeta Crusoe (http://www.transmeta.com/).

  41. Hardware - Network: Ethernet, Gigabit Ethernet, Myrinet, QsNet, Giganet, SCI, Atoll, VIA, InfiniBand; Wake On LAN.

  42. Hardware - Hard disc: SCSI, IDE, RAID. Diskless cluster: http://www.linuxdoc.org/HOWTO/Diskless-HOWTO.html

  43. Hardware - Case: box (inexpensive) or rack (compact, easier maintenance).

  44. Software

  45. OS: Linux kernels. Open source, strong networking support, free software. Features: the /proc file system, loadable kernel modules, virtual consoles, package management.

  46. OS: Linux kernels (http://www.kernel.org/). Linux distributions: Red Hat (www.redhat.com), Debian GNU/Linux (www.debian.org), S.u.S.E. (www.suse.com), Slackware (www.slackware.org).

  47. Administration software: NFS (Network File System), NIS (Network Information System), NTP (Network Time Protocol). [Diagram: one server with several clients]

  48. Resource Management and Scheduling: process distribution, load balancing, job scheduling of multiple tasks. CONDOR (http://www.cs.wisc.edu/condor/), DQS (http://www.scri.fsu.edu/~pasko/dqs.html), LSF (http://www.platform.com/index.html), The Sun Grid Engine (http://www.sun.com/software/gridware/).

  49. Tools for Program Development. Editor: Emacs. Language: C, C++, Fortran, Java. Compiler: GNU (http://www.gnu.org/), NAG (http://www.nag.co.uk), PGI (http://www.pgroup.com/), VAST (http://www.psrv.com/), Absoft (http://www.absoft.com/), Fujitsu (http://www.fqs.co.jp/fort-c/), Intel (http://developer.intel.com/software/products/compilers/index.htm).

  50. Tools for Program Development: make, CVS. Debugger: gdb, TotalView (http://www.etnus.com).
