
NPACI ROCKS Technology




Presentation Transcript


  1. Kittirak Moungmingsuk Managing Director Cluster Kit Co., Ltd. kittirak@clusterkit.co.th NPACI ROCKS Technology

  2. About NPACI ROCKS NPACI = National Partnership for Advanced Computational Infrastructure. Rocks?

  3. Rocks & Rolls

  4. What is NPACI Rocks It is a “Linux Cluster Distribution”. http://www.rocksclusters.org/

  5. Agenda Why Cluster? HPC world and Trend HPC in THAI ROCKS Cluster Q & A

  6. Why Cluster? A one-man show doesn't work, but teamwork does! Research: problems are more complex and involve larger amounts of data; simulation. Technical: a single machine can't scale (heat problems, hard to design, high price).

  7. What do clusters do? Aerodynamics, air pollution prediction, bioinformatics, chemistry, graphics rendering, oil and gas, video conferencing, weather prediction.

  8. HPC world and Trend

  9. By architecture • Cluster 361 • Constellation 31 • MPP 108

  10. Top cluster in the Top500 Rank 5: MareNostrum at Barcelona Supercomputing Center. Processors: 10,240. Rmax: 62,630 GFLOPS. Rpeak: 94,208 GFLOPS.
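As a quick aside, the Rmax/Rpeak ratio above is the system's HPL (LINPACK) efficiency. A minimal sketch of the arithmetic using the slide's figures (the variable names are ours, not from the Top500 list):

```python
# HPL efficiency = Rmax (measured LINPACK result) / Rpeak (theoretical peak)
rmax_gflops = 62_630    # MareNostrum Rmax, from the slide
rpeak_gflops = 94_208   # MareNostrum Rpeak, from the slide
processors = 10_240

efficiency = rmax_gflops / rpeak_gflops
per_cpu_gflops = rpeak_gflops / processors

print(f"HPL efficiency: {efficiency:.1%}")                 # 66.5%
print(f"Peak per processor: {per_cpu_gflops:.1f} GFLOPS")  # 9.2 GFLOPS
```

An efficiency around two-thirds of peak is typical for a well-tuned cluster of that era; the gap comes from memory and interconnect overheads.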

  11. HPC in Thailand Thaigrid and its 14 members: 400 processors installed in Feb 2007. Chulalongkorn University: an 88-processor cluster called PAKSA at the Chemical department. GISTDA: a 6-node pilot cluster. BIOTEC: a 96-processor Mac cluster. ThaiHPC group (32 nodes × 4 groups).

  12. NPACI ROCKS: National Partnership for Advanced Computational Infrastructure • A scalable, complete, and fully automated cluster deployment solution with sensible out-of-the-box default settings. • Developed by: • San Diego Supercomputer Center (Grid & Cluster Computing Group) • UC Berkeley Millennium Project • SCS Linux Competency Centre

  13. Rocks (Linux Cluster Distribution) www.rocksclusters.org Technology transfer of commodity clustering to application scientists: “make clusters easy.” Scientists can build their own supercomputers and migrate up to national centers as needed. Rocks is a cluster on a CD: Red Hat Enterprise Linux (open source and free), clustering software (PBS, SGE, Ganglia, NMI), and highly programmatic software configuration management. First software release: Nov 2000. Supports x86, Opteron/EM64T, and Itanium on RedHat/CentOS 4.x.

  14. Rocks Philosophy Optimize for installation: get the system up quickly, build a supercomputer in hours, automatic configuration, manage through re-installation. Integrated de-facto standard cluster packages: PBS/MAUI, MPICH, ATLAS, ScaLAPACK, HPL... SGE, PVFS (added by SCS-LCC).

  15. Rocks Philosophy An integrated, easy-to-manage cluster system. Excellent scaling to a large number of nodes. A single, consistent cluster management methodology. Avoids version skew and maintains a consistent image across the cluster!

  16. Rocks basic approach Install the frontend: insert the Rocks Base CD, insert the Rocks Rolls, fill in the site information, then take a tea break. Install the compute nodes: log in to the frontend, run insert-ethers, and boot each compute node with the Rocks Base CD (or via PXE). Add user accounts and run!
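Sketched as commands, the flow above looks roughly like this on the frontend after the base install (the node naming follows Rocks conventions; the user name is hypothetical, and the exact account-sync step varies by Rocks version):

```shell
# On the frontend, capture compute nodes as they boot from CD or PXE:
insert-ethers            # choose "Compute"; each booting node is detected
                         # and named compute-0-0, compute-0-1, ...

# Add a user account on the frontend (hypothetical user "alice"):
useradd alice
passwd alice
# ...then propagate the account to the compute nodes
# (handled by the 411 information service in this Rocks generation).

# Sanity-check the cluster by running a command on every compute node:
cluster-fork hostname
```

Because nodes are managed through re-installation, a misbehaving compute node is simply rebooted into a fresh install rather than repaired by hand.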

  17. More rolls area51: system security related services and utilities. bio: bioinformatics utilities. condor: high-throughput computing tools. ganglia: cluster monitoring system from UCB. grid: Globus 4.0.2 (GT4). java: Sun Java SDK and JVM. pbs: Portable Batch System. pvfs2: PVFS2 file system. sge: Sun Grid Engine job queueing system. topspin-ib: Topspin's IB stack packaged by Cluster Corp. viz: support for building visualization clusters. voltaire-ib: InfiniBand support for Voltaire's IB hardware.

  18. System monitoring with Ganglia
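Ganglia's per-node daemon (gmond) is driven by a small configuration file that Rocks normally generates itself. A minimal fragment in the Ganglia 3.x gmond.conf style, just to show the shape (the cluster name is an illustrative assumption; 239.2.11.71:8649 is Ganglia's default multicast channel):

```
cluster {
  name = "ROCKS-Demo"        /* label shown in the Ganglia web frontend */
}
udp_send_channel {
  mcast_join = 239.2.11.71   /* default multicast group for metrics */
  port = 8649
}
udp_recv_channel {
  mcast_join = 239.2.11.71
  port = 8649
}
```

Every node multicasts its metrics on the same channel, so the web frontend can draw load, memory, and network graphs for the whole cluster without per-node polling configuration.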

  19. Batch Scheduler with SGE
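Jobs reach SGE through a submission script handed to qsub on the frontend. A minimal sketch (the job name, slot count, parallel-environment name "mpich", and binary name are illustrative assumptions, not Rocks defaults we can vouch for):

```shell
#!/bin/bash
#$ -N demo_job      # job name shown in qstat
#$ -cwd             # run from the submission directory
#$ -j y             # merge stdout and stderr into one file
#$ -pe mpich 4      # request 4 slots in the (assumed) "mpich" parallel env

# SGE sets $NSLOTS to the number of granted slots
mpirun -np $NSLOTS ./my_mpi_app
```

Submitted with `qsub demo_job.sh`; `qstat` then shows the job waiting in the queue or running on the compute nodes the scheduler picked.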

  20. Viz wall

  21. Minimum Requirements Frontend: 2 Ethernet ports, CD-ROM drive, 18 GB disk space, 512 MB RAM. Compute nodes: 1 Ethernet port, 18 GB disk space, 512 MB RAM.

  22. Question & Answer Thank you.

  23. What is Grid computing? A group of computers in a wide-area network, sharing the idle CPU time of its members. Security: Public Key Infrastructure (PKI). Easy to use: Single Sign-On, web portal, scheduler, resource discovery.

  24. Difference between Grid & Cluster

  25. Difference between Grid and Server

  26. Virtual Organization (VO)

  27. Thanks again.
