
iCER User Meeting


Presentation Transcript


  1. iCER User Meeting 3/26/10

  2. Agenda What’s new in iCER (Wolfgang) What’s new in HPCC (Bill) Results of the recent cluster bid Discussion of buy-in (costs, scheduling) Other

  3. What’s New in iCER

  4. New iCER Website http://icer.msu.edu • Part of VPRGS • News • Showcased Projects • Supported Funding • Recent Publications

  5. User Dashboard http://wiki.hpcc.msu.edu • Common Portal to User Resources • FAQ • Documentation • Forums • Research Opportunities • Known Issues

  6. Current Research Opportunities http://wiki.hpcc.msu.edu • NSF Postdoc Fellowships for Transformative Computational Science using CyberInfrastructure • Website • Proposals • Classes • Seminars • Papers • Jobs

  7. Postdoc Matching • 50/50 match from iCER for a postdoc for large grant proposals (multi-investigator, inter-disciplinary) • Currently only three matches picked up • Titus Brown • Scott Pratt • Eric Goodman • Several other matches promised, but grants not decided yet • More opportunities!

  8. Personnel • New Hire! • Eric McDonald • System Programmer • Partnership with NSCL (Alex Brown et al.)

  9. IGERT Grant Proposal • Interdisciplinary graduate education in high-performance computing & science • Big Data • Leads: • Dirk Colbry • Bill Punch

  10. BEACON • NSF STC • Funded, starting in June • $5M/year for 5 years • New joint space with iCER & HPCC • First floor BPS • Former BPS library space

  11. What’s New in HPCC

  12. Graphics Cluster 32-node cluster • 2 x quad-core 2.4 GHz • 18 GB RAM • two NVIDIA Tesla M1060 • no InfiniBand (Ethernet only)

  13. Result of a Buy-in 21 of the nodes were purchased with funds from users. They can be used by any HPCC user.

  14. Each nVidia Tesla M1060 • Streaming processor cores: 240 • Processor core frequency: 1.3 GHz • Single-precision peak floating-point performance: 933 gigaflops • Double-precision peak floating-point performance: 78 gigaflops • Dedicated memory: 4 GB GDDR3 • Memory speed: 800 MHz • Memory interface: 512-bit • Memory bandwidth: 102 GB/sec • System interface: PCIe
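
These numbers can be double-checked from a graphics node itself. The commands below are only a sketch: they assume an interactive job can reach a gfx node, that the cuda module is available (as in the example script on the next slide), and that nvidia-smi and the CUDA SDK deviceQuery sample are on the path; the exact node properties and any required reservation flags may differ.

qsub -I -l nodes=1:ppn=1:gfx10,walltime=00:30:00   # interactive session on a graphics node
module load cuda                                   # CUDA toolkit, as in the example script
nvidia-smi -q                                      # driver-level status report for the GPUs
deviceQuery                                        # SDK sample: core count, memory size, compute capability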

  15. Example Script
#!/bin/bash -login
#PBS -l nodes=1:ppn=1:gfx10,walltime=01:00:00
#PBS -l advres=gpgpu.6364,gres=gpu:1
cd ${PBS_O_WORKDIR}
module load cuda
myprogram myarguments
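
As a usage sketch (the file name gpu_job.qsub is just a placeholder), the script would be submitted and monitored with the standard Torque/PBS commands:

qsub gpu_job.qsub    # submit the script above
qstat -u $USER       # check the job's state in the queue
# On completion, stdout/stderr land in the submission directory as
# <jobname>.o<jobid> and <jobname>.e<jobid>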

  16. CELL Processor Two PlayStation 3s • running Linux • for experimenting with the CELL processor • dev-cell08 and test-cell08 (see the web for more details)
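
A minimal getting-started sketch, assuming the machines are reachable by ssh from the HPCC gateway and that the IBM Cell SDK compilers (e.g., ppu-gcc) are installed; see the web page mentioned above for the actual setup.

ssh dev-cell08              # development machine (test-cell08 is for testing)
ppu-gcc -o hello hello.c    # compile a PPU-side program with the Cell SDK, if present
./hello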

  17. Green Restrictions The machine Green is still up and running, especially after some problematic memory was removed. It has mostly been replaced by the AMD fat nodes. On April 1st, it will be reserved for jobs requesting 32 cores (or more) and/or 250 GB of memory (or more). We hope this helps people running larger jobs.
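
A minimal sketch of a job that would meet Green's new thresholds after April 1st; the resource syntax (ppn, mem) and the program name are assumptions, so check the wiki for the exact form Green expects.

#!/bin/bash -login
# Hypothetical large-memory job: 32 cores and 250 GB of memory, the minimums
# Green will be reserved for; the limits and names here are placeholders
#PBS -l nodes=1:ppn=32,mem=250gb,walltime=04:00:00
cd ${PBS_O_WORKDIR}
mybigjob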

  18. HPCC Stats Ganglia (off the main web page, under Status) is back and working. It gives you a snapshot of the present system. We are nearly done with a database of all jobs run that can be queried for all kinds of information. It should be up in the next couple of weeks.

  19. Cluster Bid Results

  20. How it was done • HPCC submitted a Request for Quotes for a new cluster system. • Targeted: • performance vs. power as the main concern • InfiniBand • 3 GB of memory per core • approximately $500K worth of cluster

  21. Results Received 13 bids from 8 vendors. Found 3 options that were suitable for the power, space, cooling, and performance we were looking for. We are looking for some guidance from you on a number of issues.

  22. Choice 1: InfiniBand Config Two ways to configure InfiniBand: • a series of smaller switches configured in a hierarchy (leaf switches) • one big switch (a director) • leaf switches are cheaper, but harder to expand (requires reconfiguration), with more wires and more points of failure • a director is more expandable and convenient, but more expensive

  23. Choice 2: Buy-in Cost The buy-in cost could reflect just the cost of the compute nodes themselves, with HPCC providing the infrastructure (switches, wires, racks, etc.). Or the buy-in cost could reflect the total hardware cost. Obviously, subsidizing the infrastructure means cheaper buy-in costs but fewer general nodes.

  24. Remember HPCC is still subsidizing costs, even if the hardware is not subsidized: we still must buy air-conditioning equipment, OS licenses, MOAB (scheduling) licenses, and software licenses (not to mention salaries and power). Combined, the “other” hardware will run to about $75K, and the scheduler to about $100K for 3 years.

  25. Some Issues 1 node = 8 cores, 1 chassis = 4 nodes. Buy-in will be at the chassis level (32 cores).
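
As a sketch of what a chassis-sized request might look like (the walltime, MPI launch line, and program name are placeholders, and the MPI module to load depends on the build):

#!/bin/bash -login
# Hypothetical job filling one buy-in chassis: 4 nodes x 8 cores = 32 cores
#PBS -l nodes=4:ppn=8,walltime=08:00:00
cd ${PBS_O_WORKDIR}
mpirun -np 32 myprogram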

  26. For 1024 Cores

  27. Scheduling We are working on some better scheduling methods. We think they have promise and would be very useful to the user base. For the moment, we will use the Purdue model: we guarantee buy-in users access to their nodes within 8 hours of a request. There is still a one-week maximum run time (though that can be changed).
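
Presumably, guaranteed buy-in access would surface as a scheduler reservation, much like the gpgpu reservation in the example script earlier; the line below is a hypothetical illustration with a placeholder reservation name, not the final mechanism.

# Hypothetical: target your group's reservation once it is granted
# ("mygroup.1234" is a placeholder name)
#PBS -l advres=mygroup.1234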
