
Commodity Computing


Presentation Transcript


  1. Commodity Computing

  2. Computing power is everywhere; how can we make it usable by anyone?

  3. Here is what empowered scientists can do with commodity computing …

  4. NUG28 - Solved!!!!
  We are pleased to announce the exact solution of the nug28 quadratic assignment problem (QAP). This problem was derived from the well known nug30 problem using the distance matrix from a 4 by 7 grid, and the flow matrix from nug30 with the last 2 facilities deleted. This is to our knowledge the largest instance from the nugxx series ever provably solved to optimality. The problem was solved using the branch-and-bound algorithm described in the paper "Solving quadratic assignment problems using convex quadratic programming relaxations," by N.W. Brixius and K.M. Anstreicher. The computation was performed on a pool of workstations using the Condor high-throughput computing system in a total wall time of approximately 4 days, 8 hours. During this time the number of active worker machines averaged approximately 200. Machines from UW, UNM, and INFN all participated in the computation.

  5. NUG30 Personal Condor …
  For the run we will be flocking to:
  • the main Condor pool at Wisconsin (600 processors)
  • the Condor pool at Georgia Tech (190 Linux boxes)
  • the Condor pool at UNM (40 processors)
  • the Condor pool at Columbia (16 processors)
  • the Condor pool at Northwestern (12 processors)
  • the Condor pool at NCSA (65 processors)
  • the Condor pool at INFN (200 processors)
  We will be using glide_in to access the Origin 2000 (through LSF) at NCSA. We will use "hobble_in" to access the Chiba City Linux cluster and Origin 2000 here at Argonne.

  6. It works!!!
  Date: Thu, 8 Jun 2000 22:41:00 -0500 (CDT)
  From: Jeff Linderoth <linderot@mcs.anl.gov>
  To: Miron Livny <miron@cs.wisc.edu>
  Subject: Re: Priority
  This has been a great day for metacomputing! Everything is going wonderfully. We've had over 900 machines (currently around 890), and all the pieces are working great…
  Date: Fri, 9 Jun 2000 11:41:11 -0500 (CDT)
  From: Jeff Linderoth <linderot@mcs.anl.gov>
  Still rolling along. Over three billion nodes in about 1 day!

  7. Up to a Point …
  Date: Fri, 9 Jun 2000 14:35:11 -0500 (CDT)
  From: Jeff Linderoth <linderot@mcs.anl.gov>
  Hi Gang, The glory days of metacomputing are over. Our job just crashed. I watched it happen right before my very eyes. It was what I was afraid of -- they just shut down denali, and losing all of those machines at once caused other connections to time out -- and the snowball effect had bad repercussions for the Schedd.

  8. Back in Business
  Date: Fri, 9 Jun 2000 18:55:59 -0500 (CDT)
  From: Jeff Linderoth <linderot@mcs.anl.gov>
  Hi Gang, We are back up and running. And, yes, it took me all afternoon to get it going again. There was a (brand new) bug in the QAP "read checkpoint" information that was making the master coredump. (Only with optimization level -O4). I was nearly reduced to tears, but with some supportive words from Jean-Pierre, I made it through.

  9. The First 600K seconds …

  10. NUG30 - Solved!!!
  Sender: goux@dantec.ece.nwu.edu
  Subject: Re: Let the festivities begin.
  Hi dear Condor Team, you all have been amazing. NUG30 required 10.9 years of Condor Time. In just seven days ! More stats tomorrow !!! We are off celebrating ! condor rules ! cheers, JP.

  11. What can commodity computing do for you?

  12. I have a job-parallel MW application with 600 workers. How can I benefit from Commodity Computing?

  13. My Application …
  • Study the behavior of F(x,y,z) for 20 values of x, 10 values of y and 3 values of z (20*10*3 = 600 combinations)
  • F takes on average 3 hours to compute on a “typical” workstation (total = 1800 hours)
  • F requires a “moderate” (128 MB) amount of memory
  • F performs “little” I/O - (x,y,z) is 15 MB and F(x,y,z) is 40 MB

  14. Step I - get organized!
  • Write a script that creates 600 input files, one for each of the (x,y,z) combinations (see the sketch below)
  • Write a script that collects the data from the 600 output files
  • Turn your workstation into a “Personal Condor”
  • Submit a cluster of 600 jobs to your personal Condor
  • Go on a long vacation … (2.5 months)
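
A minimal sketch of Step I's two helper scripts, in Python. These are not from the original slides; the worker_dir.N layout, file names, and placeholder parameter values are assumptions chosen to line up with the submit file shown later in the talk.

import itertools, os

X_VALUES = range(20)   # 20 values of x (placeholders for the real parameter values)
Y_VALUES = range(10)   # 10 values of y
Z_VALUES = range(3)    # 3 values of z

def create_inputs():
    # One input directory per (x, y, z) combination: 20 * 10 * 3 = 600.
    for i, (x, y, z) in enumerate(itertools.product(X_VALUES, Y_VALUES, Z_VALUES)):
        d = f"worker_dir.{i}"
        os.makedirs(d, exist_ok=True)
        with open(os.path.join(d, "in"), "w") as f:
            f.write(f"{x} {y} {z}\n")

def collect_outputs():
    # Gather the 600 F(x,y,z) results into a single summary file.
    with open("results.txt", "w") as summary:
        for i in range(600):
            out_file = os.path.join(f"worker_dir.{i}", "out")
            if os.path.exists(out_file):
                with open(out_file) as f:
                    summary.write(f"{i} {f.read().strip()}\n")

if __name__ == "__main__":
    create_inputs()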

  15. Your Personal Condor will ...
  • … keep an eye on your jobs and will keep you posted on their progress
  • … implement your policy on when the jobs can run on your workstation
  • … implement your policy on the execution order of the jobs
  • … add fault tolerance to your jobs
  • … keep a log of your job activities

  16. [Diagram: 600 Condor jobs submitted to the personal Condor running on your workstation]

  17. Step II - build your personal Grid
  • Install Condor on the desktop machine next door.
  • Install Condor on the machines in the classroom.
  • Install Condor on the O2K in the basement.
  • Configure these machines to be part of your Condor pool.
  • Go on a shorter vacation ...

  18. [Diagram: 600 Condor jobs; your workstation's personal Condor is now part of a group Condor pool]

  19. Condor Layers [Diagram: the layers between tasks and jobs - Application, Application Agent, Customer Agent, Environment Agent, Owner Agent, Local Resource Management, Resource]

  20. Matchmaking in Condor [Diagram: the Customer Agent (CA) on the submit machine and the Resource Agent (RA) on the execution machine advertise ClassAds to the Environment Agent (Collector and Negotiator), which creates the match]

  21. Application Agent [Diagram: on the submission side, the Application Agent and Customer Agent manage the request queue, object files and checkpoint files; on the execution side, the Owner Agent and Execution Agent run the application process, with data, object and checkpoint files moved via remote I/O and checkpointing]

  22. Step III - Take advantage of your friends
  • Get permission from “friendly” Condor pools to access their resources
  • Configure your personal Condor to “flock” to these pools
  • Reconsider your vacation plans ...

  23. [Diagram: 600 Condor jobs; your workstation's personal Condor, the group Condor pool, and a friendly Condor pool]

  24. Think big. Go to the Grid

  25. Upgrade to Condor-G
  A Grid-enabled version of Condor that uses the inter-domain services of Globus to bring Grid resources into the domain of your Personal Condor.
  • Supports Grid Universe jobs
  • Uses GSIFTP to move glide-in software
  • Uses MDS for submit information

  26. Condor glide-in
  Enables an application to dynamically turn allocated Grid resources into members of a Condor pool for the duration of the allocation.
  • Easy to use on different platforms
  • Robust
  • Supports SMPs

  27. Step IV - Go for the Grid
  • Get access (account(s) + certificate(s)) to a “Computational” Grid
  • Submit 599 “Grid Universe” Condor glide-in jobs to your personal Condor
  • Take the rest of the afternoon off ...

  28. [Diagram: 600 Condor jobs; your workstation's personal Condor flocking to the group Condor and friendly Condor pools and, via 599 glide-ins, to the Globus Grid (Condor, LSF, PBS)]

  29. Driving Concepts

  30. HW is a Commodity
  Raw computing power is everywhere - on desktops, shelves, and racks. It is
  • cheap,
  • dynamic,
  • distributively owned,
  • heterogeneous, and
  • evolving.

  31. “ … Since the early days of mankind the primary motivation for the establishment of communities has been the idea that by being part of an organized group the capabilities of an individual are improved. The great progress in the area of inter-computer communication led to the development of means by which stand-alone processing sub-systems can be integrated into multi-computer ‘communities’. … ” Miron Livny, “Study of Load Balancing Algorithms for Decentralized Distributed Processing Systems,” Ph.D. thesis, July 1983.

  32. Every Community needs a Matchmaker!

  33. Why? Because ...
  … someone has to bring together community members who have requests for goods and services with members who offer them.
  • Both sides are looking for each other
  • Both sides have constraints
  • Both sides have preferences

  34. We use Matchmakers to build Computing Communities out of Commodity Components

  35. The Matchmaking Process
  • Advertising Protocol - each party uses a ClassAd to declare its type, its target type, constraints on the target, a ranking of a new match, a ranking of the current match, ...
  • Matchmaking Algorithm - used by the Matchmaker to create matches (a toy sketch follows below)
  • Match Notification Protocol - used by the Matchmaker to notify the matched parties
  • Claiming Protocol - used by the matched parties to claim each other
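
To make these protocols concrete, here is a toy Python sketch of just the matchmaking step. This is not the ClassAd language or the Condor Negotiator, and all attribute names and values are illustrative: each side advertises its attributes, a constraint on the other side, and (for the request) a rank, and the matchmaker pairs a request with the highest-ranked offer that both sides accept.

# A toy matchmaker: requests and offers carry attributes, a constraint on the
# other party, and (for requests) a rank function. All names here are made up.
job_ad = {
    "attrs": {"Owner": "alice", "ImageSize": 128},
    "constraint": lambda m: m["Memory"] >= 128 and m["OpSys"] == "LINUX",
    "rank": lambda m: m["KFlops"],
}

machine_ads = [
    {"attrs": {"Name": "desk1", "Memory": 256, "OpSys": "LINUX", "KFlops": 40000},
     "constraint": lambda j: j["ImageSize"] <= 256},
    {"attrs": {"Name": "desk2", "Memory": 64, "OpSys": "LINUX", "KFlops": 90000},
     "constraint": lambda j: True},
]

def match(request, offers):
    # Keep only mutually acceptable offers, then take the one the request ranks highest.
    acceptable = [o for o in offers
                  if request["constraint"](o["attrs"]) and o["constraint"](request["attrs"])]
    return max(acceptable, key=lambda o: request["rank"](o["attrs"]), default=None)

best = match(job_ad, machine_ads)
print(best["attrs"]["Name"] if best else "no match")   # -> desk1 (desk2 has too little memory)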

  36. High Throughput Computing
  For many experimental scientists, scientific progress and quality of research are strongly linked to computing throughput. In other words, they are less concerned about instantaneous computing power. Instead, what matters to them is the amount of computing they can harness over a month or a year --- they measure computing power in units of scenarios per day, wind patterns per week, instruction sets per month, or crystal configurations per year.

  37. High Throughput Computing is a 24-7-365 activity: FLOPY = (60*60*24*7*52) * FLOPS
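
For scale, the arithmetic behind a FLOPY figure can be spelled out in a couple of lines of Python. The 200-machine, 100 MFLOPS pool is a hypothetical example, not a figure from the slides:

SECONDS_PER_YEAR = 60 * 60 * 24 * 7 * 52             # ~3.14e7 seconds, as in the formula above
pool_flops = 200 * 100e6                              # assumed pool: 200 machines at 100 MFLOPS each
print(f"{SECONDS_PER_YEAR * pool_flops:.2e} FLOPY")   # ~6.29e17 floating-point operations per year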

  38. Obstacles to HTC
  • Ownership Distribution (Sociology)
  • Customer Awareness (Education)
  • Size and Uncertainties (Robustness)
  • Technology Evolution (Portability)
  • Physical Distribution (Technology)

  39. Basic HTC Mechanisms
  • Matchmaking - enables requests for services and offers to provide services to find each other (ClassAds).
  • Checkpointing - enables preemptive-resume scheduling (go ahead and use it as long as it is available!) - a sketch follows below.
  • Remote I/O - enables remote (from the execution site) access to local (at the submission site) data.
  • Asynchronous API - enables management of dynamic (opportunistic) resources.
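
Condor checkpoints re-linked jobs transparently, but the preemptive-resume idea is easy to see in a hand-rolled sketch. The Python snippet below (the file name and state layout are assumptions, not Condor's mechanism) saves its state periodically and, when restarted after a preemption, picks up from the last checkpoint instead of from the beginning:

import json, os

CKPT = "state.ckpt"

def run(total_steps=600):
    # Resume from the last checkpoint if one exists, otherwise start fresh.
    state = json.load(open(CKPT)) if os.path.exists(CKPT) else {"step": 0, "acc": 0.0}
    for step in range(state["step"], total_steps):
        state["acc"] += step * 0.001          # stand-in for real work
        state["step"] = step + 1
        if state["step"] % 50 == 0:           # checkpoint every 50 steps
            with open(CKPT, "w") as f:
                json.dump(state, f)
    return state["acc"]

if __name__ == "__main__":
    print(run())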

  40. Master-Worker (MW) computing is Naturally Parallel. It is by no means Embarrassingly Parallel. Doing it right is by no means trivial.
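
Much of the non-trivial part is that workers come and go underneath the master, so the master has to notice lost work and hand it out again. A compressed, purely illustrative Python sketch of that bookkeeping (the real Condor MW framework does this for you):

import random
from collections import deque

def master(tasks, failure_rate=0.2):
    # Hand each task to a (simulated) worker; if the worker vanishes mid-task,
    # put the task back on the queue instead of losing the result.
    pending, results = deque(tasks), {}
    while pending:
        task = pending.popleft()
        if random.random() < failure_rate:
            pending.append(task)           # lost worker: re-queue the task
        else:
            results[task] = task * task    # stand-in for the worker's answer
    return results

print(len(master(range(20))))              # all 20 tasks eventually complete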

  41. The Tool

  42. Condor - High Throughput Computing: our answer to High Throughput MW Computing on commodity resources

  43. The Condor System
  A High Throughput Computing system that supports large dynamic MW applications on large collections of distributively owned resources. Developed, maintained and supported by the Condor Team at the University of Wisconsin - Madison since ‘86.
  • Originally developed for UNIX workstations
  • Based on matchmaking technology
  • Fully integrated NT version is available
  • Deployed world-wide by academia and industry
  • More than 1300 CPUs at the U of Wisconsin
  • Available at www.cs.wisc.edu/condor

  44. Condor CPUs on the UW Campus

  45. Condor - High Throughput Computing

  46. Some Numbers: UW-CS Pool, 6/98-6/00
  Total: 4,000,000 hours (~450 years)
  “Real” Users: 1,700,000 hours (~260 years)
  • CS-Optimization: 610,000 hours
  • CS-Architecture: 350,000 hours
  • Physics: 245,000 hours
  • Statistics: 80,000 hours
  • Engine Research Center: 38,000 hours
  • Math: 90,000 hours
  • Civil Engineering: 27,000 hours
  • Business: 970 hours
  “External” Users: 165,000 hours (~19 years)
  • MIT: 76,000 hours
  • Cornell: 38,000 hours
  • UCSD: 38,000 hours
  • CalTech: 18,000 hours

  47. Key Condor User Services
  • Local control - jobs are stored and managed locally by a personal scheduler.
  • Priority scheduling - execution order controlled by a priority ranking assigned by the user.
  • Job preemption - re-linked jobs can be checkpointed, suspended, held and resumed.
  • Local execution environment preserved - re-linked jobs can have their I/O redirected to the submission site.

  48. More Condor User Services
  • Powerful and flexible means for selecting the execution site (requirements and preferences)
  • Logging of job activities
  • Management of large numbers (10K) of jobs per user
  • Support for jobs with dependencies - DAGMan (Directed Acyclic Graph Manager)
  • Support for dynamic MW (PVM and File) applications

  49. A Condor Job-Parallel Submit File
  executable   = worker
  requirements = ((OS == "Linux2.2") && (Memory >= 64))
  rank         = KFLOP
  initialdir   = worker_dir.$(process)
  input        = in
  output       = out
  error        = err
  log          = Condor_log
  queue 1000

  50. Task-Parallel MW Application (master and worker tasks)
  potential = start
  FOR cycle = 1 to 36
      FOR location = 1 to 31
          totalEnergy += Energy(location, potential)
      END
      potential = F(totalEnergy)
  END
  Implemented as a PVM application with the Condor MW services. Two traces (execution and performance) visualized by DEVise.
