
Grid Computing in the Terascale Age


Presentation Transcript


  1. Grid Computing in the Terascale Age SC2001 Francine Berman Director, NPACI and SDSC Professor, Department of Computer Science and Engineering, UCSD

  2. What is Grid Computing? Resource sharing & coordinated problem solving in dynamic, multi-institutional virtual organizations [Figure: "Telescience Grid" linking imaging instruments, data acquisition, computational resources, large-scale databases, and advanced visualization and analysis. Courtesy of Mark Ellisman]

  3. Computational Grids and Electric Power Grids • Why the Computational Grid is like the Electric Power Grid • Electric power is ubiquitous • You don't need to know the source of the power (transformer, generator) or the power company that serves it • Why the Computational Grid is different from the Electric Power Grid • Wider spectrum of performance • Wider spectrum of services • Access governed by more complicated issues: security, performance, socio-political factors

  4. Today’s Presentation • A short history of Grid computing • Building the infrastructure • Demonstration applications • Early international efforts • Grids today • Towards production-level Grids • Emerging focus on middleware • Globalization • Grid computing in the digital millennium • Looking to the future

  5. Today’s Presentation • A short history of Grid computing • Building the infrastructure • Demonstration applications • Early international efforts • Grids today • Towards production-level Grids • Emerging focus on middleware • Globalization • Grid computing in the digital millennium • Looking to the future

  6. A Short History of the Grid • Beginnings • "Science as a team sport" • Grand Challenge Problems (1980s) • Rise of "coupling applications" • Coordination of instruments/viz/data and computation • Evolution of related communities • Parallel computation • Need to address resource limitations • "Virtualization" of parallel computation (PVM) • Networking • Gigabit Testbed Program – dual goals: to investigate potential testbed network architectures and to explore usefulness for end users [Image: CASA Gigabit Testbed]

  7. The I-Way @ SC95 Conference • First large-scale “modern” Grid experiment • provided the basis for modern Grid infrastructure efforts • I-Way included • A Grid of 17 sites connected by vBNS • 60+ application groups • OC-3 backbone • Large-scale use of immersive displays • CAVE and I-Desk • I-Soft programming environment • Pioneered security, scheduling ideas • Scheduling done with a “human-in-the-loop” (Warren Smith!)

  8. An Emerging Grid Community 1995-2000 • “Grid book” gave a comprehensive view of the state of the art • Important infrastructure and middleware efforts initiated • Globus • Legion • Condor • NetSolve, Ninf • Storage Resource Broker • Network Weather Service • AppLeS, …

  9. The Globus Project – The Grid as a Layered Set of Services • Leads: Ian Foster and Carl Kesselman • Globus model focuses on providing key Grid services • Resource access and management • GridFTP • Information Service • Security services (authentication, authorization, policy, delegation) • Network reservation, monitoring, control [Figure: the Grid as layers – Applications; High-level Services and Tools (GlobusView, Testbed Status, DUROC, MPI, MPI-IO, CC++, Nimrod/G, globusrun); Core Services (Nexus, GRAM, Metacomputing Directory Service, Globus Security Interface, Heartbeat Monitor, Gloperf, GASS); Local Services (Condor, MPI, TCP, UDP, LSF, AIX, Irix, Easy, NQE, Solaris)]
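To make the layered-services idea concrete, here is a minimal, hypothetical sketch (not the Globus API; all names are invented for illustration) of how a high-level tool might sit on an information service and a uniform resource manager, which in turn hide whatever local scheduler each site runs:

```python
# Hypothetical sketch of a "layered set of services" -- not the Globus API.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str             # a host advertised by the information service
    local_scheduler: str  # the site-specific system the core layer hides (LSF, Condor, ...)
    free_cpus: int

class InformationService:
    """Core layer: a directory of resources (stand-in for an MDS-like service)."""
    def __init__(self, resources):
        self._resources = resources
    def query(self, min_cpus):
        return [r for r in self._resources if r.free_cpus >= min_cpus]

class ResourceManager:
    """Core layer: uniform job submission that hides each site's local scheduler."""
    def submit(self, resource, executable, cpus):
        # A real core service would translate this request into the local scheduler's language.
        print(f"submitting {executable} ({cpus} cpus) to {resource.name} via {resource.local_scheduler}")

def run_anywhere(executable, cpus, info, manager):
    """High-level tool: pick any suitable resource and submit through the core layer."""
    candidates = info.query(min_cpus=cpus)
    if not candidates:
        raise RuntimeError("no resource currently satisfies the request")
    manager.submit(candidates[0], executable, cpus)

if __name__ == "__main__":
    info = InformationService([Resource("sp2.site-a", "LoadLeveler", 64),
                               Resource("cluster.site-b", "Condor", 16)])
    run_anywhere("./simulate", cpus=32, info=info, manager=ResourceManager())
```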

  10. The Legion Project – The Grid as a Single Virtual Machine • Lead: Andrew Grimshaw • Legion model provides a Grid "architecture" • Based on the idea that everything is an object • Legion provides a traditional OS-centric view of the Grid, including • Security, file system, process management • High-level Grid services, e.g., scheduling, accounting, etc. • Legion formed the foundation for Avaki

  11. Basic Grid Building Blocks • SDSC's SRB – uniform access to remote data • NetSolve – solving computational problems remotely • Condor – harnessing idle workstations for high-throughput computing [Figures: SRB architecture (application, SRB agent, MCAT metadata catalog, Dublin Core metadata, DataCutter, third-party copy across Unix, DB2, Oracle, ADSM, and HPSS back ends); NetSolve's RPC-like model (client, agent, and computational resources such as clusters, MPPs, and workstations running MPI, Condor, ...); Condor's execution model (customer and owner agents, execution agent, remote I/O, checkpoint files)]
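As a rough illustration of the RPC-like "remote solve" model behind NetSolve (hypothetical names, not the actual NetSolve client library): a client asks an agent which server should handle a problem, ships the arguments there, and gets the result back as if it were a local call:

```python
# Hypothetical sketch of an RPC-like remote solve -- not the actual NetSolve API.
import numpy as np

class Server:
    def __init__(self, name, load):
        self.name, self.load = name, load
    def solve(self, problem, *args):
        # In a real system this runs on a remote machine; here we solve locally.
        if problem == "linear_solve":
            A, b = args
            return np.linalg.solve(A, b)
        raise ValueError(f"{self.name} cannot solve {problem}")

class Agent:
    """Tracks available servers and picks one per request (least loaded here)."""
    def __init__(self, servers):
        self.servers = servers
    def choose(self, problem):
        return min(self.servers, key=lambda s: s.load)

def remote_solve(agent, problem, *args):
    server = agent.choose(problem)        # 1. ask the agent for a suitable server
    return server.solve(problem, *args)   # 2. ship the arguments, get the result back

if __name__ == "__main__":
    agent = Agent([Server("cluster-a", load=0.7), Server("mpp-b", load=0.2)])
    A, b = np.array([[3.0, 1.0], [1.0, 2.0]]), np.array([9.0, 8.0])
    print(remote_solve(agent, "linear_solve", A, b))  # -> [2. 3.]
```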

  12. An Emerging Application Community • Development of grid-friendly applications which target coordinated resources • Computers and • Other computers • Data archives • Visualization • Remote instruments • Real application successes • NPACI Telescience team developed production code which adaptively targeted supercomputers and lab clusters • International Cactus team developed a grid toolkit for large-scale simulations in numerical relativity • Grid calculations provided the largest simulations of colliding black holes [Image: Cactus gravity waves, courtesy of Ed Seidel]

  13. Reducing Execution Time by a Factor of 10 • MCell – Monte Carlo simulation of cellular microphysiology • Code in development for over a decade • Large user community – over 20 sites • Grid software (APST parameter sweep middleware) expands the target platform beyond a single site • Grid-enabled MCell used regularly in production from the desktop • Month-long computations done in days • Ultimate goal: a complete molecular model of neurotransmission at the level of an entire cell [Image: MCell simulation of synaptic transmission at the neuromuscular junction of a rat diaphragm muscle, courtesy of T. Bartol, Salk/UCSD and J. Stiles, PSC]
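As a rough illustration of why parameter sweeps map so well onto Grid resources (hypothetical code, not the APST middleware): each parameter combination is an independent task, so a simple scheduler can hand tasks to whichever site has free capacity and collect results as they finish:

```python
# Hypothetical parameter-sweep scheduler -- an illustration, not APST.
from concurrent.futures import ThreadPoolExecutor, as_completed
from itertools import product

def run_simulation(site, diffusion, release_rate):
    """Stand-in for launching one MCell-style task on a remote site."""
    # A real sweep tool would stage input files to `site` and submit a batch job there.
    return {"site": site, "diffusion": diffusion,
            "release_rate": release_rate, "result": diffusion * release_rate}

def sweep(sites, diffusions, release_rates):
    tasks = list(product(diffusions, release_rates))
    results = []
    # Tasks are independent, so they can run concurrently on whatever is available.
    with ThreadPoolExecutor(max_workers=len(sites)) as pool:
        futures = {pool.submit(run_simulation, sites[i % len(sites)], d, r): (d, r)
                   for i, (d, r) in enumerate(tasks)}
        for fut in as_completed(futures):
            results.append(fut.result())
    return results

if __name__ == "__main__":
    out = sweep(sites=["sdsc-cluster", "lab-workstations"],
                diffusions=[1e-6, 2e-6], release_rates=[0.1, 0.2, 0.4])
    print(f"{len(out)} runs completed")
```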

  14. Grid Software Enabled New Results Five Years Early • SF Express (Synthetic Forces Express) • Large-scale distributed interactive battle simulation • Simulation decomposed terrain (Saudi Arabia, Kuwait, Iraq) contiguously among supercomputers • Simulation of 50,000 entities in 8/97, 100,000 entities in 3/98 • Location and state of entities (tanks, trucks, planes) updated several times a second [Image courtesy of Carl Kesselman]

  15. Using Everything • Everyware [Wolski, SC'98] ran on • Berkeley NOW • Convex Exemplar • Cray T3E • HPVM/NT Supercluster • IBM SP2 • Intel x86 • SGI • Sun SPARC • Tera MTA • Laptops • Batch systems • Condor • Globus • Java • Legion • NetSolve • Unix • Windows NT – all at the same time • Everyware – a highly adaptive Grid application which investigated solutions to the Ramsey Number Problem [Photo: Wolski's Everyware crew hard at work at the NPACI booth]

  16. Adapting to Resource Availability • Everyware application adapted to whatever resources were available • Solution was • Ubiquitous – able to run everywhere • Resource aware – capable of managing heterogeneity • Adaptive – able to dynamically tailor its behavior to optimize performance • NOT embarrassingly parallel – Branch-and-Bound and Simulated Annealing used
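A minimal sketch of the adaptive idea (hypothetical code, not the Everyware application): size each work unit by a resource's recently observed throughput, so faster or less-loaded machines automatically receive more of the search:

```python
# Hypothetical adaptive work allocation -- an illustration, not the Everyware code.
import random

class Worker:
    def __init__(self, name, speed):
        self.name = name
        self.speed = speed           # true (unknown) units of work per second
        self.observed_rate = 1.0     # what the application has measured so far

    def run(self, units):
        """Stand-in for executing `units` of search work; returns elapsed seconds."""
        elapsed = units / (self.speed * random.uniform(0.7, 1.3))  # load varies over time
        # Exponentially smoothed throughput estimate, used for the next allocation.
        self.observed_rate = 0.5 * self.observed_rate + 0.5 * (units / elapsed)
        return elapsed

def allocate(workers, total_units):
    """Hand out work in proportion to each worker's measured throughput."""
    total_rate = sum(w.observed_rate for w in workers)
    return {w.name: total_units * w.observed_rate / total_rate for w in workers}

if __name__ == "__main__":
    workers = [Worker("t3e", speed=20), Worker("laptop", speed=1), Worker("sp2", speed=10)]
    for step in range(5):
        shares = allocate(workers, total_units=1000)
        for w in workers:
            w.run(shares[w.name])
        print(step, {name: round(units) for name, units in shares.items()})
```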

  17. 1995-2000: Beginnings of International Grid Collaborations • Emerging Grid efforts worldwide • Early collaborations between US, European and Asia-Pacific researchers • Ninf + NetSolve • Nimrod + Globus • Flock of Condors • NWS in Europe, Japan • PACI Collaborations

  18. Grid Computing Today [Map of Grid efforts worldwide: DISCOM, SinRG, APGrid, IPG, …]

  19. The Maturation of Grid Computing • Research focus moving from building of basic infrastructure and application demonstrations to • Middleware • Usable production environments • Application performance • Scalability → Globalization • Development, research, and integration happening outside of the original infrastructure groups • Grids becoming a first-class tool for scientific communities • GriPhyN (Physics), BIRN (Neuroscience), NVO (Astronomy), Cactus (Physics), …

  20. Portals: Making the Grid Usable • Comparison of the 3D structure of proteins leads to new biological understanding • The CE portal uses the Grid to perform these computations

  21. Broad Acceptance of Grids as a Critical Platform for Computing • Widespread interest from government in developing computational Grid platforms [Images: NSF's Cyberinfrastructure, courtesy of Ruzena Bajcsy; NASA's Information Power Grid]

  22. Broad Acceptance of Grids as a Critical Platform for Computing • Widespread interest from industry in developing computational Grid platforms • IBM, Sun, Entropia, Avaki, Platform, … • On August 2, 2001, IBM announced a new corporate initiative to support and exploit Grid computing. AP reported that IBM was investing $4 billion into building 50 computer server farms around the world.

  23. Globalization of the Grid • Global Grid Forum 2, July 2001, Washington, DC • 340 participants • 180 organizations • 20 countries • At GGF-2… • 36 documents • 48 working group sessions • 6 tutorials • 6 BOFs • 2 days • General updates • International Grid efforts • EU Grid, ApGrid, PRAGMA, UK Grid, … [Chart: growth from 1999 to 2001]

  24. Grids Form the Basis of a National Information Infrastructure TeraGrid will provide in aggregate • 13.6 trillion calculations per second • Over 600 trillion bytes of immediately accessible data • 40 gigabit per second network speed • Provide a new paradigm for data-oriented computing • Critical for disaster response, genomics, environmental modeling, etc. August 9, 2001: NSF Awarded $53,000,000 to SDSC/NPACI and NCSA/Alliance for TeraGrid

  25. Cool Things about the TeraGrid • PIs: Berman, Foster, Messina, Reed, Stevens • Sites: SDSC/UCSD, Caltech, NCSA/UIUC, ANL • Partners: IBM, Intel, Qwest, Sun, Myricom, Oracle, and others • Big data, simulation, modeling • Grid computing, Globus, portals, middleware • Clusters, Linux • Usability, impact, production facility • TeraGrid software environment: Linux, basic and core Globus services, advanced services, data services • Over 0.6 petabytes of on-line disk will provide the ultimate environment for data-oriented computation • The Linux environment provides a more direct path from development on a lab cluster to performance on a high-end platform

  26. TeraGrid: 13.6 TF, 6.8 TB memory, 79 TB internal disk, 576 TB network disk [Network diagram of the four DTF sites – NCSA (6+2 TF, 4 TB memory, 240 TB disk), SDSC (4.1 TF, 2 TB memory, 225 TB SAN, 1176p IBM SP "Blue Horizon" at 1.7 TFLOPs), Caltech (0.5 TF, 0.4 TB memory, 86 TB disk), and ANL (1 TF, 0.25 TB memory, 25 TB disk, HR display and VR facilities) – connected by OC-48/OC-12 links through Starlight in Chicago and the LA DTF core switch/routers (Cisco 65xx Catalyst with 256 Gb/s crossbar, Juniper M160) to vBNS, Abilene, MREN, Calren, ESnet, NTON, and HSCC, with Myrinet interconnects, HPSS and UniTree archives, IA-32/IA-64 clusters, Origin and Sun servers at the sites]

  27. New Results Possible on TeraGrid • Biomedical Informatics Research Network [BIRN] • Evolving reference set of brains provides essential data for developing therapies for neurological disorders (Multiple Sclerosis, Alzheimer’s disease). • Pre-TeraGrid: • One PET or MRI lab • Small patient base • 4 TB collection • Post-TeraGrid • Many collaborating labs • Larger population sample • 400 TB data collection: more brains, higher resolution • Multiple scale data integration and analysis

  28. Each Brain Represents a Lot of Data, and Comparisons Must Be Made Between Many • We need to get to one micron to know the location of every cell. We're just now starting to get to 10 microns – the TeraGrid will help get us there and further

  29. Targeting the Grid as a First-Class Scientific Tool • The Biomedical Informatics Research Network (Mark Ellisman, et al.) [Diagram: data acquisition from MRI, PET, and EM instruments at UCSD, UCLA, and WashU feeding local databases, a federated database layer, a large-scale on-line data archive, and high-performance computing codes for "blending/bending" and visualization on the TeraGrid]

  30. Grid Computing in the Digital Millennium • Sensors • Instruments • Wireless networks • Personalized medicine • Knowledge from data [Images: earthquake and bridge; digital sky]

  31. In the Next Decade, Grids will Require Unprecedented Integration and Scale • More of Everything • Immense amounts of data • Apps of immense scale (megacomputing) • Global Grids • New resources • Sensors and sensornets • New devices (personal digital devices, computer-enabled clothing, cars, …) • Huge diversity of scale

  32. TeraGrids will Evolve to PetaGrids [Diagram: TeraGrid linked with other Grids (EU Grid, NASA IPG, Data Grid, Science Grid, iVDGL), on-line systems and HPSS archives, plus sensor nets, wireless "throwaway" end devices, and personal digital devices]

  33. The SDSC/Cal-IT2/UCSD "PetaGrid" – An Early Prototype • The SDSC and Cal-IT2 partnership • Cal-IT2 (California Institute for Telecommunications and Information Technology) provides a regional wireless focus • SDSC provides a national, high-end focus • Collaborative projects integrate sensors to supercomputers [Diagram: TeraGrid (powerful nodes, strong coordination), campus infrastructure, and Cal-IT2 (sensornets, wireless, personal digital devices)]

  34. The UCSD "PetaGrid Laboratory" – Fiber, Wireless, Compute, Data, Software • Commodity Internet, Internet2 • Link UCSD and UCI • CENIC's ONI, Cal-REN2, Dig. Cal. • High-speed optical core • Campus wireless [Campus map: SDSC, Engineering/Cal-(IT)2, CS, Hospital, Medicine, Chemistry, Scripps Institution of Oceanography; ½ mile scale]

  35. Tomorrow's Grid Infrastructure Helps Form Today's Research Agenda • Applications • What application paradigms work for the Grid? • What "gateways" are required to make the Grid usable without heroics? • What are the killer apps for the PetaGrid? • Real-world models targeting • Dynamic systems, contingent behavior • Partial and poor information • "Good enough" performance • Synthesis of knowledge from data • Data → Information → Knowledge

  36. Tomorrow's Grid Infrastructure Helps Form Today's Research Agenda • Sharing as the default mode of interaction • Trust, policy, negotiation, payment, … • Managing unprecedented heterogeneity • Wireless and wired • High-end nodes and low-end sensors • Computers, bridges, clothing, vehicles, … • Adaptivity as the prevalent mode for performance • Execution on constantly changing resources • Performance in the face of unpredictable information • Location independence

  37. The GrADS (Grid Application Development Software) Project • We shouldn't have to be heroes to achieve Grid program performance • Design and development of a Grid program development and execution environment • Tight coupling between program preparation and program execution environments • Contract-based performance economy • Performance feedback [Diagram: source application and libraries flow through a PSE and whole-program compiler into a configurable object program; a scheduler/service negotiator, dynamic optimizer, and real-time performance monitor interact with the Grid runtime system (Globus), feeding detected performance problems back into program preparation]
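To illustrate the contract-based idea (a minimal sketch with hypothetical names, not the GrADS software): the launch-time schedule implies an expected progress rate, a monitor compares measured progress against it, and a sustained violation triggers renegotiation or rescheduling:

```python
# Hypothetical performance-contract check -- an illustration, not the GrADS system.
from dataclasses import dataclass

@dataclass
class Contract:
    expected_rate: float    # e.g. iterations/second implied by the launch-time schedule
    tolerance: float = 0.7  # fraction of the expected rate we will accept
    window: int = 3         # consecutive violations before we give up on the schedule

def monitor(contract, measurements, reschedule):
    """Compare measured progress against the contract; request rescheduling on sustained failure."""
    violations = 0
    for step, rate in enumerate(measurements):
        if rate < contract.tolerance * contract.expected_rate:
            violations += 1
            if violations >= contract.window:
                reschedule(step, rate)   # performance feedback to the scheduler/negotiator
                violations = 0
        else:
            violations = 0

if __name__ == "__main__":
    c = Contract(expected_rate=100.0)
    observed = [98, 95, 60, 55, 52, 90, 97]   # measured iterations/second over time
    monitor(c, observed, reschedule=lambda step, rate:
            print(f"contract violated at step {step} ({rate} it/s); requesting a new schedule"))
```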

  38. Drivers Wanted • Where are new Grid researchers and developers being trained? • How many CS departments have faculty with a focus in Grid computing? • How can we increase the number of students with expertise and experience in Grid computing? • Authors of the Grid Book will not live forever …

  39. Leadership Needed • CS community needs to stand together to accomplish our research agenda • Physicists as a model • Networking, Grid, Collaboratory, Digital Libraries, and other related communities need to learn to interoperate • Scientific community must advance a cohesive vision of computational infrastructure and make a compelling case for why it's needed for the next generation of scientific advances • CS Grid community must engage with application communities who need Grids as a first-class scientific instrument • If we don't show leadership, other communities will plow the same field and hit the same rocks

  40. Thanks • Phil Bourne, Henri Casanova, Jack Dongarra, Mark Ellisman, Ian Foster, Dennis Gannon, Dave Hart, Carl Kesselman, Paul Messina, Phil Papadopoulos, Dan Reed, Mark Sheddon, Larry Smarr, Rick Stevens, Shankar Subramaniam, Mike Vildibill, Rich Wolski • UCSD Grid Lab, SDSC GRAIL Lab, GrADS PIs, TeraGrid team • SDSC Staff, NSF, NASA, DOE • and many others …

  41. Grid Challenges for the Digital Millennium • Grid Functionality and Usability • How do we bring the Grid into prime time? • Bootstrapping the Grid • How can the CS and Disciplinary communities leverage each others’ efforts to create new scientific discoveries? • Scaling the Grid • Policy: How do we develop new policies and paradigms for multi-institutional virtual organizations? • Economics: Who pays and how? • Social Engineering: How can we develop Grids which “play well together”?
