
Presentation Transcript


  1. Programming, Composing, Deploying for the GRID
  Denis Caromel, Institut universitaire de France (IUF), OASIS Team
  INRIA -- CNRS -- I3S -- Univ. of Nice Sophia-Antipolis
  JAOO, Cannes, May 2004
  • 1. Grid principles
  • 2. Distributed Objects and Components: the ObjectWeb ProActive LGPL environment
  • 3. Application: 3D Electromagnetism
  • 4. Towards Peer-To-Peer (P2P)

  2. The GRID. PCs: 1 billion in 2002 (after 25 years); forecast: 2 billion in 2008

  3. The Grid idea
  • GRID = Electric Network: computer power (CPU cycles) <==> electricity
  • Can hardly be stored; if not used --> lost
  • Global management, mutual sharing of the resource
  • But CPU cycles are much harder to share than electricity:
  • Production cannot be adjusted!
  • They cannot really be delivered where needed!
  • No interoperability yet: multiple administrative domains
  • 2 important aspects: Computational Grid + Data Grid

  4. Example Challenge: Scale up
  • 50 machines, 1.5 years of computation
  • 5000 machines, with only 50% efficiency ==> 10 days
  • Applications:
  • Simulate the evolution of the stock market
  • Search for an urgent vaccine
  • Forecast a bush-fire path
  • Forecast the consequences of a flood in real time
  • etc.
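  A quick sanity check of the figures above, assuming the 1.5 years is wall-clock time using all 50 machines:
    50 machines x 1.5 years = 75 machine-years of work
    5000 machines x 50% efficiency = 2500 effective machines
    75 / 2500 years ≈ 0.03 years ≈ 11 days, i.e. roughly the 10 days announced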

  5. Ubiquitous: some numbers
  • PCs in my lab (INRIA Sophia): ~1,500; French Riviera: 1.3 million
  • France: 25 million; Europe: 108 million; USA: 400 million
  • World: 1 billion in 2002 (after 25 years); forecast: 2 billion in 2008
  • France: 36 million cellular phones, 2.2 million laptops, 630 thousand PDAs
  (sources: ITU, Gartner Dataquest, IDC, 2002-2003)

  6. The multiple GRIDs
  • Scientific Grids: parallel machines, clusters; large equipment: telescopes, particle accelerators, etc.
  • Enterprise Grids: data, integration (Web Services); remote connection, security
  • Intranet and Internet Grids (miscalled P2P grids):
  • Desktop office PCs: Desktop Intranet Grid
  • Home PCs: Internet Grid (e.g. SETI@HOME)

  7. Enterprise Grids (diagram: Internet, Apache, servlets, EJB, databases)

  8. Scientific Grids (diagram: large equipment, parallel machines, clusters, Internet)

  9. Internet Grids (diagram: job management over the Internet for embarrassingly parallel applications, e.g. SETI)

  10. The multiple GRIDs
  • Scientific Grids
  • Enterprise Grids
  • Intranet and Internet Grids
  Strong convergence in progress! At least at the infrastructure level, i.e. Web Services

  11. Grid: from enterprise ... to regional Very hard deployment problems … right from the beginning

  12. Grid: from regional ... to worldwide
  • Communication Cannes-Los Angeles: 70 ms at light speed
  • Challenge: hide the latency! Define an adequate programming model

  13. Distributed Objects and Components: ProActive (Programming)

  14. ProActive: a Java API + Tools for the GRID
  Parallel, distributed, mobile activities across the world! (deployment targets: desktop, SMP, LAN, clusters)
  • Model:
  • Remote objects (Active Objects vs. Java RMI)
  • Asynchronous communications, with automatic futures
  • Group communications, migration (computation mobility)
  • Environment:
  • XML deployment, dynamic class loading
  • Various protocols: rsh, ssh, LSF, Globus, PBS, ...
  • Graphical visualization and monitoring: IC2D

  15. Creating Active Objects and Groups
    A ag = newActiveGroup("A", [...], VirtualNode);
    V v = ag.foo(param);
    ...
    v.bar();  // Wait-by-necessity
  • Group, type, and asynchrony are crucial for components and the GRID
  (diagram: a typed group of Java or active objects of type A, returning results of type V)
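  A minimal Java sketch of the pattern on this slide, assuming hypothetical application classes A (whose foo() returns a V) and V; the factory calls mirror the slide's pseudo-code, and the exact package names and method signatures of the 2004 ObjectWeb ProActive API may differ slightly from what is shown.

    import org.objectweb.proactive.ProActive;
    import org.objectweb.proactive.core.group.ProActiveGroup;

    // Hypothetical service class: non-final, with a no-arg constructor,
    // so that ProActive can build the remote stub.
    class A implements java.io.Serializable {
        public A() {}
        public V foo(String param) { return new V(); }
    }

    // Hypothetical result class; instances come back asynchronously as futures.
    class V implements java.io.Serializable {
        public V() {}
        public void bar() { /* ... */ }
    }

    public class GroupExample {
        public static void main(String[] args) throws Exception {
            // A single remote active object of type A (node argument elided here).
            A a = (A) ProActive.newActive(A.class.getName(), new Object[] {});

            // A typed group of active objects of type A, as on the slide
            // (constructor parameters and virtual node elided; signature approximate).
            A ag = (A) ProActiveGroup.newGroup(A.class.getName(), new Object[][] { {}, {} });

            // Asynchronous calls: foo() returns immediately with futures.
            V v = ag.foo("param");

            // ... local work overlaps with the remote computations ...

            // Wait-by-necessity: bar() blocks only if the result has not arrived yet.
            v.bar();
        }
    }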

  16. Mobility
  • Same semantics guaranteed (rendezvous, point-to-point FIFO order, asynchronous)
  • Safe migration (no agent "in the air"!)
  • Local references when possible upon arriving within a VM
  • Tensioning (removal of forwarders)

  17.-23. Mobility (animation of the previous slide: after migration, references reach the object either directly or through a forwarder, which tensioning later removes)
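  As a rough illustration (not from the slides), migration is typically triggered from within the active object itself; the migrateTo primitive is the ProActive call of that period, while the MobileAgent class and the node URL are assumptions made for this sketch.

    import org.objectweb.proactive.ProActive;

    // Illustrative mobile active object: migration is requested from inside the
    // activity itself, so migrateTo() should be the last action of the current request.
    class MobileAgent implements java.io.Serializable {
        public MobileAgent() {}

        public void moveTo(String nodeURL) {
            try {
                // Moves this activity (its state and pending requests) to the JVM
                // identified by nodeURL. Existing references keep working: calls are
                // forwarded, then "tensioned" to point directly at the new location.
                ProActive.migrateTo(nodeURL);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }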

  24. Parallel, Distributed, Hierarchical Components for the Grid (Composing)

  25. A CORBA Component (diagram: the business component offers facets and event sources, requires receptacles and event sinks, and exposes attributes). Courtesy of Philippe Merle, Lille, OpenCCM platform

  26. Building CCM Applications = Assembling CORBA Component Instances (provide + use, but flat assembly)

  27. The Fractal model: Hierarchical Components (diagram: controller + content). Defined by E. Bruneton, T. Coupaye, J.B. Stefani, INRIA & FT

  28. Interface = access point (diagram: controller + content)

  29. Hierarchical model: composites encapsulate primitives, which encapsulate Java code (diagram: controller + content)

  30.-31. Binding = interaction (diagram: controller + content, two animation steps)

  32. Controllers: non-functional properties. Component = runtime entity (diagram: content plus controllers for component identity, life cycle, binding, and content)
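  For concreteness, a small Java sketch of how such controllers are reached through the standard Fractal API; compA, compB and the interface names "output"/"input" are illustrative assumptions, and the two components are supposed to have been created beforehand (e.g. from an ADL factory).

    import org.objectweb.fractal.api.Component;
    import org.objectweb.fractal.util.Fractal;

    class ControllerDemo {
        // compA has a client (required) interface "output"; compB provides "input".
        static void assembleAndStart(Component compA, Component compB) throws Exception {
            // Binding controller: connect A's required interface to B's provided one.
            Fractal.getBindingController(compA)
                   .bindFc("output", compB.getFcInterface("input"));

            // Life-cycle controller: non-functional start of both components.
            Fractal.getLifeCycleController(compA).startFc();
            Fractal.getLifeCycleController(compB).startFc();
        }
    }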

  33. ProActive Components for the GRID
  • 1. Primitive component: Java + legacy code; an activity, a process, ... potentially in its own JVM
  • 2. Composite component: hierarchical, and distributed over machines
  • 3. Parallel and composite component: composite + broadcast (group)

  34. Composing: XML ADL. Example of an XML component file:
  • Primitive-component "cd-player"
      implementation = "CdPlayer"   // Java class with functional code
      Provides interface "input" ...
      Requires ...
      VirtualNode = VNa             // Virtual Node name
  • Composite-component "stereo"
      VirtualNode = VNc, vn ...     // Virtual Node
      Provides ... Requires ...
      Primitive-component "cd-player"
      Primitive-component "speaker"
      Bindings: bind "cd-player.output" to "speaker.input"
  • Merging VNa, VNb ---> VNc       // co-allocation in that case

  35. Groups in Components: a parallel component!
  • Broadcast at binding, on the client interface
  • At composition, on the composite's inner server interface
  (diagram: group proxies broadcasting calls from one component to the inner components A, B, C, D)

  36.-37. Migration capability of composites: migrate sets of components, including composites (two animation steps)

  38.-40. Co-allocation, re-distribution, e.g. upon a communication-intensive phase (three animation steps)

  41. Environment: Deploying

  42. How to deploy on the various kinds of Grids? (diagram combining the enterprise, scientific, and Internet Grid pictures of slides 7-9)

  43. Abstract Deployment Model
  • Problem: difficulties and lack of flexibility in deployment; avoid scripting for configuration, getting nodes, connecting, etc.
  • A key principle: Virtual Node (VN) + XML deployment file
  • Abstracts away from the source code:
  • Machines
  • Creation protocols
  • Lookup and registry protocols
  • Protocols and infrastructures: Globus, ssh, rsh, LSF, PBS, ... Web Services, WSRF, ...
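  In application code this typically amounts to loading the descriptor and activating a virtual node along the following lines. This is a sketch: deployment.xml and VNa are placeholder names, and the descriptor-related classes and methods approximate the ProActive API of that period.

    import org.objectweb.proactive.ProActive;
    import org.objectweb.proactive.core.descriptor.data.ProActiveDescriptor;
    import org.objectweb.proactive.core.descriptor.data.VirtualNode;
    import org.objectweb.proactive.core.node.Node;

    public class DeployExample {
        public static void main(String[] args) throws Exception {
            // Parse the XML deployment descriptor (placeholder file name).
            ProActiveDescriptor pad = ProActive.getProactiveDescriptor("deployment.xml");

            // The source code only knows the virtual node by name; the mapping to
            // machines and protocols lives entirely in the XML file.
            VirtualNode vn = pad.getVirtualNode("VNa");
            vn.activate();                    // actually creates/acquires the JVMs

            Node[] nodes = vn.getNodes();     // nodes ready to host active objects
            System.out.println("Got " + nodes.length + " nodes for VNa");
        }
    }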

  44. Mapping Virtual Nodes: example (1) -- infrastructure information
  JVM on the current host:
    <processDefinition id="linuxJVM">
      <jvmProcess class="org.objectweb.proactive.core.process.JVMNodeProcess"/>
    </processDefinition>
  JVM started using SSH:
    <processDefinition id="sshProcess">
      <sshProcess class="org.objectweb.proactive.core.process.ssh.SSHJVMProcess"
                  hostname="sea.inria.fr">
        <processReference refid="linuxJVM"/>
      </sshProcess>
    </processDefinition>

  45. Mapping Virtual Nodes: example (2) -- infrastructure information (definition of LSF, Globus, ... deployment)
    <processDefinition id="clusterProcess">
      <bsubProcess class="org.objectweb.proactive.core.process.lsf.LSFBSubProcess"
                   hostname="cluster.inria.fr">
        <processReference refid="singleJVM"/>
        <bsubOption>
          <processor>12</processor>
        </bsubOption>
      </bsubProcess>
    </processDefinition>

  46. XML Deployment (not in the source code): separate or co-allocation (diagram: components A, B, C mapped onto virtual nodes VNa, VNb, and VNc = VN(a,b))

  47. IC2D: Interactive Control and Debugging of Distribution

  48. Monitoring of RMI, Globus, Jini, and an LSF cluster, Nice -- Baltimore. ProActive IC2D: width of links proportional to the number of communications

  49. Application

  50. A Parallel Object-Oriented Application for 3D Electromagnetism
  • Visualize the radar reflection of a plane, a medical application on a head, etc.
  • Pre-existing Fortran MPI version: EM3D
  • Jem3D:
  • Sequential object-oriented design, modular and extensible (in Java)
  • The sequential version can be smoothly distributed --> keeping the structuring and object abstractions
  • Efficient distributed version, large domains, Grid environment
