
Extending Software Component Model to Simplify Application Development


Presentation Transcript


1. Extending Software Component Model to Simplify Application Development
Christian Perez, Christian.Perez@inria.fr
IRISA, INRIA Rennes, France
PARIS Seminar, Dinard, November 29th, 2007

2. Acknowledgement
Contributors to this presentation:
• Thierry Priol (PARIS project-team leader)
• André Ribes (PhD, 2004)
• Sébastien Lacour (PhD, 2005)
• Hinde Bouziane (PhD, 2008)
• Julien Bigot (PhD, 20xx)
• Landry Breuil (Engineer)
• Raul Lopez (Engineer)
• Jocelyne Erhel (SAGE INRIA project-team leader)
• Caroline de Dieuleveut (PhD)
• And many others!

3. Scientific Applications: Numerical Simulations
• More accurate, more realistic
• Multi-physics simulations: one code for each physics
• Codes can be parallel
• Distributed and parallel features
[Figures: a satellite coupling multiple physics codes (structural mechanics, optics, thermal, dynamics); electromagnetism: plane + antenna interactions; flow of a fluid: transport + flowing simulations; discrete simulation]

4. The Grid Computer
• Hierarchical networks
  • WAN: Internet, VTHD, Grid'5000, etc.
  • LAN: Ethernet
  • SAN: Myrinet, Quadrics
• Computing resources
  • Homogeneous clusters
  • NUMA nodes
  • Heterogeneous nodes: Cell, GPU, …
  • Parallel computers
• Fast evolution! Heterogeneity!
[Figure: homogeneous clusters and a parallel computer on SANs, interconnected through LAN and WAN, plus a visualization node]

5. Applications on Resources?
[Figure: the multi-physics codes (structural mechanics, optics, thermal dynamics, visualization) facing the resources (homogeneous clusters and a supercomputer on SANs, linked by LAN/WAN)]

6. Applications on Resources? Programming Environment (Virtualization)
[Figure: the same picture, with a programming environment inserted between the application codes and the resources to virtualize them]

7. Resources and programming models
• Resources are utilized through a programming model
• Sequential resources, sequential programming model
  • From assembly languages to imperative/functional/OO languages
• Parallel resources, parallel programming model
  • From message passing/threads to data-parallel/task/mixed languages

8. 1st Evolution: Code Reuse
• Motivation
  • Do not reinvent the wheel every day
  • Applications can be made of different building blocks
  • Should support several programming models
• Possible solutions
  • Libraries: sequential codes are written in several languages
  • Interface Definition Languages
• Need for a common model to act as a gateway
  • Sequential applications: the libc standard (ELF)

9. 2nd Evolution: the Grid Concept
• Motivation
  • On-demand computing
  • Applications do not know their execution environment
  • Should abstract from the resources
• Possible approaches
  • Virtual machines
  • JIT compilation
• Need to control the compilation from the programming model down to the resource model (OS model?)

10. What is a Software Component?
• A technology that advocates composition
• Aim of a component: to be composed!
• Consequence: two distinct stages
  • Fabrication stage
  • Reuse stage

11. Software Component: Black Box and Ports
• A component is a black box that interacts through its ports
• Port ≈ access point
  • Name
  • Description and protocol
  • An abstract notion, without implementation
• Consequence: separation of the interface and the implementation
  • Type of a component = the set of its ports
  • A component may have several implementations
    • Different languages/OS/architectures
    • Different algorithms (see the sketch below)
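To make the interface/implementation split concrete, here is a minimal C++ sketch, tied to no particular component model (SolverPort, FastSolver, and NewtonSolver are invented names): one abstract port, and two implementations with different algorithms behind the same component type.

```cpp
// Minimal sketch: a port as a pure interface, two implementations.
#include <cmath>
#include <cstdio>

struct SolverPort {                       // the port: name + protocol, no implementation
    virtual double sqrt_of(double x) = 0;
    virtual ~SolverPort() = default;
};

struct FastSolver : SolverPort {          // implementation 1: library call
    double sqrt_of(double x) override { return std::sqrt(x); }
};

struct NewtonSolver : SolverPort {        // implementation 2: a different algorithm
    double sqrt_of(double x) override {
        double r = x > 1 ? x : 1;
        for (int i = 0; i < 30; ++i) r = 0.5 * (r + x / r);  // Newton iteration
        return r;
    }
};

int main() {
    NewtonSolver n; FastSolver f;
    SolverPort* port = &n;                // clients see only the port type
    std::printf("%g %g\n", port->sqrt_of(2.0), f.sqrt_of(2.0));
}
```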

12. Assembly of Software Components
• Description of an assembly
  • Dynamically (API); see the sketch below
  • Statically: Architecture Description Language (ADL)
• An ADL describes
  • Component instances
  • Port connections
• Available in many component models
• Primitive and composite components
[Figure: a composite component containing instances C1 to C5 wired through their ports]
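A hedged C++ sketch of the dynamic (API-based) route; Component, Port, and connect are hypothetical stand-ins, not the CCM or Fractal API, but they show what an assembly description does: instantiate components and connect used ports to provided ports.

```cpp
// Hypothetical dynamic-assembly sketch (illustrative names only).
#include <map>
#include <string>

struct Port { virtual ~Port() = default; };

struct Component {
    std::map<std::string, Port*> provided;           // facets
    std::map<std::string, Port*> used;               // receptacles
    void connect(const std::string& receptacle, Component& peer,
                 const std::string& facet) {
        used[receptacle] = peer.provided.at(facet);  // one port connection
    }
};

int main() {
    Component c1, c2;
    Port p;
    c2.provided["to_client"] = &p;
    c1.connect("to_server", c2, "to_client");        // C1.to_server -> C2.to_client
}
```

An ADL expresses exactly the same instances and connections, but declaratively and statically.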

13. Component-based middleware for computational Grids
• Apply modern software practices to scientific computing
• High performance & scalability
  • Specialized component models for high-performance computing (clusters & grids): code-coupling applications, parametric applications
  • Increase the level of abstraction: SPMD paradigm (MxN communication), master-worker paradigm, data sharing paradigm
  • High-performance communication: independence with respect to the networking technologies
• Deployment: map components to available resources
• Technology independent (CCM, CCA, Fractal, CoreGRID GCM)
[Figure: a master-slaves component and SPMD components coupling the multi-physics codes]

14. Application in hydrogeology: saltwater intrusion
• Coupled physical models: one model = one software
• Saltwater intrusion: flow / transport
• Reactive transport: transport / chemistry
• Hydrogrid project, supported by the French ACI-GRID
[Figure: the flow gives velocity and pressure as a function of the density; the density is a function of the salt concentration; the salt is transported by convection (velocity) and diffusion]

15. (Identical to slide 14.)

16. Numerical coupling in saltwater intrusion
[Figure: a control component drives the Flow and Transport components; at each time step, from t = 0 to t = T, an iterative scheme couples the two codes]
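The iterative scheme can be sketched as a simple control loop; Flow and Transport below are hypothetical stand-ins for the coupled component codes, not the project's actual interfaces.

```cpp
// Sketch of the per-timestep coupling driven by the control component.
struct Flow      { void compute(double dt) { /* velocity, pressure from density */ } };
struct Transport { void compute(double dt) { /* salt concentration from velocity */ } };

int main() {
    Flow flow; Transport transport;
    const double dt = 0.1, T = 10.0;
    for (double t = 0.0; t < T; t += dt) {   // iterative scheme at each time step
        flow.compute(dt);                    // flow uses the current density
        transport.compute(dt);               // transport uses the new velocity
    }
}
```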

17. SPMD Components
[Figure: what the application designer should see: HPC Component A connected to HPC Component B… and how it must be implemented: each HPC component is a set of Component A / Component B processes with parallel connections]

18. GridCCM Component
[Figure: the CoPa component is a collection of SPMD processes (Comp. A-0 … Comp. A-4) exposing the to_client and to_server ports]

IDL2:
  interface IExample {
    void factorise(in Matrix mat);
  };

IDL3:
  component CoPa {
    provides IExample to_client;
    uses Itfaces2 to_server;
  };

XML (non-functional property of a component implementation):
  Component: CoPa
  Port: to_client
  Name: IExample.factorise
  Type: Parallel
  Argument1: Basic_BC[*, bloc]
  ReturnArgument: noReduction

19. Components for code coupling: the SPMD paradigm in GridCCM
• SPMD component
  • Parallelism is a non-functional property of a component: it is an implementation issue
  • A collection of sequential components with an SPMD execution model
• Support of distributed arguments
  • API for data redistribution
  • API for communication scheduling w.r.t. network properties
• Support of parallel exceptions
[Figure: the GridCCM runtime sits between the application and the CORBA stub/skeleton and ORB of components A and B; it manages the application view (data distribution description) and the communications (communication matrix computation, communication matrix scheduling, communication execution), on top of redistribution, scheduling, and communication (e.g., MPI) libraries]
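The "communication matrix computation" step can be illustrated on the simplest case: a 1-D block-distributed array redistributed from M sender processes to N receiver processes. This is a sketch of the general idea, not GridCCM's code; each non-empty overlap of block intervals yields one message.

```cpp
// Communication matrix for a 1-D block redistribution (illustrative).
#include <algorithm>
#include <cstdio>

int main() {
    const int len = 100, M = 4, N = 5;                     // array length, process counts
    for (int s = 0; s < M; ++s) {
        int sb = s * len / M, se = (s + 1) * len / M;      // sender s owns [sb, se)
        for (int r = 0; r < N; ++r) {
            int rb = r * len / N, re = (r + 1) * len / N;  // receiver r owns [rb, re)
            int lo = std::max(sb, rb), hi = std::min(se, re);
            if (lo < hi)                                   // overlap => one message
                std::printf("send %d -> %d : elements [%d, %d)\n", s, r, lo, hi);
        }
    }
}
```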

20. Let's come back to applications

21. Application in hydrogeology: saltwater intrusion (recap of slide 14)

22. Reactive transport in hydrogeology
[Figure: the processes at play: gas-liquid exchange, release, dissolution, convection, dispersion, sorption, precipitation, biology, aqueous reactions]

23. Numerical coupling in reactive transport
[Figure: a control component drives the chemistry on the N species and the N transport codes (Transport 1 … Transport N); at each time step, from t = 0 to t = T, an iterative scheme couples them]

24. Component model for reactive transport
[Figure: a controller component connected to one chemistry component and N transport components, exchanging the concentrations]

25. Generalization of the problem
• Simultaneous independent computations (~ForAll loop); see the sketch below
• Dedicated APIs/environments: BOINC, XTremWEB, DIET, NetSolve, Nimrod/G, …
[Figure: a master dispatching requests to a collection of workers; request transport, scheduling, and fault tolerance sit in between]
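A plain-threads sketch of such a ForAll loop of independent requests; compute() is a placeholder work unit, and real deployments would use the environments listed above rather than local threads.

```cpp
// ForAll over independent requests: no dependences between iterations.
#include <cstdio>
#include <thread>
#include <vector>

double compute(int request) { return request * request; }   // independent work unit

int main() {
    const int nRequests = 8;
    std::vector<double> results(nRequests);
    std::vector<std::thread> workers;
    for (int i = 0; i < nRequests; ++i)                      // each write hits its own slot
        workers.emplace_back([&results, i] { results[i] = compute(i); });
    for (auto& w : workers) w.join();
    for (int i = 0; i < nRequests; ++i) std::printf("r[%d] = %g\n", i, results[i]);
}
```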

26. Limits with existing component models
• Master wired directly to the workers
  • Resource infrastructure dependence (SMP machine vs cluster)
  • The request delivery implementation falls on the master
  • No transparency, complex
• Master wired through a hand-written proxy
  • The request delivery policy is outside the master
  • Still resource infrastructure dependent!
  • For grids, complex request delivery implementations
  • No transparency
[Figure: a master m connected to workers w … w10 directly, and the same assembly going through a proxy]

27. High-performance components for parametric codes: the master-worker paradigm
• Programmer view (abstract ADL)
  • A collection of components: a simple model for application developers
  • A request is delivered to one instance of the collection
  • The selection of the collection instance is delegated to a request transport policy (see the sketch below)
  • Enables the use of existing master-worker environments (DIET, …)
• Resource infrastructure independence
  • No dealing with the number of workers
  • No dealing with request delivery concerns
• Valid for ADL-based and non-ADL-based component models
• Preliminary results
  • Fluid motion estimation, experiment on Grid'5000
  • 4003 components, 974 processors, 7 sites
  • Speedup of 213 with a round-robin pattern
[Figure: the concrete ADL maps the abstract master-worker assembly onto the resources: exposed provided port binding, XML collection definition, a set of "request delivery" patterns (e.g., a round-robin proxy), and the number of workers & policy pattern selection]
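As referenced above, here is a sketch of one "request delivery" pattern: a round-robin proxy that hides the worker collection behind a single port. All names are illustrative, not GridCCM or ADL syntax.

```cpp
// Round-robin request delivery: the master sees one port, the proxy
// picks the next worker in the collection for each request.
#include <cstdio>
#include <vector>

struct Worker {
    int id;
    double handle(double x) { return x * x; }   // placeholder computation
};

struct RoundRobinProxy {                        // the pattern chosen in the concrete ADL
    std::vector<Worker>& workers;
    std::size_t next;
    double deliver(double x) {
        Worker& w = workers[next];
        next = (next + 1) % workers.size();
        return w.handle(x);
    }
};

int main() {
    std::vector<Worker> pool{{0}, {1}, {2}};    // collection of 3 workers
    RoundRobinProxy proxy{pool, 0};
    for (int i = 0; i < 6; ++i) std::printf("%g\n", proxy.deliver(i));
}
```

Swapping the pattern (e.g., for a scheduler-driven one) changes only the concrete ADL, not the master.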

28. Composition Operators: Data Sharing

29. Problem overview
• Multiple concurrent accesses to a piece of data
• Data localization and management
  • Intra-process: local shared memory
  • Intra-machine: shared memory segment
  • Intra-cluster: distributed shared memory (DSM)
  • Intra-grid: grid data sharing service (JuxMem)
[Figure: components A, B, C, D performing read / write accesses to the same piece of data]

30. Limits with data sharing in component models
• Ports are active communication operations: data must be part of a message
• Centralized approach
  • Bottleneck for the performance
  • Single point of failure
• Distributed approach
  • Explicit management of data replication/migration by the components
  • Functional code mixed with data-management code
[Figure: one central data component versus data replicated inside each component]

31. Our proposal: the data port model
• Transparent data localization and management (local shared memory, DSM, JuxMem, …)
• Principle: rely on a data sharing management service, through shares and accesses ports

  component D {
    shares array<float> u;
  };

  component A {
    accesses array<float> v;
  };

  interface data_handling {
    float* get_pointer();
    long get_size();
    void acquire();
    void acquire_read();
    void release();
  };

[Figure: components A and D attached through a data_ref to the data held by the data sharing management service]
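A sketch of how a component body would drive the data_handling interface above; LocalData is a trivial in-process stand-in for the data sharing service (which in reality may be a DSM or JuxMem), so the acquire/release protocol can be shown end to end.

```cpp
// Using the data_handling access protocol (in-process stand-in).
#include <cstdio>
#include <mutex>

class LocalData {                         // stand-in for the shared data
    float buf[4] = {1, 2, 3, 4};
    std::mutex m;
public:
    float* get_pointer() { return buf; }
    long   get_size()    { return 4; }
    void   acquire()      { m.lock(); }   // exclusive (write) access
    void   acquire_read() { m.lock(); }   // shared access (simplified here)
    void   release()      { m.unlock(); }
};

int main() {
    LocalData d;
    d.acquire();                          // the 'accesses' side: lock, touch, release
    float* v = d.get_pointer();
    for (long i = 0; i < d.get_size(); ++i) v[i] *= 2.0f;
    d.release();
    d.acquire_read();
    std::printf("u[0] = %g\n", v[0]);
    d.release();
}
```

The functional code never copies or migrates the data itself; localization stays inside the service.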

  32. Data Hierarchical Component Model

33. CFD and CEM applications
• Neighbor-based communications:

  DoSeq
    ForPar each elementary block of Ω:
      Communicate with neighbors
      Compute locally

• Overlapping or non-overlapping mesh partitioning
• Complex communication patterns
[Figure: partitioning of the domain Ω into elementary blocks]
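One time step of the DoSeq/ForPar pattern, sketched in plain MPI for a 1-D block partition with halo exchange; the talk's point is precisely to wrap such patterns inside components rather than writing them by hand.

```cpp
// Halo exchange + local compute for a 1-D block partition of Ω.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int n = 16;                            // local block, plus 2 halo cells
    std::vector<double> u(n + 2, rank);
    int left  = (rank > 0)        ? rank - 1 : MPI_PROC_NULL;
    int right = (rank < size - 1) ? rank + 1 : MPI_PROC_NULL;

    // Communicate with neighbors: fill the halo cells.
    MPI_Sendrecv(&u[1],     1, MPI_DOUBLE, left,  0,
                 &u[n + 1], 1, MPI_DOUBLE, right, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    MPI_Sendrecv(&u[n],     1, MPI_DOUBLE, right, 1,
                 &u[0],     1, MPI_DOUBLE, left,  1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    // Compute locally on the block using the fresh halo values.
    std::vector<double> v(u);
    for (int i = 1; i <= n; ++i) v[i] = (u[i - 1] + u[i] + u[i + 1]) / 3.0;

    MPI_Finalize();
}
```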

34. Resource hierarchy representation
[Figure: the hierarchy is built by the runtime; level 0 is the grid group (0), level 1 the cluster groups (0.0 and 0.1 for Cluster1 and Cluster2), level 2 the processes (0.0.0, 0.0.1, 0.0.2, 0.1.0, 0.1.1), i.e., MPI ranks 0 to 4; the clusters' SANs are interconnected through a WAN]
• Hierarchy id: hid
• Hierarchy navigation API
  • void HMPI_GET_SELF(&hid)
  • void HMPI_GET_PARENT(&hid)
  • void HMPI_GET_SONS(&array_of_hid, max)
  • plus operations to get the number of sons, the number of siblings, the hids of the siblings, …
• Resource properties API
  • double HMPI_GET_LATENCY(hid1, hid2)
  • double HMPI_GET_BANDWIDTH(hid1, hid2)
  • …
• Note: the deepest level is the process level
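Since a hid such as 0.1.1 is a path in the resource tree, navigation reduces to path manipulation. The self-contained sketch below only illustrates that concept; it is not an implementation of the HMPI_* calls listed above.

```cpp
// The hid concept: a dotted id is a path, so parent/level are trivial.
#include <cstdio>
#include <vector>

using Hid = std::vector<int>;                    // 0.1.1 -> {0, 1, 1}

Hid parent(const Hid& h) { return Hid(h.begin(), h.end() - 1); }
int level(const Hid& h)  { return static_cast<int>(h.size()) - 1; }

void print(const Hid& h) {
    for (std::size_t i = 0; i < h.size(); ++i)
        std::printf(i ? ".%d" : "%d", h[i]);
    std::printf("\n");
}

int main() {
    Hid self{0, 1, 1};                           // a leaf, i.e., one MPI process
    print(self);                                 // 0.1.1 (level 2)
    print(parent(self));                         // 0.1   (its cluster group)
    std::printf("level %d\n", level(self));
}
```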

35. Communication Operation Extension
• Collective operations
  • A hid implicitly defines a group, enabling collective operations on it
  • HMPI_REDUCE(sbuf, rbuf, nb, datatype, op, root, HMPI_COMM_WORLD, group)
  • Examples (for the hierarchy of slide 34)
    • Group = 0, root = 0.1.0 → mpi_reduce on {0, …, 4} with root = 3
    • Group = 0.0, root = 0.0.1 → mpi_reduce on {0, 1, 2} with root = 1
• Point-to-point operations
  • Replace the MPI id by a hid
  • Same semantics as MPI if the hid is a leaf: HMPI_SEND(…, 0.1.1) → MPI_SEND(…, 4)
  • MxN semantics if the hid is not a leaf: HMPI_SEND(…, 0.1)
[Figure: data at level 0 versus data at level 1 across the hierarchy 0, 0.0, 0.1, 0.0.0 … 0.1.1]
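The examples above map a hid group to plain MPI ranks by collecting the leaves under the subtree it names. Below is a self-contained sketch of that mapping, with the slide-34 hierarchy hard-coded as an assumption; HMPI itself would derive it from the runtime-built tree.

```cpp
// Mapping an hid group to MPI ranks: collect the leaves of its subtree.
#include <cstdio>
#include <string>
#include <vector>

int main() {
    // Leaves of the example hierarchy, in MPI rank order (ranks 0..4).
    std::vector<std::string> leaves{"0.0.0", "0.0.1", "0.0.2", "0.1.0", "0.1.1"};
    std::string group = "0.0";                       // hid naming a subtree
    std::vector<int> ranks;
    for (int r = 0; r < static_cast<int>(leaves.size()); ++r)
        if (leaves[r] == group || leaves[r].rfind(group + ".", 0) == 0)
            ranks.push_back(r);                      // leaf lies under the group
    std::printf("HMPI group %s -> MPI ranks:", group.c_str());
    for (int r : ranks) std::printf(" %d", r);       // prints: 0 1 2
    std::printf("\n");
}
```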

36. High-level component model
• Hierarchy configuration is too complex

37. Hierarchical Component Definition
IDL file:

  DataType 3DMatrix;

  interface anInterface {
    exception BadConfiguration;
    void calculate(3DMatrix m) raises (BadConfiguration);
  };

  hierarchical component aComponent {
    manages 3DMatrix M;
    provides anInterface P;
  };

[Figure: the aComponent component providing port P and managing a 3DMatrix]

38. Hierarchical Component Implementation Description
CIDL file:

  composition session aComponentSession {
    home executor aComponentHomeImpl {
      implements aComponentHome;
    };
    component executor aComponentImpl {
      implements aComponent;
    };
    distribution library DL {
      handles M;
    };
    hierarchy manager HM {
      implements hierarchy aCompHierarchy;
    };
  };

39. Conclusion
• Software components are a promising technology for handling code & grid complexity
• Adapt component models to applications
  • MxN components
  • Master-worker paradigm
  • Data sharing between components
  • Data Hierarchical Component Model: collective communications between components
• Other dimensions to take into account
  • Temporal composition (see Hinde's talk)
  • Aspects
• Control the mapping of applications onto resources
  • Deployment (see Boris' talk)
