
Parallelisation of Wave Propagation Algorithms for Odour Propagation in Multi-Agent Systems

Eugen Dedu, Supélec · Stéphane Vialle, Supélec · Claude Timsit, University of Versailles, France. IWCC 2001, September 1-6, Mangalia, Romania.




Presentation Transcript


  1. Parallelisation of Wave Propagation Algorithms for Odour Propagation in Multi-Agent Systems
  Eugen Dedu, Supélec
  Stéphane Vialle, Supélec
  Claude Timsit, University of Versailles, France
  IWCC 2001, September 1-6, Mangalia, Romania

  2. Motivations and context
  [figure: grid environment with obstacles, agents and resources]
  • Large & distributed problem → too complex for total planning → distributed computing: sMAS & self-organisation
  • Example: carrying agents avoiding obstacles; resources spread a potential
  • High execution times, especially for wave propagation → optimisation & parallelisation
  • Global data access during the simulation → shared memory most appropriate

  3. Wave propagation model
  [figure: potential grids for one resource and for several resources]
  • 1 resource: p = pR − d (resource potential pR minus distance d)
  • Several resources: p = max pi
  • Waves avoid obstacles
  • Simple (& fast) hypotheses, working in our AI simulations
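The potential formula above can be illustrated with a minimal sketch. The coordinate convention, the `resources` mapping and the function name are illustrative assumptions; Manhattan distance is used because, on an obstacle-free grid, the path distance reduces to it (with obstacles, d would be the path distance around them):

```python
# Sketch of the potential model on an obstacle-free grid (assumption:
# without obstacles, the path distance is the Manhattan distance).
def potential(square, resources):
    """p = max_i (pRi - di): each resource Ri with potential pRi
    contributes pRi minus its distance di; the square keeps the max."""
    x, y = square
    return max(pr - (abs(x - rx) + abs(y - ry))
               for (rx, ry), pr in resources.items())

# One resource of potential 6 at (0, 0): a square at distance 4 gets 6 - 4 = 2.
print(potential((2, 2), {(0, 0): 6}))             # -> 2
# Several resources: the strongest contribution wins.
print(potential((2, 2), {(0, 0): 6, (2, 3): 5}))  # -> 4
```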

  4. Sequential, recursive method
  [figure: depth-first vs. breadth-first propagation on a small grid]
  • Depth-first or breadth-first traversal from each resource
  • Fine-grained square update → fewer updates, but more overhead per update
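The breadth-first variant can be sketched as follows; the grid representation (lists of characters, '#' for an obstacle) and the function name are illustrative assumptions, not the authors' implementation. A square is re-enqueued only when its potential actually improves, which is the "only needed updates" property of the recursive method:

```python
from collections import deque

def propagate_bfs(grid, resources):
    """Breadth-first wave propagation: a square is (re)enqueued only
    when its potential actually improves (the 'needed updates' property)."""
    rows, cols = len(grid), len(grid[0])
    pot = [[0] * cols for _ in range(rows)]
    queue = deque()
    for (r, c), p in resources.items():   # seed the wave at each resource
        pot[r][c] = p
        queue.append((r, c))
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != '#':
                new_p = pot[r][c] - 1     # potential decreases with distance
                if new_p > pot[nr][nc]:   # update only squares that improve
                    pot[nr][nc] = new_p
                    queue.append((nr, nc))
    return pot

# 3x3 grid with one obstacle; resource of potential 4 in the corner.
grid = [list("..."), list(".#."), list("...")]
print(propagate_bfs(grid, {(0, 0): 4}))  # -> [[4, 3, 2], [3, 0, 1], [2, 1, 0]]
```

Note that the wave goes around the obstacle: the far corner is at path distance 4, not Manhattan distance 4... the obstacle forces the detour seen in the bottom-right values.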

  5. Sequential, iterative method
  • put potential of resources
  • repeat
    • for each square: p = max pi − 1
  • until no modification
  Systematic & simple → numerous updates, but less overhead
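The loop above can be sketched directly (same illustrative grid representation as before, with '#' marking an obstacle). Every square is visited on every sweep whether or not it changes, which is where the "numerous updates" come from:

```python
def propagate_iterative(grid, resources):
    """Iterative relaxation: sweep every square, taking
    p = max(neighbour potentials) - 1, until no modification."""
    rows, cols = len(grid), len(grid[0])
    pot = [[0] * cols for _ in range(rows)]
    for (r, c), p in resources.items():   # put potential of resources
        pot[r][c] = p
    changed = True
    while changed:                        # repeat ... until no modification
        changed = False
        for r in range(rows):             # systematic: every square, every sweep
            for c in range(cols):
                if grid[r][c] == '#':     # obstacles block the wave
                    continue
                best = max((pot[nr][nc]
                            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1))
                            if 0 <= nr < rows and 0 <= nc < cols),
                           default=0) - 1
                if best > pot[r][c]:
                    pot[r][c] = best
                    changed = True
    return pot

grid = [list("..."), list(".#."), list("...")]
print(propagate_iterative(grid, {(0, 0): 4}))  # -> [[4, 3, 2], [3, 0, 1], [2, 1, 0]]
```

Both methods converge to the same fixpoint; they differ only in how many square updates and how much per-update overhead they spend getting there.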

  6. Iterative vs. recursive: conditions favouring each method
                              Recursive    Iterative
    Number of obstacles       numerous     very few
    Potential of resources    small        high

  7. Parallel method: domain decomposition
  • Steps: domain propagation → frontier exchange → frontier propagation
  • Advantage: small data transfer
  • Drawback: several repropagations
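The three-step cycle can be sketched sequentially (one strip of rows per simulated processor; names and the strip-based split are illustrative assumptions, and a real implementation would run the strips in parallel threads with explicit frontier-row exchanges). The outer loop exhibits the stated drawback: strips must be re-propagated until frontier exchanges bring no new updates:

```python
def relax(pot, blocked, top, bottom):
    """One relaxation sweep over rows [top, bottom); reads of pot outside
    the strip are the frontier accesses. Returns True if anything changed."""
    changed = False
    rows, cols = len(pot), len(pot[0])
    for r in range(top, bottom):
        for c in range(cols):
            if blocked[r][c]:
                continue
            for nr, nc in ((r-1, c), (r+1, c), (r, c-1), (r, c+1)):
                if 0 <= nr < rows and 0 <= nc < cols and pot[nr][nc] - 1 > pot[r][c]:
                    pot[r][c] = pot[nr][nc] - 1
                    changed = True
    return changed

def propagate_decomposed(grid, resources, n_domains=2):
    """Domain decomposition, simulated sequentially: each 'processor' owns
    a strip of rows and propagates it to a fixpoint; the outer loop repeats
    (the repropagations) until frontier exchanges bring no new updates."""
    rows, cols = len(grid), len(grid[0])
    blocked = [[ch == '#' for ch in row] for row in grid]
    pot = [[0] * cols for _ in range(rows)]
    for (r, c), p in resources.items():
        pot[r][c] = p
    step = rows // n_domains
    bounds = [(i * step, rows if i == n_domains - 1 else (i + 1) * step)
              for i in range(n_domains)]
    changed = True
    while changed:
        changed = False
        for top, bottom in bounds:        # domain propagation, strip by strip
            while relax(pot, blocked, top, bottom):
                changed = True
    return pot

grid = [list("..."), list(".#."), list("...")]
print(propagate_decomposed(grid, {(0, 0): 4}))  # -> [[4, 3, 2], [3, 0, 1], [2, 1, 0]]
```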

  8. Parallel method: private environments
  [figure: processors P1, P2, P3, each propagating in a private copy of the environment]
  • Advantage: avoids repropagations
  • Drawbacks: cache misses, higher memory requirements

  9. Performance, execution time (SMP, Sparc, 4 processors)
  [figure: execution times of the recursive domain, iterative domain and recursive private methods]
  Parameters: obstacles 0%, resources 1%, potential 16

  10. Performance, execution time (DSM, Origin2000, 64 processors)
  [figure: execution times of the recursive domain, iterative domain and recursive private methods]
  Parameters: obstacles 16%, resources 1%, potential 8

  11. Performance, execution time (DSM, Origin2000, 64 processors)
  [figure: execution times of the recursive domain, iterative domain and recursive private methods]
  Parameters: obstacles 0%, resources 1%, potential 16

  12. Performance, theoretical speed-up ("user point of view")
  • Best combination: use the iterative method for domain propagation and the recursive method for frontier propagation

  13. Current results and future directions
  • Current results
    • Sequential methods:
      • Recursive: fine-grained, only needed updates, but more overhead
      • Iterative: systematic & simple, less overhead
    • Parallelisation methods:
      • Frontier exchanges: small data transfer, but repropagations
      • Private environments: avoid repropagations, but cache misses
  • Future research
    • Implement & evaluate other sequential and parallel methods
    • Measure performance on clusters offering shared-memory semantics

  14. Final goal
  • Parallel programming paradigm chosen: shared-memory programming (easier for the end user)
  • Target architectures: shared-memory, distributed shared-memory, and clusters of workstations (distributed memory)
  • Clusters of workstations: cheap, and the user can upgrade frequently → expected to stay ahead of the best sequential machine

  15. Questions...
