
Parallel and Distributed Simulation Techniques


rpfeiffer




Presentation Transcript


  1. Parallel and Distributed Simulation Techniques Achieving speedups in model execution

  2. The problem • Generating enough samples to draw statistically significant conclusions. • Sequential simulation: low performance, and worse as the problem grows more complex. • SOLUTION: devote more resources (for instance, multiple processors). • GOAL: achieve speedups in obtaining results.

  3. Decomposition alternatives • Independent replications • Parallelizing compilers • Distributed functions • Distributed Events • Model decomposition

  4. Independent replications • (+) Efficient • (-) Memory constraints • (-) Complex models cannot be divided into simpler submodels executing simultaneously
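The idea above can be sketched in a few lines: each replication runs the whole model with its own random seed, so replications are independent and can be farmed out to separate processors. A minimal sketch; the M/M/1 queue, the arrival/service rates, and the function names are illustrative assumptions, not from the slides.

```python
import random

def run_replication(seed, n_customers=1000):
    """One independent replication of a toy M/M/1 queue; returns mean wait."""
    rng = random.Random(seed)          # private stream -> independent runs
    arrival = depart = 0.0
    total_wait = 0.0
    for _ in range(n_customers):
        arrival += rng.expovariate(1.0)          # inter-arrival rate 1.0 (assumed)
        start = max(arrival, depart)             # wait if the server is busy
        depart = start + rng.expovariate(1.25)   # service rate 1.25 (assumed)
        total_wait += start - arrival
    return total_wait / n_customers

# Each seed could run on its own processor (e.g. via multiprocessing.Pool);
# here the replications run sequentially for simplicity.
means = [run_replication(seed) for seed in range(4)]
estimate = sum(means) / len(means)
```

Combining the replication means like this is exactly why the approach is efficient, and also why it fails when a single model instance does not fit in one processor's memory.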

  5. Parallelizing compilers • (+) Transparent to the programmer • (+) Existing code can be reused • (-) Ignores the structure of the problem • (-) Parallelism is not fully exploited

  6. Distributed functions • Support tasks are assigned to independent processors • (+) Transparent to the user • (+) No synchronization problems • (-) Limited speedup (low degree of parallelism)

  7. Distributed Events • Global Event List • When a processor is free, it executes the next event in the list • (+) Speedups • (-) Protocols are needed to preserve consistency • (-) Complex for distributed-memory systems • (+) Effective when much global information is shared
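A global event list like the one above is naturally kept as a min-heap ordered by timestamp; whichever processor is free pops and executes the earliest pending event. A minimal single-threaded sketch (the event names and times are made up for illustration):

```python
import heapq

# Global event list: a min-heap of (timestamp, event) pairs.
event_list = []
heapq.heappush(event_list, (12.0, "depart"))
heapq.heappush(event_list, (3.5, "arrive"))
heapq.heappush(event_list, (7.2, "arrive"))

processed = []
while event_list:
    # A free processor would take the next event in timestamp order.
    timestamp, event = heapq.heappop(event_list)
    processed.append((timestamp, event))
```

With several real processors popping from this shared heap, the list itself becomes the shared state that the consistency protocols on the slide must protect.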

  8. Model decomposition • The model is divided into loosely coupled components. • They interact through message passing. • (+) Speedups when there is little global information. • (+) Exploits the model's parallelism • (-) Synchronization problems

  9. Classification of synchronization mechanisms • Time-driven simulation: the clock advances one tick at a time, responding to all the events corresponding to that tick. • Event-driven simulation: the clock is advanced to the time of the next simulated event. • Synchronous communication: a global clock is used; every processor sees the same simulated time. • Asynchronous communication: each processor uses its own clock.

  10. Time-driven simulation • Time advances in fixed increments (ticks). • Each process simulates the events for the tick. Precision requires short ticks. • Synchronous communication • Every process must finish with a tick before advancing to the next. • Synchronization phase. • Central/Distributed global clock.

  11. Time-driven simulation (cont.) • Asynchronous communication • Higher level of concurrency. • A processor cannot simulate events for a new simulated tick without knowing that the previous ones have finished. • More overhead (synchronization).
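The tick loop described in the last two slides can be sketched as follows. A single-process sketch: the tick size, horizon, and event schedule are illustrative assumptions, and the comment marks where the per-tick synchronization phase would sit in a real multi-process run.

```python
TICK = 0.1       # shorter ticks -> higher precision, more synchronization overhead
N_TICKS = 10

# Illustrative event schedule: simulated time -> event.
events = {0.2: "arrive", 0.5: "depart", 0.9: "arrive"}
log = []
for k in range(1, N_TICKS + 1):
    clock = k * TICK                      # the clock advances one tick at a time
    # Synchronization phase would go here: every process must finish
    # tick k before any process advances to tick k + 1.
    for t, ev in sorted(events.items()):
        if (k - 1) * TICK < t <= clock:   # events falling inside this tick
            log.append((round(clock, 10), ev))
```

Note that the clock visits every tick whether or not anything happens in it; that wasted sweeping is what event-driven simulation removes.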

  12. Event-driven simulation • Time advances due to the occurrence of events. • Higher potential speedups. • Synchronous • Global clock: minimum time of the next event. • Centralized or distributed.

  13. Event-driven simulation (cont.) • Asynchronous • Local clock: minimum time of the next event. • More independence between processes. • Independent events: simulated in parallel • Improved performance
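The contrast with the tick loop is easiest to see in code: the clock leaps directly from one event time to the next instead of sweeping through empty ticks. A minimal sketch with made-up integer event times:

```python
import heapq

# Illustrative pending events as (time, event) pairs.
pending = [(7, "depart"), (3, "arrive"), (40, "arrive")]
heapq.heapify(pending)

clock = 0
jumps = []
while pending:
    t, ev = heapq.heappop(pending)  # earliest pending event
    jumps.append(t - clock)         # the whole gap is skipped in one step
    clock = t
```

The long idle stretch before the last event costs nothing here, which is where the higher potential speedup of event-driven simulation comes from.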

  14. PDES (Parallel Discrete Event Simulation) • Asynchronous, event-based simulation. • Logical Processes (LPs) represent physical processes. • Every LP uses a clock, a local event list, and two link lists (one input, one output). • Each LP holds a part of the global state.
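The LP structure listed above can be written down directly. A structural sketch only; the field names, the `schedule` helper, and the example LP are assumptions made for illustration:

```python
from dataclasses import dataclass, field
import heapq

@dataclass
class LogicalProcess:
    """A PDES logical process: local clock, local event list, and one
    input/output link per neighbouring LP."""
    name: str
    clock: float = 0.0
    event_list: list = field(default_factory=list)   # min-heap of (time, event)
    input_links: dict = field(default_factory=dict)  # neighbour -> message queue
    output_links: dict = field(default_factory=dict)
    state: dict = field(default_factory=dict)        # its share of the global state

    def schedule(self, time, event):
        heapq.heappush(self.event_list, (time, event))

lp = LogicalProcess("router")     # hypothetical LP modelling one physical process
lp.schedule(5.0, "packet")
lp.schedule(2.0, "packet")
```

Keeping the event list as a heap means the LP always sees its lowest-timestamp event first, which is what the causality constraint on the next slide requires.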

  15. Causality • Causality errors. • They cannot occur if we meet the Local Causality Constraint: each LP processes events in increasing timestamp order. • This is a sufficient (but not necessary) condition to avoid causality errors. • CONSERVATIVE strategies: avoid causality errors. • OPTIMISTIC strategies: errors can occur, and they are fixed.

  16. Conservative mechanisms (Chandy/Misra) • When is it safe to process an event? • When a message (representing an event) carries a timestamp and NO other message with a smaller timestamp can arrive, the event can be processed:

  While there is an unprocessed message on each input link:
    Clock_i = min(input links' timestamps);
    Process EVERY message with that timestamp;
  If the link with the smallest timestamp has no message to process then
    Block the Logical Process.
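The loop above can be sketched for a single LP. A sketch under the slide's assumptions (FIFO links, one queue per input link); the function name, the link names, and the message contents are illustrative:

```python
from collections import deque

def conservative_step(clock, input_links):
    """One Chandy/Misra step: an event is safe only when every input link
    has a pending message, so no smaller timestamp can still arrive.
    input_links maps a link name to a deque of (timestamp, event) messages."""
    if any(not q for q in input_links.values()):
        return clock, None, True            # some link is empty -> block the LP
    safe_time = min(q[0][0] for q in input_links.values())
    processed = []
    for q in input_links.values():
        while q and q[0][0] == safe_time:   # process EVERY message at that time
            processed.append(q.popleft()[1])
    return safe_time, processed, False

links = {"A": deque([(4.0, "a1"), (9.0, "a2")]),
         "B": deque([(4.0, "b1")])}
clock, done, blocked = conservative_step(0.0, links)        # safe: both links full
_, _, blocked_next = conservative_step(clock, links)        # link B now empty -> block
```

The second call shows exactly how the deadlock of the next slide arises: the LP blocks on an empty link even though it may never receive another message on it.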

  17. Deadlock in pessimistic strategies • A cycle of LPs, each blocked waiting for a message on an empty input link, can wait forever.

  18. Null-message strategies • A null message from A to B is a "promise" that A will not send B any message with a timestamp smaller than the null message's. • Input links can be used to compute the minimum future timestamp. • When an event is processed, a null message carrying this lower bound is sent. • The receiver computes new output bounds using this as a base, and sends them to its neighbors.
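The "promise" can be written as one line once a lookahead is available. A minimal sketch; the lookahead value and function name are assumptions for illustration:

```python
LOOKAHEAD = 2.0   # assumed: minimum delay an LP adds to any message it forwards

def null_message(local_clock, lookahead=LOOKAHEAD):
    """A null message from A: A promises to send nothing with a timestamp
    below local_clock + lookahead."""
    return ("null", local_clock + lookahead)

# A has processed events up to t = 10, so it promises nothing before t = 12.
kind, bound = null_message(10.0)
# On receipt, B takes min(bound, its other input bounds) as its new safe time,
# computes its own output bounds from it, and propagates null messages downstream.
```

Since each hop adds a positive lookahead, the bounds in a cycle keep increasing, which is why cycles with zero timestamp increment (next slide) must be excluded.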

  19. Avoiding deadlock • There must be no cycle in which the timestamp increment is 0. • Null messages can also be sent on request. • Better results are obtained with lookahead on the timestamp values.

  20. Analysing pessimistic strategies • The degree of lookahead is critical. • Parallelism cannot be exploited to the maximum. • Not robust: changes in the application mean changing the lookahead times. • The programmer should provide lookahead information. • Trade-off: optimistic strategies risk causality errors; pessimistic strategies give reduced performance.
