Implementing Synchronous Models on Loosely Time Triggered Architectures

  1. Implementing Synchronous Models on Loosely Time Triggered Architectures Discussed by Alberto Puggelli

  2. Outline • From synchronous to LTTA • Analysis of system throughput

  3. Synchronous is good… • Predictability • Theoretical backing • Easy verification • Automated synthesis • …..

  4. … but difficult to implement! • Need for clock synchronization in embedded systems • Long wires • Cheap hardware • Timing constraints

  5. Solution • Complete all verification steps in the synchronous domain • De-synchronize the system while preserving the semantics • Stream Equivalence Preservation

  6. Steps to desynchronize • Design the system with synchronous semantics • Choose a suitable architecture • Platform-Based Design: select a set of intermediate layers to map the system description onto the architecture while preserving the semantics

  7. Architecture: LTTA • Loosely Time Triggered Architecture • Each node has an independent clock and can read from and write to the medium asynchronously • The medium is a shared memory that is refreshed synchronously with the medium clock • Neither reads nor writes block • Each node has to check for the presence of data in the medium (when reading) and for the availability of memory (when writing)

  8. Intermediate Layer: FFP • Kahn Process Network with bounded FIFOs (to represent a real system) • Marked Directed Graphs (MDGs) to prove semantics preservation and to analyze system performance

  9. Synchronous Model • Set of Mealy machines and the connections among them → a directed graph (nodes & edges) • Every loop is broken by a unit-delay element (UD) • Partial order: Mi < Mj if there is a link from Mi to Mj without a UD (reflexive and transitive) • Minimal element: Mi such that there is no Mj s.t. Mj < Mi • Each link carries an infinite stream of values in V (a UD also has an output stream) • Each machine produces its outputs and next state as a function of its inputs and current state • For a UD: y(k+1) = x(k), y(0) = initial value (x input, y output) • Fire the machines in any total order that respects the partial order (sketched below)
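The firing rule above translates naturally into code. The sketch below is an illustrative Python rendering, not from the paper: the class names, the read_inputs callback and the delays list are assumptions made for the example.

```python
# Illustrative sketch of one synchronous round: machines fire in a total
# order compatible with the partial order induced by UD-free links.

class UnitDelay:
    """Unit delay: y(k+1) = x(k), y(0) = initial value."""
    def __init__(self, init_value):
        self.y = init_value              # value currently visible on the output stream

    def output(self):
        return self.y                    # y(k), readable before any machine fires

    def update(self, x):
        self.y = x                       # becomes y(k+1) for the next round


class Mealy:
    """Mealy machine: step(state, inputs) -> (outputs, next_state)."""
    def __init__(self, init_state, step):
        self.state = init_state
        self.step = step

    def fire(self, inputs):
        outputs, self.state = self.step(self.state, inputs)
        return outputs


def synchronous_round(machines_in_order, read_inputs, delays):
    """Fire every machine exactly once.

    machines_in_order: machines listed in any total order that respects
                       the partial order (topological order of UD-free links).
    read_inputs(m, produced): returns the input values of m, taken either
                       from outputs already produced this round or from UDs.
    delays: list of (UnitDelay, source_machine) pairs; UDs latch last.
    """
    produced = {}
    for m in machines_in_order:
        produced[m] = m.fire(read_inputs(m, produced))
    for ud, src in delays:               # unit delays update after all machines fired
        ud.update(produced[src])
    return produced
```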

  10. Example

  11. Architecture: LTTA • Each node runs a single process triggered by a local clock • Nodes communicate through Communication by Sampling (CbS) links • API: a set of functions to access CbS; each function may be called only under certain conditions (assumptions) and in return guarantees certain properties • The execution time of each block is less than the time between two consecutive triggers

  12. Architecture: CbS • Only the source node can write (function write()) • Only the destination node can read (function read()) • Unknown execution time • Atomicity is guaranteed (a function completes before the next one starts) • No guarantee of data freshness (because of the unknown execution times) • Function isNew() returns true if fresh data are present • The writer attaches a sequence number (sn) to each datum; the reader keeps the sequence number (lsn) of the last datum it read: if lsn = sn, the data is stale (sketched below)
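The sn/lsn freshness test above is simple enough to spell out. The following Python sketch is illustrative (single writer, single reader, class names are mine); the real CbS layer additionally guarantees atomicity of each call.

```python
# Illustrative Communication-by-Sampling link: one shared cell plus a
# sequence number (sn) maintained by the writer; the reader remembers the
# sequence number of the last datum it consumed (lsn).

class CbsLink:
    def __init__(self, init_value=None):
        self.value = init_value
        self.sn = 0                      # incremented on every write

    def write(self, value):              # writer side (source node only)
        self.value = value
        self.sn += 1                     # tags the datum so freshness can be detected


class CbsReader:
    def __init__(self, link):
        self.link = link
        self.lsn = link.sn               # sequence number of the last datum read

    def is_new(self):                    # true iff the writer wrote since the last read
        return self.link.sn != self.lsn

    def read(self):                      # destination node only; never blocks
        self.lsn = self.link.sn          # if lsn == sn afterwards, the datum is not fresh
        return self.link.value
```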

  13. Example

  14. Intermediate Layer: FFP • Architectural similarities with LTTA and semantics close to the synchronous model • Set of sequential processes communicating through finite FIFOs • Processes do NOT block: each process has to check whether it can execute (a queue is not empty before reading; a queue is not full before writing) • API: isEmpty, isFull, put, get (similar to the CbS API; sketched below)
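A finite FIFO with this non-blocking discipline can be sketched directly; the Python below is illustrative (names are mine), assuming the caller always tests isEmpty/isFull before get/put, as required above.

```python
from collections import deque

class BoundedFifo:
    """Illustrative finite FIFO with the non-blocking FFP-style API."""
    def __init__(self, size):
        self.size = size
        self.items = deque()

    def is_empty(self):                  # reader must check this before get()
        return len(self.items) == 0

    def is_full(self):                   # writer must check this before put()
        return len(self.items) >= self.size

    def put(self, value):
        assert not self.is_full(), "caller must test is_full() first"
        self.items.append(value)

    def get(self):
        assert not self.is_empty(), "caller must test is_empty() first"
        return self.items.popleft()
```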

  15. Mapping sync to FFP • Each machine is mapped into a process (UDs are not) • There is a queue for each link • If the link has a UD, the queue has size 2 • If the link has no UD, the queue has size 1 • At each trigger (see the sketch below): IF all input queues are non-empty AND all output queues are non-full, THEN compute the outputs and the new state and write the outputs to the output queues; ELSE skip the step
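The per-trigger rule above translates almost line by line into code. The sketch below is illustrative, reusing the BoundedFifo and Mealy classes from the earlier sketches; the Process wrapper and its method names are assumptions.

```python
# Illustrative per-trigger behaviour of a process obtained from a Mealy
# machine: fire only if every input queue has data and every output queue
# has room, otherwise skip the whole step.

class Process:
    def __init__(self, machine, input_queues, output_queues):
        self.machine = machine           # step() assumed to return one value per output link
        self.inputs = input_queues       # list of BoundedFifo (size 2 if the link had a UD, else 1)
        self.outputs = output_queues     # list of BoundedFifo

    def on_trigger(self):
        if any(q.is_empty() for q in self.inputs) or \
           any(q.is_full() for q in self.outputs):
            return False                 # skip this step
        in_values = [q.get() for q in self.inputs]
        out_values = self.machine.fire(in_values)   # compute outputs and next state
        for q, v in zip(self.outputs, out_values):
            q.put(v)                     # write the outputs to the output queues
        return True                      # the process fired
```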

  16. Mapping sync to FFP (2) • Conversion into a Marked Directed Graph (MDG) • Every process becomes a transition • Every queue is converted into a forward place (modeling "queue not empty") and a backward place (modeling "queue not full") • If the queue has size k: with a UD, put k−1 tokens in the backward place and 1 token in the forward place; without a UD, put k tokens in the backward place and 0 tokens in the forward place (sketched below)
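Spelled out, the token-initialization rule looks as follows; the data structures (tuples of source, destination, size, UD flag) are illustrative choices, not the paper's.

```python
# Illustrative conversion of FFP queues into MDG places with their initial
# marking: every queue (src -> dst) yields a forward place ("queue not
# empty", consumed by dst) and a backward place ("queue not full",
# consumed by src).

def queues_to_marking(queues):
    """queues: list of (src, dst, k, has_ud) tuples, k = queue size."""
    marking = {}
    for src, dst, k, has_ud in queues:
        if has_ud:                       # the UD's initial value already sits in the queue
            marking[("fwd", src, dst)] = 1
            marking[("bwd", src, dst)] = k - 1
        else:                            # queue starts empty
            marking[("fwd", src, dst)] = 0
            marking[("bwd", src, dst)] = k
    return marking
```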

  17. Example

  18. Example AFTER FIRING T1

  19. Example AFTER FIRING T3

  20. Example AFTER FIRING T2

  21. Mapping sync to FFP • Theorem: the semantics is preserved with queues of size at most 2 • Queue of size 2 if the link has a UD; size 1 if it does not • Step 1: the FFP has no deadlocks (true by construction, since at least one token is placed in every directed circuit) • Step 2: any execution of the MDG is a possible execution of the corresponding FFP • Note: the isFull check is not necessary because, by construction, if the input queues are not empty the output queues cannot be full

  22. Mapping FFP to LTTA • FFP processes and queues map 1:1 onto LTTA nodes and links • The FFP API can be implemented on top of the LTTA API (one possible construction is sketched below) • The semantics is preserved by skipping processes that cannot fire (because of empty inputs or full outputs)
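The slides only state that the FFP API can be implemented on top of the LTTA/CbS API; the sketch below is one plausible construction (mine, not the paper's): a k-place FIFO built from a data link and an acknowledgement link, reusing the CbsLink class from the earlier sketch.

```python
# Hedged sketch of an FFP FIFO over two CbS links: the producer counts the
# items it has put (the data link's sn), the consumer acknowledges the
# sequence number of the last item it got, and "items in flight" is the
# difference between the two.

class FfpOverCbs:
    def __init__(self, k):
        self.k = k                       # queue size (1 or 2 in this mapping)
        self.data = CbsLink()            # producer -> consumer, carries (sn, value)
        self.ack = CbsLink(init_value=0) # consumer -> producer, carries last consumed sn
        self.lsn = 0                     # consumer side: sn of the last item read

    # --- producer side ---
    def is_full(self):
        return (self.data.sn - self.ack.value) >= self.k

    def put(self, value):
        assert not self.is_full()
        self.data.write((self.data.sn + 1, value))

    # --- consumer side ---
    def is_empty(self):
        return self.data.sn == self.lsn

    def get(self):
        assert not self.is_empty()
        sn, value = self.data.value
        self.lsn = sn
        self.ack.write(sn)               # tell the producer a slot is free again
        return value
```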

  23. Throughput analysis • Need an estimate of the system throughput (λ): at every trigger each process either runs or skips • Upper bound: the clock rate (as if the system were globally synchronous) • Is there a lower bound (worst case)?

  24. Throughput analysis • In real time (RT): Theorem: if the size of a queue is increased, the resulting throughput stays the same or increases • Need a symbolic definition of throughput that is independent of the implementation → logical-time throughput • In logical time (LT): the worst-case throughput is λmin, computed on the following slides from the lasso of the associated MDG

  25. Throughput analysis • To find the minimum, define a "slow triggering policy": at each time step the clock of every process ticks exactly once, and the clocks of disabled processes tick before the clocks of enabled processes • Theorem: the throughput of a system that adopts the slow triggering policy is the lowest possible • Since all disabled processes cannot run until the following time step, the throughput is minimized

  26. Throughput analysis • To evaluate λmin, first analyze the associated MDG • Build a reachability graph (RG) that implements the slow triggering policy → a graph in which, at every step, all enabled transitions fire (transitions that are not enabled cannot fire until the following step) • Determine the lasso starting from M0 • Lasso: a cycle in the RG that the system traverses infinitely often (remember: the system is deadlock-free); see the sketch below
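The reachability-graph/lasso computation can be sketched directly on the MDG. The Python below is illustrative; the chain topology and place names in the usage example are my reconstruction of the figure behind slide 28 (three processes in a chain, size-1 queues, no UDs), chosen because it reproduces the initial marking and lasso stated there.

```python
# Illustrative lasso search under the slow triggering policy: at every step
# all currently enabled transitions fire exactly once, and we stop as soon
# as a marking repeats.

def find_lasso(transitions, pre, post, m0):
    """pre[t]/post[t]: input/output places of transition t; m0: place -> tokens.
    Returns (lasso_length, transitions_fired_per_step_along_the_lasso)."""
    marking = dict(m0)
    seen = {}                            # marking -> step at which it was reached
    fired = []                           # enabled set fired at each step
    step = 0
    while True:
        key = tuple(sorted(marking.items()))
        if key in seen:                  # marking repeats: the cycle is the lasso
            start = seen[key]
            return step - start, fired[start:]
        seen[key] = step
        enabled = [t for t in transitions
                   if all(marking[p] >= 1 for p in pre[t])]
        for t in enabled:                # maximal step: no conflicts in an MDG
            for p in pre[t]:
                marking[p] -= 1
            for p in post[t]:
                marking[p] += 1
        fired.append(enabled)
        step += 1


# Usage (hypothetical reconstruction of the slide-28 example):
pre  = {"T1": ["bwd12"], "T2": ["fwd12", "bwd23"], "T3": ["fwd23"]}
post = {"T1": ["fwd12"], "T2": ["bwd12", "fwd23"], "T3": ["bwd23"]}
m0   = {"fwd12": 0, "bwd12": 1, "fwd23": 0, "bwd23": 1}        # M0 = (0,1,0,1)
L, steps = find_lasso(["T1", "T2", "T3"], pre, post, m0)
print(L, steps)   # -> 2 [['T2'], ['T1', 'T3']] : each transition fires once, throughput 1/L = 0.5
```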

  27. Throughput analysis • If L is the length of the lasso, then for a process P the logical-time worst-case throughput is λP = nP / L, where nP is the number of times P fires along one traversal of the lasso • The worst-case throughput is the same for all processes (the lasso is periodic, so every transition must fire the same number of times to return to the starting marking), hence λ = n / L • If Δ is the period of the slowest clock, the worst-case real-time throughput is λ / Δ firings per unit of time

  28. Example • Initial marking: M0 = (0,1,0,1) • #transitions = 3; #places = 2·(3−1) = 4 • Two adjacent processes cannot be enabled at the same time step! • The lasso is (0,1,0,1) → (1,0,0,1) → (0,1,1,0) → (1,0,0,1) • The lasso has length 2 and each transition fires once → throughput = 0.5

  29. Example

  30. Example

  31. Example

  32. Example 2 • Initial marking: M0 = (0,2,0,2) • #transitions = 3; #places = 2·(3−1) = 4 • The lasso is (0,2,0,2) → (1,1,1,1) → (1,1,1,1) • The lasso has length 1 and each transition fires once → throughput = 1

  33. Example 2
