
Embedded Design Pattern




Presentation Transcript


  1. Embedded Design Pattern Dr. Basem Alkazemi bykazemi@uqu.edu.sa http://uqu.edu.sa/bykazemi

  2. Embedded System An embedded system is a self-contained application that provides its own functionality without major interdependencies on other parts of the overall system into which it is incorporated.

  3. Design Metrics • Nonrecurring engineering cost (NRE) • Unit cost • Size – bytes / gates / transistors • Performance • Power • Flexibility • Time-to-prototype • Time-to-market • Maintainability • Correctness • Safety

  4. Design Patterns A design pattern is an abstract representation of best practices for resolving commonly known problems in an application domain. Patterns help establish a common language between different developers and also reduce development time and cost.

  5. Design Patterns • Synchronizer • High Speed Serial Port • Hardware Device • Resource Allocation • Feature Coordination

  6. Synchronizer Design Pattern Motivation: To synchronize two components, this pattern provides mechanisms for: • Achieving initial synchronization (sync) • Confirming the presence of the sync framing once sync is achieved • Initiating loss-of-sync procedures

  7. Synchronizer Design Pattern Structure The state machine defines the following high-level states: • Establishing sync • Losing sync

  8. Synchronizer Design Pattern Establishing sync • The system starts up in the "Searching For Sync" state. In this state, the incoming data stream is analyzed bit by bit, looking for an occurrence of the sync pattern. • As soon as the first sync pattern is detected, the system transitions to the "Confirming Sync Pattern" state. • Now the system checks whether the sync pattern repeats as expected. This check is made according to the specified periodicity. • If the sync pattern is repeating, the system transitions to the "In Sync" state. (If the sync pattern was not found, the system would have transitioned back to the "Searching For Sync" state.) • At this point, the system is considered to be synchronized.

  9. Synchronizer Design Pattern Losing sync: • When the system is synchronized, it is in the "In Sync" state. In this state, the system constantly monitors the periodic occurrence of the sync pattern. • If an expected sync pattern is found to be missing, the system transitions to "Confirming Sync Loss". The system is still considered synchronized. The main purpose of the "Confirming Sync Loss" state is to check whether the loss of sync was an isolated event or represents a complete loss of sync. • In the "Confirming Sync Loss" state, the system looks for the sync pattern at the expected time interval. If the sync pattern is seen again, the system transitions back to the "In Sync" state. • If the sync pattern is not detected for a preconfigured number of consecutive intervals, the loss of sync is confirmed. • The system is now in the sync-loss condition and transitions back to the "Searching For Sync" state.

  10. Synchronizer Design Pattern

  11. Synchronizer Design Pattern • Sub-Classes: • Searching_For_Sync: detects the sync pattern in the incoming stream of bits • Confirming_Sync_Pattern: after detection, waits until another pattern is detected within the specified period • In_Sync: checks whether sync is still alive • Confirming_Sync_Loss: determines whether sync is really lost before returning to the first state
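The four states above can be sketched as a small state machine. This is a minimal illustration, not the slides' implementation: the confirmation and loss thresholds (`kConfirmCount`, `kLossCount`) and the frame-aligned `onFrame()` interface are assumptions.

```cpp
#include <cassert>

// Hypothetical sketch of the synchronizer state machine. Each call to
// onFrame() reports whether the sync pattern was seen at the expected
// position in the incoming bit stream.
class Synchronizer {
public:
    enum State { SearchingForSync, ConfirmingSyncPattern, InSync, ConfirmingSyncLoss };

    void onFrame(bool found) {
        switch (state_) {
        case SearchingForSync:
            if (found) { state_ = ConfirmingSyncPattern; hits_ = 1; }
            break;
        case ConfirmingSyncPattern:
            if (found) {
                if (++hits_ >= kConfirmCount) state_ = InSync;
            } else {
                state_ = SearchingForSync;          // pattern did not repeat
            }
            break;
        case InSync:
            if (!found) { state_ = ConfirmingSyncLoss; misses_ = 1; }
            break;
        case ConfirmingSyncLoss:
            if (found) {
                state_ = InSync;                    // isolated miss
            } else if (++misses_ >= kLossCount) {
                state_ = SearchingForSync;          // sync loss confirmed
            }
            break;
        }
    }

    State state() const { return state_; }

private:
    static constexpr int kConfirmCount = 3;  // assumed periodicity check
    static constexpr int kLossCount    = 3;  // assumed loss threshold
    State state_ = SearchingForSync;
    int hits_ = 0, misses_ = 0;
};
```

Note how an isolated miss in "Confirming Sync Loss" returns the system to "In Sync" without ever declaring a loss, exactly as the slide describes.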

  12. Synchronizer Design Pattern

  13. High Speed Serial Port Motivation The main motivation is to minimize dependency on hardware, since frequent changes in the interface device may involve a costly reconfiguration exercise. This design pattern encapsulates DMA configuration, register interfacing and interrupt handling specific to a device. A change in the device will only result in changes to the set of classes implementing this design pattern, without affecting consumer classes.

  14. High Speed Serial Port Structure The Serial Port pattern is implemented with the SerialPort and SerialPortManager classes. The SerialPortManager maintains an array of SerialPort objects. Each SerialPort object manages the transmit and receive buffers. The SerialPortManager class also implements the interrupt service routine.

  15. High Speed Serial Port • Serial Port Manager: Manages all the Serial Ports on the board. • Serial Port: Handles the interface with a single serial port device. It contains the transmit and receive buffers. • Transmit Queue: This queue contains messages awaiting transmission on the serial port. • Receive Queue: Messages received on the serial link are stored in this queue. 

  16. High Speed Serial Port

  17. High Speed Serial Port Transmitting a Message • SerialPortManager's constructor registers the InterruptServiceRoutine(), and SerialPort's constructor initializes TX and RX to their initial states (TX = empty, RX = ready). • SerialPort's HandleTxMessage() method is invoked to enqueue a message. • The method enqueues the message in the Transmit Queue and checks whether this is the first message in the queue. • Since this is the first message in the queue, the message is removed from the queue and copied into a transmission buffer, and the "ready for transmission" flag is set. • The flag is set, so the TX device begins transmission of the buffer. • When all bytes of the message have been transmitted, the device sets the "finished transmission" bit in the buffer header. • The device checks the next buffer to determine if it is ready for transmission. • In this scenario, no other buffer is ready for transmission, so the device raises the transmission complete interrupt. (If more messages were enqueued, the device would have automatically started transmitting the next buffer.) • The InterruptServiceRoutine() is invoked. • The ISR invokes the HandleInterrupt() method of the SerialPort to select the interrupting device. • SerialPort checks the interrupt status register to determine the source of the interrupt. • This is a transmit interrupt, so the HandleTxInterrupt() method is invoked. • A transmission complete event is sent to the task. • This event is routed by the SerialPortManager to the SerialPort. • SerialPort checks whether the transmit queue has any more messages. • If a message is found, transmission of the new message is initiated.
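The transmit flow above can be sketched in a few lines. This is an illustrative reduction, not the slides' code: the DMA hardware and interrupt registers are replaced by a `sent` log, and only the queue/interrupt interplay is kept. The method names `HandleTxMessage()` and `HandleTxInterrupt()` follow the slides; everything else is assumed.

```cpp
#include <cassert>
#include <deque>
#include <string>

// Minimal sketch of the transmit side of the High Speed Serial Port
// pattern: enqueue-then-transmit, with the "first message in the
// queue" special case and the completion-interrupt continuation.
class SerialPort {
public:
    // Enqueue a message; if nothing is in flight, start transmitting
    // immediately (the "first message in the queue" case).
    void HandleTxMessage(const std::string& msg) {
        txQueue_.push_back(msg);
        if (!transmitting_) startNextTransmission();
    }

    // Called from the ISR path on a "transmission complete" interrupt:
    // if more messages are queued, start the next one.
    void HandleTxInterrupt() {
        transmitting_ = false;
        if (!txQueue_.empty()) startNextTransmission();
    }

    const std::deque<std::string>& sent() const { return sent_; }
    bool busy() const { return transmitting_; }

private:
    void startNextTransmission() {
        // Copy the head-of-queue message into the "transmission
        // buffer" (modeled here as the sent log) and mark the device busy.
        sent_.push_back(txQueue_.front());
        txQueue_.pop_front();
        transmitting_ = true;
    }

    std::deque<std::string> txQueue_;
    std::deque<std::string> sent_;
    bool transmitting_ = false;
};
```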

  18. High Speed Serial Port Receiving a Message • When the device detects the start of a new message, it accesses the receive_buffers and checks the "free buffer" bit in the buffer header. • The RX device finds a free buffer, so it starts DMA operations to copy all the received bytes into the designated buffer. • The device raises an interrupt when message reception is completed. It also sets the "received message" bit in the buffer header. (If another message reception starts, the device will automatically start receiving that message into the next buffer.) • At this point a receive_complete event is dispatched to the task list for sender acknowledgement. • The SerialPort's event handler allocates memory for the received message and writes the new message into the receive queue. • Then it cleans up the receive buffer by setting the "free buffer" bit in the buffer header.
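The receive-side handshake can be sketched the same way. This is a host-side illustration under assumptions: the buffer count (two), the field names, and the method names are invented for the example; real DMA is replaced by a direct copy.

```cpp
#include <cassert>
#include <array>
#include <deque>
#include <string>

// Sketch of the receive-buffer handshake: the device deposits a
// message into a free buffer and sets "received message"; the event
// handler later moves the message to the receive queue and frees the
// buffer for reuse.
struct RxBuffer {
    bool free = true;        // "free buffer" bit in the buffer header
    bool received = false;   // "received message" bit
    std::string data;
};

class SerialPortRx {
public:
    // Device side: find a free buffer and deposit a received message.
    bool deviceReceive(const std::string& msg) {
        for (auto& b : buffers_) {
            if (b.free) {
                b.free = false;
                b.data = msg;
                b.received = true;   // would raise the receive interrupt
                return true;
            }
        }
        return false;                // no free buffer: message dropped
    }

    // Event-handler side: move completed buffers into the receive queue.
    void handleRxInterrupt() {
        for (auto& b : buffers_) {
            if (b.received) {
                rxQueue_.push_back(b.data);
                b.received = false;
                b.free = true;       // clean up the buffer for reuse
            }
        }
    }

    std::deque<std::string>& rxQueue() { return rxQueue_; }

private:
    std::array<RxBuffer, 2> buffers_;
    std::deque<std::string> rxQueue_;
};
```

The double buffer lets the device begin receiving a second message while the first is still awaiting its event handler, as noted in the parenthetical above.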

  19. Hardware Device Design Pattern Motivation Very often the lowest level of code that interfaces with the hardware is difficult to understand and maintain. One of the main reasons for this is the register-level programming model of hardware devices: very often devices require registers to be accessed in a certain sequence. Defining a class to represent the device can go a long way toward simplifying the code by decoupling the low-level code and register manipulation. It also facilitates porting of the code to a different hardware platform.

  20. Hardware Device Design Pattern Structure The class structure in this design pattern largely depends on the register programming model of the device being programmed. In most cases, this design pattern is implemented as a single class representing the device. In the case of complex devices, the device might be modeled as a main device class with subclasses modeling different parts of the device.

  21. Hardware Device Design Pattern Sample Implementation: • Status Register (STAT): This read-only register contains the following status bits: • Bit 0: Transmit Buffer Has Empty Space • Bit 1: Receive Buffer Has Data • Bit 2: Transmit underrun • Bit 3: Receive overrun • Action Register (ACT): Bits in this write-only register correspond to the bits in the status register. A condition in the status register can be cleared by writing the corresponding bit as 1. Note that bit 0 is automatically cleared when writes are performed to the transmit buffer, and bit 1 is cleared automatically when reads are performed from the receive buffer. Bits 2 and 3, however, need to be cleared explicitly. • Transmit Buffer (TXBUF): Write-only buffer into which bytes meant for transmission should be written. • Receive Buffer (RXBUF): Read-only buffer in which received bytes are stored.

  22. Resource Allocation Patterns • Resource Allocation Algorithms • Hottest First • Coldest First • Load Balancing • Future Resource Booking

  23. Resource Allocation Patterns Hottest First • In hottest-first resource allocation, the resource released last is allocated on the next resource request. To implement this last-in-first-out (LIFO) allocation, the list of free resources is maintained as a stack. An allocation request is serviced by popping a free resource from the stack. When a resource is freed, it is pushed onto the free resource list. • The disadvantage of this scheme is uneven utilization of resources: the resources at the top of the stack will be used all the time. If resource allocation leads to wear and tear, the frequently allocated resources will experience a lot of it. This scheme is primarily used in scenarios where allocating a resource involves considerable setup before use. With this technique, under light load only a few resources are in use, so the other resources can be powered down or operated in low-power mode.

  24. Resource Allocation Patterns Coldest First • In coldest-first resource allocation, the resource that has not been allocated for the longest time is allocated first. To implement this first-in-first-out (FIFO) allocation, the resource allocating entity keeps the free resources in a queue. A resource allocation request is serviced by removing a resource from the head of the queue. A freed resource is returned to the free list by adding it to the tail of the queue. • The main advantage of this scheme is even utilization of resources. Also, a freed resource does not get reused for quite a while, so inconsistencies in resource management can be easily resolved via audits.
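The two disciplines above differ only in which end of the free list they allocate from, so both can be sketched with one container. This is an illustrative sketch; resources are reduced to plain integer ids and the class name is invented.

```cpp
#include <cassert>
#include <deque>

// Free-list sketch for the two allocation policies: hottest-first
// treats the free list as a stack (LIFO), coldest-first treats it as
// a queue (FIFO).
class FreeList {
public:
    enum Policy { HottestFirst, ColdestFirst };

    explicit FreeList(Policy p) : policy_(p) {}

    // A freed resource always goes to the back of the list.
    void release(int id) { free_.push_back(id); }

    // Allocate according to the configured policy. Precondition:
    // the free list is not empty.
    int allocate() {
        int id;
        if (policy_ == HottestFirst) {
            id = free_.back();    // last released: top of the stack
            free_.pop_back();
        } else {
            id = free_.front();   // longest-idle: head of the queue
            free_.pop_front();
        }
        return id;
    }

private:
    Policy policy_;
    std::deque<int> free_;
};
```

Switching policy is a one-line change here, which mirrors how the choice in a real allocator is a policy decision layered over the same free-list bookkeeping.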

  25. Resource Allocation Patterns Load Balancing • In situations involving multiple resource groups, load balancing is used. A resource group is controlled by a local resource controller. In this technique, the resource allocator first determines the most lightly loaded resource group. Then, the resource controller of that group performs the actual resource allocation. The main objective is to distribute the load evenly among the resource controllers.

  26. Resource Allocation Patterns Future Resource Booking • Here, each resource allocation is for a specified time. The resource allocation is only valid until the specified time is reached; when it is, the resource is considered free. Thus the resource does not need to be freed explicitly. • This technique is used in scenarios where a particular resource needs to be allocated for a short duration to multiple entities in the future. When an allocation request is received, the booking status of the resource is searched to find the earliest future time at which the request can be serviced. Resource booking tables are updated with the start and end time of each resource allocation.
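A booking table for a single resource can be sketched as a list of reserved intervals. The half-open `[start, end)` representation, the linear search, and all names below are assumptions made for the example, not details from the slides.

```cpp
#include <cassert>
#include <algorithm>
#include <vector>

// Sketch of a future-booking table for one resource: each entry is a
// [start, end) interval; the table is kept sorted by start time.
class BookingTable {
public:
    struct Slot { int start, end; };   // [start, end)

    // Book [start, start+duration) if it does not overlap an entry.
    bool book(int start, int duration) {
        int end = start + duration;
        for (const Slot& s : slots_)
            if (start < s.end && s.start < end) return false;  // overlap
        slots_.push_back({start, end});
        std::sort(slots_.begin(), slots_.end(),
                  [](const Slot& a, const Slot& b) { return a.start < b.start; });
        return true;
    }

    // Earliest start time >= 'from' at which 'duration' units fit:
    // walk the sorted bookings and slide past each conflicting slot.
    int findEarliestStart(int from, int duration) const {
        int t = from;
        for (const Slot& s : slots_) {
            if (t + duration <= s.start) break;   // gap before this slot
            if (t < s.end) t = s.end;             // skip past the booking
        }
        return t;
    }

private:
    std::vector<Slot> slots_;
};
```

Because every booking carries its own end time, no explicit "free" call is needed: a request simply searches past any interval that has not yet expired.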

  27. Feature Coordination Patterns • Feature design involves defining the sequence of messages that will be exchanged between tasks. When designing a feature one of the tasks in the feature should be identified as the Feature Coordinator. The main role of the Feature Coordinator is to ensure that the feature goes to a logical completion. No feature should be left in suspended animation because of message loss or failure of a single task involved in the message interactions. • In most cases, the task coordinating the feature will be running a timer to keep track of progress of the feature. If the timer times out, the coordinator will take appropriate recovery action to take the feature execution to a logical conclusion, i.e. feature success or failure. • Feature coordination can be achieved in several ways. Some of the frequently seen design patterns are described here. The description is in terms of four tasks A, B, C and D that are involved in a feature. A is the feature coordinator in all cases.

  28. Feature Coordination Patterns • Cascading Coordination Here, on receipt of the feature initiation trigger, A handles the message and sends a message trigger to B. As part of the feature, B sends a message to C. Again, C performs some action and sends a message to D. D replies to C, C replies to B and B replies to A. Finally, A signals feature completion. Most of the time, tasks A, B and C will each keep a timer to monitor the message interaction. It can be seen that there is a cascade of sub-feature control at tasks C, B and A. The main advantage of this scheme is that if any involved task misbehaves, appropriate recovery action can be taken at C, B or A, thus isolating the failure condition. This design is however more complicated to implement, because B and C have to share the coordination role.

  29. Feature Coordination Patterns • Loose Coordination • Here, on receipt of the feature initiation trigger, A handles the message and sends a message to B. B further sends a message to C, and C in turn sends a message to D as part of the feature. D takes appropriate action and replies to A. Here, the feature coordinator task A would be running a timer. The main advantage of this type of coordination is that it involves fewer message exchanges, and the message handling at B and C is fairly straightforward. The disadvantage is that if some involved task misbehaves, only A times out and learns of the failure, and A has no means of isolating it.

  30. Feature Coordination Patterns • Serial Coordination • Here, the feature is initiated by A by sending a message to B. B completes its job and replies to A. A registers the completion of the first phase of the feature and initiates the second phase by sending a message to C. C takes some action and replies to A. A registers the completion of the second phase and initiates the next phase by sending a message to D. D then performs its job and replies to A. Here, A keeps a timer for each phase of the feature. This scheme allows the feature coordinator task A to know the progress of the feature at all times. The advantage is that A can take intelligent recovery action if a failure condition hits at some point; the main disadvantage is additional complexity at A.
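The serial scheme can be sketched as a coordinator that runs one phase at a time and records how far the feature progressed. This is a deliberately reduced illustration: tasks B, C and D are modeled as callables returning success or failure, and the per-phase timers and real message passing are omitted.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <vector>

// Sketch of serial coordination: the coordinator (task A) starts each
// phase only after the previous one replied, so it always knows the
// feature's progress and where a failure occurred.
class SerialCoordinator {
public:
    using Task = std::function<bool()>;

    void addPhase(const std::string& name, Task task) {
        phases_.push_back({name, std::move(task)});
    }

    // Run phases in order; stop on the first failure and record which
    // phase failed, enabling targeted recovery action at A.
    bool run() {
        for (const auto& p : phases_) {
            if (!p.task()) { failedPhase_ = p.name; return false; }
            completed_.push_back(p.name);
        }
        return true;
    }

    const std::vector<std::string>& completed() const { return completed_; }
    const std::string& failedPhase() const { return failedPhase_; }

private:
    struct Phase { std::string name; Task task; };
    std::vector<Phase> phases_;
    std::vector<std::string> completed_;
    std::string failedPhase_;
};
```

A parallel coordinator would instead dispatch all phases at once and collect the replies, trading the precise progress information for lower overall latency, as the next slide describes.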

  31. Feature Coordination Patterns • Parallel Coordination • Here, on receipt of the feature initiation trigger, A sends message triggers to tasks B, C and D. B, C and D perform their jobs and reply to A. In this case A may keep one timer for all the message interactions or a separate timer for each. The main difference from the serial coordination scheme is that the different phases of the feature do not depend on each other, so they can be initiated at the same time. As in serial coordination, intelligent recovery action can be taken if a failure condition is hit, because A knows the feature's progress at all times. In parallel coordination, the delay in feature execution is minimized due to parallel activation of sub-features, but parallel activation places a higher resource requirement on the system, as multiple message buffers are acquired at the same time.

  32. Summary Design patterns offer the following benefits: • They provide a common framework for exchanging ideas. • They reduce time to market, as designers can reuse ready-made design patterns instead of reinventing the wheel. • They improve quality assurance, as design patterns are usually tested thoroughly.
