Introduction to real-time systems

  1. Introduction to real-time systems CI346

  2. What does ‘Real Time’ mean?
  • All computer systems model some aspect of the outside world
    • the timescale does not necessarily match that of the real world
  • Real-time systems are required to conform to timescales imposed by the outside world
    • must work as fast as the outside world
    • must work as slowly as the outside world

  3. What does ‘Real Time’ mean?
  • Hard real time
    • very tight deadlines
    • failure to meet deadlines is catastrophic
    • e.g. aerospace applications (autopilot)

  4. What does ‘Real Time’ mean?
  • Soft real time
    • deadlines are more flexible
    • failure to meet deadlines is not necessarily catastrophic
    • e.g. multimedia applications (video display)
    • e.g. financial systems (payroll)

  5. Embedded systems
  • Real-time systems are often computer systems which are part of (embedded in) a larger system
    • e.g. process control, autopilot, manufacturing
  • ‘Embedded’ is often used as a synonym for real time

  6. Characteristics
  • Need to cope with a variety of external events
    • software is frequently large and complex
  • Reliable and safe
    • able to detect and recover from failures
  • Need to interact with external hardware

  7. Characteristics
  • Need to be able to specify timing requirements
    • when to perform actions
    • when to complete actions by
    • what to do when deadlines are missed
  • Granularity is important
    • e.g. the IBM PC clock granularity of 55 ms corresponds to about 16 yards at 600 mph

  8. Characteristics
  • Event-driven rather than process-driven
    • external events must be dealt with as they occur
    • event ordering is not generally predictable
  • Generally uses concurrent processes
    • each event source can be handled by a separate process

  9. Characteristics
  • Predictable response times
    • guaranteed maximum processing times (rules out e.g. use of Ethernet)
  • How to measure response times?
    • caching, pipelining etc. affect program speed
    • worst case is an order of magnitude slower than the average
    • fast programs + external time references

  10. Object-oriented real time systems
  • O-O systems are modelled as collections of interacting objects
    • objects are generally passive, responding only when requested to (sent a message)
  • Real time systems are modelled as collections of interacting processes
    • processes are active entities, operating autonomously and interacting with each other whenever necessary

  11. Object-oriented real time systems
  • O-O real time systems combine the two approaches
    • an OORTS is a collection of objects, each of which is associated with an autonomous process

  12. Why concurrency?
  • Example: system to service multiple telex & fax connections
    • each device sends headers (e.g. date, time, message number) followed by the message itself

  13. Why concurrency?
  • Non-concurrent (sequential) implementation:

      for each device loop
        if character available then
          get character
          if waiting for header then
            ...
          elsif in message header then
            ...
          elsif in message body then
            ...
          end if
        else
          try next device
        end if
      end loop

  14. Design issues
  • Body of program is a large finite state machine with many states
  • Program is hard to read: transitions between states are buried in the body of the code
  • Program is hard to maintain: adding new states may affect existing states

  15. Design issues
  • All data ends up globally accessible
    • variables must retain their value from one pass through the loop to the next
    • leads to more maintenance problems
  • Bottom-up approach to coding: get a character and then decide what to do

  16. Why concurrency?
  • Concurrent implementation:

      loop
        wait for start of header
        process message header
        process message body
      end loop

      process message header:
        read a line of text from device

  17. Why concurrency?

      read a line of text:
        while not end of line loop
          read a character from device
        end loop

      read a character:
        if character available then
          get character
        else
          suspend and activate another process
        end if

  • Allows use of a top-down approach to coding
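The concurrent pseudocode above maps naturally onto one thread per device, where a blocking read plays the role of "suspend and activate another process". Below is a minimal Java sketch of that idea; the class names and the use of a StringReader to simulate a device are my own illustration, not from the slides.

```java
import java.io.BufferedReader;
import java.io.StringReader;
import java.util.concurrent.ConcurrentLinkedQueue;

// One handler thread per device. Blocking reads replace the big state
// machine: the thread's position in run() *is* the state.
public class DeviceHandler extends Thread {
    private final BufferedReader device;                 // stands in for a real telex/fax line
    private final ConcurrentLinkedQueue<String> log;

    public DeviceHandler(BufferedReader device, ConcurrentLinkedQueue<String> log) {
        this.device = device;
        this.log = log;
    }

    public void run() {
        try {
            String header = device.readLine();           // blocks until the header arrives
            String body = device.readLine();             // blocks until the body arrives
            log.add(header + ": " + body);
        } catch (Exception e) {
            // device error handling elided
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ConcurrentLinkedQueue<String> log = new ConcurrentLinkedQueue<>();
        // two simulated devices, each sending a header line then a message line
        DeviceHandler telex = new DeviceHandler(
                new BufferedReader(new StringReader("TELEX 42\nhello")), log);
        DeviceHandler fax = new DeviceHandler(
                new BufferedReader(new StringReader("FAX 43\nworld")), log);
        telex.start();
        fax.start();
        telex.join();
        fax.join();
        System.out.println(log.size());                  // both messages handled
    }
}
```

Each handler is written top-down as straight-line code; the scheduler, not a hand-built state machine, keeps track of where each device's conversation has got to.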

  18. Processes and programs
  • A program is a sequence of instructions
    • a passive collection of binary data stored on e.g. a disk
  • A processor is a device for processing instructions
    • a hardware device

  19. Processes and programs
  • A process is the act of processing a sequence of instructions
    • requires a program and a processor
    • more than one process can be executing the same program
    • one process can execute more than one program
    • processes are also called tasks

  20. Processes and programs
  • Operating systems provide separate environments for processes
    • separate address spaces
  • Each process is like an emulation of a complete independent computer
    • has its own private processor, memory and I/O devices
    • communication between processes is relatively expensive

  21. Processes and programs
  • In real-time systems, processes are usually lightweight processes (aka threads)
    • they share the same address space
    • communication is by shared memory areas
  • Threads behave like a single computer with multiple processors
    • each OS process contains one or more threads

  22. Processes and programs
  • Real-time kernel represents processes and threads using context blocks (aka descriptors, or many other names)
    • volatile context (copies of processor registers)
    • system info (process priority, periodicity, current state...)

  23. Types of concurrency
  • Co-operative scheduling (‘coroutines’)
    • processes suspend voluntarily when they need to wait for an event
    • processes must also suspend at regular intervals during lengthy processing

  24. Types of concurrency
  • Pre-emptive scheduling (‘timeslicing’)
    • processes are suspended as a result of interrupts from external hardware
    • problems with access to shared resources: impossible to predict at what point a process will be suspended

  25. Implementing concurrency
  • Most languages don't provide support for concurrency (e.g. C, C++)
    • operating system (or runtime system) may support concurrency via a function library
  • Languages which do support concurrency include occam, Ada, Java

  26. Implementing concurrency
  • Languages with concurrency facilities are not automatically real-time languages
    • depends on definition of how queues are managed etc.
    • even languages which are suitable for real-time use are dependent on the underlying runtime system and operating system (e.g. clock granularity, interrupt latency)

  27. Implementing concurrency
  • Real-time behaviour requires predictability
  • Java is not a real-time language!
    • garbage collection occurs at unpredictable times
    • the order in which waiting threads are woken up is not defined
    • no timing guarantees
  • Work is in progress on a real-time version of Java

  28. Concurrency in Java
  • Derive from class Thread
  • Implement the thread body by overriding the ‘run’ method:

      public class MyThread extends Thread {
        public void run () {
          for (int i = 0; i < 1000; i++) {
            System.out.println("Hello world");
          }
        }
      }

  29. Concurrency in Java
  • Create objects of the thread class:

      MyThread t1 = new MyThread();
      MyThread t2 = new MyThread();

  • Start the threads by calling ‘start’:

      t1.start();
      t2.start();

  • Each thread executes in parallel with the rest of the program
    • threads halt on exit from ‘run’
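Besides subclassing Thread, standard Java also lets a class implement Runnable and pass the instance to a Thread constructor; this is often preferable because Java allows only single inheritance. A small sketch (the class name is illustrative):

```java
// Alternative to subclassing Thread: implement Runnable and hand the
// object to a Thread constructor (standard Java API).
public class HelloTask implements Runnable {
    public void run() {
        for (int i = 0; i < 3; i++) {
            System.out.println("Hello world");
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new HelloTask());
        Thread t2 = new Thread(new HelloTask());
        t1.start();                  // both threads run in parallel with main
        t2.start();
        t1.join();                   // wait for both to halt on exit from run()
        t2.join();
    }
}
```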

  30. Synchronization
  • Problem if two threads try to update the same object:

      int i = sharedObject.get();
      i = i + 1;
      sharedObject.set(i);

  • If sharedObject holds the value 5 and two threads execute this code:
    • each thread copies 5 into its own local variable, updates the local variable and sets sharedObject to 6

  31. Synchronization
  • ‘Synchronized’ blocks prevent this happening:

      synchronized (sharedObject) {
        int i = sharedObject.get();
        i = i + 1;
        sharedObject.set(i);
      }

  • Only one thread at a time can execute a ‘synchronized’ block for a particular object
  • Other threads must wait until they are allowed to enter it

  32. Synchronization
  • Each Java object has an internal ‘lock’ variable and a queue for waiting threads
    • if the lock is clear, lock the object and enter the block
    • if the lock is set, wait in the queue
    • on exit from the block, clear the lock and wake up one of the waiting threads (if there are any)

  33. Synchronization
  • This is not very O-O
    • data is protected at the point where it is accessed, not at the point it is defined
  • Solution: synchronized methods
    • any method can be marked as ‘synchronized’ in a class declaration:

        public synchronized void myMethod () { ... }

    • only one thread can execute a synchronized method of a particular object at any one time
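Putting the synchronized-method idea together with the lost-update problem from slide 30, here is a complete runnable sketch (the Counter class is my own illustration, not from the slides): two threads perform the same read/modify/write sequence, but because both methods lock the Counter object, no updates can be lost.

```java
// Two threads each perform the read/modify/write sequence 10000 times.
// Because increment() and get() are synchronized, only one thread at a
// time can be inside either method for this Counter object.
public class Counter {
    private int value = 0;

    public synchronized void increment() {
        int i = value;   // read
        i = i + 1;       // modify
        value = i;       // write -- no other thread can interleave here
    }

    public synchronized int get() {
        return value;
    }

    public static void main(String[] args) throws InterruptedException {
        final Counter c = new Counter();
        Runnable work = new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) {
                    c.increment();
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // without 'synchronized', some of the 20000 updates could be lost
        System.out.println(c.get());
    }
}
```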

  34. Synchronization
  • Problem:
    • it’s up to you to decide what methods should be synchronized
    • if there’s a maximum time you’re prepared to wait, you can’t specify a timeout
    • one solution: use a second thread to interrupt the first one after a timeout period (messy!)

  35. Thread methods
  • sleep(n) – sleep for at least n milliseconds
  • getPriority(), setPriority(n) – get or change a thread’s priority
  • interrupt() – interrupt a thread
  • join(), join(n) – wait for a thread to die (indefinitely, or for up to n milliseconds)
  • Thread.currentThread() – get a reference to the currently-executing thread
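A small sketch exercising several of these methods (sleep, setPriority, join with and without a timeout, plus isAlive); the durations are illustrative only:

```java
// Demonstrates sleep, setPriority, join with a timeout, join without,
// and isAlive. The sleep/join durations are illustrative only.
public class ThreadMethods {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(300);          // sleep for at least 300 ms
                } catch (InterruptedException e) {
                    return;                     // interrupt() would land here
                }
                System.out.println("worker done");
            }
        });
        worker.setPriority(Thread.MAX_PRIORITY); // a hint, not a real-time guarantee
        worker.start();
        worker.join(50);                         // wait up to 50 ms: worker is still asleep
        System.out.println("alive after 50 ms: " + worker.isAlive());
        worker.join();                           // wait indefinitely for the thread to die
        System.out.println("alive after join: " + worker.isAlive());
    }
}
```

Note how join(50) returns while the worker is still running, which is exactly the timeout behaviour the synchronized-method mechanism on the previous slide lacks.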

  36. Object methods
  • wait(), wait(n) – wait inside a synchronized block, releasing the lock on that object
    • note: threads can’t continue after a wait() until they have reacquired the lock
  • notify() – wake up one waiting thread
  • notifyAll() – wake up all waiting threads
  • note: these can only be used inside a synchronized block for the relevant object

  37. Other approaches
  • Semaphores (Dijkstra)
    • a semaphore has an initial integer value N
    • the wait operation decrements the counter and suspends the caller if the counter becomes negative
    • the signal operation increments the counter and resumes one suspended task (if any are waiting)
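Java's standard library provides this primitive directly as java.util.concurrent.Semaphore, where acquire() plays the role of Dijkstra's wait and release() the role of signal. A sketch using a semaphore initialised to 1 as a mutual-exclusion lock (the class name is mine):

```java
import java.util.concurrent.Semaphore;

// Dijkstra's wait/signal are acquire()/release() in the standard
// java.util.concurrent.Semaphore. An initial value of 1 gives mutual
// exclusion over the critical section.
public class SemaphoreDemo {
    static final Semaphore mutex = new Semaphore(1);
    static int shared = 0;

    public static void main(String[] args) throws InterruptedException {
        Runnable work = new Runnable() {
            public void run() {
                try {
                    for (int i = 0; i < 10000; i++) {
                        mutex.acquire();            // wait: suspends if no permit available
                        try {
                            shared = shared + 1;    // critical section
                        } finally {
                            mutex.release();        // signal: resumes a suspended task
                        }
                    }
                } catch (InterruptedException e) {
                    // interrupted while waiting
                }
            }
        };
        Thread t1 = new Thread(work);
        Thread t2 = new Thread(work);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        System.out.println(shared);
    }
}
```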

  38. Other approaches
  • Problems:
    • non-OO: client code must use semaphores to protect critical sections where shared data is updated
    • very low-level and hence error-prone
    • requires shared memory

  39. Other approaches
  • Channels (mailboxes)
    • like a pipe in Unix: write to one end, read from the other
    • readers suspend if there is nothing to read
    • don't provide updatable shared data
    • can encapsulate shared data in a process with separate reader and writer channels
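In standard Java the closest equivalent of a channel is a BlockingQueue: take() suspends the reader when there is nothing to read, and put() suspends the writer when the channel is full. A sketch (the class name and the "STOP" sentinel are my own conventions):

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// A channel in standard Java: a BlockingQueue. take() suspends the
// reader when there is nothing to read; put() suspends the writer when
// the channel is full.
public class ChannelDemo {
    public static void main(String[] args) throws InterruptedException {
        final BlockingQueue<String> channel = new ArrayBlockingQueue<>(4);

        Thread writer = new Thread(new Runnable() {
            public void run() {
                try {
                    channel.put("hello");        // write to one end...
                    channel.put("world");
                    channel.put("STOP");         // sentinel: end of stream
                } catch (InterruptedException e) {
                    // interrupted while writing
                }
            }
        });
        writer.start();

        String msg;
        while (!(msg = channel.take()).equals("STOP")) {   // ...read from the other
            System.out.println(msg);
        }
        writer.join();
    }
}
```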

  40. Safety and liveness
  • Concurrency provides many new opportunities for errors
  • Concurrent code must be safe (nothing ‘bad’ will happen) and live (something ‘good’ will eventually happen)
  • In a real-time system, it must be possible to establish an upper bound on ‘eventually’

  41. Safety issues

  42. Exclusion issues
  • Mutual exclusion
    • protection against more than one process updating the same data at the same time
    • synchronized methods can do this
  • Conditional exclusion
    • e.g. making tasks wait for write access to a shared buffer if the buffer is full

  43. Exclusion issues
  • Use wait() and notifyAll():

      // reader: wait until the buffer has something in it
      while (bufferEmpty()) { wait(); }
      Item i = getItem();
      notifyAll();
      return i;

      // writer: wait until the buffer has space
      while (bufferFull()) { wait(); }
      insertItem(i);
      notifyAll();
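The two fragments above can be completed into a runnable bounded buffer. The class below is my own elaboration of the slide's sketch, with each fragment becoming a synchronized method; a capacity of 2 forces the producer to wait, exercising both conditions.

```java
import java.util.LinkedList;
import java.util.Queue;

// The slide's two fragments as synchronized methods of a bounded buffer.
// Capacity 2 forces the producer to wait, exercising both conditions.
public class BoundedBuffer {
    private final Queue<Integer> items = new LinkedList<>();
    private final int capacity = 2;

    public synchronized void insert(int i) throws InterruptedException {
        while (items.size() == capacity) {
            wait();                    // buffer full: wait for a reader
        }
        items.add(i);
        notifyAll();                   // wake any waiting readers
    }

    public synchronized int remove() throws InterruptedException {
        while (items.isEmpty()) {
            wait();                    // buffer empty: wait for a writer
        }
        int i = items.remove();
        notifyAll();                   // wake any waiting writers
        return i;
    }

    public static void main(String[] args) throws InterruptedException {
        final BoundedBuffer buf = new BoundedBuffer();
        Thread producer = new Thread(new Runnable() {
            public void run() {
                try {
                    for (int i = 1; i <= 5; i++) {
                        buf.insert(i);
                    }
                } catch (InterruptedException e) {
                    // interrupted while inserting
                }
            }
        });
        producer.start();
        int sum = 0;
        for (int i = 0; i < 5; i++) {
            sum += buf.remove();       // main thread acts as the consumer
        }
        producer.join();
        System.out.println(sum);       // 1+2+3+4+5
    }
}
```

The while loops (rather than ifs) around wait() matter: a woken thread must re-check its condition, since another thread may have emptied or filled the buffer first.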

  44. Deadlock
  • Example: gridlocked cars at a junction
    • all paths are blocked, so all cars are unable to move
    • unless a car moves, the paths will remain blocked

  45. Deadlock
  • An even simpler example: two trains going in opposite directions on a single track
    • neither train can proceed unless the other train gets out of the way
    • if trains can’t go backwards, they’re stuck

  46. Deadlock
  • Processes prevent each other from proceeding
  • Possible solutions:
    • prevent (make deadlock impossible)
    • avoid (see it coming and take steps to avoid it happening)
    • recover (let it happen, then fix it)
    • ignore (cross fingers and hope)

  47. Deadlock
  • Four conditions, together necessary & sufficient for deadlock:
    • tasks require the use of a non-shareable resource
    • tasks hold onto resources while waiting for extra ones to be assigned to them
    • resources cannot be taken away from tasks by an outside agency
    • there is a circular chain of tasks, each requesting a resource held by another task

  48. Deadlock
  • Tasks require the use of a non-shareable resource:
    • virtualise resources, e.g. print spooling on disk (printer is non-shareable, disk is shareable)
    • not possible in all cases (e.g. a railway track is not shareable between two trains and can't be virtualised away!)

  49. Deadlock
  • Tasks hold onto resources while waiting for extra ones to be assigned to them:
    • insist that all resources are allocated at once (task cannot proceed until all requests have been granted)
    • inefficient (resources will be allocated when they're not needed)

  50. Deadlock
  • Resources cannot be taken away from tasks by an outside agency:
    • similar to the previous case (if any request fails, release all resources and loop back to start again)
    • inefficient, as before
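The transcript stops before the fourth condition (the circular chain of requests) gets its own slide. A standard way to break that condition is to impose a single global order in which all tasks acquire resources; the Java sketch below illustrates this (class and lock names are my own):

```java
// Breaking the circular-wait condition: every thread acquires the locks
// in the same global order (lockA before lockB), so no cycle of
// "holds one, wants the other" can form.
public class LockOrdering {
    static final Object lockA = new Object();
    static final Object lockB = new Object();

    static void useBoth(String name) {
        synchronized (lockA) {            // always A first...
            synchronized (lockB) {        // ...then B, in every thread
                System.out.println(name + " holds both locks");
            }
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Thread t1 = new Thread(new Runnable() {
            public void run() { useBoth("t1"); }
        });
        Thread t2 = new Thread(new Runnable() {
            public void run() { useBoth("t2"); }
        });
        t1.start();
        t2.start();
        t1.join();    // terminates: if the threads took the locks in opposite orders, this could deadlock
        t2.join();
        System.out.println("done");
    }
}
```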