
RTOS


Presentation Transcript


  1. RTOS Unit 5 Resources: limayesir.wordpress.com, Raj Kamal, Heath

  2. What is an RTOS? • Operating systems are software environments that provide a buffer between the user and the low-level interfaces to the hardware within a system. They provide a standard interface and a set of utilities that enable users to make use of the system. Some applications do not require any operating system support at all and run directly on the hardware; these are called “bare metal” systems. • A Real Time Operating System (RTOS) is a multitasking operating system for applications that must meet time deadlines for operations and have a constraint on the time taken to respond to an event. • An RTOS has two main parts: • A kernel for task management and inter-task communication and coordination • Support services such as device drivers, a file system and memory management

  3. Types of Real Time Systems • Hard real time systems • Meeting deadlines is critical, and missing the target can result in catastrophic loss of life and property. The latencies can be calculated deterministically. E.g. air-bag control in cars, anti-lock braking, engine control systems. • Soft real time systems • These systems also have deadlines, but they are not as stringent; missing them results in minor inconvenience to the user or a slight degradation of performance. E.g. digital cameras, mobile phones, online databases.

  4. What is a “Task”? • A task (also called a process) is a program, or set of programs, responsible for one aspect of a system. E.g. one task periodically reads the temperature from a sensor; another task transmits these temperatures over the internet and updates the values on the cloud. • A task has 3 main states: • Running (only one task can be running at a time on a single-processor system) • Ready (waiting for its turn) • Blocked (waiting for some essential input)

  5. Task life time • On booting, the RTOS gets initialized and then creates a master user task. This task creates more tasks, and they all run in a time-sliced manner. • Some tasks are temporary and are terminated once their function is over. • Some tasks are active throughout the life of the system. • On creation, every task is assigned a Task Control Block (TCB) which stores its state; initially the task is in the “Idle” state. When it is activated, it goes to the “Ready” state. When a task is finished, its TCB is deallocated and the task no longer exists.

  6. Task coding • A task cannot call another task; the initial call is made by the scheduler, which may pass some parameters that are stored in the TCB. • As a task runs an endless loop, it is not expected to return; hence it is declared with a void return type. • Task code must be reentrant, i.e. it must not rely on unprotected global or static variables; each task keeps its working data on its own stack. • If two or more tasks share a common data area, appropriate measures are taken to avoid conflicts.

  7. Example of tasks (simplified versions of the µC/OS-III style functions are shown)

      void taskSense(void){
          while(1){
              Temp = ADCport;          // read ADC value
              OSMboxPost(&Temp);       // post it to the mailbox
              OSFlagPend(TimerFlag);   // wait for the next timer tick
          }
      }

      void taskSend(void){
          while(1){
              Temp = OSMboxPend();     // wait for the message and read it
              UpdateCloud(Temp);       // send the temperature to the cloud
          }
      }

  8. Task Control Block (TCB or PCB) • The TCB is a data structure associated with each task that maintains information about that task. Typical elements of the TCB are: • State – Ready/Running/Blocked • Context – CPU registers including flags, SP, PC • Task ID, parent ID, child ID • Allocated memory blocks and page table start address • Security and permissions
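
  To make the list above concrete, here is a minimal sketch of how such a TCB might be laid out in C. The field names, register count and widths are illustrative assumptions, not the layout of any particular RTOS.

      /* Illustrative TCB layout (hypothetical field names, not a real RTOS header). */
      #include <stdint.h>

      typedef enum { TASK_IDLE, TASK_READY, TASK_RUNNING, TASK_BLOCKED } task_state_t;

      typedef struct tcb {
          task_state_t state;       /* Ready / Running / Blocked              */
          uint32_t     regs[16];    /* saved CPU context: R0-R12, SP, LR, PC  */
          uint32_t     psr;         /* saved flags / program status register  */
          uint16_t     task_id;     /* this task                              */
          uint16_t     parent_id;   /* task that created it                   */
          uint16_t     child_id;    /* first child task, if any               */
          void        *stack_base;  /* allocated stack block                  */
          uint32_t     stack_size;
          void        *page_table;  /* page table start address (if MMU used) */
          uint32_t     permissions; /* security and access rights             */
          struct tcb  *next;        /* link in the ready/blocked list         */
      } tcb_t;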

  9. Thread • A thread is a sequentially executable unit of code. • It can be considered a lightweight task: its Thread Control Block is simpler than a TCB because the thread uses the resources of its parent task. • One task may have a single thread or multiple threads. Threads are scheduled and managed within their parent process. • A thread has states (Running/Ready/Blocked), a priority, stack space, a context and a masking flag.

  10. ISR • An ISR is invoked on the occurrence of a hardware signal or a software interrupt instruction such as SWI. • It first saves the registers it is going to use. • It performs the input/output operation, resets the interrupt condition and re-enables the interrupt system. • It does not perform time-consuming processing on the I/O data; instead it posts a message to an Interrupt Service Task (IST) or to any other task that is waiting for it. • An ISR is not allowed to use a blocking call such as OSSemPend() or a wait loop, as this would increase the latency for other interrupt sources; the RTOS has to give guaranteed latency to other interrupts and tasks. • After performing the I/O it restores the registers and returns. • The scheduler has no control over ISR execution; the scheduler itself is invoked by the timer interrupt.
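
  A hedged sketch of the two-layer ISR/IST split described above. The device registers and the mbox_post_from_isr()/mbox_pend() calls are placeholders invented for illustration, not a real vendor or RTOS API; an actual port would use the vendor's interrupt entry conventions and the RTOS's own post/pend services.

      /* Hypothetical device registers and message services (placeholders only). */
      volatile unsigned char *UART_DATA;        /* data register of the device    */
      volatile unsigned char *UART_CLEAR_IRQ;   /* write 1 to acknowledge the IRQ */

      void mbox_post_from_isr(unsigned char byte);  /* non-blocking post          */
      unsigned char mbox_pend(void);                /* blocks until a message     */

      /* First layer: runs at interrupt level and does only the minimum work. */
      void uart_rx_isr(void)
      {
          unsigned char byte = *UART_DATA;   /* read the device                  */
          *UART_CLEAR_IRQ = 1;               /* reset the interrupt condition    */
          mbox_post_from_isr(byte);          /* hand off; never block in an ISR  */
      }

      /* Second layer: the Interrupt Service Task does the slow processing. */
      void ist_uart(void)
      {
          for (;;) {
              unsigned char byte = mbox_pend();  /* wait for the ISR's message   */
              /* ... time-consuming parsing or protocol work happens here ...    */
              (void)byte;
          }
      }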

  11. RTOS services • An RTOS offers its services in the form of an API (Application Programming Interface), a set of functions that can be called by application programs. E.g. OSTaskCreate() is used to create a task. • We can broadly divide the services into two categories: • The kernel, for process management and IPC • Device drivers and file managers

  12. Services • Process management • Task creation and deletion • Scheduling (dispatching) • Inter-process communication (semaphores, message boxes, events, pipes, queues) • Timer functions • Other services • Device drivers and protocol stacks such as TCP/IP stacks, USB stacks, graphic display drivers and ISRs • File system managers, which provide functions like open, read, write and close

  13. Processor modes • The processor has two modes of operation: • User mode • Kernel mode • In user mode, some instructions are disabled (e.g. disabling or enabling interrupts) and some memory areas are not accessible. • Kernel mode has full control over the processor. • User-mode threads run more slowly because they undergo various checks. Also, when they call kernel services, the mode is switched and time is lost.

  14. Kernel architecture types • Monolithic kernel – all OS functions run in kernel mode • Microkernel – only basic functions run in kernel mode • Hybrid kernel – IPC and device drivers run in kernel mode

  15. Difference between RTOS and GPOS • An RTOS uses preemptive scheduling to ensure that a high-priority task gains access to the CPU quickly. A GPOS aims at fairness and throughput and uses non-preemptive, i.e. cooperative, scheduling to minimize task-switching overhead. • An RTOS scheduler can use algorithms like Earliest Deadline First (EDF) so that deadlines are not missed. • Unlike a GPOS, an RTOS must deal explicitly with priority inversion (e.g. through priority inheritance).

  16. Scheduling models • Cooperative (Tasks cooperate) • Cooperative Scheduling in the cyclic order • Cooperative Scheduling according to priority • Cyclic scheduling • Round robin time slicing • Preemptive (Tasks are switched fast) • Earliest Deadline First • Rate monotonic

  17. Cooperative scheduling in cyclic order • Each task cooperates by letting the running task run until it finishes or blocks, i.e. the running task is not interrupted. • Service is in cyclic order for all tasks in the “READY” list. This is equivalent to assigning rotating priorities: the task that has just finished gets the lowest priority. • Task switching takes place only when a task has finished or is waiting for something to happen. • Simple to implement.

  18. Disadvantage • The worst-case latency is the same for every task: the sum of the execution times (eti) and switching times (sti) of all tasks plus the total time taken by ISRs: • Tworst = {(sti + eti)1 + (sti + eti)2 + ... + (sti + eti)N-1 + (sti + eti)N} + tISR • This may not be acceptable for some tasks.
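
  This worst-case figure can be evaluated directly. The sketch below assumes example execution times, switching times and ISR time in microseconds; the numbers are invented purely to show the calculation.

      #include <stdio.h>

      #define N 4   /* number of tasks in the READY list (example value) */

      int main(void)
      {
          /* Assumed per-task execution and switching times, in microseconds. */
          double et[N] = { 500.0, 1200.0, 300.0, 800.0 };
          double st[N] = {  10.0,   10.0,  10.0,  10.0 };
          double t_isr = 150.0;             /* total ISR time during one cycle */

          double t_worst = t_isr;
          for (int i = 0; i < N; i++)
              t_worst += st[i] + et[i];     /* (sti + eti) summed over all N tasks */

          printf("Worst-case latency = %.1f us for every task\n", t_worst);
          return 0;
      }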

  19. Cooperative scheduling according to priority • In this scheme the “READY” list is ordered according to priority and control is given to the highest-priority ready task. • Thus the latency for high-priority tasks is reduced. For the highest-priority task p1 it is • Tbest = (dti + sti + eti)p1 + tISR • where dti is the event detection time. • The worst-case latency is for the lowest-priority task p(m). It is • Tworst = {(dti + sti + eti)p1 + (dti + sti + eti)p2 + ... + (dti + sti + eti)p(m-1) + (dti + sti + eti)p(m)} + tISR • Every task must meet the condition Tworst < Td (its time deadline).

  20. Automatic chocolate vending machine • It has three tasks: • Sense the coins inserted by the user (highest priority) • Deliver the chocolate (medium priority) • Display the message ‘Thank you, visit again’ (lowest priority) • We can use cooperative scheduling according to priority.

  21. Tasks and ISRs together • There are two layers of execution. The ISR is in the first layer and executes immediately. The ISR quickly performs the I/O and passes a message to the Interrupt Service Task (IST). This puts the IST in the ready list, and the IST then does the processing of the message. • The tasks run in the second layer according to their priorities. The priority of each interrupt is decided by hardware, and the corresponding IST has the same priority.

  22. Cyclic scheduling • This is suitable for a small number of periodic tasks. Assume we have three tasks that need to be executed once every Tcycle seconds. • The first task executes at t1, t1 + Tcycle, t1 + 2*Tcycle, … • The second task executes at t2, t2 + Tcycle, t2 + 2*Tcycle, … • The third task executes at t3, t3 + Tcycle, t3 + 2*Tcycle, … • The timer function of the RTOS is used for this purpose.

  23. Latency in cyclic scheduling • Tcycle is chosen such that Tcycle = N * (sum of the maximum times for each task). • Each task is executed once and finishes within one cycle. When a task finishes executing before its maximum time, there is an idle period between the two cycles. • If the tasks are fired in a staggered manner, there is no waiting period. In the worst case all tasks are fired at the same time; in that case latency = N * (sum of the maximum times for each task). • A task’s repeat-execution period must be an integral multiple of Tcycle.
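
  A minimal cyclic-executive sketch of the scheme above, assuming three periodic tasks. wait_for_next_cycle() is a placeholder for whatever RTOS timer service releases the frame every Tcycle (for example a periodic delay or a timer-flag pend), and the three task functions are assumed to exist elsewhere.

      /* Placeholders: provided by the RTOS / application, not defined here. */
      void wait_for_next_cycle(void);  /* blocks until the next Tcycle boundary       */
      void task1(void);                /* runs at t1, t1 + Tcycle, t1 + 2*Tcycle, ... */
      void task2(void);                /* runs at t2, t2 + Tcycle, ...                */
      void task3(void);                /* runs at t3, t3 + Tcycle, ...                */

      void cyclic_executive(void)
      {
          for (;;) {
              task1();                 /* each task runs once per cycle        */
              task2();
              task3();
              wait_for_next_cycle();   /* idle until the next Tcycle boundary  */
          }
      }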

  24. Example of video and audio signals • Signals arrive at the ports of a multimedia system in MP3 format. They are converted to a bitmap format suitable for display on the LCD screen. • The video frames arrive at a rate of 25 frames per second. • A cyclic scheduler is used in this case to process video and audio with Tcycle = 40 ms, or a multiple of 40 ms.

  25. Round robin with time slicing • This scheme is a hybrid between cooperative and preemptive scheduling. It minimizes switching while achieving low latency. • Each task in the ready list runs in a cyclic queue, but it gets only a limited time period (Tslice) to execute. If the cycle period is Tcycle and there are N ready tasks, then Tslice = Tcycle / N seconds. • If a task is not finished within its slice, it is interrupted and control is given to the next task. It completes the unfinished work in the next one or more cycles.
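
  A small, self-contained simulation of time-sliced round robin under assumed workloads (the millisecond figures are invented). It only tracks how much work each task has left after each slice, to show how an unfinished task carries its remaining work into later cycles.

      #include <stdio.h>

      #define N       3                    /* tasks in the ready list (example) */
      #define T_CYCLE 12.0                 /* ms, chosen cycle period (example) */
      #define T_SLICE (T_CYCLE / N)        /* each task gets Tcycle / N         */

      int main(void)
      {
          double remaining[N] = { 6.0, 3.0, 10.0 };  /* ms of work left per task */
          int done = 0, cycle = 0;

          while (done < N) {
              cycle++;
              for (int i = 0; i < N; i++) {
                  if (remaining[i] <= 0.0) continue;        /* already finished  */
                  double run = remaining[i] < T_SLICE ? remaining[i] : T_SLICE;
                  remaining[i] -= run;                      /* run for one slice */
                  if (remaining[i] <= 0.0) {
                      done++;
                      printf("task %d finishes in cycle %d\n", i + 1, cycle);
                  }
              }
          }
          return 0;
      }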

  26. Latency of round robin • Tcycle = N * Tslice + tISR • where tISR is the total execution time of all ISRs that occur during one cycle. • Tworst = N * Tslice + tISR • The worst-case latency is the same for every task in the ready list.

  27. Preemptive scheduling • A GPOS uses cooperative scheduling to minimize switching overhead. • In cyclic scheduling, the worst-case latency equals the sum of the execution times of all tasks. • In priority-based cooperative scheduling, a long execution time of a low-priority task makes a high-priority task wait until it finishes. • An RTOS therefore uses preemptive scheduling to achieve low latency. • The scheduler is triggered at periodic intervals by the system timer. On each invocation it schedules the highest-priority task in the ready list; if that is not the current task, a task switch takes place.

  28. Preemptive scheduling with fixed priorities • A long low-priority task does not keep a high-priority task waiting; the maximum wait is one Tslice, i.e. one clock-tick period. • At every clock tick, the scheduler allows a higher-priority process to preempt a lower-priority one. • Assume priority of task_1 > task_2 > task_3 > task_4 … > task_N. • Each task has an infinite loop from start (Idle state) up to finish.

  29. Assigning priorities • Priorities are assigned by the programmer during the programming phase depending on relative importance; the scheduler cannot change them. • For example: • Highest priority for the “panic” pushbutton • Power-fail detection is the next priority • Printing the event log (soft deadline) • Running system diagnostics (lowest priority)

  30. Preemptive fixed priority scheduling

  31. Program counter assignment • Initially Task2 is running. • At the clock tick, the OS preempts Task2 and runs the higher-priority Task1. • Task1 has the highest priority, so it runs to completion and yields control to the OS. • Task2 resumes where it left off and completes. • Another ready task, Task3, then takes over.

  32. Earliest Deadline First (EDF) • Fixed-priority schemes are like a traffic signal without traffic sensors: they have no actual control over deadlines. • The EDF algorithm assigns dynamic priorities based on the time of the deadline. The task with the earliest deadline is assigned the highest priority. It may also use two or more priority queues into which tasks are inserted. • Example task set:

      Task | Release time | Compute time Ci | Deadline = Period Ti
      T1   |      0       |        1        |          4
      T2   |      0       |        2        |          6
      T3   |      0       |        3        |          8

  33. EDF example • First let us check schedulability. • CPU utilization U is given by U = Σ Ci / Ti • U = 1/4 + 2/6 + 3/8 = 0.25 + 0.333 + 0.375 ≈ 0.96, i.e. about 96% • Since U ≤ 1, the schedule is feasible. • At t = 0 all the tasks are released, and priorities are decided according to their absolute deadlines. T1 has the highest priority because its deadline (4) is earlier than T2's (6) and T3's (8), so it executes first.
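
  The utilization test can be computed directly for the example task set, with Ci and Ti taken from the table above.

      #include <stdio.h>

      int main(void)
      {
          /* Compute times Ci and periods Ti of the example task set. */
          double C[] = { 1.0, 2.0, 3.0 };
          double T[] = { 4.0, 6.0, 8.0 };
          int n = 3;

          double U = 0.0;
          for (int i = 0; i < n; i++)
              U += C[i] / T[i];                 /* U = sum of Ci / Ti */

          printf("U = %.3f (%.1f%%)\n", U, 100.0 * U);
          if (U <= 1.0)
              printf("U <= 1: the task set is schedulable under EDF\n");
          else
              printf("U > 1: the task set is not schedulable\n");
          return 0;
      }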

  34. • At t = 1 the absolute deadlines are compared again; T2 has the shorter deadline, so it executes, and after that T3 starts executing. At t = 4, T1 re-enters the system and deadlines are compared; at this instant T1 and T3 have the same deadline (t = 8), so the tie is broken arbitrarily and we continue to execute T3. • At t = 6, T2 is released; the deadline of T1 (t = 8) is now the earliest, so T1 executes and after that T2 begins to execute. At t = 8, T1 and T3 are released again; T1 and the still-running T2 have the same deadline, t = 12, so the tie is broken arbitrarily, T2 continues its execution and then T1 completes. • At t = 12, T1 and T2 enter the system simultaneously; comparing absolute deadlines, T1 and the running T3 have the same deadline (t = 16), so the tie is broken arbitrarily and we continue to execute T3.

  35. • At t = 13, T1 begins its execution and ends at t = 14. T2 is then the only ready task in the system, so it completes its execution. • At t = 16, T1 and T3 are released together; priorities are decided by absolute deadline, so T1 executes first because its deadline is t = 20 while T3's deadline is t = 24. After T1 completes, T3 starts; at t = 18, T2 enters the system, and by deadline comparison T2 and T3 both have the same deadline (t = 24), so the tie is broken arbitrarily and we continue to execute T3. • At t = 20, both T1 and T2 are in the system and both have the same deadline (t = 24), so again the tie is broken arbitrarily; T2 executes, and after that T1 completes its execution. In the same way the system continues to run without missing a deadline by following the EDF algorithm.
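
  As a cross-check on the walkthrough, here is a compact simulation of the same task set. At every time unit it runs the released job with the earliest absolute deadline; ties are resolved in favour of the task already running and then by task index, so individual tie choices may differ from the arbitrary choices made above, but every job still meets its deadline.

      #include <stdio.h>

      #define NTASKS  3
      #define HORIZON 24

      int main(void)
      {
          int C[NTASKS]   = { 1, 2, 3 };   /* compute times                        */
          int T[NTASKS]   = { 4, 6, 8 };   /* periods = relative deadlines         */
          int rem[NTASKS] = { 0, 0, 0 };   /* remaining work of the current job    */
          int dl[NTASKS]  = { 0, 0, 0 };   /* absolute deadline of the current job */
          int running = -1;

          for (int t = 0; t < HORIZON; t++) {
              for (int i = 0; i < NTASKS; i++)       /* release new jobs */
                  if (t % T[i] == 0) { rem[i] = C[i]; dl[i] = t + T[i]; }

              /* Pick earliest deadline; on a tie prefer the task already running. */
              int pick = -1;
              for (int i = 0; i < NTASKS; i++) {
                  if (rem[i] == 0) continue;
                  if (pick < 0 || dl[i] < dl[pick] ||
                      (dl[i] == dl[pick] && i == running))
                      pick = i;
              }
              running = pick;
              if (pick >= 0) { rem[pick]--; printf("t=%2d  run T%d\n", t, pick + 1); }
              else           printf("t=%2d  idle\n", t);
          }
          return 0;
      }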

  36. Rate monotonic algorithm • This is a fixed-priority algorithm in which task priorities are assigned according to the rate of arrival: the priority of a task is proportional to its arrival rate. The task with the highest frequency (shortest period) has the highest priority and the task with the lowest frequency has the lowest priority. • One disadvantage is that sporadic and aperiodic tasks are not supported. Sporadic tasks may arrive at a high frequency during a burst period, yet they will have a low priority. This problem is solved by assigning tickets, i.e. special slots, to sporadic tasks. • Another disadvantage is that a task may have a low frequency but a critical deadline. This is solved by splitting such a task into two or more sub-tasks.
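
  A hedged sketch of rate-monotonic priority assignment: tasks are simply sorted by period, so the shortest period (highest rate) gets the highest priority. The classic Liu and Layland bound n(2^(1/n) - 1) is then checked as a sufficient (not necessary) schedulability test; the task set is the one from the EDF slides.

      #include <stdio.h>
      #include <math.h>

      typedef struct { const char *name; double C, T; } task_t;

      int main(void)
      {
          /* Example task set: shorter period => higher rate => higher priority. */
          task_t set[] = { { "T1", 1.0, 4.0 }, { "T2", 2.0, 6.0 }, { "T3", 3.0, 8.0 } };
          int n = 3;

          /* Sort by period (ascending): this is the rate-monotonic priority order. */
          for (int i = 0; i < n; i++)
              for (int j = i + 1; j < n; j++)
                  if (set[j].T < set[i].T) { task_t tmp = set[i]; set[i] = set[j]; set[j] = tmp; }

          double U = 0.0;
          for (int i = 0; i < n; i++) {
              U += set[i].C / set[i].T;
              printf("priority %d: %s (period %.0f)\n", i + 1, set[i].name, set[i].T);
          }

          double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* Liu-Layland bound */
          printf("U = %.3f, RM bound = %.3f -> %s\n", U, bound,
                 U <= bound ? "schedulable (sufficient test passes)" : "test inconclusive");
          return 0;
      }

  For this set, U ≈ 0.96 exceeds the bound (≈ 0.78 for n = 3), which illustrates the point made on the next slide: EDF can use the processor up to 100% utilization, while the simple rate-monotonic test cannot guarantee it.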

  37. Advantages of EDF over rate monotonic • No need to define priorities offline. • It causes less context switching than rate monotonic. • It can utilize the processor up to a utilization factor of 100%, unlike rate monotonic. • Disadvantages of EDF over rate monotonic • It is less predictable, because the response times of tasks are variable, whereas response times are constant under rate monotonic or any fixed-priority algorithm. • EDF provides less control over the execution. • It has higher overheads.

  38. Context switching • Context refers to the CPU state, namely the registers, including SP, PC and the flags. When an interrupt occurs, the ISR saves the context on the stack; at the end, the ISR pops the context from the stack, and when the PC is restored the interrupted program resumes. • In a preemptive RTOS, task switching occurs at SysTick intervals. The hardware clock interrupt saves the context of the running program on the stack, but at the end control is not given back to the interrupted program; control goes to the OS. • The OS copies the context into the task's TCB and also stores its current state and other information. It then processes pending message Post and Pend operations and updates the READY/BLOCKED states of all tasks. • The scheduler is then run to determine the next task to run. If it is different from the interrupted task, the page table base address is updated for the new task and its context is copied from its TCB onto the stack as if it had just been interrupted. A return-from-interrupt (RETI) instruction then causes it to resume.

  39. Inter-process communication (IPC) • If different tasks have to run in a coordinated manner, they must be able to communicate with each other. The RTOS provides several mechanisms for this: • Message queue • Mailbox • Pipe and socket • Signal and event

  40. Message queue • A task can create one or more queues with the OSQCreate() function. Each queue has a unique ID, and the maximum queue length is specified during creation. • OSQPost() is used to put data into the queue in FIFO order. • OSQPostFront() is used to put data at the front of the queue (LIFO order). • OSQPend() is used to wait for a message, read it and remove it from the queue. • There may be multiple receivers of the messages. • The queue entries may be data or pointers to data.
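
  A hedged producer/consumer sketch of the pattern these queue calls support. The q_create()/q_post()/q_pend() names, read_sensor() and process() are simplified placeholders modeled loosely on the slide's OSQ functions, not the exact µC/OS-III signatures (which also take sizes, options and error codes).

      /* Simplified placeholder queue API and helpers (hypothetical names). */
      typedef struct os_q os_q_t;
      os_q_t *q_create(int max_entries);          /* like OSQCreate()           */
      void    q_post(os_q_t *q, void *msg);       /* append: FIFO order         */
      void    q_post_front(os_q_t *q, void *msg); /* insert at head: LIFO order */
      void   *q_pend(os_q_t *q);                  /* block until a message      */
      int     read_sensor(void);                  /* hypothetical input source  */
      void    process(int value);                 /* hypothetical consumer work */

      static os_q_t *sample_q;
      static int samples[8];        /* storage that the queued pointers refer to */

      void producer_task(void)
      {
          int i = 0;
          for (;;) {
              samples[i & 7] = read_sensor();
              q_post(sample_q, &samples[i & 7]);  /* queue a pointer, not a copy */
              i++;
          }
      }

      void consumer_task(void)
      {
          for (;;) {
              int *val = q_pend(sample_q);        /* wait for the next entry     */
              process(*val);
          }
      }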

  41. Mailbox • A mailbox is similar to a queue. • It is used to send data from one task to another. • It can contain only one entry, whereas a queue can hold multiple entries in FIFO or LIFO order. • The API functions are • OSMBoxCreate() • OSMBoxPost() • OSMBoxPend()

  42. Using a mailbox or queue for sequencing of tasks

  43. Pipe • A pipe is another inter-process communication mechanism, but unlike a message queue or mailbox it is a byte stream and is accessed like a file. • A pipe is unidirectional: one thread or task inserts into it and another deletes from it. • A pipe has create (e.g. pipeDevCreate), connect and delete functions. • A pipe carries a variable number of bytes per message between its initial and final pointers. • Data transfer is done much as with a file (open, write, read, close).
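
  pipeDevCreate() is the name used by one particular RTOS; as a portable illustration of the same byte-stream idea, here is a minimal POSIX example using the standard pipe(), write() and read() calls.

      #include <stdio.h>
      #include <string.h>
      #include <unistd.h>

      int main(void)
      {
          int fd[2];                        /* fd[0] = read end, fd[1] = write end */
          if (pipe(fd) != 0) return 1;

          const char *msg = "temp=27";
          write(fd[1], msg, strlen(msg));   /* one thread/task inserts bytes ...   */

          char buf[32] = { 0 };
          read(fd[0], buf, sizeof buf - 1); /* ... another removes them, in order  */
          printf("received: %s\n", buf);

          close(fd[0]);
          close(fd[1]);
          return 0;
      }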

  44. Pipe example

  45. Signal • A signal is used to synchronize two processes without the need to send any data. It is the shortest and fastest method compared with a mailbox or message queue. • The signal() function is provided in Unix, Linux and several RTOSes. It is the software equivalent of an interrupt and causes execution of a “signal handler”. Each signal has an ID number; a vector table indexed by the ID is used to find the address of the corresponding signal handler. • A signal bypasses the scheduler mechanism and delivers the event immediately.
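
  A minimal POSIX/C illustration of the signal-handler idea: signal() registers the handler for a signal ID and raise() delivers the signal, playing the role of the software interrupt described above.

      #include <signal.h>
      #include <stdio.h>

      static volatile sig_atomic_t got_event = 0;

      /* The "signal handler": invoked asynchronously when the signal is delivered. */
      static void on_event(int signum)
      {
          (void)signum;
          got_event = 1;                /* keep the handler short, like an ISR */
      }

      int main(void)
      {
          signal(SIGUSR1, on_event);    /* register handler for signal ID SIGUSR1   */
          raise(SIGUSR1);               /* software "interrupt": deliver the signal */

          if (got_event)
              printf("signal handler ran\n");
          return 0;
      }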

  46. Event functions • Semaphore or mailbox message-posting functions wait for only one event. • Event-related OS functions are more flexible: using them, a task can wait for a number of events to occur, or for any one of a predefined set of events. • The events waited for can come from different tasks or from ISRs. • OSEventCreate() creates an event register containing 8, 16 or 32 bits; each bit corresponds to an event flag. We can form groups of bits so that a wait can be for all flags in a group or for any flag in a group. • SET(event_flag) is used to set a flag and CLEAR(event_flag) is used to clear it.

  47. Types of wait • WAIT_ALL() function – the task waits for all of the event flags in a group to be set. [Wait until the AND of all flags in the group is true.] • WAIT_ANY() function – the task waits for at least one of the event flags in a group to be set. [Wait until the OR of all flags in the group is true.]
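
  A plain-C sketch of the flag-group logic behind WAIT_ALL and WAIT_ANY (an AND test versus an OR test on a bit mask). The flag names are invented, and the real OS calls would also block the waiting task, which is omitted here.

      #include <stdint.h>
      #include <stdio.h>

      #define EVT_ADC_DONE   (1u << 0)
      #define EVT_TIMER_TICK (1u << 1)
      #define EVT_UART_RX    (1u << 2)

      /* WAIT_ALL semantics: every flag in the group must be set (AND test).  */
      static int all_set(uint32_t reg, uint32_t group) { return (reg & group) == group; }

      /* WAIT_ANY semantics: at least one flag in the group is set (OR test). */
      static int any_set(uint32_t reg, uint32_t group) { return (reg & group) != 0; }

      int main(void)
      {
          uint32_t event_reg = 0;
          uint32_t group = EVT_ADC_DONE | EVT_TIMER_TICK;

          event_reg |= EVT_ADC_DONE;                                 /* SET(event_flag)   */
          printf("WAIT_ANY ready: %d\n", any_set(event_reg, group)); /* prints 1 */
          printf("WAIT_ALL ready: %d\n", all_set(event_reg, group)); /* prints 0 */

          event_reg |= EVT_TIMER_TICK;
          printf("WAIT_ALL ready: %d\n", all_set(event_reg, group)); /* prints 1 */

          event_reg &= ~EVT_ADC_DONE;                                /* CLEAR(event_flag) */
          return 0;
      }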

  48. Sharing data • We need to share a memory block among two or more processes for reasons such as the following: • Time, which is updated continuously by one process, is also used by the display process in a system. • Port input data, which is received by one process, is further processed and analysed by another process. • Memory buffer data, which is inserted by one process, is further read (and deleted), processed and analysed by another process.
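
  A hedged sketch of protecting the first item above (the continuously updated time) between two tasks. mutex_lock()/mutex_unlock() stand in for whatever mutual-exclusion primitive the RTOS provides (a mutex or a binary semaphore); without such protection, one task could read the value while the other is part-way through updating it.

      /* Placeholder mutual-exclusion primitives: any RTOS mutex/semaphore works. */
      void mutex_lock(void);
      void mutex_unlock(void);

      static int shared_time_ms;          /* updated by one task, read by another */

      /* Producer: the clock task updates the shared value. */
      void clock_task_step(int now_ms)
      {
          mutex_lock();                   /* enter the critical section */
          shared_time_ms = now_ms;
          mutex_unlock();
      }

      /* Consumer: the display task takes a consistent snapshot. */
      int display_task_read(void)
      {
          mutex_lock();
          int snapshot = shared_time_ms;
          mutex_unlock();
          return snapshot;
      }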
