
COT 5611 Operating Systems Design Principles Spring 2012


Presentation Transcript


  1. COT 5611 Operating Systems Design Principles Spring 2012 Dan C. Marinescu Office: HEC 304 Office hours: M-Wd 5:00-6:00 PM

  2. Lecture 19 – Wednesday March 21, 2012
  • Reading assignment: Chapter 9 from the on-line text
  • Last time:
    • All-or-nothing and before-or-after atomicity
    • Atomicity and processor management
    • Processes, threads, and address spaces
    • Thread coordination with a bounded buffer – the naïve approach
    • Thread management
    • Address spaces and multi-level memories
    • Kernel structures for the management of multiple cores/processors and threads/processes

  3. Today
  • Locks and before-or-after actions; hardware support for locks
  • YIELD
  • Conditions for thread coordination – Safety, Liveness, Bounded-Wait, Fairness
  • Critical sections – a solution to the critical-section problem
  • Deadlocks
  • Signals
  • Semaphores
  • Monitors
  • Thread coordination with a bounded buffer:
    • WAIT
    • NOTIFY
    • AWAIT
    • ADVANCE
    • SEQUENCE
    • TICKET

  4. Locks; Before-or-After actions
  • A lock is a shared variable which acts as a flag to coordinate access to shared data. It is manipulated with two primitives:
    • ACQUIRE
    • RELEASE
  • Locks support the implementation of before-or-after actions; only one thread can acquire the lock, the others have to wait.
  • All threads must obey the convention regarding the locks.
  • The two operations ACQUIRE and RELEASE must be atomic.
  • Hardware support for the implementation of locks:
    • RSM – Read and Set Memory
    • CMP – Compare and Swap
  • RSM(mem) atomically reads the location mem and sets it to LOCKED:
    • If mem=LOCKED then RSM returns r=LOCKED and mem stays LOCKED
    • If mem=UNLOCKED then RSM returns r=UNLOCKED and sets mem=LOCKED
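
  A minimal sketch of ACQUIRE/RELEASE built on this idea, assuming C11 atomics. Here atomic_flag_test_and_set plays the role of RSM (it atomically returns the old value and sets the flag), and the names lock_t, acquire, and release are illustrative, not the text's API:

      #include <stdatomic.h>

      /* UNLOCKED == clear; initialize with: lock_t l = { ATOMIC_FLAG_INIT }; */
      typedef struct { atomic_flag locked; } lock_t;

      void acquire(lock_t *l) {
          /* Spin until the old value returned by test-and-set was "clear",
             i.e., until this thread is the one that flipped it to LOCKED. */
          while (atomic_flag_test_and_set(&l->locked))
              ;  /* busy wait */
      }

      void release(lock_t *l) {
          atomic_flag_clear(&l->locked);   /* set mem = UNLOCKED */
      }

  Because the test-and-set is a single atomic instruction, two threads calling acquire() concurrently cannot both observe the lock as free.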

  5.–6. [figure slides]

  7. Important facts to remember
  • Each thread has a unique ThreadId.
  • Threads save their state on the stack.
  • The stack pointer of a thread is stored in the thread table.
  • To activate a thread, the registers of the processor are loaded with information from the thread state.
  • What if no thread is able to run?
    • Create a dummy thread for each processor, called a processor_thread, which is scheduled to run when no other thread is available.
    • The processor_thread runs in the thread layer; the SCHEDULER runs in the processor layer.
  • We have a processor thread for each processor/core.
  • We can use spin locks only if the two processes (the producer and the consumer) run on different CPUs; we need an active process to release a spin lock.

  8. Switching threads with dynamic thread creation
  • Switching from one user thread to another requires two steps:
    • Switch from the thread releasing the processor to the processor thread.
    • Switch from the processor thread to the new thread, which is going to have control of the processor.
  • The last step requires the SCHEDULER to circle through the thread_table until a thread ready to run is found.
  • The boundary between user-layer threads and the processor-layer thread is crossed twice.
  • Example: switch from thread 0 to thread 6 using
    • YIELD
    • ENTER_PROCESSOR_LAYER
    • EXIT_PROCESSOR_LAYER

  9. [figure slide]

  10. The control flow when switching from one thread to another
  • The control flow is not obvious, as some of the procedures reload the stack pointer (SP). When a procedure reloads the stack pointer, the place where it transfers control when it executes a return is the procedure whose SP was saved on the stack and was reloaded before the execution of the return.
  • ENTER_PROCESSOR_LAYER
    • Changes the state of the thread calling YIELD from RUNNING to RUNNABLE.
    • Saves the state of the procedure calling it, YIELD, on the stack.
    • Loads the processor's registers with the state of the processor thread, thus starting the SCHEDULER.
  • EXIT_PROCESSOR_LAYER
    • Saves the state of the processor thread into the corresponding PROCESSOR_TABLE entry and loads the state of the thread selected by the SCHEDULER to run (in our example, thread 6) into the processor's registers.
    • Loads the SP with the value saved by ENTER_PROCESSOR_LAYER.
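
  The same two-step pattern can be sketched with the POSIX <ucontext.h> API: a user thread yields to a per-processor scheduler context, which then picks the next runnable thread. This is a runnable analogue, not the lecture's implementation; the names sched_ctx, yield, and worker are assumptions, and the two swapcontext() crossings correspond roughly to ENTER_PROCESSOR_LAYER and EXIT_PROCESSOR_LAYER:

      #include <stdio.h>
      #include <ucontext.h>

      #define NTHREADS 2
      static ucontext_t sched_ctx, thr_ctx[NTHREADS];
      static char stacks[NTHREADS][64 * 1024];
      static int current;

      static void yield(void) {
          /* "ENTER_PROCESSOR_LAYER": save this thread, resume the scheduler */
          swapcontext(&thr_ctx[current], &sched_ctx);
      }

      static void worker(void) {
          for (int i = 0; i < 3; i++) {
              printf("thread %d, step %d\n", current, i);
              yield();
          }
      }

      int main(void) {
          for (int t = 0; t < NTHREADS; t++) {
              getcontext(&thr_ctx[t]);
              thr_ctx[t].uc_stack.ss_sp   = stacks[t];
              thr_ctx[t].uc_stack.ss_size = sizeof stacks[t];
              thr_ctx[t].uc_link = &sched_ctx;       /* return to scheduler */
              makecontext(&thr_ctx[t], worker, 0);
          }
          /* "SCHEDULER": round-robin over the thread table, six time slices */
          for (int slice = 0; slice < 6; slice++) {
              current = slice % NTHREADS;
              /* "EXIT_PROCESSOR_LAYER": save scheduler, resume chosen thread */
              swapcontext(&sched_ctx, &thr_ctx[current]);
          }
          return 0;
      }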

  11.–12. [figure slides]

  13. Erratum: in ENTER_PROCESSOR_LAYER, instead of SCHEDULER() it should be SP ← processor_table[processor].topstack.

  14. [figure slide]

  15. Implicit assumptions for the correctness of the implementation
  • One sending and one receiving thread; only one thread updates each shared variable.
  • The sender and receiver threads run on different processors, to allow spin locks.
  • in and out are implemented as integers large enough that they do not overflow (e.g., 64-bit integers).
  • The shared memory used for the buffer provides read/write coherence.
  • The memory provides before-or-after atomicity for the shared variables in and out.
  • The result of executing a statement becomes visible to all threads in program order; no compiler optimization is assumed.
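
  Under these assumptions, the naive single-producer/single-consumer buffer can be sketched as follows. The names buffer_t, SEND, RECEIVE, in, and out follow the lecture; the element type long and the size N are illustrative. Because each shared variable has a single writer, no lock is needed – provided the memory assumptions above hold:

      #define N 8
      typedef struct { long msg[N]; long in, out; } buffer_t;

      /* Producer: spin while the buffer is full, then append. */
      void SEND(buffer_t *p, long m) {
          while (p->in - p->out == N)
              ;                       /* spin: buffer full  */
          p->msg[p->in % N] = m;
          p->in = p->in + 1;          /* only the sender writes 'in'    */
      }

      /* Consumer: spin while the buffer is empty, then remove. */
      long RECEIVE(buffer_t *p) {
          while (p->in == p->out)
              ;                       /* spin: buffer empty */
          long m = p->msg[p->out % N];
          p->out = p->out + 1;        /* only the receiver writes 'out' */
          return m;
      }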

  16. In practice…
  • Threads run concurrently → race conditions may occur → data in the buffer may be overwritten → we need a lock for the bounded buffer:
    • the producer acquires the lock before writing
    • the consumer acquires the lock before reading

  17. [figure slide]

  18. We have to avoid deadlocks
  • If a producer thread cannot write because the buffer is full, it has to release the lock to allow the consumer thread to acquire the lock and read; otherwise we have a deadlock.
  • If a consumer thread cannot read because there is no new item in the buffer, it has to release the lock to allow the producer thread to acquire the lock and write; otherwise we have a deadlock.
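
  A sketch of this acquire-test-release-retry discipline for the producer side, reusing the spin lock and buffer layout from the earlier sketches (the lk field is an addition for this version):

      typedef struct { long msg[N]; long in, out; lock_t lk; } locked_buffer_t;

      void SEND(locked_buffer_t *p, long m) {
          for (;;) {
              acquire(&p->lk);
              if (p->in - p->out < N)     /* room available: keep the lock */
                  break;
              release(&p->lk);            /* full: let the consumer in     */
          }
          p->msg[p->in % N] = m;
          p->in = p->in + 1;
          release(&p->lk);
      }

  RECEIVE is symmetric: it releases the lock and retries while the buffer is empty. This still busy-waits; the later slides replace the retry loop with YIELD and with events.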

  19. [figure slide]

  20. In practice… we have to ensure atomicity of some operations, e.g., updating the pointers.

  21. One more pitfall of the previous implementation of the bounded buffer
  • If in and out are long integers (64 or 128 bit), then a load requires two registers, e.g., R1 and R2:
      int ← "00000000FFFFFFFF"
      L R1, int     /* R1 ← 00000000 */
      L R2, int+1   /* R2 ← FFFFFFFF */
  • Race conditions could affect a load or a store of the long integer.
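
  An illustrative C sketch of this torn-read hazard, assuming a 32-bit machine where a 64-bit value is read as two word loads; the union, the names wide_t and torn_read, and the little-endian layout are assumptions for the example:

      #include <stdint.h>

      typedef union {
          uint64_t whole;        /* the long integer 'int' from the slide */
          uint32_t half[2];      /* its two machine words (little-endian) */
      } wide_t;

      uint64_t torn_read(volatile wide_t *v) {
          uint32_t lo = v->half[0];              /* L R1, int    */
          /* a writer incrementing v->whole here makes the halves
             inconsistent: the value returned was never stored    */
          uint32_t hi = v->half[1];              /* L R2, int+1  */
          return ((uint64_t)hi << 32) | lo;
      }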

  22. [figure slide]

  23. In practice the threads may run on the same processor… We cannot use spin locks for a thread to wait until an event occurs: the spinning thread keeps the processor busy, so the thread that would produce the event never runs. That is why we have spent time on YIELD…

  24. [figure slide]

  25. Thread coordination
  • Critical section → code that accesses a shared resource.
  • Race condition → two or more threads access shared data and the result depends on the order in which the threads access the shared data.
  • Mutual exclusion → only one thread should execute a critical section at any one time.
  • Scheduling algorithms → decide which thread to choose when multiple threads are in a RUNNABLE state:
    • FIFO – first in, first out
    • LIFO – last in, first out
    • Priority scheduling
    • EDF – earliest deadline first
  • Preemption → the ability to stop a running activity and start another one with a higher priority.
  • Side effects of thread coordination:
    • Deadlock
    • Priority inversion → a lower-priority activity is allowed to run before one with a higher priority.

  26. Solutions to thread coordination problems must satisfy a set of conditions
  • Safety: the required condition will never be violated.
  • Liveness: the system should eventually progress, irrespective of contention.
  • Freedom from starvation: no process should be denied progress forever; that is, every process should make progress in a finite time.
  • Bounded wait: every process is assured of not more than a fixed number of overtakes by other processes in the system before it makes progress.
  • Fairness: dependent on the scheduling algorithm:
    • FIFO: no process will ever overtake another process.
    • LRU: the process which received the service least recently gets the service next.
  • For example, for the mutual exclusion problem the solution should guarantee that:
    • Safety → the mutual exclusion property is never violated.
    • Liveness → a thread will access the shared resource in a finite time.
    • Freedom from starvation → a thread will access the shared resource in a finite time.
    • Bounded wait → a thread will access the shared resource after at most a fixed number of accesses by other threads.

  27. Thread coordination problems
  • Dining philosophers
  • Critical section

  28. A solution to the critical-section problem
  • Applies only to two threads Ti and Tj with i,j = {0,1}, which share:
    • integer turn: if turn=i then it is the turn of Ti to enter the critical section
    • boolean flag[2]: if flag[i]=TRUE then Ti is ready to enter the critical section
  • To enter the critical section, thread Ti
    • sets flag[i]=TRUE
    • sets turn=j
  • If both threads want to enter, then turn will end up with a value of either i or j and the corresponding thread will enter the critical section.
  • Ti enters the critical section only if either flag[j]=FALSE or turn=i.
  • The solution is correct:
    • Mutual exclusion is guaranteed.
    • Liveness is ensured.
    • The bounded-waiting condition is met.
  • But this solution may not work as-is, because load and store instructions can be reordered or interleaved on modern computer architectures.
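
  This is the classic two-thread algorithm usually attributed to Peterson; a minimal sketch follows. The _Atomic qualifiers and the names enter/leave are additions for the sketch – sequentially consistent atomics sidestep the reordering problem noted above:

      #include <stdatomic.h>
      #include <stdbool.h>

      static _Atomic bool flag[2];   /* flag[i]: thread i wants to enter   */
      static _Atomic int  turn;      /* whose turn it is when both want in */

      void enter(int i) {
          int j = 1 - i;
          flag[i] = true;            /* I am ready to enter                */
          turn = j;                  /* but let the other go first         */
          while (flag[j] && turn == j)
              ;                      /* spin until it is safe to proceed   */
      }

      void leave(int i) {
          flag[i] = false;
      }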

  29. [figure slide]

  30. Deadlocks
  • They happen quite often in real life, and the proposed solutions are not always logical: “When two trains approach each other at a crossing, both shall come to a full stop and neither shall start up again until the other has gone.” – a pearl from Kansas legislation.
  • A deadlocked jury.
  • A deadlocked legislative body.

  31. Examples of deadlock
  • Traffic only in one direction.
  • Solution → one car backs up (preempt resources and roll back). Several cars may have to be backed up.
  • Starvation is possible.

  32. [figure slide]

  33. Thread deadlock
  • Deadlocks → prevent sets of concurrent threads/processes from completing their tasks.
  • How does a deadlock occur → a set of blocked threads, each holding a resource and waiting to acquire a resource held by another thread in the set.
  • Example: locks A and B, initialized to 1
      P0: wait(A); wait(B)
      P1: wait(B); wait(A)
  • Aim → prevent or avoid deadlocks
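
  A runnable sketch of this example, with POSIX mutexes standing in for the locks A and B; the thread names p0/p1 are illustrative. With unlucky timing each thread holds one lock and blocks forever on the other:

      #include <pthread.h>

      static pthread_mutex_t A = PTHREAD_MUTEX_INITIALIZER;
      static pthread_mutex_t B = PTHREAD_MUTEX_INITIALIZER;

      static void *p0(void *arg) {
          pthread_mutex_lock(&A);    /* wait(A) */
          pthread_mutex_lock(&B);    /* wait(B): blocks if P1 holds B */
          pthread_mutex_unlock(&B);
          pthread_mutex_unlock(&A);
          return NULL;
      }

      static void *p1(void *arg) {
          pthread_mutex_lock(&B);    /* wait(B) */
          pthread_mutex_lock(&A);    /* wait(A): blocks if P0 holds A */
          pthread_mutex_unlock(&A);
          pthread_mutex_unlock(&B);
          return NULL;
      }

      int main(void) {
          pthread_t t0, t1;
          pthread_create(&t0, NULL, p0, NULL);
          pthread_create(&t1, NULL, p1, NULL);
          pthread_join(t0, NULL);    /* may never return */
          pthread_join(t1, NULL);
          return 0;
      }

  Acquiring the two locks in the same global order in both threads removes the circular wait and thus the deadlock.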

  34. System model
  • Resource types R1, R2, …, Rm (CPU cycles, memory space, I/O devices).
  • Each resource type Ri has Wi instances.
  • Resource access model:
    • request
    • use
    • release

  35. Simultaneous conditions for deadlock
  • Mutual exclusion: only one process at a time can use a resource.
  • Hold and wait: a process holding at least one resource is waiting to acquire additional resources held by other processes.
  • No preemption: a resource can be released only voluntarily by the process holding it (presumably after that process has finished).
  • Circular wait: there exists a set {P0, P1, …, Pn} of waiting processes such that P0 is waiting for a resource that is held by P1, P1 is waiting for a resource that is held by P2, …, Pn–1 is waiting for a resource that is held by Pn, and Pn is waiting for a resource that is held by P0.

  36. Wait-for graphs [figure slide]

  37. Semaphores
  • An abstract data structure introduced by Dijkstra to reduce the complexity of thread coordination; a semaphore s has two components:
    • C → a count giving the status of the contention for the resource guarded by s
    • L → the list of threads waiting for the semaphore s
  • Counting semaphore – for an arbitrary resource count. Supports two operations:
    • V → signal(): increments the semaphore count C
    • P → wait(): decrements the semaphore count C
  • Binary semaphore: C is either 0 or 1.

  38. The wait and signal operations
      P(s) (wait):
        if s.C > 0 then s.C−−;
        else join s.L;
      V(s) (signal):
        if s.L is empty then s.C++;
        else release a process from s.L;
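
  One way to realize P and V with POSIX primitives – a sketch, not the text's implementation. The count C is protected by a mutex, and a condition variable plays the role of the list L, so a waiter "joins L" by blocking on the condition variable:

      #include <pthread.h>

      typedef struct {
          int C;                    /* resource count           */
          pthread_mutex_t m;        /* protects C               */
          pthread_cond_t  L;        /* queue of waiting threads */
      } semaphore_t;
      /* initialize with C = initial count, PTHREAD_MUTEX_INITIALIZER,
         PTHREAD_COND_INITIALIZER */

      void P(semaphore_t *s) {      /* wait */
          pthread_mutex_lock(&s->m);
          while (s->C == 0)
              pthread_cond_wait(&s->L, &s->m);   /* join s.L */
          s->C--;
          pthread_mutex_unlock(&s->m);
      }

      void V(semaphore_t *s) {      /* signal */
          pthread_mutex_lock(&s->m);
          s->C++;
          pthread_cond_signal(&s->L);            /* release one waiter */
          pthread_mutex_unlock(&s->m);
      }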

  39. Monitors
  • Semaphores can be used incorrectly:
    • multiple threads may be allowed to enter the critical section guarded by the semaphore
    • they may cause deadlocks
    • threads may access the shared data directly, without checking the semaphore.
  • Solution → encapsulate the shared data with access methods to operate on them.
  • Monitor → an abstract data type that allows access to shared data only through specific methods that guarantee mutual exclusion.

  40. [figure slide]

  41. Asynchronous events and signals
  • Signals, or software interrupts, were originally introduced in Unix to notify a process about the occurrence of a particular event in the system.
  • Signals are analogous to hardware I/O interrupts:
    • When a signal arrives, control abruptly switches to the signal handler.
    • When the handler is finished and returns, control goes back to where it came from.
  • After receiving a signal, the receiver reacts to it in a well-defined manner. That is, a process can tell the system (OS) what it wants to do when a signal arrives:
    • Ignore it.
    • Catch it and handle it. In this case it must specify (register) the signal-handling procedure. This procedure resides in user space; the kernel calls it during signal handling, and control returns to the kernel after it is done.
    • Kill the process (the default for most signals).
  • Examples: event – child exit, signal – to parent; control signal from the keyboard.
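
  A minimal example of the second option, registering a user-space handler with the POSIX sigaction() call; the handler body and the flag variable are illustrative:

      #include <signal.h>
      #include <stdio.h>
      #include <unistd.h>

      static volatile sig_atomic_t got_signal = 0;

      static void handler(int signo) {
          got_signal = signo;        /* async-signal-safe: just set a flag */
      }

      int main(void) {
          struct sigaction sa;
          sa.sa_handler = handler;   /* register the handling procedure */
          sigemptyset(&sa.sa_mask);
          sa.sa_flags = 0;
          sigaction(SIGINT, &sa, NULL);

          pause();                   /* sleep until a signal arrives */
          printf("caught signal %d\n", (int)got_signal);
          return 0;
      }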

  42. Signal states and implementation
  • A signal has the following states:
    • Signal sent – a process can send a signal to one of its group member processes (parent, sibling, children, and further descendants).
    • Signal delivered – the signal bit is set.
    • Pending signal – delivered but not yet received (no action has been taken).
    • Signal lost – either ignored or overwritten.
  • Implementation: each process has a kernel-space structure (created by default) called the signal descriptor, with a bit for each signal. Setting a bit delivers the signal; resetting the bit indicates that the signal has been received. A signal can be blocked/ignored; this requires an additional bit for each signal. Most signals are system-controlled signals.

  43. Back to thread coordination with a bounded buffer
  • The bounded buffer is a shared resource, thus it must be protected; the critical section is implemented with a lock.
  • The lock must be released if the thread cannot continue.
  • Spin lock → a lock which involves busy waiting. To avoid it, the thread must relinquish control of the processor: it must YIELD.

  44.–45. [figure slides]

  46. Coordination with events and signals
  • We introduce two events:
    • p_room → an event which signals that there is room in the buffer
    • p_notempty → an event which signals that there is a new item in the buffer
  • We also introduce two new system calls:
    • WAIT(ev) → wait until the event ev occurs
    • NOTIFY(ev) → notify the other process that event ev has occurred.
  • SEND will wait if the buffer is full until it is notified that RECEIVE has created more room:
    • SEND → WAIT(p_room) and RECEIVE → NOTIFY(p_room)
  • RECEIVE will wait if there is no new item in the buffer until it is notified by SEND that a new item has been written:
    • RECEIVE → WAIT(p_notempty) and SEND → NOTIFY(p_notempty)
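
  A pseudo-C sketch of this pattern. WAIT() and NOTIFY() are the lecture's hypothetical system calls, not a real API, and the buffer fields follow the earlier locked-buffer sketch:

      void SEND(locked_buffer_t *p, long m) {
          acquire(&p->lk);
          while (p->in - p->out == N) {    /* buffer full              */
              release(&p->lk);             /* avoid the deadlock above */
              WAIT(p_room);                /* sleep until room appears */
              acquire(&p->lk);
          }
          p->msg[p->in % N] = m;
          p->in = p->in + 1;
          release(&p->lk);
          NOTIFY(p_notempty);              /* wake a waiting RECEIVE   */
      }

      long RECEIVE(locked_buffer_t *p) {
          acquire(&p->lk);
          while (p->in == p->out) {        /* buffer empty             */
              release(&p->lk);
              WAIT(p_notempty);
              acquire(&p->lk);
          }
          long m = p->msg[p->out % N];
          p->out = p->out + 1;
          release(&p->lk);
          NOTIFY(p_room);                  /* wake a waiting SEND      */
          return m;
      }

  Note the window between release() and WAIT(): a NOTIFY issued in that window arrives before the corresponding WAIT and is lost, which is exactly the race discussed on slide 48.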

  47. [figure slide]

  48. NOTIFY could be sent before the WAIT, and this causes problems
  • The NOTIFY should always be sent after the WAIT. If the sender and the receiver run on two different processors, there can be a race condition for the notempty event: a NOTIFY issued before the corresponding WAIT is lost, and the waiting thread sleeps forever.
  • There is a tension between modularity and locks.
  • Several possible solutions: AWAIT/ADVANCE, semaphores, etc.

  49. The AWAIT/ADVANCE solution
  • A new state, WAITING, and two before-or-after actions that take a RUNNING thread into the WAITING state and back to the RUNNABLE state.
  • eventcount → a variable with an integer value, shared between threads and the thread manager; eventcounts are like events but have a value.
  • A thread in the WAITING state waits for a particular value of the eventcount.
  • AWAIT(eventcount, value)
    • If eventcount > value → control is returned to the thread calling AWAIT and this thread continues execution.
    • If eventcount ≤ value → the state of the thread calling AWAIT is changed to WAITING and the thread is suspended.
  • ADVANCE(eventcount)
    • increments the eventcount by one, then
    • searches the thread_table for threads waiting on this eventcount;
    • if it finds such a thread and the eventcount now exceeds the value the thread is waiting for, the state of the thread is changed to RUNNABLE.
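
  A pseudo-C sketch of the bounded buffer with eventcounts. AWAIT and ADVANCE are the thread-manager primitives defined above (hypothetical here), and the in and out counters of the earlier buffer_t sketch double as the eventcounts, which is the appeal of the technique for the one-producer, one-consumer case:

      void SEND(buffer_t *p, long m) {
          AWAIT(p->out, p->in - N);     /* wait until out > in - N: room  */
          p->msg[p->in % N] = m;
          ADVANCE(p->in);               /* in++ and wake waiting receiver */
      }

      long RECEIVE(buffer_t *p) {
          AWAIT(p->in, p->out);         /* wait until in > out: not empty */
          long m = p->msg[p->out % N];
          ADVANCE(p->out);              /* out++ and wake waiting sender  */
          return m;
      }

  Because AWAIT tests the eventcount against the value inside the thread manager, an ADVANCE that races with an AWAIT cannot be lost the way a NOTIFY can.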

  50. [figure slide]
