
Concurrency & Processes

This presentation discusses the concepts of concurrency and processes in operating systems, including critical regions, race conditions, and synchronization. It also explores solutions such as Peterson's solution and the Test & Set instruction.


Presentation Transcript


  1. Concurrency & Processes
     CS502 Operating Systems
     Concurrency and Processes

  2. Thought experiment

     static int y = 0;

     int main(int argc, char **argv) {
         extern int y;
         y = y + 1;
         return y;
     }

  3. Thought experiment

     static int y = 0;

     int main(int argc, char **argv) {
         extern int y;
         y = y + 1;
         return y;
     }

     Upon completion of main, y == 1

  4. Thought experiment (continued)

     static int y = 0;

     Program 1:
     int main(int argc, char **argv) {
         extern int y;
         y = y + 1;
         return y;
     }

     Program 2:
     int main2(int argc, char **argv) {
         extern int y;
         y = y - 1;
         return y;
     }

     Assuming the programs run “in parallel,” what are the possible values of y after both terminate?
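The thought experiment above can be run for real. The sketch below is an illustration, not part of the slides: the names `incr`, `decr`, and `run_experiment` are invented here, and the two "programs" become two POSIX threads sharing one variable. Because `y = y + 1` compiles to a separate load, add, and store, the threads can interleave, and any final value between -ITERS and ITERS is possible.

```c
#include <pthread.h>
#include <stddef.h>

#define ITERS 100000

/* Shared variable, deliberately left unprotected to expose the race.
   (Strictly speaking this data race is undefined behavior in C11;
   the point is only to demonstrate the interleaving problem.) */
static volatile int y = 0;

static void *incr(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        y = y + 1;   /* load, add, store: not atomic */
    return NULL;
}

static void *decr(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++)
        y = y - 1;
    return NULL;
}

/* Runs both "programs" in parallel and returns the final value of y. */
int run_experiment(void) {
    pthread_t t1, t2;
    y = 0;
    pthread_create(&t1, NULL, incr, NULL);
    pthread_create(&t2, NULL, decr, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return y;   /* any value in [-ITERS, ITERS] is possible */
}
```

Running this repeatedly typically produces a different nonzero answer each time, which is exactly the "random outcome" the next slides define as a race condition.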

  5. Fundamental Abstraction
     • On (nearly) all computers, read and write operations on machine words can be considered atomic
       • Non-interruptible
       • Appears to take zero time
       • It either happens or it doesn’t
       • Not in conflict with any other operation
     • No other guarantees
       • (unless we take extraordinary measures)

  6. Definitions
     • Race condition: two or more concurrent activities operate on the same variable, and the result depends on the order in which they happen to execute
       • Random outcome
     • Critical region (aka critical section): one or more fragments of code that operate on the same data, such that at most one activity at a time may be permitted to execute anywhere in that set of fragments

  7. Synchronization – Critical Regions

  8. Class Discussion
     • How do we keep multiple computations from being in a critical region at the same time?
     • Especially when the number of computations is > 2
     • Remembering that read and write operations are atomic

  9. Possible ways to protect a critical section
     • Without OS assistance
       • Locking variables & busy waiting
       • Peterson’s solution (pp. 195–197)
       • Atomic read-modify-write, e.g. Test & Set
     • With OS assistance on a single processor
       • (See later)
     • What about multiple processors?
       • (Later in the term)

  10. Requirements – Controlling Access to a Critical Section
      • Symmetrical among n computations
      • No assumptions about relative speeds
      • A stoppage outside the critical section does not lead to potential blocking of others
      • No starvation, i.e. no combination of timings that could cause a computation to wait forever to enter its critical section

  11. Non-solution

      static int turn = 0;

      Computation 1:
      while (TRUE) {
          while (turn != 0) /* loop */;
          critical_region();
          turn = 1;
          noncritical_region1();
      }

      Computation 2:
      while (TRUE) {
          while (turn != 1) /* loop */;
          critical_region();
          turn = 0;
          noncritical_region2();
      }

  12. Non-solution

      static int turn = 0;

      Computation 1:
      while (TRUE) {
          while (turn != 0) /* loop */;
          critical_region();
          turn = 1;
          noncritical_region1();
      }

      Computation 2:
      while (TRUE) {
          while (turn != 1) /* loop */;
          critical_region();
          turn = 0;
          noncritical_region2();
      }

      What is wrong with this approach?

  13. Peterson’s solution (2 computations)

      static int turn = 0;
      static int interested[2];

      void enter_region(int process) {
          int other = 1 - process;
          interested[process] = TRUE;
          turn = process;
          while (turn == process && interested[other] == TRUE) /* loop */;
      }

      void leave_region(int process) {
          interested[process] = FALSE;
      }
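The slide's version assumes the hardware executes loads and stores in program order. Modern compilers and CPUs reorder plain memory accesses, so a faithful runnable sketch needs sequentially consistent atomics. The version below is an illustration under that assumption (the `worker`, `run_peterson`, and `counter` names are invented here); with C11 `_Atomic` operations, Peterson's algorithm really does provide mutual exclusion for the shared counter.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

#define P_ITERS 100000

static atomic_int turn = 0;
static atomic_int interested[2] = {0, 0};
static int counter = 0;   /* the shared data the critical region protects */

static void enter_region(int process) {
    int other = 1 - process;
    atomic_store(&interested[process], 1);
    atomic_store(&turn, process);
    /* Wait while the other is interested and it is still "my" turn to yield. */
    while (atomic_load(&turn) == process && atomic_load(&interested[other]))
        ;  /* busy wait */
}

static void leave_region(int process) {
    atomic_store(&interested[process], 0);
}

static void *worker(void *arg) {
    int id = *(int *)arg;
    for (int i = 0; i < P_ITERS; i++) {
        enter_region(id);
        counter++;          /* critical region: now a safe read-modify-write */
        leave_region(id);
    }
    return NULL;
}

/* Runs both computations and returns the protected counter. */
int run_peterson(void) {
    pthread_t t[2];
    int ids[2] = {0, 1};
    counter = 0;
    for (int i = 0; i < 2; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < 2; i++)
        pthread_join(t[i], NULL);
    return counter;   /* exactly 2 * P_ITERS when mutual exclusion holds */
}
```

Compare this with the race sketch earlier: the only difference is the enter/leave protocol around the increment, and the answer becomes deterministic.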

  14. Homework Assignment
      Can Peterson’s solution be generalized to more than 2 concurrent computations?
      • A: Read Silberschatz, §§ 6.1–6.5
      • B: Read Dijkstra, “Solution of a Problem in Concurrent Programming Control” (pdf)
      • C: Write 1–2 paragraphs of thoughts; submit via turnin next week
        • http://web.cs.wpi.edu/Help/turnin.html
        • Class = “cs502”, project = “homework1”
      • D: Class discussion next week

  15. Another approach: Test & Set instruction (built into CPU hardware)

      static int lock = 0;

      extern int TestAndSet(int &i);
      /* Sets the value of i to 1 and returns the previous value of i. */

      void enter_region(int &lock) {
          while (TestAndSet(lock) == 1) /* loop */;
      }

      void leave_region(int &lock) {
          lock = 0;
      }

  16. Test & Set instruction (built into CPU hardware)

      static int lock = 0;

      extern int TestAndSet(int &i);
      /* Sets the value of i to 1 and returns the previous value of i. */

      void enter_region(int &lock) {
          while (TestAndSet(lock) == 1) /* loop */;
      }

      void leave_region(int &lock) {
          lock = 0;
      }

      What about this solution?
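C11 exposes the hardware Test & Set instruction portably as `atomic_flag_test_and_set`, so the slide's spinlock can be written and run without inventing an `extern TestAndSet`. This is a sketch (the `worker` and `run_test_and_set` names are invented here), not the course's reference implementation:

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

#define T_ITERS 100000

/* atomic_flag_test_and_set sets the flag to 1 and returns its previous
   value, as one indivisible operation: exactly the slide's TestAndSet. */
static atomic_flag lock_flag = ATOMIC_FLAG_INIT;
static int counter = 0;

static void enter_region(void) {
    while (atomic_flag_test_and_set(&lock_flag))
        ;  /* previous value was 1: someone holds the lock; spin */
}

static void leave_region(void) {
    atomic_flag_clear(&lock_flag);   /* lock = 0 */
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < T_ITERS; i++) {
        enter_region();
        counter++;       /* critical region */
        leave_region();
    }
    return NULL;
}

/* Two computations contend for the lock; returns the protected counter. */
int run_test_and_set(void) {
    pthread_t t1, t2;
    counter = 0;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return counter;   /* exactly 2 * T_ITERS when the lock works */
}
```

Note that, unlike Peterson's solution, this needs no per-process identity and works for any number of contenders, at the cost of busy waiting.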

  17. Variations
      • Compare & Swap (a, old, new), performed atomically:
        • if (a == old) { a = new; return TRUE; }
        • else return FALSE;
      • …
      • A whole mathematical theory exists about the efficacy of these operations
      • All require extraordinary circuitry in the processor, memory, and bus to implement atomically
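Compare & Swap is available in C11 as `atomic_compare_exchange_weak`, which makes it easy to see the characteristic retry loop. The sketch below (the `cas_increment`, `worker`, and `run_cas` names are invented here) builds a lock-free increment from it: read the old value, then atomically install `old + 1` only if nobody changed the variable in between.

```c
#include <stdatomic.h>
#include <pthread.h>
#include <stddef.h>

#define C_ITERS 100000

static atomic_int value = 0;

/* Lock-free increment built from compare-and-swap. On failure,
   atomic_compare_exchange_weak reloads `old` with the current value,
   so the loop simply retries with fresh data. */
static void cas_increment(atomic_int *v) {
    int old = atomic_load(v);
    while (!atomic_compare_exchange_weak(v, &old, old + 1))
        ;  /* another thread won the race; try again */
}

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < C_ITERS; i++)
        cas_increment(&value);
    return NULL;
}

/* Two threads increment concurrently; returns the final value. */
int run_cas(void) {
    pthread_t t1, t2;
    atomic_store(&value, 0);
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return atomic_load(&value);
}
```

Unlike the Test & Set spinlock, no thread ever holds a lock here: a stalled thread cannot block the others, which is one reason this family of operations attracted the theory the slide mentions.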

  18. Different Approach: Use the OS to help
      • Implement an abstraction:
        • A data type called semaphore
          • Non-negative integer values
        • An operation wait_s(semaphore &s) such that
          • if s > 0, atomically decrement s and proceed
          • if s == 0, block the computation until some other computation executes post_s(s)
        • An operation post_s(semaphore &s):
          • Atomically increment s; if one or more computations are blocked on s, allow precisely one of them to unblock and proceed
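The semantics above can be observed directly through POSIX semaphores, whose `sem_wait`/`sem_post` correspond to the slide's `wait_s`/`post_s`. The demo below (the `demo_semaphore` name is invented here; it assumes a platform such as Linux that supports unnamed `sem_init`) uses `sem_trywait` so that the "would block" case fails visibly instead of hanging:

```c
#include <semaphore.h>
#include <assert.h>
#include <errno.h>

/* Walk a semaphore through its defined behavior; returns 0 on success. */
int demo_semaphore(void) {
    sem_t s;
    sem_init(&s, 0, 2);            /* non-negative initial count: 2 */

    assert(sem_trywait(&s) == 0);  /* s > 0: decrement, 2 -> 1 */
    assert(sem_trywait(&s) == 0);  /* s > 0: decrement, 1 -> 0 */

    /* s == 0: wait_s would block; trywait reports it instead. */
    assert(sem_trywait(&s) == -1 && errno == EAGAIN);

    sem_post(&s);                  /* increment, 0 -> 1; would wake one waiter */
    assert(sem_trywait(&s) == 0);  /* 1 -> 0 again */

    sem_destroy(&s);
    return 0;
}
```

The key difference from the busy-waiting solutions: when s == 0, a blocked computation consumes no CPU at all, because the OS parks it on a queue until a `post_s` arrives.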

  19. Critical Section control with Semaphore

      static semaphore mutex = 1;

      Computation 1:
      while (TRUE) {
          wait_s(mutex);
          critical_region();
          post_s(mutex);
          noncritical_region1();
      }

      Computation 2:
      while (TRUE) {
          wait_s(mutex);
          critical_region();
          post_s(mutex);
          noncritical_region2();
      }

  20. Critical Section control with Semaphore

      static semaphore mutex = 1;

      Computation 1:
      while (TRUE) {
          wait_s(mutex);
          critical_region();
          post_s(mutex);
          noncritical_region1();
      }

      Computation 2:
      while (TRUE) {
          wait_s(mutex);
          critical_region();
          post_s(mutex);
          noncritical_region2();
      }

      Does this meet the requirements for controlling access to critical sections?

  21. Semaphores – History
      • Introduced by E. Dijkstra in 1965
      • wait_s() was called P()
        • Initial letter of a Dutch word meaning “test”
      • post_s() was called V()
        • Initial letter of a Dutch word meaning “increase”

  22. Abstractions
      • The semaphore is an example of a new and powerful abstraction defined by the OS
        • I.e., a data type and some operations that add a capability that was not in the underlying hardware or system
      • Once available, any program can use this abstraction to control critical sections and to create more powerful forms of synchronization among computations

  23. Questions?

  24. Processes

  25. Why worry about all this?
      • Since the beginning of computing, management of concurrent activity has been the central issue
        • Concurrency between computation and input or output
        • Concurrency between computation and user
        • Concurrency between essentially independent computations that must take place at the same time

  26. Background – Interrupts
      • A mechanism in (nearly) all computers by which a running program can be suspended in order to make the processor do something else
      • Two kinds:
        • Traps – synchronous, caused by the running program
          • Deliberate: e.g., system call
          • Error: divide by zero
        • Interrupts – asynchronous, spawned by some other concurrent activity or device
      • Essential to the usefulness of computing systems

  27. Hardware Interrupt Mechanism
      • Upon receipt of an electronic signal, the processor
        • Saves the current PSW to a fixed location
        • Loads a new PSW from another fixed location
      • PSW – Program Status Word
        • Program counter
        • Condition code bits (comparison results)
        • Interrupt enable/disable bits
        • Other control and mode information
          • E.g., privilege level, access to special instructions, etc.
      • Occurs between machine instructions
      • An abstraction in modern processors (see Silberschatz, §1.2.1 and §13.2.2)

  28. Interrupt Handler
      /* Enter with interrupts disabled */
      • Save registers of interrupted computation
      • Load registers needed by handler
      • Examine cause of interrupt
      • Take appropriate action (brief)
      • Reload registers of interrupted computation
      • Reload interrupted PSW and re-enable interrupts
      or
      • Load registers of another computation
      • Load its PSW and re-enable interrupts

  29. Requirements of interrupt handlers
      • Fast
      • Avoid possibilities of interminable waits
      • Must not count on correctness of the interrupted computation
      • Must not get confused by multiple interrupts in close succession
      • …
      • More challenging on multiprocessor systems

  30. “process” (with a small “p”)
      • Definition: a particular execution of a program
        • Requires time, space, and (perhaps) other resources
      • Can be
        • Interrupted
        • Suspended
        • Blocked
        • Unblocked
        • Started or continued
      • Fundamental abstraction of all modern operating systems
      • Also known as a thread (of control), task, job, etc.
      • Note: a Unix/Windows “Process” is a heavyweight concept with more implications than this simple definition

  31. State information of a process
      • PSW (program status word)
        • Program counter
        • Condition codes
      • Registers, stack pointer, etc.
      • Whatever hardware resources are needed to compute
      • Control information for the OS
        • Owner, privilege level, priority, restrictions, etc.
      • Other stuff …

  32. Process Control Block (PCB) (example)

  33. Switching from process to process

  34. Process States

  35. Process and semaphore data structures

      class State {
          long int PSW;
          long int regs[R];
          /* other stuff */
      };

      class PCB {
          PCB *next, *prev, *queue;
          State s;
          PCB(…);          /* constructor */
          ~PCB();          /* destructor */
      };

      class Semaphore {
          int count;
          PCB *queue;
          friend void wait_s(Semaphore &s);
          friend void post_s(Semaphore &s);
          Semaphore(int initial);  /* constructor */
          ~Semaphore();            /* destructor */
      };

  36. Implementation
      [Diagram] Ready queue: PCB → PCB → PCB → PCB
      Semaphore A (count = 0): PCB → PCB
      Semaphore B (count = 2)

  37. Implementation
      [Diagram] Ready queue: PCB → PCB → PCB → PCB
      • Action – dispatch a process to the CPU
        • Remove the first PCB from the ready queue
        • Load registers and PSW
        • Return from interrupt or trap
      • Action – interrupt a process
        • Save PSW and registers in the PCB
        • If not blocked, insert the PCB into the ReadyQueue (in some order)
        • Take appropriate action
        • Dispatch the same or another process from the ReadyQueue

  38. Implementation – Semaphore actions
      • Action – wait_s(Semaphore &s)  (event wait)
      • Implemented as a trap (with interrupts disabled):
        if (s.count == 0)
          • Save registers and PSW in the PCB
          • Queue the PCB on s.queue
          • Dispatch the next process on the ReadyQueue
        else
          • s.count = s.count – 1
          • Re-dispatch the current process

  39. Implementation – Semaphore actions
      • Action – post_s(Semaphore &s)  (event completion)
      • Implemented as a trap (with interrupts disabled):
        if (s.queue != null)
          • Save the current process in the ReadyQueue
          • Move the first process on s.queue to the ReadyQueue
          • Dispatch some process on the ReadyQueue
        else
          • s.count = s.count + 1
          • Re-dispatch the current process
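The kernel structure on these slides has a close user-level analogue: a counting semaphore built from a mutex and a condition variable, where the condition variable's internal wait list plays the role of s.queue. This is a sketch for intuition, not the kernel code (the `semaphore` struct and `sem_init_` name are invented here, and a real kernel version would manipulate PCBs with interrupts disabled instead):

```c
#include <pthread.h>

typedef struct {
    int count;               /* the semaphore's non-negative value */
    pthread_mutex_t m;       /* stands in for "interrupts disabled" */
    pthread_cond_t c;        /* its wait list plays the role of s.queue */
} semaphore;

void sem_init_(semaphore *s, int initial) {
    s->count = initial;
    pthread_mutex_init(&s->m, NULL);
    pthread_cond_init(&s->c, NULL);
}

void wait_s(semaphore *s) {
    pthread_mutex_lock(&s->m);
    while (s->count == 0)            /* count == 0: block, like queuing the PCB */
        pthread_cond_wait(&s->c, &s->m);
    s->count--;                      /* count > 0: decrement and proceed */
    pthread_mutex_unlock(&s->m);
}

void post_s(semaphore *s) {
    pthread_mutex_lock(&s->m);
    s->count++;
    pthread_cond_signal(&s->c);      /* unblock precisely one waiter, if any */
    pthread_mutex_unlock(&s->m);
}
```

The `while` (rather than `if`) around `pthread_cond_wait` matters: a woken thread must re-check the count, because another thread may have taken the post in between, just as a re-dispatched process must re-enter the trap logic.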

  40. Interrupt Handling
      • (Quickly) analyze the reason for the interrupt
      • Execute the equivalent of post_s on the appropriate semaphore as necessary
      • Implemented in device-specific routines
      • Critical sections
        • Buffers and variables shared with device managers
      • More about interrupt handling later in the course

  41. Timer interrupts
      • Can be used to enforce “fair sharing”
        • Current process goes back to the ReadyQueue
        • After other processes of equal or higher priority
      • Simulates concurrent execution of multiple processes on the same processor

  42. Definition – Context Switch
      • The act of switching from one process to another
        • E.g., upon interrupt or some kind of wait for an event
      • Not a big deal in simple systems and processors
      • A very big deal in large systems such as
        • Linux and Windows on a Pentium 4
        • Many microseconds!

  43. Complications for Multiple Processors
      • Disabling interrupts is not sufficient for atomic operations
        • Semaphore operations must themselves be implemented in critical sections
        • Queuing and dequeuing PCBs must also be implemented in critical sections
      • Other control operations need protection
      • These problems all have solutions but need deeper thought!

  44. Summary
      • Interrupts are transparent to processes
      • wait_s() and post_s() behave as if they are atomic
      • Device handlers and all other OS services can be embedded in processes
      • All processes behave as if they are executing concurrently
      • Fundamental underpinning of all modern operating systems

  45. Summary
      • Homework: reading, and extending Peterson’s solution to n > 2

  46. Break (next topic)
