
Concurrency, Memory Management & Uniprocessor Scheduling Stallings OS 6th edition Chapters 5, 7 & 9 Kevin Curran


Presentation Transcript


  1. Concurrency, Memory Management & Uniprocessor Scheduling Stallings OS 6th edition Chapters 5, 7 & 9 Kevin Curran
  2. Part 1: Concurrency: Mutual Exclusion and Synchronization
  3. Multiple Processes Operating System design is concerned with the management of processes and threads: Multiprogramming Multiprocessing Distributed Processing
  4. Concurrency Arises in Three Different Contexts:
  5. Concurrency Table 5.1 Some Key Terms Related to Concurrency
  6. Principles of Concurrency Interleaving and overlapping can be viewed as examples of concurrent processing, and both present the same problems. On a uniprocessor the relative speed of execution of processes cannot be predicted; it depends on the activities of other processes, the way the OS handles interrupts, and the scheduling policies of the OS
  7. Difficulties of Concurrency Sharing of global resources Difficult for the OS to manage the allocation of resources optimally Difficult to locate programming errors as results are not deterministic and reproducible
  8. Race Condition Occurs when multiple processes or threads read and write shared data items. The final result depends on the order of execution; the “loser” of the race is the process that updates last and thereby determines the final value of the variable
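A deterministic replay of one such race can make the “loser” concrete. This is a sketch with two hypothetical processes, P1 and P2, each performing a non-atomic read-modify-write on a shared counter; the interleaving shown is one a scheduler could legally produce.

```python
# Two "processes" each perform a non-atomic read-modify-write on a
# shared counter. The interleaving below loses P1's update.
shared = 0

def read():
    return shared

def write(value):
    global shared
    shared = value

p1_local = read()    # P1 reads 0
p2_local = read()    # P2 also reads 0 -- P1 has not written back yet
write(p1_local + 1)  # P1 writes 1
write(p2_local + 1)  # P2, the "loser", writes 1 last and overwrites P1

print(shared)        # 1, not the 2 that two increments should produce
```

The final value is 1 rather than 2 precisely because P2 updates last with a stale copy of the variable.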
  9. Operating System Concerns Design and management issues raised by the existence of concurrency: The OS must: be able to keep track of various processes allocate and de-allocate resources for each active process protect the data and physical resources of each process against interference by other processes ensure that the processes and outputs are independent of the processing speed
  10. Process Interaction
  11. Resource Competition Concurrent processes come into conflict when they are competing for use of the same resource for example: I/O devices, memory, processor time, clock
  12. Mutual Exclusion Illustration of Mutual Exclusion
  13. Requirements for Mutual Exclusion Mutual exclusion must be enforced. A process that halts must do so without interfering with other processes. No deadlock or starvation. A process must not be denied access to a critical section when there is no other process using it. No assumptions are made about relative process speeds or the number of processes. A process remains inside its critical section for a finite time only
  14. Mutual Exclusion: Hardware Support Interrupt Disabling uniprocessor system disabling interrupts guarantees mutual exclusion Disadvantages: the efficiency of execution could be noticeably degraded this approach will not work in a multiprocessor architecture
  15. Mutual Exclusion: Hardware Support Special Machine Instructions Compare&Swap Instruction also called a “compare and exchange instruction” a compare is made between a memory value and a test value if the values are the same a swap occurs carried out atomically
  16. Compare and Swap Instruction Hardware Support for Mutual Exclusion
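A spin lock built on compare&swap can be sketched as follows. Python exposes no atomic CAS instruction, so a small internal lock stands in for the hardware atomicity guarantee; the class names `SimulatedCAS` and `SpinLock` are illustrative, not a real API.

```python
import threading

class SimulatedCAS:
    """One memory word with an atomic compare&swap; the internal lock
    models the atomicity the hardware instruction provides."""
    def __init__(self, value=0):
        self._value = value
        self._guard = threading.Lock()

    def compare_and_swap(self, test, new):
        # Compare the word with `test`; if equal, store `new`.
        # Either way, return the old value so the caller can tell.
        with self._guard:
            old = self._value
            if old == test:
                self._value = new
            return old

class SpinLock:
    def __init__(self):
        self._word = SimulatedCAS(0)          # 0 = free, 1 = held

    def acquire(self):
        while self._word.compare_and_swap(0, 1) != 0:
            pass                              # busy-wait (see slide 19)

    def release(self):
        self._word.compare_and_swap(1, 0)

counter, lock = 0, SpinLock()

def worker():
    global counter
    for _ in range(200):
        lock.acquire()
        counter += 1                          # the critical section
        lock.release()

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)   # 400: no updates lost
```

The busy-wait in `acquire` is exactly the disadvantage slide 19 names: a waiting process keeps consuming processor time until the CAS succeeds.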
  17. Exchange Instruction Hardware Support for Mutual Exclusion
  18. Special Machine Instruction: Advantages Applicable to any number of processes on either a single processor or multiple processors sharing main memory. Simple and easy to verify. It can be used to support multiple critical sections; each critical section can be defined by its own variable
  19. Special Machine Instruction: Disadvantages Busy-waiting is employed: while a process is waiting for access to a critical section it continues to consume processor time. Starvation is possible when a process leaves a critical section and more than one process is waiting. Deadlock is possible
  20. Common Concurrency Mechanisms
  21. Semaphore May be initialized to a nonnegative integer value The semWait operation decrements the value The semSignal operation increments the value
  22. Consequences
  23. Semaphore Primitives
  24. Binary Semaphore Primitives
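A counting-semaphore sketch in the shape of these primitives, built on `threading.Condition`. A negative count means that many processes are blocked; whether this behaves as a strong or weak semaphore depends on the wakeup order of the condition variable, which is not guaranteed.

```python
import threading, time

class Semaphore:
    """semWait/semSignal in the style of the slide's primitives."""
    def __init__(self, value=0):
        self._count = value                # nonnegative initial value
        self._cond = threading.Condition()

    def semWait(self):
        with self._cond:
            self._count -= 1
            if self._count < 0:
                self._cond.wait()          # block until a semSignal

    def semSignal(self):
        with self._cond:
            self._count += 1
            if self._count <= 0:
                self._cond.notify()        # unblock one waiting process

sem, done = Semaphore(0), []

def waiter():
    sem.semWait()                          # blocks: count starts at 0
    done.append("woken")

t = threading.Thread(target=waiter)
t.start()
time.sleep(0.1)
print(done)        # [] -- the waiter is still blocked
sem.semSignal()
t.join()
print(done)        # ['woken']
```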
  25. Strong/Weak Semaphores A queue is used to hold processes waiting on the semaphore
  26. Example of Semaphore Mechanism
  27. Mutual Exclusion
  28. Shared Data Protected by a Semaphore
  29. Producer/Consumer Problem
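The bounded-buffer form of the problem can be sketched with the standard library's `threading.Semaphore`, whose `acquire`/`release` correspond to semWait/semSignal. The buffer capacity and item count are arbitrary choices for the demonstration.

```python
import threading
from collections import deque

N = 4                                 # buffer capacity (arbitrary)
buffer, consumed = deque(), []
mutex = threading.Semaphore(1)        # protects the buffer itself
empty = threading.Semaphore(N)        # counts free slots
full = threading.Semaphore(0)         # counts filled slots

def producer(items):
    for item in items:
        empty.acquire()               # semWait(empty): claim a free slot
        with mutex:
            buffer.append(item)
        full.release()                # semSignal(full): item available

def consumer(count):
    for _ in range(count):
        full.acquire()                # semWait(full): wait for an item
        with mutex:
            consumed.append(buffer.popleft())
        empty.release()               # semSignal(empty): slot freed

p = threading.Thread(target=producer, args=(list(range(10)),))
c = threading.Thread(target=consumer, args=(10,))
p.start(); c.start(); p.join(); c.join()
print(consumed)    # [0, 1, 2, ..., 9]: every item, in order
```

With a single producer and a single consumer over a FIFO buffer, the items come out in exactly the order they went in.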
  30. Implementation of Semaphores Imperative that the semWait and semSignal operations be implemented as atomic primitives Can be implemented in hardware or firmware Software schemes such as Dekker’s or Peterson’s algorithms can be used Use one of the hardware-supported schemes for mutual exclusion
  31. Monitors Programming language construct that provides equivalent functionality to that of semaphores and is easier to control Implemented in a number of programming languages including Concurrent Pascal, Pascal-Plus, Modula-2, Modula-3, and Java Has also been implemented as a program library Software module consisting of one or more procedures, an initialization sequence, and local data
  32. Monitor Characteristics
  33. Synchronization Achieved by the use of condition variables that are contained within the monitor and accessible only within the monitor Condition variables are operated on by two functions: cwait(c): suspend execution of the calling process on condition c csignal(c): resume execution of some process blocked after a cwait on the same condition
  34. Structure of a Monitor Structure of a Monitor
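The same bounded buffer recast as a monitor sketch: one entry lock ensures only one process is active in the monitor at a time, and two `threading.Condition` objects play the role of cwait/csignal. The class name is illustrative.

```python
import threading

class BoundedBufferMonitor:
    def __init__(self, capacity):
        self._buf, self._capacity = [], capacity
        self._lock = threading.Lock()              # monitor entry lock
        self._notfull = threading.Condition(self._lock)
        self._notempty = threading.Condition(self._lock)

    def append(self, item):
        with self._lock:
            while len(self._buf) == self._capacity:
                self._notfull.wait()               # cwait(notfull)
            self._buf.append(item)
            self._notempty.notify()                # csignal(notempty)

    def take(self):
        with self._lock:
            while not self._buf:
                self._notempty.wait()              # cwait(notempty)
            item = self._buf.pop(0)
            self._notfull.notify()                 # csignal(notfull)
            return item

box, out = BoundedBufferMonitor(2), []
p = threading.Thread(target=lambda: [box.append(i) for i in range(6)])
c = threading.Thread(target=lambda: [out.append(box.take()) for _ in range(6)])
p.start(); c.start(); p.join(); c.join()
print(out)    # [0, 1, 2, 3, 4, 5]
```

Note the `while` re-check around each wait: `threading.Condition` gives Mesa-style signaling, where a signaled process must re-test its condition, whereas a Hoare-style monitor as on the slide transfers control to the waiter immediately on csignal.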
  35. Reference Monitor Summary In operating systems architecture, a reference monitor is a tamperproof, always-invoked, and small-enough-to-be-fully-tested-and-analyzed module that controls all software access to data objects or devices (verifiable). The reference monitor verifies the nature of the request against a table of allowable access types for each process on the system. For example, Windows 3.x and 9x operating systems were not built with a reference monitor, whereas the Windows NT line, which also includes Windows 2000 and Windows XP, was designed with an entirely different architecture and does contain a reference monitor
  36. When processes interact with one another, two fundamental requirements must be satisfied: synchronization and communication. Message passing is one approach to providing both of these functions; it works with distributed systems as well as shared-memory multiprocessor and uniprocessor systems. Message Passing
  37. The actual function is normally provided in the form of a pair of primitives: send (destination, message) receive (source, message) A process sends information in the form of a message to another process designated by a destination A process receives information by executing the receive primitive, indicating the source and the message Message Passing
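A sketch of the two primitives over per-process mailboxes built from `queue.Queue`, a thread-safe FIFO. The process names `P1`/`P2` and the simplified `receive(owner)` signature (reading the caller's own mailbox rather than naming a source) are illustrative assumptions.

```python
import queue
import threading

mailboxes = {"P1": queue.Queue(), "P2": queue.Queue()}

def send(destination, message):
    mailboxes[destination].put(message)    # nonblocking send

def receive(owner):
    return mailboxes[owner].get()          # blocking receive

def p2_body():
    msg = receive("P2")                    # blocks until P1's message
    send("P1", msg.upper())                # reply to P1

t = threading.Thread(target=p2_body)
t.start()
send("P2", "ping")
reply = receive("P1")                      # blocks until P2 replies
t.join()
print(reply)   # PING
```

This pairing (nonblocking send, blocking receive) is one of the synchronization combinations the following slides enumerate.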
  38. Message Passing Design Characteristics of Message Systems for Interprocess Communication and Synchronization
  39. Synchronization
  40. Both sender and receiver are blocked until the message is delivered Sometimes referred to as a rendezvous Allows for tight synchronization between processes Blocking Send, Blocking Receive
  41. Nonblocking Send
  42. Deadlock Another problem that can cause congestion in a network is that of deadlock. Imagine two nodes, P and Q, are transmitting packets to one another. Now imagine that both their buffers are full. Both are unable to receive packets so no packets move. This is a simple example of deadlock. There may be many nodes involved (typically in cycles) where each is unable to transmit packets to the next. We can overcome this by having separate buffers for different priority packets.
  43. Deadlock The more links a packet travels over, the greater its priority. Deadlocks do not occur because it is unlikely that packets will overflow all the buffers. If a packet gets a priority greater than n-1 then the packet is discarded (it is assumed to have travelled in a loop). This also ensures that a deadlock cannot occur. Another (less certain method) is to allow deadlocks to occur and regularly discard very old packets to try to clear them.
  44. Part II: Memory Management
  45. Memory Management Terms
  46. Memory management is intended to satisfy the following requirements: Relocation Protection Sharing Logical organization Physical organization Memory Management Requirements
  47. Programmers typically do not know in advance which other programs will be resident in main memory at the time of execution of their program. Active processes need to be able to be swapped in and out of main memory in order to maximize processor utilization. Specifying that a process must be placed in the same memory region when it is swapped back in would be limiting; the OS may need to relocate the process to a different area of memory. Relocation
  48. Addressing Requirements
  49. Processes need to acquire permission to reference memory locations for reading or writing purposes Location of a program in main memory is unpredictable Memory references generated by a process must be checked at run time Mechanisms that support relocation also support protection Protection
  50. Advantageous to allow each process access to the same copy of the program rather than have their own separate copy Memory management must allow controlled access to shared areas of memory without compromising protection Mechanisms used to support relocation support sharing capabilities Sharing
  51. Memory is organized as linear Segmentation is the tool that most readily satisfies requirements Logical Organization
  52. Physical Organization
  53. Memory management brings processes into main memory for execution by the processor involves virtual memory based on segmentation and paging Partitioning used in several variations in some now-obsolete operating systems does not involve virtual memory Memory Partitioning
  54. Memory Management Techniques
  55. Equal-size partitions any process whose size is less than or equal to the partition size can be loaded into an available partition The operating system can swap out a process if all partitions are full and no process is in the Ready or Running state Fixed Partitioning
  56. A program may be too big to fit in a partition; the program then needs to be designed with the use of overlays. Main memory utilization is inefficient: any program, regardless of size, occupies an entire partition, leading to internal fragmentation (wasted space due to the block of data loaded being smaller than the partition). Disadvantages
  57. Using unequal size partitions helps lessen the problems programs up to 16M can be accommodated without overlays partitions smaller than 8M allow smaller programs to be accommodated with less internal fragmentation Unequal Size Partitions
  58. Memory Assignment Fixed Partitioning
  59. The number of partitions specified at system generation time limits the number of active processes in the system Small jobs will not utilize partition space efficiently Disadvantages
  60. Partitions are of variable length and number Process is allocated exactly as much memory as it requires This technique was used by IBM’s mainframe operating system, OS/MVT Dynamic Partitioning
  61. Effect of Dynamic Partitioning
  62. Dynamic Partitioning
  63. Placement Algorithms
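The usual placement candidates for dynamic partitioning are first-fit, best-fit, and next-fit. The first two can be sketched over a free list of hypothetical (base, size) blocks:

```python
def first_fit(free, size):
    """Scan from the start; take the first hole big enough."""
    for i, (base, length) in enumerate(free):
        if length >= size:
            free[i] = (base + size, length - size)
            if free[i][1] == 0:
                del free[i]            # hole consumed exactly
            return base
    return None                        # no hole large enough

def best_fit(free, size):
    """Take the smallest hole that still fits (tightest fit)."""
    fits = [i for i, (_, length) in enumerate(free) if length >= size]
    if not fits:
        return None
    i = min(fits, key=lambda i: free[i][1])
    base, length = free[i]
    free[i] = (base + size, length - size)
    if free[i][1] == 0:
        del free[i]
    return base

holes = [(0, 8), (20, 12), (40, 6)]    # free blocks as (base, size)
a = first_fit(list(holes), 6)          # first hole that fits: base 0
b = best_fit(list(holes), 6)           # exact fit at base 40
print(a, b)   # 0 40
```

Best-fit minimizes the leftover fragment for each request but tends to litter memory with tiny unusable holes; first-fit is simpler and often performs as well or better in practice.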
  64. Memory Configuration Example
  65. Combines aspects of the fixed and dynamic partitioning schemes. Space available for allocation is treated as a single block. Memory blocks are available of size 2^K words, L ≤ K ≤ U, where 2^L = smallest size block that is allocated and 2^U = largest size block that is allocated; generally 2^U is the size of the entire memory available for allocation. Buddy System
  66. Buddy System Example
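A minimal allocation-side sketch of a buddy system (freeing and coalescing buddies is omitted). U = 10, i.e. a 1024-word memory, is an arbitrary choice for the example.

```python
# Buddy allocator: memory of 2**U words; a request is rounded up to
# 2**k words, and larger free blocks are split into buddy halves
# until a block of the right size exists.
U = 10                                  # 2**10 = 1024-word memory
free_lists = {k: [] for k in range(U + 1)}
free_lists[U] = [0]                     # one free block: all of memory

def alloc(size):
    k = max(size - 1, 1).bit_length()   # smallest k with 2**k >= size
    j = k
    while j <= U and not free_lists[j]: # find a big-enough free block
        j += 1
    if j > U:
        return None                     # no block large enough
    base = free_lists[j].pop()
    while j > k:                        # split down, freeing buddies
        j -= 1
        free_lists[j].append(base + 2**j)
    return base

a = alloc(100)   # rounded up to 128; splits the 1024 block repeatedly
b = alloc(240)   # rounded up to 256; an existing buddy satisfies it
print(a, b)      # 0 256
```

After the first call the free lists hold buddies of 512, 256, and 128 words, so the second request is served without touching the original block.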
  67. Addresses
  68. Relocation
  69. Partition memory into equal fixed-size chunks that are relatively small Process is also divided into small fixed-size chunks of the same size Paging
  70. Assignment of Process to Free Frames
  71. Maintained by operating system for each process Contains the frame location for each page in the process Processor must know how to access for the current process Used by processor to produce a physical address Page Table
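Address translation through a page table is pure arithmetic, as the sketch below shows; the 1 KiB page size and the page-to-frame mapping are made-up values for illustration.

```python
# Logical address = page number * PAGE_SIZE + offset. The page table
# maps each page of the process to the frame holding it.
PAGE_SIZE = 1024

page_table = {0: 5, 1: 2, 2: 7}        # page number -> frame number

def translate(logical):
    page, offset = divmod(logical, PAGE_SIZE)
    frame = page_table[page]           # missing entry would page-fault
    return frame * PAGE_SIZE + offset

print(translate(1502))   # page 1, offset 478 -> frame 2 -> 2526
```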
  72. Data Structures
  73. A program can be subdivided into segments may vary in length there is a maximum length Addressing consists of two parts: segment number an offset Similar to dynamic partitioning Eliminates internal fragmentation Segmentation
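Segmented translation differs from paging in that each table entry carries a length, so every reference gets a bounds check. The segment table values below are invented for illustration.

```python
# Segment table: entry = (base, length); a logical address is the
# pair (segment number, offset).
segment_table = [(1400, 1000), (6300, 400), (4300, 1100)]

def translate(seg, offset):
    base, length = segment_table[seg]
    if offset >= length:               # offset past the segment end
        raise IndexError("segment offset out of range")
    return base + offset               # physical address

print(translate(2, 53))   # 4300 + 53 = 4353
```

Because segments are allocated as variable-length units, this scheme eliminates internal fragmentation, exactly as the slide notes, at the cost of external fragmentation.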
  74. Security Issues
  75. Security threat related to memory management Also known as a buffer overrun Can occur when a process attempts to store data beyond the limits of a fixed-sized buffer One of the most prevalent and dangerous types of security attacks Buffer Overflow Attacks
  76. Buffer Overflow A buffer overflow, or buffer overrun, is an anomaly where a process stores data in a buffer outside the memory the programmer set aside for it. The extra data overwrites adjacent memory, which may contain other data, including program variables and program flow control data. This may result in erratic program behaviour, including memory access errors, incorrect results, program termination or a breach of system security.
  77. Buffer Overflow Buffer overflows can be triggered by inputs that are designed to execute code, or alter the way the program operates. They are thus the basis of many software vulnerabilities and can be maliciously exploited. Bounds checking can prevent buffer overflows. Programming languages commonly associated with buffer overflows include C and C++, which provide no built-in protection against accessing or overwriting data in any part of memory and do not automatically check that data written to an array (the built-in buffer type) is within the boundaries of that array.
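Python itself bounds-checks, but the mechanics can be simulated with a bytearray standing in for raw memory: a hypothetical 16-byte input buffer sits directly below a saved return address, and an unchecked C-style copy runs past it. All addresses and values here are invented for the demonstration.

```python
# 24 bytes of "memory": buffer at offsets 0..15, an 8-byte saved
# return address at offsets 16..23.
memory = bytearray(24)
BUF, RET = 0, 16
memory[RET:RET + 8] = (0x401000).to_bytes(8, "little")   # saved return

def unsafe_strcpy(dest, data):
    memory[dest:dest + len(data)] = data   # no bounds check, as in C

unsafe_strcpy(BUF, b"A" * 20)              # 4 bytes past the buffer
ret = int.from_bytes(memory[RET:RET + 8], "little")
print(hex(ret))   # 0x41414141 -- the return address is now attacker data
```

The overlong input silently overwrote the adjacent control data, which is precisely how an attacker redirects program flow; bounds checking on the copy would have prevented it.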
  78. Buffer Overflow Example
  79. Buffer Overflow Stack Values
  80. Prevention Detecting and aborting Countermeasure categories: Defending Against Buffer Overflows
  81. Part III: Uniprocessor Scheduling
  82. Processor Scheduling Aim is to assign processes to be executed by the processor in a way that meets system objectives, such as response time, throughput, and processor efficiency Broken down into three separate functions:
  83. Types of Scheduling
  84. Scheduling and Process State Transitions
  85. Nesting of Scheduling Functions
  86. Long-Term Scheduler Determines which programs are admitted to the system for processing Controls the degree of multiprogramming the more processes that are created, the smaller the percentage of time that each process can be executed may limit to provide satisfactory service to the current set of processes
  87. Medium-Term Scheduling Part of the swapping function Swapping-in decisions are based on the need to manage the degree of multiprogramming considers the memory requirements of the swapped-out processes
  88. Short-Term Scheduling Known as the dispatcher Executes most frequently Makes the fine-grained decision of which process to execute next Invoked when an event occurs that may lead to the blocking of the current process or that may provide an opportunity to preempt a currently running process in favor of another
  89. Short Term Scheduling Criteria Main objective is to allocate processor time to optimize certain aspects of system behavior A set of criteria is needed to evaluate the scheduling policy
  90. Short-Term Scheduling Criteria: Performance
  91. Scheduling Criteria
  92. Priority Queuing
  93. Alternative Scheduling Policies
  94. Selection Function Determines which process, among ready processes, is selected next for execution May be based on priority, resource requirements, or the execution characteristics of the process If based on execution characteristics then important quantities are: w = time spent in system so far, waiting e = time spent in execution so far s = total service time required by the process, including e; generally, this quantity must be estimated or supplied by the user
  95. Decision Mode Specifies the instants in time at which the selection function is exercised Two categories: Nonpreemptive Preemptive
  96. Nonpreemptive vs Preemptive Nonpreemptive: once a process is in the running state, it will continue until it terminates or blocks itself for I/O. Preemptive: the currently running process may be interrupted and moved to the ready state by the OS; preemption may occur when a new process arrives, on an interrupt, or periodically
  97. Process Scheduling
  98. Comparison of Scheduling Policies
  99. First-Come-First-Served (FCFS) Simplest scheduling policy Also known as first-in-first-out (FIFO) or a strict queuing scheme When the current process ceases to execute, the longest process in the Ready queue is selected Performs much better for long processes than short ones Tends to favor processor-bound processes over I/O-bound processes
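A two-job sketch (made-up arrival and service times) shows why FCFS penalizes short jobs stuck behind long ones. Normalized turnaround time is turnaround divided by service time, so 1.0 is the best possible.

```python
# FCFS: run each job to completion in arrival order.
jobs = [("long", 0, 100), ("short", 1, 1)]   # (name, arrival, service)
clock, report = 0, {}
for name, arrival, service in jobs:
    clock = max(clock, arrival) + service    # nonpreemptive run
    turnaround = clock - arrival
    report[name] = turnaround / service      # normalized turnaround
print(report)   # {'long': 1.0, 'short': 100.0}
```

The one-unit job waits 99 units behind the long job, giving it a normalized turnaround a hundred times worse than its neighbor's.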
  100. Round Robin Uses preemption based on a clock Also known as time slicing because each process is given a slice of time before being preempted Principal design issue is the length of the time quantum, or slice, to be used Particularly effective in a general-purpose time-sharing system or transaction processing system One drawback is its relative treatment of processor-bound and I/O-bound processes
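A round-robin sketch for three hypothetical processes, all arriving at time 0, with quantum 2:

```python
from collections import deque

def round_robin(bursts, q):
    """Each ready process runs at most q units, then is preempted
    to the back of the queue. Returns finish times per process."""
    ready = deque(enumerate(bursts))    # all arrive at time 0
    remaining = list(bursts)
    clock, finish = 0, [0] * len(bursts)
    while ready:
        pid, _ = ready.popleft()
        run = min(q, remaining[pid])
        clock += run
        remaining[pid] -= run
        if remaining[pid] > 0:
            ready.append((pid, None))   # preempted: back of the queue
        else:
            finish[pid] = clock         # turnaround = finish time here
    return finish

print(round_robin([3, 6, 4], q=2))   # [7, 13, 11]
```

Shrinking the quantum makes the system more responsive but raises context-switch overhead, which is the design trade-off the two "Effect of Size of Preemption Time Quantum" slides illustrate.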
  101. Effect of Size of Preemption Time Quantum
  102. Effect of Size of Preemption Time Quantum
  103. Shortest Process Next (SPN) Nonpreemptive policy in which the process with the shortest expected processing time is selected next. A short process will jump to the head of the queue. Possibility of starvation for longer processes. One difficulty is the need to know, or at least estimate, the required processing time of each process. If the programmer’s estimate is substantially under the actual running time, the system may abort the job
  104. Shortest Remaining Time (SRT) Preemptive version of SPN. The scheduler always chooses the process that has the shortest expected remaining processing time. Risk of starvation of longer processes. Should give superior turnaround time performance to SPN because a short job is given immediate preference over a running longer job
  105. Highest Response Ratio Next (HRRN) Chooses next the process with the greatest response ratio R = (w + s)/s, where w is the time spent waiting so far and s is the expected service time. Attractive because it accounts for the age of the process. While shorter jobs are favored (a smaller denominator yields a larger ratio), aging without service increases the ratio so that a longer process will eventually get past competing shorter jobs
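The selection step can be sketched directly from the ratio R = (w + s)/s, using the w and s quantities defined on the Selection Function slide; the ready list below uses made-up arrival and service times.

```python
def hrrn_pick(ready, clock):
    """ready: list of (name, arrival, service) for processes waiting
    at time `clock`; pick the one with the greatest response ratio."""
    def ratio(job):
        _, arrival, service = job
        waited = clock - arrival            # w: time spent waiting
        return (waited + service) / service # R = (w + s) / s
    return max(ready, key=ratio)

ready = [("A", 0, 10), ("B", 5, 2), ("C", 9, 4)]
# At clock 10: R(A) = 20/10 = 2.0, R(B) = 7/2 = 3.5, R(C) = 5/4 = 1.25
print(hrrn_pick(ready, clock=10)[0])   # B
```

B wins both because it is short and because it has already waited; had C been waiting as long, its ratio would have grown past B's eventually, which is how HRRN avoids starvation.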
  106. Memory Management is one of the most important and complex tasks of an operating system. Memory needs to be treated as a resource to be allocated to and shared among a number of active processes. It is desirable to maintain as many processes in main memory as possible, and to free programmers from size restrictions in program development; the basic tools are paging and segmentation, which can be combined. Scheduling: the OS must make three types of scheduling decisions with respect to the execution of processes. Long-term determines when new processes are admitted to the system. Medium-term is part of the swapping function and determines when a program is brought into main memory so that it may be executed. Short-term determines which ready process will be executed next by the processor. From a user’s point of view, response time is generally the most important characteristic of a system; from a system point of view, throughput or processor utilization is important. Algorithms: FCFS, Round Robin, SPN, SRT, HRRN. Summary