
CS444/CS544 Operating Systems




Presentation Transcript


  1. CS444/CS544 Operating Systems Introduction to Synchronization 2/07/2007 Prof. Searleman jets@clarkson.edu

  2. CS444/CS544 Spring 2007 • CPU Scheduling • Synchronization • Need for synchronization primitives • Locks and building locks from HW primitives Reading assignment: • Chapter 6 HW#4 posted, due: 2-7-2007 Exam#1: Thurs. Feb. 15th, 7:00 pm, Snell 213

  3. Multi-level Feedback Queues (MLFQ) • Multiple queues representing different types of jobs • Example: I/O bound, CPU bound • Queues have different priorities • Jobs can move between queues based on execution history • If any job can be guaranteed to eventually reach the top priority queue given enough waiting time, then MLFQ is starvation free
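The demote-and-boost behavior described above can be sketched in a few lines. This is an illustrative model, not any particular OS's implementation: the number of levels, the quantum per level, and the boost interval are all assumed values, and the periodic boost to the top queue is what provides the starvation-freedom mentioned in the last bullet.

```python
from collections import deque

class MLFQ:
    """Illustrative multi-level feedback queue (3 levels, assumed quanta).

    A job that uses its whole quantum is demoted one level (it looks
    CPU bound); a periodic boost moves every job back to the top queue,
    which guarantees no job starves in a low-priority queue forever."""

    def __init__(self, quanta=(2, 4, 8), boost_every=20):
        self.queues = [deque() for _ in quanta]
        self.quanta = quanta
        self.boost_every = boost_every
        self.clock = 0

    def add(self, job):                  # job = [name, remaining_time]
        self.queues[0].append(job)

    def _boost(self):
        for q in self.queues[1:]:        # move everything to the top queue
            while q:
                self.queues[0].append(q.popleft())

    def run(self):
        order = []
        next_boost = self.boost_every
        while any(self.queues):
            if self.clock >= next_boost:
                self._boost()
                next_boost += self.boost_every
            level = next(i for i, q in enumerate(self.queues) if q)
            job = self.queues[level].popleft()
            ran = min(self.quanta[level], job[1])
            job[1] -= ran
            self.clock += ran
            order.append(job[0])
            if job[1] > 0:               # used its full quantum: demote
                self.queues[min(level + 1, len(self.queues) - 1)].append(job)
        return order
```

With two jobs, each gets one top-level slice before either gets a longer lower-level slice, which is exactly the "execution history moves jobs between queues" behavior on the slide.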

  4. Multi-level Queues (MLQ) [figure from text, 5.06]

  5. Multi-level Feedback Queues (MLFQ) [figure from text, 5.07]

  6. Real Time Scheduling • Real time processes have timing constraints • Expressed as deadlines or rate requirements • Common real time scheduling algorithms • Rate Monotonic • Priority proportional to rate (i.e., 1/Period) • Things that need to be scheduled more often get the highest priority • Earliest Deadline First • Schedule the job with the earliest deadline • Scheduling homework?  • To provide service guarantees, neither algorithm is sufficient by itself • Need admission control so that the system can refuse to accept a job if it cannot honor the job’s constraints
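The admission-control bullet can be made concrete with the classic utilization-based tests: EDF can admit a set of periodic tasks as long as total utilization is at most 1, while rate monotonic has the more conservative sufficient bound n(2^(1/n) − 1). The sketch below assumes periodic tasks given as (computation time, period) pairs; the function names are my own.

```python
def utilization(tasks):
    """Total CPU utilization of periodic tasks given as (C, T) pairs."""
    return sum(c / t for c, t in tasks)

def edf_admissible(tasks):
    """EDF can schedule any periodic task set with utilization <= 1."""
    return utilization(tasks) <= 1.0

def rm_admissible(tasks):
    """Sufficient (not necessary) test for rate monotonic:
    U <= n * (2^(1/n) - 1), which approaches ln 2 (~0.693) for large n."""
    n = len(tasks)
    return utilization(tasks) <= n * (2 ** (1 / n) - 1)
```

For example, tasks (1,4), (2,5), (1,10) have utilization 0.75 and pass both tests, while (2,4), (2,5) at utilization 0.9 pass EDF's test but fail the rate-monotonic bound of about 0.828: this is the sense in which admission control must "refuse to accept a job it cannot honor."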

  7. Multiprocessor Scheduling • Can schedule each processor separately or all of them together • Like one line feeding all the tellers vs. one line for each teller • Some issues • Want to schedule the same process on the same processor again (processor affinity) • Why? Its cached data is still there • Want to schedule cooperating processes/threads together (gang scheduling) • Why? So they don’t block when they need to communicate with each other

  8. Algorithm Evaluation: Deterministic Modeling • Deterministic modeling • Specifies the algorithm *and* the workload • Example: • Process 1 arrives at time 1, has a running time of 10 and a priority of 2 • Process 2 arrives at time 5, has a running time of 2 and a priority of 1 • … • What is the average waiting time if we use preemptive priority scheduling with FIFO among processes of the same priority?
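Working the example through (assuming, as is conventional, that a lower priority number means higher priority): P1 runs from t=1 until P2 arrives at t=5, P2 preempts and finishes at t=7, then P1 resumes and finishes at t=13. Waiting time is completion − arrival − burst, so P1 waits 2, P2 waits 0, and the average is 1. A minimal sketch of that computation; the unit-time event loop is an illustrative implementation choice, not from the slides:

```python
def avg_waiting_time(procs):
    """procs: list of (name, arrival, burst, priority); lower number =
    higher priority. Simulates preemptive priority scheduling one time
    unit at a time, breaking priority ties FIFO by arrival time."""
    remaining = {name: burst for name, _, burst, _ in procs}
    finish = {}
    t = min(arrival for _, arrival, _, _ in procs)
    while remaining:
        ready = [p for p in procs if p[0] in remaining and p[1] <= t]
        if not ready:
            t += 1                      # CPU idle until next arrival
            continue
        name = min(ready, key=lambda p: (p[3], p[1]))[0]
        remaining[name] -= 1            # run the chosen process one unit
        t += 1
        if remaining[name] == 0:
            del remaining[name]
            finish[name] = t
    waits = {p[0]: finish[p[0]] - p[1] - p[2] for p in procs}
    return waits, sum(waits.values()) / len(waits)
```

Because the workload is fully specified, the answer is exact and repeatable, which is the defining property of deterministic modeling.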

  9. Algorithm Evaluation: Queueing Models • CPU bursts, I/O bursts, arrival times, and service times are each modeled as a probability distribution • The resulting system is analyzed mathematically • To make the analysis tractable, the distributions are chosen to be well behaved (e.g., exponential) even when that is unrealistic

  10. Algorithm Evaluation: Simulation • Implement a scheduler as a user process • Drive the scheduler with a workload that is either • randomly chosen according to some distribution, or • measured on a real system and replayed • Simulations can be just as complex as actual implementations • At some level of effort, you should just implement in the real system and test with “real” workloads • What is your benchmark/common case?
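A random-workload driver for such a simulation might draw interarrival and service times from an exponential distribution (an assumed M/M/1-style workload, chosen here purely for illustration) and replay them through a simple FCFS scheduler:

```python
import random

def fcfs_mean_wait(n_jobs=1000, arrival_rate=0.8, service_rate=1.0, seed=42):
    """Generate a random workload (exponential interarrival and service
    times) and return the mean waiting time under FCFS scheduling."""
    rng = random.Random(seed)           # fixed seed: reproducible runs
    t_arrival = server_free = total_wait = 0.0
    for _ in range(n_jobs):
        t_arrival += rng.expovariate(arrival_rate)
        start = max(t_arrival, server_free)   # wait if the server is busy
        total_wait += start - t_arrival
        server_free = start + rng.expovariate(service_rate)
    return total_wait / n_jobs
```

The alternative mentioned on the slide, replaying a trace measured on a real system, would simply replace the two `expovariate` draws with recorded values, leaving the rest of the driver unchanged.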

  11. Synchronization

  12. Concurrency is a good thing • So far we have mostly been talking about constructs that enable concurrency • Multiple processes, inter-process communication • Multiple threads in a process • Concurrency is critical to using the hardware devices to full capacity • There is always something that needs to be running on the CPU, using each device, etc. • We don’t want to restrict concurrency unless we absolutely have to

  13. Restricting Concurrency When might we *have* to restrict concurrency? • Some resource is so heavily utilized that no one gets any benefit from their small piece • Too many processes wanting to use the CPU (while(1) fork();) • “thrashing” • Solution: Access control (Starvation?) • Two processes/threads we would like to execute concurrently are going to access the same data • One writing the data while the other is reading; two writing over top of each other at the same time • Solution: Synchronization (Deadlock?) • Synchronization primitives enable SAFE concurrency

  14. Correctness • Two concurrent processes/threads must execute correctly under *any* interleaving of their instructions • Scheduling is not under the control of the application writer • Note: one instruction != one line of code in a high-level programming language • If two processes/threads operate on completely independent data, there is no problem • If they share data, the application programmer may need to introduce synchronization primitives to safely coordinate their access to the shared data/resources • If the shared data/resources are read-only, there is also no problem

  15. Illustrate the problem • Suppose we have multiple processes/threads sharing a database of bank account balances. Consider the deposit and withdraw functions:

      int withdraw(int account, int amount) {
          balance = readBalance(account);
          balance = balance - amount;
          updateBalance(account, balance);
          return balance;
      }

      int deposit(int account, int amount) {
          balance = readBalance(account);
          balance = balance + amount;
          updateBalance(account, balance);
          return balance;
      }

  • What happens if multiple threads execute these functions for the same account at the “same” time? • Notice this is not read-only access

  16. Example • Balance starts at $500 and then two processes withdraw $100 at the same time • Two people at different ATMs; the update runs on the same back-end computer at the bank • What could go wrong? • Different interleavings => different final balances!!! Both processes run the same code concurrently:

      int withdraw(int acct, int amount) {
          balance = readBalance(acct);
          balance = balance - amount;
          updateBalance(acct, balance);
          return balance;
      }

  17. Two example interleavings:

  Interleaving 1 (serial, correct):
      P1: balance = readBalance(account);      /* reads $500 */
      P1: balance = balance - amount;
      P1: updateBalance(account, balance);     /* writes $400 */
      P2: balance = readBalance(account);      /* reads $400 */
      P2: balance = balance - amount;
      P2: updateBalance(account, balance);     /* writes $300 */

  Interleaving 2 (lost update):
      P1: balance = readBalance(account);      /* reads $500 */
      P2: balance = readBalance(account);      /* reads $500 */
      P1: balance = balance - amount;
      P1: updateBalance(account, balance);     /* writes $400 */
      P2: balance = balance - amount;
      P2: updateBalance(account, balance);     /* writes $400 */

  $500 - $100 - $100 = $400 ! • If the second process does readBalance before the first does updateBalance, one withdrawal is lost • Before you get too happy, deposits can be lost just as easily!
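The lost update can be reproduced deterministically by replaying interleavings by hand. In this sketch each withdraw is split into three steps, read, subtract, write, mirroring the pseudocode's read/modify/update structure; the explicit schedule list is the only assumption added:

```python
def run_interleaving(schedule):
    """Replay two withdraw($100) calls against a $500 shared balance.
    schedule is a list of (thread_id, step) pairs; each thread keeps
    its own local copy of the balance between its read and its write."""
    shared = {"balance": 500}
    local = {}
    for tid, step in schedule:
        if step == "read":
            local[tid] = shared["balance"]
        elif step == "subtract":
            local[tid] -= 100
        elif step == "write":
            shared["balance"] = local[tid]
    return shared["balance"]

# Serial order: the second thread reads the first thread's write -> $300.
serial = [(1, "read"), (1, "subtract"), (1, "write"),
          (2, "read"), (2, "subtract"), (2, "write")]

# Both read before either writes: one withdrawal is lost -> $400.
racy = [(1, "read"), (2, "read"), (1, "subtract"),
        (1, "write"), (2, "subtract"), (2, "write")]
```

Enumerating schedules like this is also how model checkers reason about races: any interleaving that produces a result other than $300 is a correctness violation.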

  18. Race condition • When the correct output depends on the scheduling or relative timing of operations, we call that a race condition • The output is non-deterministic • To prevent this we need mechanisms for controlling access to shared resources • Enforce determinism

  19. Synchronization Required • Synchronization is required for all shared data structures, such as • Shared databases (like the account balances) • Global variables • Dynamically allocated structures (on the heap): queues, lists, trees, etc. • OS data structures like the run queue, the process table, … • What are not shared data structures? • Variables that are local to a procedure (on the stack) • Other bad things happen if you try to share a pointer to a variable that is local to a procedure

  20. Critical Section Problem

      do {
          ENTRY_SECTION
          critical section      /* access shared data */
          EXIT_SECTION
          remainder section     /* safe */
      } while (1);

  • Model processes/threads as alternating between code that accesses shared data (the critical section) and code that does not (the remainder section) • ENTRY_SECTION requests access to the shared data; EXIT_SECTION notifies others that the critical section is complete
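With a mutual-exclusion lock serving as ENTRY_SECTION and EXIT_SECTION, the withdraw race from the earlier slides disappears. A minimal sketch using Python's threading.Lock (the explicit read/modify/write split mirrors the slides' pseudocode; a real bank would use a database transaction instead):

```python
import threading

balance = 500
lock = threading.Lock()

def withdraw(amount):
    global balance
    with lock:                  # ENTRY_SECTION: acquire the lock
        local = balance         # critical section: read...
        local -= amount         # ...modify...
        balance = local         # ...write
                                # EXIT_SECTION: lock released on block exit

threads = [threading.Thread(target=withdraw, args=(100,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
# balance is now 300 under every possible interleaving
```

Because only one thread can hold the lock at a time, the read/modify/write sequence executes atomically with respect to the other thread, which is exactly the guarantee the ENTRY_SECTION/EXIT_SECTION protocol is meant to provide.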
