
  1. Final Review Bernard Chen Spring 2007

  2. 1.1 What is an Operating System? • An operating system is a program that manages the computer hardware. • It also provides a basis for application programs and acts as an intermediary between the computer user and the computer hardware.

  3. Process Concept • Early computer systems allowed only one program to run at a time. In contrast, present-day computer systems allow multiple programs to be loaded into memory and executed concurrently. • The process concept makes this possible • Process: a program in execution

  4. Process State • new: The process is being created • running: Instructions are being executed • waiting: The process is waiting for some event to occur • ready: The process is waiting to be assigned to a processor • terminated: The process has finished execution

  5. Diagram of Process State
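The state diagram referenced on this slide can be approximated in code. A minimal C sketch (the enum values and function name are illustrative, not from the slides) encoding the legal transitions between the five states listed above:

```c
#include <stdbool.h>

/* The five process states from the previous slide. */
enum state { NEW, READY, RUNNING, WAITING, TERMINATED };

/* Legal transitions in the classic process-state diagram:
   new -> ready (admitted), ready -> running (dispatched),
   running -> ready (preempted by the scheduler),
   running -> waiting (I/O or event wait),
   waiting -> ready (event occurs),
   running -> terminated (exit). */
bool can_transition(enum state from, enum state to) {
    switch (from) {
    case NEW:     return to == READY;
    case READY:   return to == RUNNING;
    case RUNNING: return to == READY || to == WAITING || to == TERMINATED;
    case WAITING: return to == READY;
    default:      return false;   /* TERMINATED is a sink state */
    }
}
```

Note that a waiting process cannot go straight back to running; it must first re-enter the ready queue.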

  6. Schedulers • Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue • Short-term scheduler (or CPU scheduler) – selects which process should be executed next and allocates the CPU

  7. Schedulers • Sometimes it can be advantageous to remove a process from memory and thus decrease the degree of multiprogramming • This scheme is called swapping

  8. Addition of Medium Term Scheduling

  9. Interprocess Communication (IPC) • Two fundamental models • Shared Memory • Message Passing

  10. Communication Models • (a): Message passing (e.g., MPI) (b): Shared memory

  11. Shared-Memory Parallelization System Example • m_set_procs(number): prepare the given number of child processes for execution • m_fork(function): the child processes execute “function” • m_kill_procs(): terminate the child processes

  12. Real Example #define ARRAY_SIZE 1000 #define NPROCS 4 int global_array[ARRAY_SIZE]; int main(int argc, char *argv[]) { m_set_procs(NPROCS); /* prepare to launch this many processes */ m_fork(sum); /* fork out processes */ m_kill_procs(); /* kill activated processes */ return 0; } void sum() { int i, id = m_get_myid(); for (i = id*(ARRAY_SIZE/NPROCS); i < (id+1)*(ARRAY_SIZE/NPROCS); i++) global_array[id*(ARRAY_SIZE/NPROCS)] += global_array[i]; }

  13. Shared-Memory Systems • The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced. • Two types of buffer can be used: • Unbounded buffer • Bounded buffer

  14. Shared-Memory Systems • Unbounded Buffer: the consumer may have to wait for new items, but the producer can always produce new items. • Bounded Buffer: the consumer has to wait if the buffer is empty, and the producer has to wait if the buffer is full

  15. Message-Passing Systems • A message-passing facility provides at least two operations: send(message) and receive(message)

  16. MPI Program example #include "mpi.h" #include <math.h> #include <stdio.h> #include <stdlib.h> int main (int argc, char *argv[]) { int id; /* Process rank */ int p; /* Number of processes */ int i; int array_size=100; int array[100]; /* or *array and then use malloc to increase the size */ int sum=0; MPI_Status stat; MPI_Init(&argc, &argv); MPI_Comm_rank (MPI_COMM_WORLD, &id); MPI_Comm_size (MPI_COMM_WORLD, &p); int local_array[array_size/p]; /* declared after p is known */

  17. MPI Program example if (id==0) { for(i=0; i<array_size; i++) array[i]=i; /* initialize array */ for(i=1; i<p; i++) MPI_Send(&array[i*array_size/p], /* start from */ array_size/p, /* message size */ MPI_INT, /* data type */ i, /* destination rank */ 0, /* message tag */ MPI_COMM_WORLD); for(i=0; i<array_size/p; i++) local_array[i]=array[i]; /* root keeps its own chunk */ } else MPI_Recv(&local_array[0], array_size/p, MPI_INT, 0, 0, MPI_COMM_WORLD, &stat);

  18. MPI Program example int total=0; for(i=0; i<array_size/p; i++) sum+=local_array[i]; MPI_Reduce (&sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD); if (id==0) printf("%d\n", total); MPI_Finalize(); return 0; }

  19. Thread Overview • A thread is a basic unit of CPU utilization. • A traditional (single-threaded) process has only one thread of control • A multithreaded process can perform more than one task at a time; for example, a word processor may have one thread displaying graphics, another responding to keystrokes, and a third performing spelling and grammar checking

  20. Multithreading Models • Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads • User threads are supported above the kernel and are managed without kernel support • Kernel threads are supported and managed directly by the operating system

  21. Multithreading Models • Ultimately, there must exist a relationship between user threads and kernel threads • User-level threads are managed by a thread library, and the kernel is unaware of them • To run on a CPU, a user-level thread must be mapped to an associated kernel-level thread
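The slides do not name a specific thread API; as one concrete illustration, here is a minimal POSIX threads (pthreads) sketch in which several threads each compute a partial sum, mirroring the m_fork()-style division of work shown earlier (the function names other than the pthread_* calls are illustrative):

```c
#include <pthread.h>

#define N 4              /* number of worker threads */

static int partial[N];   /* one partial result per thread */

/* Each thread sums its own 250-element slice of 0..999. */
static void *sum_range(void *arg) {
    int id = *(int *)arg;
    int s = 0;
    for (int i = id * 250; i < (id + 1) * 250; i++)
        s += i;
    partial[id] = s;
    return NULL;
}

/* Spawn N threads, wait for all of them, combine the results. */
int total_sum(void) {
    pthread_t tid[N];
    int ids[N];
    for (int i = 0; i < N; i++) {
        ids[i] = i;
        pthread_create(&tid[i], NULL, sum_range, &ids[i]);
    }
    int total = 0;
    for (int i = 0; i < N; i++) {
        pthread_join(tid[i], NULL);
        total += partial[i];
    }
    return total;
}
```

The pattern (create, join, combine) is the same whether the library maps these user threads many-to-one, one-to-one, or many-to-many onto kernel threads.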

  22. Many-to-one Model User Threads Kernel thread

  23. One-to-one Model User Threads Kernel threads

  24. Many-to-many Model User Threads Kernel threads

  25. CPU Scheduler • Whenever the CPU becomes idle, the OS must select one of the processes in the ready queue to be executed • The selection process is carried out by the short-term scheduler

  26. Preemptive scheduling vs. non-preemptive scheduling • CPU-scheduling decisions may take place when a process (1) switches from running to waiting, (2) terminates, (3) switches from running to ready, or (4) switches from waiting to ready • When scheduling takes place only under circumstances 1 and 2, the scheduling scheme is non-preemptive; otherwise, it is called preemptive • Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state (e.g., Windows 95 and earlier)

  27. Scheduling Criteria • CPU utilization – keep the CPU as busy as possible (from 0% to 100%) • Throughput – # of processes that complete their execution per time unit • Turnaround time – amount of time to execute a particular process • Waiting time – amount of time a process has been waiting in the ready queue • Response time – amount of time it takes from when a request was submitted until the first response is produced

  28. Scheduling Algorithms • First Come First Serve Scheduling • Shortest Job First Scheduling • Priority Scheduling • Round-Robin Scheduling • Multilevel Queue Scheduling • Multilevel Feedback-Queue Scheduling

  29. First Come First Serve Scheduling (FCFS) • Process / Burst time: P1 24, P2 3, P3 3

  30. First Come First Serve Scheduling • Suppose we change the order of the arriving jobs to P2, P3, P1
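The effect of the arrival order can be checked with a short sketch (the function name is illustrative, not from the slides): under FCFS, each process waits for the sum of the bursts of everything ahead of it.

```c
/* Average waiting time under FCFS for the given arrival order:
   process i waits for the total burst time of processes 0..i-1. */
double fcfs_avg_waiting(const int *burst, int n) {
    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        total_wait += elapsed;   /* waiting time of process i */
        elapsed += burst[i];     /* process i now runs to completion */
    }
    return (double)total_wait / n;
}
```

For the bursts above, order P1, P2, P3 (24, 3, 3) gives waits 0, 24, 27 and an average of 17, while order P2, P3, P1 (3, 3, 24) gives waits 0, 3, 6 and an average of 3: short jobs stuck behind a long one is the classic convoy effect.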

  31. Shortest Job First Scheduling – Non-preemptive

  32. Shortest Job First Scheduling – Preemptive

  33. Priority Scheduling • A priority number (integer) is associated with each process • The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority) • Preemptive • Non-preemptive • SJF is a special case of priority scheduling in which the priority is the predicted next CPU burst time

  34. Round-Robin Scheduling

  35. Multilevel Queue • The ready queue is partitioned into separate queues: • foreground (interactive) • background (batch) • Each queue has its own scheduling algorithm: foreground – RR, background – FCFS

  36. Multilevel Queue example • Foreground (RR, time quantum 20): P1 53, P2 17, P3 42 • Background (FCFS): P4 30, P5 20
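The round-robin part of this example can be worked through mechanically. A hedged sketch (function name and fixed-size arrays are illustrative; the queue capacity assumes small inputs like these) that records when each foreground process completes with a quantum of 20:

```c
/* Simulate round-robin with the given quantum.
   done[i] receives the completion time of process i. */
void rr_simulate(const int *burst, int n, int quantum, int *done) {
    int rem[16], queue[64], head = 0, tail = 0, t = 0;
    for (int i = 0; i < n; i++) {
        rem[i] = burst[i];
        queue[tail++] = i;           /* initial ready queue */
    }
    while (head < tail) {
        int i = queue[head++];
        int slice = rem[i] < quantum ? rem[i] : quantum;
        t += slice;                  /* process i runs one slice */
        rem[i] -= slice;
        if (rem[i] > 0)
            queue[tail++] = i;       /* back of the queue */
        else
            done[i] = t;             /* finished at time t */
    }
}
```

For P1 = 53, P2 = 17, P3 = 42 with quantum 20, the slices run P1, P2, P3, P1, P3, P1, P3, so P2 completes at 37, P1 at 110, and P3 at 112.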

  37. Multilevel Feedback Queue Three queues: • Q0 – RR with time quantum 8 milliseconds • Q1 – RR time quantum 16 milliseconds • Q2 – FCFS Scheduling A new job enters queue Q0 which is served FCFS. When it gains CPU, job receives 8 milliseconds. If it does not finish in 8 milliseconds, job is moved to queue Q1. At Q1 job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.
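One consequence of the three-queue scheme above can be captured in a tiny sketch (the function name is illustrative): a job's total CPU demand determines which queue it finally completes in.

```c
/* In which queue does a CPU burst of the given length finish,
   under the scheme above: Q0 (RR, quantum 8), Q1 (RR, quantum 16),
   Q2 (FCFS)? */
int finishing_queue(int burst) {
    if (burst <= 8)  return 0;   /* fits in the Q0 quantum          */
    if (burst <= 24) return 1;   /* 8 ms in Q0 plus up to 16 in Q1  */
    return 2;                    /* demoted to the FCFS queue Q2    */
}
```

This is the point of the design: short interactive bursts stay in the high-priority queues, while long CPU-bound jobs sink to Q2.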

  38. Multilevel Feedback Queue

  39. Algorithm Evaluation • Deterministic Modeling • Simulations • Implementation

  40. Deterministic Modeling • Deterministic Modeling: • Process Burst Time P1 10 P2 29 P3 3 P4 7 P5 12

  41. Deterministic Modeling • The deterministic model is simple and fast. It gives exact numbers, allowing us to compare the algorithms. However, it requires exact numbers as input, and its answers apply only to those cases.
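Deterministic modeling for the five bursts listed above can be reproduced in a few lines (the function name and the FCFS/SJF flag are illustrative, not from the slides): compute the average waiting time for the given order (FCFS) and for the sorted order (non-preemptive SJF).

```c
#include <stdlib.h>

static int cmp_int(const void *a, const void *b) {
    return *(const int *)a - *(const int *)b;
}

/* Deterministic modeling: average waiting time for the given
   burst order (FCFS) or, if use_sjf is set, for the sorted
   order (non-preemptive SJF, all jobs arriving at time 0). */
double avg_waiting(const int *burst, int n, int use_sjf) {
    int order[16];
    for (int i = 0; i < n; i++) order[i] = burst[i];
    if (use_sjf)
        qsort(order, n, sizeof(int), cmp_int);
    int elapsed = 0, total = 0;
    for (int i = 0; i < n; i++) {
        total += elapsed;        /* waiting time of the i-th job */
        elapsed += order[i];
    }
    return (double)total / n;
}
```

For bursts 10, 29, 3, 7, 12: FCFS yields waits 0, 10, 39, 42, 49 (average 28), while SJF's sorted order 3, 7, 10, 12, 29 yields waits 0, 3, 10, 20, 32 (average 13), which is exactly the kind of head-to-head comparison the slide describes.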

  42. Simulation

  43. Implementation • Even a simulation is of limited accuracy. • The only completely accurate way to evaluate a scheduling algorithm is to code it up, put it in the operating system and see how it works.

  44. Chapter 6: Process Synchronization Bernard Chen Spring 2007

  45. Bounded Buffer (producer view) while (true) { /* produce an item and put it in nextProduced */ while (count == BUFFER_SIZE) ; /* do nothing */ buffer[in] = nextProduced; in = (in + 1) % BUFFER_SIZE; count++; }

  46. Bounded Buffer (Consumer view) while (true) { while (count == 0) ; /* do nothing */ nextConsumed = buffer[out]; out = (out + 1) % BUFFER_SIZE; count--; /* consume the item in nextConsumed */ }

  47. Race Condition • We could have this incorrect state because we allowed both processes to manipulate the variable counter concurrently • Race Condition: several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place. • Major portion of this chapter is concerned with process synchronization and coordination
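The race on count arises because count++ and count-- are not atomic: each is a load, an arithmetic step, and a store that can interleave between processes. The slides do not prescribe a fix at this point; as one hedged illustration, here is a POSIX-mutex sketch in which the increment is made a critical section, so two threads incrementing a shared counter always produce the correct total:

```c
#include <pthread.h>

#define ITERS 100000

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* counter++ is a load, an increment, and a store; the mutex
   turns those three steps into one critical section. */
static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);
        counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

/* Run two incrementing threads and return the final count. */
long run_two_threads(void) {
    counter = 0;
    pthread_t a, b;
    pthread_create(&a, NULL, bump, NULL);
    pthread_create(&b, NULL, bump, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Without the lock/unlock pair the final value is usually less than 200000 and varies from run to run, which is precisely the "outcome depends on the order of access" behavior defined above.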

  48. 6.2 The Critical-Section Problem A solution to the critical-section problem must satisfy the following three requirements: • 1. Mutual Exclusion – If process Pi is executing in its critical section, then no other process can be executing in its critical section • 2. Progress – If no process is executing in its critical section and some processes wish to enter their critical sections, then the selection of the process that will enter its critical section next cannot be postponed indefinitely • 3. Bounded Waiting – A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted