
Operating Systems review



Presentation Transcript


  1. Operating Systems review Operating Systems: Internals and Design Principles, William Stallings. Index: 2-15: Architecture & Process, 16-25: Concurrency, 26-37: Scheduling, 36-45: Memory Management, 47-52: File management, 53-56: Distributed Computing, 57-58: Embedded OS

  2. Operating System • Exploits the hardware resources of one or more processors • Provides a set of services to system users • Manages secondary memory and I/O devices

  3. Operating System • A program that controls the execution of application programs • An interface between applications and hardware • Main objectives of an OS: • Convenience • Efficiency • Ability to evolve

  4. A Computer’s Basic Elements • Processor • Main Memory • I/O Modules • System Bus

  5. Services Provided by the Operating System • Program development • Editors and debuggers • Program execution • OS handles scheduling of the numerous tasks required to execute a program • Access to I/O devices • Each device has a unique interface • OS presents a standard interface to users

  6. Microkernel Architecture • Most early OSs use a monolithic kernel • Most OS functionality resides in the kernel • A microkernel assigns only a few essential functions to the kernel • Address spaces • Interprocess communication (IPC) • Basic scheduling

  7. Benefits of a Microkernel Organization • Uniform interfaces on requests made by a process • Extensibility • Flexibility • Portability • Reliability • Distributed system support • Object-oriented operating systems

  8. Benefits of Microkernel (1) • Uniform interfaces: processes need not distinguish between kernel-level and user-level services because all such services are provided by means of message passing. • Extensibility: facilitates the addition of new services as well as the provision of multiple services in the same functional area. • Flexibility: not only can new features be added to the operating system, but existing features can be subtracted to produce a smaller, more efficient implementation. • Portability: all or at least much of the processor-specific code is in the microkernel; thus, changes needed to port the system to a new processor are fewer and tend to be arranged in logical groupings.

  9. Benefits of Microkernel (2) • Reliability: a small microkernel can be rigorously tested, and its use of a small number of application programming interfaces (APIs) improves the chance of producing quality code for the operating-system services outside the kernel. • Distributed system support: the message orientation of microkernel communication lends itself to extension to distributed systems. • Support for object-oriented operating systems (OOOS): an object-oriented approach can lend discipline to the design of the microkernel and to the development of modular extensions to the operating system.
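To make the message-passing picture above concrete, here is a toy C sketch of a request travelling through a single in-memory "port". Everything in it, the `msg_t` layout, the `ipc_send`/`ipc_receive` names, the operation codes, and the one-slot port, is invented for illustration; it is not any real microkernel's API, and a real kernel would copy messages between address spaces and block the receiver until a message arrives.

```c
/* Toy illustration of message-based requests in a microkernel style.
 * All names and structures here are invented for the sketch; no real
 * microkernel API is implied. A single in-memory "port" stands in for
 * the kernel's IPC facility. */
#include <stdio.h>
#include <string.h>
#include <stdint.h>

typedef struct {
    uint32_t sender;      /* id of the requesting process          */
    uint32_t operation;   /* requested service code                */
    char     payload[64]; /* message body (e.g. a file name)       */
} msg_t;

enum { OP_READ = 1, OP_WRITE = 2 };

/* One-slot "port": in a real microkernel the kernel would copy the
 * message between address spaces and block the receiver until a
 * message arrives. */
static msg_t port;
static int   port_full = 0;

static int ipc_send(const msg_t *m)  { if (port_full) return -1; port = *m; port_full = 1; return 0; }
static int ipc_receive(msg_t *m)     { if (!port_full) return -1; *m = port; port_full = 0; return 0; }

int main(void) {
    msg_t req = { .sender = 42, .operation = OP_READ };
    strcpy(req.payload, "/etc/motd");
    ipc_send(&req);                   /* client asks the "file server" */

    msg_t in;
    if (ipc_receive(&in) == 0)        /* server picks the request up   */
        printf("server: op %u from %u on %s\n",
               (unsigned)in.operation, (unsigned)in.sender, in.payload);
    return 0;
}
```

The point of the sketch is only that the kernel's job shrinks to moving messages; file servers, drivers, and the other services live outside it.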

  10. Symmetric multiprocessing (SMP) • An SMP system has multiple processors • These processors share the same main memory and I/O facilities • All processors can perform the same functions • The OS of an SMP schedules processes or threads across all of the processors

  11. SMP Advantages • Performance • Allows parallel processing • Availability • Failure of a single processor does not halt the system • Incremental Growth • Additional processors can be added • Scaling

  12. Multiprocessor OS Design Considerations • The key design issues include • Simultaneous concurrent processes or threads • Scheduling • Synchronization • Memory Management • Reliability and Fault Tolerance

  13. What is a “process”? • A program in execution • An instance of a program running on a computer • The entity that can be assigned to and executed on a processor • A unit of activity characterized by the execution of a sequence of instructions, a current state, and an associated set of system resources • Competing processes vs. cooperating processes?

  14. Process Control Block • Contains the information about a process • Created and managed by the operating system • Allows support for multiple processes

  15. Role of the Process Control Block • The most important data structure in an OS • It defines the state of the OS • Process Control Block requires protection • A faulty routine could cause damage to the block destroying the OS’s ability to manage the process • Any design change to the block could affect many modules of the OS
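As a rough picture of what such a block holds, here is a minimal C sketch assuming a simple five-state process model and a fixed-size register context; every field name and size is illustrative only, not any particular OS's layout.

```c
/* Illustrative process control block; every field name and size here is
 * an assumption for the sketch, not a specific OS's layout. */
#include <stdint.h>
#include <stdio.h>

typedef enum { NEW, READY, RUNNING, BLOCKED, EXIT } proc_state_t;

typedef struct pcb {
    uint32_t     pid;             /* process identifier                        */
    proc_state_t state;           /* current state in the process model        */
    int          priority;        /* scheduling priority                       */
    uint64_t     program_counter; /* saved PC while the process is not running */
    uint64_t     registers[16];   /* saved general-purpose register context    */
    void        *page_table;      /* memory-management information             */
    int          open_files[16];  /* resources (file descriptors) held         */
    struct pcb  *next;            /* link for the ready or blocked queue       */
} pcb_t;

int main(void) {
    pcb_t p = { .pid = 1, .state = READY, .priority = 10 };
    printf("pid %u is in state %d (PCB is %zu bytes here)\n",
           (unsigned)p.pid, (int)p.state, sizeof p);
    return 0;
}
```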

  16. Concurrency Concurrency arises in: • Multiple applications • Sharing time • Structured applications • Extension of modular design • Operating system structure • OS themselves implemented as a set of processes or threads

  17. Multiple Processes • Central to the design of modern Operating Systems is managing multiple processes • Multiprogramming • Multiprocessing • Distributed Processing • Big Issue is Concurrency • Managing the interaction of all of these processes

  18. Deadlock • A set of processes is deadlocked when each process in the set is blocked awaiting an event that can only be triggered by another blocked process in the set • Typically involves processes competing for the same set of resources • Deadlock is permanent because none of the events is ever triggered • Unlike other problems in concurrent process management, there is no efficient solution in the general case

  19. The Three Conditions for Deadlock • Mutual exclusion: only one process may use a resource at a time. • Hold and wait: a process may hold allocated resources while awaiting assignment of others. • No preemption: no resource can be forcibly removed from a process holding it. Ref. What is the Dining Philosophers Problem?
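All of these conditions show up in a deliberately broken two-lock sketch using POSIX threads: each thread holds one mutex (mutual exclusion, hold and wait), neither lock can be taken away (no preemption), and each waits for the lock the other holds, closing the circle. The lock names and the sleeps are only there to make the unlucky interleaving likely; this illustrates the problem and is not code to copy.

```c
/* Deliberately deadlock-prone sketch using POSIX threads: each thread
 * holds one lock and waits for the other, exhibiting mutual exclusion,
 * hold-and-wait, no preemption, and a circular wait. */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

static void *worker1(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_a);   /* holds A ...                        */
    sleep(1);                      /* widen the unlucky window           */
    pthread_mutex_lock(&lock_b);   /* ... and waits for B                */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

static void *worker2(void *arg) {
    (void)arg;
    pthread_mutex_lock(&lock_b);   /* holds B ...                        */
    sleep(1);
    pthread_mutex_lock(&lock_a);   /* ... and waits for A: circular wait */
    pthread_mutex_unlock(&lock_a);
    pthread_mutex_unlock(&lock_b);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker1, NULL);
    pthread_create(&t2, NULL, worker2, NULL);
    pthread_join(t1, NULL);        /* with the sleeps, this usually never returns   */
    pthread_join(t2, NULL);
    puts("no deadlock this run");  /* printed only if the interleaving was lucky    */
    return 0;
}
```

Acquiring the two locks in the same global order in both threads removes the circular wait and, with it, the deadlock.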

  20. Requirement for Mutual Exclusion 1. Mutual exclusion must be enforced: only one process at a time is allowed into its critical section, among all processes that have critical sections for the same resource or shared object. 2. A process that halts in its non-critical section must do so without interfering with other processes. 3. It must not be possible for a process requiring access to a critical section to be delayed indefinitely: no deadlock or starvation.

  21. Requirement for Mutual Exclusion 4. When no process is in a critical section, any process that requests entry to its critical section must be permitted to enter without delay. 5. No assumptions are made about relative process speeds or number of processors. 6. A process remains inside its critical section for a finite time only
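A mutex around the shared update is the simplest way to satisfy requirement 1 while keeping the critical section short, as requirement 6 asks. The sketch below uses POSIX threads; the shared counter is only a stand-in for whatever shared resource the critical section protects.

```c
/* Minimal critical-section sketch with POSIX threads: the mutex
 * guarantees that only one thread at a time updates the shared counter,
 * and each thread stays inside the critical section only briefly. */
#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;
static pthread_mutex_t counter_lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&counter_lock);   /* enter critical section   */
        shared_counter++;                    /* the only protected work  */
        pthread_mutex_unlock(&counter_lock); /* leave promptly           */
    }
    return NULL;
}

int main(void) {
    pthread_t t[4];
    for (int i = 0; i < 4; i++) pthread_create(&t[i], NULL, increment, NULL);
    for (int i = 0; i < 4; i++) pthread_join(t[i], NULL);
    printf("counter = %ld (expected %d)\n", shared_counter, 4 * 100000);
    return 0;
}
```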

  22. Strong/Weak Semaphore • A queue is used to hold processes waiting on the semaphore • In what order are processes removed from the queue? • Strong semaphores use FIFO • Weak semaphores don’t specify the order of removal from the queue

  23. Producer/Consumer Problem • General situation: • One or more producers are generating data and placing it in a buffer • A single consumer is taking items out of the buffer one at a time • Only one producer or consumer may access the buffer at any one time • The problem: • Ensure that the producer can’t add data to a full buffer and the consumer can’t remove data from an empty buffer

  24. Functions • Assume an infinite buffer b with a linear array of elements

  25. Buffer
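A common way to solve the problem on a bounded buffer uses two counting semaphores plus a mutex; the sketch below (POSIX threads and unnamed semaphores) swaps the slide's infinite buffer for a small circular one so that both rules, no taking from an empty buffer and no adding to a full one, are visible. The buffer size and item counts are arbitrary.

```c
/* Bounded-buffer producer/consumer sketch with counting semaphores
 * (POSIX threads + semaphores). A small circular buffer replaces the
 * slide's infinite buffer so the full/empty rules are both exercised. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                         /* buffer capacity (assumed)  */

static int buffer[N];
static int in = 0, out = 0;         /* next free / next full slot */

static sem_t empty_slots;           /* counts free slots          */
static sem_t full_slots;            /* counts filled slots        */
static pthread_mutex_t buf_lock = PTHREAD_MUTEX_INITIALIZER;

static void *producer(void *arg) {
    (void)arg;
    for (int item = 1; item <= 20; item++) {
        sem_wait(&empty_slots);             /* block if the buffer is full   */
        pthread_mutex_lock(&buf_lock);      /* one party in the buffer       */
        buffer[in] = item;
        in = (in + 1) % N;
        pthread_mutex_unlock(&buf_lock);
        sem_post(&full_slots);              /* signal a new item             */
    }
    return NULL;
}

static void *consumer(void *arg) {
    (void)arg;
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);              /* block if the buffer is empty  */
        pthread_mutex_lock(&buf_lock);
        int item = buffer[out];
        out = (out + 1) % N;
        pthread_mutex_unlock(&buf_lock);
        sem_post(&empty_slots);             /* free a slot                   */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void) {
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```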

  26. Scheduling • An OS must allocate resources amongst competing processes. • The resource provided by a processor is execution time • The resource is allocated by means of a schedule

  27. Aim of Short Term Scheduling Main objective is to allocate processor time to optimize certain aspects of system behaviour. A set of criteria is needed to evaluate the scheduling policy.

  28. Short-Term Scheduling Criteria: User vs System • We can differentiate between user and system criteria • User-oriented • Response Time • Elapsed time from the submission of a request until output appears • System-oriented • Effective and efficient utilization of the processor

  29. Short-Term Scheduling Criteria: Performance • We could differentiate between performance related criteria, and those unrelated to performance • Performance-related • Quantitative, easily measured • E.g. response time and throughput • Non-performance related • Qualitative • Hard to measure

  30. Decision Mode • Specifies the instants in time at which the selection function is exercised. • Two categories: • Nonpreemptive • Preemptive

  31. Preemptive VS Nonpreemptive Preemptive scheduling allows a process to be interrupted in the midst of its execution, taking the CPU away and allocating it to another process. Nonpreemptive scheduling ensures that a process relinquishes control of the CPU only when it finishes its current CPU burst.

  32. Overall Aim of Scheduling • The aim of processor scheduling is to assign processes to be executed by the processor over time, in a way that meets system objectives such as response time, throughput, and processor efficiency.

  33. Scheduling Objectives • The scheduling function should • Share time fairly among processes • Prevent starvation of a process • Use the processor efficiently • Have low overhead • Prioritise processes when necessary (e.g. real time deadlines)

  34. Process Scheduling Example • Example set of processes, consider each a batch job • Service time represents total execution time

  35. Process Scheduling a) Schedule the processes using the algorithms FCFS, SJF, and nonpreemptive priority. b) What is the turnaround time of each process? c) What are the waiting times and the average waiting time?
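Since the slide's table of processes is not reproduced in this transcript, the sketch below invents a small set of batch jobs and works parts (b) and (c) for FCFS only: turnaround time is finish minus arrival, and waiting time is turnaround minus service.

```c
/* FCFS scheduling metrics for an assumed set of batch jobs; the arrival
 * and service times below are invented for illustration, since the
 * slide's own table is not in this transcript.
 *   turnaround = finish - arrival,  waiting = turnaround - service */
#include <stdio.h>

typedef struct { const char *name; int arrival, service; } job_t;

int main(void) {
    job_t jobs[] = { {"A", 0, 3}, {"B", 2, 6}, {"C", 4, 4}, {"D", 6, 5}, {"E", 8, 2} };
    int n = sizeof jobs / sizeof jobs[0];

    int clock = 0;
    double total_wait = 0, total_turnaround = 0;

    /* Jobs are listed in arrival order, so FCFS simply runs them in turn. */
    for (int i = 0; i < n; i++) {
        if (clock < jobs[i].arrival) clock = jobs[i].arrival;   /* CPU idles until arrival */
        int finish     = clock + jobs[i].service;
        int turnaround = finish - jobs[i].arrival;
        int waiting    = turnaround - jobs[i].service;
        printf("%s: finish=%2d turnaround=%2d waiting=%2d\n",
               jobs[i].name, finish, turnaround, waiting);
        total_wait += waiting;
        total_turnaround += turnaround;
        clock = finish;
    }
    printf("average waiting = %.2f, average turnaround = %.2f\n",
           total_wait / n, total_turnaround / n);
    return 0;
}
```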

  36. The need for memory management • Memory is cheap today, and getting cheaper • But applications are demanding more and more memory; there is never enough! • Memory management involves swapping blocks of data from secondary storage • Memory I/O is slow compared to a CPU • The OS must cleverly time the swapping to maximise the CPU’s efficiency

  37. Memory Management • Memory needs to be allocated to ensure a reasonable supply of ready processes to consume available processor time • Memory management requirements: • Relocation • Protection • Sharing • Logical organisation • Physical organisation

  38. Page Frame • The number of page frames is the size of main memory divided by the page size: page frames = memory size / page size • The number of bits the system uses for the displacement (offset) within a page is determined by the page size • Example: a system contains 128MB of main memory and has a page size of 8KB
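A worked version of the example, under the usual assumption that the page size is a power of two: 128 MB of memory with 8 KB pages gives 16,384 page frames, and the displacement field needs 13 bits because 8 KB is 2^13 bytes.

```c
/* Worked version of the slide's example: 128 MB of main memory with
 * 8 KB pages. Frame count = memory size / page size; the displacement
 * (offset) field needs log2(page size) bits. */
#include <stdio.h>

int main(void) {
    unsigned long memory_bytes = 128UL * 1024 * 1024;   /* 128 MB */
    unsigned long page_bytes   = 8UL * 1024;            /* 8 KB   */

    unsigned long frames = memory_bytes / page_bytes;   /* 16384 frames */

    int offset_bits = 0;                                /* 13 bits, since 8 KB = 2^13 */
    for (unsigned long p = page_bytes; p > 1; p >>= 1) offset_bits++;

    printf("page frames: %lu, offset bits: %d\n", frames, offset_bits);
    return 0;
}
```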

  39. Page Buffering • If a page is taken out of a resident set but is soon needed, it is still in main memory, saving a disk read. • Modified pages can be written out in clusters rather than one at a time, significantly reducing the number of I/O operations and therefore the amount of disk access time.

  40. Replacement Policies: Optimal Policy • The optimal policy selects for replacement the page whose next reference lies furthest in the future • In the example, it produces three page faults after the frame allocation has been filled.

  41. Least Recently Used (LRU) • Replaces the page that has not been referenced for the longest time • By the principle of locality, this should be the page least likely to be referenced in the near future • Difficult to implement • One approach is to tag each page with the time of last reference. • This requires a great deal of overhead.

  42. LRU Example • The LRU policy does nearly as well as the optimal policy. • In this example, there are four page faults

  43. First-in, first-out (FIFO) • Treats page frames allocated to a process as a circular buffer • Pages are removed in round-robin style • Simplest replacement policy to implement • The page that has been in memory the longest is replaced • But these pages may be needed again very soon if they haven’t truly fallen out of use

  44. FIFO Example • The FIFO policy results in six page faults. • Note that LRU recognizes that pages 2 and 5 are referenced more frequently than other pages, whereas FIFO does not.
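The two examples can be reproduced with a short simulation. The reference string below is assumed to be the one behind the slides' figures (it is the string commonly used with three frames for these policies); as on the slides, faults are counted only after the frames have filled, which yields six for FIFO and four for LRU.

```c
/* FIFO vs. LRU page-fault counts on one reference string with three
 * frames. The string is assumed to match the slides' figure; as on the
 * slides, faults are counted only once all frames are occupied. */
#include <stdio.h>
#include <string.h>

#define FRAMES 3

static int simulate(const int *refs, int n, int use_lru) {
    int frame[FRAMES], last_use[FRAMES], loaded_at[FRAMES];
    int faults_after_fill = 0;
    memset(frame, -1, sizeof frame);             /* -1 marks a free frame */

    for (int t = 0; t < n; t++) {
        int hit = -1;
        for (int i = 0; i < FRAMES; i++)
            if (frame[i] == refs[t]) hit = i;
        if (hit >= 0) { last_use[hit] = t; continue; }   /* no fault */

        /* page fault: use a free frame if any, else pick the victim */
        int victim = -1;
        for (int i = 0; i < FRAMES && victim < 0; i++)
            if (frame[i] == -1) victim = i;
        if (victim < 0) {                        /* a real replacement */
            victim = 0;
            for (int i = 1; i < FRAMES; i++) {
                int key_i = use_lru ? last_use[i]      : loaded_at[i];
                int key_v = use_lru ? last_use[victim] : loaded_at[victim];
                if (key_i < key_v) victim = i;   /* LRU: oldest use; FIFO: oldest load */
            }
            faults_after_fill++;
        }
        frame[victim] = refs[t];
        last_use[victim] = loaded_at[victim] = t;
    }
    return faults_after_fill;
}

int main(void) {
    int refs[] = { 2, 3, 2, 1, 5, 2, 4, 5, 3, 2, 5, 2 };
    int n = sizeof refs / sizeof refs[0];
    printf("FIFO faults after fill: %d\n", simulate(refs, n, 0));  /* 6, as on the slide */
    printf("LRU  faults after fill: %d\n", simulate(refs, n, 1));  /* 4, as on the slide */
    return 0;
}
```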

  45. File Management • The file management system consists of system utility programs that run as privileged applications • It is concerned with secondary storage

  46. Criteria for File Organization • Important criteria include: • Short access time • Ease of update • Economy of storage • Simple maintenance • Reliability • Priority will differ depending on the use (e.g. read-only CD vs Hard Drive) • Some may even conflict

  47. File Organisation Types • Many exist, but usually variations of: • Pile • Sequential file • Indexed sequential file • Indexed file • Direct, or hashed file

  48. File Allocation Methods • Contiguous allocation: a single contiguous set of blocks is allocated to a file at the time of file creation. • Chained allocation: allocation is on an individual block basis; each block contains a pointer to the next block in the chain. • Indexed allocation: the file allocation table contains a separate one-level index for each file; the index has one entry for each portion allocated to the file.
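Chained allocation is easy to picture as a FAT-style table in which each entry names the next block of the file; the table contents and the starting block below are made up for illustration.

```c
/* Chained (FAT-style) allocation sketch: each table entry names the
 * next block of the file, and -1 marks the last block. The table and
 * the starting block below are invented for illustration. */
#include <stdio.h>

#define NBLOCKS 16
#define END_OF_FILE (-1)

int main(void) {
    /* next_block[b] = block that follows b in the same file, or -1 */
    int next_block[NBLOCKS];
    for (int i = 0; i < NBLOCKS; i++) next_block[i] = END_OF_FILE;
    next_block[4]  = 9;          /* one file occupies blocks 4 -> 9 -> 12 */
    next_block[9]  = 12;
    next_block[12] = END_OF_FILE;

    int start = 4;               /* the directory would record this start block */
    printf("file blocks:");
    for (int b = start; b != END_OF_FILE; b = next_block[b])
        printf(" %d", b);        /* following the chain visits every block */
    printf("\n");
    return 0;
}
```

By contrast, contiguous allocation needs only a start block and a length, while indexed allocation would replace the chain with a per-file index listing every allocated portion.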

  49. File Allocation

  50. Record-Blocking Methods • Fixed blocking: fixed-length records are used, and an integral number of records are stored in a block. Unused space at the end of a block is internal fragmentation. • Variable-length spanned blocking: variable-length records are used and are packed into blocks with no unused space; some records may span multiple blocks, with continuation indicated by a pointer to the successor block. • Variable-length unspanned blocking: variable-length records are used, but spanning is not employed; space is wasted in most blocks because the remainder of a block cannot be used if the next record is larger than the remaining unused space.
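A quick calculation makes the internal-fragmentation point in the fixed-blocking bullet concrete; the block and record sizes are assumed for illustration.

```c
/* Internal fragmentation under fixed blocking, with assumed sizes:
 * only whole records fit in a block, so the leftover bytes are wasted. */
#include <stdio.h>

int main(void) {
    int block_size  = 4096;   /* bytes per block (assumed)  */
    int record_size = 300;    /* bytes per record (assumed) */

    int records_per_block = block_size / record_size;            /* 13 records             */
    int wasted = block_size - records_per_block * record_size;   /* 4096 - 3900 = 196 bytes */

    printf("%d records per block, %d bytes of internal fragmentation per block\n",
           records_per_block, wasted);
    return 0;
}
```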
