
Course Overview: Principles of Operating Systems

Introduction, Computer System Structures, Operating System Structures, Processes, Process Synchronization, Deadlocks, CPU Scheduling, Memory Management, Virtual Memory, File Management, Security, Networking, Distributed Systems, Case Studies, Conclusions.



Presentation Transcript


  1. Course Overview: Principles of Operating Systems • Introduction • Computer System Structures • Operating System Structures • Processes • Process Synchronization • Deadlocks • CPU Scheduling • Memory Management • Virtual Memory • File Management • Security • Networking • Distributed Systems • Case Studies • Conclusions

  2. Chapter Overview: Memory Management • Motivation • Objectives • Background • Address Spaces (logical, physical) • Partitioning (fixed, dynamic) • Swapping • Contiguous Allocation • Paging • Segmentation • Segmentation with Paging • Important Concepts and Terms • Chapter Summary

  3. Motivation • after CPU time, memory is the second most important resource in a computer system • even with relatively large amounts of memory, the amount of available memory is often not satisfactory • getting information from the hard disk instead of main memory takes orders of magnitude longer • 60 ns access time for main memory • 10 ms (= 10,000,000 ns) average access time for hard disks • several processes must coexist in memory

  4. Objectives • understand the purpose and usage of main memory in computer systems • understand the addressing scheme used by the CPU to access information in main memory • distinguish between logical and physical addresses • understand various methods to allocate available memory sections to processes

  5. Background and Prerequisites • main memory and computer system architecture • linking and loading • program execution and processes • instruction execution

  6. Memory • main memory is used to store information required for the execution of programs • code, data, auxiliary information • the CPU can only access items that are in main memory • memory is a large array of addressable words or bytes • a process needs memory space to execute • sufficient memory can increase the utilization of resources and improve response time

  7. Hardware Architecture [figure: CPU (Control Unit, Registers, Arithmetic Logic Unit (ALU)) connected via the System Bus to Main Memory and to I/O Devices with their Controllers; after David Jones]

  8. Memory and CPU [figure: CPU (Control Unit, Registers, Arithmetic Logic Unit (ALU)) connected via the bus and the MMU to Main Memory]

  9. Memory Organization [figure: main memory with physical addresses ranging from 0 to 7FFFF] • memory size = memory “height” * memory width • usually measured in MByte • memory “height” • number of memory locations • = physical address space • memory width • number of bits per memory location • either one Byte or one word • memory location • each location has its own unique (physical) address • larger items may be assigned to consecutive locations
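A small worked example of the size calculation, assuming the 0 to 7FFFF address range shown in the figure and a memory width of one Byte per location:

```c
#include <stdio.h>

int main(void) {
    /* Assumption: physical addresses run from 0 to 0x7FFFF (as in the figure)
     * and each memory location holds one Byte. */
    unsigned long height = 0x7FFFFUL + 1;   /* memory "height": number of locations */
    unsigned long width  = 1;               /* memory width in Bytes per location */
    unsigned long size   = height * width;  /* memory size = height * width */
    printf("%lu locations * %lu Byte = %lu Bytes (%lu KByte)\n",
           height, width, size, size / 1024);   /* 524288 Bytes = 512 KByte */
    return 0;
}
```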

  10. Hierarchy of Storage Devices • Registers • Cache • Main Memory • Electronic Disk • Magnetic Disk • Optical Disk • Magnetic Tape

  11. Storage Device Characteristics

  12. From Program to Process [figure: the Program Loader turns a Load Module (Program, Data) into a Process Image in memory consisting of Process Control Block, Program, Data, User Stack, and Shared Address Space]

  13. Linking and Loading [figure: the Linker combines Module 1, Module 2, Module 3, and a Library into a Load Module; the Loader then places it into Main Memory as a Process Image]

  14. Linking • several components are combined into one load module • modules, objects • libraries • references across modules must be resolved • the load module is passed to the loader

  15. Static vs. Dynamic Linking • static linking • done at compile or link time • does not facilitate sharing of libraries • dynamic linking • linking is deferred until load or runtime to allow integration of libraries

  16. Dynamic Runtime Linking • linking is postponed until runtime • unresolved references initiate the loading of the respective module and its linking to the calling module • easy sharing of modules • only the modules that are really needed are loaded and linked • rather complex
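As a concrete sketch of dynamic run-time linking on POSIX systems, a module can be loaded and its references resolved only when they are first needed; the library name libexample.so and the symbol compute below are hypothetical placeholders:

```c
#include <stdio.h>
#include <dlfcn.h>   /* POSIX dynamic linking interface; link with -ldl */

int main(void) {
    /* Load the module only when it is actually needed
     * (libexample.so is a hypothetical library name). */
    void *handle = dlopen("libexample.so", RTLD_LAZY);
    if (!handle) {
        fprintf(stderr, "dlopen failed: %s\n", dlerror());
        return 1;
    }

    /* Resolve a previously unresolved reference at run time
     * (compute is a hypothetical symbol in that library). */
    int (*compute)(int) = (int (*)(int)) dlsym(handle, "compute");
    if (compute)
        printf("compute(21) = %d\n", compute(21));

    dlclose(handle);   /* release the module when it is no longer needed */
    return 0;
}
```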

  17. Loading • creation of an active process from a program • process image • process image is loaded into main memory • execution can start now • symbolic addresses must be resolved into relative or absolute addresses • absolute loading • relocatable loading • dynamic run-time loading

  18. Absolute Loading • a given load module must always be given the same location in main memory • all address references must be absolute • refer to the starting address of physical memory • can be done by the programmer or compiler • severe disadvantages • inflexible: modifications require recoding or recompilation of the program • requires knowledge about memory allocation • impractical for multiprogramming, multitasking, etc.

  19. Relocatable Loading • load module can be located anywhere in main memory • addresses are relative to the start of the program • the location where the image is loaded is added to the relative address • at load time or during execution

  20. Dynamic Runtime Loading • memory references are still relative in the process image when it is loaded into main memory • the absolute address is only calculated when the instruction is executed • requires hardware support (MMU) • allows swapping out and back in of processes • the new location of a process may be different after a swap

  21. Processes and Addresses [figure: process image with Process Control Block (Process Control Information), Program (entry point, program execution), Data (data access), User Stack (top of stack), and Shared Address Space] • process image • determines logical address space • must be placed in physical memory • process execution • relative addresses: relevant information is addressed within the process image • must be converted to absolute (physical) addresses

  22. Process in Main Memory [figure: the process image (Process Control Block, Program, Data, User Stack, Shared Address Space) placed as one contiguous section of main memory] • mapping from logical address space (process image) to physical address space (main memory) • memory size • address conversions • contiguous allocation: the whole process image is allocated in one piece

  23. Processes and Address Spaces [figure: the process images of Process 1 through Process n, each consisting of Process Identification, Process State Information, Process Control Information (the Process Control Block), System Stack, User Stack, and User Address Space, plus a Shared Address Space; adapted from Stallings 98]

  24. Processes in Memory [figure: main memory holding the Operating System and Process 1 through Process n] • several processes need to be accommodated • the OS has its own memory section • this is a simplified view; with a larger number of processes • processes do not occupy one single section in memory, but several smaller ones (non-contiguous allocation) • not the whole process image is always present in memory (virtual memory)

  25. Instruction Execution Cycle • the execution of one instruction consists of several steps • fetch an instruction from memory • decode the instruction • fetch operands from memory • execute instruction • store result in memory • the execution of one instruction may require several memory accesses • even worse with indirect addressing (pointers)
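To make the number of memory accesses visible, here is a minimal sketch of a fetch-decode-execute loop for a made-up one-address machine; the opcodes and the tiny program are purely illustrative:

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical one-address machine: every step that touches mem[]
 * is a separate memory access. */
enum { OP_HALT = 0, OP_LOAD = 1, OP_ADD = 2, OP_STORE = 3 };

int main(void) {
    uint16_t mem[16] = {
        /* tiny program: acc = mem[10] + mem[11]; mem[12] = acc */
        (OP_LOAD  << 8) | 10,
        (OP_ADD   << 8) | 11,
        (OP_STORE << 8) | 12,
        (OP_HALT  << 8),
        [10] = 5, [11] = 7,
    };
    uint16_t pc = 0, acc = 0;

    for (;;) {
        uint16_t instr = mem[pc++];               /* fetch instruction (memory access) */
        uint8_t  op    = instr >> 8;              /* decode */
        uint8_t  addr  = instr & 0xFF;
        if (op == OP_HALT)  break;
        if (op == OP_LOAD)  acc = mem[addr];      /* fetch operand (memory access) */
        if (op == OP_ADD)   acc += mem[addr];     /* fetch operand (memory access) */
        if (op == OP_STORE) mem[addr] = acc;      /* store result (memory access) */
    }
    printf("mem[12] = %u\n", mem[12]);            /* prints 12 */
    return 0;
}
```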

  26. Terminology • contiguous allocation • information is stored in consecutive memory addresses • a process occupies a single section of memory • non-contiguous allocation • a process is distributed over several, disjoint sections of main memory • information is not necessarily stored in consecutive addresses • real memory • memory directly available in the form of RAM chips • virtual memory • extension of real memory through the use of hard disk space for less frequently used information

  27. Terminology (cont.) • logical address • address generated by the CPU • physical address • address applied to the memory chips • block • data are transferred in units of fixed size • used for hard disks and similar devices • typical block sizes are 512 Bytes to 16 KBytes • locality of reference • there is a good chance that the next access to instructions or data will be close to the current one • fragmentation • memory or disk space is allocated in small parts, leading to inefficient utilization

  28. Memory Management Requirements • relocation • protection • sharing • logical organization • physical organization

  29. Relocation • the location of a process image in main memory may be different for different runs of the program • availability of memory areas at the start of the execution • process may be temporarily swapped out • as a consequence, the logical address is different from the physical address • the conversion is performed by the memory management system • common schemes require hardware support • special registers, or • memory management unit (MMU)

  30. Static Relocation • relocation done at linking or at loading time • requires knowledge about available memory sections • a statically relocated program cannot be moved once it has its memory section assigned

  31. Dynamic Relocation • relocation done at run time • requires hardware support • relocation registers, MMU • the process image, or a part of it, can be moved around at any time • the value of the relocation registers must be changed accordingly

  32. Protection • processes may only access their own memory sections • in addition, shared memory sections may be accessible to selected processes • access to other memory areas must be restricted • sections of other processes • sections of the operating system • memory references must be checked at run time • the location and size of the memory section of a process may change during execution • requires hardware assistance • frequently a relocation and a limit register

  33. Sharing • code • processes executing the same program should share its code • data • data structures used by several programs • examples for sharing • code libraries • common programs (editors, mail, etc.) • producer-consumer scenarios with shared memory

  34. Shared Libraries • prewritten code for commonly used functions is stored in libraries • statically linked libraries • library code is linked at link time into an executable program • results in large executables with no sharing • dynamically linked libraries • if libraries are linked at run time they can be shared between processes

  35. Logical Organization • abstract view of memory • programs are usually composed of separate modules, procedures, objects, etc. • separate management of these components allows various optimizations • development, testing, compilation • allocation • sharing • protection • it is also more complicated • references across modules must be resolved at runtime

  36. Physical Organization • view of memory as physical device • dictated by the structure of hardware • size of the physical address space • available memory size • memory width • arrangement of memory modules • interleaved memory, banks, etc. • usually handled by the MMU

  37. Address Spaces • address binding • logical address space • physical address space

  38. Logical to Physical Address [figure: the CPU generates a logical address, which the MMU translates into a physical address in main memory, where the process image (Process Control Block, Program, Data, User Stack, Shared Address Space) resides]

  39. Logical vs. Physical Address [figure: the relocation register is added to the logical address, and the result is loaded into the MAR] • the logical address is generated by the CPU • the physical address is loaded into the memory address register (MAR) • the translation is usually part of the MMU
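A minimal sketch of this translation, combining the relocation (base) register with the limit check described on the Protection slide; the register values below are made-up example numbers:

```c
#include <stdio.h>
#include <stdlib.h>

/* Relocation and limit registers as held by the MMU for the running process. */
typedef struct {
    unsigned long relocation;  /* start of the process image in physical memory */
    unsigned long limit;       /* size of the process's logical address space */
} mmu_t;

/* Translate a logical address generated by the CPU into a physical address. */
unsigned long translate(const mmu_t *mmu, unsigned long logical) {
    if (logical >= mmu->limit) {
        /* reference outside the process's section: trap to the operating system */
        fprintf(stderr, "addressing error: trap to the OS\n");
        exit(EXIT_FAILURE);
    }
    return mmu->relocation + logical;   /* value that would be loaded into the MAR */
}

int main(void) {
    mmu_t mmu = { .relocation = 0x14000, .limit = 0x3000 };   /* example values */
    printf("logical 0x0346 -> physical 0x%lx\n", translate(&mmu, 0x0346));
    return 0;
}
```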

  40. Memory Allocation • assign sections of memory to processes • section size: partitioning • contiguous • one section per process • non-contiguous • each process has several sections

  41. Partitioning • fixed/dynamic • paging/segmentation • virtual memory

  42. Fixed Partitioning • memory is divided into a number of fixed partitions • sizes of partitions may be different • chosen by the OS, developer, or system administrator • maintain a queue for each partition • internal fragmentation • space in a partition not used by a process is lost • the number of partitions (specified at system generation) limits the number of active processes • small jobs do not use partition space efficiently • used by the older IBM OS/360 (MFT)
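A minimal sketch of placement under fixed partitioning, showing the internal fragmentation it causes; partition occupancy and per-partition queues are ignored for brevity, the partition sizes follow the figures on the next two slides, and the process sizes are made-up examples:

```c
#include <stdio.h>

int main(void) {
    int partition[] = { 50, 75, 100, 200 };   /* fixed sizes (KByte), chosen at system generation */
    int process[]   = { 20, 70, 120 };        /* example process sizes (KByte) */

    for (int p = 0; p < 3; p++) {
        for (int i = 0; i < 4; i++) {
            if (process[p] <= partition[i]) {
                /* place the process in the smallest adequate partition;
                 * the unused remainder is internal fragmentation */
                printf("process %dK -> partition %dK, internal fragmentation %dK\n",
                       process[p], partition[i], partition[i] - process[p]);
                break;
            }
        }
    }
    return 0;
}
```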

  43. Fixed Partitions - Multiple Queues [figure: main memory divided into an OS area and fixed partitions of 50K, 75K, 100K, and 200K, with a separate queue of processes in front of each partition]

  44. Fixed Partitions - Single Queue [figure: the same OS area and fixed partitions of 50K, 75K, 100K, and 200K, served by one common queue of processes]

  45. Variable Partitioning • each process is assigned a partition • number, size, and location of the partition can vary • overcomes some problems of fixed partitioning • but still inefficient use of memory • higher overhead

  46. Swapping • processes are temporarily taken out of main memory to make more space available • swap space • secondary storage provides a special area for these processes • swap time • very high compared with an in-memory context switch • example: • 1 MByte process image at a 10 MByte/sec transfer rate = 100 ms swap time • head seek time and latency not considered
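The swap-time figure from the example above as a small calculation (transfer time only; seek time and rotational latency are ignored, as stated); swapping a process out and back in requires two such transfers:

```c
#include <stdio.h>

int main(void) {
    double image_mbyte = 1.0;     /* size of the process image in MByte */
    double rate_mbyte_s = 10.0;   /* disk transfer rate in MByte/sec */
    double swap_ms = image_mbyte / rate_mbyte_s * 1000.0;   /* 100 ms */
    printf("swap time: %.0f ms (out and back in: %.0f ms)\n", swap_ms, 2.0 * swap_ms);
    return 0;
}
```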

  47. Dynamic Storage Allocation • problem: find a suitable free section of memory to accommodate a process • analogy: packing • boxes of different sizes ~ free memory sections • items to be packed ~ processes

  48. Storage Allocation Strategies • first-fit: allocate the first hole that is big enough • best-fit: allocate the smallest hole that is big enough • worst-fit: allocate the largest hole
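A minimal sketch of first-fit and best-fit over a fixed list of free holes; the hole sizes and the request size are made-up example values (worst-fit would simply pick the largest hole instead):

```c
#include <stdio.h>

#define NHOLES 5

/* Return the index of the first hole that is big enough, or -1 if none is. */
int first_fit(const int hole[], int n, int request) {
    for (int i = 0; i < n; i++)
        if (hole[i] >= request)
            return i;                 /* stop at the first adequate hole */
    return -1;
}

/* Return the index of the smallest hole that is big enough, or -1 if none is. */
int best_fit(const int hole[], int n, int request) {
    int best = -1;
    for (int i = 0; i < n; i++)       /* must scan the entire list */
        if (hole[i] >= request && (best < 0 || hole[i] < hole[best]))
            best = i;
    return best;
}

int main(void) {
    int hole[NHOLES] = { 100, 500, 200, 300, 600 };   /* free hole sizes in KByte */
    int request = 212;                                /* requested size in KByte */

    int f = first_fit(hole, NHOLES, request);
    int b = best_fit(hole, NHOLES, request);
    if (f >= 0) printf("first-fit: hole %d (%dK)\n", f, hole[f]);  /* hole 1, 500K */
    if (b >= 0) printf("best-fit:  hole %d (%dK)\n", b, hole[b]);  /* hole 3, 300K */
    return 0;
}
```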

  49. First-Fit • low overhead • no exhaustive search required: scanning stops at the first hole that is big enough • generates reasonably large holes on average

  50. Best-Fit • slower than first-fit • must search the entire list • tends to fill up memory with tiny useless holes • can be made faster by sorting the hole list by size
