Operating System Principles
Understand memory management algorithms, address binding, and hardware essentials such as CPU, main memory, and cache for efficient operating system performance. Explore strategies like paging and segmentation.
Operating System Principles Ku-Yaw Chang canseco@mail.dyu.edu.tw Assistant Professor, Department of Computer Science and Information Engineering Da-Yeh University
Chapter 8 Memory-Management Strategies
• Keep several processes in memory to increase CPU utilization
• Memory-management algorithms
  • Require hardware support
• Common strategies
  • Paging
  • Segmentation
Outline
• Background
• Swapping
• Contiguous Memory Allocation
• Paging
• Structure of the Page Table
• Segmentation
• Example: The Intel Pentium
• Summary
• Exercises
8.1 Background
• A program must be brought (loaded) into memory and placed within a process for it to be run
• Address binding
  • A mapping from one address space to another
• A typical instruction-execution cycle
  • Fetch an instruction from memory
  • Decode the instruction
    • May cause operands to be fetched from memory
  • Execute the instruction
  • Store results back into memory
8.1.1 Basic Hardware
• The CPU can directly access
  • Registers built into the processor
    • Generally accessible within one cycle of the CPU clock
  • Main memory
    • Access may take many cycles of the CPU clock to complete
    • The processor normally needs to stall
• Cache
  • A memory buffer used to accommodate the speed differential
8.1.1 Basic Hardware
• Protection must be provided by the hardware
  • Protect the OS from access by user processes
  • Protect user processes from one another
• One possible implementation
  • Base register
    • Holds the smallest legal physical memory address
  • Limit register
    • Specifies the size of the range
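The base/limit check above can be sketched in a few lines of Python (a minimal simulation; the register values are hypothetical example contents):

```python
# Hypothetical register contents:
BASE = 300040    # base register: smallest legal physical address
LIMIT = 120900   # limit register: size of the legal range

def is_legal(addr):
    """The hardware check: trap to the OS unless base <= addr < base + limit."""
    return BASE <= addr < BASE + LIMIT

print(is_legal(300040))   # True  (the first legal address)
print(is_legal(420940))   # False (base + limit is one past the end)
```

In real hardware both comparisons happen on every memory reference made in user mode, and a failed check traps to the OS rather than returning False.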
8.1.1 Basic Hardware
• The OS is given unrestricted access to both operating-system and users’ memory
8.1.2 Address Binding
• Input queue
  • The collection of processes on the disk that are waiting to be brought into memory to run
• A user program goes through several steps before being executed
  • Addresses in the source program are symbolic
  • A compiler binds these symbolic addresses to relocatable addresses
  • A loader binds these relocatable addresses to absolute addresses
8.1.2 Address Binding
• Compile time
  • Absolute code can be generated
  • Known at compile time where the process will reside in memory
  • MS-DOS .COM-format programs are absolute code
8.1.2 Address Binding
• Load time
  • Relocatable code can be generated
  • Not known at compile time where the process will reside in memory
  • Final binding is delayed until load time
8.1.2 Address Binding
• Execution time
  • The process can be moved from one memory segment to another
  • Binding must be delayed until run time
  • Special hardware must be available
8.1.3 Logical- Versus Physical-Address Space
• Logical address
  • An address generated by the CPU
  • Also called a virtual address
  • Identical to the physical address under compile-time and load-time binding
• Logical-address space
  • The set of all logical addresses generated by a program
• Physical address
  • An address seen by the memory unit
  • The one loaded into the memory-address register
• Under execution-time binding, the logical and physical address spaces differ
• Physical-address space
  • The set of all physical addresses corresponding to the logical addresses
8.1.3 Logical- Versus Physical-Address Space
• Memory-management unit (MMU)
  • A hardware device
  • Performs the run-time mapping from virtual to physical addresses
  • Different methods can accomplish such a mapping
• Logical addresses
  • Range from 0 to max
• Physical addresses
  • Range from R + 0 to R + max, for a relocation-register value R
• The user program
  • Deals with logical addresses
  • Never sees the real physical addresses
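The simplest MMU, a single relocation register, can be sketched as follows (the register value and the size of the logical-address space are hypothetical):

```python
RELOCATION = 14000   # hypothetical relocation-register value R
MAX_LOGICAL = 2**16  # hypothetical size of the logical-address space

def mmu(logical):
    """Map a logical address in 0..max to a physical address in R+0..R+max."""
    if not 0 <= logical < MAX_LOGICAL:
        raise MemoryError("trap: logical address out of range")
    return RELOCATION + logical

print(mmu(0))    # 14000
print(mmu(346))  # 14346
```

The user program only ever produces the left-hand (logical) values; the right-hand (physical) values exist only inside the memory system.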
Dynamic Relocation Using a Relocation Register
8.1.4 Dynamic Loading
• Without dynamic loading, the entire program and its data must be in memory for execution
  • The size of a process is limited to the size of physical memory
• Dynamic loading
  • All routines are kept on disk in a relocatable load format
  • The main program is loaded into memory and executed
  • A routine is not loaded until it is called
• Advantage
  • An unused routine is never loaded
8.1.5 Dynamic Linking and Shared Libraries
• Dynamic linking
  • Linking is postponed until execution time
  • A small piece of code, called a stub, is used to locate the appropriate memory-resident library routine
  • The OS checks whether the routine is already in the process’s memory address space
    • If not, it loads the routine into memory
  • The stub replaces itself with the address of the routine and executes the routine
• Dynamic linking is particularly useful for libraries
  • Library updates
8.2 Swapping
• A process can be
  • Swapped temporarily out of memory to a backing store
    • Commonly a fast disk
  • Brought back into memory for continued execution
• A process can be swapped back into
  • The same memory space
    • If binding is done at assembly or load time
  • A different memory space
    • If execution-time binding is used
Swapping of two processes
8.2 Swapping
• Context-switch time is fairly high
• Example
  • User process size: 10 MB
  • Transfer rate: 40 MB per second
  • Transfer time: 10,000 KB / 40,000 KB per second = 250 milliseconds
  • Average disk latency: 8 ms
  • Swap time in each direction: 250 + 8 = 258 ms
  • Total swap time (out and back in): 258 + 258 = 516 ms
• The time quantum should be substantially larger than 0.516 seconds
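The swap-time arithmetic above generalizes to a one-line formula (a sketch; it assumes a single sequential transfer with one latency charge per direction):

```python
def swap_time_ms(process_mb, rate_mb_per_s, latency_ms):
    """Round-trip swap time: (transfer + latency) out, then the same back in."""
    transfer_ms = process_mb / rate_mb_per_s * 1000
    return 2 * (transfer_ms + latency_ms)

print(swap_time_ms(10, 40, 8))  # 516.0 ms, matching the example above
```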
8.2 Swapping
• The major part of the swap time is transfer time
  • Directly proportional to the amount of memory swapped
• To reduce swap time, swap only the memory that is actually used
• A process must be completely idle before it is swapped
  • In particular, it must have no pending I/O
  • Either never swap a process with pending I/O, or
  • Execute I/O operations only into operating-system buffers
8.3 Contiguous Memory Allocation
• Memory is usually divided into two partitions
  • One for the resident operating system
    • Placed in either low or high memory, usually with the interrupt vector
  • One for the user processes
• Contiguous memory allocation
  • Each process is contained in a single contiguous section of memory
  • For efficiency purposes
8.3.1 Memory Mapping and Protection
• Hardware support
  • A relocation register
    • Contains the value of the smallest physical address
  • A limit register
    • Contains the range of logical addresses
8.3.2 Memory Allocation
• Fixed-sized partitions
  • The simplest method
  • Each partition contains exactly one process
  • The degree of multiprogramming is bounded by the number of partitions
• Strategies
  • First fit (faster)
  • Best fit (better storage utilization)
  • Worst fit
8.3.2 Memory Allocation
• Multiple-partition allocation
  • Hole – a block of available memory; holes of various sizes are scattered throughout memory
  • When a process arrives, it is allocated memory from a hole large enough to accommodate it
  • The operating system maintains information about
    a) allocated partitions
    b) free partitions (holes)
[Figure: a series of memory snapshots showing the OS plus processes 2, 5, 8, 9, and 10 being allocated and freed, leaving holes]
8.3.2 Memory Allocation
• First fit
  • Allocate the first hole that is big enough
• Best fit
  • Allocate the smallest hole that is big enough
  • Must search the entire list, unless it is ordered by size
  • Produces the smallest leftover hole
• Worst fit
  • Allocate the largest hole
  • Must search the entire list
  • Produces the largest leftover hole
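The three strategies can be compared side by side on one free list (a sketch; the hole sizes and the 212 KB request are hypothetical example values):

```python
def allocate(holes, request, strategy):
    """Pick a hole for `request` KB. Returns the hole's index, or None.
    `holes` is a list of free-hole sizes in address order."""
    candidates = [(size, i) for i, size in enumerate(holes) if size >= request]
    if not candidates:
        return None
    if strategy == "first":
        return min(candidates, key=lambda c: c[1])[1]  # lowest address wins
    if strategy == "best":
        return min(candidates)[1]   # smallest hole that still fits
    if strategy == "worst":
        return max(candidates)[1]   # largest hole
    raise ValueError("unknown strategy: " + strategy)

# Hypothetical hole list (sizes in KB) and a 212 KB request:
holes = [100, 500, 200, 300, 600]
print(allocate(holes, 212, "first"))  # 1 (the 500 KB hole)
print(allocate(holes, 212, "best"))   # 3 (the 300 KB hole)
print(allocate(holes, 212, "worst"))  # 4 (the 600 KB hole)
```

Note that best fit pays for its tighter packing with a full scan of the list, exactly as the bullets above state.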
8.3.3 Fragmentation
• External fragmentation
  • Enough total memory space exists to satisfy a request, but it is not contiguous
  • Reduced by compaction
    • Shuffle memory contents to place all free memory together in one large block
    • Possible only if relocation is dynamic and done at execution time
• Internal fragmentation
  • Allocated memory may be slightly larger than requested memory
  • This size difference is memory internal to a partition that is not being used
8.4 Paging
• A memory-management scheme that permits the physical-address space of a process to be noncontiguous
• Avoids the problem of fitting memory chunks of varying sizes onto the backing store
8.4.1 Basic Method
• Frames (physical memory)
  • Fixed-sized blocks
• Pages (logical memory)
  • Blocks of the same size
• Every address generated by the CPU is divided into
  • A page number (p)
    • An index into a page table
    • The page table contains the base address of each page in physical memory
  • A page offset (d)
    • The displacement within the page
8.4.1 Basic Method
• Page size
  • Defined by the hardware
  • Typically a power of 2
  • Varying from 512 bytes to 16 MB per page
• If the logical-address space is 2^m and the page size is 2^n addressing units
  • The high-order m−n bits designate the page number
  • The n low-order bits designate the page offset

  | page number | page offset |
  |      p      |      d      |
  |   m−n bits  |   n bits    |
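Because the page size is a power of 2, the split is two bit operations (a sketch; the 4 KB page size and the example address are hypothetical):

```python
def split_address(addr, n):
    """Split a logical address into (page number, offset) for 2**n-byte pages."""
    return addr >> n, addr & ((1 << n) - 1)

# 4 KB pages (n = 12) and an arbitrary example address:
p, d = split_address(0x12345, 12)
print(hex(p), hex(d))  # 0x12 0x345
```

This is why hardware page sizes are powers of 2: the page number and offset fall out of the address bits with no division.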
Paging hardware
Paging Model
Paging Example
• Page size of 4 bytes
• Physical memory of 32 bytes (8 frames)
• Paging itself is a form of dynamic relocation
• No external fragmentation
• May have internal fragmentation
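Translation in this tiny example can be worked through directly (a sketch; the page-table contents are hypothetical example values):

```python
PAGE_SIZE = 4  # bytes, so the offset is the 2 low-order bits

# Hypothetical page table: page number -> frame number
page_table = {0: 5, 1: 6, 2: 1, 3: 2}

def translate(logical):
    p, d = divmod(logical, PAGE_SIZE)     # page number, page offset
    return page_table[p] * PAGE_SIZE + d  # frame base + offset

print(translate(0))   # page 0 -> frame 5, physical address 20
print(translate(13))  # page 3, offset 1 -> frame 2, physical address 9
```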
8.4.1 Basic Method
• Internal fragmentation
  • On average, one-half page per process
  • Suggests that small page sizes are desirable
• However, overhead is involved in each page-table entry
• And disk I/O is more efficient when the data being transferred is larger
• Generally, page sizes have grown over time
  • Processes, data sets, and main memory have become larger
  • Typically between 4 KB and 8 KB in size
• Some CPUs and kernels support multiple page sizes
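The internal-fragmentation calculation is simply the unused tail of the last page (a sketch; the 72,766-byte process is a hypothetical example):

```python
def internal_fragmentation(process_bytes, page_size):
    """Bytes wasted in the process's last page (0 if it ends on a page boundary)."""
    remainder = process_bytes % page_size
    return 0 if remainder == 0 else page_size - remainder

# A 72,766-byte process with 2,048-byte pages needs 36 pages
# (35 full pages plus 1,086 bytes), wasting 2048 - 1086 = 962 bytes:
print(internal_fragmentation(72766, 2048))  # 962
```

Averaged over many process sizes, the wasted tail is about half a page, which is where the "one-half page per process" figure comes from.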
Free Frames: Before and After Allocation
8.4.2 Hardware Support
• OS
  • Allocates a page table for each process
  • A pointer to the page table is stored in the PCB
• Hardware implementations
  • The page table as a set of dedicated registers
    • Efficiency is a major consideration
    • Satisfactory only if the page table is reasonably small
  • Page-table base register (PTBR)
    • Points to a page table kept in main memory
    • Problem: two memory accesses are needed to access a byte
      • One for the page-table entry
      • One for the byte itself
8.4.2 Hardware Support
• Translation look-aside buffer (TLB)
  • A special, small, fast-lookup hardware cache
  • The search is fast
  • Used with page tables
  • Each entry consists of two parts: a key and a value
  • Number of entries: between 64 and 1,024
• A TLB miss
  • Occurs when the page number is not in the TLB
• Entries can be wired down
  • They cannot be removed from the TLB
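The key/value behavior of a TLB can be sketched in software (a sketch only: the capacity, the FIFO replacement policy, and the page-table contents below are hypothetical, and a real TLB searches all entries in parallel in hardware):

```python
from collections import OrderedDict

class TLB:
    """A tiny fully associative TLB with FIFO replacement."""

    def __init__(self, capacity=64):
        self.capacity = capacity
        self.entries = OrderedDict()  # key: page number -> value: frame number
        self.hits = self.misses = 0

    def lookup(self, page, page_table):
        if page in self.entries:      # TLB hit
            self.hits += 1
            return self.entries[page]
        self.misses += 1              # TLB miss: consult the page table in memory
        frame = page_table[page]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False)  # evict the oldest entry
        self.entries[page] = frame
        return frame

# Hypothetical 2-entry TLB and page table:
tlb = TLB(capacity=2)
page_table = {0: 7, 1: 3, 2: 9}
for p in [0, 1, 0, 2, 0]:
    tlb.lookup(p, page_table)
print(tlb.hits, tlb.misses)  # 1 hit, 4 misses
```

A wired-down entry would simply be excluded from the eviction step.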
Paging Hardware with TLB
8.4.2 Hardware Support
• Hit ratio
  • The percentage of times that a particular page number is found in the TLB
• Example
  • Hit ratio: 80%
  • 20 nanoseconds to search the TLB
  • 100 nanoseconds to access memory
  • If the page is in the TLB: 120 nanoseconds
  • If not: 220 nanoseconds
  • Effective access time = 0.8 × 120 + 0.2 × 220 = 140 nanoseconds
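The effective-access-time formula is a weighted average of the hit and miss cases (a sketch; the timings below are the example's values, not properties of any particular CPU):

```python
def effective_access_time(hit_ratio, tlb_ns, mem_ns):
    """Weighted average of the TLB-hit and TLB-miss access times."""
    hit_time = tlb_ns + mem_ns       # TLB search + one memory access
    miss_time = tlb_ns + 2 * mem_ns  # TLB search + page-table access + byte access
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(effective_access_time(0.80, 20, 100))  # ~140 ns, as in the example
```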
8.4.3 Protection
• Memory protection
  • Accomplished by protection bits associated with each frame
• One bit defines a page as read-write or read-only
  • An attempt to write to a read-only page causes a hardware trap to the OS
• One bit – the valid-invalid bit
  • The OS sets this bit to allow or disallow access to the page
8.4.3 Protection
8.4.4 Shared Pages
• An advantage of paging
  • Sharing common code
  • Only one copy of the code need be kept in physical memory
• Examples
  • Compilers, window systems, run-time libraries, database systems, and so on
• Reentrant code (or pure code)
  • Non-self-modifying code
  • Never changes during execution
Sharing of code in a paging environment
8.5 Structure of the Page Table (skipped)
8.6 Segmentation
• Paging enforces a separation between the user’s view of memory and the actual physical memory
8.6.1 Basic Method
• User’s view of memory
  • A collection of variable-sized segments, with no necessary ordering among segments