
Operating Systems (CS 340 D)

This chapter provides a detailed description of various memory management strategies in operating systems, including swapping, contiguous memory allocation, paging, and segmentation.




Presentation Transcript


  1. Princess Nora University Faculty of Computer & Information Systems Computer Science Department Operating Systems (CS 340 D) • L. Reem Al-Salih

  2. Chapter 8: Memory Management Strategies

  3. Chapter 8: Memory Management Strategies • Background • Swapping • Contiguous Memory Allocation • Paging • Segmentation

  4. Objectives • To provide a detailed description of various ways of organizing memory hardware • To discuss various memory-management techniques, including paging and segmentation

  5. Background

  6. Background • Memory is central to the operation of a modern computer system. • Memory consists of a large array of words or bytes, each with its own address • Selection of a memory-management method for a specific system depends on many factors, especially on the hardware design of the system.

  7. Basic Hardware • Main memory, registers, and cache are the ONLY storage the CPU can access directly • A program must be brought (from disk) into memory and placed within a process for it to be run • Direct storage access time: • Register access takes one CPU clock cycle (or less) • Main memory access can take many cycles (i.e. it is slow), which is a huge problem because of the frequency of memory accesses • Solution: a cache (fast memory) sits between main memory and the CPU registers

  8. Basic Hardware (Cont.) • Protection of memory: • Protection of memory is necessary to ensure correct operation • Protection from what (/possible risks)? • Protect the operating system from access by user processes • Protect user processes from one another • This protection must be provided by the hardware & can be implemented in several ways. One method to implement protection: use Base and Limit Registers

  9. Basic Hardware (Cont.) • Using Base and Limit Registers: • We need to make sure that each process has a separate memory space…How? • Main idea: we need to determine the range of legal addresses that can be accessed only by the process. • We can provide this protection by using two registers (base & limit) • The base register >> holds the physical address of the first byte in the legal range. • The limit register >> holds the size of the range.

  10. Basic Hardware (Cont.) • Example: If the base register holds 300040 and the limit register holds 120900…what is the range of legal addresses? • The program can legally access all addresses from 300040 through 420939 (inclusive) • NOTE: Last legal physical address = base + limit - 1
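The base/limit arithmetic on this slide can be sketched in Python (a minimal sketch; the helper name is ours):

```python
def legal_range(base, limit):
    """Return the (first, last) legal physical addresses, inclusive.

    The last legal address is base + limit - 1, as the slide notes.
    """
    return base, base + limit - 1

# Slide's example: base register = 300040, limit register = 120900
first, last = legal_range(300040, 120900)
```

Running this reproduces the slide's range of 300040 through 420939.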

  11. Basic Hardware (Cont.) • How do base & limit registers help to provide memory protection? • By applying (2) procedures: • Procedure (1): The CPU hardware compares every address generated in user mode with the registers. • Procedure (2): Restrict the ability to load the base & limit registers to the OS only.

  12. Basic Hardware (Cont.) • Procedure (1): The CPU hardware compares every address generated in user mode with the registers. • If (CPU-generated address ≥ base) & (CPU-generated address < base + limit)… • Then…the CPU-generated address is legal and allowed to access the memory • Else…the CPU-generated address is illegal and NOT allowed to access the memory…(causing a trap (/error) to the OS) • This scheme prevents a user program from (accidentally or deliberately) modifying the code or data structures of either the operating system or other users…(solution to the protection problem)
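The legality check in Procedure (1) can be mimicked in Python (a sketch only; the function name is ours, and we use `MemoryError` to stand in for the hardware trap to the OS):

```python
def check_address(addr, base, limit):
    """Mimic the MMU check: an address is legal iff base <= addr < base + limit."""
    if base <= addr < base + limit:
        return addr  # legal: the access is forwarded to memory
    # illegal: the hardware would trap to the OS here
    raise MemoryError("trap to OS: illegal address %d" % addr)
```

With the earlier slide's values (base 300040, limit 120900), addresses 300040 through 420939 pass the check and 420940 raises.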

  13. Basic Hardware (Cont.) Fig. (8.2): Hardware address protection with base and limit registers

  14. Basic Hardware (Cont.) • Procedure (2): Restrict the ability to load the base & limit registers to the OS ONLY. • This restriction is applied by using special privileged instructions. • Since privileged instructions can be executed only in kernel mode, and since only the operating system executes in kernel mode…ONLY the operating system can load the base and limit registers. • This scheme allows the operating system to change the value of the registers but prevents user programs from changing the registers' contents.

  15. Address Binding • Address binding (or relocation): the process of associating program instructions and data with physical memory addresses • A user program will go through several steps (some of which may be optional) before being executed… (steps are: compiling >>> linking >>> execution) • Addresses may be represented in different ways during these steps. • Addresses in the source program are generally symbolic (such as count, sum). • A compiler will typically bind these symbolic addresses to relocatable addresses (such as "14 bytes from the beginning of this module"). • The linkage editor or loader will in turn bind the relocatable addresses to absolute (physical) addresses (such as 74014). • Each binding is a mapping from one address space to another.

  16. Address Binding (cont.) Figure 8.3: Multistep processing of a user program.

  17. Address Binding (cont.) • The binding of instructions and data to memory addresses can be done at any step along the way: • Compile time: The compiler translates symbolic addresses to absolute addresses. If it is known at compile time where the process will reside in memory, then absolute code can be generated (Static). • Load time: When it is not known at compile time where the process will reside in memory, the compiler translates symbolic addresses to relative (relocatable) addresses. The loader translates these to absolute addresses (Static). • Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. The absolute addresses are generated by special hardware (e.g. the MMU). • Most general-purpose OSs use this method (Dynamic). • Static: new locations are determined before execution. • Dynamic: new locations are determined during execution.

  18. Logical vs. Physical Address Space • Logical/virtual address: address generated by the CPU • Physical address: address seen by the memory hardware • Compile-time / load-time binding >>> logical address = physical address • Run-time binding >>> logical address ≠ physical address • Logical address space: the set of all logical addresses generated by a program • Physical address space: the set of all physical addresses corresponding to these logical addresses. • MMU (Memory-Management Unit): h/w device that maps virtual addresses to physical addresses at run time • Different methods of address mapping (/memory management strategies): • Contiguous memory allocation • Paging • Segmentation

  19. Simple Address Mapping Method • For NOW, we illustrate address mapping with a simple MMU scheme: • The base register is now called a relocation register. • Basic methodology >>>> Physical address = logical address + relocation register • For example: • Relocation register contains 14000 • If logical address = 0 >>>> Physical address = 0 + 14000 = 14000 • If logical address = 346 >>>> Physical address = 346 + 14000 = 14346 • The user program never sees the real physical addresses; it deals only with logical addresses. The MMU converts logical addresses into physical addresses.
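The relocation-register mapping above is a one-line computation; a minimal Python sketch (the function name is ours):

```python
def relocate(logical, relocation_register):
    """Simple MMU scheme: physical address = logical address + relocation register."""
    return logical + relocation_register

# Slide's example: relocation register = 14000
phys_a = relocate(0, 14000)    # logical 0   -> physical 14000
phys_b = relocate(346, 14000)  # logical 346 -> physical 14346
```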

  20. Example of Simple Address Mapping Method

  21. Swapping

  22. Swapping • Swapping is a mechanism in which a process can be swapped temporarily out of main memory to a backing store, and then brought back into memory for continued execution. • The backing store is usually a hard disk drive or any other secondary storage that is fast to access and large.

  23. Swapping (Cont.) • How is swapping performed? • The system maintains a ready queue consisting of all processes whose memory images are on the backing store or in memory and are ready to run. • Whenever the CPU scheduler decides to execute a process, it calls the dispatcher. • The dispatcher checks to see whether the next process in the queue is in memory. • If it is not, and if there is no free memory region, the dispatcher swaps out a process currently in memory and swaps in the desired process. • Then the dispatcher reloads the registers and transfers control to the selected process. • The major time-consuming part of swapping is transfer time • Total transfer time ∝ amount of memory swapped • Context-switch time in such a swapping system is high. • Swapping is normally disabled but will start if many processes are running and are using a threshold amount of memory. • Swapping is halted again when the load on the system is reduced.
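The dispatcher's swapping decision can be sketched as a toy simulation (all names, the PID representation, and the victim-selection policy here are our assumptions, not the slide's):

```python
from collections import deque

def dispatch(ready_queue, in_memory, capacity):
    """Sketch of the dispatcher's decision described on the slide.

    ready_queue: deque of PIDs whose images are in memory or on the backing store.
    in_memory:   set of PIDs currently resident in main memory.
    capacity:    how many process images fit in memory at once (toy model).
    """
    pid = ready_queue.popleft()               # next process chosen by the scheduler
    if pid not in in_memory:                  # is the process in memory?
        if len(in_memory) >= capacity:        # no free memory region...
            victim = next(iter(in_memory))    # ...so swap out some resident process
            in_memory.remove(victim)
            ready_queue.append(victim)        # its image is now on the backing store
        in_memory.add(pid)                    # swap in the desired process
    return pid  # dispatcher then reloads registers and transfers control
```

For example, with memory holding only one image, dispatching process 1 while process 2 is resident swaps 2 out and 1 in.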

  24. Basic Memory Management Methods

  25. Basic Memory Management schemes >>>> These strategies can also be combined.

  26. Memory Management Strategy #(1): Contiguous Memory Allocation

  27. Contiguous Memory Allocation (CMA) • The memory is usually divided into two partitions: • One for the resident OS • One for the user processes • We can place the OS in either low memory or high memory. • Basic methodology: • Each process is contained in a single contiguous section of memory. • The CMA strategy can be applied in (2) methods: • Multiple-partition (/fixed-partition allocation) method • Variable-partition allocation method

  28. Memory Allocation • Method #(1): Multiple-partition method (/Fixed-partition method) • Main idea: divide memory into several fixed-sized partitions. Each partition may contain exactly one process. • In this multiple-partition method: • When a partition is free, a process is selected from the input queue and is loaded into the free partition. • When the process terminates, the partition becomes available for another process. • The degree of multiprogramming is bound by the number of partitions • One of the simplest methods for allocating memory • This method is called Multiprogramming with a Fixed number of Tasks (MFT) • This method is no longer in use

  29. Memory Allocation (cont.) • Method #(2): variable-partition method: • Memory is divided into variable-sized partitions • OS maintains a list of allocated / free partitions (holes) • When a process arrives, it is allocated memory from a hole large enough to accommodate it • Memory is allocated to processes until requirements of next process in queue cannot be met • OS may skip down the queue to allocate memory to a smaller process that fits in available memory • When process exits, memory is returned to the set of holes and merged with adjacent holes, if any • The method is a generalization of the fixed-partition scheme (called Multiprogramming with a Variable number of Tasks (MVT))

  30. Memory Allocation (cont.) • Example:

  31. Memory Allocation (cont.) • A new problem appears in the variable-partition method… (how to satisfy a request of size (n) from a list of free holes?) • There are many solutions to this problem: • First fit: • Allocate the first hole that is big enough. • Searching can start either at the beginning of the set of holes or where the previous first-fit search ended. We can stop searching as soon as we find a free hole that is large enough. • Best fit: • Allocate the smallest hole that is big enough. • We must search the entire list, unless the list is ordered by size. • This strategy produces the smallest leftover hole. • Worst fit: • Allocate the largest hole. • We must search the entire list, unless it is sorted by size. • This strategy produces the largest leftover hole, which may be more useful than the smaller leftover hole from a best-fit approach.

  32. Memory Allocation (cont.) • Exercise (8.16): Given five memory partitions of 100 KB, 500 KB, 200 KB, 300 KB, and 600 KB (in order), how would each of the first-fit, best-fit, and worst-fit algorithms place processes of 212 KB, 417 KB, 112 KB, and 426 KB (in order)? Which algorithm makes the most efficient use of memory? >>> Let P1, P2, P3 & P4 be the names of the processes; the remaining holes after each placement are: • First-fit: • P1 (212K) into 500 >>> 100, 288, 200, 300, 600 • P2 (417K) into 600 >>> 100, 288, 200, 300, 183 • P3 (112K) into 288 >>> 100, 176, 200, 300, 183 << final set of holes • P4 (426K) must wait • Best-fit: • P1 (212K) into 300 >>> 100, 500, 200, 88, 600 • P2 (417K) into 500 >>> 100, 83, 200, 88, 600 • P3 (112K) into 200 >>> 100, 83, 88, 88, 600 • P4 (426K) into 600 >>> 100, 83, 88, 88, 174 << final set of holes • Worst-fit: • P1 (212K) into 600 >>> 100, 500, 200, 300, 388 • P2 (417K) into 500 >>> 100, 83, 200, 300, 388 • P3 (112K) into 388 >>> 100, 83, 200, 300, 276 << final set of holes • P4 (426K) must wait >>> In this example, best-fit turns out to be the best because no process has to wait.
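The three placement strategies can be implemented in a few lines and run against the exercise's data (a minimal sketch; the function name and list representation are ours):

```python
def allocate(holes, requests, strategy):
    """Place each request into a hole using first-, best-, or worst-fit.

    holes:    list of free-hole sizes (mutated in place as memory is allocated).
    requests: process sizes, placed in order; a request that fits nowhere waits.
    Returns (holes, waiting).
    """
    waiting = []
    for size in requests:
        candidates = [i for i, h in enumerate(holes) if h >= size]
        if not candidates:
            waiting.append(size)               # process must wait
            continue
        if strategy == "first":
            i = candidates[0]                  # first hole big enough
        elif strategy == "best":
            i = min(candidates, key=lambda c: holes[c])   # smallest big-enough hole
        else:                                  # "worst"
            i = max(candidates, key=lambda c: holes[c])   # largest hole
        holes[i] -= size                       # leftover becomes a smaller hole
    return holes, waiting

partitions = [100, 500, 200, 300, 600]   # KB, from Exercise 8.16
processes = [212, 417, 112, 426]         # KB, in arrival order
```

Running all three strategies on `partitions[:]` reproduces the final hole sets shown above, with best-fit the only strategy that places all four processes.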

  33. Memory Allocation (cont.) • In general, which algorithm (first-fit, best-fit, or worst-fit) is better? • Both first fit and best fit are better than worst fit in terms of decreasing time and storage utilization. • Neither first fit nor best fit is clearly better than the other in terms of storage utilization, but first fit is generally faster (because it doesn't require a full list search or sorting)

  34. Fragmentation • Fragmentation is a problem that appears in memory allocation, concerning unusable memory space. • Fragmentation types: • External fragmentation: enough total memory space to satisfy a request is available, but it is not contiguous… (i.e. storage is fragmented into a large number of small holes) • Both the first-fit and best-fit strategies for memory allocation suffer from external fragmentation. • (1/3) of memory may be unusable! • Internal fragmentation: allocated memory may be larger than requested memory… (i.e. some memory within a partition may be left unused)

  35. Fragmentation (Cont.) • External fragmentation solutions: • Compaction: memory contents are shuffled to place all free memory together in one large block • Dynamic relocation (run-time binding) is needed • Compaction is expensive (high overhead) • Permit the logical address space of the processes to be non-contiguous, thus allowing a process to be allocated physical memory wherever it is available. • Two techniques achieve this solution: • Paging • Segmentation
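Compaction as described above can be sketched on a toy memory layout (the representation and function name are our assumptions: a list of (name, size) blocks where `None` marks a hole):

```python
def compact(blocks):
    """Compaction sketch: slide all allocated blocks together and merge the holes.

    blocks: list of (name, size) pairs in memory order; name None marks a hole.
    Returns the layout with all free memory combined into one large block.
    """
    allocated = [(n, s) for n, s in blocks if n is not None]
    free = sum(s for n, s in blocks if n is None)
    return allocated + [(None, free)]  # one large free block at the end

layout = [("A", 100), (None, 50), ("B", 200), (None, 30)]
```

Here two scattered holes of 50 and 30 become one 80-unit block, which is exactly why compaction requires run-time binding: blocks A and B end up at new addresses.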

  36. Fragmentation in the Contiguous memory allocation scheme • The fixed-partition method suffers from internal fragmentation • The variable-partition method suffers from external fragmentation

  37. Memory Management Strategy #(2): Paging

  38. Paging • Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous. • Because of its advantages, paging in its various forms is commonly used in most OSs. • Recent designs have implemented paging by closely integrating the hardware and OS, especially on 64-bit microprocessors. • The paging method is also used to manage the backing store (e.g. hard disk).

  39. Paging: Basic Method • The basic methodology: • Break physical memory into fixed-sized blocks called frames • Break logical memory into blocks called pages… where (frame size = page size) • When a process is to be executed, its pages are loaded into any available memory frames from the backing store. • The backing store is divided into fixed-sized blocks that are of the same size as the memory frames. • The OS keeps track of free frames. • A page table translates logical page numbers to physical frame addresses • Frame size: • Defined by the hardware • Should be a power of 2 • Usually between 512 bytes and 16 MB

  40. Paging: Basic Method (cont.) Figure 8.8 Paging model of logical and physical memory.

  41. Paging: Basic Method (cont.) • An address generated by the CPU is divided into: • Page number (p) – used as an index into a page table, which contains the corresponding frame number • Page offset (d) – the displacement within the page • Where: • m = number of bits in the logical address • n = number of bits in the offset part • m-n = number of bits in the page number part

  42. Paging: Basic Method (cont.) • Example: Given: n = 2 bits, m = 4 bits, page size = 4 B, physical mem. size = 32 B, # of frames = 8 >> Convert the logical address (13) to a physical address. [Figure: logical memory pages mapped through the page table to physical frames 0-7; the base address is the physical address of the first byte in the target frame] >> How to convert from a logical address to a physical address in paging? Physical address = base address + offset = (frame # * frame size) + offset

  43. Paging: Basic Method (cont.) • Example (cont.): Physical address = base address + offset = (frame # * frame size) + offset • Logical address = 13 (in decimal) = (1101) in binary • Page # = (11) = 3 • Offset = (01) = 1 • By using the page table >>> frame # = 2 • >>> Physical address of logical address (13) = (2 * 4) + 1 = 9 • Exercise: Convert the following logical addresses (given in decimal) to physical addresses for the same example: (0, 4, 10, 15)
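The page-number/offset split and translation above can be sketched in Python (the function name is ours; only the page-3 >>> frame-2 entry is given on the slide, so the other page-table entries below, taken from the textbook's Fig. 8.9, are an assumption):

```python
def translate(logical, page_table, n):
    """Translate a logical address, given n offset bits (page size = 2**n)."""
    page_size = 1 << n
    page, offset = divmod(logical, page_size)  # split into page # and offset
    frame = page_table[page]                   # look up the frame number
    return frame * page_size + offset          # base address + offset

# Assumed page table (index = page #, value = frame #); the slide only
# states that page 3 maps to frame 2.
page_table = [5, 6, 1, 2]
```

With n = 2, logical address 13 splits into page 3 and offset 1, giving physical address 2 * 4 + 1 = 9, as on the slide.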

  44. Paging Arithmetic Laws • Page size = frame size = 2^n • Logical address space (/size) = 2^m • Physical address space (/size) = 2^x (where x is the number of bits in a physical address) • Logical address space (/size) = # of pages × page size • Physical address space (/size) = # of frames × frame size • # of pages = 2^(m-n) • # of entries (records) in the page table = # of pages • Required number of pages for a process = process size / page size (rounded up) >> Conversion rule: Physical address = base address + offset = (frame # * frame size) + offset
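The laws above are easy to check numerically; here is a sketch with hypothetical parameters of our choosing (m = 16, n = 10 are not from the slides):

```python
# Hypothetical sizes chosen to exercise the paging arithmetic laws.
m, n = 16, 10                          # 16-bit logical address, 10-bit offset
page_size = 2 ** n                     # page size = frame size = 1024 B
num_pages = 2 ** (m - n)               # 64 pages = number of page-table entries
logical_space = num_pages * page_size  # equals 2**m = 65536 B, as the laws state
```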

  45. Paging: Basic Method (Cont.) • Frame allocation procedure: • When a process arrives in the system to be executed: • The process size (expressed in pages) is examined… (i.e. if the process requires (n) pages, at least (n) frames must be available in memory) • If (n) frames are available >>> they are allocated… then, when a page is loaded into a frame, its frame number is put into the page table for this process… and so on. Figure 8.10: Free frames (a) before allocation and (b) after allocation

  46. Paging: Basic Method (cont.) • The paging scheme has NO external fragmentation… but it has internal fragmentation • Example: If process (P1) size = 1030 B and page size = 512 B, how many pages will P1 occupy? • Required # of pages = 1030/512 ≈ 2.01, rounded up to 3 pages • Notice that the last page is almost empty… (506 bytes are allocated but unused)… (internal fragmentation)… what is the solution? • Making the page size small to reduce internal fragmentation is an unsuitable solution because: • The page table size will increase… (more memory consumption & more overhead) • Disk I/O is more efficient when the amount of data being transferred is larger • Generally, page sizes have grown over time as processes, data sets, and main memory have become larger.
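The internal-fragmentation arithmetic in this example can be sketched directly (the function name is ours):

```python
import math

def internal_fragmentation(process_size, page_size):
    """Bytes allocated but unused in the process's last page."""
    pages = math.ceil(process_size / page_size)  # pages required, rounded up
    return pages * page_size - process_size

# Slide's example: 1030 B process, 512 B pages -> 3 pages, 506 B wasted
waste = internal_fragmentation(1030, 512)
```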

  47. Paging: Basic Method (cont.) • Page table: • Part of the process control block (PCB) • Each process has one page table • Contains 1 entry per logical page • Used to translate logical addresses to physical addresses • The OS keeps track of physical memory allocation by using a frame table • Frame table: • Maintained by the OS • Contains 1 entry per physical frame • Indicates whether the frame is free or allocated • If allocated >>> it contains allocation information (process ID, page #)

  48. Paging hardware • The hardware implementation of the page table can be done in several ways. • Different methods to implement (/store) the page table (PT): • Method (1): Storing the PT in dedicated registers: • The simplest case • The page table is stored in a set of dedicated, high-speed registers • Instructions to load/modify the PT registers are privileged • An acceptable solution if the page table is small • Method (2): Storing the whole PT in RAM: • This method is used if the PT is large (common) • The PT is stored in main memory • The base address of the PT is stored in the page-table base register (PTBR) • A context switch involves changing (1) register only (i.e. fast) • Two physical memory accesses are needed per user memory access (one for the page-table entry, one for the byte) • Memory access is slowed by a factor of 2 >>> this delay is unacceptable!

  49. Paging hardware (Cont.) • Method (3): Use a cache (TLB): • Use a special, small, fast cache called a Translation Look-aside Buffer (TLB). • The search is fast but the hardware is expensive. • The TLB holds a subset of the page-table entries • Each entry in the TLB consists of two parts: a key (page #) and a value (frame #) • The input value (logical address (L.A.)) is compared with all keys simultaneously. • If the L.A. is found (/TLB hit) >> the corresponding value (frame #) is returned to calculate the physical address (P.A.) • If the L.A. is NOT found (/TLB miss) >> • Access the PT in RAM to get the frame # and then access RAM • The page # and frame # are added to the TLB, so that they will be found quickly on the next reference.
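The TLB hit/miss logic can be sketched with a dict standing in for the TLB's parallel key search (a software model only; real TLB lookup is a hardware associative search, and the names here are ours):

```python
def tlb_translate(page, tlb, page_table):
    """TLB lookup sketch: dict models the TLB, list models the in-RAM page table."""
    if page in tlb:              # TLB hit: frame # found without touching RAM
        return tlb[page], "hit"
    frame = page_table[page]     # TLB miss: extra access to the page table in RAM
    tlb[page] = frame            # cache the mapping for the next reference
    return frame, "miss"
```

For example, the first reference to page 3 misses and loads the entry; the second reference to the same page hits.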

  50. Paging hardware (cont.) Figure 8.11 :Paging hardware with TLB.
