
CHAPTER 9: MEMORY MANAGEMENT ( 内存管理 )




  1. CHAPTER 9: MEMORY MANAGEMENT (内存管理) • Background • Swapping • Contiguous Allocation • Paging • Segmentation • Segmentation with Paging

  2. BACKGROUND • Sharing → Management • CPU sharing • Memory sharing • Memory management • Actual memory management • Contiguous partition allocation • Paging • Segmentation • Virtual memory management • Demand paging • Demand segmentation

  3. Background: Logical vs. Physical Address Space • Program must be brought into memory and placed within a process for it to be run. • CPU → Memory • Logical address (逻辑地址) and Physical address (物理地址) • Logical address (逻辑地址) – generated by the CPU; also referred to as virtual address. • Physical address (物理地址) – address seen by the memory unit. • Logical and physical addresses are the same in compile-time and load-time address-binding schemes; logical (virtual) and physical addresses differ in the execution-time address-binding scheme.

  4. Background: Memory-Management Unit (MMU) • What is seen is not always true. • Hardware device that maps virtual to physical address. • In MMU scheme, the value in the relocation register is added to every address generated by a user process at the time it is sent to memory. • The user program deals with logical addresses; it never sees the real physical addresses.

  5. Background:Dynamic relocation using a relocation register

  6. Background : Multistep Processing of a User Program

  7. Background: Binding of Instructions and Data to Memory Address binding of instructions and data to memory addresses can happen at three different stages. • Compile time: If you know at compile time where the process will reside in memory, then absolute code can be generated; if the starting location changes, the code must be recompiled. • Load time: If it is not known at compile time where the process will reside in memory, then the compiler must generate relocatable code; final binding is delayed until load time. • Execution time: If the process can be moved during its execution from one memory segment to another, then binding must be delayed until run time. This requires hardware support for address maps (e.g., base and limit registers).

  8. Background: Dynamic Loading (vs. static loading) • Should the entire program and data of a process be in physical memory for the process to execute? • With dynamic loading, a routine is not loaded until it is called. • Discussion • Better memory-space utilization; an unused routine is never loaded. • Useful when large amounts of code are needed to handle infrequently occurring cases. • No special support from the operating system is required. • The OS can help by providing library routines to implement dynamic loading.

  9. Background: Dynamic Linking (vs. static linking) • Static linking (静态连接) → dynamic linking (动态连接) • Linking postponed until execution time. • Small piece of code, stub (存根), used to locate the appropriate memory-resident library routine. • Stub replaces itself with the address of the routine, and executes the routine. • Operating system needed to check if the routine is in the process's memory address space. • Dynamic linking is particularly useful for libraries.

  10. Background: Overlays(覆盖 ) • Keep in memory only those instructions and data that are needed at any given time. • Needed when process is larger than amount of memory allocated to it. • Implemented by user, no special support needed from operating system, programming design of overlay structure is complex.

  11. Background: Overlays for a Two-Pass Assembler

  12. SWAPPING(交换) • A process can be swapped temporarily out of memory to a backing store(备份仓库), and then brought back into memory for continued execution. • Backing store – fast disk large enough to accommodate the copies of all memory images for all users; must provide direct access to these memory images. • Roll out, roll in– swapping variant used for priority-based scheduling algorithms; lower-priority process is swapped out so higher-priority process can be loaded and executed.

  13. Swapping: Schematic View

  14. Swapping: Backing store • Swap file • Swap device • Swap device and swap file

  15. Swapping: Performance • Major part of swap time is transfer time; total transfer time is directly proportional to the amount of memory swapped. • e.g., swapping a 1000 KB process at a transfer rate of 5000 KB/s: 1000 KB / 5000 KB/s = 1/5 second = 200 ms • Modified versions of swapping are found on many systems, e.g., UNIX, Linux, and Windows. • UNIX • Windows 3.1
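The transfer-time arithmetic on the slide can be checked directly; a minimal sketch (the 1000 KB process size and 5000 KB/s rate are the slide's example values):

```python
def swap_transfer_ms(size_kb: float, rate_kb_per_s: float) -> float:
    """Transfer time in milliseconds to move `size_kb` to or from the backing store."""
    return size_kb / rate_kb_per_s * 1000

# A 1000 KB process at 5000 KB/s, as on the slide (one direction only;
# a full swap out plus swap in doubles the figure):
print(swap_transfer_ms(1000, 5000))  # 200.0
```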

  16. CONTIGUOUS MEMORY ALLOCATION • The main memory must accommodate • both the operating system • and the various user processes. • The main memory is usually divided into two partitions: • one for the resident operating system, usually held in low memory with the interrupt vector; • the other for user processes, held in high memory. • For contiguous memory allocation, each process is contained in a single contiguous section of memory.

  17. Contiguous Memory Allocation: Memory protection • Why protect memory? • To protect the OS from user processes, • and to protect user processes from each other. • How to protect memory? • Use relocation registers and limit registers to protect user processes from each other, and from changing operating-system code and data. • The relocation register contains the value of the smallest physical address; the limit register contains the range of logical addresses – each logical address must be less than the limit register.
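The hardware check described above can be sketched in a few lines; the register values in the example are illustrative, not from the slide:

```python
def translate(logical: int, relocation: int, limit: int) -> int:
    """MMU with relocation and limit registers: a logical address must be
    below the limit register, or the hardware traps to the OS; otherwise
    the relocation register is added to form the physical address."""
    if logical >= limit:
        raise MemoryError("addressing error: trap to operating system")
    return relocation + logical

# Illustrative values: process loaded at physical 100040, partition size 12090.
print(translate(42, relocation=100040, limit=12090))  # 100082
```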

  18. Contiguous Memory Allocation: Hardware Support for Relocation and Limit Registers

  19. Contiguous Memory Allocation: Types • Fixed-sized contiguous partition • Dynamic contiguous partition • Buddy system

  20. Contiguous Memory Allocation: Fixed-Sized Contiguous Partitions • Main memory is divided into a number of fixed partitions at system generation time. A process may be loaded into a partition of equal or greater size. • Equal-size partitions • Unequal-size partitions • Equal-size partitions: • A program may be too big to fit into a partition. • Main-memory utilization is extremely inefficient.

  21. Contiguous Memory Allocation: Fixed-Sized Contiguous Partitions • Unequal-size partitions: • Placement algorithms for unequal-size partitions • best-fit • first-fit • worst-fit • Conclusion for Fixed-Sized Contiguous Partitions • Strengths: simple to implement; little overhead • Weaknesses: internal fragmentation; fixed number of processes

  22. Contiguous Memory Allocation: Dynamic Contiguous Partitions • Partitions are created dynamically, so that each process is loaded into a partition of exactly the same size as that process. • The OS maintains information about: a) allocated partitions, b) free partitions (holes) • When a process arrives, find a hole for it. • When a process terminates, return the memory and merge adjacent holes. • Holes of various sizes are scattered throughout memory.

  23. Contiguous Memory Allocation: Dynamic Contiguous Partitions [Figure: four memory snapshots, each with the OS in low memory and processes 5 and 2 resident; process 8 is swapped out, leaving a hole, and processes 9 and 10 are then loaded into the freed space.]

  24. Contiguous Memory Allocation: Dynamic Contiguous Partitions • Placement algorithms: first-fit, best-fit, worst-fit. • First-fit: Allocate the first hole that is big enough. • Best-fit: Allocate the smallest hole that is big enough; must search entire list, unless ordered by size. Produces the smallest leftover hole. • Worst-fit: Allocate the largest hole; must also search entire list. Produces the largest leftover hole. • Replacement algorithms: • LRU, • FIFO, • etc.
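The three placement algorithms can be sketched over a list of hole sizes; the hole list and request size below are illustrative (sizes in KB):

```python
def first_fit(holes, request):
    """Index of the first hole large enough, or None."""
    for i, h in enumerate(holes):
        if h >= request:
            return i
    return None

def best_fit(holes, request):
    """Index of the smallest hole large enough (smallest leftover), or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return min(fits)[1] if fits else None

def worst_fit(holes, request):
    """Index of the largest hole (largest leftover), or None."""
    fits = [(h, i) for i, h in enumerate(holes) if h >= request]
    return max(fits)[1] if fits else None

holes = [100, 500, 200, 300, 600]   # illustrative free-hole sizes, in KB
print(first_fit(holes, 212))  # 1: the 500 KB hole is the first that fits
print(best_fit(holes, 212))   # 3: the 300 KB hole leaves the smallest remainder
print(worst_fit(holes, 212))  # 4: the 600 KB hole leaves the largest remainder
```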

  25. Contiguous Memory Allocation: Dynamic Contiguous Partitions • Partitions are created dynamically, so that each process is loaded into a partition of exactly the same size as that process. • Dynamic Partitioning • Strengths: • no internal fragmentation; • more efficient use of main memory. • Weaknesses: • external fragmentation; • compaction is needed to reclaim it, which adds overhead.

  26. Contiguous Memory Allocation: Fragmentation • External Fragmentation– total memory space exists to satisfy a request, but it is not contiguous. • Internal Fragmentation– allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used. • Reduce external fragmentation by compaction • Shuffle memory contents to place all free memory together in one large block. • Compaction is possible only if relocation is dynamic, and is done at execution time. • I/O problem: batch job in memory while it is involved in I/O. Do I/O only into OS buffers.

  27. Contiguous Memory Allocation: Buddy System

  28. Contiguous Memory Allocation: Buddy System

  29. Contiguous Memory Allocation: Buddy System • Easy to partition • Easy to combine • The buddy system is used for Linux kernel memory allocation.
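The split-and-combine idea behind the buddy system can be sketched with two helpers, assuming power-of-two block sizes (the addresses in the example are illustrative):

```python
def round_up_pow2(n: int) -> int:
    """Smallest power of two >= n: the block size a buddy allocator actually hands out."""
    size = 1
    while size < n:
        size <<= 1
    return size

def buddy_of(addr: int, size: int) -> int:
    """Address of the buddy of the block at `addr` with power-of-two `size`.
    Flipping the bit that distinguishes the two halves of the parent block
    yields the sibling; when both buddies are free they can be recombined."""
    return addr ^ size

print(round_up_pow2(45))  # 64: a 45 KB request is satisfied from a 64 KB block
print(buddy_of(0, 64))    # 64: the blocks at 0 and 64 are buddies
print(buddy_of(128, 64))  # 192: likewise 128 and 192
```

This easy "flip one bit" relation is what makes both partitioning and recombining cheap, which is why the slide notes its use in Linux kernel memory allocation.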

  30. PAGING (分页) • Contiguous memory allocation → external fragmentation • Noncontiguous memory allocation → no external fragmentation. • Paging: • divide the process into equal-sized pages (physical division). • Segmentation: • divide the process into unequal-sized segments (logical division). • Paging + segmentation

  31. Paging • Paging • Divide physical memory into fixed-sized blocks called frames (帧). • Divide logical memory into blocks of the same size called pages (页); the page size is equal to the frame size. • Divide the backing store into blocks of the same size called clusters (块、簇). • To run a program of size n pages: • find n free frames, • then load the program.

  32. Paging How to translate a logical address to a physical address • Logical address = (page number 页码, page offset 页偏移) • Page number (p) – used as an index into a page table which contains the base address of each page in physical memory. • Page offset (d) – combined with the base address to define the physical memory address that is sent to the memory unit. • Page table (页表) (the page table contains the base address of each page in physical memory) • Physical address = (frame number 帧码, page offset 页偏移)
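The lookup just described can be sketched directly; the page-table contents below are illustrative (a four-page process with 4-byte pages):

```python
def logical_to_physical(addr: int, page_table: list, page_size: int) -> int:
    """Split a logical address into (page number p, page offset d),
    look up the frame number for p in the page table, and recombine."""
    p, d = divmod(addr, page_size)   # p = addr // page_size, d = addr % page_size
    frame = page_table[p]            # frame holding page p
    return frame * page_size + d

page_table = [5, 6, 1, 2]            # illustrative: page p -> frame
print(logical_to_physical(13, page_table, page_size=4))  # 9: page 3, offset 1 -> frame 2
print(logical_to_physical(0, page_table, page_size=4))   # 20: page 0, offset 0 -> frame 5
```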

  33. Paging

  34. Paging

  35. Paging: Page size • How to partition logical addresses? • The page size is selected as a power of 2. • An m-bit logical address = an (m-n)-bit page number + an n-bit page offset, for a page size of 2^n. • Possible page sizes: • 512 B to 16 MB
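Because the page size is a power of two, splitting the address costs only a shift and a mask, with no division; a sketch with n = 12 (4 KB pages, an assumed value for illustration):

```python
PAGE_BITS = 12                # n: 4 KB pages (illustrative choice)
PAGE_SIZE = 1 << PAGE_BITS
OFFSET_MASK = PAGE_SIZE - 1

def split(addr: int) -> tuple:
    """(page number, page offset): the high m-n bits and the low n bits."""
    return addr >> PAGE_BITS, addr & OFFSET_MASK

print(split(0x12345))         # (18, 837): page 0x12, offset 0x345
```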

  36. Paging: Page size

  37. Paging: Page size • Logical address 0 → (5x4)+0 physical address • Logical address 1 → (5x4)+1 physical address • Logical address 2 → (5x4)+2 physical address • Logical address 3 → (5x4)+3 physical address • Logical address 4 → (6x4)+0 physical address • Logical address 5 → (6x4)+1 physical address • Logical address 13 → (2x4)+1 physical address

  38. Paging: Page size • How to select page sizes? • Page table size • Internal fragmentation • Disk I/O • Generally, page sizes have grown over time as processes, data, and main memory have become larger.

  39. Paging:The OS Concern • What should the OS do? • Which frames are allocated? • Which frames are available? • How to allocate frames for a newly arrived process? • Placement algorithm (放置算法) • Replacement algorithms (替换算法)

  40. Paging:The OS Concern

  41. Paging: Implementation of Page Table • How to implement the page table • Small, fast registers • Main memory • Main memory + Translation look-aside buffer (联想存储器)

  42. Paging: Implementation of Page Table • Page table is kept in main memory. • Page-table base register (PTBR) points to the page table. • In this scheme every data/instruction access requires two memory accesses: • one for the page table and • one for the data/instruction.

  43. Paging: Implementation of Page Table • Page-table length register (PTLR) indicates the size of the page table. • The two-memory-access problem can be solved by the use of a special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB). • Associative memory – parallel search

  44. Paging: Implementation of Page Table : Paging Hardware With TLB

  45. Paging: Implementation of Page Table: Paging Hardware With TLB • Effective Access Time (EAT), assuming a 20 ns TLB search and a 100 ns memory access (120 ns on a TLB hit; 220 ns on a miss, which needs the extra page-table access): hit ratio = 0.80 → EAT = 0.80x120 + 0.20x220 = 140 ns hit ratio = 0.98 → EAT = 0.98x120 + 0.02x220 = 122 ns
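The EAT figures follow from the hit and miss costs; a sketch using the 20 ns TLB-search and 100 ns memory-access timings implied by the 120/220 ns numbers on the slide:

```python
def eat(hit_ratio: float, tlb_ns: float = 20, mem_ns: float = 100) -> float:
    """Effective access time with a TLB.
    Hit: TLB search + one memory access; miss: TLB search + page-table
    access + data access (two memory accesses)."""
    hit_time = tlb_ns + mem_ns        # 120 ns with the slide's timings
    miss_time = tlb_ns + 2 * mem_ns   # 220 ns with the slide's timings
    return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

print(eat(0.80))  # ~140 ns
print(eat(0.98))  # ~122 ns: a better hit ratio pushes EAT toward the hit time
```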

  46. Paging: Memory Protection • Memory protection implemented by associating protection bits with each frame: • read-write or read-only, • or read-write, read-only, execute-only, etc. • Valid-invalid bit attached to each entry in the page table: • "valid" indicates that the associated page is in the process's logical address space, and is thus a legal page. • "invalid" indicates that the page is not in the process's logical address space.

  47. Paging: Memory Protection: Valid (v) or Invalid (i) Bit In A Page Table A system: 14-bit address space (0-16383); a process: uses only addresses 0-10468

  48. Paging: Page Table Structure • The page table can be large: for a 32-bit CPU with 4 KB pages, 2^32 / 2^12 = 1M entries = 4 MB (at 4 bytes per entry). The solutions: • Hierarchical Paging • Hashed Page Tables • Inverted Page Tables
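The size arithmetic on the slide, checked directly (32-bit address space, 4 KB pages; the 4-byte entry size is the slide's assumption):

```python
address_bits = 32
page_size = 4 * 1024                  # 4 KB pages -> 12 offset bits
entry_bytes = 4                       # assumed size of one page-table entry

entries = 2**address_bits // page_size
table_bytes = entries * entry_bytes

print(entries)                        # 1048576 entries (1M = 2^20)
print(table_bytes // (1024 * 1024))   # 4 MB per process -- motivating multi-level tables
```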

  49. Paging: Page Table Structure : Hierarchical Page Tables • Break up the logical address space into multiple page tables. • A simple technique is a two-level page table.

  50. Paging: Page Table Structure: Hierarchical Page Tables • A logical address (on a 32-bit machine with 4 KB page size) is divided into: • a page number consisting of 20 bits, • a page offset consisting of 12 bits. • Since the page table is itself paged, the page number is further divided into: • a 10-bit outer page number (an index into the outer page table), • a 10-bit inner page number (an index within a page of the page table). • Linux uses three-level paging.
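The 10 + 10 + 12 split above can be sketched with shifts and masks (the example address is arbitrary):

```python
def split_two_level(addr: int) -> tuple:
    """Split a 32-bit logical address into (outer page number p1,
    inner page number p2, page offset d) using a 10/10/12-bit layout."""
    d = addr & 0xFFF              # low 12 bits: page offset
    p2 = (addr >> 12) & 0x3FF     # next 10 bits: index into an inner page table
    p1 = addr >> 22               # high 10 bits: index into the outer page table
    return p1, p2, d

p1, p2, d = split_two_level(0xDEADBEEF)
print((p1, p2, d))                             # (890, 731, 3823)
print(hex((p1 << 22) | (p2 << 12) | d))        # 0xdeadbeef: the pieces recombine exactly
```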
