
OPERATING SYSTEMS DESIGN AND IMPLEMENTATION Third Edition ANDREW S. TANENBAUM ALBERT S. WOODHULL




  1. Lecture 7, 29 October 2013 OPERATING SYSTEMS DESIGN AND IMPLEMENTATION, Third Edition, ANDREW S. TANENBAUM, ALBERT S. WOODHULL. Chap. 4 Memory Management: 4.1 Basic 4.2 Swapping 4.3 Paging

  2. 4. Memory Management • Ideally programmers want memory that is • large • fast • non-volatile • Memory hierarchy • small amount of fast, expensive memory: caches, ~1 MB • some medium-speed, medium-price memory: main memory, ~1 GB • gigabytes of slow, cheap disk storage: ~1 TB • The memory manager handles the memory hierarchy Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

  3. 4.1 Basic Memory Management, 4.1.1 Monoprogramming without Swapping or Paging Fig. 4.1 Three simple ways of organizing memory with an operating system and one user process Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

  4. 4.1.2 Multiprogramming with Fixed Partitions Fig. 4.2 Fixed memory partitions • separate input queues for each partition • single input queue (strategies: next convenient process, biggest process, don't ignore more than k times) Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

  5. 4.1.2 Multiprogramming with Fixed Partitions Two solutions: static relocation; dynamic relocation (with a base register) Fig. 3.2 Illustration of the relocation problem: (a) a 16-KB program, (b) another 16-KB program, (c) the two programs loaded consecutively into memory Tanenbaum, Modern Operating Systems, 3rd ed., (c) 2009, Prentice-Hall
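
The static-relocation idea above can be sketched in a few lines. This is an illustrative model, not actual loader code: the "program" is a list of words, and the relocation list naming which words hold absolute addresses is invented for the example.

```python
# Hypothetical sketch of static relocation: at load time, every word that
# holds an absolute address is patched by adding the load address.

def load_with_static_relocation(program, reloc_offsets, load_address):
    """Return a copy of `program` with each word listed in
    `reloc_offsets` incremented by `load_address`."""
    image = list(program)
    for off in reloc_offsets:
        image[off] += load_address      # patch the absolute address
    return image

# Two copies of a 4-word "program" whose word 3 holds the absolute
# address 2 (e.g. a jump target).
prog = [100, 0, 200, 2]
first = load_with_static_relocation(prog, [3], 0)    # loaded at address 0
second = load_with_static_relocation(prog, [3], 4)   # loaded at address 4
# first[3] -> 2, second[3] -> 6: each jump now targets its own copy
```

Without the patch, both copies would jump to address 2, which is the relocation problem the figure illustrates.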

  6. 4.1.3 Relocation and Protection • Relocation: the addresses of variables and code routines cannot be absolute; they have to be translated during: • loading (static relocation), or • execution (dynamic relocation, e.g. with a base register) • Protection: a process's addresses must not exceed its allocated memory partition: • Idea: use a limit register Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
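
The base-and-limit scheme can be modeled as a single check-then-add per memory reference; this is a sketch of the mechanism, not MINIX code, and the register values in the example are assumed.

```python
# Dynamic relocation with a base register plus a limit-register
# protection check, as done in hardware on every memory reference.

class ProtectionFault(Exception):
    """Models the trap taken when a process exceeds its partition."""
    pass

def translate(virtual_addr, base, limit):
    """Map a process-relative address to a physical one, or trap."""
    if not 0 <= virtual_addr < limit:
        raise ProtectionFault(f"address {virtual_addr} outside partition")
    return base + virtual_addr

# A process loaded at 16384 with a 16384-byte partition:
physical = translate(100, base=16384, limit=16384)   # -> 16484
```

The cost of the scheme is visible here: every reference pays for one comparison and one addition.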

  7. 4.1.3 Relocation and Protection : base and limit registers Fig. 3.3 Base and limit registers can be used to give each process a separate address space. Tanenbaum, Modern Operating Systems, 3rd ed., (c) 2009, Prentice-Hall

  8. 4.2 Swapping: memory allocation Fig. 4.3 Memory allocation changes as • processes come into memory • processes leave memory Shaded regions are unused memory Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

  9. 4.2 Swapping: memory allocation Fig. 4.4a Allocating space for a growing data segment Fig. 4.4b Allocating space for growing data & stack segments (garbage collection!) Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

  10. 4.2.1-2 Memory Management: bitmaps and linked lists Fig. 4.5a Part of memory with 5 processes, 3 holes • tick marks show the allocation units • shaded regions are free (0 in the bitmap; allocated units are 1) Fig. 4.5b The corresponding bitmap Fig. 4.5c The same information as a list Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
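
Bitmap-based allocation amounts to scanning for a run of k consecutive 0 bits; a minimal sketch, with the bitmap contents assumed for illustration:

```python
# Bitmap memory management: memory is divided into fixed allocation
# units; bit i is 1 if unit i is in use, 0 if it is free.

def find_hole(bitmap, k):
    """Return the index of the first run of k free (0) units, or -1."""
    run_start, run_len = 0, 0
    for i, bit in enumerate(bitmap):
        if bit == 0:
            if run_len == 0:
                run_start = i
            run_len += 1
            if run_len == k:
                return run_start
        else:
            run_len = 0
    return -1

def allocate(bitmap, k):
    """Mark the first sufficient hole as used; return its start or -1."""
    start = find_hole(bitmap, k)
    if start >= 0:
        for i in range(start, start + k):
            bitmap[i] = 1
    return start

mem = [1, 1, 0, 0, 0, 1, 0, 0]
start = allocate(mem, 3)     # -> 2: the first 3-unit hole starts at unit 2
```

The linear scan is exactly why the book notes that searching a bitmap for a hole of a given length is a slow operation.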

  11. 4.2.2 Memory Management: linked lists Fig. 4.6 Four neighbor combinations for the terminating process X Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

  12. 4.2.2 Memory Management: linked lists – algorithms • First fit: take the first hole with sufficient size • Next fit: same, but the search starts where the previous search ended • Best fit: choose the smallest hole that is sufficient • Worst fit: always choose the biggest hole, so the leftover hole stays big enough to be usable Improvements: separate lists for holes and used segments; the hole list can be sorted by size • Quick fit: separate lists for commonly requested sizes Unfortunately, none of these algorithms is fully satisfactory: too many tiny holes (e.g. best fit), not enough big holes (worst fit), … Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
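
The three placement policies above differ only in which sufficient hole they select; a sketch with the hole list represented as assumed `(start, size)` pairs:

```python
# First fit, best fit, and worst fit over a list of (start, size) holes.

def first_fit(holes, size):
    """Return the first hole big enough, or None."""
    return next((h for h in holes if h[1] >= size), None)

def best_fit(holes, size):
    """Return the smallest sufficient hole, or None."""
    fits = [h for h in holes if h[1] >= size]
    return min(fits, key=lambda h: h[1], default=None)

def worst_fit(holes, size):
    """Return the biggest sufficient hole, or None."""
    fits = [h for h in holes if h[1] >= size]
    return max(fits, key=lambda h: h[1], default=None)

holes = [(0, 5), (20, 12), (50, 7)]
first_fit(holes, 6)   # -> (20, 12): first hole big enough
best_fit(holes, 6)    # -> (50, 7): smallest sufficient hole
worst_fit(holes, 6)   # -> (20, 12): biggest hole
```

Note how best fit leaves only a 1-unit fragment of the (50, 7) hole behind, which illustrates the "too many tiny holes" objection on this slide.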

  13. 4.3 Virtual Memory, 4.3.1 Paging: MMU • Idea • Each program has its own address space, which is broken up into chunks called pages (typically 4 KB) • An MMU (memory management unit) maps each virtual address onto a physical address Fig. 4.7 The position and function of the MMU Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

  14. 4.3.1 Paging: virtual/physical addresses The relation between the virtual addresses and the physical memory addresses is given by the page table Fig. 4.8 Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
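
The translation can be sketched as splitting the virtual address into a page number and an offset, then replacing the page number with the frame number; the page table contents below are assumed, not those of Fig. 4.8.

```python
# MMU translation with 4-KB pages: the low 12 bits are the offset,
# the remaining bits select a virtual page.

PAGE_SIZE = 4096
page_table = {0: 2, 1: 1, 2: 6}      # virtual page -> page frame (assumed)

def mmu_translate(vaddr):
    """Map a virtual address to a physical address via the page table."""
    vpage, offset = divmod(vaddr, PAGE_SIZE)
    if vpage not in page_table:
        raise LookupError("page fault")     # page not present in memory
    return page_table[vpage] * PAGE_SIZE + offset

mmu_translate(8196)   # virtual page 2, offset 4 -> 6*4096 + 4 = 24580
```

The offset passes through unchanged; only the page number is rewritten, which is what makes the page size a power of two so convenient.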

  15. 4.3.1 Paging: page table • Purpose • map virtual pages onto page frames • Major issues • The page table can be extremely large • The mapping must be very fast Fig. 4.9 Internal operation of the MMU with 16 4-KB pages Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall

  16. 4.3.2 Page Tables: multilevel Purpose: reduce the page table size Fig. 4.10a A 32-bit address with two page table fields Fig. 4.10b Two-level page tables Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
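
With the 10+10+12 split of Fig. 4.10, each table has only 1024 entries, and second-level tables for unused regions need not exist at all. A sketch of the lookup (the table contents are assumed):

```python
# Two-level page table lookup for a 32-bit virtual address split into
# a 10-bit PT1 index, a 10-bit PT2 index, and a 12-bit offset.

def split_address(vaddr):
    pt1 = (vaddr >> 22) & 0x3FF      # top 10 bits: index into top table
    pt2 = (vaddr >> 12) & 0x3FF      # next 10 bits: index into 2nd level
    offset = vaddr & 0xFFF           # low 12 bits: offset within page
    return pt1, pt2, offset

def two_level_translate(vaddr, top_table):
    pt1, pt2, offset = split_address(vaddr)
    second = top_table.get(pt1)      # absent entry = whole region unmapped
    if second is None or pt2 not in second:
        raise LookupError("page fault")
    return second[pt2] * 4096 + offset

top = {1: {3: 8}}                    # assumed: a single page is present
two_level_translate((1 << 22) | (3 << 12) | 5, top)   # -> 8*4096 + 5
```

The size saving comes from the sparse top table: one entry per 4-MB region, instead of one page table entry per 4-KB page of the whole address space.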

  17. 4.3.2 Page Tables: structure of a page table entry Fig. 4.11 A typical page table entry Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
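
A page table entry packs the frame number together with status bits (present/absent, modified, referenced, caching disabled, protection). The bit positions below are assumed for illustration and do not match any particular hardware:

```python
# Packing and unpacking a page table entry as a single integer.

FRAME_SHIFT       = 4        # frame number occupies the high bits
PRESENT_BIT       = 1 << 0   # present/absent
MODIFIED_BIT      = 1 << 1   # "dirty": page was written since load
REFERENCED_BIT    = 1 << 2   # set on any access; used by replacement
CACHE_DISABLE_BIT = 1 << 3   # for pages mapping device registers

def make_pte(frame, present=True, modified=False):
    entry = frame << FRAME_SHIFT
    if present:
        entry |= PRESENT_BIT
    if modified:
        entry |= MODIFIED_BIT
    return entry

def pte_frame(entry):
    return entry >> FRAME_SHIFT

pte = make_pte(frame=6, modified=True)
pte_frame(pte)            # -> 6
bool(pte & PRESENT_BIT)   # -> True
```

The present/absent bit is the one the MMU tests first: if it is 0, the reference causes a page fault rather than a translation.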

  18. 4.3.3 TLBs – Translation Lookaside Buffers (associative memory) Fig. 4.12 A TLB to speed up paging A hardware solution to speed up paging: all TLB entries are checked simultaneously! Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
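
A behavioral sketch of the TLB check (contents assumed): hardware compares the virtual page against all entries at once; the sequential loop below models only the outcome, not the parallelism.

```python
# TLB consulted before the page table: hit -> frame directly;
# miss -> fall back to a full page table walk.

tlb = [(19, 3, True), (20, 4, True), (130, 8, True)]  # (vpage, frame, valid)
page_table = {n: n % 64 for n in range(256)}          # assumed full table

def lookup(vpage):
    """Return (frame, hit): the frame number and whether the TLB hit."""
    for entry_vpage, frame, valid in tlb:
        if valid and entry_vpage == vpage:
            return frame, True           # TLB hit: no memory reference
    return page_table[vpage], False      # TLB miss: walk the page table

lookup(20)    # -> (4, True): hit
lookup(21)    # -> (21, False): miss, resolved from the page table
```

On a real miss the hardware (or, on some machines, the OS) also loads the new translation into the TLB, evicting an existing entry; that replacement step is omitted here.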

  19. 4.3.4 Inverted Page Tables 2^52 is the number of entries for a 2^64-byte address space with 4-KB pages Fig. 4.13 Comparison of a traditional page table with an inverted page table Idea of the inverted page table: there is only one entry per page frame in real memory Tanenbaum & Woodhull, Operating Systems: Design and Implementation, (c) 2006 Prentice-Hall
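
Since the inverted table is indexed by frame, not by virtual page, finding the frame for a given (process, virtual page) needs a search, usually accelerated by a hash on the virtual page number. A sketch with all sizes and contents assumed:

```python
# Inverted page table: one entry per physical page frame, searched via
# a hash chain keyed on the virtual page number.

NUM_FRAMES = 8
# frame -> (process id, virtual page) currently stored in that frame
inverted = {0: (7, 19), 1: (7, 20), 2: (3, 19)}

# Hash table: bucket(vpage) -> list of candidate frames
buckets = {}
for frame, (pid, vpage) in inverted.items():
    buckets.setdefault(vpage % NUM_FRAMES, []).append(frame)

def find_frame(pid, vpage):
    """Return the frame holding (pid, vpage), or raise a page fault."""
    for frame in buckets.get(vpage % NUM_FRAMES, []):
        if inverted[frame] == (pid, vpage):
            return frame
    raise LookupError("page fault")

find_frame(7, 20)   # -> 1; pages 19 of pids 7 and 3 share a bucket chain
```

With a good hash, the chain is one entry long on average, so the table stays proportional to physical memory (2^64-byte address spaces notwithstanding) while lookups remain fast.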
