
OMSE 510: Computing Foundations 8: The Address Space

Presentation Transcript


  1. OMSE 510: Computing Foundations 8: The Address Space Chris Gilmore <grimjack@cs.pdx.edu> Portland State University/OMSE Material Borrowed from Jon Walpole’s lectures

  2. Today • Memory Management • Virtual/Physical Address Translation • Page Tables • MMU, TLB

  3. Memory management • Memory – a linear array of bytes • Holds O.S. and programs (processes) • Each memory cell is named by a unique memory address • Recall, processes are defined by an address space, consisting of text, data, and stack regions • Process execution • CPU fetches instructions from the text region according to the value of the program counter (PC) • Each instruction may request additional operands from the data or stack region

  4. Virtual memory management overview • What do we know about memory management? • Processes require memory to run • We provide the appearance that the entire process is resident during execution • We know some functions/code in processes never get invoked • Error detection and recovery routines • In a graphics package, functions like smooth, sharpen, brighten, etc. may not get invoked • Virtual Memory - allows for the execution of processes that may not be completely in memory (an extension of the paging technique from the last chapter)

  5. Virtual memory overview Goals: • Hides physical memory from the user • Allows a higher degree of multiprogramming (only bring in pages that are accessed) • Allows large processes to run on small amounts of physical memory • Reduces the I/O required to swap processes in/out (makes the system faster) • Requires: • Pager - pages in/out pages as required • “Swap” space to hold the parts of processes that are not resident in memory • Hardware support for address translation

  6. Addressing memory • Cannot know ahead of time where in memory a program will be loaded! • Compiler produces code containing embedded addresses • These addresses can’t be absolute (physical) addresses • Linker combines pieces of the program • Assumes the program will be loaded at address 0 • We need to bind the compiler/linker generated addresses to the actual memory locations

  7. Relocatable address generation [Diagram: routine Prog P, which calls foo(), passes through Compilation, Assembly, Linking, and Loading. The call is emitted as “jmp _foo”, becomes “jmp 75” in the assembled module (foo at offset 75), “jmp 175” after linking with the library routines (foo now at 175), and “jmp 1175” once the program is loaded at address 1000.]

  8. Address binding • Address binding • fixing a physical address to the logical address of a process’ address space • Compile time binding • if program location is fixed and known ahead of time • Load time binding • if program location in memory is unknown until run-time AND location is fixed • Execution time binding • if processes can be moved in memory during execution • Requires hardware support!

  9. [Diagram: compile time vs. load time vs. execution time address binding. With compile time binding the code contains “jmp 1175” and the program must be loaded at address 1000. With load time binding the code contains “jmp 175” and the addresses are rewritten when the program is loaded at 1000. With execution time binding the code keeps “jmp 175” and a base register holding 1000 relocates every reference at run time.]

  10. Memory management architectures • Fixed size allocation • Memory is divided into fixed partitions • Dynamically sized allocation • Memory allocated to fit processes exactly

  11. Runtime binding – base & limit registers • Simple runtime relocation scheme • Use 2 registers to describe a partition • For every address generated, at runtime... • Compare to the limit register (& abort if larger) • Add to the base register to give physical memory address

  12. Dynamic relocation with a base register • Memory Management Unit (MMU) - dynamically converts logical addresses into physical addresses • MMU contains the base address (relocation) register for the running process [Diagram: each program-generated address is added to process i’s relocation register (1000) in the MMU to produce the physical memory address; the operating system occupies the bottom of physical memory.]

  13. Protection using base & limit registers • Memory protection • Base register gives the starting address for the process • Limit register limits the offset accessible from the relocation (base) register [Diagram: the logical address is compared with the limit register; if it is smaller, it is added to the base register to give the physical memory address, otherwise an addressing error is raised (see the sketch below).]
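The check above can be made concrete with a small C sketch (the register struct and the fault handling are illustrative, not from the slides): every program-generated address is compared with the limit and, if it passes, relocated by the base.

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical MMU state for the currently running process. */
    typedef struct {
        uint32_t base;    /* start of the process's partition in physical memory */
        uint32_t limit;   /* size of the partition in bytes */
    } mmu_t;

    /* Translate a logical (program-generated) address to a physical address.
       The reference is checked against the limit before the base is added. */
    uint32_t translate(const mmu_t *mmu, uint32_t logical) {
        if (logical >= mmu->limit) {
            fprintf(stderr, "addressing error at %u\n", (unsigned)logical);
            exit(EXIT_FAILURE);   /* real hardware raises a trap instead */
        }
        return mmu->base + logical;
    }

    int main(void) {
        mmu_t mmu = { .base = 1000, .limit = 176 };                 /* loaded at 1000 */
        printf("jmp 175 -> %u\n", (unsigned)translate(&mmu, 175));  /* prints 1175 */
        return 0;
    }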

  14. Multiprogramming with base and limit registers • Multiprogramming: a separate partition per process • What happens on a context switch? • Store process A’s base and limit register values • Load new values into the base and limit registers for process B [Diagram: physical memory holding the OS and partitions A through E, with the base and limit registers describing the running process’s partition.]
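A minimal sketch of that context-switch step in C, assuming a per-process control block that remembers the partition (the pcb_t fields and the global "registers" are made up for illustration):

    #include <stdint.h>

    /* Hypothetical per-process control block holding that process's partition. */
    typedef struct {
        uint32_t base;
        uint32_t limit;
    } pcb_t;

    /* The single pair of hardware relocation registers, modeled as globals. */
    uint32_t mmu_base, mmu_limit;

    /* Context switch: save process A's base/limit, load process B's. */
    void switch_partition(pcb_t *a, const pcb_t *b) {
        a->base   = mmu_base;
        a->limit  = mmu_limit;
        mmu_base  = b->base;
        mmu_limit = b->limit;
    }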

  15.-23. [Diagram sequence: allocation and swapping over time in a 1MB machine. Initially only the 128K O.S. is resident, leaving 896K free. P1 (320K), P2 (224K), and P3 (288K) are loaded in turn, leaving a 64K hole. P2 is swapped out; P4 (128K) is loaded into part of P2’s old space, leaving a 96K hole; P1 is swapped out; P5 (224K) is loaded into P1’s old space, leaving another 96K hole. When P6 (128K) arrives, the free space (64K + 96K + 96K = 256K) is more than enough in total, but no single hole is large enough to hold it.]

  24. Swapping • When a program is running... • The entire program must be in memory • Each program is put into a single partition • When the program is not running... • May remain resident in memory • May get “swapped” out to disk • Over time... • Programs come into memory when they get swapped in • Programs leave memory when they get swapped out

  25. Basics - swapping • Benefits of swapping: • Allows multiple programs to be run concurrently • … more than will fit in memory at once [Diagram: physical memory holds the operating system and processes i, j, and k; processes are swapped in from disk and swapped out to disk as needed.]

  26. Swapping can also lead to fragmentation

  27. Dealing with fragmentation • Compaction – from time to time shift processes around to collect all free space into one contiguous block • Placement algorithms: first-fit, best-fit, worst-fit [Diagram: before compaction the free space is scattered in 64K and 96K holes and P6 (128K) cannot be placed; after compaction P3, P4, and P5 sit contiguously above the O.S., the free space forms a single 256K block, and P6 fits.]
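As a rough illustration of compaction, the sketch below slides every in-use partition down so the holes coalesce at the top; the partition table layout is assumed, and a real system would also have to update each moved process's base register.

    #include <stdint.h>
    #include <string.h>

    typedef struct {
        uint32_t start;    /* physical start address of the partition */
        uint32_t size;     /* size in bytes */
        int      in_use;   /* 1 = holds a process, 0 = hole */
    } partition_t;

    /* Shift all in-use partitions toward 'floor' (the first address above the
       O.S.), leaving one contiguous free block at the top. Assumes the table
       is sorted by start address and 'mem' points at managed physical memory. */
    void compact(uint8_t *mem, partition_t table[], int n, uint32_t floor) {
        uint32_t next = floor;
        for (int i = 0; i < n; i++) {
            if (!table[i].in_use)
                continue;
            if (table[i].start != next) {
                memmove(mem + next, mem + table[i].start, table[i].size);
                table[i].start = next;   /* the owner's base register changes too */
            }
            next += table[i].size;
        }
    }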

  28. Influence of allocation policy

  29. How big should partitions be? • Programs may want to grow during execution • More room for stack, heap allocation, etc • Problem: • If the partition is too small programs must be moved • Requires modification of base and limit regs • Why not make the partitions a little larger than necessary to accommodate “some” growth? • Fragmentation: • External fragmentation = unused space between partitions • Internal fragmentation = unused space within partitions

  30. Allocating extra space within partitions

  31. Managing memory • Each chunk of memory is either • Used by some process or unused (“free”) • Operations • Allocate a chunk of unused memory big enough to hold a new process • Free a chunk of memory by returning it to the free pool after a process terminates or is swapped out

  32. Managing memory with bit maps • Problem - how to keep track of used and unused memory? • Technique 1 - Bit Maps • A long bit string • One bit for every chunk of memory (1 = in use, 0 = free) • Size of allocation unit influences space required • Example: unit size = 32 bits • overhead for bit map: 1/33 = 3% • Example: unit size = 4Kbytes • overhead for bit map: 1/32,769
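A small C sketch of the bitmap technique, assuming one bit per allocation unit (the helper names and sizes here are illustrative):

    #include <stdint.h>

    #define UNITS 1024                     /* number of allocation units managed */
    static uint8_t bitmap[UNITS / 8];      /* one bit per unit: 1 = in use, 0 = free */

    static int  test_bit(int u)  { return (bitmap[u / 8] >> (u % 8)) & 1; }
    static void set_bit(int u)   { bitmap[u / 8] |=  (uint8_t)(1u << (u % 8)); }
    static void clear_bit(int u) { bitmap[u / 8] &= (uint8_t)~(1u << (u % 8)); }

    /* Find 'count' consecutive free units and mark them used.
       Returns the first unit number, or -1 if no free run is long enough. */
    int bitmap_alloc(int count) {
        int run = 0;
        for (int u = 0; u < UNITS; u++) {
            run = test_bit(u) ? 0 : run + 1;
            if (run == count) {
                int first = u - count + 1;
                for (int i = first; i <= u; i++) set_bit(i);
                return first;
            }
        }
        return -1;
    }

    /* Free a previously allocated run of units. */
    void bitmap_free(int first, int count) {
        for (int i = first; i < first + count; i++) clear_bit(i);
    }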

  33. Managing memory with bit maps

  34. Managing memory with linked lists • Technique 2 - Linked List • Keep a list of elements • Each element describes one unit of memory • Free / in-use Bit (“P=process, H=hole”) • Starting address • Length • Pointer to next element
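One way to write that list element in C (field names are illustrative; the later sketches reuse this type):

    /* One segment of memory: either a process's allocation (P) or a hole (H). */
    typedef struct segment {
        int             is_process;   /* 1 = P (in use), 0 = H (hole) */
        unsigned long   start;        /* starting address of the segment */
        unsigned long   length;       /* length in allocation units */
        struct segment *next;         /* next segment, kept in address order */
    } segment_t;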

  35. Managing memory with linked lists [Diagram: an example memory layout and the corresponding list of process/hole segments.]

  36. Merging holes • Whenever a unit of memory is freed we want to merge adjacent holes!
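A sketch of that coalescing step, reusing the segment_t node from the sketch above: when a segment is freed it becomes a hole and is merged with a free neighbor on either side.

    #include <stdlib.h>

    /* Free the segment 'seg'; 'prev' is the node before it in the address-ordered
       list (NULL if seg is first). Adjacent holes are merged into one node. */
    void free_segment(segment_t *prev, segment_t *seg) {
        seg->is_process = 0;                          /* the segment becomes a hole */

        if (seg->next && !seg->next->is_process) {    /* merge with the hole after */
            segment_t *after = seg->next;
            seg->length += after->length;
            seg->next    = after->next;
            free(after);
        }
        if (prev && !prev->is_process) {              /* merge with the hole before */
            prev->length += seg->length;
            prev->next    = seg->next;
            free(seg);
        }
    }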

  37. Merging holes

  38. Merging holes

  39. Merging holes

  40. Merging holes

  41. Managing memory with linked lists • Searching the list for space for a new process • First Fit • Next Fit • Start from current location in the list • Not as good as first fit • Best Fit • Find the smallest hole that will work • Tends to create lots of little holes • Worst Fit • Find the largest hole • Remainder will be big • Quick Fit • Keep separate lists for common sizes
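For example, first fit is a single pass over the same segment list (splitting an oversized hole into an allocated part and a smaller leftover hole is left out of this sketch):

    /* Return the first hole at least 'units' long, or NULL if none is big enough. */
    segment_t *first_fit(segment_t *head, unsigned long units) {
        for (segment_t *s = head; s != NULL; s = s->next) {
            if (!s->is_process && s->length >= units)
                return s;
        }
        return NULL;
    }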

  42. Fragmentation • Memory is divided into partitions • Each partition has a different size • Processes are allocated space and later freed • After a while memory will be full of small holes! • No free space large enough for a new process even though there is enough free memory in total • This is external fragmentation • If we allow free space within a partition we have internal fragmentation

  43. Solution to fragmentation? • Allocate memory in equal fixed size units? • Reduces external fragmentation problems • But what about wasted space inside a unit due to internal fragmentation? • How big should the units be? • The smaller the better for internal fragmentation • The larger the better for management overhead • Can we use a unit size smaller than the memory needed by a process? • I.e., allocate non-contiguous units to the same process? • … but how would the base and limit registers work?

  44. Using pages for non-contiguous allocation • Memory divided into fixed size page frames • Page frame size = 2^n bytes • Lowest n bits of an address specify byte offset in page • But how do we associate page frames with processes? • And how do we map memory addresses within a process to the correct memory byte in a page frame? • Solution • Processes use virtual addresses • Hardware uses physical addresses • Hardware support for virtual to physical address translation

  45. Virtual addresses • Virtual memory addresses (what the process uses) • Page number plus byte offset in page • Low order n bits are the byte offset • Remaining high order bits are the page number [Address layout: bits 31 down to n are the page number (20 bits), bits n-1 down to 0 are the offset (12 bits).] Example: 32-bit virtual address, page size = 2^12 = 4KB, address space size = 2^32 bytes = 4GB
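With the 32-bit address and 4KB pages from the example, the page number and offset fall out with one shift and one mask, as in this sketch:

    #include <stdint.h>
    #include <stdio.h>

    #define PAGE_SHIFT  12                          /* 4KB pages: 2^12 bytes */
    #define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)    /* low 12 bits */

    int main(void) {
        uint32_t vaddr  = 0x12345ABCu;
        uint32_t vpn    = vaddr >> PAGE_SHIFT;      /* high 20 bits: page number */
        uint32_t offset = vaddr & OFFSET_MASK;      /* low 12 bits: byte offset  */
        printf("page 0x%05X, offset 0x%03X\n", (unsigned)vpn, (unsigned)offset);
        return 0;
    }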

  46. Physical addresses • Physical memory addresses (what the CPU uses) • Page frame number plus byte offset in page • Low order n bits are the byte offset • Remaining high order bits are the page frame number [Address layout: bits 23 down to n are the page frame number (12 bits), bits n-1 down to 0 are the offset (12 bits).] Example: 24-bit physical address, page frame size = 2^12 = 4KB, max physical memory size = 2^24 bytes = 16MB

  47. Address translation • Hardware maps page numbers to page frame numbers • Memory management unit (MMU) has multiple registers for multiple pages • Like a base register except its value is substituted for the page number rather than added to it • Why don’t we need a limit register for each page?
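A toy sketch of that substitution, with a plain array standing in for the MMU's per-page mapping registers (real hardware uses the page tables and TLB covered next); the mapping values are made up:

    #include <stdint.h>

    #define PAGE_SHIFT  12
    #define OFFSET_MASK ((1u << PAGE_SHIFT) - 1)
    #define NUM_PAGES   16     /* tiny 64KB virtual address space for illustration */

    /* frame[vpn] gives the physical page frame holding virtual page vpn. */
    static uint32_t frame[NUM_PAGES] = { 5, 9, 2 };   /* remaining entries map to 0 */

    /* Substitute the frame number for the page number; the offset passes through
       unchanged. No per-page limit register is needed because the offset can
       never reach beyond the fixed-size frame. Assumes vaddr < NUM_PAGES * 4KB. */
    uint32_t mmu_translate(uint32_t vaddr) {
        uint32_t vpn    = vaddr >> PAGE_SHIFT;
        uint32_t offset = vaddr & OFFSET_MASK;
        return (frame[vpn] << PAGE_SHIFT) | offset;
    }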

  48. Memory Management Unit (MMU)

  49. Virtual address spaces • Here is the virtual address space • (as seen by the process) [Diagram: the virtual address space drawn as a contiguous range from the lowest address to the highest address.]

  50. Virtual address spaces • The address space is divided into “pages” • In x86, the page size is 4K [Diagram: the virtual address space divided into pages 0, 1, 2, …, N.]
