
Main Memory



Presentation Transcript


  1. Main Memory CISC3595, Fall 09

  2. Memory Management • Background • Contiguous Memory Allocation • Paging • Structure of the Page Table • Segmentation • Example: The Intel Pentium

  3. Background • Main memory and registers are the only storage the CPU can access directly • Register access: one CPU clock (or less) • Main memory access: many CPU cycles • Cache sits between main memory and CPU registers • Memory is cheap today, and getting cheaper • But applications demand more and more memory; there is never enough!

  4. Review: Logical View of a Computer • Main memory: a large array of words or bytes. Each word has its own address.

  5. Review: Main (Primary) Memory • Main memory – the only large storage medium that the CPU can access directly • Capacity • Bandwidth • Peak: one word per bus cycle • The CPU accesses main memory directly through the local bus, i.e., the system bus

  6. Review: Performance of different storage levels (comparison figure omitted) • ns: nanosecond (10^-9 seconds), one billionth of a second

  7. Program execution • A program must be brought (from disk) into memory and placed within a process for it to run • A program: a sequence of instructions stored in memory (main memory or disk) • Multiprogramming: multiple processes are loaded into memory to run simultaneously • Increases CPU utilization • Swapping: a process might be temporarily swapped out to disk and later reloaded into memory

  8. The need for memory management • Memory management involves swapping blocks of data from secondary storage • Processes are brought into main memory for execution by the processor • Protection of memory is required to ensure correct operation • Memory must be allocated to ensure a reasonable supply of ready processes to consume available processor time

  9. Memory Management Requirements • Relocation • Protection • Sharing • Logical organization • Physical organization

  10. Requirements: Relocation • The programmer does not know where the program will be placed in memory when it is executed • It may be swapped to disk and returned to main memory at a different location (relocated) • Memory references in the program must be translated to actual physical memory addresses

  11. Multistep Processing of a User Program • Programmer’s view of a program/process?

  12. Multistep Processing of a User Program • The compiler seldom knows where an object will reside; it assumes a fixed base location (for example, zero).

  13. Multistep Processing of a User Program • Linker/linkage editor • Combines multiple object files into an executable program, resolving symbols as it goes along • Arranges objects in a program’s address space, relocating code that assumes a specific base address to another base

  14. Dynamic Linking • Linking postponed until execution time • Small piece of code, the stub, used to locate the appropriate memory-resident library routine • Stub replaces itself with the address of the routine, and executes the routine • Operating system support is needed to check whether the routine is in another process’s address space • Dynamic linking is particularly useful for libraries • Also known as shared libraries

  15. Multistep Processing of a User Program • Loader: starts up programs • Reads executable files into memory and carries out other preparatory tasks so that the process can run • Unix loader, i.e., the handler for execve(): • Validation (permissions, memory requirements, etc.) • Copy program image from disk into memory • Copy command-line arguments onto the stack • Initialize registers (e.g., stack pointer) • Jump to program entry point (_start)

  16. Dynamic Loading Mechanism • Allows a program to load, execute, and unload a library (or binary code) at run time • A program, at runtime, • loads a library (or other binary) into memory • retrieves addresses of functions and variables contained in the library • executes those functions or accesses those variables • unloads the library from memory • Better memory-space utilization: an unused routine is never loaded • Useful when large amounts of code are needed to handle infrequently occurring cases

  17. Dynamic Loading (cont’d) • Standard POSIX/UNIX API • #include <dlfcn.h> • Loading the library: dlopen() • Extracting contents: dlsym() • Unloading: dlclose() • Microsoft Windows API • #include <windows.h> • Loading the library: LoadLibrary / LoadLibraryEx • Extracting contents: GetProcAddress • Unloading the library: FreeLibrary

  18. Address Binding • A program must be brought into main memory • Instructions that use addresses must be bound to the proper address space in main memory • Address binding: a mapping from one address space to another • Compiler: binds symbolic addresses (variable names) to relocatable addresses, e.g., 14 bytes from the beginning of this module • Linker: binds relocatable addresses to absolute addresses (e.g., 74014) in the process’s address space

  19. Final Address Binding • Final address binding: the final mapping of instructions/data to physical memory addresses • Compile time: if the memory location is known a priori, absolute code can be generated; must recompile code if the starting location changes • Load time: must generate relocatable code if the memory location is not known at compile time • Execution time: binding delayed until run time if the process can be moved during its execution from one memory segment to another; needs hardware support for address maps (e.g., base and limit registers)

  20. Logical vs. Physical Address Space • Logical address – generated by the CPU; also referred to as virtual address • User program deals with logical addresses; it never sees the real physical addresses • Physical address – address seen by the memory unit • Memory-management unit (MMU): hardware for run-time mapping from the logical address space to a physical address space • Many different schemes for the mapping • First let’s see a simple example…

  21. Memory-Management Unit: an example • Value in relocation register is added to every address generated by a user process at the time it is sent to memory Example of a simple MMU scheme

  22. Requirements: Protection • Processes should not be able to reference memory locations in another process without permission • Unless a special arrangement is made, e.g., a shared memory segment is set up • Memory protection checking • Performed at run time: it is impossible to check absolute addresses at compile time, since a program may compute addresses at run time (e.g., accessing an array element, following a pointer to a data structure) • Performed by the processor (hardware): the OS cannot anticipate all memory references of a program, and checking by software would be too time-consuming

  23. Address protection: base and limit registers

  24. Requirements: Sharing • Allow several cooperating processes to access the same portion of memory • Allow multiple processes running the same program to share one copy of the program rather than each having its own separate copy

  25. Requirements: Logical Organization • Programs are written in modules • Modules can be written and compiled independently • Different degrees of protection can be given to modules: read-only, readable/writable • Modules can be shared among processes • OS and hardware support these requirements through segmentation

  26. Requirements: Physical Organization • Keep track of available memory blocks, allocating and reclaiming them as processes come and go… • Cannot leave the programmer with the responsibility to manage memory • Memory available for a program plus its data may be insufficient • Overlaying: program and data are organized so that various modules can be assigned the same region of memory, with the main program responsible for switching modules in and out => costly for the programmer • The programmer does not know how much space will be available, or where

  27. Roadmap: Memory Allocation Schemes • Contiguous Allocation • Paging • Segmentation

  28. Contiguous Allocation • Main memory usually divided into two partitions: • Resident operating system, usually held in low memory with interrupt vector • User processes held in high memory • Relocation registers used to protect user processes from each other, and from changing operating-system code and data • Base register contains value of smallest physical address • Limit register contains range of logical addresses – each logical address must be less than the limit register • MMU maps logical address dynamically

  29. Contiguous Allocation (Cont.) • Multiple-partition allocation • Hole – block of available memory; holes of various sizes are scattered throughout memory • When a process arrives, it is allocated memory from a hole large enough to accommodate it • Operating system maintains information about: a) allocated partitions b) free partitions (holes) • (Figure: successive memory snapshots showing holes appearing between the OS, process 5, and process 2 as processes 8, 9, and 10 come and go)

  30. Dynamic Storage-Allocation Problem • How to satisfy a request of size n from a list of free holes? • First-fit: allocate the first hole that is big enough • Best-fit: allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size • Produces the smallest leftover hole • Worst-fit: allocate the largest hole; must also search the entire list • Produces the largest leftover hole • First-fit and best-fit are better than worst-fit in terms of speed and storage utilization

  31. Fragmentation problem • External Fragmentation – total memory space exists to satisfy a request, but it is not contiguous • Internal Fragmentation – allocated memory may be slightly larger than requested memory; this size difference is memory internal to a partition, but not being used • Reduce external fragmentation by compaction • Shuffle memory contents to place all free memory together in one large block • Possible only if relocation (final binding to memory address) is done at execution time • To solve external fragmentation problem • Allocate noncontiguous memory to process • Paging, Segmentation, and Combined

  32. Paging • Physical address space of a process can be noncontiguous • The process is allocated physical memory wherever it is available • Logical address space of a process is still contiguous • Physical memory is divided into fixed-sized blocks called frames • Frame size is a power of 2, between 512 bytes and 8,192 bytes • Logical memory is divided into blocks of the same size called pages • System keeps track of all free frames • To run a program of size n pages, need to find n free frames • Set up a page table for logical => physical address translation • Paging addresses external fragmentation • Internal fragmentation is still possible

  33. Paging Model of Logical and Physical Memory • A process’s page table: for logical => physical address translation • Contains the base address of each page in physical memory

  34. Logical address under paging scheme • Logical address (address generated by CPU) is divided into: • Page number (p) – an index into a page table, which contains the base address of each page in physical memory • Page offset (d) – relative address within the page, combined with the base address to form the physical memory address sent to the memory unit • For a logical address space of size 2^m and page size 2^n: • Layout: page number p (m - n bits) | page offset d (n bits)

  35. MMU: Paging

  36. Paging Example 32-byte memory and 4-byte pages

  37. Free Frames After allocation Before allocation

  38. Implementation of Page Table • Where to store the page tables? • Registers? Too small • Page table is kept in main memory • Page-table base register (PTBR) points to the page table • Page-table length register (PTLR) indicates the size of the page table • Every data/instruction access requires two memory accesses • One for the page table entry and one for the actual data/instruction • To avoid the two-memory-access problem…

  39. Associative Memory • A special fast-lookup hardware cache called associative memory or translation look-aside buffer (TLB) • Some TLBs store address-space identifiers (ASIDs) in each entry – uniquely identifying each process to provide address-space protection • Associative memory – parallel search across all entries • Address translation for (p, d): • If p is in an associative register, get the frame # out • Otherwise get the frame # from the page table in memory • (Each TLB entry maps a page # to a frame #)

  40. Paging Hardware With TLB

  41. Effective Access Time • Associative lookup = ε time units (e.g., 20 ns) • Hit ratio α – percentage of times that a page number is found in the associative registers; related to the number of associative registers (e.g., α = 80%) • Time to access memory (e.g., 100 ns) • Effective Access Time: EAT = α × (memory access + ε) + (1 − α) × (2 × memory access + ε) • e.g., EAT = 0.80 × 120 (found in TLB) + 0.20 × (100 + 100 + 20) (not found in TLB) = 140 ns

  42. Memory Protection • Memory protection implemented by associating protection bit with each frame • Valid-invalid bit attached to each entry in the page table: • “valid” indicates that the associated page is in the process’ logical address space, and is thus a legal page • “invalid” indicates that the page is not in the process’ logical address space

  43. Valid (v) or Invalid (i) Bit in a Page Table • Most processes do not use their whole logical address space • Address space: 2^14 (addresses 0 … 16383); the process uses only addresses 0 to 10468 • Page size: 2 KB • Alternative: page-table length register (PTLR)

  44. Shared Pages • Useful for shared code (reentrant code, pure code) • One copy of read-only code shared among processes (e.g., text editors, compilers, window systems) • Each process has its own copy of registers and data storage to hold its data • Shared code pages and private data pages

  45. Shared Pages Example

  46. Structure of the Page Table • To support large logical address spaces • Up to 2^32 or 2^64 • The page table becomes too large to keep in contiguous memory • Solutions • Hierarchical Paging • Hashed Page Tables • Inverted Page Tables

  47. Hierarchical Page Tables • Break up the logical address space into multiple page tables • A simple technique is a two-level page table, e.g., for large logical address spaces (2^32 to 2^64)

  48. Two-Level Page-Table Scheme

  49. Two-Level Paging Example • A logical address (on a 32-bit machine with 1K page size) is divided into: • a page number consisting of 22 bits • a page offset consisting of 10 bits • Since the page table is paged, the page number is further divided into: • a 12-bit outer page number (p1) • a 10-bit inner page number (p2) • Thus a logical address is laid out as follows, where p1 is an index into the outer page table and p2 is the displacement within the page of the outer page table: • Layout: p1 (12 bits) | p2 (10 bits) | d (10 bits)

  50. Address-Translation Scheme
