UNIT 5 Virtual Memory
Presentation Transcript

  1. UNIT 5 Virtual Memory

  2. Introduction
  • Virtual Memory Basics
  • Demand Paging
  • The Virtual Memory Manager
  • Page Replacement Policies
  • Controlling Memory Allocation to a Process
  • Shared Pages
  • Memory-Mapped Files
  • Case Studies of Virtual Memory Using Paging
  • Virtual Memory Using Segmentation

  3. Virtual Memory Basics
  • MMU translates logical address into physical one
  • Virtual memory manager is a software component
  • Uses demand loading
  • Exploits locality of reference to improve performance

  4. Virtual Memory Basics (continued)

  5. Virtual Memory Using Paging
  • MMU performs address translation using the page table
  • Effective memory address of logical address (pi, bi) = start address of the page frame containing page pi + bi
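The translation rule above can be sketched in a few lines of Python (the page size and page table contents here are made-up values for illustration, not from the slides):

```python
PAGE_SIZE = 1024  # bytes per page (assumed value for illustration)

# Hypothetical page table: page number -> page frame number
page_table = {0: 7, 1: 3, 2: 12}

def translate(pi, bi):
    """Effective address = start of the frame holding page pi, plus bi."""
    frame = page_table[pi]
    return frame * PAGE_SIZE + bi
```

Here page 1 resides in frame 3, so logical address (1, 20) maps to 3 × 1024 + 20 = 3092.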

  6. Demand Paging Preliminaries

  7. Demand Paging Preliminaries (continued) • Memory Management Unit (MMU) raises a page fault interrupt if page containing logical address not in memory

  8. Demand Paging Preliminaries (continued)
  • A page fault interrupt is raised because the valid bit of page 3 is 0

  9. Demand Paging Preliminaries (continued)
  • At a page fault, the required page is loaded into a free page frame
  • If no page frame is free, the virtual memory manager performs a page replacement operation, governed by a page replacement algorithm
  • Page-out is initiated if the replaced page is dirty (its modified bit is set)
  • Page-in and page-out operations constitute page I/O, or page traffic
  • Effective memory access time in demand paging = (1 − page fault rate) × memory access time + page fault rate × time to service a page fault
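A common form of the effective access time computation, sketched below (the timing values are illustrative assumptions; the exact formula on the original slide may differ in detail):

```python
def effective_access_time(t_mem, t_fault, p):
    """EAT = (1 - p) * t_mem + p * t_fault, where p is the page fault rate,
    t_mem the memory access time, and t_fault the page fault service time."""
    return (1 - p) * t_mem + p * t_fault

# 100 ns memory access, 8 ms fault service, one fault per million accesses:
eat = effective_access_time(100e-9, 8e-3, 1e-6)
```

Even a tiny fault rate dominates: here the 8 ms fault service inflates the average access from 100 ns to roughly 108 ns.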

  10. Page Replacement • (Empirical) law of locality of reference: logical addresses used by process in a short interval tend to be grouped in certain portions of its logical address space

  11. Memory Allocation to a Process • How much memory to allocate to a process

  12. Optimal Page Size
  • Size of a page is defined by computer hardware
  • Page size determines:
    • Number of bits required to represent the byte number in a page
    • Memory wastage due to internal fragmentation
    • Size of the page table for a process
    • Page fault rates when a fixed amount of memory is allocated to a process
  • Use of page sizes larger than the optimal value implies somewhat higher page fault rates for a process
  • Tradeoff between hardware cost and efficient operation
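The fragmentation/page-table tradeoff can be illustrated with a rough cost model (a sketch: the half-page average fragmentation and 4-byte entry size are conventional textbook assumptions, not values from the slides):

```python
import math

def memory_overhead(process_size, page_size, entry_size=4):
    """Per-process overhead: average internal fragmentation (half a page,
    by the usual assumption) plus the size of the page table."""
    n_pages = math.ceil(process_size / page_size)
    return page_size / 2 + n_pages * entry_size

# For a 1 MiB process, 4 KB pages beat 512-byte pages under this model:
small = memory_overhead(2**20, 512)    # 256 + 2048 * 4 = 8448 bytes
large = memory_overhead(2**20, 4096)   # 2048 + 256 * 4 = 3072 bytes
```

Under this model the overhead is minimized near page_size ≈ sqrt(2 × process_size × entry_size), which is why very small pages are not automatically better.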

  13. Paging Hardware • Page-table-address-register (PTAR) points to the start of a page table

  14. Paging Hardware (continued)

  15. Memory Protection
  • A memory protection violation is raised if:
    • A process tries to access a nonexistent page
    • A process exceeds its (page) access privileges
  • It is implemented through:
    • The page table size register (PTSR) of the MMU
      • Kernel records the number of pages of a process in its PCB
      • Loads this number from the PCB into the PTSR when the process is scheduled
    • The prot info field of the page's entry in the page table

  16. Address Translation and Page Fault Generation • Translation look-aside buffer (TLB): small and fast associative memory used to speed up address translation

  17. Address Translation and Page Fault Generation (continued) • TLBs can be HW or SW managed

  18. Address Translation and Page Fault Generation (continued)
  • Performance of address translation depends on the TLB hit ratio
  • Some mechanisms used to improve performance:
    • Wired TLB entries for kernel pages: never replaced
    • Superpages

  19. Superpages
  • TLB reach is stagnant even though memory sizes increase rapidly as technology advances
    • TLB reach = page size × number of entries in TLB
    • It affects the performance of virtual memory
  • Superpages are used to increase the TLB reach
    • A superpage is a power-of-2 multiple of the page size
    • Its start address (both logical and physical) is aligned on a multiple of its own size
    • Max TLB reach = max superpage size × number of entries in TLB
  • Size of a superpage is adapted to the execution behavior of a process through promotions and demotions
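Both quantities are simple products, sketched below with illustrative TLB and page parameters (the 64-entry TLB and 4 KB/4 MB sizes are assumptions, not from the slides):

```python
def tlb_reach(page_size, tlb_entries):
    """TLB reach = page size x number of TLB entries."""
    return page_size * tlb_entries

def superpage_aligned(start_addr, superpage_size):
    """A superpage's start address must be a multiple of its own size."""
    return start_addr % superpage_size == 0

# A 64-entry TLB covers only 256 KB with 4 KB pages,
# but 256 MB when every entry maps a 4 MB superpage.
base_reach = tlb_reach(4 * 1024, 64)
max_reach = tlb_reach(4 * 1024 * 1024, 64)
```

The alignment check matters for promotion: a group of pages can only be promoted to a superpage if the resulting start address satisfies it.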

  20. Support for Page Replacement
  • Virtual memory manager needs the following information to minimize page faults and the number of page-in and page-out operations:
    • The time when a page was last used
      • Expensive to provide enough bits for this purpose
      • Solution: use a single reference bit
    • Whether a page is dirty (a page is clean if it is not dirty)
      • Solution: modified bit in the page table entry

  21. Practical Page Table Organizations
  • A process with a large address space requires a large page table, which occupies too much memory
  • Solutions:
    • Inverted page table
      • Describes contents of each page frame
      • Size governed by size of memory; independent of the number and sizes of processes
      • Contains pairs of the form (program id, page #)
      • Con: information about a page must be searched
    • Multilevel page table
      • Page table of the process is itself paged

  22. Inverted Page Tables
  • Use of a hash table speeds up the search

  23. Multilevel Page Tables
  • If the size of a page table entry is 2^e bytes, the number of page table entries in one PT page is 2^nb / 2^e (where the page size is 2^nb bytes)
  • Logical address (pi, bi) is regrouped into three fields:
    • pi1: the PT page numbered pi1 contains the entry for pi
    • pi2: the entry number for pi within that PT page
    • bi: the byte number within the page
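A minimal sketch of this regrouping, assuming 1 KB pages (nb = 10) and 4-byte entries (e = 2), so each PT page holds 2^10 / 2^2 = 256 entries (these parameter values are assumptions for illustration):

```python
NB = 10                      # bits for byte number bi (1 KB pages) - assumed
ENTRY_BITS = 2               # log2 of entry size (4-byte entries)  - assumed
PI2_BITS = NB - ENTRY_BITS   # 256 entries per PT page -> 8 bits for pi2

def split_address(logical_addr):
    """Regroup logical address (pi, bi) into the three fields (pi1, pi2, bi)."""
    bi = logical_addr & ((1 << NB) - 1)    # low NB bits: byte number
    pi = logical_addr >> NB                # remaining bits: page number
    pi2 = pi & ((1 << PI2_BITS) - 1)       # entry number within a PT page
    pi1 = pi >> PI2_BITS                   # which PT page holds the entry
    return pi1, pi2, bi
```

Translation then uses pi1 to locate a PT page via the higher-level table, and pi2 to index into it.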

  24. I/O Operations in a Paged Environment
  • Process makes a system call for I/O operations
    • Parameters include: number of bytes to transfer, logical address of the data area
  • The call activates the I/O handler in the kernel
  • The I/O subsystem does not contain an MMU, so the I/O handler replaces the logical address of the data area with the physical address, using information from the process page table
  • An I/O fix (bit in the misc info field) ensures pages of the data area are not paged out
  • A scatter/gather feature can deposit parts of the I/O operation's data in noncontiguous memory areas
    • Alternatively, data area pages are put in contiguous memory areas

  25. Example: I/O Operations in Virtual Memory

  26. The Virtual Memory Manager

  27. Example: Page Replacement

  28. Overview of Operation of the Virtual Memory Manager
  • Virtual memory manager makes two important decisions during its operation:
    • Upon a page fault, decides which page to replace
    • Periodically decides how many page frames should be allocated to a process

  29. Page Replacement Policies
  • A page replacement policy should replace a page that is not likely to be referenced in the immediate future
  • Examples:
    • Optimal page replacement policy: minimizes the total number of page faults; infeasible in practice
    • First-in first-out (FIFO) page replacement policy
    • Least recently used (LRU) page replacement policy; basis: locality of reference
  • Page reference strings
    • Trace of the pages accessed by a process during its operation
    • We associate a reference time string with each page reference string

  30. Example: Page Reference String
  • A computer supports instructions that are 4 bytes in length
  • Uses a page size of 1 KB
  • Symbols A and B are in pages 2 and 5
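Under these assumptions, the page containing a given byte address is just the quotient by the page size (a small sketch; the specific addresses below are illustrative, not from the slides):

```python
PAGE_SIZE = 1024   # 1 KB pages, as in the example
INSTR_SIZE = 4     # 4-byte instructions

def page_of(byte_addr):
    """Page number containing a byte address."""
    return byte_addr // PAGE_SIZE

# An access to symbol A at (say) address 2100 references page 2,
# and an access to symbol B at address 5500 references page 5.
```

Applying page_of to every address a process touches, in order, yields its page reference string.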

  31. Page Replacement Policies (continued)
  • To achieve desirable page fault characteristics, page faults should not increase when memory allocation is increased
  • The policy must have the stack (or inclusion) property

  32. FIFO page replacement policy does not exhibit stack property.

  33. Page Replacement Policies (continued)
  • Virtual memory manager cannot use the FIFO policy
    • Increasing the allocation of a process may increase its page fault frequency (Belady's anomaly)
    • This would make it impossible to control thrashing
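The FIFO anomaly can be demonstrated with a short simulation (a sketch; the reference string below is the classic illustrative example, not taken from the slides):

```python
from collections import deque

def fifo_faults(refs, n_frames):
    """Count page faults under FIFO replacement with n_frames page frames."""
    in_memory, fifo_queue, faults = set(), deque(), 0
    for page in refs:
        if page not in in_memory:
            faults += 1
            if len(in_memory) == n_frames:          # no free frame: replace
                in_memory.discard(fifo_queue.popleft())
            in_memory.add(page)
            fifo_queue.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
# With 3 frames this string causes 9 faults, but with 4 frames it causes 10:
# increasing the allocation increases the page fault frequency.
```

A stack-property policy such as LRU can never behave this way, since the pages held with k frames are always a subset of those held with k + 1 frames.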

  34. Practical Page Replacement Policies
  • Virtual memory manager has two threads
    • The free frames manager implements the page replacement policy
    • The page I/O manager performs page-in/page-out operations

  35. Practical Page Replacement Policies (continued)
  • LRU replacement is not feasible
    • Computers do not provide sufficient bits in the ref info field to store the time of last reference
    • Most computers provide a single reference bit
  • Not recently used (NRU) policies use this bit
    • Simplest NRU policy: replace an unreferenced page, and reset all reference bits if all pages have been referenced
  • Clock algorithms provide better discrimination between pages by resetting reference bits periodically
    • One-handed clock algorithm
    • Two-handed clock algorithm: resetting pointer (RP) and examining pointer (EP)
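A one-handed clock pass can be sketched as follows (a simplified model, assumed for illustration: the hand clears reference bits as it sweeps and stops at the first page whose bit is already 0):

```python
def clock_select_victim(ref_bits, hand):
    """One-handed clock: sweep from `hand`, resetting reference bits,
    until a page with reference bit 0 is found; return (victim, next_hand)."""
    n = len(ref_bits)
    while ref_bits[hand] == 1:
        ref_bits[hand] = 0          # referenced: give it a second chance
        hand = (hand + 1) % n
    return hand, (hand + 1) % n     # unreferenced page becomes the victim
```

Because the sweep clears bits as it goes, the loop always terminates: in the worst case the hand returns to its starting frame, whose bit is now 0.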

  36. Example: Two-Handed Clock Algorithm
  • Both pointers are advanced simultaneously
  • Algorithm properties are defined by the pointer distance:
    • If the pointers are close together, only recently used pages will survive in memory
    • If the pointers are far apart, only pages that have not been used in a long time are removed

  37. Controlling Memory Allocation to a Process
  • Process Pi is allocated alloci page frames
  • Fixed memory allocation
    • Fixes alloci statically; uses local page replacement
  • Variable memory allocation
    • Uses local and/or global page replacement
    • If local replacement is used, the handler periodically determines the correct alloci value for a process
    • May use the working set model: sets alloci to the size of the working set

  38. Implementation of a Working Set Memory Allocator
  • Swap out a process if alloci page frames cannot be allocated
  • Expensive to determine WSi(t, Δ) and alloci at every time instant t
  • Solution: determine working sets periodically
    • Sets determined at the end of an interval are used to decide values of alloci for the next interval
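The periodic determination follows directly from the working set definition, WS(t, Δ) = the set of distinct pages referenced in the interval (t − Δ, t] (a sketch; the reference string and times below are illustrative):

```python
def working_set(page_refs, ref_times, t, delta):
    """WS(t, delta): distinct pages referenced during (t - delta, t]."""
    return {p for p, rt in zip(page_refs, ref_times) if t - delta < rt <= t}

refs  = [2, 5, 2, 7, 2, 5]      # illustrative page reference string
times = [1, 2, 3, 4, 5, 6]      # corresponding reference time string

alloc = len(working_set(refs, times, 6, 3))   # alloc = working set size
```

Evaluating this only at interval boundaries, rather than at every instant t, is exactly the cost-saving step the slide describes.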

  39. Shared Pages
  • Static sharing results from static binding performed by a linker/loader before execution of the program
  • Dynamic binding conserves memory by binding the same copy of a program/data to several processes
    • The shared program or data retains its identity
  • Two conditions should be satisfied:
    • The shared program should be coded as reentrant, so it can be invoked by many processes at the same time
    • The program should be bound to identical logical addresses in every process that shares it

  40. Shared pages should have same page numbers in all processes

  41. Copy-on-Write
  • Feature used to conserve memory when data in shared pages could be modified
  • A copy-on-write flag is set in the page table entries; memory allocation decisions are performed statically
  • A private copy of page k is made when process A modifies it

  42. Memory-Mapped Files
  • Memory mapping of a file by a process binds the file to a part of the logical address space of the process
  • The binding is performed when the process makes a memory map system call
  • Analogous to dynamic binding of programs and data
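Python's standard mmap module illustrates the idea: after the map call, ordinary memory reads and writes access the file's data (a minimal sketch using a temporary file):

```python
import mmap
import os
import tempfile

# Create a small file to map.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"hello, mapped world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as m:   # length 0: map the whole file
        first = bytes(m[:5])              # a memory read returns file bytes
        m[0:5] = b"HELLO"                 # a memory write updates the file

with open(path, "rb") as f:
    updated = f.read()
os.remove(path)
```

No read or write system calls are issued for the data itself; the virtual memory manager pages the file contents in and out on demand.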

  43. Memory-Mapped Files (continued)

  44. Case Studies of Virtual Memory Using Paging • Unix Virtual Memory

  45. Unix Virtual Memory
  • Paging hardware differs across architectures
  • Pages can be: resident, unaccessed, swapped-out
  • As little swap space as possible is allocated
  • Copy-on-write is used for fork
  • Some hardware architectures lack a reference bit; Unix compensates by using the valid bit in an interesting manner
  • A process can fix some pages in memory
  • The pageout daemon uses a clock algorithm
  • A process is swapped out if all its required pages cannot be in memory
  • A swap-in priority is used to avoid starvation

  46. Summary
  • Basic actions in virtual memory using paging: address translation and demand loading of pages
  • Implemented jointly by
    • Memory Management Unit (MMU): hardware
    • Virtual memory manager: software
  • Memory is divided into page frames
  • Virtual memory manager maintains a page table
    • Inverted and multilevel page tables use less memory but are less efficient
  • A fast TLB is used to speed up address translation

  47. Summary (continued)
  • Which page should the VM manager remove from memory to make space for a new page?
    • Page replacement algorithms exploit locality of reference
    • LRU has the stack property, but is expensive
    • NRU algorithms are used in practice, e.g., clock algorithms
  • How much memory should the manager allocate?
    • Use the working set model to avoid thrashing
  • Copy-on-write can be used for shared pages
  • Memory mapping of files speeds up access to data