
“Virtual” Memory

  1. “Virtual” Memory
  • Goals
    • Allow a process to use more memory than the size of main memory
    • Achieve close to the speed of main memory with the size and cost of secondary storage (disk)
    • Free the programmer of the need to know details of storage allocation (system overlays rather than programmer overlays!!!)
  • Programmers can dynamically request more memory, perhaps more than exists in main memory, without fear of resource deadlock
    • We preempt memory if necessary
  • Not all of a process’s pages need be in memory for it to execute
  • There are some efficiency savings
    • Code & data that are never referenced will not waste main memory or CPU/device time for loading
    • Yet the CPU cost for implementing virtual memory and the storage needs of OS routines are huge
  Chapt 8 virtual memory

  2. Demand Paging
  • Pager (page swapper) brings pages into memory on demand
    • A few pages may be brought in together (called prepaging or anticipatory paging) to cut down on I/O time
    • Unused pages will quickly be overwritten
  • Locality of reference makes this effective
    • Locality of space and locality of time: the majority of references are to pages already in main memory

  3. Virtual memory with paging & TLB
  [Diagram: translating a logical address (p, d) into a physical address (f, d). The page number p is first looked up in the TLB; on a miss, the page table register locates the page map table, whose entry i supplies the frame number f. An invalid (nonresident) entry raises a page fault interrupt, and the page is fetched from the swap disk.]
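The translation sketched in the slide 3 diagram can be written out as a one-level lookup. This is our sketch, not the slides' code: the page size, the PTE structure, and the use of -1 to stand in for a page-fault trap are all assumptions made for illustration.

```cpp
#include <cstdint>
#include <vector>

// Split a virtual address into page number and offset, look the page up
// in a one-level page table, and "fault" when the valid bit is clear.
constexpr uint32_t PAGE_BITS = 12;                  // hypothetical 4 KiB pages
constexpr uint32_t OFFSET_MASK = (1u << PAGE_BITS) - 1;

struct PTE { bool valid; uint32_t frame; };          // valid = resident bit

// Returns the physical address, or -1 to stand in for a page-fault trap
// (a real MMU would raise an interrupt and the pager would load the page).
int64_t translate(const std::vector<PTE>& table, uint32_t vaddr) {
    uint32_t page = vaddr >> PAGE_BITS;              // p
    uint32_t off  = vaddr & OFFSET_MASK;             // d
    if (page >= table.size() || !table[page].valid)
        return -1;                                   // nonresident: page fault
    return (static_cast<int64_t>(table[page].frame) << PAGE_BITS) | off;
}
```

For example, with a table whose entry 0 maps to frame 5, address 0x123 translates to 0x5123, while any address on an invalid page faults.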

  4. Hardware support for virtual memory, in addition to support for real memory with paging
  • Valid/invalid bit for resident/nonresident pages
  • Interrupt generated for a nonresident page
    • ISR asks the swapper to bring the page in (assuming the page is valid)
  • What if a process is in the middle of an instruction when a nonresident page is referenced?
    • Add A, B with the result placed in C: interruption causes no problem; just repeat the instruction
    • Move Block (spanning multiple pages) to a location: write results to a temp location (in “locked” storage) until all data are “fetched”; may require repeating the instruction
    • MOV (R2)+, -(R3): instruction in the PDP-11 architecture
      • The + and - (autoincrement, autodecrement) are similar to C’s ++ and --
      • Suppose a page fault occurs as we try to fetch the value pointed to by R2; R2 has already been incremented
      • Must either look ahead or else restore the registers if a page fault occurs, causing overhead

  5. Question
  • Give examples of hardware support for virtual memory with paging
    • Page map table register
    • Circuitry to step into (compute the entry of) the page map table
    • TLB, or page table in cache/registers
    • Interrupt circuitry for access violations
    • Support for instructions that span multiple pages
    • Interrupt circuitry for nonresident pages

  6. Performance of demand paging
  • Probability of page fault is p; hit probability is 1 - p
  • ma = memory access time (varies from 10 ns to 200 ns); we’ll ignore the TLB and assume the page table isn’t in cache or registers
  • Access time: T = (1 - p) * 2ma + p * page fault time
  • Page fault time includes (in ms, not ns):
    • ISR time (was the reference valid? If so, find a free frame and schedule a read)
    • Page swapper time: read the page from disk into the frame (perhaps 2 ms)
    • If the replaced page was altered, schedule it to be copied to disk
    • Update of tables/resident bit
    • Memory access time to read the page map table and to get the item from memory or write the item into memory (the 2ma above)

  7. Question on performance
  • Assume that ma = 100 ns and the probability of a page fault is 1%. Page fault handling, etc. takes 100 ms. What is the cost of virtual memory? (Ignore the TLB.)
  • T = (1 - p) * 2ma + p * fault handling time
      = .99 * 200 ns + .01 * 100 ms
      = 198 ns + 1.00 ms
      = 1,000,198 ns
  • Cost is 1,000,198 / 200 ≈ 5001 times as slow
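Slide 7's arithmetic can be checked with a one-line function for the slides' effective-access-time model. The function name and parameters are ours, not from the slides; a minimal sketch:

```cpp
// Effective access time under demand paging, in ns: a hit costs two
// memory accesses (page table + data); a fault costs the fault-service
// time, weighted by the fault probability p.
double effective_access_ns(double p, double ma_ns, double fault_ns) {
    return (1.0 - p) * 2.0 * ma_ns + p * fault_ns;
}
```

With slide 7's numbers (ma = 100 ns, p = .01, fault handling = 100 ms = 1e8 ns), effective_access_ns(0.01, 100.0, 100e6) comes out to about 1,000,198 ns, i.e. roughly 5001 times the 200 ns hit cost.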

  8. Page Replacement
  • Frames are reclaimed when a process terminates
  • Some compilers and languages (COBOL) mark pages when they are no longer needed
  • Periodically the OS checks the # of available frames and (if needed) marks some pages for swapping
  • Typically the swapping daemon doesn’t swap out a single page when the OS needs a new frame; assigning frames and swapping out pages are done by different OS processes
    • New frames for faulting pages are taken from the list of marked pages
    • Several pages are copied to swap space as a unit

  9. Page Replacement (cont.)
  • After selecting a victim
    • Schedule the page to be written to disk if it was modified
      • A modified bit (dirty bit) is used to tell the OS whether a page must be copied back
      • The page may still be reclaimed (“soft fault”)
    • Update the page table and the free-frame table
  • Which page to replace?
    • Optimal replacement: replace the page that will not be used for the longest period of time
    • Generally the optimal page to replace is unknown, but simulations using optimal page replacement are useful as benchmarks in testing algorithms

  10. Page Replacement (continued)
  • Which page to swap out?
  • FIFO page replacement algorithm (* marks a page fault)
    • Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
    • Three available frames
    • 7* 0* 1* 2*(7 out) 0 3*(0 out) 0*(1 out) 4*(2 out) 2*(3 out) 3*(0 out) 0*(4 out) 3 2 1*(2 out) 2*(3 out) 0 1 7*(0 out) 0*(1 out) 1*(2 out) – 15 faults
  • Optimal replacement algorithm
    • 7* 0* 1* 2*(7 out) 0 3*(1 out) 0 4*(0 out) 2 3 0*(4 out) 3 2 1*(3 out) 2 0 1 7*(2 out) 0 1 – 9 page faults

  11. LRU as a heuristic for optimal replacement
  • Principle of locality of time: a page that has been recently referenced is likely to be referenced again (the basis for all caches, TLB)
  • Reference string: 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1
  • Assume 3 available frames (* marks a page fault)
  • 7* 0* 1* 2*(7 out) 0 3*(1 out) 0 4*(2 out) 2*(3 out) 3*(0 out) 0*(4 out) 3 2 1*(0 out) 2 0*(3 out) 1 7*(2 out) 0 1 – 12 faults
  • Implementation with counters, or a stack with hardware support
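The FIFO and LRU traces on slides 10-11 can be reproduced with a short simulator. This is our sketch (the function is not from the slides); it keeps the resident pages in a vector whose front is the next victim, which models FIFO load order directly and models LRU by moving a page to the back on every reference.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Count page faults for FIFO or LRU replacement with a fixed number
// of frames. mem.front() is always the victim: the oldest-loaded page
// for FIFO, the least recently used page for LRU.
int count_faults(const std::vector<int>& refs, std::size_t frames, bool lru) {
    std::vector<int> mem;
    int faults = 0;
    for (int p : refs) {
        auto it = std::find(mem.begin(), mem.end(), p);
        if (it != mem.end()) {                  // hit
            if (lru) {                          // refresh recency: move to back
                mem.erase(it);
                mem.push_back(p);
            }
            continue;
        }
        ++faults;                               // fault: load p
        if (mem.size() == frames)
            mem.erase(mem.begin());             // evict the victim at the front
        mem.push_back(p);
    }
    return faults;
}
```

On the slides' reference string 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 with three frames, this yields 15 faults for FIFO and 12 for LRU, matching the traces above.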

  12. Clock Policy (Second-Chance FIFO)
  • Maintain a circular linked list of pages
  • Use bit (byte) set by hardware for each page
    • The string of use bits provides a history of references (shifted periodically)
  • Replacement policy gives the first page a second chance if its use bit (byte) is set, and clears the bit
  • Enhanced second-chance FIFO
    • Secondary choice made by whether pages have been modified (dirty bit set)
    • In case of a “tie,” pick the unmodified page: it need only be overwritten
    • “Dirty” pages are copied to disk when the paging device is free
    • Soft page fault: the page is reclaimed before it is overwritten
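The clock hand's scan can be sketched in a few lines. This is our illustration, not the slides' code: the Frame struct and function name are made up, and for brevity it operates on an array index ("hand") rather than a true circular linked list.

```cpp
#include <cstddef>
#include <vector>

// One frame on the clock: the resident page and its hardware use bit.
struct Frame { int page; bool used; };

// Second-chance (clock) victim selection: the hand skips frames whose
// use bit is set, clearing the bit as it passes (the "second chance").
// Returns the evicted page and installs new_page in its frame.
int clock_replace(std::vector<Frame>& frames, std::size_t& hand, int new_page) {
    for (;;) {
        Frame& f = frames[hand];
        if (!f.used) {                           // bit clear: victim found
            int victim = f.page;
            f.page = new_page;
            f.used = true;                       // the new page gets a chance too
            hand = (hand + 1) % frames.size();
            return victim;
        }
        f.used = false;                          // second chance: clear and advance
        hand = (hand + 1) % frames.size();
    }
}
```

For example, with frames holding pages 1 (used), 2 (not used), 3 (used) and the hand at frame 0, the hand clears page 1's bit, then evicts page 2.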

  13. Question on page replacement
  • Given 3 frames and page string 3 2 3 1 1 4 2 3
  • Show the # of page faults for Optimal, FIFO, LRU, LFU (* marks a page fault)
    • Optimal: 3* 2* 3 1* 1 4*(1 out) 2 3 – 4 faults
    • FIFO: 3* 2* 3 1* 1 4*(3 out) 2 3*(2 out) – 5 faults
    • LRU: 3* 2* 3 1* 1 4*(2 out) 2*(3 out) 3*(1 out) – 6 faults
    • LFU: 3* 2* 3 1* 1 4*(2 out) 2*(4 out) 3 – 5 faults

  14. Allocation of frames
  • Frames are fungible for demand paging
    • Demand segmentation creates some problems
  • Global or local page allocation policy
    • A local page allocation policy protects processes from a poorly behaved process
      • But this is a multiple-queue system and engenders longer waits
    • A global policy may result in processes performing differently each time they execute
  • Reserve a minimum number of frames per process
    • The lower bound is the minimum # needed by hardware instructions
    • Depends on levels of indirection (discuss)

  15. Thrashing
  • High # of page faults
  • Performance decreases dramatically
  • Characterized by an increase in swapping to the paging disk
  • Characterized by long waits
  • Too many processes in the system
    • The OS may suspend some processes temporarily
    • Remove them from main memory
    • Change their state to nonresident

  16. How to prevent thrashing
  • Processes tend to reference adjacent pages
  • Write programs (if possible) to stay within a page
    • The programmer should not use global variables/gotos
    • The programmer should not use many small subroutines
      • Unless they can be macros; C++ and Ada allow you to define subroutines that expand inline like macros
    • The programmer should traverse two-dimensional arrays in row-major or column-major order, matching how the language stores them (depending on the language and on how smart the compiler is)
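The array-traversal advice can be made concrete. C and C++ store 2D arrays row-major, so a row-first loop touches consecutive addresses and stays within far fewer pages per unit of work than a column-first loop; these two functions (ours, for illustration) compute the same sum but with very different locality.

```cpp
#include <cstddef>
#include <vector>

// Row-first traversal: inner loop walks consecutive elements of one row,
// so references cluster on a few pages at a time (good locality).
long sum_row_major(const std::vector<std::vector<long>>& a) {
    long s = 0;
    for (std::size_t i = 0; i < a.size(); ++i)
        for (std::size_t j = 0; j < a[i].size(); ++j)
            s += a[i][j];
    return s;
}

// Column-first traversal: each inner-loop step jumps to a different row,
// touching a different region (and potentially a different page) every time.
long sum_col_major(const std::vector<std::vector<long>>& a) {
    long s = 0;
    for (std::size_t j = 0; j < a[0].size(); ++j)
        for (std::size_t i = 0; i < a.size(); ++i)
            s += a[i][j];
    return s;
}
```

Both return the same result; only the page-reference pattern differs, which is exactly what matters for the working set.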

  17. C++ inline capability

  class Stack {
  public:
      int pop();   /* avoid an inline definition here */
      …
  };

  inline int Stack::pop() { … }   /* discuss when appropriate */

  18. Working-Set Model
  • The OS should prevent thrashing
  • Use page replacement to change the # of frames allocated to a process
  • A process has some number of pages that it needs to maintain in memory for a 90% (or better) hit ratio
  • This working set changes over time
  • The OS uses the reference bit (byte) to try to adjust to changes in the working set
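The working set can be stated precisely as the set of distinct pages referenced in the last Δ references. The slides don't give a definition in code, so the following is our sketch of that standard formulation, with hypothetical names:

```cpp
#include <cstddef>
#include <set>
#include <vector>

// Working set W(t, delta): the distinct pages among the last `delta`
// references ending at position t of the reference string. The size of
// this set approximates how many frames the process needs right now.
std::set<int> working_set(const std::vector<int>& refs,
                          std::size_t t, std::size_t delta) {
    std::size_t start = (t + 1 >= delta) ? t + 1 - delta : 0;
    return std::set<int>(refs.begin() + start, refs.begin() + t + 1);
}
```

For the reference string 1 2 1 3 1 2, the working set at the last reference with a window of 3 is {3, 1, 2}; shrinking the window to 2 drops page 3, which is how the model tracks a changing working set over time.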

  19. Page-Fault Frequency
  • The OS monitors page-fault frequency (PFF)
    • Can also monitor activity to the swapping disk (or the swapping area of a system disk)
  • If thrashing appears to occur
    • Can increase the # of frames for a poorly behaved process to see if that helps
    • Else suspend the process until more frames are available
  • Maximum and minimum limits for PFF

  20. Page size
  • Small pages (512 bytes: VAX)
  • Large pages (4 MB to 1 GB: Solaris 2, Intel i7)
    • Driven by the growing size of processes, in multimedia apps especially
  • Advantages of small pages
    • Less internal fragmentation
    • Better use of memory, since unused units are overwritten
    • The OS brings in multiple small pages at a time, so there is no extra I/O overhead
    • Higher hit ratio with several small pages rather than 1 large page
  • Disadvantages of small pages
    • Large page map table as the # of pages increases
    • Multimedia processes require a great deal of memory
    • Fewer entries in the page table (i.e., larger pages) generally means more hits in the TLB
    • Multiple page sizes (Alpha) require software support

  21. Lock bit
  • The OS sets the bit on pages to prevent replacement
  • Necessary for kernel processes (if not wired down)
    • Dispatcher
    • Page replacer
  • Very important for I/O interlock pages
    • Otherwise a process waiting for I/O could have the frame it is waiting on deallocated
  • Important for real-time processes
  • Useful for newly brought-in pages
  • Useful for shared pages
  • Available to users in MacOS

  22. Question
  • Which of the following is a sign of page thrashing?
    a. High CPU activity
    b. Swap disk almost full
    c. High page fault frequency
    d. High activity to the swap disk/swap area
    e. High activity to main memory
  • Answer: c and d

  23. Question
  • What hit ratio do we need so that virtual memory does not involve more than a 10% overhead? Assume that memory access time is 20 ns and the cost of handling a page fault is 20 ms. Ignore the TLB. Let p be the probability of a fault.
  • p (20 ms + 40 ns) + (1 - p)(20 ns + 20 ns) <= 1.10 (40 ns)
    p (20,000,000 ns) + 40 ns <= 44 ns
    p (20,000,000 ns) <= 4 ns
    p <= 2 × 10^-7
  • The hit ratio must be at least 1 - 2 × 10^-7 ≈ .9999998
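The bound in slide 23 generalizes: the 40 ns paid on a fault in addition to the service time cancels against the hit cost, leaving p <= overhead × hit cost / fault cost. A small helper (ours, with a made-up name) captures this:

```cpp
// Largest fault probability p that keeps effective access time within
// (1 + overhead) times the two-access hit cost. Derivation, as on the
// slide: p*(fault + hit) + (1-p)*hit <= (1+overhead)*hit, and the p*hit
// terms cancel, giving p <= overhead*hit/fault.
double max_fault_prob(double ma_ns, double fault_ns, double overhead) {
    double hit_cost = 2.0 * ma_ns;          // two memory accesses per reference
    return overhead * hit_cost / fault_ns;
}
```

With the slide's numbers, max_fault_prob(20.0, 20e6, 0.10) is 4 ns / 20,000,000 ns = 2 × 10^-7, matching the derivation above.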

  24. Question
  • What are the advantages of virtual memory?
    • User friendly
    • No problems with resource deadlock over memory
    • Process memory requests can increase dynamically
    • Process memory usage can be larger than actual memory
    • Memory is used transparently
      • In the old days, programmers worried a lot about memory usage
    • Error routines, etc. may never be brought into memory, which saves time and space
    • Assists sharing (so does real memory with paging)

  25. Question
  • What are the disadvantages of virtual memory?
    • Size and complexity of the memory management routines
    • Increase in processor and I/O time in order to handle page faults and virtual memory accesses
    • Vulnerability to thrashing

  26. Virtual memory using segmentation combined with paging
  • Implementation
    • Allocation and deallocation with pages
    • Sharing with segments
  • How does a virtual memory implementation with segmentation and paging differ from virtual memory with paging?
    • Circuitry for delimiting segment size, etc.
    • Bi-level lookup (at least two accesses to memory to obtain the frame # unless there is a TLB hit)
    • Note that many programs today are so large that paging systems would, in any case, be multi-level

  27. Question
  • Which is faster? Why?
    • Fixed partitions
    • Variable-size partitions
    • Virtual memory with paging
    • Virtual memory with segmentation
    • Virtual memory/segmentation with paging
  • Fixed partitions: fastest allocation and deallocation of memory

  28. Question
  • Which scheme is vulnerable to internal fragmentation?
    • Fixed partitions
    • Variable-size partitions
    • Virtual memory with paging
    • Virtual memory with segmentation
    • Virtual memory/segmentation with paging
  • Fixed partitions; paging and segmentation with paging (in the last page only)

  29. Question
  • Which is vulnerable to external fragmentation?
    • Fixed partitions
    • Variable-size partitions
    • Virtual memory with paging
    • Virtual memory with segmentation
    • Virtual memory/segmentation with paging
  • Variable-size partitions, segmentation

  30. Question
  • Which memory schemes have delayed binding?
    • Fixed-size partitions
    • Variable-size partitions
    • Paging
    • Segmentation
    • Segmentation with paging
  • They all have delayed binding if they use base registers
  • Delayed binding also occurs if a process is swapped in and out of memory dynamically for storage compaction (variable-size partitions, segmentation with or without paging), or if pages or segments are swapped in and out with virtual memory
