
Chapter 9: Virtual Memory





  1. Chapter 9: Virtual Memory

  2. Virtual Memory • Can be implemented via: • Demand paging • Demand segmentation

  3. Demand Paging • Bring a page into memory only when it is needed • Page table used for tracking which pages are in memory • Page fault if not in memory • Get empty frame • Swap page into frame • Reset tables • Set validation bit = v • Restart the instruction that caused the page fault
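The fault-handling steps above can be sketched as a toy simulation (a minimal sketch: the page table, backing store, and frame pool here are simplified stand-ins, not a real OS interface):

```python
# Toy demand paging: a page is brought into memory only on first access.
backing_store = {0: "code", 1: "data", 2: "stack"}   # page -> contents on disk
page_table = {p: {"frame": None, "valid": False} for p in backing_store}
free_frames = [0, 1, 2]
memory = {}                                           # frame -> contents

def access(page):
    entry = page_table[page]
    if not entry["valid"]:                    # page fault: page not in memory
        frame = free_frames.pop()             # get an empty frame
        memory[frame] = backing_store[page]   # swap page into the frame
        entry["frame"] = frame                # reset the page table
        entry["valid"] = True                 # set validation bit = v
        # (a real OS would now restart the instruction that faulted)
    return memory[entry["frame"]]

access(1)   # first access faults and loads the page; later accesses hit
```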

  4. Performance of Demand Paging • Page Fault Rate p, with 0 ≤ p ≤ 1 • Overhead: two context switches • switch to a different process while waiting for the page to come in • Effective Access Time (EAT): EAT = (1 – p) x memory access time + p x (page fault overhead + write page out + read page in + restart overhead)

  5. Demand Paging Example • Memory access time = 200 nanoseconds • Average page-fault service time = 8 milliseconds • EAT = (1 – p) x 200 + p x 8,000,000 = 200 + p x 7,999,800 • If one access out of 1,000 causes a page fault, then EAT = 8.2 microseconds. This is a slowdown by a factor of 40!!
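The arithmetic on this slide can be checked directly (plain Python; the 8 ms service time is converted to nanoseconds first):

```python
mem_ns = 200            # memory access time, in nanoseconds
fault_ns = 8_000_000    # 8 ms page-fault service time, in nanoseconds
p = 1 / 1000            # one access out of 1,000 causes a page fault

eat = (1 - p) * mem_ns + p * fault_ns   # = 200 + p * 7,999,800
print(eat)              # ~8199.8 ns, i.e. about 8.2 microseconds
print(eat / mem_ns)     # slowdown of ~41x (the slide rounds to 40)
```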

  6. Process Creation • Virtual memory allows other benefits during process creation: • Process create (fork) • Copy-on-Write

  7. What if we make a change to a frame? • Remember: frame = page in memory • If we change data in a frame, should it be written immediately to the backing store? • Instead, keep track of which frames have been modified • Write them back when the system is not busy… • Or when the frame must be replaced to free some space Dirty Bit: In Memory But Different From Disk

  8. Which pages are in memory? Which to replace? • When a new page must be brought in and memory is full, which page should be replaced?

  9. Page Replacement Algorithms • Want lowest page-fault rate • Don’t replace a page that will be needed again soon • Algorithms • FIFO • LRU (least recently used) • LFU (least frequently used) • MFU (most frequently used)

  10. Performance: FIFO • Can compare by examining a series of page references • In all our examples, the reference string is 7 0 1 2 0 3 0 4 2 3 0 3 2 1 2 0 1 7 0 1 • How many hits? Misses?
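FIFO on this reference string can be counted with a short simulation (assuming 3 frames, which the slide does not state but which matches the fault counts quoted later):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults under FIFO replacement with nframes frames."""
    frames, queue, faults = set(), deque(), 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:        # memory full: evict oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(fifo_faults(refs, 3))   # 15 faults -> only 5 hits in 20 references
```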

  11. If we could see into the future, we could devise an optimal algorithm Replace the page that will not be used for the longest period of time Same string of page references, but 11 hits instead of 5 Optimal Page Replacement
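The optimal (OPT) policy can be simulated because, in a simulation, the future of the reference string is known (again assuming 3 frames, to match the quoted hit counts):

```python
def opt_faults(refs, nframes):
    """Count faults under optimal replacement: evict the page whose
    next use lies farthest in the future (or that is never used again)."""
    frames, faults = set(), 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            def next_use(p):
                try:
                    return refs.index(p, i + 1)   # next reference to p
                except ValueError:
                    return float("inf")           # p is never used again
            frames.discard(max(frames, key=next_use))
        frames.add(page)
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(len(refs) - opt_faults(refs, 3))   # 11 hits (9 faults)
```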

  12. Least Recently Used (LRU) Algorithm • Popular algorithm • Several approaches: two true LRU implementations and two approximations • The two true implementations • Time stamp in the page table for each page, copied from a logical clock register • Known as a counter implementation • Stack • Keep the page references in a stack; the page at the bottom is the least recently used Trick is how to implement LRU

  13. Stack • Page number pushed on stack at first reference • When referenced move to top • Replace page at bottom of stack
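The stack scheme above maps naturally onto an ordered map; a sketch using Python's `OrderedDict` as the stack (assuming 3 frames, as in the earlier examples):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """Count faults under true LRU, using an OrderedDict as the stack."""
    stack, faults = OrderedDict(), 0   # last key = top of stack (most recent)
    for page in refs:
        if page in stack:
            stack.move_to_end(page)         # referenced: move to top
        else:
            faults += 1
            if len(stack) == nframes:
                stack.popitem(last=False)   # replace page at bottom of stack
            stack[page] = True              # push on first reference
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(refs, 3))   # 12 page faults (FIFO had 15)
```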

  14. 12 page faults (FIFO had 15) • Stack implementation • A little slower managing hits (requires 6 pointers to be changed) • Faster at picking the replacement page (no search) LRU Performance

  15. LRU Approximation Algorithms • Searching clock times or managing stacks: too much overhead • With less overhead, we can do a fairly good job • Reference bit • Second chance (clock) algorithm True LRU  too much overhead

  16. Reference bit • When a page is referenced, a bit is set • Periodically cleared • When searching for a page to replace, target pages that have not been referenced • Order unknown: we just know that pages with the bit set have been accessed at some point since it was last cleared • Can improve accuracy with more bits • Periodically shift the reference bit into a register • Gives a snapshot of accesses over time [Figure: page-table entry with page number, reference bit, and shift register]
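The "more bits" idea is the aging technique: each period, shift every page's reference bit into the top of a per-page register. A sketch (the 8-bit register width and the page names are illustrative):

```python
def age(registers, referenced):
    """Shift each page's reference bit into the top of its 8-bit register."""
    for page in registers:
        bit = 0x80 if page in referenced else 0x00
        registers[page] = bit | (registers[page] >> 1)

regs = {"A": 0, "B": 0, "C": 0}
age(regs, {"A"})   # period 1: only A referenced
age(regs, {"B"})   # period 2: only B referenced
# A = 0100_0000, B = 1000_0000, C = 0000_0000
# The smallest register value is the least recently used page.
print(min(regs, key=regs.get))   # 'C': never referenced, best victim
```

Recent references land in the high-order bits, so a plain integer comparison orders pages by recency of use.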

  17. Combination of FIFO and the reference bit Next-victim pointer cycles through the pages (FIFO order) If the reference bit is set, the page has been referenced: clear the bit and move on; otherwise replace it Second-Chance (clock) Algorithm
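A sketch of the victim search in the clock algorithm (the frame contents and reference bits below are illustrative):

```python
def clock_select(frames, ref_bits, hand):
    """Advance the hand until a page with reference bit 0 is found.
    Pages with the bit set get a second chance: clear the bit, move on."""
    while True:
        page = frames[hand]
        if ref_bits[page]:
            ref_bits[page] = 0                # second chance
            hand = (hand + 1) % len(frames)
        else:
            return page, hand                 # victim and new hand position

frames = [3, 5, 7, 9]
ref_bits = {3: 1, 5: 1, 7: 0, 9: 1}
victim, hand = clock_select(frames, ref_bits, 0)
print(victim)   # 7: first page found with its reference bit clear
```

The loop always terminates: after one full sweep every bit has been cleared, at which point the algorithm degenerates to plain FIFO.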

  18. Allocation of Frames • Another efficiency issue: how many frames should each process be allocated? • Too many  wasteful • Too few  inefficient: lots of page faults

  19. There is a minimum • The minimum is the maximum number of frames that a single instruction can possibly access • Example: IBM 370 – 6 pages to handle the SS MOVE instruction: • instruction is 6 bytes, might span 2 pages • 2 pages to handle the source (from) operand • 2 pages to handle the destination (to) operand

  20. Allocation Algorithms • Fixed (set at start of process) or variable (can change over time) • Global (select replacement from set of all frames) or local (select replacement from own pages)

  21. Fixed Allocation • Equal allocation – For example, if there are 100 frames and 5 processes, give each process 20 frames. • Proportional allocation – Allocate according to the size of the process
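Proportional allocation can be written as a_i = (s_i / S) x m, where s_i is the size of process p_i, S is the sum of all process sizes, and m is the total number of frames. A quick sketch (the process sizes are the standard textbook example):

```python
def proportional(sizes, m):
    """Allocate m frames in proportion to process sizes (floor rounding)."""
    S = sum(sizes)
    return [s * m // S for s in sizes]   # a_i = s_i / S * m

# 62 frames shared by a 10-page process and a 127-page process:
print(proportional([10, 127], 62))   # [4, 57]
```

Flooring can leave a frame or two unassigned (one here); a real allocator would hand the remainder out by some tie-breaking rule.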

  22. Why Paging Works • Locality model: locality of reference • References tend to be to memory locations near recent references • What happens when we don’t allocate enough frames to hold a process’s locality? [Figure: instruction and data references clustering into localities over time]

  23. Thrashing • If a process does not have “enough” pages, the page-fault rate is very high. This leads to: • low CPU utilization • operating system thinks that it needs to increase the degree of multiprogramming • another process added to the system • Thrashing  a process is busy swapping pages in and out

  24. Working-Set Model • Allocation • Set a process’s number of allocated frames based on the set of pages it has referenced over some recent period of time
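The working set is the set of pages referenced in the most recent Δ references, and its size is the number of frames the process needs. A sketch (Δ = 10 and the reference string are illustrative):

```python
def working_set(refs, delta):
    """Pages referenced in the last `delta` references of the string."""
    return set(refs[-delta:])

refs = [1, 2, 5, 6, 7, 7, 7, 7, 5, 1]
ws = working_set(refs, 10)
print(sorted(ws))   # [1, 2, 5, 6, 7] -> working-set size = 5 frames
```

If Δ is too small, the window misses part of the current locality; if too large, it spans several localities and over-allocates.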

  25. Keeping Track of the Working Set • Ways to lower the overhead • Multiple reference bits with a timer [Figure: page-table entry with page number, reference bit, and shift register]

  26. Page-Fault Frequency Scheme • Alternative: measure PFF directly • Establish “acceptable” page-fault rate • If actual rate too low, process loses frame • If actual rate too high, process gains frame

  27. End of Chapter 9
