
Chapter 9: Virtual Memory




  1. Chapter 9: Virtual Memory Objectives • Demand Paging • Copy-on-Write • Page Replacement • Allocation of Frames • Thrashing • Memory-Mapped Files

  2. A Virtual view - per process! • Note that from this point forward, the notes will assume a page-based system unless otherwise stated! • [Figure: the Virtual Address Space (logical) of one process, with resident pages mapped to frames in Main Memory (physical), free frames, and paging traffic to and from a Paging Device]

  3. Background • Using virtual memory, a program can be run with only part of its logical address space resident in memory • Creates the illusion of a “virtual” memory that can be larger than real memory but still nearly as fast • potentially as big as a disk, but normally constrained by the address size (2³² = 4 GB for 32-bit addresses) • Logical address space can therefore be much larger than physical address space • Allows address spaces to be shared by several processes • Allows for more efficient process creation • Virtual memory can be implemented via: • Demand paging • Demand segmentation (not covered)

  4. Demand Paging • Bring a page into memory only when it is needed (lazy swapper) • Less I/O needed • Less memory needed • Faster response • More users • A swapper manipulates the entire process, whereas a pager manipulates just the individual pages of a process • Page is needed ⇒ reference it • invalid reference ⇒ abort • not in memory ⇒ bring to memory

  5. Transfer of a Paged Memory to Contiguous Disk Space

  6. Demand Paging • Each page-table entry has a valid–invalid bit associated with it (1 ⇒ in memory, 0 ⇒ not in memory) • Example of a page table snapshot: [Figure: page table listing frame numbers with their valid–invalid bits; resident pages are marked 1 (valid), all others 0 (invalid)]

  7. Steps in Handling a Page Fault

  8. Demand Paging • Page replacement: find some page in memory, but not really in use, and swap it out • needs a replacement algorithm • Performance: we want an algorithm which results in the minimum number of page faults • The same page may be brought into memory several times • Page-fault rate p, with 0 ≤ p ≤ 1.0 • if p = 0, no page faults • if p = 1, every reference is a fault • Effective Access Time (EAT): EAT = (1 − p) × memory access + p × (page fault overhead + [swap page out] + swap page in + restart overhead)

  9. Demand Paging Example • Memory access time = 100 nanoseconds • Average page-fault service time = 25 milliseconds (25,000,000 ns) • Then: EAT = (1 − p) × 100 + p × 25,000,000 = 100 + 24,999,900 × p (in ns)
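The EAT arithmetic above is easy to check directly. A minimal Python sketch, using the slide's figures of a 100 ns memory access and a 25,000,000 ns (25 ms) page-fault service time; the function name is illustrative:

```python
def effective_access_time(p, mem_ns=100, fault_ns=25_000_000):
    """EAT = (1 - p) * memory-access time + p * page-fault service time (in ns)."""
    return (1 - p) * mem_ns + p * fault_ns

# With no faults, EAT is just the memory access time: 100 ns.
no_faults = effective_access_time(0.0)
# Even one fault per 1,000 references swamps the access time (~25,100 ns).
rare_faults = effective_access_time(0.001)
```

This is why even a 0.1% fault rate degrades effective access by a factor of roughly 250: the fault service time is five orders of magnitude larger than a memory access.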

  10. Copy-on-Write • Copy-on-Write (COW) allows both parent and child processes to initially share the same pages in memory • If either process modifies a shared page, only then is the page copied • COW allows more efficient process creation, as only modified pages are copied • Free pages are allocated from a pool of zeroed-out pages

  11. Page Replacement • Prevent over-allocation of memory by modifying page-fault service routine to include page replacement • Use modify (dirty) bit to reduce overhead of page transfers – only modified pages are written to disk • Page replacement completes separation between logical memory and physical memory – large virtual memory can be provided on a smaller physical memory

  12. Basic Page Replacement • Find the location of the desired page on disk • Find a free frame: • if there is a free frame, use it • if there is no free frame, use a page replacement algorithm to select a victim frame • Read the desired page into the (newly) free frame • Update the page and frame tables • Restart the process

  13. Page Replacement Algorithms • Want lowest page-fault rate • Evaluate algorithm by running it on a particular string of memory references (reference string) and computing the number of page faults on that string • We will look at some page replacement schemes such as • FIFO Page Replacement • Optimal Page Replacement • Least Recently Used (LRU) Page Replacement • LRU Approximation Page Replacement • Counting algorithms

  14. First-In-First-Out (FIFO) Algorithm • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 • 3 frames (3 pages can be in memory at a time per process): 9 page faults • 4 frames: 10 page faults • FIFO replacement suffers from Belady’s Anomaly • more frames do not guarantee fewer page faults!!! • [Figure: frame contents after each fault for the 3-frame and 4-frame runs]
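Belady's anomaly on this reference string can be reproduced in a few lines. A minimal sketch, not from the slides (the function name is mine):

```python
from collections import deque

def fifo_faults(refs, nframes):
    """Count page faults for FIFO replacement over a reference string."""
    frames = deque()          # oldest page at the left
    faults = 0
    for page in refs:
        if page not in frames:
            faults += 1
            if len(frames) == nframes:
                frames.popleft()      # evict the oldest resident page
            frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
three = fifo_faults(refs, 3)   # 9 faults
four = fifo_faults(refs, 4)    # 10 faults: more frames, yet more faults
```

Adding a frame raises the fault count from 9 to 10 on this string, which is exactly the anomaly the slide describes.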

  15. FIFO Page Replacement Example

  16. Optimal Algorithm • Replace the page that will not be used for the longest period of time • 4-frames example, reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5: 6 page faults • How do you know the future reference string? • Used for measuring how well your algorithm performs
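The optimal (OPT/MIN) policy can be simulated when the whole reference string is known in advance. A minimal sketch, not from the slides:

```python
def optimal_faults(refs, nframes):
    """OPT: on a fault, evict the page whose next use is farthest in the future."""
    frames, faults = [], 0
    for i, page in enumerate(refs):
        if page in frames:
            continue
        faults += 1
        if len(frames) < nframes:
            frames.append(page)
            continue

        def next_use(p):
            # Distance to the next reference of p; "never again" sorts last.
            future = refs[i + 1:]
            return future.index(p) if p in future else len(refs)

        frames.remove(max(frames, key=next_use))
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
opt = optimal_faults(refs, 4)   # 6 faults, matching the slide
```

Because it needs the future portion of the reference string, this only works offline, which is why OPT serves as a benchmark rather than a practical policy.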

  17. Optimal Page Replacement Example

  18. Optimal Algorithm • An optimal page-replacement algorithm: • Will never suffer from Belady’s anomaly • Will have the lowest page-fault rate • Is difficult to implement because it requires future knowledge of the reference string • Similar to the situation of the SJF CPU-scheduling algorithm • As a result, the optimal algorithm is used as a benchmark

  19. Least Recently Used (LRU) Page Replacement Algorithm • Reference string: 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5 • Counter implementation • Every page entry has a counter; every time the page is referenced through this entry, copy the clock into the counter • When a page needs to be replaced, look at the counters to find the page with the oldest time stamp • [Figure: frame contents after each fault for the 4-frame run]

  20. LRU Page Replacement Example

  21. LRU Algorithm (Cont.) • Stack implementation: keep a stack of page numbers as a doubly linked list • If a page is referenced, move it to the top; at worst, 6 pointers must be changed • No search is needed for the replacement victim, and LRU does not suffer from Belady’s anomaly
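The stack bookkeeping above can be emulated with an ordered map, where "move to the top" becomes a move to the end. A minimal sketch for the slide's reference string with 4 frames (not from the slides):

```python
from collections import OrderedDict

def lru_faults(refs, nframes):
    """LRU via an ordered map kept in least- to most-recently-used order."""
    frames = OrderedDict()
    faults = 0
    for page in refs:
        if page in frames:
            frames.move_to_end(page)          # refresh recency on a hit
        else:
            faults += 1
            if len(frames) == nframes:
                frames.popitem(last=False)    # evict the least recently used page
            frames[page] = True
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
lru = lru_faults(refs, 4)   # 8 faults
```

On this string LRU (8 faults) lands between OPT (6) and FIFO with 4 frames (10), which is the usual ordering.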

  22. Second-Chance (Clock) Page-Replacement Algorithm • A pointer indicates which page needs to be replaced next • When a frame is needed, the pointer is advanced until it finds a page with reference bit = 0 • As the pointer advances, it clears the reference bits
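The hand-advance step of the clock algorithm can be sketched directly. A minimal illustration of the eviction logic only (page table and fault handling omitted; names are mine):

```python
def clock_evict(refbits, hand):
    """Second chance: advance the hand, clearing reference bits, until a 0 bit is found."""
    while refbits[hand]:
        refbits[hand] = 0                 # clear the bit: this page got its second chance
        hand = (hand + 1) % len(refbits)
    return hand                           # index of the victim frame

refbits = [1, 1, 0, 1]                    # one reference bit per resident frame
victim = clock_evict(refbits, hand=0)     # hand clears frames 0 and 1, stops at frame 2
```

The loop always terminates: even if every bit is 1, the hand clears them on its first pass and stops where it started on the second.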

  23. Counting Algorithms • Keep a counter of the number of references that have been made to each page: • Least Frequently Used (LFU) Algorithm: replaces page with smallest count • An actively used page should have a large reference number • Most Frequently Used (MFU) Algorithm • based on the argument that the page with the smallest count was probably just brought in and has yet to be used
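A minimal LFU sketch. One assumption not stated on the slide: ties in the reference count are broken in favor of evicting the earlier-resident page:

```python
from collections import Counter

def lfu_faults(refs, nframes):
    """LFU: on a fault, evict the resident page with the smallest reference count."""
    frames, counts, faults = [], Counter(), 0
    for page in refs:
        counts[page] += 1
        if page in frames:
            continue
        faults += 1
        if len(frames) == nframes:
            # min() evicts the earliest-resident page on a count tie.
            frames.remove(min(frames, key=lambda p: counts[p]))
        frames.append(page)
    return frames, faults

resident, faults = lfu_faults([1, 1, 2, 3], 2)
# Page 1 (count 2) survives; page 2 (count 1) is the LFU victim.
```

MFU would simply replace `min` with `max`, evicting the page with the largest count on the argument the slide gives.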

  24. Frame Allocation: Fixed Allocation • Equal allocation: e.g., if 100 frames and 5 processes, give each process 20 frames • Proportional allocation: allocate according to the size of the process
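Proportional allocation can be computed directly. A minimal sketch using an assumed textbook-style example (a 10-page and a 127-page process sharing 62 frames; these figures are not from this slide):

```python
def proportional_allocation(sizes, total_frames):
    """a_i = floor(s_i / S * m): frames in proportion to each process's size."""
    total_size = sum(sizes)
    return [s * total_frames // total_size for s in sizes]

alloc = proportional_allocation([10, 127], 62)   # [4, 57]
```

Floor division can leave a frame or two unassigned (here 62 − 4 − 57 = 1); a real allocator would hand the remainder out by some rule.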

  25. Frame Allocation: Priority Allocation • Use a proportional allocation scheme based on priorities rather than size • If process Pi generates a page fault, either • select for replacement one of its own frames, or • select for replacement a frame from a process with a lower priority number

  26. Global vs. Local Allocation • Global replacement: • a process selects a replacement frame from the set of all frames • one process can take a frame from another • a process can increase its number of frames at the expense of other, unfortunate processes • Local replacement: • each process selects from only its own set of allocated frames • the number of frames allocated to each process does not change

  27. Thrashing • If a process does not have “enough” pages, the page-fault rate is very high. This leads to: • low CPU utilization • the operating system thinks that it needs to increase the degree of multiprogramming • another process is added to the system • Thrashing ≡ a process is busy swapping pages in and out • Can we limit the effects of thrashing? • Use a local replacement algorithm • Provide a process with as many frames as it needs?? How??

  28. Working-Set Model • The working-set strategy prevents thrashing while keeping the degree of multiprogramming as high as possible • Δ ≡ working-set window ≡ a fixed number of page references, e.g., 10,000 instructions • WSSi (working-set size of process Pi) = total number of distinct pages referenced in the most recent Δ (varies in time) • if Δ is too small, it will not encompass the entire locality • if Δ is too large, it will encompass several localities • if Δ = ∞, it will encompass the entire program • D = Σ WSSi ≡ total demand for frames • if D > m ⇒ thrashing, where m is the total number of frames available • Policy: if D > m, then suspend one of the processes
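The working set at time t is just the set of distinct pages among the last Δ references. A minimal sketch (the function name and sample reference string are illustrative):

```python
def working_set(refs, t, delta):
    """Distinct pages among the last `delta` references, ending at time t."""
    start = max(0, t - delta + 1)
    return set(refs[start:t + 1])

refs = [1, 2, 1, 5, 7, 7, 7, 5, 1]
ws = working_set(refs, t=6, delta=4)   # pages referenced at times 3..6: {5, 7}
wss = len(ws)                          # WSS = 2
```

Summing `wss` over all processes gives D; comparing it against the number of available frames m yields the suspend-or-run decision on the slide.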

  29. Working-set model (Con.)

  30. Working-set model (Con.) • The working-set concept suggests the following strategy to determine the resident set size • Monitor the working set of each process • Periodically remove from the resident set of a process those pages that are not in its working set • When the resident set of a process is smaller than its working set, allocate more frames to it • If not enough free frames are available, suspend the process (until more frames become available) • i.e., a process may execute only if its working set is in main memory • Practical problems with this working-set strategy • measuring the working set of each process is impractical • it would be necessary to time-stamp the referenced page at every memory reference • it would be necessary to maintain a time-ordered queue of referenced pages for each process • the optimal value of Δ is unknown and varies in time • Solution: rather than monitor the working set, monitor the page-fault rate!

  31. The Page-Fault Frequency (PFF) Strategy • Define an upper bound U and a lower bound L for the page-fault rate • Allocate more frames to a process if its fault rate is higher than U • Allocate fewer frames if its fault rate is lower than L • The resident set size should then stay close to the working-set size W • We suspend the process if the PFF > U and no more free frames are available
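The PFF rule reduces to a small adjustment function applied periodically per process. A minimal sketch; the bound values below are assumptions, not from the slide:

```python
def pff_adjust(fault_rate, frames, upper=0.10, lower=0.01):
    """Grow the allocation above U, shrink it below L, otherwise leave it alone."""
    if fault_rate > upper:
        return frames + 1          # faulting too often: needs more frames
    if fault_rate < lower:
        return max(1, frames - 1)  # faulting rarely: can give a frame back
    return frames

grown = pff_adjust(0.20, 10)    # above U: 11 frames
shrunk = pff_adjust(0.001, 10)  # below L: 9 frames
steady = pff_adjust(0.05, 10)   # in band: 10 frames
```

This sidesteps the working-set measurement problem entirely: the fault rate is already observable by the fault handler at no extra cost.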

  32. Memory-Mapped Files • Reading from and writing to files with explicit fseek()s, e.g., so that the information can be shared between processes, can be a pain • It would be easier if we could just map a section of the file into memory and get a pointer to it • Then we could simply use pointer arithmetic to get (and set) data in the file
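Python's mmap module gives exactly the convenience the slide describes: the mapped file behaves like a byte array, so slice indexing replaces the fseek()/read()/write() dance. A minimal sketch (the file name and contents are illustrative):

```python
import mmap
import os
import tempfile

# Create a small file to map.
path = os.path.join(tempfile.mkdtemp(), "demo.bin")
with open(path, "wb") as f:
    f.write(b"hello world")

with open(path, "r+b") as f:
    with mmap.mmap(f.fileno(), 0) as mapping:   # length 0 maps the whole file
        assert mapping[0:5] == b"hello"         # read via slicing, no seek needed
        mapping[6:11] = b"WORLD"                # write through the mapping
        mapping.flush()                         # push the change back to the file

with open(path, "rb") as f:
    data = f.read()                             # the file now reads b"hello WORLD"
```

Two processes mapping the same file with shared semantics would see each other's writes, which is the sharing use case the slide mentions.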

  33. Memory-Mapped Files

  34. Other Considerations • A program-structure example: • Array A[1024, 1024] of integers; each row is stored in one page, laid out as A[1,1], A[1,2], …, A[1,1024]; A[2,1], A[2,2], …, A[2,1024]; …; A[1024,1], A[1024,2], …, A[1024,1024] • Program 1: for j := 1 to 1024 do for i := 1 to 1024 do A[i,j] := 0 ⇒ 1024 × 1024 page faults • Program 2: for i := 1 to 1024 do for j := 1 to 1024 do A[i,j] := 0 ⇒ 1024 page faults
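The two traversal orders can be simulated by treating each row as one page. A minimal sketch, scaled down to a 64 × 64 array with a single resident frame to keep the run fast (the slide's 1024 × 1024 case behaves the same way):

```python
from collections import deque

def traversal_faults(rows, cols, row_major, nframes=1):
    """Faults when each row is one page and only `nframes` frames are resident (FIFO)."""
    frames, faults = deque(), 0
    order = ((r, c) for r in range(rows) for c in range(cols)) if row_major \
        else ((r, c) for c in range(cols) for r in range(rows))
    for r, _ in order:
        if r not in frames:          # touching a non-resident row's page faults
            faults += 1
            if len(frames) == nframes:
                frames.popleft()
            frames.append(r)
    return faults

row_major_faults = traversal_faults(64, 64, row_major=True)    # 64: one fault per row
col_major_faults = traversal_faults(64, 64, row_major=False)   # 64 * 64 = 4096
```

Iterating down columns touches a different page on every access, so with few frames every access faults; iterating along rows faults only once per row.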
