
Page Replacement Algorithm


Presentation Transcript


  1. Page Replacement Algorithm SunMoon university

  2. Performance of Demand Paging • Three Major Components of the Page-Fault Service Time • Service the Page-Fault Interrupt. • Read in the Page. • Restart the Process. ⇒ Disk I/O is so expensive. SunMoon university
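The cost of those three components is usually summarized as an effective access time. A minimal sketch of the calculation, assuming illustrative numbers (200 ns memory access, 8 ms fault service time) that are not taken from the slides:

```python
# Effective access time for demand paging:
#   EAT = (1 - p) * memory_access + p * page_fault_service_time
# The numbers below are illustrative assumptions, not figures from the slides.

def effective_access_time(p, memory_access_ns=200, fault_service_ns=8_000_000):
    """p is the page-fault rate, between 0.0 and 1.0."""
    return (1 - p) * memory_access_ns + p * fault_service_ns

print(effective_access_time(0.0))    # 200.0 ns: no faults
print(effective_access_time(0.001))  # 8199.8 ns: one fault per 1000 accesses, ~40x slower
```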

  3. Page Replacement(1) • Need for Page Replacement • Fig 8.5 • Page Replacement • fig 8.6 • p.249 • replacement can double the page-service time (write the victim out, then read the new page in) • dirty (modify) bit • Read-only pages are never dirty • Reduce I/O time by one-half SunMoon university

  4. SunMoon university

  5. SunMoon university

  6. Page Replacement(2) • To implement Demand Paging • frame-allocation algorithm • how many frames to allocate to each process • page replacement algorithm SunMoon university

  7. Page Replacement Algorithm(1) [1] lowest page-fault rate [2] reference string - random numbers - or obtained by tracing a given system, recording each address, and keeping only the page number [3] the number of page frames available per process SunMoon university
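A reference string can be derived from a raw address trace by keeping only the page numbers. A small sketch, assuming a hypothetical page size of 100 and the common convention of collapsing consecutive references to the same page:

```python
# Keep only the page number of each referenced address; consecutive
# references to the same page are collapsed into one.
# The page size of 100 is an assumption chosen to make the example readable.

def to_reference_string(addresses, page_size=100):
    pages = []
    for addr in addresses:
        page = addr // page_size
        if not pages or pages[-1] != page:   # drop immediate repeats
            pages.append(page)
    return pages

trace = [123, 215, 600, 612, 634, 239, 1756]
print(to_reference_string(trace))  # [1, 2, 6, 2, 17]
```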

  8. Page Replacement Algorithm(2) 1) Random Page Replacement • low overhead • simplest • equal likelihood of being selected for replacement • rarely used SunMoon university

  9. Page Replacement Algorithm(3) 2) FIFO Page Replacement • time-stamp each page as it enters primary storage • choose the page that has been in storage the longest • fig 8.8 • advantages • easy, simple to understand and program • disadvantages • may replace heavily used pages SunMoon university
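A minimal simulation of FIFO replacement over a reference string; the reference string in the example is an assumption for illustration, not one from the slides:

```python
from collections import deque

# FIFO replacement: the victim is always the page that has been resident
# the longest, i.e. the one at the head of the queue.

def fifo_faults(reference_string, num_frames):
    frames = deque()      # oldest resident page on the left
    faults = 0
    for page in reference_string:
        if page in frames:
            continue
        faults += 1
        if len(frames) == num_frames:
            frames.popleft()          # evict the oldest page
        frames.append(page)
    return faults

print(fifo_faults([7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2], 3))  # 10 page faults
```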

  10. SunMoon university

  11. Page Replacement Algorithm(4) • a bad replacement choice (e.g., an active page) • increases the page-fault rate and slows process execution • but does not cause incorrect execution • FIFO Anomaly or Belady's Anomaly • under FIFO page replacement, certain page reference patterns actually cause more page faults when the number of page frames allocated to a process is increased. • fig 8.9 • Dfig 9.1 SunMoon university
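Belady's anomaly can be reproduced directly with the fifo_faults sketch above and the classic reference string 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5:

```python
# With three frames FIFO takes 9 faults on this string; with four frames
# it takes 10 -- more memory, more faults.
anomaly = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_faults(anomaly, 3))  # 9
print(fifo_faults(anomaly, 4))  # 10
```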

  12. SunMoon university

  13. SunMoon university

  14. Page Replacement Algorithm(5) 3) The Principle of Optimality( OPT or MIN ) • The page to replace is the one that will not be used again for the furthest time into the future. • fig 8.10 • requires future knowledge of the reference string. SunMoon university
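A sketch of OPT, assuming the whole reference string is known in advance (which is exactly the future knowledge a real system lacks):

```python
# OPT (MIN): evict the resident page whose next reference lies furthest
# in the future, or that is never referenced again.

def opt_faults(reference_string, num_frames):
    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
            continue
        future = reference_string[i + 1:]
        # Victim: resident page with the most distant (or no) future use.
        victim = max(frames,
                     key=lambda p: future.index(p) if p in future else float('inf'))
        frames[frames.index(victim)] = page
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(opt_faults(ref, 3))  # 9 faults on this classic reference string
```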

  15. SunMoon university

  16. Page Replacement Algorithm(6) 4) LRU (Least Recently Used) • select for replacement the page that has not been used for the longest time • based on when the page was last referenced (cf. FIFO: when the page came into memory) • each page must be time-stamped whenever it is referenced • substantial overhead • approximations to LRU are used in practice • fig 8.11 SunMoon university
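A small LRU simulation; it relies on Python dictionaries preserving insertion order rather than on explicit time stamps:

```python
# LRU replacement: evict the resident page whose last reference is oldest.
# A plain dict keeps insertion order, so re-inserting a page on every
# reference leaves the least recently used page at the front.

def lru_faults(reference_string, num_frames):
    frames = {}                        # page -> None, least recent first
    faults = 0
    for page in reference_string:
        if page in frames:
            frames.pop(page)           # refresh recency
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.pop(next(iter(frames)))   # evict least recently used
        frames[page] = None
    return faults

ref = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
print(lru_faults(ref, 3))  # 12 page faults
```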

  17. SunMoon university

  18. Page Replacement Algorithm(7) • two implementations • counters • a logical clock; requires a search for the smallest time stamp • stack • fig 8.12 • doubly-linked list, micro-code SunMoon university
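A sketch of the counter implementation: every reference records a logical-clock value, and eviction is a linear search for the smallest one (the stack implementation avoids that search by keeping pages in a doubly linked list ordered by recency):

```python
# Counter implementation of LRU: stamp each page with a logical clock on
# every reference; eviction is a linear search for the smallest stamp.

def lru_counter_faults(reference_string, num_frames):
    clock = 0
    last_used = {}                     # page -> logical time of last reference
    faults = 0
    for page in reference_string:
        clock += 1
        if page not in last_used:
            faults += 1
            if len(last_used) == num_frames:
                victim = min(last_used, key=last_used.get)  # the search step
                del last_used[victim]
        last_used[page] = clock
    return faults
```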

  19. SunMoon university

  20. Page Replacement Algorithm(8) 5) LFU (Least Frequently Used) • replace the page that is least frequently used, i.e. least intensively referenced • an approximation to LRU • a counting algorithm • not common • expensive • does not approximate OPT replacement well • may replace pages that were just brought in • Goal ⇒ a reasonable decision ⇒ at low overhead SunMoon university
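A sketch of LFU; whether counts survive eviction is a policy detail, and here the counts of evicted pages are simply dropped:

```python
from collections import Counter

# LFU replacement: evict the resident page with the smallest reference count.
# Counts for evicted pages are dropped here, which is one common choice.

def lfu_faults(reference_string, num_frames):
    counts = Counter()                 # reference counts of resident pages
    faults = 0
    for page in reference_string:
        if page not in counts:
            faults += 1
            if len(counts) == num_frames:
                victim = min(counts, key=counts.get)
                del counts[victim]
        counts[page] += 1
    return faults
```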

  21. Page Replacement Algorithm(9) 6) MFU (Most Frequently Used) • the page with the smallest count was probably just brought in and has yet to be used. 7) LRU Approximation Algorithms [1] Additional-Reference-Bits Algorithm • reference bit • an additional 8-bit byte for each page • at a regular interval (say, every 100 ms) the reference bit R is shifted into the high-order bit of the byte (bits 7 … 0) • R = 1 if referenced; otherwise R = 0. SunMoon university
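A sketch of the additional-reference-bits (aging) step: at each interval the 8-bit history is shifted right and the reference bit is copied into the high-order bit; the page with the smallest resulting value is the LRU-approximation victim. The page names and bit patterns are made up for the example:

```python
# One aging step of the additional-reference-bits algorithm: shift each
# page's 8-bit history right, copy the reference bit into bit 7, then
# clear the reference bit for the next interval.

def age_histories(histories, reference_bits):
    """histories: page -> 8-bit int; reference_bits: page -> 0 or 1."""
    for page in histories:
        histories[page] = (histories[page] >> 1) | (reference_bits[page] << 7)
        reference_bits[page] = 0
    return histories

histories = {'A': 0b10000000, 'B': 0b01110000}
ref_bits  = {'A': 0, 'B': 1}
age_histories(histories, ref_bits)
print(histories)  # {'A': 64, 'B': 184}: A has the smaller value, so A is the victim
```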

  22. Page Replacement Algorithm(10) [2] Second-Chance Algorithm - basically FIFO - when a page has been selected, inspect its reference bit: if 0 ⇒ replace; if 1 ⇒ give it a second chance: clear the reference bit, set the current time in the FIFO queue, and move on to the next FIFO page - fig 8.13 - circular queue: what if a page that was given a second chance has its reference bit set again before it is re-examined? ⇒ is it never replaced?? - if all reference bits are set, every page gets one second chance and the algorithm degenerates to FIFO replacement (worst case) SunMoon university
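A sketch of second chance as a clock over a circular frame list: a page whose reference bit is set gets the bit cleared and is skipped once, and if every bit is set the sweep clears them all and the choice degenerates to FIFO:

```python
# Second chance as a clock over a circular list of frames: a page whose
# reference bit is 1 gets the bit cleared and is passed over once.

def second_chance_faults(reference_string, num_frames):
    frames = []            # treated as a circular queue
    ref_bit = {}           # reference bits of resident pages
    hand = 0               # clock hand / FIFO pointer
    faults = 0
    for page in reference_string:
        if page in ref_bit:
            ref_bit[page] = 1          # hit: just set the reference bit
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            while ref_bit[frames[hand]] == 1:   # give second chances
                ref_bit[frames[hand]] = 0
                hand = (hand + 1) % num_frames
            del ref_bit[frames[hand]]
            frames[hand] = page                 # evict and reuse the slot
            hand = (hand + 1) % num_frames
        ref_bit[page] = 1
    return faults
```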

  23. SunMoon university

  24. Page Replacement Algorithm(11) [3] NUR (Not Used Recently), Enhanced Second-Chance Alg. • prefer to replace a page that has not been changed while in primary storage • requires the addition of two hardware bits per page • reference bit • modified bit (= dirty bit) • low overhead • periodically set all the reference bits to 0 to get a fresh start. • similar to Page 259 (5). SunMoon university

  25. Working-Set Model (1) • Δ ≡ working-set window ≡ a fixed number of page references. Example: 10,000 instructions • WSSi (working set of Process Pi) = total number of pages referenced in the most recent Δ (varies in time) • if Δ too small, it will not encompass the entire locality • if Δ too large, it will encompass several localities • if Δ = ∞, it will encompass the entire program SunMoon university
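A sketch of computing the working-set size at each reference for a sliding window of Δ references; Δ = 4 and the reference string are small made-up values so the sizes are easy to check by hand:

```python
# Working-set size at each reference, for a sliding window of the last
# DELTA references. DELTA = 4 is a deliberately tiny example value.

def working_set_sizes(reference_string, delta=4):
    sizes = []
    for t in range(len(reference_string)):
        window = reference_string[max(0, t - delta + 1): t + 1]
        sizes.append(len(set(window)))     # distinct pages in the window
    return sizes

ref = [1, 2, 1, 5, 1, 2, 3, 4, 4, 4, 3, 4]
print(working_set_sizes(ref))  # [1, 2, 2, 3, 3, 3, 4, 4, 3, 2, 2, 2]
```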

  26. Working-Set Model (2) • D = Σ WSSi ≡ total demand for frames • if D > m ⇒ Thrashing • Policy: if D > m, then suspend one of the processes. SunMoon university

  27. Keeping Track of the Working Set • Approximate with an interval timer + a reference bit • Example: Δ = 10,000 • Timer interrupts after every 5,000 time units. • Keep in memory 2 bits for each page. • Whenever the timer interrupts, copy and then clear the values of all reference bits (set them to 0). • If one of the bits in memory = 1 ⇒ page is in the working set. • Why is this not completely accurate? • Improvement: 10 bits and an interrupt every 1,000 time units. SunMoon university

  28. Page Replacement Algorithm Based on Locality or Working Set(1) • Locality • Processes tend to reference storage in nonuniform, highly localized patterns. • Temporal locality (Time) • storage locations referenced recently are likely to be referenced in the near future • looping, subroutines, stack SunMoon university

  29. Page Replacement Algorithm Based on Locality or Working Set(2) • Spatial locality (Space) • Storage references tend to be clustered. • Once a location is referenced, it is highly likely that nearby locations will be referenced. • array traversals, sequential code execution, etc. • fig 8.15 SunMoon university

  30. SunMoon university

  31. Page Replacement Algorithm Based on Locality or Working Set(3) • Working set theory of program behavior • by Denning • the working set is the collection of pages a process is actively referencing. SunMoon university

  32. Page Replacement Algorithm Based on Locality or Working Set(4) • for a program to run efficiently, its working set of pages must be maintained in primary storage • this minimizes page faults • otherwise it causes "Thrashing" • the program repeatedly requests pages from secondary storage • page faults are maximized • a process is thrashing if it is spending more time paging than executing. • fig 8.14 SunMoon university

  33. Thrashing (1) • If a process does not have "enough" pages, the page-fault rate is very high. This leads to: • low CPU utilization • the operating system thinks that it needs to increase the degree of multiprogramming • another process is added to the system. • Thrashing ≡ a process is busy swapping pages in and out. SunMoon university

  34. Thrashing (2) • Why does paging work? The locality model • Process migrates from one locality to another. • Localities may overlap. • Why does thrashing occur? Σ (size of locality) > total memory size SunMoon university

  35. SunMoon university

  36. Working Set Storage Management Policy(1) • Maintain the working sets of active programs in primary storage. • Working Set W(t, w) at time t • the set of pages referenced by the process during the process-time interval t−w to t. • w : working-set window size • t : the time during which a process has the CPU SunMoon university

  37. Working Set Storage Management Policy(2) • Allocation at t = |W(t, w)| • Replacement: replace page p at time t if p ∉ W(t, w) && p ∈ W(t−1, w) • fetch on demand • Placement: don't care in paging systems SunMoon university

  38. Working Set Storage Management Policy(3) • Dfig 9.2 / 9.3 / 9.4 / 9.5 • Example) SunMoon university

  39. SunMoon university

  40. Working-set model SunMoon university

  41. SunMoon university

  42. SunMoon university

  43. Working Set Storage Management Policy(4) • program 1. window size 2. page-fault frequency policy 3. optimal policy SunMoon university

  44. Other Considerations (1) [1] Page Size • The smaller the page size • the more pages and page frames • the larger the page tables ⇒ table fragmentation • The larger the page size • the more memory waste • but fewer, larger I/O transfers from disk ⇒ argues for a large page SunMoon university
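A back-of-the-envelope sketch of the trade-off on this slide: smaller pages shrink internal fragmentation but grow the page table. The 1 MiB process size and 4-byte page-table entries are assumptions for illustration only:

```python
# Rough per-process overhead as a function of page size: page-table space
# plus average internal fragmentation (half a page). Process size of 1 MiB
# and 4-byte page-table entries are illustrative assumptions.

def overhead_bytes(page_size, process_size=2**20, entry_size=4):
    table = (process_size // page_size) * entry_size   # page-table space
    internal_frag = page_size // 2                      # average waste
    return table + internal_frag

for size in (512, 4096, 32768):
    print(size, overhead_bytes(size))
# 512   -> 8448   (big table, little fragmentation)
# 4096  -> 3072   (the sweet spot for these numbers)
# 32768 -> 16512  (small table, lots of fragmentation)
```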

  45. Other Considerations (2) • the property of locality of reference • needs smaller pages ⇒ a tighter working set • Internal fragmentation • the smaller the page size, the less the internal fragmentation SunMoon university

  46. Other Considerations (3) [2] Global versus Local Allocation SunMoon university

  47. Other Considerations (4) [3] Prepaging vs. Anticipatory Paging • Pure demand-paging system • a large number of page faults when a process is started • Prepaging • an attempt to prevent this high level of initial page faulting • bring into memory at one time all the pages that will be needed. SunMoon university

  48. Other Considerations (5) • In the working-set model, • when a process is suspended (due to I/O, a lack of free frames) • remember its Working Set • when it is resumed (I/O completion, enough free frames), prepage the remembered working set • Advantages • cost of prepaging < cost of servicing the corresponding page faults • Disadvantages • the cost when many of the prepaged pages are not used. SunMoon university

  49. Other Considerations (6) • Anticipatory Paging • Advantages • if the decisions are correct, execution time is reduced. • otherwise, the H/W cost of a wrong guess is low. SunMoon university

  50. Other Considerations (7) [4] Program Structure • Demand paging is designed to be transparent to the user program. • Pascal Program • Stack – "top", high locality • Hash Table – bad locality • Pointer – tends to randomize access to memory. SunMoon university
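The standard illustration of how program structure affects paging is traversing a two-dimensional array stored in row-major order: visiting it column by column touches a different page on nearly every reference. A sketch assuming a 128×128 array where each row occupies exactly one page and only one frame is available:

```python
# A 128x128 array laid out row-major, one row per page, one frame available:
# row-by-row traversal faults once per row; column-by-column traversal
# faults on every single element.

N = 128

def count_faults(visit_order):
    current_page, faults = None, 0
    for i, j in visit_order:
        page = i                       # row index == page number here
        if page != current_page:
            faults += 1
            current_page = page
    return faults

row_major = [(i, j) for i in range(N) for j in range(N)]
col_major = [(i, j) for j in range(N) for i in range(N)]
print(count_faults(row_major))  # 128
print(count_faults(col_major))  # 16384
```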
