Page Replacement Policies in Virtual Memory Systems
Explore optimal, FIFO, random, and least-recently-used page replacement policies in virtual memory systems along with their impact on hit rates and performance. Learn about approximation algorithms, implementation, and strategies to prevent thrashing.
Beyond Physical Memory: Policies
Ansu Na (asna@archi.snu.ac.kr), School of Computer Science and Engineering, Seoul National University
Introduction
• Introduction
• Page Replacement Policies
• Optimal Replacement Policy
• FIFO Policy
• Random Policy
• Least-Recently-Used Policy
• Approximation: Clock Algorithm
• Other Kinds of Policies for Virtual Memory
• Page Selection Policy
• Cleaning Policy
Optimal Replacement Policy

Page access : 0 1 2 0 1 3 0 3 1 2 1
Result      : M M M H H M H H H M H   (M = miss, H = hit; 3 frames)

Hit rate with compulsory misses: 54.5% (6/11)
Hit rate without compulsory misses: 85.7% (6/7)
• Leads to the fewest number of misses
• Replaces the page that will be accessed furthest in the future
• Impossible to implement (it requires knowledge of future accesses), but serves as a comparison point for realistic policies
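The optimal policy can only be simulated offline, when the whole access trace is known in advance. A minimal Python sketch (function and variable names are my own; the trace and the 3-frame memory match the example above):

```python
def simulate_optimal(accesses, frames):
    """Belady's optimal policy: evict the resident page whose next
    use lies furthest in the future (never-used-again pages first)."""
    memory, hits = [], 0
    for i, page in enumerate(accesses):
        if page in memory:
            hits += 1
            continue
        if len(memory) == frames:
            future = accesses[i + 1:]
            def next_use(p):
                # Distance to the next access of p, infinite if never used again.
                return future.index(p) if p in future else float("inf")
            memory.remove(max(memory, key=next_use))
        memory.append(page)
    return hits

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(simulate_optimal(trace, 3))  # 6 hits out of 11 accesses (54.5%)
```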
FIFO Policy

Page access : 0 1 2 0 1 3 0 3 1 2 1
Result      : M M M H H M M H M M H   (M = miss, H = hit; 3 frames)

Hit rate with compulsory misses: 36.4% (4/11)
Hit rate without compulsory misses: 57.1% (4/7)
• Is simple to implement
• Replaces the page that entered memory first, regardless of how recently it was used
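A FIFO simulation is even shorter, since no recency bookkeeping is needed (a sketch with my own names, on the same trace):

```python
from collections import deque

def simulate_fifo(accesses, frames):
    """FIFO policy: on a miss with full memory, evict the first-in page."""
    memory, hits = deque(), 0
    for page in accesses:
        if page in memory:
            hits += 1            # hits do NOT reorder the queue
            continue
        if len(memory) == frames:
            memory.popleft()     # oldest arrival is evicted first
        memory.append(page)
    return hits

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(simulate_fifo(trace, 3))  # 4 hits out of 11 accesses (36.4%)
```

Note the difference from LRU: a hit does not move the page in the queue, which is exactly why FIFO can evict a heavily used page.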
Random Policy

Page access : 0 1 2 0 1 3 0 3 1 2 1
Result      : M M M H H M M H M H H   (one example run; M = miss, H = hit; 3 frames)

Hit rate in this run: 45.5% (5/11)
• Is simple to implement
• Replaces a randomly selected page, so results vary from run to run
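Because the victim is chosen at random, a single run is not representative; averaging many seeded runs gives a fairer picture. A sketch (the helper name and the number of trials are illustrative):

```python
import random

def simulate_random(accesses, frames, rng):
    """Random policy: on a miss with full memory, evict a random resident page."""
    memory, hits = [], 0
    for page in accesses:
        if page in memory:
            hits += 1
        else:
            if len(memory) == frames:
                memory.pop(rng.randrange(frames))
            memory.append(page)
    return hits

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
# Average the hit count over many seeded runs; any single run may differ.
trials = [simulate_random(trace, 3, random.Random(seed)) for seed in range(1000)]
print(sum(trials) / len(trials))  # average hit count over 1000 runs
```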
Least Recently Used (LRU) Policy

Page access : 0 1 2 0 1 3 0 3 1 2 1
Result      : M M M H H M H H H M H   (M = miss, H = hit; 3 frames)

Hit rate on this trace: 54.5% (6/11), matching optimal
• Tries to predict future accesses from past behavior
• Exploits locality
• Replaces the least recently accessed page
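LRU can be sketched with a list kept in recency order (names are my own; a real kernel cannot afford this bookkeeping per access, which is the point of the next slides):

```python
def simulate_lru(accesses, frames):
    """LRU policy: memory is kept in recency order; memory[0] is least recent."""
    memory, hits = [], 0
    for page in accesses:
        if page in memory:
            hits += 1
            memory.remove(page)   # re-appended below as most recent
        elif len(memory) == frames:
            memory.pop(0)         # evict the least recently used page
        memory.append(page)
    return hits

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(simulate_lru(trace, 3))  # 6 hits, matching optimal on this trace
```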
Implementing LRU Policy
• Hardware updates a per-page time stamp on every memory reference
• On replacement, the OS selects the victim by scanning all entries for the oldest time stamp
• With 4GB of memory and 4KB pages, that is about 1 million entries to scan: the cost is very expensive
Approximation of LRU: Clock Algorithm
• Uses a reference bit per page to track recent access
• Page access: hardware sets the reference bit
• Page fault: the OS sweeps like a clock hand, clearing set reference bits until it finds an unreferenced page to evict
• Avoids the expensive full scan of the time-stamp approach
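A common single-reference-bit clock can be sketched as below. This is one variant among several; since clock only approximates LRU, its hit count on the example trace need not match LRU's:

```python
def simulate_clock(accesses, frames):
    """Clock algorithm: one reference bit per frame approximates LRU."""
    pages = [None] * frames   # page resident in each frame
    ref = [False] * frames    # reference bits (set by hardware on access)
    hand = 0
    hits = 0
    for page in accesses:
        if page in pages:
            hits += 1
            ref[pages.index(page)] = True   # HW sets the bit on every access
            continue
        # Page fault: sweep the hand, clearing set bits, until a frame
        # with a clear reference bit is found; evict the page there.
        while ref[hand]:
            ref[hand] = False
            hand = (hand + 1) % frames
        pages[hand] = page
        ref[hand] = True
        hand = (hand + 1) % frames
    return hits

trace = [0, 1, 2, 0, 1, 3, 0, 3, 1, 2, 1]
print(simulate_clock(trace, 3))  # 4 hits with this variant (LRU gets 6)
```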
Dirty Bit
• Indicates whether the page has been modified
• Prefer to evict clean pages
• Replacing a dirty page is expensive because it must be written back to disk first
Workload: No-Locality
• Page access pattern is random
• Number of pages: 100
• Number of accesses: 10,000
• The optimal policy provides a much better hit ratio
• All realistic policies provide a similar hit ratio, determined mainly by how many of the pages fit in memory
Workload: 80-20
• Pages differ in access hotness: 80% of the accesses go to 20% of the pages (hot), the remaining 20% of accesses go to the other 80% (cold)
• LRU provides a higher hit ratio than FIFO and Random (with 40 blocks, roughly 82% versus 77%)
• Whether the improvement matters depends on the miss penalty (AMAT)
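The dependence on miss penalty can be made concrete with the average-memory-access-time formula AMAT = T_M + P_miss * T_D. The DRAM and disk costs below are assumed round numbers, not figures from the slide:

```python
# Assumed costs: 100 ns per memory access, 10 ms per disk access.
T_M = 100e-9
T_D = 10e-3

def amat(hit_rate):
    """Average memory access time: every access pays T_M,
    and each miss additionally pays the disk penalty T_D."""
    return T_M + (1.0 - hit_rate) * T_D

# A modest hit-rate gap (77% vs 82%) still changes AMAT noticeably,
# because the disk penalty dominates:
print(f"{amat(0.77) * 1e3:.2f} ms vs {amat(0.82) * 1e3:.2f} ms")  # 2.30 ms vs 1.80 ms
```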
Workload: Looping Sequential
• 50 pages are accessed in a looping sequential manner
• Access pattern: 0, 1, 2, 3, … 48, 49, 0, 1, 2, 3, …
• With fewer frames than pages in the loop, the hit rates of LRU and FIFO are miserable: each policy evicts exactly the page that will be needed next
• The Random policy has no such pathological corner case
• Database systems show similar access patterns
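The looping worst case is easy to reproduce in simulation: with even one frame fewer than the loop length, LRU's hit rate drops to zero. A sketch (names are my own):

```python
def lru_hit_rate(accesses, frames):
    """LRU simulation; memory is kept in recency order (memory[0] = LRU)."""
    memory, hits = [], 0
    for page in accesses:
        if page in memory:
            hits += 1
            memory.remove(page)   # re-appended below as most recent
        elif len(memory) == frames:
            memory.pop(0)         # evict the least recently used page
        memory.append(page)
    return hits / len(accesses)

loop = list(range(50)) * 10       # 0, 1, ..., 49, repeated 10 times
print(lru_hit_rate(loop, 49))     # 0.0: LRU always evicts the next page needed
print(lru_hit_rate(loop, 50))     # 0.9: only the first pass misses
```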
Other Policies for Virtual Memory
• Page selection policy
• Demand paging: bring in a page when it is accessed
• Prefetching: bring in some pages speculatively
• Cleaning (writing-to-disk) policy
• One at a time: write each page out to disk immediately
• Clustering: collect some writes and then write them out together
Thrashing
• The system pages constantly and rapidly
• Occurs when the working sets of the running processes exceed physical memory
• And when the workload's locality does not hold well
(Figure: processes A, B, and C competing for limited physical memory)
Thrashing Solutions
• Admission control: run only a subset of processes, keeping the combined working set smaller than physical memory while the rest stay pending; imposes some overhead
• Out-of-memory killer: kills a memory-intensive process; can be problematic
(Figure: process B held pending under admission control; process B killed by the OOM killer)
Summary
• Page replacement policy is critical for performance
• Approximation is the realistic approach: LRU and the clock algorithm
• Scan resistance is important
• The author's best solution: buy more memory, because the page-fault penalty is too expensive (disk is too slow)
• The new trend: storage is getting faster, so the policies need to be revisited
Source: http://www.fusionio.com/load/-media-/302wu5/docsLibrary/PX600_DS_Final_v3.pdf (retrieved 2015-05-11)