
Chapter 3 Memory Management —— Page Management


Presentation Transcript


  1. Chapter 3 Memory Management—— Page Management • Li Wensheng • wenshli@bupt.edu.cn

  2. Outline • Data Structure • Page Scanner Operation • Page-out Algorithm • Hardware Address Translation Layer

  3. Pages—The Basic Unit of Solaris Memory • Physical memory is divided into pages. • A page’s identity is its vnode/offset pair. • The hardware address translation (HAT) and address space layers manage the mapping between a physical page and its virtual address space.

  4. The Page Structure
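
As a rough illustration of what this structure holds, a simplified sketch in C follows. The field names mirror common Solaris conventions (p_vnode, p_offset, p_hash, and so on), but this is an illustration, not the exact kernel definition.

    /* Simplified sketch of the Solaris page structure (page_t).
     * Types and field names follow Solaris conventions but are
     * illustrative, not copied from the kernel headers. */
    typedef unsigned long long u_offset_t;   /* 64-bit file offset            */
    typedef unsigned long      pfn_t;        /* physical page frame number    */

    struct page {
        u_offset_t    p_offset;   /* offset into the vnode: half of the identity  */
        struct vnode *p_vnode;    /* vnode the page belongs to: the other half    */
        struct page  *p_hash;     /* next page on the global vnode/offset hash chain */
        struct page  *p_vpnext;   /* next page cached under the same vnode        */
        struct page  *p_vpprev;   /* previous page cached under the same vnode    */
        struct page  *p_next;     /* free-list / cache-list linkage               */
        struct page  *p_prev;
        void         *p_mapping;  /* head of the HAT mapping list (machine dependent) */
        pfn_t         p_pagenum;  /* physical page frame number                   */
        unsigned int  p_share;    /* number of translations sharing the page      */
        unsigned int  p_state;    /* free / locked / modified flags (simplified)  */
    };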

  5. The Page Hash List • global hash list -- an array of pointers to linked lists of pages • The VM system hashes pages that have an identity (a vnode/offset pair) onto the global hash list so that they can be located by vnode/offset. • Three page functions search the global page hash list: • page_find() • page_lookup() • page_lookup_nowait()
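
A minimal sketch of how such a lookup works, using the fields from the structure sketch above. The bucket count, hash mixing, and helper name are assumptions for illustration; the real kernel's hashing lives behind the PAGE_HASH_FUNC macro and the three functions listed above.

    /* Illustrative vnode/offset hash lookup, in the spirit of page_find(). */
    #define PAGE_HASH_SZ  4096        /* assumed power-of-two bucket count */
    #define PAGESHIFT     13          /* 8-Kbyte pages                     */

    static struct page *page_hash_tbl[PAGE_HASH_SZ];

    static struct page *
    page_hash_search(struct vnode *vp, u_offset_t off)
    {
        unsigned int h = (unsigned int)(((unsigned long)vp >> 7) + (off >> PAGESHIFT));
        struct page *pp;

        for (pp = page_hash_tbl[h & (PAGE_HASH_SZ - 1)]; pp != NULL; pp = pp->p_hash) {
            if (pp->p_vnode == vp && pp->p_offset == off)
                return (pp);          /* page with this identity is in memory   */
        }
        return (NULL);                /* not found: caller may create the page  */
    }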

  6. Locating Pages by Vnode/Offset Identity

  7. MMU-Specific Page Structures • need to keep machine-specific data about every page, e.g. the HAT information that describes how the page is mapped by the MMU. • struct machpage • The contents of the machine-specific page structure are hidden from the generic kernel. • only the HAT machine-specific layer can see or manipulate its contents

  8. Machine-Specific Page Structures: sun4u Example

  9. Physical Page Lists • a segmented global physical page list, consisting of segments of contiguous physical memory. • Contiguous physical memory segments are added during system boot. • Segments can also be added and deleted dynamically when physical memory is added or removed while the system is running.

  10. Arrangement of the Physical Page Lists

  11. Free List and Cache List • Both hold pages that are not mapped into any address space and that have been freed by page_free(). • free list • has no vnode/offset identity associated with its pages • pages are put on the free list when a process exits • is generally very small • cache list • its pages still have a vnode/offset identity • holds pages from seg_map free-behind and from seg_vn executables and libraries, so they can be reclaimed for reuse

  12. The Page-Level Interfaces

  13. The Page-Level Interfaces (Cont.)
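
As a hedged usage sketch of the interfaces named on these slides: the call sequence below paraphrases page_lookup(), page_create_va(), page_unlock(), and page_free() with abbreviated argument lists and locking details omitted; it is not a verbatim kernel fragment.

    /* Sketch: find or create the page for (vp, off). */
    page_t *pp;

    pp = page_lookup(vp, off, SE_SHARED);      /* look up by vnode/offset identity */
    if (pp == NULL) {
        /* Not in memory: create a page for this identity.  PG_WAIT makes the
         * call subject to the page throttle (block, not fail, when memory is
         * short); PG_EXCL returns the page exclusively locked. */
        pp = page_create_va(vp, off, PAGESIZE, PG_WAIT | PG_EXCL, seg, vaddr);
        /* ... initialize the page, e.g. read its contents in from vp ... */
    }

    /* ... use the page, then drop the lock ... */
    page_unlock(pp);

    /* When the page is no longer needed at all, page_free() puts it on the
     * cache list (identity kept) or the free list (identity destroyed). */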

  14. The Page Throttle • implemented in the page_create() and page_create_va() functions • causes page creates to block when the PG_WAIT flag is specified, that is, when available free memory is less than the system global throttlefree • throttlefree is set to the same value as minfree by default • memory allocated through the kernel memory allocator specifies PG_WAIT and is subject to the page-create throttle
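
A sketch of the throttle test itself; freemem and throttlefree are the real globals, while the blocking helper is hypothetical and stands in for the kernel's condition-variable wait.

    /* Sketch of the page-create throttle inside page_create_va(). */
    if (freemem < throttlefree) {
        if ((flags & PG_WAIT) == 0)
            return (NULL);                 /* caller did not ask to wait: fail */
        while (freemem < throttlefree) {
            /* Ask for memory and sleep until the scanner frees some.
             * page_needfree_block() is a hypothetical helper standing in
             * for the real cv_wait()-based loop. */
            page_needfree_block();
        }
    }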

  15. Page Sizes

  16. Page Coloring • page placement policy affects processor performance • The optimal placement of pages often depends on the memory access patterns of the application: • in a random order • in some sort of strided order • How can page placement affect performance? • The UltraSPARC-I & -II implementations • The L1 cache is 16 Kbytes • The L2 (external) cache can vary between 512 Kbytes and 8 Mbytes • The L2 cache is arranged in lines of 64 bytes, and transfers are done to and from physical memory in 64-byte units.

  17. Page Coloring (Cont.) • Assume: • we have a 32-Kbyte L2 cache • page size of 8 Kbytes • four page-sized slots on the L2 cache • The cache does not necessarily read and write 8-Kbyte units from memory; it does that in 64-byte chunks, so the 32-Kbyte cache has 512 addressable 64-byte slots.

  18. Page Coloring (Cont.) • Offsets 0 and 32768 map to the same cache line. If we were now to access these two addresses repeatedly, a cache ping-pong effect occurs. • We program to virtual memory rather than physical memory, so the OS must provide a sensible mapping between virtual memory and physical memory.
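
The ping-pong effect falls straight out of the index arithmetic of a direct-mapped cache. A small sketch using the geometry from the example (32-Kbyte cache, 64-byte lines):

    /* Which line of a direct-mapped L2 cache does an address fall into? */
    #define CACHE_SIZE  (32 * 1024)        /* 32-Kbyte cache      */
    #define LINE_SIZE   64                 /* 64-byte cache lines  */

    static unsigned int
    cache_line_index(unsigned long addr)
    {
        return ((addr % CACHE_SIZE) / LINE_SIZE);    /* 512 lines: 0..511 */
    }

    /* cache_line_index(0)     == 0
     * cache_line_index(32768) == 0   -> address 32768 shares a line with
     * address 0, so alternating between them evicts each other: ping-pong. */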

  19. Page Coloring (Cont.) • physical pages are assigned to an address space in the order they appear on the free list • page coloring algorithm • the free list of physical pages is organized into specifically colored bins, one color bin for each slot in the physical cache • When a page is put on the free list, the page_free() algorithms assign it to a color bin. • When a page is consumed from the free list (by the page_create_va() function), the virtual-to-physical algorithm takes the page from a specifically colored bin.
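
A minimal sketch of the color-bin bookkeeping described above; the array layout and the virtual-address hash are illustrative, and the real page_free()/page_create_va() paths are considerably more involved.

    /* Illustrative page-coloring free list: one bin per cache "color". */
    #define PAGE_COLORS  4        /* 32-Kbyte cache / 8-Kbyte pages = 4 colors */
    #define PAGESHIFT    13       /* 8-Kbyte pages                              */

    static struct page *page_freelist_bin[PAGE_COLORS];

    /* page_free() side: file the page under the color of its physical address. */
    static unsigned int
    page_color(const struct page *pp)
    {
        return ((unsigned int)pp->p_pagenum & (PAGE_COLORS - 1));
    }

    /* page_create_va() side (default algorithm): pick the bin whose color
     * matches the virtual address, so that virtual and physical addresses
     * index the same cache lines; fall back to neighboring bins if empty. */
    static unsigned int
    wanted_color(unsigned long vaddr)
    {
        return ((unsigned int)(vaddr >> PAGESHIFT) & (PAGE_COLORS - 1));
    }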

  20. Page Coloring (Cont.) • The kernel supports a default algorithm and two optional algorithms. • The default algorithm was chosen according to the following criteria: • Fairly consistent, repeatable results • Good overall performance for the majority of applications • Acceptable performance across a wide range of applications

  21. Solaris Page Coloring Algorithms

  22. Outline • Data Structure • Page Scanner Operation • Page-out Algorithm • Hardware Address Translation Layer

  23. Page Scanner • The memory management daemon that manages system-wide physical memory • When there is a memory shortage, the page scanner runs to steal memory from address spaces by: • taking pages that haven't been used recently • syncing them up with their backing store • freeing them • If paged-out virtual memory is required again, a memory page fault occurs.

  24. Page Scanner (Cont.) • The balance between page stealing and page faults determines which parts of virtual memory remain backed by physical memory and which are moved out to swap. • global page replacement / local page replacement • The subtleties of which pages are stolen govern the memory allocation policies and can affect different workloads in different ways. • Enhancements to minimize page stealing from extensively shared libraries and executables • Priority paging to prevent application, shared library, and executable paging on systems with ample memory.

  25. Page Scanner Operation • tracks page usage by reading per-page hardware bits from the MMU • two bits for each page: the reference bit and the modify bit • awakened when the amount of memory on the free-page list falls below a system threshold • typically 1/64th of total physical memory • scans through pages in physical page order • looking for pages that haven't been used recently, to page out to the swap device and free

  26. Two-handed Clock Algorithm • The front hand clears the referenced and modified bits for each page. • The back hand inspects the referenced and modified bits some time later. • Pages that haven't been referenced or modified are paged out and freed. • The scan rate is controlled by the amount of free memory on the system. • The gap between the front and back hand is fixed by a boot-time parameter, handspreadpages.

  27. Outline • Data Structure • Page Scanner Operation • Page-out Algorithm • Hardware Address Translation Layer

  28. Introduction to the Page-out Algorithm • Steals pages when free memory is lower than lotsfree • Scanner runs: • starting at slowscan (pages/sec) • four times per second when memory is short • awoken by the page allocator if memory is very low • Puts stolen pages out to their "backing store" • Approximates a Least Recently Used (LRU) policy • Kernel threads do the scanning

  29. Page Scanner Parameters

  30. Scan Rate Parameters (Assuming No Priority Paging) • [Figure: scan rate plotted against free memory] • fastscan: defaults to ½ of physical memory • lotsfree: 1/64 of memory • slowscan: defaults to 100 • Scanning starts at slowscan and scans faster as the amount of free memory approaches 0

  31. Scan Rate Parameters Calculation • lotsfree is calculated at startup as 1/64th of memory • the slowscan parameter is 100 by default on Solaris systems • fastscan is set to total physical memory / 2, capped at 8192 pages/sec (64 Mbytes/s with 8-Kbyte pages) • If total physical memory is 1 Gbyte (8-Kbyte pages), then: • lotsfree = 2048 pages, fastscan = 8192 pages/sec • If free memory falls to 12 Mbytes (1536 pages), the scan rate is interpolated between slowscan and fastscan, giving roughly 2100 pages/sec (see the sketch below)
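
The interpolation itself is linear between slowscan (at lotsfree) and fastscan (at zero free memory). A sketch of the arithmetic, passing the tunables in as plain variables:

    /* Scan rate (pages/sec) as free memory falls from lotsfree toward 0. */
    static unsigned long
    scan_rate(unsigned long freemem, unsigned long lotsfree,
              unsigned long slowscan, unsigned long fastscan)
    {
        if (freemem >= lotsfree)
            return (0);                     /* enough memory: scanner idle */
        return (slowscan +
            (fastscan - slowscan) * (lotsfree - freemem) / lotsfree);
    }

    /* 1-Gbyte system with 8-Kbyte pages:
     *   lotsfree = 2048 pages, slowscan = 100, fastscan = 8192 pages/sec.
     * With 1536 free pages (12 Mbytes):
     *   100 + 8092 * (2048 - 1536) / 2048  =  100 + 2023  ~=  2100 pages/sec. */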

  32. Not Recently Used Time • The time between the front hand and the back hand passing over a page • short time → the most active pages remain intact • long time → only the largely unused pages are stolen • varies from just a few seconds to several hours, according to: • the number of pages between the front and back hand (handspreadpages) • the scan rate • Example • scan rate: 2000 pages/sec • hand spread: 8192 pages • clear/check time: 8192 / 2000 ≈ 4 seconds

  33. Shared Library Optimizations • prevents the scanner from stealing pages from extensively shared libraries • looks at the share reference count for each page • if the page is shared by more than a certain number of address spaces, it is skipped during the page scan operation • threshold parameter: po_share • range 8 to 134217728; by default it starts at 8 • A page shared by more than po_share processes will be skipped • po_share is adjusted dynamically as the scanner runs
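
A sketch of the skip test the scanner applies to each page; p_share is the per-page share count kept with the page structure, and the surrounding scan loop is omitted.

    extern unsigned int po_share;     /* threshold tunable (defaults to 8) */

    /* Shared-library optimization: should the scanner leave this page alone? */
    static int
    scanner_skip_shared(const struct page *pp)
    {
        /* Pages mapped into more than po_share address spaces are skipped. */
        return (pp->p_vnode != NULL && pp->p_share > po_share);
    }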

  34. The Priority Paging Algorithm • Purpose: overcome adverse behavior that results from the memory pressure caused by the file system. • puts a higher priority on a process's pages: • its heap, stack, shared libraries, and executables • permits the scanner to: • pick file system cache pages only when ample memory is available • steal application pages only when there is a true memory shortage

  35. The Priority Paging Algorithm (Cont.) • a new paging parameter, cachefree (set to twice lotsfree when priority paging is enabled) • When the amount of free memory lies between cachefree and lotsfree, the page scanner steals only file system cache pages • The scanner wakes up when memory falls below cachefree rather than below lotsfree
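
A sketch of the resulting page-selection rule. IS_ANON_PAGE() is an assumed helper that asks whether a page is anonymous (process) memory rather than file system cache; the real test involves the page's vnode type and executable/library flags, so treat the whole function as an illustration.

    /* Priority paging: which pages may the scanner steal at this level
     * of free memory?  (cachefree > lotsfree) */
    static int
    page_is_eligible(const struct page *pp, unsigned long freemem)
    {
        if (freemem < lotsfree)
            return (1);                 /* true shortage: any page is fair game */
        /* Between lotsfree and cachefree: file system cache pages only. */
        return (!IS_ANON_PAGE(pp));     /* assumed helper: anon heap/stack page? */
    }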

  36. Scan Rate Interpolation with the Priority Paging Algorithm • [Figure: scan rate vs. free memory, with cachefree and lotsfree breakpoints] • Between cachefree and lotsfree, the scanner pages only the file system cache

  37. Page Scanner CPU Utilization Clamp • Purpose: to prevent the page-out daemon from using too much processor time • Two parameters: • min_percent_cpu, default 4% of a single CPU • max_percent_cpu, default 80% of a single CPU • CPU time can be used: • From min_percent_cpu to max_percent_cpu • min_percent_cpu when free memory is at lotsfree (cachefree with priority paging enabled) • max_percent_cpu if free memory were to fall to zero

  38. Parameters That Limit Pages Paged Out • maxpgio • limits the rate at which I/O is queued to the swap devices • defaults to 40 or 60 I/Os per second • often set to 100 times the number of swap spindles • maxpgio can also indirectly affect file system throughput

  39. Page Scanner Implementation • implemented as two kernel threads • Page scanner thread: scans pages • Page-out thread: pushes the dirty pages queued for I/O

  40. Page Scanner Architecture

  41. Scanner schedpaging() • woken up: • called four times per second by a callout • triggered by the clock() thread if memory falls below minfree • triggered by the page allocator if memory falls below throttlefree • calculates two setup parameters for the page scanner thread: • the number of pages to scan • the number of CPU ticks that the scanner thread can consume • triggers the scanner through a condition variable
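
A sketch of the two values schedpaging() prepares on each wakeup, reusing the scan_rate() interpolation sketched earlier; hz and cv_signal() are the usual kernel facilities, while the variable and condition-variable names here are illustrative.

    /* schedpaging(): runs four times a second (relevant when freemem < lotsfree). */
    unsigned long desscan;        /* pages the scanner may scan on this wakeup */
    unsigned long pageout_ticks;  /* CPU tick budget for this wakeup           */
    unsigned int  pct;

    /* A quarter of the per-second scan target, since there are 4 wakeups/sec. */
    desscan = scan_rate(freemem, lotsfree, slowscan, fastscan) / 4;

    /* CPU clamp: from min_percent_cpu (at lotsfree) to max_percent_cpu
     * (at zero free memory), interpolated linearly. */
    pct = min_percent_cpu +
        (max_percent_cpu - min_percent_cpu) * (lotsfree - freemem) / lotsfree;
    pageout_ticks = (hz / 4) * pct / 100;

    cv_signal(&pageout_cv);       /* wake the page scanner thread */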

  42. Page scanner thread • cycles through the physical page list • The front and back hand each have a page pointer • The front hand is incremented first, to clear the referenced and modified bits for the page it points to • The back hand is then incremented, to check the status of the page it points to (using the check_page() function) • If modified, the page is placed on the dirty page queue • If not referenced, the page is freed
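
A sketch of one iteration of that loop, with locking omitted. hat_pagesync() is the HAT call that reads (and optionally clears) the MMU reference/modify bits, page_next() advances along the physical page list, and queue_dirty_page() is an illustrative placeholder for handing the page to the page-out thread.

    /* The two clock hands, kept handspreadpages apart across iterations. */
    page_t *front, *back;
    uint_t  rm;

    front = page_next(front);
    (void) hat_pagesync(front, HAT_SYNC_ZERORM);    /* clear ref/mod bits */

    back = page_next(back);            /* stays handspreadpages behind front */
    rm = hat_pagesync(back, HAT_SYNC_DONTZERO);     /* read ref/mod bits   */

    if (rm & P_REF) {
        /* Referenced since the front hand passed: too active, skip it. */
    } else if (rm & P_MOD) {
        queue_dirty_page(back);        /* dirty: let the page-out thread push it */
    } else {
        /* Neither referenced nor modified: reclaim it immediately. */
        page_free(back, 0);
    }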

  43. Page-out thread • uses a preinitialized list of async buffer headers as the queue for I/O requests • The number of entries is controlled by the parameter async_request_size, initialized to 256 • Requests to queue more I/Os will be blocked: • if the entire queue is full • if the rate of pages queued exceeds maxpgio • removes I/O entries from the queue • initiates I/O by calling the vnode's putpage() method
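
A sketch of the blocking rule in front of that queue; async_request_size and maxpgio are the real tunables, while the counters, lock, and helper names are illustrative.

    /* Queue a dirty page for the page-out thread, honouring both limits. */
    while (push_list_size >= async_request_size ||   /* request queue is full  */
           pages_queued_this_second >= maxpgio) {     /* swap I/O rate exceeded */
        cv_wait(&push_space_cv, &push_lock);           /* block until there's room */
    }
    add_to_push_list(pp);                              /* illustrative helper      */
    push_list_size++;
    cv_signal(&push_cv);                               /* wake the page-out thread */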

  44. The Memory Scheduler • swaps out entire processes to conserve memory • removes all of a process's thread structures and private pages • sets flags in the process table to indicate that the process has been swapped out • not expensive for the system, but it affects the swapped-out process's performance • launched at boot time • does nothing unless free memory is less than desfree • looks for processes that it can swap out completely • soft-swap out / hard-swap out

  45. Soft Swapping • takes place when the 30-second average for free memory is below desfree • memory scheduler looks for processes that have been inactive for at least maxslp seconds • If found: • swaps out the thread structures for each thread • pages out all of the private pages of memory for that process

  46. Hard Swapping • takes place when all of the following are true: • At least two processes are on the run queue, waiting for CPU. • The average free memory over 30 seconds is consistently less than desfree. • Excessive paging is going on • determined to be true if page-out + page-in > maxpgio • Use a much more aggressive approach to find memory • First, the kernel is requested to unload all modules and cache memory that are not currently active • Then, processes are sequentially swapped out until the desired amount of free memory is returned
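
The three conditions reduce to one combined test; a sketch with the averages and counters written as plain variables (the names are illustrative, the tunables desfree and maxpgio are real).

    /* Should the memory scheduler switch to hard swapping? */
    static int
    should_hard_swap(unsigned int runqueue_len,     /* threads waiting for a CPU   */
                     unsigned long avefree30,        /* 30-second free-memory avg   */
                     unsigned long pageout_rate,     /* recent page-outs per second */
                     unsigned long pagein_rate)      /* recent page-ins per second  */
    {
        return (runqueue_len >= 2 &&
                avefree30 < desfree &&
                (pageout_rate + pagein_rate) > maxpgio);
    }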

  47. Memory Scheduler Parameters

  48. Outline • Data Structure • Page Scanner Operation • Page-out Algorithm • Hardware Address Translation Layer

  49. Introduction to HAT • Hardware Address Translation (HAT) • controls the hardware that manages mapping of virtual to physical memory • provides interfaces that implement the creation and destruction of mappings between virtual and physical memory • provides a set of interfaces to probe and control the MMU • implements all of the low-level trap handlers to manage page faults and memory exceptions
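
A hedged sketch of the HAT interfaces in action when a mapping is created and later destroyed; hat_memload() and hat_unload() are the usual Solaris HAT entry points, but the argument lists are abbreviated and the surrounding fault-handling code is omitted.

    /* Establish a virtual-to-physical translation for one page, then tear
     * it down.  "as" is the target address space, "pp" the physical page. */
    hat_memload(as->a_hat, vaddr, pp,
        PROT_READ | PROT_WRITE, HAT_LOAD);            /* create the V->P mapping */

    /* ... the page is now reachable through the MMU at vaddr; the HAT also
     * records the mapping on the page's mapping list so the scanner can
     * find and later invalidate it ... */

    hat_unload(as->a_hat, vaddr, PAGESIZE, HAT_UNLOAD);   /* destroy the mapping */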

  50. Solaris Virtual Memory Layers
