
Memory Management


Presentation Transcript


  1. Memory Management • Memory allocation • Garbage collection

  2. Memory Allocation • Memory pool: large block of contiguous memory • Memory manager allocates memory by returning a handle to the user • Use the term heap to refer to free memory accessed by a dynamic memory management scheme
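
To make the slide's vocabulary concrete, here is a minimal C sketch of a memory pool managed through handles, assuming a trivial bump-pointer scheme with no freeing or free-list yet; the names (mm_alloc, mm_deref, POOL_SIZE) are illustrative, not part of the slides.

#include <stdio.h>
#include <stddef.h>

#define POOL_SIZE   1024   /* one contiguous block of memory: the pool       */
#define MAX_HANDLES 64     /* users receive handles, not raw pool addresses  */

static unsigned char pool[POOL_SIZE];          /* the memory pool (heap)     */
static size_t next_free = 0;                   /* bump pointer into the pool */
static size_t handle_off[MAX_HANDLES];         /* handle -> offset in pool   */
static int    handle_count = 0;

/* Allocate nbytes from the pool and return a handle (an index), or -1. */
int mm_alloc(size_t nbytes) {
    if (next_free + nbytes > POOL_SIZE || handle_count == MAX_HANDLES)
        return -1;                             /* request cannot be serviced */
    handle_off[handle_count] = next_free;
    next_free += nbytes;
    return handle_count++;
}

/* Dereference a handle to the actual storage inside the pool. */
void *mm_deref(int h) { return &pool[handle_off[h]]; }

int main(void) {
    int h = mm_alloc(16);
    sprintf((char *)mm_deref(h), "hello");     /* use the block via its handle */
    printf("handle %d -> %s\n", h, (char *)mm_deref(h));
    return 0;
}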

  3. Dynamic Allocation • Blocks of any size may be requested in any order from the free-list • A request for m words is filled by a block of size k ≥ m, so only m of the k words are actually used • This results in fragmentation whenever m != k

  4. Fragmentation in Dynamic Allocation • External fragmentation: free memory is scattered among many small free blocks • Internal fragmentation: an entire block of size k is handed out for a request of m < k words; this makes allocation easier, if less space-efficient

  5. Sequential Fit Method • Attempt to find a “good” block for each request • The free-list is organized as a doubly-linked list • Each block carries a tag bit (free/reserved) and a block-size field • The memory manager searches the free-list for a block of “suitable” size
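
One possible C layout for the per-block bookkeeping just described: a tag bit marking the block free or reserved, a size field, and the two pointers of the doubly-linked free list. The field and function names are illustrative assumptions.

#include <stddef.h>

struct block {
    unsigned      tag : 1;    /* 0 = free, 1 = reserved                   */
    size_t        size;       /* size of this block in bytes              */
    struct block *prev;       /* previous block on the doubly-linked list */
    struct block *next;       /* next block on the doubly-linked list     */
};

/* Unlink a block from the free list once it has been handed out. */
void reserve_block(struct block *b) {
    if (b->prev) b->prev->next = b->next;
    if (b->next) b->next->prev = b->prev;
    b->tag = 1;
}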

  6. Three sequential fit methods • First fit • Takes the first large-enough block, searching from the beginning of the list (a circular variant resumes where the previous search stopped) • May waste larger blocks by breaking them up • Best fit • Examines the entire list for the smallest block that fits • Tends to increase external fragmentation by leaving tiny slivers, but is more likely to be able to service large requests

  7. Three sequential fit methods • Worst fit • Allocates from the largest block, found by a sequential search • Minimizes external fragmentation, since splitting the largest block rarely leaves a tiny sliver • Which is best? It depends on the expected pattern of memory requests (the three searches are sketched below)
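
A C sketch of how the three search policies might look over a free list; for brevity the list here is singly linked and stores only a size, whereas the slides describe a doubly-linked list with a tag bit. All names are illustrative.

#include <stddef.h>

struct fblock {
    size_t         size;      /* usable size of this free block */
    struct fblock *next;      /* next block on the free list    */
};

/* First fit: take the first block that is big enough. */
struct fblock *first_fit(struct fblock *head, size_t request) {
    for (struct fblock *b = head; b != NULL; b = b->next)
        if (b->size >= request)
            return b;
    return NULL;
}

/* Best fit: scan the whole list for the smallest block that still fits. */
struct fblock *best_fit(struct fblock *head, size_t request) {
    struct fblock *best = NULL;
    for (struct fblock *b = head; b != NULL; b = b->next)
        if (b->size >= request && (best == NULL || b->size < best->size))
            best = b;
    return best;
}

/* Worst fit: scan the whole list for the largest block. */
struct fblock *worst_fit(struct fblock *head, size_t request) {
    struct fblock *worst = NULL;
    for (struct fblock *b = head; b != NULL; b = b->next)
        if (b->size >= request && (worst == NULL || b->size > worst->size))
            worst = b;
    return worst;
}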

  8. Sequential Fit • A search of the free-list takes Θ(n) time in the worst case • Adjacent free blocks should be merged (coalesced) into one larger block • Additional space is needed in each block to hold the memory manager’s tag, size, and list-pointer fields • Is there anything that can be improved?
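
A sketch of the merging step, assuming free blocks are kept on the list in address order and each size field counts the whole block including its header; if a block ends exactly where the next free block begins, the two are combined. The node layout is the same illustrative one used in the previous sketch.

#include <stddef.h>

struct fblock {
    size_t         size;      /* block size in bytes, header included   */
    struct fblock *next;      /* next free block, kept in address order */
};

/* If block b ends exactly where the next free block begins, absorb it. */
void coalesce_with_next(struct fblock *b) {
    if (b->next != NULL &&
        (unsigned char *)b + b->size == (unsigned char *)b->next) {
        b->size += b->next->size;
        b->next  = b->next->next;
    }
}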

  9. The Buddy Method • Assume that memory is of size 2ⁿ for some n • Both free and reserved blocks will be of size 2ᵏ for some k ≤ n • The buddy system keeps a separate list of free blocks for each size

  10. The Buddy Method • For a request of size m, find the smallest k such that 2ᵏ ≥ m • If the free list for size 2ᵏ is non-empty, allocate a block from that list • Otherwise, take the next larger free block available and split it in half repeatedly until a block of size 2ᵏ is produced (sketched below)
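
A sketch of that request path in C, assuming one free list per block size 2ᵏ, a pool of 2^MAX_K bytes whose free lists have already been seeded, and blocks large enough to hold a next pointer while free; the constants and names are assumptions for illustration.

#include <stddef.h>

#define MAX_K 20                             /* the pool holds 2^MAX_K bytes */

struct bblock { struct bblock *next; };
static struct bblock *free_list[MAX_K + 1];  /* free_list[k]: blocks of 2^k  */

/* Smallest k such that 2^k >= m. */
static int size_class(size_t m) {
    int k = 0;
    while (((size_t)1 << k) < m) k++;
    return k;
}

/* Allocate a block of at least m bytes, or return NULL. */
void *buddy_alloc(size_t m) {
    int k = size_class(m);

    /* Find the smallest j >= k whose free list is non-empty. */
    int j = k;
    while (j <= MAX_K && free_list[j] == NULL) j++;
    if (j > MAX_K) return NULL;              /* no block large enough        */

    /* Remove a block of size 2^j and split it until a 2^k block remains. */
    struct bblock *b = free_list[j];
    free_list[j] = b->next;
    while (j > k) {
        j--;
        struct bblock *half =
            (struct bblock *)((unsigned char *)b + ((size_t)1 << j));
        half->next   = free_list[j];         /* unused upper half goes back  */
        free_list[j] = half;                 /* on the list for size 2^j     */
    }
    return b;
}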

  11. The Buddy Method • Advantages • Less external fragmentation • Cheaper search than scanning a general free-list • Merging adjacent blocks is easy (the buddy of any block of size 2ᵏ is the block of the same size whose address differs only in bit k) • Disadvantage • Internal fragmentation, since requests are rounded up to a power of two
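
The “address differs only in bit k” rule can be written as a one-line XOR on the block’s offset within the pool; the example values below are illustrative.

#include <stdio.h>
#include <stddef.h>

/* For a block of size 2^k at offset addr inside the pool,
   the buddy's offset differs from addr only in bit k.      */
size_t buddy_of(size_t addr, int k) {
    return addr ^ ((size_t)1 << k);
}

int main(void) {
    /* The buddy of the 16-byte block at offset 48 (k = 4) is at offset 32. */
    printf("%zu\n", buddy_of(48, 4));
    return 0;
}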

  12. Other memory allocation methods • Segregated storage method: break available memory into several memory zones, each with its own management method • Cluster scheme: impose a standard allocation size • Example: disk file management • Leads to internal fragmentation • Allocated space does not need to be contiguous
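
A sketch of the cluster scheme’s arithmetic: every request is rounded up to a whole number of fixed-size clusters, which keeps management simple at the cost of internal fragmentation. The 512-byte cluster size is an assumption for illustration.

#include <stdio.h>
#include <stddef.h>

#define CLUSTER 512   /* standard allocation unit, e.g. a disk cluster */

/* Number of whole clusters needed to hold a request of nbytes. */
size_t clusters_needed(size_t nbytes) {
    return (nbytes + CLUSTER - 1) / CLUSTER;
}

int main(void) {
    /* A 700-byte file occupies 2 clusters = 1024 bytes,
       leaving 324 bytes of internal fragmentation.        */
    printf("%zu clusters\n", clusters_needed(700));
    return 0;
}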

  13. Failure Policies • Needed when a memory request of a given size cannot be serviced • If the cause is external fragmentation, memory can be compacted, which physically moves data • Handles let data be moved even though the application would otherwise rely on its absolute position • The request can also be deferred (for example, when several processes are running at once)
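
A sketch of why handles make compaction possible: the application keeps only an index into a handle table, so the manager can move the underlying data and update the single table entry. The table and function names are illustrative.

#include <stdio.h>
#include <string.h>

#define MAX_HANDLES 16

static void *handle_table[MAX_HANDLES];      /* handle -> current address   */

/* The application always reaches its data through the table. */
void *deref(int h) { return handle_table[h]; }

/* Compaction moves a block and fixes up the one table entry;
   handles held by the application remain valid.              */
void move_block(int h, void *dst, size_t nbytes) {
    memmove(dst, handle_table[h], nbytes);
    handle_table[h] = dst;
}

int main(void) {
    static char old_area[8] = "data", new_area[8];
    handle_table[0] = old_area;               /* handle 0 refers to old_area */
    move_block(0, new_area, sizeof old_area); /* compaction relocates it     */
    puts((char *)deref(0));                   /* still reachable via handle 0 */
    return 0;
}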

  14. Failure policies: Garbage Collection • When no program variable points to a block of allocated space, that block is garbage (and, if never reclaimed, a memory leak) • Garbage collection involves determining which memory is garbage and recovering it • Two common methods: • Reference count • Mark/sweep strategy

  15. Garbage Collection • Reference count: each dynamically allocated memory block has a count field that is incremented when a pointer is set to the block and decremented when a pointer is moved away from it • When the count reaches zero, the memory is garbage and is immediately returned to free store • Used by the UNIX file system, where the objects are linked together without cycles • Useful when the objects are large, such as files
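
A sketch of reference counting in C, assuming every managed block carries its count in a small header; rc_alloc, rc_retain, and rc_release are illustrative names, not an API from the slides.

#include <stdlib.h>

struct rc_block {
    int  count;      /* number of pointers currently referring to this block */
    char data[];     /* user data follows the header                         */
};

struct rc_block *rc_alloc(size_t nbytes) {
    struct rc_block *b = malloc(sizeof *b + nbytes);
    if (b) b->count = 1;                 /* the creating pointer             */
    return b;
}

void rc_retain(struct rc_block *b) { b->count++; }   /* pointer set to b     */

void rc_release(struct rc_block *b) {                /* pointer moved away   */
    if (--b->count == 0)
        free(b);        /* no references left: the block is garbage and is
                           returned to free store immediately                */
}

int main(void) {
    struct rc_block *b = rc_alloc(32);
    rc_retain(b);       /* a second pointer now refers to b */
    rc_release(b);      /* count 2 -> 1                     */
    rc_release(b);      /* count 1 -> 0: freed immediately  */
    return 0;
}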

  16. Garbage Collection • Mark/sweep strategy • Uses a single mark bit per object instead of a count field • Handles cycles, but the DFS is recursive and needs space of its own • A garbage collection phase occurs when free store is exhausted: • Clear all mark bits • Perform a DFS from each pointer on the variable list, turning mark bits on • Sweep through the memory pool; unmarked elements are garbage and are placed back in free store
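
A sketch of mark/sweep over a toy object graph in C: each object carries one mark bit, and a collection clears all marks, marks everything reachable from the root pointers by DFS, then sweeps unmarked objects back to free store. The object layout and names are assumptions for illustration.

#include <stddef.h>

#define MAX_CHILDREN 2
#define HEAP_OBJECTS 64

struct object {
    int            marked;                 /* the single mark bit           */
    int            in_use;                 /* allocated vs. on free store   */
    struct object *child[MAX_CHILDREN];    /* outgoing pointers             */
};

static struct object heap[HEAP_OBJECTS];   /* the whole memory pool         */

/* DFS from one pointer, turning mark bits on; revisiting a marked object
   stops the recursion, so cycles are handled.                              */
static void mark(struct object *o) {
    if (o == NULL || o->marked) return;
    o->marked = 1;
    for (int i = 0; i < MAX_CHILDREN; i++)
        mark(o->child[i]);
}

/* One full collection: clear marks, mark from the roots, sweep the pool. */
void collect(struct object **roots, int nroots) {
    for (int i = 0; i < HEAP_OBJECTS; i++)     /* 1. clear all mark bits     */
        heap[i].marked = 0;
    for (int i = 0; i < nroots; i++)           /* 2. mark reachable objects  */
        mark(roots[i]);
    for (int i = 0; i < HEAP_OBJECTS; i++)     /* 3. sweep unmarked objects  */
        if (heap[i].in_use && !heap[i].marked)
            heap[i].in_use = 0;                /* back to free store         */
}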
