
Course Title: O.S Chap No: 05 “Memory Management”


Presentation Transcript


  1. Course Title: O.S Chap No: 05 “Memory Management” Course Instructor: ILTAF MEHDI, IT Lecturer

  2. Memory Manager • The purpose of the memory manager is • to allocate primary memory space to processes • to move the process address space into the allocated portion of the primary memory • to minimize access times using a cost-effective amount of primary memory

  3. Memory Management • In an environment that supports dynamic memory allocation, the memory manager must keep a record of the usage of each allocatable block of memory. This record could be kept by using almost any data structure that implements linked lists. An obvious implementation is to define a free list of block descriptors, with each descriptor containing a pointer to the next descriptor, a pointer to the block, and the length of the block.
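A minimal sketch in C of the free-list descriptor just described, assuming the simple singly linked representation from the slide; the names block_descriptor and free_list are illustrative and not taken from any particular operating system.

#include <stddef.h>

/* One descriptor per free block, kept on a singly linked free list. */
struct block_descriptor {
    struct block_descriptor *next;   /* pointer to the next descriptor   */
    void   *block;                   /* pointer to the free memory block */
    size_t  length;                  /* length of the block in bytes     */
};

static struct block_descriptor *free_list = NULL;   /* head of the free list */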

  4. Memory Management Algorithms • A number of strategies are used to allocate space to the processes that are competing for memory. • Best Fit • Worst Fit • First Fit • Next Fit

  5. Memory Management Algorithms Best Fit:- • The allocator places a process in the smallest block of unallocated memory in which it will fit. Worst Fit:- • The memory manager places the process in the largest block of unallocated memory available. The idea is that this placement will create the largest hole after the allocation, thus increasing the possibility that, compared to best fit, another process can use the hole created as a result of external fragmentation.
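Continuing the illustrative free-list sketch from slide 3, a best-fit search can be written as a single pass that remembers the smallest block still large enough for the request; worst fit is the same loop with the comparison reversed. The function name best_fit is an assumption for illustration.

/* Return the smallest free block that can hold `request` bytes, or NULL. */
struct block_descriptor *best_fit(size_t request)
{
    struct block_descriptor *best = NULL;
    for (struct block_descriptor *d = free_list; d != NULL; d = d->next) {
        if (d->length >= request &&
            (best == NULL || d->length < best->length))
            best = d;                /* smaller, but still big enough */
    }
    return best;                     /* worst fit: use '>' instead of '<' */
}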

  6. Memory Management Algorithms • First Fit:- • Another strategy is first fit, which simply scans the free list until a large enough hole is found. Despite the name, first fit is generally better than best fit because it leads to less fragmentation. • Next Fit:- • The first-fit approach tends to fragment the blocks near the beginning of the list without considering blocks further down the list. Next fit is a variant of the first-fit strategy. The problem of small holes accumulating is addressed by the next-fit algorithm, which starts each search where the last one left off, wrapping around to the beginning when the end of the list is reached (a form of one-way elevator).
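Both scans can be sketched over the same illustrative free list as before; next_fit keeps a roving pointer (here called rover, an assumed name) so that each search resumes where the previous one stopped and wraps around once.

/* First fit: return the first free block large enough for the request. */
struct block_descriptor *first_fit(size_t request)
{
    for (struct block_descriptor *d = free_list; d != NULL; d = d->next)
        if (d->length >= request)
            return d;
    return NULL;
}

static struct block_descriptor *rover = NULL;   /* where the last search ended */

/* Next fit: like first fit, but start where the previous search stopped. */
struct block_descriptor *next_fit(size_t request)
{
    struct block_descriptor *start = (rover != NULL) ? rover : free_list;
    struct block_descriptor *d = start;
    if (d == NULL)
        return NULL;
    do {
        if (d->length >= request) {
            rover = d->next;                              /* resume here next time */
            return d;
        }
        d = (d->next != NULL) ? d->next : free_list;      /* wrap around the list  */
    } while (d != start);
    return NULL;
}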

  7. Virtual Memory • If your computer lacks the random access memory (RAM) needed to run a program or operation, Windows uses virtual memory to compensate. • Virtual memory combines your computer’s RAM with temporary space on your hard disk. When RAM runs low, virtual memory moves data from RAM to a space called a paging file. Moving data to and from the paging file frees up RAM so your computer can complete its work.

  8. Virtual Memory • The more RAM your computer has, the faster your programs will generally run. • If a lack of RAM is slowing your computer, you might be tempted to increase virtual memory to compensate. • However, your computer can read data from RAM much more quickly than from a hard disk, so adding RAM is a better solution.

  9. Paging File

  10. Managing Virtual Memory • The most common ways of determining the sizes of the blocks to be moved into and out of memory are: • Swapping • Paging • Segmentation

  11. Swapping • Swapping is a mechanism in which a process can be swapped temporarily out of memory to a backing store and then brought back into memory for continued execution. • When you load a file or program, it is stored in the random access memory (RAM). • Since RAM is finite, some data cannot fit in it. That data is stored in a special section of the hard drive called the "swap file", and "swapping" is the act of using this swap file.
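The mechanism can be illustrated with a hypothetical user-space sketch in C that uses an ordinary file as the backing store; the struct process_image, the function names, and the use of stdio are simplifications for illustration, since a real kernel moves page frames to raw disk blocks.

#include <stdio.h>
#include <stdlib.h>

struct process_image {
    void  *base;    /* start of the process address space in RAM */
    size_t size;    /* size of the address space in bytes        */
};

/* Copy the process image to the backing store and release its RAM. */
int swap_out(struct process_image *p, const char *backing_store)
{
    FILE *f = fopen(backing_store, "wb");
    if (f == NULL) return -1;
    fwrite(p->base, 1, p->size, f);
    fclose(f);
    free(p->base);                   /* memory is now free for other processes */
    p->base = NULL;
    return 0;
}

/* Bring the image back into memory for continued execution. */
int swap_in(struct process_image *p, const char *backing_store)
{
    FILE *f = fopen(backing_store, "rb");
    if (f == NULL) return -1;
    p->base = malloc(p->size);
    if (p->base == NULL) { fclose(f); return -1; }
    fread(p->base, 1, p->size, f);
    fclose(f);
    return 0;
}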

  12. Swapping • Swapping is a useful technique that enables a computer to execute programs and manipulate data files larger than main memory. The operating system copies as much data as possible into main memory and leaves the rest on the disk. • When the operating system needs data from the disk, it exchanges a portion of data (called a page or segment) in main memory with a portion of data on the disk. • DOS does not perform swapping, but most other operating systems, including OS/2, Windows, and UNIX, do.

  13. Swapping • Swapping is an efficient and widely used approach to memory management. It allows a higher-priority process to be brought into memory by swapping out a lower-priority process. Advantages of swapping are as follows:- 1. Higher degree of multiprogramming. 2. Dynamic relocation. 3. Greater memory utilization. 4. Priority-based scheduling. 5. Less wastage of CPU time. 6. Higher performance.

  14. Paging • Paging, in computer terms, means that a file is created on your hard drive to act as extra RAM when your RAM is low. • It is a memory management technique. • When the program is larger than the available RAM, the OS divides the program into smaller pages and stores them in secondary storage.

  15. Page Table • A page table is the data structure used by a virtual memory system in a computer operating system to store the mapping between virtual addresses (VA) and physical addresses (PA). • Virtual addresses are those unique to the accessing process. • Physical addresses are those unique to the hardware, i.e., RAM.
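A minimal sketch of the translation a page table supports, assuming 4 KiB pages and a single-level table held in a flat array; the constants and the names page_table and translate are assumptions for illustration, and real page tables are multi-level structures walked by the MMU described on the next slide.

#include <stdint.h>

#define PAGE_SIZE  4096u    /* assumed 4 KiB pages             */
#define PAGE_SHIFT 12       /* log2(PAGE_SIZE)                 */
#define NUM_PAGES  1024     /* pages in this toy address space */

static uintptr_t page_table[NUM_PAGES];   /* virtual page number -> physical frame base */

/* Translate a virtual address to a physical address (bounds checks omitted). */
uintptr_t translate(uintptr_t virtual_address)
{
    uintptr_t vpn    = virtual_address >> PAGE_SHIFT;       /* virtual page number */
    uintptr_t offset = virtual_address & (PAGE_SIZE - 1);   /* offset within page  */
    return page_table[vpn] + offset;                        /* physical address    */
}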

  16. Memory Management Unit (MMU) • Sometimes called a paged memory management unit (PMMU), it is a computer hardware component responsible for handling accesses to memory requested by the CPU. • Its functions include translation of virtual addresses to physical addresses (i.e., virtual memory management), memory protection, cache control, etc.

  17. Fragmentation • Fragmentation refers to isolated or incomplete parts. • Fragmentation means something is broken into parts that are detached, isolated, or incomplete.

  18. Types of Fragmentation • There are two types of Fragmentation:- 1) External Fragmentation 2) Internal Fragmentation

  19. External Fragmentation • It exists when there is enough total memory space available to satisfy a request, but the available memory space is not contiguous. • Storage space is fragmented into a large number of small holes. • Both first-fit and best-fit strategies suffer from this. • First fit is better in some systems, whereas best fit is better in others. • Depending on the total amount of memory storage and the sizes of requests, external fragmentation may be a minor or a major problem.

  20. Internal Fragmentation • Consider a multiple-partition allocation scheme with a hole of 18,464 bytes. • The next process requests 18,462 bytes. If we allocate exactly that, we are left with a hole of 2 bytes. The general approach to avoid this problem is to:- a) Break physical memory into fixed-sized blocks and allocate memory in units based on the block size. b) Memory allocated to a process may then be slightly larger than the requested memory.
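A small worked example of approach (a), assuming a block size of 4,096 bytes: the 18,462-byte request is rounded up to five blocks (20,480 bytes), and the 2,018 unused bytes inside the last block are internal fragmentation.

#include <stdio.h>

int main(void)
{
    unsigned block_size = 4096;                                      /* assumed block size   */
    unsigned request    = 18462;                                     /* bytes requested      */
    unsigned blocks     = (request + block_size - 1) / block_size;   /* round up: 5 blocks   */
    unsigned allocated  = blocks * block_size;                       /* 20,480 bytes granted */
    unsigned internal   = allocated - request;                       /* 2,018 bytes wasted   */

    printf("allocated %u bytes, internal fragmentation %u bytes\n", allocated, internal);
    return 0;
}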

  21. Cache Memory • A CPU cache is a cache used by the central processing unit of a computer to reduce the average time to access memory. • The cache is a smaller, faster memory which stores copies of the data from the most frequently used main memory locations. • As long as most memory accesses are to cached memory locations, the average latency of memory accesses will be closer to the cache latency than to the latency of main memory.

  22. Cache Memory • When the processor needs to read from or write to a location in main memory, it first checks whether a copy of that data is in the cache. If so, the processor immediately reads from or writes to the cache, which is much faster than reading from or writing to main memory.
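The check described above can be sketched for a direct-mapped cache: part of the address selects a line (the index) and the remaining bits (the tag) are compared against what that line currently stores, so a match is a hit and a mismatch falls through to main memory. The sizes and names below are illustrative assumptions.

#include <stdbool.h>
#include <stdint.h>

#define NUM_LINES 256    /* assumed number of cache lines */
#define LINE_SIZE  64    /* assumed bytes per cache line  */

struct cache_line {
    bool     valid;               /* does this line hold real data?        */
    uint64_t tag;                 /* which memory block it currently holds */
    uint8_t  data[LINE_SIZE];     /* the cached copy of that block         */
};

static struct cache_line cache[NUM_LINES];

/* Return true if the address is currently cached (a hit). */
bool cache_hit(uint64_t address)
{
    uint64_t block = address / LINE_SIZE;   /* which memory block is wanted */
    uint64_t index = block % NUM_LINES;     /* which cache line to check    */
    uint64_t tag   = block / NUM_LINES;     /* identifies the block         */
    return cache[index].valid && cache[index].tag == tag;
}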
