
Chapter 8 Virtual Memory



  1. Chapter 8 Virtual Memory • Virtual memory is a storage allocation scheme in which secondary memory can be addressed as though it were part of main memory • The size of virtual storage is limited by the amount of secondary memory available • A virtual address is the address assigned to a location in virtual memory

  2. Keys to Virtual Memory 1) Memory references are logical addresses dynamically translated into physical addresses at run time • A process may be swapped in and out of main memory, occupying different regions at different times during execution 2) A process may be broken up into pieces (pages or segments) that do not need to be located contiguously in main memory

  3. Breakthrough in Memory Management • If both of these characteristics are present, then it is not necessary that all of the pages or all of the segments of a process be in main memory during execution • If the next instruction and the next data location are in memory, execution can proceed, at least for a time

  4. Execution of a Process • OS brings into main memory a few pieces of the program • Resident set: portion of process that is in main memory • Execution proceeds smoothly as long as all memory references are to locations that are in the resident set • An interrupt (memory access fault) is generated when an address is needed that is not in main memory

  5. Execution of a Process • OS places the process in the Blocked state • OS issues a disk I/O read request to bring the piece of the process that contains the logical address into main memory • Another process is dispatched to run while the disk I/O takes place • An interrupt is issued when the disk I/O completes, causing the OS to place the affected process in the Ready state

  6. Implications of this new strategy • More efficient processor utilization • More processes may be maintained in main memory because only some of the pieces of each process are loaded • More likely a process will be in the Ready state at any particular time • A process may be larger than main memory • This restriction on programming is lifted • OS automatically loads pieces of a process into main memory as required

  7. Real and Virtual Memory • Real memory • Main memory, the actual RAM, where a process executes • Virtual memory • Memory on disk • Allows for effective multiprogramming and relieves the user of tight constraints of main memory

  8. Thrashing • A condition in which the system spends most of its time swapping pieces rather than executing instructions • It happens when the OS frequently throws out a piece just before it is used • To avoid this, the OS tries to guess, based on recent history, which pieces are least likely to be used in the near future

  9. Principle of Locality • Program and data references within a process tend to cluster  only a few pieces of a process will be needed over a short period of time • It is therefore possible to make intelligent guesses about which pieces will be needed in the future • This suggests that virtual memory can work efficiently

  10. A Process's Performance in a VM Environment • During the lifetime of the process, references are confined to a subset of pages.

  11. Support Needed for Virtual Memory • Hardware must support paging and segmentation • OS must be able to manage the movement of pages and/or segments between secondary memory and main memory

  12. Paging • Each process has its own page table • Each page table entry contains the frame number of the corresponding page in main memory • Two extra bits are needed to indicate: • P: whether the page is in main memory or not • M: whether the contents of the page have been altered since it was last loaded
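The P and M bits above can be pictured as flag bits packed into a page table entry alongside the frame number. The field widths and helper names below are illustrative, not from the text:

```python
P_BIT = 1 << 31              # P: page is present in main memory
M_BIT = 1 << 30              # M: page was modified since it was loaded
FRAME_MASK = (1 << 20) - 1   # assume 20-bit frame numbers

def make_pte(frame, present, modified):
    """Pack a frame number and the P/M flags into one page table entry."""
    pte = frame & FRAME_MASK
    if present:
        pte |= P_BIT
    if modified:
        pte |= M_BIT
    return pte

def is_present(pte):
    return bool(pte & P_BIT)

def is_modified(pte):
    return bool(pte & M_BIT)

def frame_of(pte):
    return pte & FRAME_MASK
```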

  13. Page Table • It is not necessary to write an unmodified page out to disk when it comes time to replace the page in the frame that it currently occupies

  14. Address Translation The page no. is used to index the page table and look up the frame no. The frame no. is then combined with the offset to produce the real address
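The two translation steps can be sketched as a few lines of code. This is a minimal illustration, assuming 512-byte pages and a page table represented as a plain dictionary; fault handling is omitted:

```python
PAGE_SIZE = 512  # 2**9 bytes, an assumed page size for illustration

def translate(vaddr, page_table):
    """Split a virtual address into (page no., offset), look up the frame,
    and recombine the frame no. with the offset into a real address."""
    page_no, offset = divmod(vaddr, PAGE_SIZE)
    frame_no = page_table[page_no]   # a missing entry would raise a page fault
    return frame_no * PAGE_SIZE + offset
```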

  15. Page Tables • Page tables can be very large • Consider a system that supports 2^31 = 2 Gbytes of virtual memory with 2^9 = 512-byte pages. The number of entries in a page table can be as many as 2^22 • Most virtual memory schemes therefore store page tables in virtual memory • Page tables are then themselves subject to paging
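The arithmetic in the example checks out directly: dividing the virtual address space by the page size gives the maximum number of page table entries.

```python
virtual_memory = 2 ** 31   # 2 Gbytes of virtual address space
page_size = 2 ** 9         # 512-byte pages

entries = virtual_memory // page_size
print(entries)             # 4194304 entries, i.e. 2**22
```

At 4 bytes per entry that is 16 Mbytes of page table per process, which is why the table itself must live in virtual memory.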

  16. Two-Level Hierarchical Page Table • The root page table is composed of 2^10 4-byte page table entries • The user page table is composed of 2^20 4-byte page table entries, occupying 2^10 pages • The user address space is composed of 2^20 4-kbyte (2^12) pages
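These sizes imply a 32-bit virtual address split as 10 + 10 + 12 bits. A minimal sketch of the two-level lookup, assuming the tables are plain dictionaries (names and representation are illustrative):

```python
def translate_two_level(vaddr, root_table):
    """Two-level lookup: top 10 bits index the root page table,
    next 10 bits index a second-level page table, low 12 bits are the offset."""
    offset   = vaddr & 0xFFF           # low 12 bits: byte within the page
    page_idx = (vaddr >> 12) & 0x3FF   # middle 10 bits: entry in second level
    root_idx = (vaddr >> 22) & 0x3FF   # top 10 bits: entry in root table
    second_level = root_table[root_idx]  # root table is always resident
    frame_no = second_level[page_idx]    # this access may itself page-fault
    return (frame_no << 12) | offset
```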

  17. Address Translation for Hierarchical Page Table The root page table always remains in main memory

  18. Translation Lookaside Buffer • Each virtual memory reference can cause two physical memory accesses: one to fetch the page table entry and one to fetch the data • To overcome this problem, a high-speed cache is set up for page table entries • Called a Translation Lookaside Buffer (TLB) • Contains the page table entries that have been most recently used

  19. Translation Lookaside Buffer

  20. TLB operation • By the principle of locality, most virtual memory references will be to locations in recently used pages • Therefore, most references will find their page table entry in the cache (a TLB hit); otherwise the page table in memory must be consulted (a TLB miss)
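The hit/miss behavior can be modeled with a tiny cache. This sketch assumes a fully associative TLB with LRU replacement; real TLBs vary in associativity and replacement policy, and all names here are illustrative:

```python
from collections import OrderedDict

class TLB:
    """Tiny fully associative TLB model with LRU replacement."""

    def __init__(self, capacity=4):
        self.capacity = capacity
        self.entries = OrderedDict()   # page_no -> frame_no, LRU order
        self.hits = 0
        self.misses = 0

    def lookup(self, page_no, page_table):
        if page_no in self.entries:          # TLB hit: no page table access
            self.hits += 1
            self.entries.move_to_end(page_no)
            return self.entries[page_no]
        self.misses += 1                     # TLB miss: extra memory access
        frame_no = page_table[page_no]
        if len(self.entries) >= self.capacity:
            self.entries.popitem(last=False) # evict least recently used
        self.entries[page_no] = frame_no
        return frame_no
```

By the principle of locality, repeated references to the same few pages mostly hit in the cache.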

  21. Page Size • Page size is an important hardware design decision • Smaller page size •  less internal fragmentation •  more pages required per process • larger page tables • some portion of the page tables must be in virtual memory • there may be a double page fault (first to bring in the needed portion of the page table and second to bring in the process page)
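The fragmentation side of the trade-off is easy to quantify: on average about half a page is wasted per process, and the exact waste is whatever is left unused in the last page. A small worked example (function names are illustrative):

```python
def pages_needed(process_size, page_size):
    """Ceiling division: number of pages a process of this size occupies."""
    return -(-process_size // page_size)

def internal_fragmentation(process_size, page_size):
    """Unused bytes in the last page of the process."""
    return (-process_size) % page_size
```

For a 1100-byte process, 512-byte pages waste 436 bytes while 128-byte pages waste only 52, but the smaller pages need a page table with four times as many entries.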

  22. Page Size • Large page size is better because • Secondary memory is designed to efficiently transfer large blocks of data

  23. Further complications to Page Size • Small page size  a large number of pages will be available in main memory for a process • as time goes on during execution, the pages in memory will all contain portions of the process near recent references •  low page fault rate

  24. Further complications to Page Size • Increased page size causes pages to contain locations further from any recent reference  the effect of the principle of locality is weakened  page fault rate rises

  25. Example Page Sizes • The design issue of page size is related to the size of physical main memory and program size • At the same time that main memory is getting larger, the address space used by applications is also growing • This favors architectures that support multiple page sizes

  26. Segmentation • Segmentation allows the programmer to view memory as consisting of multiple address spaces or segments. • Segments may be of unequal size. • Each process has its own segment table.

  27. Segment Table • A bit is needed to determine whether the segment is already in main memory; if present: • Segment base is the starting address of the corresponding segment in main memory • Length is the length of the segment • Another bit is needed to determine whether the segment has been modified since it was loaded into main memory

  28. Address Translation in Segmentation The segment no. is used to index into the segment table and look up the segment base The segment base is then added to the offset to produce the real address
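A minimal sketch of segmented translation, assuming a segment table of (base, length) pairs; the bounds check is what makes the protection policy of the next slide possible (names are illustrative):

```python
def translate_segment(seg_no, offset, seg_table):
    """Look up (base, length) for the segment, check the limit,
    and add the base to the offset to form the real address."""
    base, length = seg_table[seg_no]   # assumes the segment is present
    if offset >= length:
        raise MemoryError("offset beyond segment limit")
    return base + offset
```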

  29. Protection and sharing • Segmentation lends itself to the implementation of protection and sharing policies. • As each entry has a base address and length, a program cannot access a main memory location beyond the limits of a segment • Sharing can be achieved by referencing a segment in multiple segment tables

  30. Protection Relationships

  31. Combined Paging and Segmentation • A user’s address space is broken up into a number of segments and each segment is broken into fixed-size pages • From the programmer’s point of view, a logical address still consists of a segment number and a segment offset. • From the system’s point of view, the segment offset is viewed as a page number and page offset

  32. Combined Paging and Segmentation • The base now refers to a page table.

  33. Address Translation The segment no. is used to index into the segment table to find the page table for that segment The page no. is used to index that page table and look up the frame no. The frame no. is combined with the offset to produce the real address
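The three steps can be composed from the earlier sketches. This illustration assumes 4-kbyte pages and dictionaries for both tables, with the segment table entry pointing at a per-segment page table:

```python
PAGE_SIZE = 4096  # 2**12, an assumed page size for illustration

def translate_combined(seg_no, seg_offset, seg_table):
    """Segment no. selects a page table; the segment offset is then
    split into a page no. and a page offset, as in plain paging."""
    page_table = seg_table[seg_no]        # "base" now refers to a page table
    page_no, offset = divmod(seg_offset, PAGE_SIZE)
    frame_no = page_table[page_no]
    return frame_no * PAGE_SIZE + offset
```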
