
Memory Management


Presentation Transcript


  1. Memory Management Chapter 7

  2. Memory Management • Subdividing memory to accommodate multiple processes • Memory needs to be allocated efficiently to pack as many processes into memory as possible

  3. Memory Management Requirements (1) • Relocation • Protection • Sharing • Logical organization • Physical organization

  4. Memory Management Requirements (2) • Relocation • the programmer does not know where the program will be placed in memory when it is executed • while the program is executing, it may be swapped to disk and returned to main memory at a different location • the processor hardware and operating system software must be able to translate the memory references found in the code of the program into actual physical memory addresses, reflecting the current location of the program in main memory

  5. Memory Management Requirements (3) • Protection • processes should not be able to reference memory locations in another process without permission • because the location of a program in main memory is unknown, it is impossible to check absolute addresses in programs since the program could be relocated • all memory references generated by a process must be checked during execution to ensure that they refer only to the memory space allocated to that process.

  6. Memory Management Requirements (4) • Sharing • any protection mechanism must have the flexibility to allow several processes to access the same portion of memory • if a number of processes are executing the same program, it is advantageous to allow each process to access the same copy of the program rather than have its own separate copy

  7. Memory Management Requirements (5) • Logical Organization • memory is organized as a linear address space. While this organization closely mirrors the actual machine hardware, it does not correspond to the way in which programs are typically constructed. • most programs are written in modules • different degrees of protection can be given to modules (read-only, execute-only) • Segmentation is a tool that satisfies these requirements.

  8. Memory Management Requirements (6) • Physical Organization • memory available for a program plus its data may be insufficient • overlaying allows various modules to be assigned the same region of memory • secondary memory is cheaper, has larger capacity, and is permanent

  9. Memory Partitioning • Fixed Partitioning • Dynamic Partitioning • Simple Paging • Simple Segmentation • Virtual-Memory Paging • Virtual-Memory Segmentation

  10. Fixed Partitioning • Partition available memory into regions with fixed boundaries • Equal-size partitions • any process whose size is less than or equal to the partition size can be loaded into an available partition • if all partitions are full, the operating system can swap a process out of a partition • a program may not fit in a partition. The programmer must design the program with overlays

  11. Fixed Partitioning • Main memory use is inefficient. Any program, no matter how small, occupies an entire partition. This is called internal fragmentation. [Figure: main memory with an operating-system region followed by equal-size partitions of 8 M each]

  12. Fixed Partitioning • Unequal-size partitions • lessens the problem with equal-size partitions [Figure: main memory with an operating-system region (8 M) followed by unequal partitions of 2 M, 4 M, 6 M, 8 M, 8 M, and 12 M]

  13. Placement Algorithm with Partitions • Equal-size partitions • because all partitions are of equal size, it does not matter which partition is used • Unequal-size partitions • can assign each process to the smallest partition within which it will fit • queue for each partition • processes are assigned in such a way as to minimize wasted memory within a partition

  14. One Process Queue per Partition [Figure: new processes waiting in a separate queue for each partition of main memory]

  15. One Process Queue per Partition • When it is time to load a process into main memory, the smallest available partition that will hold the process is selected [Figure: a queue of new processes feeding the memory partitions]
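To make the rule concrete, here is a minimal sketch in C, assuming a hypothetical table of unequal partition sizes held in ascending order; the sizes echo the 2 M-12 M partitions pictured earlier, and the function name is invented for illustration.

```c
#include <stdio.h>

/* Hypothetical unequal-size partitions (in KB), kept in ascending order. */
static const int partition_kb[] = { 2048, 4096, 6144, 8192, 8192, 12288 };
static const int num_partitions = (int)(sizeof(partition_kb) / sizeof(partition_kb[0]));

/* Return the index of the smallest partition that can hold a process of the
 * given size, or -1 if no partition is large enough (overlays would be needed). */
int smallest_fitting_partition(int process_kb)
{
    for (int i = 0; i < num_partitions; i++) {
        if (process_kb <= partition_kb[i])
            return i;   /* partitions are sorted, so the first fit is the smallest */
    }
    return -1;
}

int main(void)
{
    printf("5 MB process -> partition index %d\n", smallest_fitting_partition(5120));
    return 0;
}
```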

  16. Dynamic Partitioning • Partitions are of variable length and number • Process is allocated exactly as much memory as required • Eventually get holes in the memory. This is called external fragmentation • Must use compaction to shift processes so they are contiguous and all free memory is in one block
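A rough sketch in C of what compaction does to the bookkeeping: allocated blocks are slid together so that all free space becomes one contiguous hole. The block_t type is invented for illustration, and the sizes in main echo the example that follows; a real OS would also copy the process images and update their base addresses.

```c
#include <stdio.h>

/* A block of memory: owner == NULL means the block is a free hole. */
typedef struct {
    const char *owner;
    int size_kb;
} block_t;

/* Compaction: slide every allocated block toward the bottom of memory so that
 * all free space ends up as a single hole at the top. */
int compact(block_t mem[], int n)
{
    int free_kb = 0, j = 0;
    for (int i = 0; i < n; i++) {
        if (mem[i].owner == NULL)
            free_kb += mem[i].size_kb;   /* gather hole space */
        else
            mem[j++] = mem[i];           /* shift allocated block down */
    }
    if (free_kb > 0) {                   /* one combined hole at the end */
        mem[j].owner = NULL;
        mem[j].size_kb = free_kb;
        j++;
    }
    return j;                            /* new number of blocks */
}

int main(void)
{
    block_t mem[] = { { "P1", 320 }, { NULL, 224 }, { "P3", 288 }, { NULL, 64 } };
    int n = compact(mem, 4);
    for (int i = 0; i < n; i++)
        printf("%s: %d K\n", mem[i].owner ? mem[i].owner : "free", mem[i].size_kb);
    return 0;
}
```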

  17. Example: Dynamic Partitioning [Figure: (a) initially the operating system occupies 128 K of a 1 M memory, leaving 896 K free; (b) Process 1 (320 K) is loaded, leaving 576 K; (c) Process 2 (224 K) is loaded, leaving 352 K]

  18. Example: Dynamic Partitioning (continued) [Figure: (d) Process 3 (288 K) is loaded, leaving a 64 K hole; (e) Process 2 is swapped out, leaving a 224 K hole; (f) Process 4 (128 K) is loaded into that hole, leaving 96 K]

  19. Example: Dynamic Partitioning (continued) [Figure: (g) Process 1 is swapped out, leaving a 320 K hole; (h) Process 2 (224 K) is swapped back into that hole, leaving 96 K; Process 4 (128 K), Process 3 (288 K), and holes of 96 K and 64 K remain in place]

  20. Dynamic Partitioning Placement Algorithm • Operating system must decide which free block to allocate to a process • Best-fit algorithm • chooses the block that is closest in size to the request • worst performer overall • since the smallest block that fits is chosen, the fragments left over are as small as possible and often unusable, so memory compaction must be done more often

  21. Dynamic Partitioning Placement Algorithm • First-fit algorithm • starts scanning memory from the beginning and chooses the first available block that is large enough • fastest • may leave many processes loaded in the front end of memory that must be searched over when trying to find a free block

  22. Dynamic Partitioning Placement Algorithm • Next-fit • starts scanning memory from the location of the last placement and chooses the next available block that is large enough • more often allocates a block at the end of memory, where the largest free block is usually found • the largest block of memory is therefore broken up into smaller blocks • compaction is required to obtain a large block at the end of memory

  23. Dynamic Partitioning Placement Algorithm [Figure: a memory configuration before and after allocating a 16K block; first fit takes the 22K free block (leaving 6K), best fit takes the 18K block (leaving 2K), and next fit takes the 36K block that follows the last allocated block of 14K (leaving 20K); the remaining 8K, 12K, 8K, 6K, and 14K free blocks are unchanged]
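A compact sketch of the three scans in C, operating on a plain array of free-hole sizes; a real allocator keeps a linked list of holes with start addresses, so the data layout and function names here are illustrative. The hole sizes in main mirror the figure above, and the 16K request is inferred from its before/after block sizes.

```c
#include <stdio.h>

/* First fit: scan from the beginning, take the first hole that is large enough. */
int first_fit(const int hole_kb[], int n, int request_kb)
{
    for (int i = 0; i < n; i++)
        if (hole_kb[i] >= request_kb)
            return i;
    return -1;
}

/* Best fit: scan the whole list, take the hole closest in size to the request. */
int best_fit(const int hole_kb[], int n, int request_kb)
{
    int best = -1;
    for (int i = 0; i < n; i++)
        if (hole_kb[i] >= request_kb && (best == -1 || hole_kb[i] < hole_kb[best]))
            best = i;
    return best;
}

/* Next fit: resume at the hole after the last placement, wrapping around once;
 * *last is updated when a hole is chosen. */
int next_fit(const int hole_kb[], int n, int request_kb, int *last)
{
    for (int k = 1; k <= n; k++) {
        int i = (*last + k) % n;
        if (hole_kb[i] >= request_kb) {
            *last = i;
            return i;
        }
    }
    return -1;
}

int main(void)
{
    int holes[] = { 8, 12, 22, 18, 8, 6, 14, 36 };   /* free blocks, in KB */
    int last = 6;                                    /* last placement: the 14K block */
    printf("first fit -> hole %d\n", first_fit(holes, 8, 16));
    printf("best fit  -> hole %d\n", best_fit(holes, 8, 16));
    printf("next fit  -> hole %d\n", next_fit(holes, 8, 16, &last));
    return 0;
}
```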

  24. Why Buddy System? - Buddy Systems (1) • A fixed partitioning scheme limits the number of active processes and may use space inefficiently if there is a poor match between available partition size and process size • A dynamic partitioning scheme is more complex to maintain and includes the overhead of compaction.

  25. Memory Blocks in a Buddy System - Buddy Systems (2) • In a buddy system, memory blocks are available of size 2^K, L <= K <= U, where: 2^L = smallest size block that is allocated, 2^U = largest size block that is allocated. • Generally, 2^U is the size of the entire memory available for allocation.

  26. Allocation Method - Buddy Systems (3) • To begin, the entire space available for allocation is treated as a single block of size 2^U. If a request of size s such that 2^(U-1) < s <= 2^U is made, then the entire block is allocated. • Otherwise, the block is split into two equal buddies of size 2^(U-1). If 2^(U-2) < s <= 2^(U-1), then the request is allocated to one of the two buddies. • Otherwise, one of the buddies is split in half again. • This process continues until the smallest block greater than or equal to s is generated and allocated to the request.

  27. Remove Unallocated Blocks - Buddy Systems (4) • At any time, the buddy system maintains a list of holes (unallocated blocks) of each size 2^i. • A hole may be removed from the (i + 1) list by splitting it in half to create two buddies of size 2^i in the i list. • Whenever a pair of buddies on the i list both become unallocated, they are removed from that list and coalesced into a single block on the (i + 1) list.
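A small sketch in C of how a request size maps to an allocated block size under this scheme; the exponents L and U are parameters, and the 1 M / 64K values used in main are assumptions chosen to match the example that follows.

```c
#include <stdio.h>

/* Binary buddy system: a request of size s is satisfied by the smallest block
 * 2^k with s <= 2^k and L <= k <= U; each halving below corresponds to one
 * split into two buddies as described above. */
unsigned long buddy_block_size(unsigned long s, int L, int U)
{
    unsigned long block = 1UL << U;              /* start from all of memory */
    while (block / 2 >= s && block > (1UL << L))
        block /= 2;                              /* split: keep one of the buddies */
    return (s <= block) ? block : 0;             /* 0: request larger than 2^U */
}

int main(void)
{
    /* With a 1 M memory (U = 20) and a 64K minimum block (L = 16), a 100K
     * request is rounded up to a 128K block, matching request A on the next slide. */
    printf("%luK\n", buddy_block_size(100 * 1024, 16, 20) / 1024);
    return 0;
}
```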

  28. Example of Buddy System - Buddy Systems (5)
  1 Megabyte block:   1M
  Request 100K (A):   A=128K | 128K | 256K | 512K
  Request 240K (B):   A=128K | 128K | B=256K | 512K
  Request 256K (D):   A=128K | 128K | B=256K | D=256K | 256K
  Release B:          A=128K | 128K | 256K | D=256K | 256K
  Release A:          512K | D=256K | 256K
  Request 75K (E):    E=128K | 128K | 256K | D=256K | 256K
  Release E:          512K | D=256K | 256K
  Release D:          1M

  29. Tree Representation of Buddy System - Buddy Systems (6) [Figure: binary tree of the 1 M block after the 256K request: the root (1M) splits into two 512K nodes, the left 512K into two 256K nodes, and the left 256K into two 128K nodes; the leaves from left to right are A=128K, 128K, B=256K, D=256K, and a free 256K]

  30. Fibonacci Buddy System (1) • The Fibonacci sequence is defined as follows: F(0) = 0, F(1) = 1, F(n+2) = F(n+1) + F(n) for n >= 0 • 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, ….
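A short C snippet that generates these block sizes from the recurrence; nothing beyond the recurrence itself is taken from the slides.

```c
#include <stdio.h>

int main(void)
{
    /* Block sizes for a Fibonacci buddy system: F(2)..F(14) = 1 ... 377. */
    unsigned long f[15] = { 0, 1 };
    for (int i = 2; i < 15; i++) {
        f[i] = f[i - 1] + f[i - 2];     /* F(n+2) = F(n+1) + F(n) */
        printf("%lu ", f[i]);           /* 1 2 3 5 8 13 21 34 55 89 144 233 377 */
    }
    printf("\n");
    return 0;
}
```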

  31. Fibonacci Buddy System (2) Show the results of the following sequence in a figure using the Fibonacci buddy system: Request 60 (A); Request 135 (B); Request 70 (C); Return A; Request 20 (D); Return B; Return C; Return D.

  32. Fibonacci Buddy System (3)
  377-byte block:     377 (144 + 233)
  Request 60 (A):     55 | A=89 | 233
  Request 135 (B):    55 | A=89 | 89 | B=144
  Request 70 (C):     55 | A=89 | C=89 | B=144
  Return A:           144 | C=89 | B=144
  Request 60 (D):     55 | D=89 | C=89 | B=144
  Return B:           55 | D=89 | C=89 | 144
  Return D:           144 | C=89 | 144
  Return C:           377

  33. Fibonacci Buddy System (4) [Figure: tree for the state after Return B: 377 splits into 144 + 233, which split further into (55 + 89) + (89 + 144); the leaves from left to right are 55, D=89, C=89, and a free 144]

  34. Relocation • When a program is loaded into memory, the actual (absolute) memory locations are determined • A process may occupy different partitions, which means different absolute memory locations during execution (from swapping) • Compaction will also cause a program to occupy a different partition, which means different absolute memory locations

  35. Addresses • Logical • reference to a memory location independent of the current assignment of data to memory • translation must be made to the physical address • Relative • address expressed as a location relative to some known point, usually the beginning of the program • Physical • the absolute address or actual location

  36. Hardware Support for Relocation [Figure: a relative address from the executing process's image (program, data, stack) is added to the base register to form an absolute address; a comparator checks the result against the bounds register and, if it is out of range, raises an interrupt to the operating system; the base and bounds values come from the process control block]

  37. Registers Used during Execution • Base register • starting address for the process • Bounds register • ending location of the process • These values are set when the process is loaded and when the process is swapped in

  38. Registers Used during Execution • The value of the base register is added to a relative address to produce an absolute address • The resulting address is compared with the value in the bounds register • If the address is not within bounds, an interrupt is generated to the operating system
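A minimal sketch in C of the base/bounds translation described on slides 36-38; the register values in main are invented for illustration, and the exit call merely stands in for the interrupt to the operating system.

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Add the base register to a relative address, then compare the result
 * against the bounds register. */
uint32_t translate(uint32_t relative, uint32_t base, uint32_t bounds)
{
    uint32_t absolute = base + relative;          /* adder */
    if (absolute > bounds) {                      /* comparator */
        fprintf(stderr, "address out of bounds: interrupt to OS\n");
        exit(EXIT_FAILURE);
    }
    return absolute;
}

int main(void)
{
    /* Hypothetical process loaded at 0x20000 with a 64 KB image. */
    uint32_t base = 0x20000, bounds = 0x2FFFF;
    printf("relative 0x1502 -> absolute 0x%X\n", translate(0x1502, base, bounds));
    return 0;
}
```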

  39. Paging • Partition memory into small equal-size chunks and divide each process into chunks of the same size • The chunks of a process are called pages and the chunks of memory are called frames • Operating system maintains a page table for each process • contains the frame location for each page in the process • a memory address consists of a page number and an offset within the page
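A sketch of the page-table lookup in C, assuming the 1K page size used in the later example; the page_table_t layout is invented for illustration, and the frame numbers in main follow the page table shown for process C on slide 42.

```c
#include <stdio.h>

#define PAGE_SIZE 1024u                    /* 1K pages, as in the example on slide 43 */

/* Per-process page table: entry i holds the frame holding page i (-1 = not resident). */
typedef struct {
    int frame[16];
    int num_pages;
} page_table_t;

/* Translate a logical address: look the page number up in the page table,
 * then append the offset: physical = frame * PAGE_SIZE + offset. */
long translate_page(const page_table_t *pt, unsigned int logical)
{
    unsigned int page   = logical / PAGE_SIZE;
    unsigned int offset = logical % PAGE_SIZE;
    if (page >= (unsigned int)pt->num_pages || pt->frame[page] < 0)
        return -1;                         /* page fault / invalid reference */
    return (long)pt->frame[page] * PAGE_SIZE + offset;
}

int main(void)
{
    /* Process C from the example: pages 0-3 held in frames 7-10. */
    page_table_t c = { { 7, 8, 9, 10 }, 4 };
    printf("logical 1502 -> physical %ld\n", translate_page(&c, 1502));
    return 0;
}
```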

  40. Paging [Figure: fifteen frames of main memory at three points in time: (a) all frames free; (b) process A's pages A.0-A.3 loaded into frames 0-3; (c) process B's pages B.0-B.2 loaded into frames 4-6]

  41. Paging (continued) [Figure: (d) process C's pages C.0-C.3 loaded into frames 7-10; (e) process B swapped out, freeing frames 4-6; (f) process D's pages D.0-D.4 loaded into frames 4-6 and 11-12]

  42. Page Tables for Example • Data structures for the example at time epoch (f): • Process A page table: page 0 -> frame 0, 1 -> 1, 2 -> 2, 3 -> 3 • Process B page table: all entries --- (process B is swapped out) • Process C page table: page 0 -> frame 7, 1 -> 8, 2 -> 9, 3 -> 10 • Process D page table: page 0 -> frame 4, 1 -> 5, 2 -> 6, 3 -> 11, 4 -> 12 • Free frame list: 13, 14

  43. Logical Address of Paging [Figure: relative address 1502 (binary 0000010111011110) within a 2700-byte user process; (a) with partitioning the address is used as-is; (b) with paging and a 1K page size the same address is interpreted as page# = 1, offset = 478 (binary 000001 0111011110); the unused part of the process's last page is internal fragmentation]
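The same numbers reproduced in C: with a 1K page size, the low 10 bits of the relative address are the offset and the remaining bits are the page number.

```c
#include <stdio.h>

int main(void)
{
    unsigned int relative = 1502;                  /* 0000010111011110 in binary  */
    unsigned int offset   = relative & 0x3FF;      /* low 10 bits (1K page size)  */
    unsigned int page     = relative >> 10;        /* remaining bits              */
    printf("page %u, offset %u\n", page, offset);  /* prints: page 1, offset 478  */
    return 0;
}
```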

  44. Segmentation • All segments of all programs do not have to be of the same length • There is a maximum segment length • Addressing consists of two parts - a segment number and an offset • Since segments are not equal, segmentation is similar to dynamic partitioning

  45. Logical Address of Segmentation [Figure: relative address 1502 (binary 0000010111011110) within a 2700-byte user process; (a) with partitioning the address is used as-is; (b) with segmentation the process consists of segment 0 (750 bytes) and segment 1 (1950 bytes), so address 1502 falls 752 bytes into segment 1 and the logical address is segment# = 1, offset = 752 (binary 0001 001011110000)]
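A sketch in C of the corresponding segment-table translation: the segment number selects a (base, length) entry and the offset is checked against the segment length. The segment lengths come from the slide; the base addresses are invented for illustration.

```c
#include <stdio.h>

/* Segment table entry: where the segment starts in memory and how long it is. */
typedef struct {
    unsigned int base;     /* physical start address */
    unsigned int length;   /* segment length in bytes */
} segment_t;

/* Translate (segment#, offset) to a physical address, checking the offset
 * against the segment length. */
long translate_segment(const segment_t table[], int num_segments,
                       unsigned int seg, unsigned int offset)
{
    if (seg >= (unsigned int)num_segments || offset >= table[seg].length)
        return -1;                              /* invalid reference: trap to the OS */
    return (long)table[seg].base + offset;
}

int main(void)
{
    /* The example process: segment 0 is 750 bytes, segment 1 is 1950 bytes;
     * the base addresses are hypothetical. */
    segment_t table[] = { { 10000, 750 }, { 20000, 1950 } };
    printf("segment 1, offset 752 -> %ld\n", translate_segment(table, 2, 1, 752));
    return 0;
}
```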

  46. Question 1 - Exercise/Home Work (1) • A 1 Megabyte block of memory is allocated using the buddy system. A. Show the results of the following sequence in a figure: Request 70K (A); Request 35K (B); Request 80K (C); Return A; Request 60K (D); Return B; Return C; Return D. B. Show the binary tree representation following Return B.

  47. Question 2 - Exercise/Home Work (1) • The Fibonacci sequence is defined as follows: F(0) = 0, F(1) = 1, F(n+2) = F(n+1) + F(n) for n >= 0 • 0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, …. A. Could this sequence be used to establish a buddy system? B. What would be the advantage of this system over the binary buddy system?

  48. Question 2 - Exercise/Home Work (2) C. Show the results of the following sequence in a figure using the Fibonacci buddy system: Request 70 (A); Request 35 (B); Request 80 (C); Return A; Request 60 (D); Return B; Return C; Return D. D. Show the binary tree representation following Return B.
