
Chapter 10 - Memory Management





  1. Chapter 10 - Memory Management • Memory management can have a large influence on the performance of a program. • Operating system handles allocation of memory to processes. • Processes typically manage the OS-supplied memory (e.g., new or malloc()) - Figure 10.1. • Helpful to study the layout of memory within a process before discussing OS memory functions.

  2. Linking & Loading a Process • Various steps are required to take the source code of a program and make it runnable (Figure 10.2). • Compiler produces object modules (in UNIX: “.o” files; in DOS/WIN: “.obj” files). • Object module formats are varied; Figures 10.3, 10.4 & 10.5 demonstrate a typical format containing: • Header - directory of the rest of the file contents. • Machine code - compiler-generated machine code; the code references in here are relative. • Initialized data - globals that have compile-time values. • Uninitialized data - globals that do not have compile-time values (and thus no space allocated in object file).

  3. Linking & Loading a Process • Symbol table - defined external symbols (code/data defined in this object file that can be called from elsewhere) and undefined external symbols (referenced in this module, but found elsewhere). • Relocation information - information about the object file that permits the linker to connect object files into a coherent, executable unit (aka load module). • UNIX notes: • Many UNIX object files are in either COFF (Common Object File Format) or ELF (Executable and Linking Format). • Unix file command can tell you something about the object file and/or executable. • Unix ld command is used to combine object files into an executable; it is sometimes called implicitly.
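A rough C sketch of such an object-module header, acting as a directory of the parts listed on the last two slides, might look like the following. The field names and widths here are illustrative assumptions only; real COFF and ELF headers are laid out differently.

    /* Hypothetical object-module header in the spirit of Figure 10.3.
     * Field names and sizes are illustrative, not an actual COFF/ELF layout. */
    #include <stdint.h>

    struct obj_header {
        uint32_t magic;             /* identifies the file as an object module      */
        uint32_t code_size;         /* bytes of relocatable machine code            */
        uint32_t init_data_size;    /* initialized globals stored in the file       */
        uint32_t uninit_data_size;  /* uninitialized globals (no space in the file) */
        uint32_t symtab_offset;     /* file offset of the symbol table              */
        uint32_t reloc_offset;      /* file offset of the relocation information    */
    };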

  4. Linking & Loading a Process • More UNIX notes: • Unix nm command will print part of the symbol table of an object file and/or executable binary. • UNIX ar command used to manage libraries of object files (“.a” files on UNIX; “.lib” files on DOS/WIN). • For more information, see the following man pages: ld, nm, a.out, ar, strip & elf. • The linker is responsible for combining one or more object files along with zero or more libraries into a load module (executable binary). • Linker steps are quite involved (pages 382-383) but boil down to two steps: relocation and linking.
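To make the defined/undefined distinction concrete, here is a two-file example (the file and function names are made up for illustration). In util.o the symbol util_sum is a defined external; in main.o it is an undefined external that the linker must resolve. Typically, nm lists it with a "T" in the first object file and a "U" in the second.

    /* util.c -- defines the external symbol util_sum (a defined external). */
    int util_sum(int a, int b)
    {
        return a + b;
    }

    /* main.c -- references util_sum without defining it (an undefined external
     * that the linker resolves against util.o or a library). */
    #include <stdio.h>

    int util_sum(int a, int b);   /* declaration only; the definition lives elsewhere */

    int main(void)
    {
        printf("%d\n", util_sum(2, 3));
        return 0;
    }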

  5. Linking & Loading a Process • Relocation - The correction of addresses within the object modules relative to the linker’s placement of other object modules within the binary (Figure 10.6). • Relocation can be static (done once by the linker at link time) or dynamic (a base register is added to the address in the binary continually at run time). • Relocation is also called binding. • Linking - Modification of addresses where one object module references code/data in another object module (also called resolution of unsatisfied external references). Figure 10.7. • Libraries are used to store common functions (accessed via “-lm” on cc/ld, which resolves to /usr/lib/libm.a).
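The following sketch shows the core of static relocation, assuming a made-up relocation record that simply names a byte offset in the module's code holding an address to be patched; real record formats carry more information (symbol, addressing mode, width).

    /* Patch every recorded address by the module's final load address,
     * in the spirit of Figure 10.6.  Assumes naturally aligned 4-byte addresses. */
    #include <stddef.h>
    #include <stdint.h>

    struct reloc_record {
        uint32_t code_offset;    /* where in the code an address needs fixing */
    };

    void relocate(uint8_t *code, const struct reloc_record *recs,
                  size_t nrecs, uint32_t load_base)
    {
        for (size_t i = 0; i < nrecs; i++) {
            uint32_t *patch = (uint32_t *)(code + recs[i].code_offset);
            *patch += load_base;   /* module-relative address -> absolute address */
        }
    }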

  6. Loading a Binary for Execution • The load module (executable binary) is loaded into the process’ memory area using OS-specific memory layouts (such as Figure 10.8). • Note how some areas are not stored in the binary but have to be created for execution in memory (uninitialized & stack data). • OS calculates the memory required for the particular binary, including a default stack size. The UNIX size command will show you the expected memory “footprint” from a binary (try “size a.out”). • Notice how the memory areas of a process are laid out to permit dynamic growth (Figure 10.9) via any future new or malloc() calls.
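A back-of-the-envelope version of that footprint calculation, assuming a made-up structure for the sizes recorded in the load module and an arbitrary default stack size:

    /* Footprint = parts stored in the binary (text, initialized data) plus parts
     * created at load time (uninitialized data and a default stack).  Names and
     * the stack constant are assumptions for this sketch. */
    #include <stddef.h>

    #define DEFAULT_STACK_SIZE (64 * 1024)

    struct load_module_info {
        size_t text_size;           /* machine code, stored in the binary        */
        size_t init_data_size;      /* initialized globals, stored in the binary */
        size_t uninit_data_size;    /* bss: allocated and zeroed at load time    */
    };

    size_t footprint(const struct load_module_info *m)
    {
        return m->text_size + m->init_data_size
             + m->uninit_data_size + DEFAULT_STACK_SIZE;
    }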

  7. Variations in Program Loading • Larger programs result in large object files and libraries. The resulting binary (load module) can be huge. • One technique to cut down the size is load time dynamic linking - delay the linking in of library routines until process creation time instead of binary creation time. The resulting process image in memory will have all the externals satisfied (compare Figure 10.10 with 10.11). • Another technique is run time dynamic linking - rather than linking at binary load time, you delay it until the last possible moment -- at the time of reference by the program (Figure 10.12).
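On most UNIX systems the dlopen()/dlsym() interface (declared in <dlfcn.h>) exposes run-time dynamic linking directly to the programmer. A minimal sketch, assuming the math library is installed as libm.so.6 (the name varies by system, and older glibc versions also need -ldl at link time):

    #include <dlfcn.h>
    #include <stdio.h>

    int main(void)
    {
        /* Nothing from the math library is linked in until this call. */
        void *handle = dlopen("libm.so.6", RTLD_LAZY);
        if (handle == NULL) {
            fprintf(stderr, "dlopen: %s\n", dlerror());
            return 1;
        }

        /* Resolve the external reference at the moment of use. */
        double (*cosine)(double) = (double (*)(double))dlsym(handle, "cos");
        if (cosine != NULL)
            printf("cos(0.0) = %f\n", cosine(0.0));

        dlclose(handle);
        return 0;
    }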

  8. Variations in Program Loading • Figure 10.13 summarizes the three linking methods (static, load time dynamic, run time dynamic). • Dynamic linking is also called late binding. • Interesting comparison of the costs involved with the three methods on page 393. This is an example of the classic time/space tradeoff. Decreasing space requirements will usually increase time requirements and vice versa. • The book doesn’t mention a fourth, very popular type of late binding -- use of shared libraries. • With the three techniques above, each process ends up requiring memory space allocated for all of the object modules the program uses.

  9. Variations in Program Loading - Shared Libraries • Rather than having each process load up its own private copy of common library routines, you can keep only one copy of a common routine in memory and link each process to a block of shared memory containing the common routine. • For instance, rather than 100 processes each loading up the object module for the printf() routine, you have each one call a single copy of printf(). • Thus, the linking happens at runtime and, rather than copying in the code from a common library, the executable is routed to the shared library routine.

  10. Variations in Program Loading - Shared Libraries • The shared library routine must be written such that it does not use any private global data of any one particular process, else you couldn’t have more than one process sharing the code. • This is called reentrant, pure or PIC (Position Independent Code) code. From the “CC” man page: -pic Produces position-independent code. Use this option to compile source files when building a shared library. Each reference to a global datum is generated as a dereference of a pointer in the global offset table. Each function call is generated in pc-relative addressing mode through a procedure linkage table.
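A small illustration of the reentrancy requirement (it shows the shared-data problem, not PIC code generation): the first routine keeps its result in writable static storage, which is exactly the kind of private global data the slide warns about, while the second keeps no state of its own and so remains shareable.

    #include <stddef.h>
    #include <stdio.h>

    static char shared_buf[32];          /* private writable global data */

    /* Not reentrant: every caller funnels through the one static buffer. */
    const char *format_id_bad(int id)
    {
        snprintf(shared_buf, sizeof shared_buf, "id-%d", id);
        return shared_buf;
    }

    /* Reentrant: the caller supplies the storage; the routine keeps no state. */
    const char *format_id_good(int id, char *buf, size_t buflen)
    {
        snprintf(buf, buflen, "id-%d", id);
        return buf;
    }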

  11. Variations in Program Loading - Shared Libraries • Shared library code resides in special “.so” files. For example, “ls -l /lib/libc.*” on xi shows:
-rw-r--r--  1 bin   bin   1153120 Dec 14  1996 /lib/libc.a
lrwxrwxrwx  1 root  root       11 Aug  7  1996 /lib/libc.so -> ./libc.so.1
-rwxr-xr-x  1 bin   bin    663460 Dec 14  1996 /lib/libc.so.1
• libc.a contains the statically-linked object modules. • libc.so.1 contains the shared library object modules that are linked dynamically at runtime to a single copy of the routines in memory shared between all processes. • Result is a decrease in overall memory usage. • Shared library support requires OS intervention!

  12. Variations in Program Loading - Shared Libraries • Shared libraries are named by version numbers, so you can be sure a program compiled against a particular version of a shared library will run with the correct version (if it is installed). • The ldd command will show you what shared libraries a particular binary expects to be available. • The UNIX environment variable LD_LIBRARY_PATH is used to indicate where the runtime linker can find the “.a” and “.so” files. • DLLs under Windows-based operating systems serve a similar function (Dynamically Linked Library). Windows uses the PATH variable to find DLLs.

  13. Skip 10.5, 10.6, 10.8, 10.9, 10.10 • Section 10.7: Dynamic Memory Allocation • Static allocation of memory within an operating system is not a good idea, since processes are dynamic both in their behavior and over their life cycles. • OS has to allocate blocks of memory depending on demand. OS has to figure out how to: • Keep track of blocks in use and free. • Allocate blocks when a request comes in. • Process memory patterns can lead to memory fragmentation as different sized blocks are allocated and released (Figure 10.19).
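A toy first-fit allocator over a single fixed arena, as one way to picture keeping track of free and in-use blocks; everything here (the names, the static arena, ignoring alignment and coalescing) is a simplifying assumption, not SOS code.

    #include <stddef.h>

    #define ARENA_SIZE 4096

    struct block {
        size_t size;             /* payload bytes in this block  */
        int in_use;              /* 0 = free, 1 = allocated      */
        struct block *next;      /* next block in address order  */
    };

    static unsigned char arena[ARENA_SIZE];   /* alignment ignored for brevity */
    static struct block *head = NULL;

    void *toy_alloc(size_t want)
    {
        if (head == NULL) {                   /* start with one big free block */
            head = (struct block *)arena;
            head->size = ARENA_SIZE - sizeof *head;
            head->in_use = 0;
            head->next = NULL;
        }
        for (struct block *b = head; b != NULL; b = b->next) {
            if (!b->in_use && b->size >= want + sizeof(struct block)) {
                /* Split: carve the request off the front, leave the rest free. */
                struct block *rest =
                    (struct block *)((unsigned char *)(b + 1) + want);
                rest->size = b->size - want - sizeof *rest;
                rest->in_use = 0;
                rest->next = b->next;
                b->size = want;
                b->in_use = 1;
                b->next = rest;
                return b + 1;                 /* payload follows the header */
            }
        }
        return NULL;    /* no free block large enough: external fragmentation */
    }

    void toy_free(void *p)
    {
        if (p != NULL)
            ((struct block *)p - 1)->in_use = 0;  /* real allocators also coalesce */
    }

Releasing blocks in a different order than they were allocated leaves free holes of assorted sizes between in-use blocks, which is the fragmentation pattern of Figure 10.19.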

  14. Logical & Physical Memory • Review: physical addresses on the CRA-1 are used while in system mode so the processor has access to all of memory. • When in user mode, the processor is limited by the values of the base and limit registers. This is called logical addressing. • The hardware and operating system create multiple logical address spaces within the single physical address space (Figure 10.27). • At this point we are still considering the logical address space to be contiguous within physical memory.
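A sketch of that base-and-limit translation, with the register widths and names chosen arbitrarily for the example:

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct {
        uint32_t base;     /* start of this process's region in physical memory      */
        uint32_t limit;    /* size of the region (logical addresses must be < limit) */
        bool user_mode;    /* system mode bypasses translation                       */
    } mmu_state;

    /* Returns true and stores the physical address if the access is legal. */
    bool translate(const mmu_state *mmu, uint32_t logical, uint32_t *physical)
    {
        if (!mmu->user_mode) {          /* system mode: physical addressing */
            *physical = logical;
            return true;
        }
        if (logical >= mmu->limit)      /* outside the logical address space */
            return false;               /* -> protection fault */
        *physical = mmu->base + logical;
        return true;
    }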

  15. Allocating Contiguous Memory to Processes • SOS & JavaSOS divide memory into even-sized chunks (static allocation). • Not a very flexible situation if you have processes dynamically changing their size and number over time (Figure 10.28). • The next step would be to dynamically assign memory as processes change in size and enter/exit the system. This is a Difficult Problem (we skipped this in section 10.7). • Only makes sense to bother with dynamic memory allocation if it is desirable to share the machine between multiple processes (multiprogramming).

  16. Allocating Contiguous Memory to Processes • OS & Hardware must provide: • Memory allocation scheme - various algorithms (again which we skipped) mentioned in earlier sections. • Memory protection scheme - can use ye olde base & bound registers (requires contiguous memory allocation) or keyed memory (permits non-contiguous memory allocation) or as-yet not discussed techniques (Figure 10.29). • Memory Management System Calls • A process that does dynamic memory programming requires OS services to adjust its memory boundaries. • One simple SOS solution would be to add yet another system call for memory requests (Figure 10.30).

  17. Memory Management System Calls • UNIX uses the brk() call (named so as not to conflict with the C reserved word “break”) to extend the upper bound of the process: int brk (char *addr); // 0 == worked, -1 == failed • Execution of the brk() call results in the extension of the dynamic data section of the process memory map (Figure 10.31). • Notice the unused logical address space -- these are memory addresses that are not mapped to physical memory. Supporting this requires a non-contiguous memory allocation scheme within the OS. • Usually, new & malloc() indirectly result in brk() calls, depending on the size of the requests.
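A tiny demonstration of moving the break directly; normally malloc() does this on the program's behalf. brk() and sbrk() are declared in <unistd.h> on most UNIX systems, though modern standards treat them as legacy interfaces, so details vary:

    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        void *old_brk = sbrk(0);            /* current top of the dynamic data section */

        if (sbrk(4096) == (void *)-1) {     /* ask the OS to extend it */
            perror("sbrk");
            return 1;
        }
        printf("break grew from %p to %p\n", old_brk, sbrk(0));

        if (brk(old_brk) != 0)              /* shrink back to where we started */
            perror("brk");
        return 0;
    }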

  18. Memory Management System Calls • An internal process memory manager (typically part of the runtime support in a language) takes care of intra-process memory requests. • The internal process memory manager calls the operating system only if the process’s memory limit isn’t large enough to satisfy the program’s needs. • The two levels of memory management result in most of the malloc()/new/free()/delete operations being handled within the process. • Note that the behavior of most programs means that their memory demands increase over time. • Figures 10.32 & 10.33 show these two levels of memory management at work.
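A sketch of that two-level arrangement: a process-level allocator hands out small pieces from a pool it already owns and only calls the OS (here via sbrk()) when the pool runs dry. The pool size, the names, and the lack of a free list or alignment handling are all simplifications for the example.

    #include <stddef.h>
    #include <unistd.h>

    #define POOL_CHUNK 65536            /* how much to request from the OS at a time */

    static char *pool = NULL;
    static size_t pool_left = 0;

    void *tiny_malloc(size_t n)
    {
        if (n > POOL_CHUNK)             /* sketch: punt on oversized requests */
            return NULL;
        if (n > pool_left) {            /* pool exhausted: go to the OS */
            void *chunk = sbrk(POOL_CHUNK);
            if (chunk == (void *)-1)
                return NULL;
            pool = chunk;
            pool_left = POOL_CHUNK;
        }
        void *p = pool;                 /* the common case never leaves the process */
        pool += n;
        pool_left -= n;
        return p;
    }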

  19. Memory Management System Calls • The semantics of the brk() system call come from an era when logical addresses were mapped to a contiguous physical address space (since brk() grows the process from one end, not in the middle). • A proposed SOS call acknowledges that modern memory managers can use non-contiguous schemes (such as paging, presented in the next chapter): char *AllocateMemory(int length); • Notice how it looks a lot like malloc(). • Skip section 10.16.
