
Virtual Memory October 2, 2000


Presentation Transcript


  1. Virtual Memory October 2, 2000 CS740 • Topics • Motivations for VM • Address Translation • Accelerating with TLBs • Alpha 21X64 memory system

  2. Motivation 1: DRAM a “Cache” for Disk (hierarchy: SRAM 4 MB ~$400, DRAM 256 MB ~$400, disk 8 GB ~$400) • The full address space is quite large: • 32-bit addresses: ~4,000,000,000 (4 billion) bytes • 64-bit addresses: ~16,000,000,000,000,000,000 (16 quintillion) bytes • Disk storage is ~30X cheaper than DRAM storage • 8 GB of DRAM: ~$12,000 • 8 GB of disk: ~$400 • To access large amounts of data in a cost-effective manner, the bulk of the data must be stored on disk

  3. Levels in Memory Hierarchy (larger, slower, cheaper moving down; the SRAM level caches memory, while memory plus disk implement virtual memory) • Register: size 200 B, speed 3 ns, block size 8 B • Cache: size 32 KB / 4 MB, speed 6 ns, $100/MB, block size 32 B • Memory: size 128 MB, speed 60 ns, $1.50/MB, block size 8 KB • Disk: size 20 GB, speed 8 ms, $0.05/MB

  4. DRAM vs. SRAM as a “Cache” • DRAM vs. disk is more extreme than SRAM vs. DRAM • access latencies: • DRAM is ~10X slower than SRAM • disk is ~100,000X slower than DRAM • importance of exploiting spatial locality: • first byte is ~100,000X slower than successive bytes on disk • vs. ~4X improvement for page-mode vs. regular accesses to DRAM • “cache” size: • main memory is ~100X larger than an SRAM cache • addressing for disk is based on sector address, not memory address

  5. Impact of These Properties on Design • If DRAM were organized similarly to an SRAM cache, how would we set the following design parameters? • Line size? • Associativity? • Replacement policy (if associative)? • Write through or write back? • What would the impact of these choices be on: • miss rate • hit time • miss latency • tag overhead

  6. Locating an Object in a “Cache” • 1. Search for matching tag (SRAM cache): entries pair a tag with data (e.g., tag X with data 243), and lookup compares the object name against every tag • 2. Use indirection to look up actual object location (virtual memory): a lookup table maps each object name (D, J, …, X) to a location in the “cache”, which holds only the data (243, 17, …, 105)

  7. A System with Physical Memory Only • The CPU’s load and store addresses (e.g., Load 0xf0, Store 0x10) are used directly to access memory • Examples: most Cray machines, early PCs, nearly all embedded systems, etc.

  8. A System with Virtual Memory • Address translation: the hardware converts virtual addresses into physical addresses via an OS-managed lookup table (the page table); pages not resident in memory live on disk • Examples: workstations, servers, modern PCs, etc.

  9. Page Faults (Similar to “Cache Misses”) • What if an object is on disk rather than in memory? • Page table entry indicates that the virtual address is not in memory • An OS trap handler is invoked, moving data from disk into memory • current process suspends, others can resume • OS has full control over placement, etc.

  10. Servicing a Page Fault • (1) Initiate block read: processor signals the I/O controller to read a block of length P starting at disk address X and store it starting at memory address Y • (2) Read occurs as a DMA (Direct Memory Access) transfer over the memory–I/O bus, under control of the I/O controller • (3) Read done: I/O controller signals completion by interrupting the processor, which can resume the suspended process

  11. Motivation #2: Memory Management • Multiple processes can reside in physical memory • How do we resolve address conflicts? e.g., what if two different Alpha processes access their stacks at address 0x11fffff80 at the same time? • (Virtual) memory image for an Alpha process, top (0000 03FF 8000 0000) to bottom (0000 0000 0001 0000): Reserved; not yet allocated; Dynamic Data; Static Data ($gp); Text (Code) starting at 0000 0001 2000 0000; Stack ($sp); not yet allocated; Reserved

  12. Soln: Separate Virtual Addr. Spaces • Virtual and physical address spaces divided into equal-sized blocks • “Pages” (both virtual and physical) • Each process has its own virtual address space • operating system controls how virtual pages are assigned to physical memory • e.g., the two processes’ pages translate to disjoint physical pages (process 1’s VP 1 to PP 2, process 2’s VP 2 to PP 10), except that a read-only library page can map to the same physical page (PP 7) in both

  13. Contrast: Macintosh Memory Model • Does not use traditional virtual memory • All program objects accessed through “handles”: indirect references through a per-process pointer table • objects stored in a shared global address space

  14. Macintosh Memory Management • Allocation / deallocation: similar to free-list management of malloc/free • Compaction: can move any object and just update the (unique) pointer in the pointer table

  15. Mac vs. VM-Based Memory Mgmt • Allocating, deallocating, and moving memory: • can be accomplished by both techniques • Block sizes: • Mac: variable-sized • may be very small or very large • VM: fixed-size • size is equal to one page (8KB in Alpha) • Allocating contiguous chunks of memory: • Mac: contiguous allocation is required • VM: can map contiguous range of virtual addresses to disjoint ranges of physical addresses • Protection? • Mac: “wild write” by one process can corrupt another’s data

  16. Motivation #3: Protection • Page table entry contains access rights information • hardware enforces this protection (trap into OS if violation occurs) • Example page tables in memory: • Process i: VP 0: Read Yes, Write No, PP 9; VP 1: Read Yes, Write Yes, PP 4; VP 2: Read No, Write No, not resident • Process j: VP 0: Read Yes, Write Yes, PP 6; VP 1: Read Yes, Write No, PP 9; VP 2: Read No, Write No, not resident
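The rights check above can be sketched in a few lines. This is a toy model, not the Alpha’s actual PTE format: the table contents mirror process i from the slide, and `ProtectionFault` stands in for the hardware trap into the OS.

```python
# Hypothetical page table for process i: VPN -> (readable, writable, PPN)
page_table_i = {
    0: (True, False, 9),   # read-only page mapped to PP 9
    1: (True, True, 4),    # read-write page mapped to PP 4
}

class ProtectionFault(Exception):
    """Stands in for the hardware trap into the OS."""

def check_access(page_table, vpn, is_write):
    readable, writable, ppn = page_table[vpn]
    # hardware checks the access-rights bits during translation
    if is_write and not writable:
        raise ProtectionFault(f"write to read-only VP {vpn}")
    if not is_write and not readable:
        raise ProtectionFault(f"read of protected VP {vpn}")
    return ppn
```

A read of VP 0 succeeds and yields PP 9, while a write to the same page raises the fault.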

  17. VM Address Translation • V = {0, 1, . . . , N–1} virtual address space • P = {0, 1, . . . , M–1} physical address space, N > M • MAP: V → P ∪ {∅} address mapping function • MAP(a) = a' if data at virtual address a is present at physical address a' in P • MAP(a) = ∅ if data at virtual address a is not present in P • The latter is a missing item fault: the fault handler has the OS transfer the page from secondary memory into main memory (only on a miss), after which the access proceeds with the physical address

  18. VM Address Translation • virtual address = virtual page number (bits n–1..p) : page offset (bits p–1..0); physical address = physical page number (bits m–1..p) : page offset (bits p–1..0) • Parameters • P = 2^p = page size (bytes). Typically 1KB–16KB • N = 2^n = Virtual address limit • M = 2^m = Physical address limit • Notice that the page offset bits don't change as a result of translation
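Because the offset bits pass through unchanged, translation only ever touches the upper bits. A minimal sketch of the split, using p = 13 (the Alpha’s 8 KB pages) purely as an example value:

```python
PAGE_BITS = 13                       # p = 13 -> P = 2^13 = 8 KB pages
PAGE_SIZE = 1 << PAGE_BITS

def split_va(va):
    vpn = va >> PAGE_BITS            # upper n - p bits: virtual page number
    offset = va & (PAGE_SIZE - 1)    # low p bits: unchanged by translation
    return vpn, offset
```

For instance, `split_va(0x12345)` yields VPN 9 and offset 0x345.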

  19. Page Tables • The virtual page number indexes a page table; each entry holds a valid bit plus either a physical page number (valid = 1, page in physical memory) or a disk address (valid = 0, page in disk storage)

  20. Address Translation via Page Table • The page table base register locates the current page table • The virtual page number (VPN) acts as the table index • The entry supplies access bits, a valid bit, and a physical page number; if valid = 0 then the page is not in memory • The physical page number is concatenated with the unchanged page offset to form the physical address

  21. Page Table Operation • Translation • Separate (set of) page table(s) per process • VPN forms index into page table • Computing Physical Address • Page Table Entry (PTE) provides information about page • if (Valid bit = 1) then page in memory. • Use physical page number (PPN) to construct address • if (Valid bit = 0) then page in secondary memory • Page fault • Must load into main memory before continuing • Checking Protection • Access rights field indicates allowable access • e.g., read-only, read-write, execute-only • typically support multiple protection modes (e.g., kernel vs. user) • Protection violation fault if the process lacks the necessary permission
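The valid-bit logic above can be modeled directly. This is an illustrative sketch, not real hardware: the table contents are invented, and `PageFault` stands in for the trap that makes the OS load the page before the access retries.

```python
PAGE_BITS = 13

class PageFault(Exception):
    """Stands in for the trap into the OS page-fault handler."""

# VPN -> (valid bit, PPN if valid else a disk location); contents invented
page_table = {0: (1, 7), 1: (0, "disk block 42"), 2: (1, 3)}

def translate(va):
    vpn = va >> PAGE_BITS
    offset = va & ((1 << PAGE_BITS) - 1)
    valid, where = page_table[vpn]
    if not valid:                       # page in secondary memory
        raise PageFault(f"VP {vpn} is at {where}, not in main memory")
    return (where << PAGE_BITS) | offset   # PPN concatenated with offset
```

An access within VP 0 translates normally; any access within VP 1 raises the fault.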

  22. Integrating VM and Cache • Most Caches “Physically Addressed” • Accessed by physical addresses • Allows multiple processes to have blocks in cache at same time • Allows multiple processes to share pages • Cache doesn’t need to be concerned with protection issues • Access rights checked as part of address translation • Perform Address Translation Before Cache Lookup (CPU issues VA, translation yields PA, then cache lookup, with main memory accessed on a miss) • But this could involve a memory access itself • Of course, page table entries can also become cached

  23. Speeding up Translation with a TLB • “Translation Lookaside Buffer” (TLB) • Small, usually fully associative cache • Maps virtual page numbers to physical page numbers • Contains complete page table entries for small number of pages • On a TLB hit the physical address goes straight to the cache; on a TLB miss the full translation runs first, then the cache and, on a cache miss, main memory

  24. Address Translation with a TLB • The virtual address (bits N–1..0) splits into virtual page number and page offset • Each TLB entry holds a valid bit, process ID, dirty bit, tag, and physical page number; a tag match on a valid entry is a TLB hit and yields the physical page number • The resulting physical address (tag, index, byte offset) then drives an ordinary data-cache lookup, whose own valid-entry tag match signals a cache hit
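The hit/miss behavior can be sketched with a tiny fully associative TLB in front of a page-table walk. Everything here is invented for illustration (the table maps VPN v to PPN v + 100, and no capacity limit or eviction is modeled); the point is only that a hit skips the page-table access and a miss refills the TLB.

```python
PAGE_BITS = 13
page_table = {vpn: vpn + 100 for vpn in range(8)}   # VPN -> PPN (invented)
tlb = {}                                            # cached PTEs
stats = {"hit": 0, "miss": 0}

def translate(va):
    vpn, offset = va >> PAGE_BITS, va & ((1 << PAGE_BITS) - 1)
    if vpn in tlb:
        stats["hit"] += 1       # hit: physical address with no table access
    else:
        stats["miss"] += 1      # miss: walk the page table, refill the TLB
        tlb[vpn] = page_table[vpn]
    return (tlb[vpn] << PAGE_BITS) | offset
```

Two accesses to the same page produce one miss and then one hit.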

  25. Alpha AXP 21064 TLB • page size: 8KB • hit time: 1 clock • miss penalty: 20 clocks • TLB size: ITLB 8 PTEs, DTLB 32 PTEs • placement: Fully associative

  26. TLB-Process Interactions • TLB Translates Virtual Addresses • But the virtual address space changes on each context switch • Could Flush TLB • On every context switch • Refill for new process by a series of TLB misses • ~100 clock cycles each • Could Include Process ID Tag with TLB Entry • Identifies which address space is being accessed • OK even when sharing physical pages

  27. Virtually-Indexed Cache • Cache Index Determined from Virtual Address • Can begin the cache and TLB lookups at the same time • Cache Physically Addressed • Cache tag indicates physical address • Compare it with the TLB result to see if they match • Only then is it considered a hit

  28. Generating Index from Virtual Address • Virtual address: virtual page number (bits 31–12), page offset (bits 11–0); physical address: physical page number (bits 29–12), page offset (bits 11–0); the cache index must come from the offset bits common to both • Size cache so that index is determined by page offset • Can increase associativity to allow larger cache • E.g., early PowerPCs had 32KB cache • 8-way associative, 4KB page size • Page Coloring • Make sure lower k bits of VPN match those of PPN • Page replacement becomes set associative • Number of sets = 2^k
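The sizing rule above reduces to a one-liner: for the index plus byte-offset bits to fit inside the page offset, each cache way can span at most one page. The PowerPC figures come from the slide; this helper name is ours.

```python
# cache size <= page size * associativity when the index must be
# determined entirely by the (translation-invariant) page offset
def max_virtually_indexed_cache(page_size, associativity):
    return page_size * associativity
```

With 4 KB pages, an 8-way cache can reach 32 KB, matching the early-PowerPC example; a direct-mapped cache would be limited to a single page.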

  29. Alpha Physical Addresses • Model / bits / max. size: 21064: 34 bits, 16GB; 21164: 40 bits, 1TB; 21264: 44 bits, 16TB • Why a 1TB (or More) Address Space? • At $1.00 / MB, would cost $1 million for 1 TB • Would require 131,072 memory chips, each with 256 Megabits • Current uniprocessor models limited to 2 GB

  30. Massively-Parallel Machines • Example: Cray T3E • Up to 2048 Alpha 21164 processors • Up to 2 GB memory / processor • 8 TB physical address space! • Logical Structure • Many processors sharing large global address space • Any processor can reference any physical address • VM system manages allocation, sharing, and protection among processors • Physical Structure • Memory distributed over many processor modules • Messages routed over an interconnection network to perform remote reads & writes

  31. Alpha Virtual Addresses • Virtual address fields: level 1 index (10 bits), level 2 index (10 bits), level 3 index (10 bits), page offset (13 bits) • Page Size • Currently 8KB • Page Tables • Each table fits in a single page • Page Table Entry is 8 bytes • 4 bytes: physical page number • Other bytes: for valid bit, access information, etc. • 8K page can hold 1024 PTEs • Alpha Virtual Address • Based on 3-level paging structure • Each level indexes into a page table • Allows 43-bit virtual address with 8KB page size

  32. Alpha Page Table Structure • Tree Structure: the level 1 page table points to level 2 page tables, which point to level 3 page tables, which point to physical pages • Node degree ≤ 1024 • Depth = 3 • Nice Features • No need to enforce contiguous page layout • Dynamically grow tree as memory needs increase

  33. Mapping an Alpha 21064 Virtual Address • Each of the three 10-bit indices selects one of 1024 PTEs (PTE size: 8 bytes; page table size: 1024 PTEs) • The 13-bit page offset passes through unchanged • The resulting 21-bit physical page number plus the 13-bit offset form the 34-bit physical address

  34. Virtual Address Ranges • seg1 (binary 1…1 11 xxxx…xxx): kernel-accessible virtual addresses • Information maintained by OS but not to be accessed by user • kseg (binary 1…1 10 xxxx…xxx): kernel-accessible physical addresses • No address translation performed • Used by OS to indicate physical addresses • seg0 (binary 0…0 0x xxxx…xxx): user-accessible virtual addresses • Only part accessible by user program • Address Patterns • Must have high-order bits all 0’s or all 1’s • Currently 64 – 43 = 21 wasted bits in each virtual address • Prevents programmers from sticking extra information into pointers • Could lead to problems when expanding the virtual address space in the future

  35. Alpha Seg0 Memory Layout • Layout, top (3FF FFFF FFFF) to bottom: Reserved for shared libraries (down to 3FF 8000 0000); not yet allocated; Dynamic Data; Static Data (from 001 4000 0000); not used; Text (Code) (from 001 2000 0000); Stack ($sp, growing down); not yet allocated; Reserved (below 000 0001 0000) • Regions • Data • Static space for global variables • Allocation determined at compile time • Access via $gp • Dynamic space for runtime allocation • E.g., using malloc • Text • Stores machine code for program • Stack • Implements runtime stack • Access via $sp • Reserved • Used by operating system • shared libraries, process info, etc.

  36. Alpha Seg0 Memory Allocation • Address Range • User code can access memory locations in range 0x0000000000010000 to 0x000003FFFFFFFFFF • Nearly a 2^42 ≈ 4.398 × 10^12 byte range • In practice, programs access far fewer • Dynamic Memory Allocation • Virtual memory system only allocates blocks of memory as needed • As stack reaches lower addresses, add to lower allocation • As break moves toward higher addresses, add to upper allocation • Due to calls to malloc, calloc, etc. • (Figure: stack region between minimum and current $sp, a gap, text, static data, dynamic data up to the break, another gap, then read-only shared libraries)

  37. Minimal Page Table Configuration • User-Accessible Pages • VP4: Shared Library, 0000 03FF 8000 0000 – 0000 03FF 8000 1FFF • Read only to prevent undesirable interprocess interactions • Near top of Seg0 address space • VP3: Data, 0000 0001 4000 0000 – 0000 0001 4000 1FFF • Both static & dynamic • Grows upward from virtual address 0x140000000 • VP2: Text, 0000 0001 2000 0000 – 0000 0001 2000 1FFF • Read only to prevent corrupting code • VP1: Stack, 0000 0001 1FFF E000 – 0000 0001 1FFF FFFF • Grows downward from virtual address 0x120000000

  38. Partitioning Addresses • Address 0x001 2000 0000 = 0000000000 1001000000 0000000000 0000000000000 • Level 1: 0, Level 2: 576, Level 3: 0 • Address 0x001 4000 0000 = 0000000000 1010000000 0000000000 0000000000000 • Level 1: 0, Level 2: 640, Level 3: 0 • Address 0x3FF 8000 0000 = 0111111111 1100000000 0000000000 0000000000000 • Level 1: 511, Level 2: 768, Level 3: 0
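The partitioning above is pure bit extraction, and can be checked mechanically against the slide's three examples (the function name is ours):

```python
# Decompose a 43-bit Alpha virtual address into its three 10-bit page
# table indices and 13-bit page offset, per the field layout of slide 31.
def levels(va):
    offset = va & 0x1FFF             # bits 12..0
    l3 = (va >> 13) & 0x3FF          # bits 22..13
    l2 = (va >> 23) & 0x3FF          # bits 32..23
    l1 = (va >> 33) & 0x3FF          # bits 42..33
    return l1, l2, l3, offset
```

Running it on the three addresses reproduces the slide's index triples exactly.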

  39. Mapping Minimal Configuration • The level 1 page table needs only entries 0 and 511 • Level 2 tables use entries 575, 576, and 640 (under level 1 entry 0) and 768 (under level 1 entry 511) • Level 3 entries then map the pages: VP1 (0000 0001 1FFF E000 – 0000 0001 1FFF FFFF, entry 1023), VP2 (0000 0001 2000 0000 – 0000 0001 2000 1FFF, entry 0), VP3 (0000 0001 4000 0000 – 0000 0001 4000 1FFF, entry 0), and VP4 (0000 03FF 8000 0000 – 0000 03FF 8000 1FFF, entry 0)

  40. Increasing Heap Allocation • Without More Page Tables • Could allocate 1023 additional pages (level 3 entries 1 through 1023, covering 0000 0001 4000 2000 through 0000 0001 407F FFFF above VP3) • Would give ~8MB heap space • Adding Page Tables • Must add a new page table with each additional 8MB increment • Maximum Allocation • Our Alphas limit user to 1GB data segment • Limit stack to 32MB

  41. Expanding Alpha Address Space • Virtual address fields become: level 1 (10+k bits), level 2 (10+k bits), level 3 (10+k bits), page offset (13+k bits) • Increase Page Size • Increasing page size 2X increases virtual address space 16X • 1 bit for the page offset, 1 bit for each level index • Physical Memory Limits • Cannot be larger than kseg allows: VA bits – 2 ≥ PA bits • Cannot be larger than 32 + page offset bits • Since PTE only has 32 bits for PPN • Configurations: Page Size 8K / 16K / 32K / 64K gives VA Size 43 / 47 / 51 / 55 and PA Size 41 / 45 / 47 / 48
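The configuration table above follows directly from the two limits just stated; a short sketch reproduces it (the helper name is ours):

```python
# With page size 2^(13+k): VA bits = 3 level indices of (10+k) bits plus
# a (13+k)-bit offset; PA bits are capped by both the kseg rule
# (VA bits - 2) and the 32-bit PPN field (32 + offset bits).
def va_pa_bits(k):
    va = 3 * (10 + k) + (13 + k)
    pa = min(va - 2, 32 + (13 + k))
    return va, pa
```

For k = 0..3 (8K through 64K pages) this yields (43, 41), (47, 45), (51, 47), (55, 48), matching the slide; note the binding limit switches from kseg to the PPN field at 32K pages.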

  42. Main Theme • Programmer’s View • Large “flat” address space • Can allocate large blocks of contiguous addresses • Process “owns” machine • Has private address space • Unaffected by behavior of other processes • System View • User virtual address space created by mapping to set of pages • Need not be contiguous • Allocated dynamically • Enforce protection during address translation • OS manages many processes simultaneously • Continually switching among processes • Especially when one must wait for resource • E.g., disk I/O to handle page fault
