
ECE 454 Computer Systems Programming Virtual Memory and Prefetching


  1. ECE 454 Computer Systems Programming: Virtual Memory and Prefetching

  2. Contents • Virtual Memory (review hopefully) • Prefetching

  3. IA32 Linux Memory Layout • Stack: runtime stack (8MB limit) • Heap: dynamically allocated storage, e.g., malloc(), calloc(), new() • Data: statically allocated data, e.g., arrays & strings declared in code • Text: executable instructions (read-only)
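
A minimal C sketch (not from the slides) of where each kind of object lives in this layout:

    #include <stdlib.h>

    int table[1024];                /* Data: statically allocated array          */
    const char *msg = "ece454";     /* the string literal sits in read-only data */

    int main(void) {                /* the compiled code of main() lives in Text */
        int local = 42;                          /* Stack: automatic variable    */
        int *buf = malloc(64 * sizeof(int));     /* Heap: dynamic allocation     */
        buf[0] = local;
        free(buf);
        return 0;
    }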

  4. Virtual Memory • Programs access data and instructions using virtual memory addresses • Conceptually a very large array of bytes: 4GB for 32-bit architectures, 16EB (exabytes) for 64-bit architectures • Each byte has its own address • The system provides an address space private to each process • Memory allocation: we need to decide where different program objects should be stored; this is performed by a combination of the compiler and the run-time system • But why virtual memory? Why not physical memory?

  5. A System Using Physical Addressing • Used in “simple” embedded microcontrollers • Introduces several problems for larger, multi-process systems • [Diagram: the CPU sends a physical address (PA) directly to main memory, which returns the data word]

  6. Problem 1: How Does Everything Fit? • 64-bit addresses cover 16 exabytes • Physical main memory: a few gigabytes • And there are many processes…

  7. Problem 2: Memory Management • [Diagram: Process 1, Process 2, Process 3, …, Process n, each with its own stack, heap, .text, .data, …, must share physical main memory] • What goes where?

  8. Problem 3: Portability • [Diagram: the same program (programA: stack, heap, .text, .data, …) must be placed into physical main memory on Machine 1 and on Machine 2] • What goes where on each machine?

  9. Problem 4: Protection • [Diagram: Process i and Process j must be kept from accessing each other’s regions of physical main memory]

  10. Problem 5: Sharing • [Diagram: Process i and Process j both need access to a region of physical main memory]

  11. Solution: Add a Level of Indirection • Each process gets its own private memory space • Solves all the previous problems • [Diagram: the virtual memory of each process (Process 1 … Process n) is mapped onto physical memory]

  12. A System Using Virtual Addressing • MMU = Memory Management Unit • [Diagram: the CPU issues a virtual address (VA); the MMU on the CPU chip translates it to a physical address (PA) sent to main memory, which returns the data word]

  13. Virtual Memory Turns Main Memory into a Cache • The design is driven by an enormous miss penalty: disk is about 10,000x slower than DRAM • DRAM-cache design: • Large page (block) size: typically 4KB • Fully associative: any virtual page can be placed in any physical page, which requires a complex mapping function • Sophisticated, expensive replacement algorithms • Write-back rather than write-through
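
A minimal sketch (assuming 4KB pages, hence a 12-bit page offset) of how a virtual address divides into a virtual page number and an offset:

    #include <stdint.h>

    #define PAGE_SHIFT 12                        /* 4KB page => 12 offset bits */
    #define PAGE_SIZE  (1UL << PAGE_SHIFT)

    static inline uint64_t vpn(uint64_t va)    { return va >> PAGE_SHIFT; }       /* virtual page number */
    static inline uint64_t offset(uint64_t va) { return va & (PAGE_SIZE - 1); }   /* byte offset in page */

    /* Translation maps vpn(va) to a physical page number; the offset passes through unchanged. */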

  14. MMU Needs a Large Table of Translations • The MMU keeps the mapping of VAs -> PAs in page tables • [Diagram: as before, the CPU sends a VA to the MMU, which produces a PA for main memory; the page table itself resides in main memory]

  15. Page Table is Stored in Memory • 1) Processor sends virtual address (VA) to MMU • 2-3) MMU requests the page table entry (PTE) from the page table in main memory • 4) MMU sends physical address (PA) to cache/memory • 5) Cache/memory sends the data word to the processor
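
A hedged pseudocode sketch of this hit path, reusing the vpn()/offset() helpers from the sketch above; the single-level page_table, the pte_t layout, and raise_page_fault() are hypothetical names (real x86-64 page tables are multi-level):

    typedef struct { uint64_t pfn; int valid; int dirty; } pte_t;   /* hypothetical PTE layout */

    extern pte_t page_table[];               /* lives in main memory (this slide) */
    void raise_page_fault(uint64_t va);      /* hypothetical hook for the next slide */

    uint64_t translate(uint64_t va) {                       /* step 1: MMU receives VA      */
        pte_t pte = page_table[vpn(va)];                    /* steps 2-3: fetch the PTE     */
        if (!pte.valid)
            raise_page_fault(va);                           /* fault path (next slide)      */
        return (pte.pfn << PAGE_SHIFT) | offset(va);        /* step 4: form the PA          */
    }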

  16. Page Not Mapped in Physical Memory • 1) Processor sends virtual address (VA) to MMU • 2-3) MMU requests the PTE from the page table, but the PTE is invalid • 4) Because the PTE is invalid, the MMU triggers a page fault exception • 5) The OS page fault handler chooses a victim page (and, if the page is dirty, pages it out to disk) • 6) Handler pages in the new page from disk and updates the PTE in memory • 7) Handler returns to the original process, restarting the faulting instruction
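
A hedged sketch of the handler’s work, mirroring steps 5-7; choose_victim(), write_page_to_disk(), and read_page_from_disk() are hypothetical helpers, and pte_t, page_table, and vpn() are reused from the sketches above:

    pte_t *choose_victim(void);                            /* hypothetical replacement policy */
    void   write_page_to_disk(const pte_t *pte);           /* hypothetical page-out           */
    void   read_page_from_disk(uint64_t va, uint64_t pfn); /* hypothetical page-in            */

    void page_fault_handler(uint64_t faulting_va) {
        pte_t *victim = choose_victim();                   /* step 5: pick a victim page      */
        if (victim->dirty)
            write_page_to_disk(victim);                    /* page it out only if dirty       */
        victim->valid = 0;

        uint64_t frame = victim->pfn;                      /* reuse the freed physical frame  */
        read_page_from_disk(faulting_va, frame);           /* step 6: page in the new page    */
        page_table[vpn(faulting_va)] = (pte_t){ .pfn = frame, .valid = 1, .dirty = 0 };
        /* step 7: return; the hardware restarts the faulting instruction */
    }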

  17. Speeding up Translation with a TLB • Page table entries (PTEs) are cached in L1 like any other memory word • But PTEs may be evicted by other data references • Even on a cache hit, the PTE access requires a 1-cycle delay, doubling the cost of accessing physical addresses from the cache • Solution: Translation Lookaside Buffer (TLB) • A small hardware cache in the MMU • Caches PTEs for a small number of pages (e.g., 256 entries)

  18. TLB Hit • A TLB hit avoids the access to the page table in memory • [Diagram: 1) CPU sends VA to MMU; 2-3) MMU finds the PTE in the TLB; 4) MMU sends PA to main memory; 5) memory returns the data word]

  19. TLB Miss • A TLB miss incurs an additional memory access to read the PTE • [Diagram: 1) CPU sends VA to MMU; 2-3) the TLB lookup misses (invalid PTE); 4-5) MMU requests the PTE from the page table in main memory; 6) MMU sends PA to memory; 7) memory returns the data word]
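
A hedged sketch combining the hit and miss cases, again reusing the earlier definitions; the direct-mapped tlb array is a simplification (real TLBs are set-associative hardware structures inside the MMU):

    #define TLB_ENTRIES 256

    typedef struct { uint64_t vpn; pte_t pte; int valid; } tlb_entry_t;
    static tlb_entry_t tlb[TLB_ENTRIES];

    uint64_t translate_with_tlb(uint64_t va) {
        tlb_entry_t *e = &tlb[vpn(va) % TLB_ENTRIES];      /* direct-mapped for simplicity    */
        if (!e->valid || e->vpn != vpn(va)) {              /* TLB miss: extra access to fetch */
            e->vpn   = vpn(va);                            /* the PTE from the page table     */
            e->pte   = page_table[vpn(va)];
            e->valid = 1;
        }                                                  /* TLB hit: page table not touched */
        if (!e->pte.valid)
            raise_page_fault(va);
        return (e->pte.pfn << PAGE_SHIFT) | offset(va);
    }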

  20. How to Program for Virtual Memory • Programs tend to access a set of active virtual pages at any point in time, called the working set • Programs with better locality have smaller working sets • If (working set size) > (main memory size): pages are swapped (copied) in and out continuously; this is called thrashing and leads to a performance meltdown • If (# working set pages) > (# TLB entries): TLB misses occur; not as bad as page thrashing, but still worth avoiding

  21. More on TLBs • Assume a 256-entry TLB and 4KB pages • 256 * 4KB = 1MB: TLB hits are only possible for 1MB of data at a time • This is called the TLB reach, i.e., the amount of memory the TLB covers • A typical last-level (L3) cache is 8MB, so you can’t have TLB hits across all of it • Hence you may need to size for TLB reach before cache size • Real CPUs also have second-level TLBs • This is getting complicated to reason about! • Need to experiment with varying sizes, e.g., to find the best tile size
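
A small worked example of TLB reach (sizes assumed: 4KB pages, 256-entry TLB): the loop below reads one int per page of an 8MB array, touching 2048 distinct pages, so it blows past the 1MB TLB reach even though the data itself could fit in an 8MB last-level cache.

    #include <stdlib.h>

    #define PAGE_SIZE   4096UL
    #define ARRAY_BYTES (8UL * 1024 * 1024)     /* 8MB = 2048 pages >> 256 TLB entries */

    long touch_one_int_per_page(void) {
        int *a = calloc(ARRAY_BYTES, 1);
        if (!a) return 0;
        long sum = 0;
        for (size_t i = 0; i < ARRAY_BYTES / sizeof(int); i += PAGE_SIZE / sizeof(int))
            sum += a[i];                        /* one access per page: ~every access misses the TLB */
        free(a);
        return sum;
    }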

  22. Prefetching

  23. Prefetching • Basic idea: • Predict data that might be needed soon (the prediction might be wrong) • Initiate an early request for that data (a load-to-cache) • If effective, helps tolerate the latency to memory

    ORIGINAL CODE:
        inst1
        inst2
        inst3
        inst4
        load X (misses cache)
        ... cache miss latency ...
        inst5 (must wait for the load value)
        inst6

    CODE WITH PREFETCHING:
        inst1
        prefetch X                <- cache miss latency starts here,
        inst2                        overlapped with inst2..inst4
        inst3
        inst4
        load X (hits cache)
        inst5 (load value is ready)
        inst6

  24. Prefetching is Difficult • Prefetching is effective only if all of these are true: • There is spare memory bandwidth; otherwise prefetching can cause a bandwidth bottleneck • Prefetching is accurate: only useful if the prefetched data will actually be used soon • Prefetching is timely: the right data must be prefetched far enough in advance to hide the miss latency • Prefetched data doesn’t displace other in-use data: e.g., a prefetch should not evict a cache block that is about to be used • The latency hidden by prefetching outweighs its cost: the cost of lots of useless prefetched data can be significant • Ineffective prefetching can hurt performance!

  25. Hardware Prefetching • A simple hardware prefetcher: • When one cache block is accessed, prefetch the adjacent block • I.e., behaves as if cache blocks were twice as big • Helps with unaligned instructions (even when the data is aligned) • A more complex hardware prefetcher: • Can recognize a “stream”: addresses separated by a constant “stride” • E.g. 1: 0x1, 0x2, 0x3, 0x4, 0x5, 0x6, … (stride = 0x1) • E.g. 2: 0x100, 0x300, 0x500, 0x700, 0x900, … (stride = 0x200) • Prefetches predicted future addresses, e.g., cur_addr + stride, cur_addr + 2*stride, cur_addr + 3*stride, …
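
A small C example of an access pattern a stream prefetcher can pick up: successive addresses differ by a constant 0x200-byte stride, matching E.g. 2 above (the array length and element type are illustrative assumptions).

    #include <stddef.h>

    #define N      (1u << 20)
    #define STRIDE (0x200 / sizeof(double))     /* 0x200 bytes between consecutive accesses */

    double sum_strided(const double *a) {       /* a points to N doubles */
        double sum = 0.0;
        for (size_t i = 0; i < N; i += STRIDE)  /* addresses: a, a+0x200, a+0x400, ... */
            sum += a[i];
        return sum;
    }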

  26. Core i7 Hardware Prefetching • L2 -> L1 instruction prefetching, L2 -> L1 data prefetching, and Mem -> L3 data prefetching • Includes next-block prefetching • Includes multiple streaming prefetchers • Prefetching is performed only within a page boundary • Details are kept vague/secret • Memory hierarchy shown on the slide (size, load latency):
        Registers
        L1 I-cache / L1 D-cache: 32 KB each, 4 cycles
        L2 unified cache: 256 KB, 10 cycles
        L3 shared cache: 8 MB, 40-75 cycles
        Main memory: 16 GB, 60-100 cycles
        Disk: > 500 GB, 10s of millions of cycles

  27. Software Prefetching • Hardware provides special prefetch instructions, e.g., Intel’s prefetchnta instruction • The compiler or programmer can insert them in code • Can prefetch (non-strided) patterns that the hardware wouldn’t recognize • Assumes process() runs long enough to hide the prefetch latency

    void process_list(list_t *head) {
        list_t *p = head;
        while (p) {
            process(p);
            p = p->next;
        }
    }

    void process_list_PF(list_t *head) {
        list_t *p = head;
        list_t *q;
        while (p) {
            q = p->next;
            prefetch(q);        /* start bringing the next node into the cache */
            process(p);
            p = q;
        }
    }
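
As a concrete (hedged) variant of the slide’s generic prefetch(): GCC and Clang provide the __builtin_prefetch() intrinsic, which the compiler lowers to an x86 prefetch instruction such as prefetcht0/prefetchnta. The shape of list_t and the behavior of process() are assumptions carried over from the slide.

    typedef struct list { struct list *next; /* ...payload... */ } list_t;  /* assumed node layout */
    void process(list_t *p);            /* assumed to run long enough to hide the latency */

    void process_list_PF2(list_t *head) {
        for (list_t *p = head; p; ) {
            list_t *q = p->next;
            __builtin_prefetch(q);      /* pull the next node toward the cache; prefetches are
                                           non-faulting, so q == NULL at the tail is harmless */
            process(p);
            p = q;
        }
    }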

  28. Summary: Optimizing for a Modern Memory Hierarchy

  29. Memory Optimization: Summary • Caches • Conflict misses: less of a concern due to high associativity (modern CPUs have 8-way L1/L2, 16-way L3) • Cache capacity: keep the working set within on-chip cache capacity; focus on either L1 or L2 depending on the required working-set size (see the tiling sketch below) • Virtual memory • Page misses: keep the working set within main memory capacity • TLB misses: keep the number of working-set pages below the number of TLB entries • Prefetching • Arrange data structures so access patterns are sequential/strided • Use compiler-inserted or manually inserted prefetch instructions
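
A hedged sketch of the kind of blocking/tiling this summary points to; N and the tile size B are assumptions, and B is exactly the parameter slide 21 says to tune experimentally against cache sizes and TLB reach.

    #include <stddef.h>

    #define N 4096
    #define B 64    /* tile size: tune so a source tile and a destination tile both stay cache-resident */

    /* Transpose one BxB tile at a time so reads stay sequential and writes stay within a small tile. */
    void transpose_tiled(double (*dst)[N], const double (*src)[N]) {
        for (size_t ii = 0; ii < N; ii += B)
            for (size_t jj = 0; jj < N; jj += B)
                for (size_t i = ii; i < ii + B; i++)
                    for (size_t j = jj; j < jj + B; j++)
                        dst[j][i] = src[i][j];
    }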
