Lecture 21: Virtual Memory I
• Last Time: SRAM vs. DRAM; Technology Trends
• Today: Advanced DRAM organizations; Virtual Memory
Interleaved Memory Organization
[Figure: the CPU & Cache drive a bank select and a latch or queue, fanning out to multiple memory banks]
Bank Conflicts
• Accesses may not reference the banks evenly
• Consider strides 0,1,2,3,… vs. 0,8,16,24,…
  • often caused by column access to a matrix
  • causes problems for large block sizes too
• Solutions:
  • don't do that
  • make the number of columns in the matrix not a power of 2
  • use a prime number of banks (with b banks, a stride of s touches only b/gcd(s,b) distinct banks)
  • hash the banks
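The stride behavior above can be checked with a short sketch (the helper name `banks_touched` is made up for illustration):

```python
from math import gcd

def banks_touched(stride, num_banks):
    """Count the distinct banks hit by word addresses 0, s, 2s, ...
    when words are interleaved round-robin across num_banks banks."""
    return len({(i * stride) % num_banks for i in range(num_banks)})

# Unit stride uses all 8 banks; stride 8 hammers bank 0 alone.
assert banks_touched(1, 8) == 8
assert banks_touched(8, 8) == 1
# In general the count equals num_banks // gcd(stride, num_banks).
assert banks_touched(6, 8) == 8 // gcd(6, 8)  # only 4 of 8 banks active
# A prime bank count keeps every bank active for any stride not a multiple of it.
assert banks_touched(8, 7) == 7
```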
Synchronous DRAM
• Interface signals are clocked
• Clock provided by the microprocessor
• Why?
• Easier to design timed protocols
• "Data available 8 cycles after CAS"
• Add intelligence to the EMI (external memory interface) on the CPU
Burst Mode
• Provide one address and a sequence of data comes out
• Perfect for cache line reads and writes
• Burst size is programmable
[Timing diagram: RAS then CAS; the Address bus carries Row then Column; Dout streams D0 D1 D2 D3]
Page Mode Access
• One RAS (latches the whole row)
• Multiple CAS (read different parts of the row)
• Exploits spatial locality (kind of like a cache inside the DRAM)
[Timing diagram: a single RAS, then CAS pulses for successive columns; Dout streams the requested words]
Pipelined Mode Access
• Interleave accesses to multiple internal banks
• Lower latency for back-to-back accesses to different banks
[Timing diagram: RAS/CAS for banks B and C overlap; Dout interleaves data from both banks]
New DRAM Interfaces
• Rambus
  • 800 MHz interface (18 bits gets you 14.4 Gb/sec)
  • compare this to a 100 MHz, 16-bit synchronous DRAM = 1.6 Gb/sec
  • More complicated electrical interface on the DRAM and the CPU
  • Restrictions on board-level design
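Both bandwidth figures above follow from clock rate times bus width; a quick sketch to verify the arithmetic (the function name is illustrative):

```python
def bandwidth_gbps(clock_mhz, bus_bits):
    """Peak bandwidth in Gb/s: transfers per second times bits per transfer
    (one transfer per clock edge is assumed here)."""
    return clock_mhz * 1e6 * bus_bits / 1e9

assert bandwidth_gbps(800, 18) == 14.4   # Rambus figure from the slide
assert bandwidth_gbps(100, 16) == 1.6    # synchronous DRAM comparison
```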
Virtual Memory
• What is virtual memory?
• Why "virtualize" memory?
• Segmentation
• Address translation
Physical Memory Addressing
• LW R1,0(R2)
• 4 bytes per word
• 4 words per block
[Figure: the CPU's 32-bit address is split into tag, index, and offset fields to access the cache, which sits in front of 64 MB of DRAM]
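The field widths in the diagram can be recomputed from the slide's parameters. Since the cache size is not stated on the slide, the sketch below assumes a 64 KB direct-mapped cache; `CACHE_BYTES` is an assumption, not from the slides:

```python
# Splitting a 26-bit physical address (64 MB DRAM) into cache fields.
# Block = 4 words x 4 bytes = 16 bytes. Cache size is ASSUMED (not given).
ADDR_BITS   = 26            # 64 MB = 2^26 bytes
BLOCK_BYTES = 16            # 4 words/block * 4 bytes/word
CACHE_BYTES = 64 * 1024     # assumption for illustration

offset_bits = BLOCK_BYTES.bit_length() - 1                   # log2(16)  = 4
index_bits  = (CACHE_BYTES // BLOCK_BYTES).bit_length() - 1  # log2(4096) = 12
tag_bits    = ADDR_BITS - index_bits - offset_bits           # remainder = 10

assert (offset_bits, index_bits, tag_bits) == (4, 12, 10)
```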
What if?
• A program is loaded into different places in memory each time it runs? → Relocation
• A program wants to use more than 64 MB? → Page to disk
• We want to switch between multiple programs that use different data? → Protection
Simple View of Memory
• A single program runs at a time
• Code and static data are at fixed locations
  • code starts at a fixed location, e.g., 0x100
  • subroutines may be at fixed locations (absolute jumps)
  • data locations may be wired into the code
• Stack accesses are relative to the stack pointer
[Figure: register file (PC, R0…R31) beside a memory layout of Code, Data, and Stack]
Running Two Programs (Relocation, No Protection)
• Need to relocate logical addresses to physical locations
• The stack is already relocatable: all accesses are relative to SP
• Code can be made relocatable: allow only relative jumps, all accesses relative to PC
• Data segment: can calculate all addresses relative to a data pointer (DP)
  • expensive in software; faster with hardware support (a base register)
[Figure: two register files (PC, R0…R31), each pointing into its own Code, Data, and Stack region]
Base-Register Addressing
• Add a single base register, BR, to the hardware
• The base register is loaded with the data pointer (DP) for the current program
• All data addresses are added to the base before accessing memory
• Can relocate code too
• Often implemented with a three-input adder
• Need to bypass the base register to access system tables for program switching ("a place to stand")
[Figure: logical address + base (DP) → physical memory address]
Base Register Addressing
• System code handles switching between programs
• The system table contains:
  • the base address of each program
  • the saved state of non-running programs
[Figure: Sys Code and Sys Table at the bottom of memory; Base 0 and Base 1 each point to a program's Code, Data, and Stack region]
Providing Protection Between Programs (Length Registers)
• Add a length register, LR, to the hardware
• A program is only allowed to access memory from BR to BR+Length-1
• A program cannot set BR or LR: they are privileged registers
• But how do we switch programs?
[Figure: Sys Code and Sys Table, then (Base 0, Length 0) and (Base 1, Length 1) each bounding a program's Code, Data, and Stack]
Base + Length Addressing
[Figure: the logical address is compared (<) against the privileged Length register; if allowed, it is added to the privileged Base register to form the memory address]
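A minimal sketch of the base+length check above (the function and exception names are illustrative, not from the slides):

```python
# Base+length (bounds-checked) translation: a logical address is legal only
# if it is below the length register; the physical address is base + logical.
class ProtectionFault(Exception):
    pass

def translate(logical, base, length):
    """Return the physical address, or raise on an out-of-bounds access."""
    if not (0 <= logical < length):
        raise ProtectionFault(f"address {logical:#x} outside segment")
    return base + logical

assert translate(0x10, base=0x4000, length=0x100) == 0x4010
try:
    translate(0x200, base=0x4000, length=0x100)   # past BR+Length-1
    assert False, "expected a protection fault"
except ProtectionFault:
    pass
```

In hardware this is the adder-plus-comparator in the figure; the O/S loads BR and LR on each program switch, which is why they must be privileged.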
Segmentation
• Break up the memory space into segments
• Segment placement and size can vary over time
• Solves relocation and protection
• Memory accesses use the base and length registers
• But what about accessing more memory?
Main Memory as a Cache for Disk
• 32-bit addresses = 4 GB of virtual space; main memory = 64 MB
• Dynamically adjust what data stays in main memory: demand paging, with data pages of 1-4 KB
• A page is similar to a cache block
• Note: the file system can be >> 4 GB and is managed by the O/S
Virtual Addresses Span Memory + Disk
• Virtual addresses map to physical addresses in DRAM or to locations on disk
• Mappings are changed dynamically by the O/S
• In response to the user's data accesses
• The O/S is triggered by hardware
A Load to Virtual Memory
• LW R1,0(R2)
• Translate from the virtual space to the physical space (VA → PA)
• May need to go to disk
[Figure: the CPU issues a 32-bit virtual address; translation yields a 26-bit physical address into 64 MB of DRAM, through the cache]
Virtual Address Translation
• Main memory = 64 MB
• Page size = 4 KB
• VPN = 20 bits
• PPN = 14 bits
• Translation table, aka the "page table"
[Figure: virtual address bits 31-12 are the virtual page number (VPN), bits 11-0 the page offset; the translation table maps the VPN to a physical page number (PPN), bits 25-12 of the physical address, and the page offset passes through unchanged]
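The VPN/offset split above can be sketched as follows (the page-table contents here are a made-up example mapping):

```python
# Translation per the slide: 32-bit VA, 4 KB pages (12-bit offset),
# 64 MB of DRAM (26-bit PA, so a 14-bit PPN).
PAGE_BITS = 12
page_table = {0x00012: 0x0005}   # VPN -> PPN (illustrative entry)

def translate(va):
    vpn = va >> PAGE_BITS                  # bits 31..12
    offset = va & ((1 << PAGE_BITS) - 1)   # bits 11..0, passed through
    ppn = page_table[vpn]                  # a missing key would be a page fault
    return (ppn << PAGE_BITS) | offset

assert translate(0x00012ABC) == 0x0005ABC  # same offset, new page number
```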
Page Table Construction
• A page table register points to the base of the table; the VPN indexes it
• Each entry holds a valid bit and a physical page number; the PPN is concatenated with the page offset to form the physical address
• Page table size: 2^20 entries × (14-bit PPN + 1 valid bit) ≈ 2 MB of raw bits, ≈ 4 MB with word-sized entries
• Where do we put the page table?
What if the Data is Not in DRAM?
1) Examine the page table
2) Discover that no mapping exists
3) Select a page to evict, store it back to disk
4) Bring in the new page from disk
5) Update the page table
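The five steps above can be sketched as a toy pager. The FIFO eviction policy and the class name are assumptions for illustration; the slides do not specify a replacement policy:

```python
# Toy demand pager: DRAM holds num_frames pages, FIFO eviction (assumed).
from collections import OrderedDict

class Memory:
    def __init__(self, num_frames):
        self.num_frames = num_frames
        self.page_table = OrderedDict()   # VPN -> frame (insertion order = FIFO)
        self.disk_writes = 0

    def access(self, vpn):
        if vpn in self.page_table:        # 1) examine the page table: hit
            return self.page_table[vpn]
        # 2) no mapping exists: page fault
        if len(self.page_table) >= self.num_frames:
            victim, frame = self.page_table.popitem(last=False)  # 3) evict
            self.disk_writes += 1         #    store the victim back to disk
        else:
            frame = len(self.page_table)  # a free frame is still available
        # 4) bring in the new page from disk, 5) update the page table
        self.page_table[vpn] = frame
        return frame

mem = Memory(num_frames=2)
mem.access(1); mem.access(2); mem.access(3)   # third access evicts page 1
assert 1 not in mem.page_table and mem.disk_writes == 1
```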
Page Fault
[Timeline: the user program runs → page fault → O/S requests the page → disk read (a 2nd user program runs meanwhile) → disk interrupt → O/S installs the page → the user program resumes]
Next Time • Virtual Memory • Making Address Translation Fast • Page Table Issues • Multiple Programs, Address spaces, and Aliasing