
Virtual to Physical Address Translation


Presentation Transcript


  1. Virtual to Physical Address Translation
The effective address first goes to a TLB lookup (~1 pclk). On a hit, a protection check (~1 pclk) follows: if the access is permitted, the physical address is sent to the cache; if it is denied, a protection fault is raised. On a TLB miss, a page table walk (100’s of pclk) is performed: if it succeeds, the TLB is updated and the translation completes; if it fails, a page fault traps to the OS, whose table walk costs 10,000’s of pclk.
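
A minimal C sketch of this decision flow may help. Every helper name below (tlb_lookup, walk_page_table, tlb_update, raise_page_fault, raise_protection_fault) and the page size are illustrative assumptions, not a particular OS or ISA interface.

    #include <stdint.h>

    /* Assumed/illustrative interface, mirroring the flow on the slide. */
    #define PAGE_SHIFT 12
    typedef struct { uint64_t ppn; unsigned perms; int valid; } tlb_entry_t;

    extern tlb_entry_t tlb_lookup(uint64_t vpn);        /* ~1 pclk               */
    extern tlb_entry_t walk_page_table(uint64_t vpn);   /* 100's of pclk         */
    extern void tlb_update(uint64_t vpn, tlb_entry_t e);
    extern void raise_page_fault(uint64_t ea);          /* OS: 10,000's of pclk  */
    extern void raise_protection_fault(uint64_t ea);

    uint64_t translate(uint64_t ea, unsigned access)
    {
        tlb_entry_t e = tlb_lookup(ea >> PAGE_SHIFT);
        if (!e.valid) {                                  /* TLB miss              */
            e = walk_page_table(ea >> PAGE_SHIFT);
            if (!e.valid)
                raise_page_fault(ea);                    /* page fault            */
            tlb_update(ea >> PAGE_SHIFT, e);             /* update TLB            */
        }
        if (!(e.perms & access))                         /* protection check      */
            raise_protection_fault(ea);
        return (e.ppn << PAGE_SHIFT) | (ea & ((1u << PAGE_SHIFT) - 1));  /* PA to cache */
    }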

  2. Cache Placement and Address Translation
• Physical cache (most systems): CPU → MMU → physical cache → physical memory. Address translation sits on the fetch critical path, so the hit time is longer.
• Virtual cache (e.g., SPARC2): CPU → virtual cache → MMU → physical memory. Translation is off the fetch critical path, but the design has the aliasing problem and suffers cold starts after a context switch.
• Virtual caches are not popular anymore because the MMU and CPU can be integrated on one chip.
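
To make the placement difference concrete, here is a hedged C sketch of the two access orders; mmu_translate, cache_lookup, and cache_fill are assumed stand-ins for the MMU and the cache arrays.

    #include <stdint.h>
    #include <stdbool.h>

    extern uint64_t mmu_translate(uint64_t va);                 /* assumed helpers */
    extern bool     cache_lookup(uint64_t addr, uint64_t *data);
    extern uint64_t cache_fill(uint64_t addr, uint64_t pa);

    /* Physical cache (most systems): the MMU is on the hit path. */
    uint64_t read_physical_cache(uint64_t va)
    {
        uint64_t data;
        uint64_t pa = mmu_translate(va);       /* translate first ...            */
        if (cache_lookup(pa, &data))           /* ... then index/tag with the PA */
            return data;
        return cache_fill(pa, pa);
    }

    /* Virtual cache (e.g., SPARC2): indexed and tagged with the VA, so a hit
       never waits on the MMU; translation is needed only on a miss.  Synonyms
       and context switches must be handled separately. */
    uint64_t read_virtual_cache(uint64_t va)
    {
        uint64_t data;
        if (cache_lookup(va, &data))
            return data;
        return cache_fill(va, mmu_translate(va));
    }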

  3. Physically Indexed Cache
The virtual address (n = v + g bits) is split into a virtual page number (VPN, v bits) and a page offset (PO, g bits). The low k bits of the VPN index the TLB and the upper v − k bits serve as the virtual tag. The TLB returns the physical page number (PPN, p bits), which is concatenated with the page offset to form the physical address (m = p + g bits). That physical address is then divided into tag (t bits), index (i bits), and block offset (BO, b bits) to access the D-cache and read out the data.
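
The field slicing in this diagram can be written out directly. The sketch below uses the slide’s symbols (g, k, b, i) with example widths; the specific numbers are assumptions for illustration only.

    #include <stdint.h>

    /* Example widths: g = page-offset bits, k = TLB index bits,
       b = block-offset bits, i = cache index bits. */
    enum { G = 12, K = 6, B = 5, I = 9 };

    /* Split the virtual address for the TLB lookup. */
    static inline uint64_t vpn(uint64_t va)         { return va >> G; }
    static inline uint64_t tlb_index(uint64_t va)   { return vpn(va) & ((1u << K) - 1); }
    static inline uint64_t virtual_tag(uint64_t va) { return vpn(va) >> K; }

    /* Form the physical address, then split it for the physically indexed cache. */
    static inline uint64_t phys_addr(uint64_t ppn, uint64_t va) {
        return (ppn << G) | (va & ((1u << G) - 1));
    }
    static inline uint64_t block_offset(uint64_t pa) { return pa & ((1u << B) - 1); }
    static inline uint64_t cache_index(uint64_t pa)  { return (pa >> B) & ((1u << I) - 1); }
    static inline uint64_t cache_tag(uint64_t pa)    { return pa >> (B + I); }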

  4. Virtually Indexed Cache — Parallel Access to TLB and Cache Arrays
The VPN is split into a tag and an index for the TLB, while the page offset alone supplies the D-cache index (i bits) and block offset (BO, b bits). Because the cache index does not depend on translation, the TLB lookup and the D-cache array access proceed in parallel: the TLB produces the PPN (p bits), the cache reads out its stored tag (a PPN) and data, and the two PPNs are compared to signal hit or miss. How large can a virtually indexed cache get?
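
A small C sketch of the parallel lookup, simplified to a direct-mapped array; the structure layout, the example geometry, and tlb_translate are assumptions. The point is that the index and block offset come only from the page offset, so the array read never waits on the TLB.

    #include <stdint.h>
    #include <stdbool.h>

    enum { G = 12, B = 5, I = G - B };   /* index + block offset fit in the page offset */

    typedef struct { uint64_t ppn_tag; uint8_t data[1u << B]; bool valid; } line_t;
    static line_t dcache[1u << I];

    extern uint64_t tlb_translate(uint64_t vpn);   /* assumed TLB helper */

    /* In hardware, both halves below proceed in the same cycle. */
    bool vi_cache_lookup(uint64_t va, uint64_t *ppn_from_tlb, uint8_t **data)
    {
        /* Cache side: index with page-offset bits of the VA. */
        line_t *line = &dcache[(va >> B) & ((1u << I) - 1)];

        /* TLB side: translate the VPN in parallel. */
        *ppn_from_tlb = tlb_translate(va >> G);

        /* Hit when the PPN stored as the tag matches the translated PPN. */
        if (line->valid && line->ppn_tag == *ppn_from_tlb) {
            *data = line->data;
            return true;
        }
        return false;
    }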

  5. Large Virtually Indexed Cache
To grow the cache beyond what the page offset alone can index, the index must borrow a bits from the VPN. The TLB and D-cache are still accessed in parallel, and the PPN from the TLB is compared against the PPN stored as the cache tag to determine hit or miss. If two VPNs differ in those a bits but both map to the same PPN, there is an aliasing problem.
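
The aliasing condition can be stated precisely: two virtual addresses that translate to the same PPN can occupy different sets only if they differ in the a index bits taken from the VPN. A sketch with example widths (the numbers are assumptions, not from the slide):

    #include <stdint.h>
    #include <stdbool.h>

    enum { G = 12, B = 5, I = 10, A = I + B - G };   /* here A = 3 index bits come from the VPN */

    static inline uint64_t set_index(uint64_t va) { return (va >> B) & ((1u << I) - 1); }

    /* Two VAs that map to the same PPN can land in different sets (two copies
       of one physical line) exactly when they differ in the A VPN bits used
       for indexing. */
    bool may_alias(uint64_t va1, uint64_t va2)
    {
        uint64_t a1 = (va1 >> G) & ((1u << A) - 1);
        uint64_t a2 = (va2 >> G) & ((1u << A) - 1);
        return a1 != a2;
    }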

  6. Synonym (or Aliasing)
• When VPN bits are used in indexing, two virtual addresses that map to the same physical address can end up sitting in two different cache lines
• In other words, two copies of the same physical memory location may exist in the cache ⇒ a modification to one copy won’t be visible in the other
If the two VPNs do not differ in the a index bits, there is no aliasing problem.
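
For instance, assuming 4 KB pages (12-bit page offset) and a 32 KB direct-mapped cache with 32-byte lines, the index covers address bits [14:5], so bits [14:12] come from the VPN. Virtual addresses 0x0000 and 0x1000 that both translate to the same physical page would be placed in sets 0 and 128: the same physical line is cached twice, and a store through one mapping is not seen by a load through the other.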

  7. R10000’s Virtually Indexed Caches
• 32KB 2-way virtually-indexed L1: needs 10 bits of index and 4 bits of block offset, but the page offset is only 12 bits ⇒ 2 bits of the index are VPN[1:0]
• Direct-mapped physical L2; L2 is inclusive of L1; VPN[1:0] is appended to the “tag” of L2
• Given two virtual addresses VA and VB that differ in a and both map to the same physical address PA: suppose VA is accessed first, so blocks are allocated in L1 and L2. What happens when VB is referenced?
1. VB indexes to a different block in L1 and misses
2. VB translates to PA and goes to the same block as VA in L2
3. The tag comparison fails (VA[1:0] ≠ VB[1:0])
4. It is treated just like an L2 conflict miss: VA’s entries in L1 and L2 are both ejected due to the inclusion policy
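
A hedged sketch of how the synonym is caught on the L2 lookup; the structure layout, L2 geometry, and helper names below are assumptions for illustration, not the actual R10000 interface.

    #include <stdint.h>
    #include <stdbool.h>

    enum { L2_BLOCK_BITS = 6, L2_INDEX_BITS = 13 };            /* example geometry */

    typedef struct {
        bool     valid;
        uint64_t ptag;      /* physical tag                                   */
        uint8_t  l1_vpn2;   /* VPN[1:0] under which the L1 copy was installed */
    } l2_line_t;

    static l2_line_t l2[1u << L2_INDEX_BITS];

    extern void l1_invalidate(uint64_t pa, uint8_t vpn2);      /* assumed helpers */
    extern void l2_refill(uint64_t pa);

    void l2_access_after_l1_miss(uint64_t pa, uint8_t vb_vpn2) /* vb_vpn2 = VB[13:12] */
    {
        l2_line_t *line = &l2[(pa >> L2_BLOCK_BITS) & ((1u << L2_INDEX_BITS) - 1)];
        uint64_t   ptag = pa >> (L2_BLOCK_BITS + L2_INDEX_BITS);

        if (line->valid && line->ptag == ptag) {
            if (line->l1_vpn2 == vb_vpn2)
                return;                                        /* genuine L2 hit */
            /* Synonym: the same physical line is cached in L1 under a different
               virtual index.  Treat it as an L2 conflict miss: evict the L2 line
               and, by inclusion, invalidate the stale L1 copy. */
            l1_invalidate(pa, line->l1_vpn2);
            line->valid = false;
        }

        l2_refill(pa);                                         /* reinstall under VB's index */
        line->valid   = true;
        line->ptag    = ptag;
        line->l1_vpn2 = vb_vpn2;
    }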

  8. Memory Dataflow Techniques
[Pipeline overview diagram: FETCH (I-cache, branch predictor; instruction flow) fills the instruction buffer; DECODE dispatches to the integer, floating-point, media, and memory units in EXECUTE; register data flow retires through the reorder buffer (ROB) at COMMIT; memory data flow passes through the store queue to the D-cache.]

  9. Uniprocessor Load and Store Semantics (“<<” means precedes)
• Given Store_i(a, v) << Load_j(a), Load_j(a) must return v if there does not exist another store Store_k such that Store_i(a, v) << Store_k(a, v′) << Load_j(a)
• This can be guaranteed by observing data dependences:
RAW: Store(a, v) followed by Load(a)
WAW: Store(a, v′) followed by Store(a, v)
WAR: Load(a) followed by Store(a, v′)
For a uniprocessor, do we need to worry about loads and stores to different addresses? What about SMPs?
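
One way to read the RAW rule is as the forwarding check a store queue performs: a load to address a must receive the value of the youngest earlier store to a, if one exists. A sketch in C (the store-queue layout is an assumption):

    #include <stdint.h>
    #include <stdbool.h>

    typedef struct { uint64_t addr; uint64_t value; } store_entry_t;

    /* Entries [0, n) are stores older than the load, in program order.
       Scanning from youngest to oldest means Store_i << Store_k << Load_j
       returns Store_k's value, matching the rule on the slide. */
    bool load_forward(const store_entry_t *sq, int n, uint64_t load_addr, uint64_t *value)
    {
        for (int k = n - 1; k >= 0; k--) {
            if (sq[k].addr == load_addr) {
                *value = sq[k].value;
                return true;        /* forwarded from the store queue */
            }
        }
        return false;               /* no prior store to this address: read the cache */
    }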

  10. Total Ordering of Loads and Stores
• Keep all loads and stores totally in order with respect to each other
• However, loads and stores can execute out of order with respect to other types of instructions (while obeying register data dependences)
• Exception: stores must still be held until all previous instructions have completed, so they are never performed speculatively
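
A sketch of this issue rule as a scheduler predicate (the uop structure and window array are illustrative assumptions): a load or store may issue only when it is the oldest memory operation still waiting, while other instructions need only their register dependences.

    #include <stdbool.h>

    typedef enum { OP_LOAD, OP_STORE, OP_OTHER } op_kind_t;
    typedef struct { op_kind_t kind; int seq; bool reg_deps_ready; bool issued; } uop_t;

    /* 'window' holds the n in-flight uops; 'idx' is the candidate to issue. */
    bool can_issue(const uop_t *window, int n, int idx)
    {
        const uop_t *u = &window[idx];
        if (!u->reg_deps_ready || u->issued)
            return false;
        if (u->kind == OP_OTHER)
            return true;                     /* only register dependences matter */
        for (int j = 0; j < n; j++)
            if (!window[j].issued && window[j].kind != OP_OTHER &&
                window[j].seq < u->seq)
                return false;                /* an older memory op is still pending */
        return true;
    }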

  11. The “DAXPY” Example — Y[i] = A * X[i] + Y[i]
        LD    F0, a          ; load scalar A
        ADDI  R4, Rx, #512   ; last address
Loop:   LD    F2, 0(Rx)      ; load X[i]
        MULTD F2, F0, F2     ; A * X[i]
        LD    F4, 0(Ry)      ; load Y[i]
        ADDD  F4, F2, F4     ; A * X[i] + Y[i]
        SD    F4, 0(Ry)      ; store into Y[i]
        ADDI  Rx, Rx, #8     ; inc. index to X
        ADDI  Ry, Ry, #8     ; inc. index to Y
        SUB   R20, R4, Rx    ; compute bound
        BNZ   R20, Loop      ; check if done
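
For reference, the loop corresponds to this C code; the 512-byte bound in ADDI R4, Rx, #512 covers 64 double-precision elements.

    /* Scalar DAXPY: Y[i] = a * X[i] + Y[i] over 64 doubles (512 bytes). */
    void daxpy(double a, const double *X, double *Y)
    {
        for (int i = 0; i < 64; i++)
            Y[i] = a * X[i] + Y[i];
    }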
