
The Effect of Interconnect Design on the Performance of Large L2 Caches

  1. The Effect of Interconnect Design on the Performance of Large L2 Caches. Naveen Muralimanohar, Rajeev Balasubramonian. University of Utah

  2. Motivation: Large Caches
  • Future processors will have large on-chip caches
    • Intel Montecito has a 24MB on-chip cache
  • Wire delay dominates in large caches
    • A conventional design can lead to a very high hit time (CACTI access time for a 24MB cache is 90 cycles at 5GHz, 65nm technology)
  • Careful network choices
    • Improve access time
    • Open room for several other optimizations
    • Reduce power significantly

  3. Effect of L2 Hit Time
  [Chart: performance of an 8-issue, out-of-order processor as L2 hit time is reduced from 30 to 15 cycles; average improvement is 17%.]

  4. Cache Design
  [Diagram: the input address drives a decoder, which activates a wordline in the tag and data arrays; bitlines feed column muxes and sense amps; comparators, mux drivers, and output drivers produce the data output and the valid signal.]

  5. Existing Model: CACTI
  [Diagram: two cache models, one with 4 sub-arrays and one with 16 sub-arrays, each annotated with decoder delay and wordline & bitline delay.]

  6. Shortcomings
  • CACTI
    • Suboptimal for large cache sizes
      • Access delay is equal to the delay of the slowest sub-array
      • Very high hit time for large caches
    • Employs a separate bus for each cache bank in multi-banked caches

  7. Non-Uniform Cache Access (NUCA)
  • The large cache is broken into a number of small banks
  • Employs an on-chip network for communication
  • Access delay is proportional to the distance between the bank and the cache controller (see the sketch below)
  [Figure: CPU & L1 at one edge of a grid of cache banks]
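
To make the distance dependence concrete, here is a minimal Python sketch of a NUCA latency model on a 2D grid. The cycle counts (bank access time, per-hop latency) and the controller position are illustrative assumptions, not numbers from the talk.

```python
# Minimal NUCA latency sketch: access delay grows with the number of
# network hops between the cache controller and the target bank.
# All latency constants below are assumed, illustrative values.

def nuca_access_cycles(bank_row, bank_col, bank_access=4, hop_latency=1,
                       controller=(0, 0)):
    """Access time = bank access + round-trip network hops on a 2D grid."""
    hops = abs(bank_row - controller[0]) + abs(bank_col - controller[1])
    return bank_access + 2 * hops * hop_latency  # request + reply traversal

# A bank adjacent to the controller is far faster than a corner bank.
print(nuca_access_cycles(0, 1))   # near bank in an 8x8 grid
print(nuca_access_cycles(7, 7))   # far bank in an 8x8 grid
```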

  8. Shortcomings
  • NUCA
    • Banks are sized such that the link latency is one cycle (Kim et al., ASPLOS '02)
    • Increased routing complexity
    • Dissipates more power

  9. Extension to CACTI
  • On-chip network
    • Wire model built using ITRS 2005 parameters
    • Grid network: no. of rows = no. of columns (or half the no. of columns)
  • Network latency vs. bank access latency trade-off
    • Modified the exhaustive search to include the network overhead (a sketch follows below)
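
A hedged sketch of what such a modified exhaustive search might look like: for each candidate bank count and grid shape, total the bank access delay and the average network delay, and keep the minimum. The cost functions below are illustrative stand-ins, not CACTI's actual internal equations.

```python
# Sketch: exhaustive search over bank organizations that includes the
# network overhead. Bank-delay scaling and per-hop cost are assumptions.
import math

def bank_access_cycles(bank_kb):
    return max(2, round(2 * math.sqrt(bank_kb / 64)))  # assumed scaling

def avg_network_cycles(rows, cols, hop_latency=2):
    # Average Manhattan distance from a controller at (0, 0) to any bank.
    hops = sum(r + c for r in range(rows) for c in range(cols)) / (rows * cols)
    return hops * hop_latency

def best_organization(total_kb=32 * 1024):
    best = None
    for banks in (2 ** i for i in range(1, 11)):   # try 2..1024 banks
        rows = 2 ** (int(math.log2(banks)) // 2)   # rows = cols (or cols/2)
        cols = banks // rows
        total = bank_access_cycles(total_kb / banks) + avg_network_cycles(rows, cols)
        if best is None or total < best[0]:
            best = (total, banks, rows, cols)
    return best

# Smaller banks cut array delay but add hops; the search balances the two.
print(best_organization())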

  10. Effect of Network Delay (32MB cache)
  [Chart: total access time across bank organizations, with the delay-optimal point marked.]

  11. Outline
  • Overview
  • Cache Design
  • Effect of Network Delay
  • Wire Design Space
  • Exploiting Heterogeneous Wires
  • Results

  12. Wire Characteristics
  • Wire resistance and capacitance per unit length

  13. Design Space Exploration
  • Tuning wire width and spacing: (Width & Spacing) ↑ → Delay ↓, Bandwidth ↓
    • Base case: B wires
    • Fast but low bandwidth: L wires
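
A first-order sketch of this trade-off, with made-up constants: resistance falls with wire width and coupling capacitance falls with spacing, so delay drops, while the number of wires per unit routing width (bandwidth) drops as well.

```python
# Illustrative width/spacing model. Constants are arbitrary assumptions
# chosen only to show the trend, not process parameters.
def wire_delay_per_mm(width, spacing, rho=0.1, c_ground=0.1, c_couple=0.2):
    r = rho / width                      # resistance falls with width
    c = c_ground + c_couple / spacing    # coupling falls with spacing
    return (r * c) ** 0.5                # repeated-wire delay ~ sqrt(RC)

def relative_bandwidth(width, spacing):
    return 1.0 / (width + spacing)       # tracks per unit routing width

# L wires (wide, widely spaced) are faster but occupy more tracks.
for width, spacing, name in [(1, 1, "B wires"), (4, 4, "L wires")]:
    print(name, wire_delay_per_mm(width, spacing), relative_bandwidth(width, spacing))
```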

  14. Design Space Exploration
  • Tuning repeater size and spacing
    • Traditional wires: large repeaters at optimum spacing minimize delay
    • Power-optimal wires: smaller repeaters and increased spacing trade delay for power
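
The sketch below illustrates this with a classic Bakoglu-style repeated-wire delay model; the device and wire constants are arbitrary assumptions, and the 0.5x/1.5x power-optimal sizing is only an example of shrinking repeaters and stretching spacing.

```python
# Repeater trade-off sketch (Bakoglu-style sizing; constants assumed).
import math

R0, C0 = 1.0, 1.0      # assumed unit-inverter resistance / capacitance
r, c = 0.5, 0.3        # assumed wire resistance / capacitance per mm

k_opt = math.sqrt(R0 * c / (r * C0))        # delay-optimal repeater size
l_opt = math.sqrt(2 * R0 * C0 / (r * c))    # delay-optimal spacing (mm)

def delay_per_mm(k, l):
    # Elmore-style delay of one repeated segment, normalized per mm:
    # driver charging its load, plus distributed wire delay.
    seg = 0.7 * (R0 / k) * (k * C0 + c * l) + r * l * (0.4 * c * l + 0.7 * k * C0)
    return seg / l

print("delay-optimal :", delay_per_mm(k_opt, l_opt))
print("power-optimal :", delay_per_mm(0.5 * k_opt, 1.5 * l_opt))  # slower, but cheaper
```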

  15. Design Space Exploration

  Wire type                        Plane   Latency   Power   Area
  B wires (base case)              8x      1x        1x      1x
  W wires (base case)              4x      1.6x      0.9x    0.5x
  PW wires (power optimized)       4x      3.2x      0.3x    0.5x
  L wires (fast, low bandwidth)    8x      0.5x      0.5x    5x

  16. Access time for different link types
  [Chart: cache access time when the network links are built from the different wire types.]

  17. Outline
  • Overview
  • Cache Design
  • Effect of Network Delay
  • Wire Design Space
  • Exploiting Heterogeneous Wires
  • Results

  18. Cache Look-Up
  • Total cache access time:
    • Bank access: decoder, wordline, and bitline delay (requires 10-15 bits of the address)
    • Comparator and output driver delay (requires the remaining address bits for the tag match)
    • Network delay (requires 6-8 bits to identify the cache bank)
  • The entire access happens in a sequential manner

  19. Early Look-Up
  • Send the partial address (10-15 bits) on L-wires
  • Initiate the bank lookup
  • Wait for the complete address
  • Complete the access with the tag match
  • We can hide 60-70% of the bank access delay (see the timing sketch below)
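
A small timing sketch of why early lookup hides most of the bank access delay: the bank lookup starts as soon as the index bits arrive on the fast L-wires, overlapping with the slower transfer of the full address. All cycle counts are assumptions chosen for illustration, not numbers from the talk.

```python
# Timing sketch: baseline sequential lookup vs. early lookup.
NETWORK = 12          # full address on slow B-wires (assumed cycles)
L_NETWORK = 6         # partial address on fast L-wires (assumed cycles)
BANK = 8              # decoder + wordline + bitline delay (assumed)
TAG_MATCH = 2         # comparator + output driver (assumed)

sequential = NETWORK + BANK + TAG_MATCH
# Early lookup: the bank access starts when the L-wire index arrives, but
# the tag match must wait for the full address on the slow wires.
early = max(L_NETWORK + BANK, NETWORK) + TAG_MATCH

print("sequential:", sequential, "early:", early)
# Fraction of the bank access overlapped with the address transfer.
hidden = 1 - (early - (NETWORK + TAG_MATCH)) / BANK
print(f"bank-access delay hidden: {hidden:.0%}")
```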

  20. Aggressive Look-Up
  • Send the partial address bits on L-wires (requires an additional 8 bits of the address for a partial tag match)
  • Do the early look-up and a partial tag match
  • Send all the matched blocks aggressively; the full tag match happens at the cache controller
  • Network delay (for the address transfer) is reduced

  21. Aggressive Look-Up
  • Significant reduction in network delay (for the address transfer)
  • Increase in traffic due to false matches < 1% (a back-of-the-envelope check follows below)
  • Marginal increase in link overhead: an additional 8 bits of L-wires compared to the early lookup
  • Adds complexity to the cache controller, which needs logic to do the tag match
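
A back-of-the-envelope check on the false-match rate, assuming roughly uniform tag bits; this gives a coarse upper bound, consistent with the under-1% measured traffic increase reported above.

```python
# Why aggressive lookup's false matches are rare: with 8 partial-tag
# bits on the L-wires, a non-matching way passes the partial compare
# with probability 2^-8 (assuming roughly uniform tag bits). This is a
# coarse upper bound, not the talk's measured result.
PARTIAL_TAG_BITS = 8
WAYS = 8  # 8-way set-associative L2, as in the evaluation

p_way = 2 ** -PARTIAL_TAG_BITS          # false match per non-matching way
expected_false = (WAYS - 1) * p_way     # wrongly forwarded blocks per access
print(f"expected false matches per access: {expected_false:.4f}")
```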

  22. Outline
  • Overview
  • Cache Design
  • Effect of Network Delay
  • Wire Design Space
  • Exploiting Heterogeneous Wires
  • Results

  23. Experimental Setup
  • SimpleScalar with contention modeled in detail
  • Single-core, 8-issue out-of-order processor
  • 32MB, 8-way set-associative, on-chip L2 cache (SNUCA organization)
  • 32KB I-cache and 32KB D-cache with a hit latency of 3 cycles
  • Main memory latency of 300 cycles

  24. Cache Models
  [Table of the evaluated cache models; those referenced later include Model 2 (CACTI-L2), Model 3 (Early Lookup), Model 4 (Aggressive Lookup), and Model 6 (L-Network).]

  25. Performance Results (Global Wires)
  • Model 2 (CACTI-L2): average performance improvement 11%; improvement for L2-latency-sensitive benchmarks 16.3%
  • Model 3 (Early Lookup): average performance improvement 14.4%; improvement for L2-latency-sensitive benchmarks 21.6%
  • Model 4 (Aggressive Lookup): average performance improvement 17.6%; improvement for L2-latency-sensitive benchmarks 26.6%
  • Model 6 (L-Network): average performance improvement 11.4%; improvement for L2-latency-sensitive benchmarks 16.2%

  26. Performance Results (4X Wires)
  • Wire-delay-constrained model
  • Performance improvements are better
    • The early lookup performs 5% better
    • The aggressive model performs 28% better

  27. Future Work
  • Heterogeneous network in a CMP environment
    • Hybrid network: employs a combination of point-to-point links and buses for L-messages
  • Effective use of L-wires
    • Latency/bandwidth trade-off
    • Use of heterogeneous wires in a DNUCA environment
  • Cache design focusing on power
    • Prefetching (power-optimized wires)
    • Writeback (power-optimized wires)

  28. Conclusion
  • Traditional design approaches for large caches are suboptimal
  • Network parameters play a significant role in the performance of large caches
  • The modified CACTI model, which includes the network overhead, performs 16.3% better than previous models
  • A heterogeneous network has the potential to further improve performance
    • Early lookup: 21.6%
    • Aggressive lookup: 26.6%
