
Stack Value File : Custom Microarchitecture for the Stack






Presentation Transcript


  1. Stack Value File : Custom Microarchitecture for the Stack Hsien-Hsin Lee, Mikhail Smelyanskiy, Chris Newburn, Gary Tyson. University of Michigan / Intel Corporation

  2. Agenda • Organization of Memory Regions • Stack Reference Characteristics • Stack Value File • Performance Analysis • Conclusions

  3. Memory Space Partitioning • Based on programming language • Non-overlapped subdivisions • Split code and data ⇒ I-cache & D-cache • Split data into regions • Stack (grows downward) • Heap (grows upward) • Global (static) • Read-only (static) [Memory map, from max mem to min mem: reserved | Stack (grows downward) | protected gap | Heap (grows upward) | Global/Static Data Region | Code Region | Read-only data | reserved]
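As a rough illustration of these regions, here is a minimal C sketch; the variable names are illustrative and the printed addresses (and the exact layout) depend on the OS, compiler, and ABI.

    #include <stdio.h>
    #include <stdlib.h>

    const char *read_only = "lives in the read-only (static) region";
    int global_counter = 42;              /* global/static data region */

    int main(void) {
        int local = 7;                    /* stack: grows downward from high addresses */
        int *heap = malloc(sizeof *heap); /* heap: grows upward toward the stack */
        if (!heap) return 1;
        *heap = 9;

        printf("read-only ~ %p\n", (void *)read_only);
        printf("global    ~ %p\n", (void *)&global_counter);
        printf("heap      ~ %p\n", (void *)heap);
        printf("stack     ~ %p\n", (void *)&local);

        free(heap);
        return 0;
    }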

  4. Memory Access Distribution • SPECint2000 benchmarks (Alpha binaries) • 42% of instructions access memory

  5. Access Method Breakdown 86% of the stack references use ($sp+disp)
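One reason so many stack references take the ($sp+disp) form is how compilers address locals and spilled temporaries. The snippet below is only an illustration; the exact offset, register, and whether the variable stays in memory at all are compiler- and ABI-dependent.

    /* An Alpha compiler would typically store "local" with something like
     * "stq $1, 16($sp)": a fixed displacement off the stack pointer. */
    long sum_and_double(long a, long b) {
        volatile long local = a + b;   /* volatile forces a real $sp+disp stack slot */
        return local * 2;
    }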

  6. Morphing $sp-relative References • Morph $sp-relative references into register accesses • Use a Stack Value File (SVF) • Resolve address early in decode stage for stack-pointer indexed accesses • Resolve stack memory dependency early • Aliased references are re-routed to SVF
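A minimal sketch of the decode-time resolution, assuming an 8-byte entry granularity and a simple modulo hash; the paper's exact SVF size and indexing scheme may differ.

    #include <stdint.h>
    #include <stdio.h>

    #define SVF_ENTRIES 128   /* assumed number of 8-byte SVF entries */

    /* A $sp+disp effective address depends only on the stack pointer and an
     * immediate, so it can be computed in the decode stage, well before the
     * AGU would normally run, and hashed directly into an SVF entry. */
    unsigned svf_index(uint64_t sp, int16_t disp) {
        uint64_t ea = sp + (uint64_t)(int64_t)disp;  /* effective stack address */
        return (unsigned)((ea >> 3) % SVF_ENTRIES);  /* 8-byte granularity */
    }

    int main(void) {
        uint64_t sp = 0x7fffd000;                    /* illustrative SP value */
        printf("ldq $r2, 16($sp)  -> SVF entry %u\n", svf_index(sp, 16));
        printf("stq $r10, 24($sp) -> SVF entry %u\n", svf_index(sp, 24));
        return 0;
    }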

  7. Stack Reference Characteristics • Contiguity • Good temporal and spatial locality • Can be stored in a simple, fast structure • Smaller die area relative to a regular cache • Less power dissipation • No address tag needed for each datum

  8. Stack Reference Characteristics • First touch is almost always a store • Avoids wasting bandwidth to bring in dead data • Handled as a register write to the SVF • Deallocated stack frame ⇒ dead data • No need to write it back to memory
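A sketch of how these two properties remove traffic, assuming a simple valid/dirty-bit organization with 8-byte entries; the real SVF management is more involved.

    #include <stdint.h>
    #include <stdbool.h>

    #define SVF_ENTRIES 128   /* assumed size, 8-byte entries */

    typedef struct {
        bool valid[SVF_ENTRIES];
        bool dirty[SVF_ENTRIES];
    } stack_value_file;

    static unsigned entry_of(uint64_t addr) {
        return (unsigned)((addr >> 3) % SVF_ENTRIES);
    }

    /* Frame allocation (SP moves down): entries are claimed with no memory
     * read, because the first touch of a fresh frame slot is almost always
     * a store -- it behaves like a register write into the SVF. */
    void svf_allocate_frame(stack_value_file *svf, uint64_t new_sp, uint64_t old_sp) {
        for (uint64_t a = new_sp; a < old_sp; a += 8) {
            svf->valid[entry_of(a)] = true;
            svf->dirty[entry_of(a)] = false;
        }
    }

    /* Frame deallocation (SP moves back up): the data is dead, so the
     * entries are simply invalidated -- nothing is written back to L1/L2. */
    void svf_deallocate_frame(stack_value_file *svf, uint64_t old_sp, uint64_t new_sp) {
        for (uint64_t a = old_sp; a < new_sp; a += 8) {
            svf->valid[entry_of(a)] = false;
            svf->dirty[entry_of(a)] = false;
        }
    }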

  9. Baseline Microarchitecture [Block diagram: Instr-Cache and Decoder Queue at Fetch; Decoder at Decode; Register Renamer (RAT) at Dispatch; Reservation Station / LSQ at Issue; Functional Units and Ld/St Unit with MOB at Execute; ReOrder Buffer and Arch RF at Commit]

  10. Microarchitecture Extension [Same pipeline as the baseline, with additions: a Morphing Pre-Decode stage that extracts the $sp offset, a hash of SP and offset, a Max SP register, the Stack Value File, and an interlock with the Ld/St unit / MOB]

  11. Microarchitecture Extension [Walkthrough: the store stq $r10, 24($sp) enters the pipeline; TOS marks the top-of-stack entry of the Stack Value File]

  12. Microarchitecture Extension [The morphing pre-decode hashes SP and the displacement 24 into SVF entry offset 3 (24 bytes / 8-byte entries)]

  13. Microarchitecture Extension [The renamed stack slot $r35 is mapped to ROB-18 while the store is in flight]

  14. Microarchitecture Extension [Animation step: the mapping $r35 → ROB-18 is unchanged as the store proceeds]

  15. Microarchitecture Extension [At commit the mapping becomes $r35 → SVF3: the stored value now lives in Stack Value File entry 3]
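The walkthrough in slides 11-15 can be summarized as a tiny functional model. This is a sketch with assumed structure sizes, an assumed hash, and illustrative entry numbers, not the paper's exact mechanism.

    #include <stdint.h>
    #include <stdio.h>

    #define SVF_ENTRIES 128

    enum loc_kind { LOC_ROB, LOC_SVF };
    struct mapping { enum loc_kind kind; unsigned index; };

    static struct mapping rat_svf[SVF_ENTRIES];   /* RAT extension for stack slots */
    static uint64_t       svf_value[SVF_ENTRIES]; /* committed stack values */

    static unsigned svf_hash(uint64_t sp, int16_t disp) {
        return (unsigned)(((sp + (uint64_t)(int64_t)disp) >> 3) % SVF_ENTRIES);
    }

    int main(void) {
        uint64_t sp = 0x7fffd000;            /* illustrative stack pointer */

        /* Decode: "stq $r10, 24($sp)" is recognized by the morphing pre-decode,
         * and SP plus the displacement is hashed into an SVF entry. */
        unsigned entry = svf_hash(sp, 24);

        /* Dispatch: while the store is in flight, the renamed stack slot (the
         * slide labels it $r35) points at the producing ROB entry (ROB-18). */
        rat_svf[entry] = (struct mapping){ LOC_ROB, 18 };

        /* Commit: the value of $r10 is written into the SVF and the mapping is
         * updated to point at the SVF entry itself ($r35 -> SVF3 on the slide). */
        svf_value[entry] = 0x1234;           /* illustrative value of $r10 */
        rat_svf[entry] = (struct mapping){ LOC_SVF, entry };

        printf("stq $r10, 24($sp) morphed into SVF entry %u\n", entry);
        return 0;
    }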

  16. Why could SVF be faster ? • It reduces the latency of stack references • It effectively increases the number of memory ports by rerouting more than half of all memory references to the SVF • It reduces contention in the MOB • More flexibility in renaming stack references • It reduces memory traffic

  17. Simulation Framework SimpleScalar (Alpha binaries), out-of-order model

  18. Speedup Potential of SVF • Assume all references can be morphed • ~30% speedup for a 16-wide machine with a dual-ported L1

  19. SVF Reference Type Breakdown • 86% of stack references can be morphed • Re-routed references enter the normal memory pipeline

  20. Comparison with stack cache • (R+S): R regular D-cache ports plus S stack-cache or SVF ports

  21. Memory Traffic • SVF reduces memory traffic by orders of magnitude. • For gcc, ~28M (Stk$ → L2) reduced to ~86K (SVF → L1). • Incoming traffic is eliminated because the SVF does not allocate a cache line on a miss. • Outgoing traffic consists of only those words that are dirty when evicted, instead of entire cache lines.
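A small sketch of the outgoing-traffic point, with assumed line and word sizes: only the dirty quadwords leave the core, not whole cache lines.

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    #define LINE_WORDS 8   /* a 64-byte line holds 8 quadwords (assumed sizes) */

    /* Count the 8-byte words that would actually be written back on eviction. */
    size_t dirty_words_to_write_back(const bool dirty[LINE_WORDS]) {
        size_t sent = 0;
        for (size_t i = 0; i < LINE_WORDS; i++)
            if (dirty[i])
                sent++;            /* clean words generate no traffic */
        return sent;
    }

    int main(void) {
        bool dirty[LINE_WORDS] = { true, false, false, true, false, false, false, false };
        printf("words written back: %zu of %d\n",
               dirty_words_to_write_back(dirty), LINE_WORDS);
        return 0;
    }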

  22. SVF over Baseline Performance • (R+S): R regular D-cache ports plus S SVF ports

  23. Conclusions • Stack references have several unique characteristics • Contiguity, $sp+disp addressing, first reference is a store, frame deallocation. • Stack Value File • a microarchitecture extension that exploits these characteristics • improves performance by 24-65%

  24. Questions & Answers

  25. That's all, folks !!! http://www.eecs.umich.edu/~linear

  26. Backup Foils

  27. Stack Depth Variation

  28. Offset Locality of Stack [Plot: cumulative % vs. offset in bytes, log scale] • Cumulative offset within a function call • Avg: 3 to 380 bytes • >80% of offsets within 400 bytes • >99% within 8 KB

  29. Conclusions • Stack reference features • Contiguity • No dirty writeback when the stack is deallocated • Stack Value File • Fast indexing • Alleviates the need to multi-port the L1 cache • Smaller, no tags, and less power • Exploits ILP
