
Data Speculation

Lipasti and Shen. Exceeding the Dataflow Limit via Value Prediction, 1996. Sodani and Sohi. Understanding the Differences Between Value Prediction and Instruction Reuse, 1998. Adam Wierman, Daniel Neill.



Presentation Transcript


  1. Data Speculation. Lipasti and Shen. Exceeding the Dataflow Limit via Value Prediction, 1996. Sodani and Sohi. Understanding the Differences Between Value Prediction and Instruction Reuse, 1998. Adam Wierman, Daniel Neill.

  2. A Taxonomy of Speculation. What can we speculate on? Speculative execution divides into control speculation (branch direction, branch target) and data speculation (data location, data value). Question: What makes speculation possible?

  3. Value Locality. How often does the same value result from the same instruction twice in a row? Question: Where does value locality occur?

  Instruction type                           Value locality?
  Single-cycle arithmetic (e.g. addq $1 $2)  Somewhat
  Single-cycle logical (e.g. bis $1 $2)      Yes
  Multi-cycle arithmetic (e.g. mulq $1 $2)   No
  Register move (e.g. cmov $1 $2)            Yes
  Integer load (e.g. ldq $1 8($2))           Yes
  Store with base register update            No
  FP load                                    Yes
  FP multiply                                Somewhat
  FP add                                     Somewhat
  FP move                                    Yes
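Last-value locality is easy to measure in software. A minimal sketch, assuming a hypothetical trace of (pc, result) pairs (the trace format and function name are mine, not from the papers): it counts how often an instruction produces the same result as its own previous execution.

```python
# Measure last-value locality over a hypothetical trace of (pc, result) pairs.
def last_value_locality(trace):
    last = {}            # pc -> last observed result for that instruction
    hits = total = 0
    for pc, result in trace:
        if pc in last:   # only count re-executions, not first encounters
            total += 1
            if last[pc] == result:
                hits += 1
        last[pc] = result
    return hits / total if total else 0.0

# A loop whose add produces a new counter value each iteration (low locality)
# but whose load keeps returning the same base address (high locality).
trace = [(0x40, i) for i in range(10)] + [(0x44, 0x1000)] * 10
print(last_value_locality(trace))  # -> 0.5
```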

  4. Value Locality. Question: Why is speculation useful?
  addq $1 $2 $3
  addq $3 $1 $4
  addq $3 $2 $5
  The second and third instructions depend on $3; predicting the value of $3 lets all three run in parallel on a superscalar machine.

  5. Exploiting Value Locality. Value Prediction (VP): "predict the results of instructions based on previously seen results." Instruction Reuse (IR): "recognize that a computation chain has been previously performed and therefore need not be performed again."

  6. Exploiting Value Locality.
  • Value Prediction (VP): Fetch → Decode → Issue → Execute → Commit, with the value predicted at fetch and verified before commit if mispredicted.
  • Instruction Reuse (IR): Fetch → Decode → Issue → Execute → Commit, with a check for a previous use at fetch and verification that the arguments are the same if reused.

  7. Value Prediction (Lipasti & Shen, 1996)

  8. Value Prediction (pipeline: Fetch → Decode → Issue → Execute → Commit, with the value predicted at fetch and verified before commit if mispredicted)
  • Speculative prediction of register values.
  • Values are predicted during fetch and dispatch, then forwarded to dependent instructions.
  • Dependent instructions can be issued and executed immediately.
  • Before committing a dependent instruction, we must verify the predictions. If a prediction was wrong, the dependent instructions must restart with the correct values.

  9. Overview. Two tables, both indexed by the instruction address (PC): the Classification Table (CT) keeps a prediction history and decides "Should I predict?"; the Value Prediction Table (VPT) keeps a value history and supplies the predicted value.

  10. How to predict values? Value Prediction Table (VPT):
  • Cache indexed by instruction address (PC).
  • Each entry maps to one or more 64-bit values.
  • Values are replaced (LRU) when an instruction is first encountered or when a prediction is incorrect.
  • A 32 KB cache gives 4K 8-byte entries.

  11. Estimating prediction accuracy. Classification Table (CT):
  • Cache indexed by instruction address (PC).
  • Each entry maps to a 2-bit saturating counter, incremented when the prediction is correct and decremented when it is wrong.
  • 0, 1 = don't use the prediction.
  • 2 = use the prediction.
  • 3 = use the prediction, and don't replace the stored value if it turns out wrong.
  • 1K entries are sufficient.
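Putting slides 10 and 11 together, the predictor can be sketched in software. A minimal sketch in which hash-indexed dicts stand in for the fixed-size caches (class and method names are mine, not from the paper); it implements a last-value VPT gated by a 2-bit saturating counter CT, including the state-3 "don't replace when wrong" rule.

```python
class ValuePredictor:
    """Last-value predictor: a VPT of values plus a CT of 2-bit counters,
    both indexed by PC (unbounded dicts here for simplicity)."""
    def __init__(self):
        self.vpt = {}  # pc -> last value produced by that instruction
        self.ct = {}   # pc -> 2-bit saturating counter (0..3)

    def predict(self, pc):
        """Return a predicted value, or None if the CT says don't predict."""
        if self.ct.get(pc, 0) >= 2 and pc in self.vpt:
            return self.vpt[pc]
        return None

    def verify(self, pc, actual):
        """Train on the computed result; returns True if the VPT was right."""
        counter = self.ct.get(pc, 0)
        correct = self.vpt.get(pc) == actual
        if correct:
            self.ct[pc] = min(3, counter + 1)
        else:
            self.ct[pc] = max(0, counter - 1)
            if counter < 3:            # state 3: keep old value even when wrong
                self.vpt[pc] = actual
        return correct

vp = ValuePredictor()
for _ in range(4):          # the same PC keeps producing 42
    vp.verify(0x40, 42)
print(vp.predict(0x40))     # 42: counter is saturated, prediction allowed
```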

  12. Verifying predictions
  • The predicted instruction executes normally.
  • A dependent instruction cannot commit until the predicted instruction has finished executing.
  • The computed result is compared to the prediction; if they match, dependent instructions can commit.
  • If not, dependent instructions must reissue and execute with the computed value. Misprediction penalty: the result arrives 1 cycle later than with no prediction.

  13. Results
  • A realistic configuration on a simulated (current and near-future) PowerPC gave 4.5-6.8% speedups.
  • That is 3-4x more speedup than devoting the extra space to cache.
  • Speedups vary widely between benchmarks (grep: 60%).
  • Potential speedups reach 70% for idealized configurations.
  • Can exceed the dataflow limit (on an idealized machine).

  14. Instruction Reuse (Sodani & Sohi, 1998)

  15. Instruction Reuse (pipeline: Fetch → Decode → Issue → Execute → Commit, with a check for previous use at fetch and argument verification at decode)
  • Obtain the results of instructions from their previous executions.
  • If the previous results are still valid, don't execute the instruction again: just commit the results!
  • Non-speculative, with early verification:
  • Previous results are read in parallel with fetch.
  • The reuse test runs in parallel with decode.
  • The instruction is executed only if the reuse test fails.

  16. How to reuse instructions? Reuse buffer:
  • Cache indexed by instruction address (PC), 4-way set-associative.
  • Stores the result of an instruction along with the information needed to establish reusability:
  • operand register names;
  • a pointer chain of dependent instructions.
  • Assume 4K entries (each entry takes 4x as much space as a VPT entry, so compare against a 16K-entry VP configuration).

  17. Reuse Scheme
  • Dependent chain of results (each entry points to the previous instruction in the chain).
  • An entry is reusable only if the entries it depends on have been reused (can't reuse out of order).
  • Start of chain: reusable if its "valid" bit is set; invalidated when its operand registers are overwritten.
  • Loads and stores need special handling.
  • An instruction will not be reused if:
  • its inputs are not ready in time for the reuse test (decode stage);
  • its operand registers differ.
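The reuse test can be sketched as follows. A minimal sketch, simplified to register-operand instructions only and ignoring dependence chains, loads, and stores (names are mine, not Sodani and Sohi's): an entry is reused only if the current operand values match those recorded at the previous execution.

```python
class ReuseBuffer:
    """PC-indexed buffer of (operand regs, operand values, result).
    An entry is reusable only if the current operand values still match."""
    def __init__(self):
        self.entries = {}  # pc -> (src_regs, src_vals, result)

    def record(self, pc, src_regs, regfile, result):
        """Store a completed instruction's operands and result."""
        vals = tuple(regfile[r] for r in src_regs)
        self.entries[pc] = (src_regs, vals, result)

    def reuse_test(self, pc, regfile):
        """Return the stored result if reusable, else None (must execute)."""
        entry = self.entries.get(pc)
        if entry is None:
            return None
        src_regs, src_vals, result = entry
        if tuple(regfile[r] for r in src_regs) == src_vals:
            return result
        return None

rb = ReuseBuffer()
regs = {1: 5, 2: 7}
rb.record(0x40, (1, 2), regs, 12)   # addq $1 $2 produced 12
print(rb.reuse_test(0x40, regs))    # 12: operands unchanged, skip execute
regs[1] = 9                         # $1 overwritten since last execution
print(rb.reuse_test(0x40, regs))    # None: must execute again
```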

  18. Results
  • Attempts to evaluate "realistic" and "comparable" schemes for VP and IR on a simulated MIPS architecture.
  • Are these really realistic? They assume an oracle or a parallel reuse test.
  • Net performance: VP is better on some benchmarks, IR on others; speedups are typically 5-10%.
  • A more interesting question: can the two schemes be combined?
  • Claim: 84-97% of redundant instructions are reusable.

  19. Comparing VP and IR. Value Prediction (VP): "predict the results of instructions based on previously seen results." Instruction Reuse (IR): "recognize that a computation chain has been previously performed and therefore need not be performed again."

  20. Comparing VP and IR. Which captures more redundancy? VP, because IR can't reuse a result when:
  • the inputs aren't ready in time for the reuse test;
  • the same result follows from different inputs;
  • VP simply makes a lucky guess.

  21. Comparing VP and IR. Which handles misprediction better? IR: it is non-speculative, so it never mispredicts.

  22. Comparing VP and IR. Which integrates best with branches? IR: mispredicted branches are detected earlier, and instructions from mispredicted paths can be reused. VP, by contrast, causes more misprediction.

  23. Comparing VP and IR. Which is better for resource contention? IR: it might not even need to execute the instruction.

  24. Comparing VP and IR. Which is better for execution latency? IR: VP causes some instructions to be executed twice (when their values are mispredicted), while IR executes each instruction once or not at all.

  25. Possible class project: can we get the best of both techniques (VP and IR)?

  26. Data Speculation. Lipasti and Shen. Exceeding the Dataflow Limit via Value Prediction, 1996. Sodani and Sohi. Understanding the Differences Between Value Prediction and Instruction Reuse, 1998. Adam Wierman, Daniel Neill.
