CS61C Midterm #2 Review Session
Presentation Transcript

  1. CS61C Midterm #2 Review Session A little Cache goes a long way

  2. The Ideal Memory System Fast, Cheap (Large)

  3. Actual Memory Systems Fast, Expensive (Small) or Slow, Cheap (Large)

  4. Idea: Multilevel Memory (cache) Combine a small, fast memory with a large, cheap memory to approximate the ideal: fast and large

  5. The Cache • Store recently used data in fast memory close to the CPU; each cache entry holds a tag plus its data • Cache Hit • The address we’re looking for is in the cache • Cache Miss • Not found… read main memory and insert the block into the cache • This works because programs exhibit locality

  6. Locality • Spatial Locality • Having just referenced address x, a reference to data near x is likely soon (e.g., sequential code, array data) • Temporal Locality • A reference to x itself is likely again soon (e.g., stack data, loop code)

  7. Computing Average Access Time Q: Suppose we have a cache with a 5ns access time, main memory with a 60ns access time, and a cache hit rate of 95%. What is the average access time?

  8. Cache Design Issues • Associativity • Fully associative, direct-mapped, n-way set associative • Block Size • Replacement Strategy • LRU, etc. • Write Strategy • Write-through, write-back

  9. An Example

  10. Multiple Choice (1) • LRU is an effective cache replacement policy primarily because programs • exhibit locality of reference • usually have small working sets • read data much more frequently than they write data • can generate addresses that collide in the cache

  11. Multiple Choice (2) • Increasing the associativity of a cache improves performance primarily because programs • exhibit locality of reference • usually have small working sets • read data much more frequently than they write data • can generate addresses that collide in the cache

  12. Multiple Choice (3) • Increasing the block size of a cache improves performance primarily because programs • exhibit locality of reference • usually have small working sets • read data much more frequently than they write data • can generate addresses that collide in the cache