
The Stanford Directory Architecture for Shared Memory (DASH)*


Presentation Transcript


  1. The Stanford Directory Architecture for Shared Memory (DASH)* Presented by: Michael Bauer ECE 259/CPS 221 Spring Semester 2008 Dr. Lebeck * Based on “The Stanford Dash Multiprocessor” in IEEE Computer March 1992

  2. Outline • Motivation • High Level System Overview • Cache Coherence Protocol • Memory Consistency Model: Release Consistency • Overcoming Long Latency Operations • Software Support • Performance Results • Conclusion: Where is it now?

  3. Motivation Goals: • Minimal impact on programming model • Cost efficiency • Scalability!!! Design Decisions: • Shared Address Space (no MPI) • Parallel architecture instead of next sequential processor (no clock issues yet!) • Hardware controlled, directory based cache coherency

  4. High Level System Overview [Diagram: several clusters, each containing processors with private caches plus cluster memory and a directory, connected by an interconnection network] A shared address space without shared memory??* * See http://www.uschess.org/beginners/read/ for meaning of “??”

  5. Cache Coherence Protocol DASH’s Big Idea: Hierarchical Directory Protocol • Locate cache blocks using a hierarchy of directories • Like NUCA except for directories (NUDA = Non-Uniform Directory Access?) • Cache blocks in three possible states • Dirty (M) • Shared (S) • Uncached (I) Hierarchy levels: • Processor Level: the processor’s own cache • Local Cluster Level: other processor caches within the local cluster • Home Cluster Level: directory and main memory associated with a given address • Remote Cluster Level: processor caches in remote clusters
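A minimal sketch of what a DASH-style directory entry might look like, assuming one presence bit per cluster and the three block states named on the slide; the names (dir_entry_t, NUM_CLUSTERS, must_forward_to_owner) are illustrative, not taken from the DASH hardware:

```c
#include <stdint.h>
#include <stdbool.h>

#define NUM_CLUSTERS 16  /* assumed prototype scale; the directory tracks one bit per cluster */

typedef enum { UNCACHED, SHARED, DIRTY } block_state_t;  /* the I / S / M states on the slide */

typedef struct {
    block_state_t state;     /* Uncached, Shared, or Dirty */
    uint16_t      presence;  /* bit i set => cluster i may hold a copy of the block */
} dir_entry_t;

/* When a read miss reaches the home cluster, the directory decides whether
 * main memory can supply the data or the request must be forwarded to the
 * one remote cluster holding the block dirty. */
static bool must_forward_to_owner(const dir_entry_t *e) {
    return e->state == DIRTY;
}
```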

  6. Cache Coherency Example [Diagram: the requesting processor’s cluster, the home cluster for the address, and a remote cluster whose processor holds the block, connected by the interconnect network] 1. Processor makes request on local bus 2. No response, directory broadcasts on network 3. Home directory sees request, sends message to remote cluster 4. Remote directory puts request on bus 5. Remote processor responds with data 6. Remote directory forwards data, updates home directory 7. Data delivered, home directory updated
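A rough code walk-through of the same remote read-miss path, reusing the directory-entry layout from the earlier sketch; the function and cluster-ID parameters are hypothetical and exist only to make the seven steps concrete:

```c
#include <stdio.h>

typedef enum { UNCACHED, SHARED, DIRTY } block_state_t;

typedef struct {
    block_state_t state;
    unsigned      presence;  /* one presence bit per cluster */
} dir_entry_t;

/* Walk through a read miss whose home directory shows the block Dirty in a
 * remote cluster, mirroring steps 1-7 on the slide. */
static void remote_read_miss(dir_entry_t *home, int requester, int owner) {
    printf("1-2: miss in cluster %d, request travels to the home cluster\n", requester);
    if (home->state == DIRTY) {
        printf("3-4: home forwards the request to dirty cluster %d\n", owner);
        printf("5-6: cluster %d supplies the data, home directory is updated\n", owner);
        home->state = SHARED;              /* block is now shared, no longer dirty */
        home->presence |= (1u << owner);   /* previous owner keeps a shared copy   */
    }
    home->presence |= (1u << requester);   /* requester now holds a copy           */
    printf("7:   data delivered to cluster %d\n", requester);
}
```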

  7. Implications of Cache Coherence Protocol • What do hierarchical directories get us? • What problems still exist? • Very fast access on local cluster • Moderately fast access to home cluster • Minimized data movement (assumed temporal and spatial locality?) • Broadcast in some circumstances can be a bottleneck to scalability • Complexity of cache and directory controllers, require many outstanding requests to hide latency -> power hungry CAMs • Potential for long latency events as shown in example (more on this later)

  8. Memory Consistency Model: Release Consistency Release Consistency Review*: • W->R reordering allowed (to different blocks only) • W->W reordering allowed (to different blocks only) • R->W (to different blocks only) and R->R reordering allowed Why Release Consistency? • Provides acceptable programming model • Reordering events is essential for performance on a variable latency system • Relaxed requirements for interconnect network, no need for in-order distribution of messages * Taken from “Shared Memory Consistency Models: A Tutorial”, we’ll read this later
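To make the acquire/release ordering concrete, here is a minimal producer/consumer sketch using C11 atomics; this is a modern analogue of the synchronization DASH exposes to the programmer, not DASH's own instruction set:

```c
#include <stdatomic.h>

int payload;           /* ordinary (non-atomic) shared data */
atomic_int ready = 0;  /* synchronization flag              */

void producer(void) {
    payload = 42;                                 /* ordinary write; may be freely      */
                                                  /* reordered with other ordinary ops  */
    atomic_store_explicit(&ready, 1,
                          memory_order_release);  /* release: all prior writes become   */
                                                  /* visible before the flag is set     */
}

void consumer(void) {
    while (atomic_load_explicit(&ready,
                                memory_order_acquire) == 0)  /* acquire: later reads    */
        ;                                                    /* cannot move above this  */
    int v = payload;   /* guaranteed to observe 42 */
    (void)v;
}
```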

  9. Overcoming Long Latency Operations Prefetching: • How is this beneficial to execution? • What can go wrong with prefetching? • Does this scale? Update and Deliver Operations: • What if we know data is going to be needed by many threads? • Tell system to broadcast data to everyone using Update-Write operation • Does this scale well? • What about embarrassingly parallel applications?
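A small illustration of the prefetching idea, in the spirit of DASH's non-binding prefetch but expressed with the GCC/Clang __builtin_prefetch intrinsic rather than DASH's actual operation; PREFETCH_DISTANCE is an assumed tuning parameter, not a measured value:

```c
#include <stddef.h>

#define PREFETCH_DISTANCE 8  /* how many iterations ahead to fetch (assumed) */

double sum_array(const double *a, size_t n) {
    double sum = 0.0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n)
            /* Hint that a future element will be needed, overlapping its long-latency
             * fetch with useful work; the element is still loaded normally below. */
            __builtin_prefetch(&a[i + PREFETCH_DISTANCE], /*rw=*/0, /*locality=*/1);
        sum += a[i];
    }
    return sum;
}
```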

  10. Software Support • Parallel version of Unix OS • Handle prefetching in software (will this scale?) • Parallelizing compiler (how well do you think this works?) • Parallel language Jade (how easy to rewrite applications?)

  11. Performance Results What is going on here?!? Do these look like they scale well?

  12. Conclusion: Where is it now? • Novel architecture and cache coherence protocol • Some level of scalability for diverse applications • Why don’t we see DASH everywhere? • Parallel architectures not cost-effective for general purpose computing until recently • Requires adaptation of sequential code to parallel architecture • Power? • Any other reasons? • For anyone interested: DASH -> FLASH -> SGI Origin (Server) • http://www-flash.stanford.edu/architecture/papers/ISCA94/ • http://www.futuretech.blinkenlights.nl/origin/isca.pdf
