
Spatial Computation Computing without General-Purpose Processors






Presentation Transcript


  1. Spatial Computation: Computing without General-Purpose Processors. Mihai Budiu, Microsoft Research – Silicon Valley; Girish Venkataramani, Tiberiu Chelcea, Seth Copen Goldstein, Carnegie Mellon University

  2. Outline • Intro: Problems of current architectures • Compiling Application-Specific Hardware • ASH Evaluation • Conclusions [Graph: processor performance, 1 to 1000 on a log scale, 1980–2000]

  3. Resources [Intel] • We do not worry about not having hardware resources • We worry about being able to use hardware resources

  4. Complexity • Cannot rely on global signals (the clock is a global signal) [Graph: chip size (10^4 to 10^10 transistors) growing faster than designer productivity, with ALUs in between, 1981–2009; gate delay 20ps vs. wire delay 5ps]

  5. Complexity • Cannot rely on global signals (the clock is a global signal) • Automatic translation C → HW • Simple, short, unidirectional interconnect • Simple hardware, mostly idle • No interpretation • Distributed control, asynchronous [The chip-size vs. designer-productivity graph from slide 4, annotated with these properties]

  6. Our Proposal: Application-Specific Hardware • ASH addresses these problems • ASH is not a panacea • ASH is "complementary" to the CPU [Diagram: CPU (low-ILP computation + OS + VM) and ASH (high-ILP computation) sharing the cache ($) and memory]

  7. Paper Content • Automatic translation of C to hardware dataflow machines • High-level comparison of dataflow and superscalar • Circuit-level evaluation -- power, performance, area

  8. Outline • Problems of current architectures • CASH: Compiling Application-Specific Hardware • ASH Evaluation • Conclusions

  9. Application-Specific Hardware [Diagram: C program → Compiler → Dataflow IR → HW backend → Dataflow machine, realized as reconfigurable/custom hardware]

  10. Computation • Program: x = a & 7; ... y = x >> 2; • Dataflow IR: nodes &7 and >>2 connected by edge x • Circuits: the same graph realized directly in hardware • No interpretation
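The translation on this slide can be illustrated in software. The following is a minimal sketch, not the CASH compiler's representation: each dataflow node fires once its input token is valid, with no instruction fetch or interpretation; all names (`token`, `node_and`, `circuit`) are illustrative.

```c
#include <assert.h>
#include <stdbool.h>

/* A token on a dataflow edge: a value plus a validity flag. */
typedef struct { int value; bool valid; } token;

/* The "&7" node: fires when its input token is valid. */
static token node_and(token in, int k) {
    token out = { in.value & k, in.valid };
    return out;
}

/* The ">>2" node: fires when x is ready. */
static token node_shr(token in, int k) {
    token out = { in.value >> k, in.valid };
    return out;
}

/* The circuit for: x = a & 7; y = x >> 2; */
static int circuit(int a) {
    token ta = { a, true };        /* input token arrives        */
    token tx = node_and(ta, 7);    /* &7 node fires              */
    token ty = node_shr(tx, 2);    /* >>2 node fires after it    */
    assert(ty.valid);
    return ty.value;
}
```

Data simply flows from node to node; there is no program counter.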

  11. Basic Computation = Pipeline Stage [Diagram: an operation (+) followed by a latch, with data, valid, and ack signals]

  12. Distributed Control Logic • No global FSM • rdy/ack handshakes between stages • Short, local wires

  13. Forward Branches • if (x > 0) y = -x; else y = b*x; • SSA = no arbitration: a MUX implements the SSA φ for y • Conditionals ⇒ speculation [Dataflow graph: the > 0, -, and * nodes feed a MUX selected by the predicate]
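The speculation on this slide can be sketched in plain C. This is an illustrative model, not the generated circuit: both arms of the conditional execute unconditionally, and a MUX (the SSA φ) selects the live value, so no arbitration is needed.

```c
#include <assert.h>
#include <stdbool.h>

/* The MUX node: selects one of two already-computed results. */
static int mux(bool pred, int if_true, int if_false) {
    return pred ? if_true : if_false;
}

/* Model of: if (x > 0) y = -x; else y = b*x; */
static int conditional(int x, int b) {
    bool p      = x > 0;   /* predicate computed in parallel...  */
    int  then_y = -x;      /* ...with both speculative arms      */
    int  else_y = b * x;
    return mux(p, then_y, else_y);  /* φ: keep the live value    */
}
```

Both arms always run; the unselected result is simply discarded.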

  14. Memory Access • Monolithic memory reached through a pipelined, arbitrated network (LD/ST) • Local communication vs. global structures • Future work: fragment this!

  15. Outline • Problems of current architectures • Compiling ASH • ASH Evaluation • Conclusions

  16. Evaluating ASH • Mediabench kernels (1 hot function/benchmark) • Toolflow: C → CASH core → Verilog back-end → commercial tools (Synopsys, Cadence P/R) → ASIC • 180nm std. cell library, 2V, ~1999 technology • ModelSim (Verilog simulation) produces the performance numbers

  17. Compile Time • C source: ~200 lines • CASH core: 20 seconds • Verilog back-end: 10 seconds • Synopsys: 20 minutes • Cadence P/R: 1 hour

  18. ASH Area [Chart: ASH kernel areas compared against a P4 (217) and a minimal RISC core]

  19. ASH vs. 600MHz CPU [0.18 µm]

  20. Bottleneck: Memory Protocol • Enabling dependent operations requires a round-trip to memory (LD/ST through the LSQ) • Limit study: zero-time round trip ⇒ up to 5x speed-up • Exploring novel memory access protocols

  21. Power [Chart: Xeon (+cache): 67,000; µP: 4,000; DSP: 110]

  22. Energy-delay vs. Wattch

  23. 1000x Energy Efficiency [Chart: energy efficiency in operations/nJ, 0.1 to 1000 on a log scale: general-purpose microprocessors and DSPs at the low end; asynchronous µP and FPGA in between; ASH media kernels approaching dedicated hardware at the high end]

  24. Outline • Problems of current architectures • Compiling ASH • Evaluation • Related work, Conclusions

  25. Related Work • Optimizing compilers • High-level synthesis • Reconfigurable computing • Dataflow machines • Asynchronous circuits • Spatial computation • We target an extreme point in the design space: no interpretation, fully distributed computation and control

  26. ASH Design Point • Design an ASIC in a day • Fully automatic synthesis to layout • Fully distributed control and computation (spatial computation) • Replicate computation to simplify wires • Energy/op rivals custom ASIC • Performance rivals superscalar • E×t 100 times better than any processor

  27. Conclusions Spatial computation strengths

  28. Backup Slides • Absolute performance • Control logic • Exceptions • Leniency • Normalized area • Loops • ASH weaknesses • Splitting memory • Recursive calls • Leakage • Why not compare to… • Targeting FPGAs

  29. Absolute Performance

  30. Pipeline Stage [Circuit: data_in → D register → data_out; a C-element combines rdy_in and ack_in to drive rdy_out and ack_out] back
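The handshake control of an asynchronous pipeline stage is conventionally built around a Muller C-element. The slide's diagram is not fully recoverable from the transcript, so the following is a behavioral sketch of that standard primitive, not the paper's exact circuit.

```c
#include <assert.h>
#include <stdbool.h>

/* Muller C-element: the output follows the inputs only when they
   agree; when they disagree, it holds its previous state. This is
   the state-holding gate used to sequence rdy/ack handshakes. */
static bool c_element(bool a, bool b, bool prev) {
    if (a == b) return a;   /* both agree: follow the inputs */
    return prev;            /* disagree: hold state          */
}
```

In a stage, such a gate keeps the register's enable stable until both the upstream "ready" and the downstream "ack" have transitioned.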

  31. Exceptions • Strictly speaking, C has no exceptions • In practice it is hard to accommodate exceptions in hardware implementations • An advantage of software flexibility: the PC is a single point of execution control [Diagram: CPU (low-ILP computation + OS + VM + exceptions) and ASH (high-ILP computation) sharing the cache ($) and memory] back

  32. Critical Paths • if (x > 0) y = -x; else y = b*x; [Dataflow graph: the - arm and the b*x arm reaching the MUX for y have unbalanced path lengths]

  33. Lenient Operations • if (x > 0) y = -x; else y = b*x; • Solves the problem of unbalanced paths back
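Leniency can be modeled as follows: a minimal sketch (names like `wire` and `lenient_mux` are illustrative, not the paper's implementation) of a MUX that produces its output as soon as the predicate and the *selected* input are available, without waiting for the slower, unselected arm.

```c
#include <assert.h>
#include <stdbool.h>

/* A wire carrying a value plus an "available yet?" flag. */
typedef struct { int value; bool avail; } wire;

/* Lenient MUX: outputs as soon as the predicate and the selected
   input are ready, ignoring the state of the other arm. */
static wire lenient_mux(wire pred, wire t, wire f) {
    wire out = { 0, false };
    if (!pred.avail) return out;            /* predicate pending */
    wire sel = pred.value ? t : f;
    if (sel.avail) { out.value = sel.value; out.avail = true; }
    return out;
}

/* Selected arm ready, other arm still computing: output available. */
static bool demo_fast_path(void) {
    wire p    = { 1, true };
    wire fast = { -5, true };
    wire slow = { 0, false };
    wire out  = lenient_mux(p, fast, slow);
    return out.avail && out.value == -5;
}

/* Selected arm not ready: output must wait. */
static bool demo_blocked(void) {
    wire p    = { 0, true };
    wire fast = { -5, true };
    wire slow = { 0, false };
    return !lenient_mux(p, fast, slow).avail;
}
```

This is why leniency fixes the unbalanced-paths problem of the previous slide: the short arm never waits for the long one.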

  34. Normalized Area back

  35. Control Flow ⇒ Data Flow [Diagram: Split (branch) steering data by predicate p; Merge (label) implemented as φ; Gateway admitting data under a predicate]

  36. Loops • int sum=0, i; for (i=0; i < 100; i++) sum += i*i; return sum; [Dataflow graph: initial 0 tokens enter through merges; *, +, and +1 nodes compute; the < 100 node steers tokens back around the loop or out to ret] back
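The loop above can be modeled in software. A minimal illustrative sketch, with the dataflow nodes noted in comments; in the actual static-dataflow circuit there is a single instance of the loop body, and one token per variable circulates through it once per iteration.

```c
#include <assert.h>

/* Model of the slide's loop as a token-circulating circuit:
   int sum=0, i; for (i=0; i < 100; i++) sum += i*i; return sum; */
static int loop_circuit(void) {
    int sum = 0, i = 0;        /* initial tokens entering the merges */
    while (i < 100) {          /* the "<100" node steers the tokens  */
        sum = sum + i * i;     /* the "*" and "+" nodes fire         */
        i = i + 1;             /* the "+1" node fires                */
    }
    return sum;                /* the final token reaches "ret"      */
}
```

Because the dataflow is static, an iteration's tokens must drain from a node before the next iteration's tokens can occupy it (no re-issue of the same instruction, as slide 37 notes).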

  37. ASH Weaknesses • Both branch and join not free • Static dataflow (no re-issue of same instr) • Memory is “far” • Fully static • No branch prediction • No dynamic unrolling • No register renaming • Calls/returns not lenient back

  38. Branch Prediction • for (i=0; i < N; i++) { ... if (exception) break; } • Predicted not taken: effectively a noop for the CPU; its result is available before its inputs • ASH must evaluate the exception test on its critical path [Diagram: ASH critical path through <, !, and & vs. the CPU critical path] back

  39. Memory Partitioning • MIT RAW project: Babb FCCM ‘99, Barua HiPC ‘00,Lee ASPLOS ‘00 • Stanford SpC: Semeria DAC ‘01, TVLSI ‘02 • Illinois FlexRAM: Fraguella PPoPP ‘03 • Hand-annotations #pragma back

  40. Recursion • Save live values • Recursive call • Restore live values [Diagram: an explicit stack holds the saved values] back

  41. Leakage Power • Ps = k · Area · e^(−VT) • Employ circuit-level techniques • Cut power supply of idle circuit portions • most of the circuit is idle most of the time • strong locality of activity • High-VT transistors on non-critical paths back

  42. Why Not Compare To… • In-order processor • Worse in all metrics than superscalar, except power • We beat it in all metrics, including performance • DSP • We expect roughly the same results as for the superscalar (Wattch maintains high IPC for these kernels) • ASIC • No available tool-flow supports C to the same degree • Asynchronous ASIC • We compared with a Balsa synthesis system • We are 15 times better in E×t compared to the resulting ASIC • Async processor • We are 350 times better in E×t than Amulet (scaled to 0.18 µm) back

  43. Compared to Next Talk back

  44. Why Not Target FPGAs • FPGAs do not support asynchronous circuits • Very inefficient in area, power, delay • Too fine-grained for datapath circuits • We are designing an async FPGA back
