
Focus Group: Benchmarks



Presentation Transcript


  1. Focus Group: Benchmarks — Andre Reis (UFRGS), Jarrod Roy (Univ. Michigan), Vivek Shende (Univ. Michigan), Igor Markov (Univ. Michigan), Fan Mo (UC Berkeley), Andreas Kuehlmann (Cadence)

  2. The “Benchmark” Focus Group • The Benchmark discussion is a… • forever-ongoing mutual finger-pointing exercise to assign blame for bad results sections in papers • Igor and Prabhakar (OUR IWLS BENCHMARK CHAIR!!!!) will talk about the IWLS benchmark effort • Assuming that… • after burning through many more IWLS benchmark chairs… • we finally get companies to give us some circuits with more than 100 gates… • we have put all the data nicely into a well-defined open format… • What would be left?????

  3. The “Benchmark” Focus Group • Accountability • Reproducibility • Transparency • … for reporting research results • So, we decided to rename ourselves… The “ART” Focus Group

  4. A Disclaimer • The following statements do not apply to… • the attendees of this year’s IWLS • the advisors or supervisors of members of this Focus Group • …but to some of our colleagues who try to publish in the same area

  5. The “ART” Focus Group • A long time ago, when CAD was still real CAD… • Models were simple • Few previous papers were written on my favorite topic • Results could be reported as simple numbers • Number of rows and columns in a PLA • Number of literals in a multi-level circuit • Runtime on a VAX 11/780 (IBM 360?) • Area overhead of a floorplan • Main focus was on point tools • Results were intuitive and easily verifiable • And then there were simple, focused benchmarks • Deutsch’s difficult example for routing • PLA benchmarks • And then there was also SPICE

  6. The “ART” Problem • Today… • Things have gotten really messy… • No hope for point tools anymore • Model boundaries broke down many years ago… …and we are in permanent denial about it • Nobody wants to tell us the real technology and timing data • Big divergence between academic research and industrial use • Assumptions are oversimplified: “Keep twisting the problem until it finally fits the solution” • Results are not reproducible

  7. The “ART” Problem • Publication ethics versus effort… • There are sooooo many papers out there, and I need to get mine in • The citation cache is < 5 years • Authors don’t read old papers… …and neither do the reviewers • Using “MyMath” helps hide that I reinvented the wheel: “To the best knowledge of the authors, this work was not done before” • Too hard to make results comparable… • So, why bother?

  8. The “ART” Problem • There is no validation process for published results • The papers just stay out there, “uncommented” • Poor developers suffer, just to find out that it does not work… …and don’t tell anyone • This is different in other communities

  9. Remember This “Breakthrough”?

  10. Remember This “Breakthrough”? Jan Hendrik Schön no longer works for Bell Labs…

  11. What shall we do about this?

  12. The “ART” Proposal • We need a publication channel for verifying published research • Common practice in the physics and medical communities • Environments that have become as “messy” as ours • Encourage publishing scientific evaluations of previously reported algorithms/experiments • Currently a straight “reject” at conferences/workshops because the contribution is not “novel” • Evaluations should confirm/refute results in a scientific manner • Not just replicate • Refutation requires proof, not just “it doesn’t work” • E.g., the sum of the block areas in some floorplan is greater than the chip area (see the sketch below)
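
A minimal sketch (not part of the original slides) of the floorplan refutation check described above, assuming rectangular blocks; the block dimensions and chip size below are hypothetical placeholders:

```python
# Refute a claimed floorplan result by a simple area argument:
# if the summed block areas exceed the chip area, the reported
# floorplan cannot possibly be legal, regardless of placement.

def refutes_floorplan(blocks, chip_width, chip_height):
    """Return True if the claimed floorplan is provably infeasible."""
    total_block_area = sum(w * h for w, h in blocks)
    return total_block_area > chip_width * chip_height

# Hypothetical reported result: three blocks on a 10 x 10 chip.
blocks = [(6, 7), (5, 5), (4, 9)]         # (width, height) per block
print(refutes_floorplan(blocks, 10, 10))  # True: 42 + 25 + 36 = 103 > 100
```

A failed check of this kind is a proof of error rather than a mere failure to reproduce, which is exactly the standard of refutation the proposal asks for.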

  13. The “ART” Proposal • Multiple advantages… • Graduate students can get familiar with an area and still get a publication out of it • Authors will be more careful when submitting papers • Tremendous help for industry and others integrating published solutions • Independent evaluation from a different point of view • More experiments • No “convenient” hiding of some benchmarks • More emphasis on implementation details • Heuristic choices • And for our more senior academic fellows… • Additional input for tenure evaluation • Instead of CiteSeer numbers

  14. Thank You
