
  1. NoC Benchmarks Part 1: Application Modeling and Hardware Description. OCP-IP NoC Benchmarking Workgroup, March 2008.

  2. NoC benchmarking questions:
  • How good is this NoC?
  • How good is this NoC at transporting data?
  • How good is this NoC at transporting this application’s data?
  • How good is this NoC at transporting these applications’ data?
  • How good is this NoC compared to this other NoC?
  • Can a NoC be (formally) described without exposing proprietary data?

  3. How good is this NoC?
  • Define the set of parameters of interest
  • Measure and report systematically
  • Refine NoC designs based on reproducible data
  • Improve NoC CAD tools

  4. How good is this NoC at transporting data?
  • Separate computing elements from the NoC infrastructure
  • Model NoC data flow and its interaction with processing elements
  • Define common formats for describing:
    • Application
    • Mapping
    • NoC platform

  5. Benchmark modeling principles
  • Separation: task graphs model computation/communication
  • Orthogonality: the application task graph, the mapped task/PE set, and the network fabric are interchangeable
  • Modularity: each hierarchical component can be used independently

  6. XML model
  • XML/schema description
  • Tags for each component or set of components
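The workgroup's actual schema is not reproduced here; as an illustration of "one tag per component or set of components", a hypothetical fragment might look like the string below (element and attribute names are invented, not the OCP-IP schema), parsed with Python's standard XML module:

```python
# Hypothetical XML benchmark description, one tag per component.
# Element/attribute names are illustrative, NOT the actual OCP-IP schema.
import xml.etree.ElementTree as ET

benchmark_xml = """
<benchmark name="example">
  <application>
    <task id="t0" operations="100"/>
    <task id="t1" operations="250"/>
    <edge src="t0" dst="t1" bytes="64"/>
  </application>
  <mapping>
    <group id="g0" tasks="t0 t1" pe="pe0"/>
  </mapping>
  <platform>
    <pe id="pe0" type="cpu" terminal="n0" freq_mhz="400"/>
  </platform>
</benchmark>
"""

root = ET.fromstring(benchmark_xml)
tasks = [t.get("id") for t in root.iter("task")]
print(tasks)  # ['t0', 't1']
```

Because each component set sits under its own tag, the application, mapping, and platform sections can be swapped independently, matching the orthogonality principle on slide 5.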

  7. Data flow model
  • Data is generated by task graph nodes
  • Trigger conditions are defined for each task:
    • AND: all of the task’s inputs have received triggering data
    • OR: any one input has received triggering data
  • Each triggering defines:
    • Operation count
    • Amount of data to be sent
    • Output port(s) on which to send the data
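The AND/OR trigger semantics above can be sketched in a few lines (a minimal model for illustration, not the workgroup's reference code; all names are invented):

```python
# Minimal sketch of AND/OR task triggering; illustrative only.

def is_triggered(mode, inputs_received):
    """mode: 'AND' fires when all inputs have data, 'OR' when any one has."""
    if mode == "AND":
        return all(inputs_received.values())
    if mode == "OR":
        return any(inputs_received.values())
    raise ValueError(f"unknown trigger mode: {mode}")

# A task with two input ports, only one of which has received data:
received = {"in0": True, "in1": False}
print(is_triggered("AND", received))  # False: waits for all inputs
print(is_triggered("OR", received))   # True: any one input suffices

# What a firing produces (hypothetical field names):
firing = {"op_count": 100, "bytes_out": 64, "out_ports": ["out0"]}
```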

  8. Triggering events
  • The set of events defines the “testbench”
  • Events are represented as non-mapped nodes in the task graph
  • Periodic or “one-shot”
  • At least one event is needed to start an application
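Periodic versus one-shot events can be sketched as lists of firing times (a toy model under assumed units of abstract time steps; the function name is invented):

```python
# Sketch: firing times of a triggering-event node; illustrative only.
# One-shot events fire once; periodic events repeat up to a horizon.

def event_times(start, period=None, horizon=100):
    """Return firing times: one-shot if period is None, else periodic."""
    if period is None:
        return [start]
    return list(range(start, horizon, period))

print(event_times(5))             # one-shot: [5]
print(event_times(0, period=30))  # periodic: [0, 30, 60, 90]
```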

  9. Mapping
  • Relates tasks to processing elements
  • Two-step mapping:
    • Tasks into groups
    • Groups onto processing elements
  • Can model OS threads
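The two-step mapping amounts to composing two lookup tables (a sketch with invented task, group, and PE names; each group can be read as an OS thread):

```python
# Sketch of the two-step mapping: tasks -> groups -> processing elements.
# All names are illustrative.

task_to_group = {"fft": "g0", "filter": "g0", "encode": "g1"}
group_to_pe = {"g0": "pe0", "g1": "pe1"}

def pe_of(task):
    """Compose the two mapping steps to find a task's processing element."""
    return group_to_pe[task_to_group[task]]

print(pe_of("filter"))  # pe0: fft and filter share group g0, hence PE pe0
print(pe_of("encode"))  # pe1
```

Keeping the two tables separate lets a benchmark re-map groups onto a different PE set without touching the task grouping, in line with the orthogonality principle.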

  10. Platform model
  • Resources:
    • Type
    • NoC terminal to connect to
    • DMA, if present
    • Communication overhead
  • Other parameters: operating frequency, operations per cycle, power, aspect ratio
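One way to picture a resource record with the fields listed above (field names and values are assumptions, not the workgroup's schema):

```python
# Sketch of a platform resource with the parameters the slide lists.
# Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Resource:
    rtype: str                    # resource type, e.g. "cpu", "dsp", "mem"
    terminal: str                 # NoC terminal this resource connects to
    has_dma: bool = False
    comm_overhead_cycles: int = 0
    extra: dict = field(default_factory=dict)  # freq, ops/cycle, power, ...

pe0 = Resource("cpu", terminal="n0", has_dma=True,
               extra={"freq_mhz": 400, "ops_per_cycle": 2})
print(pe0.terminal)  # n0
```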

  11. Network model
  • Terminals
  • Topology (routers, links)
  • Arbitrary level of design detail:
    • Default parameters provided
    • Additional parameters can be supplied as name-value pairs
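The "defaults plus name-value pairs" idea can be sketched as follows (parameter names, default values, and router/terminal labels are all invented for illustration):

```python
# Sketch: links inherit default parameters, which arbitrary name-value
# pairs can override or extend, as the slide describes. Values invented.

DEFAULT_LINK = {"width_bits": 32, "latency_cycles": 1}

def make_link(src, dst, **overrides):
    """Build a link record: defaults first, then name-value overrides."""
    return {"src": src, "dst": dst, **DEFAULT_LINK, **overrides}

links = [
    make_link("r0", "r1"),                    # all defaults
    make_link("r1", "r2", latency_cycles=3),  # override one default
    make_link("r2", "t0", voltage="0.9V"),    # arbitrary extra parameter
]
print(links[1]["latency_cycles"])  # 3
```

Because unknown names are simply carried along, the level of design detail can grow without changing the format.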

  12. Measurements
  • Amount of data injected/ejected
  • Number of tasks processed
  • Number of times each task is processed
  • Number of communication phases executed
  • Deadline violations
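The measurement set above is essentially a handful of counters; a minimal sketch, with invented function and variable names, might be:

```python
# Sketch of the measurement counters the slide lists; illustrative only.
from collections import Counter

injected_bytes = 0
task_runs = Counter()   # number of times each task is processed
comm_phases = 0
deadline_violations = 0

def record_run(task, bytes_sent, deadline_met=True):
    """Update counters for one task firing (hypothetical helper)."""
    global injected_bytes, comm_phases, deadline_violations
    task_runs[task] += 1
    injected_bytes += bytes_sent
    comm_phases += 1
    if not deadline_met:
        deadline_violations += 1

record_run("fft", 64)
record_run("fft", 64, deadline_met=False)
print(task_runs["fft"], injected_bytes, deadline_violations)  # 2 128 1
```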

  13. Ongoing work
  • Call for benchmarks
  • Collect NoC benchmarks and make them available
  • Example test cases
  • More updates to come
