
Onchip Interconnect Exploration for Multicore Processors Utilizing FPGAs



Presentation Transcript


  1. Onchip Interconnect Exploration for Multicore Processors Utilizing FPGAs Graham Schelle and Dirk Grunwald, University of Colorado at Boulder

  2. Outline • Network on Chip (NoC) defined • Current onchip interconnect tools • NoCem (NoC Emulator) specification • What else is needed before release • We want it to be used…and cited • Conclusions

  3. Network on Chip Defined (in 1 slide!) • Power/design concerns in modern processors lead to multicore chips • Transistors are seen as “free,” allowing more of them to be spent on non-computational tasks • High-speed clocking means signals no longer propagate across the chip in a single cycle • Networking scales to an arbitrary number of access points and is well understood

  4. Onchip Interconnects for FPGAs • Existing Buses on FPGAs • PLB, OPB, FSL • Can have multiple masters (e.g., processors) • Scale well for current uses of FPGAs • Existing NoCs • Research projects • Proprietary projects • Application-specific (streaming…) • Not built for parameterization; designed with some other (valid) focus

  5. NoCem Specification • Synthesizable VHDL • Heavy use of generics / generate statements • Requires minimal Xilinx IP (FIFOs…) • To modify anything, change the generics; everything else is regenerated automatically • E.g., to go from a 2x2 mesh with 16-bit datawidth to a 4x4 torus with 8-bit datawidth, change 3 lines of code! (see the sketch below)
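
A minimal sketch of what those three lines could look like, assuming the topology, grid size, and datapath width are exposed as top-level generics (the names and defaults below are hypothetical, not NoCem's actual generics):

    library ieee;
    use ieee.std_logic_1164.all;

    -- Hypothetical NoCem-style top level; only the three commented generics
    -- need to change to go from a 2x2 mesh with 16-bit datawidth to a
    -- 4x4 torus with 8-bit datawidth.
    entity nocem_top is
      generic (
        TOPOLOGY   : string  := "MESH";  -- change to "TORUS"
        GRID_DIM   : integer := 2;       -- change to 4 (4x4 array of access points)
        DATA_WIDTH : integer := 16       -- change to 8 (datapath width in bits)
      );
      port (
        clk : in std_logic;
        rst : in std_logic
        -- per-access-point enqueue/dequeue ports elided
      );
    end entity nocem_top;

Generate statements inside the architecture would then instantiate the GRID_DIM x GRID_DIM array of routers and size every channel from DATA_WIDTH, so nothing else has to be edited by hand.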

  6. NoCem Interface • FIFO-ish • Enqueue and dequeue path for every access point • Packet control and data paths • Meaning of those paths depends on the NoC configuration • Datapath • Variable width only; packet length is determined by the packet control • Packet control: src, dest, packet length • The underlying network reads the top-level packet structure and pulls the correct fields at the correct times (see the sketch below)
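
As a rough illustration of that FIFO-ish interface, one access point might expose ports like the following (the signal names and the packed control-word width are assumptions for illustration, not NoCem's actual interface):

    library ieee;
    use ieee.std_logic_1164.all;

    -- Hypothetical per-access-point interface: one enqueue path and one dequeue
    -- path, each split into a packet-control word and a variable-width datapath.
    entity noc_access_point is
      generic (
        DATA_WIDTH : integer := 16;
        CTRL_WIDTH : integer := 24   -- packed src, dest, and packet-length fields
      );
      port (
        clk, rst  : in  std_logic;
        -- enqueue path (core -> network)
        enq_ctrl  : in  std_logic_vector(CTRL_WIDTH-1 downto 0);
        enq_data  : in  std_logic_vector(DATA_WIDTH-1 downto 0);
        enq_wr_en : in  std_logic;
        enq_full  : out std_logic;
        -- dequeue path (network -> core)
        deq_ctrl  : out std_logic_vector(CTRL_WIDTH-1 downto 0);
        deq_data  : out std_logic_vector(DATA_WIDTH-1 downto 0);
        deq_empty : out std_logic;
        deq_rd_en : in  std_logic
      );
    end entity noc_access_point;

The core writes one control word per packet (src, dest, length) and then streams the data words; the network consumes the control word first so it knows how many data words follow.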

  7. NoCem Bridges • Use existing buses, bridged to the NoC • Integration into existing Xilinx tool flows • The NoC can look like memory, a SoC, … • Use the IPIF interface • PLB, OPB • Different bus widths… • But both processors are 32b (see the bridge sketch below)
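
A hedged sketch of such a bridge: two memory-mapped registers on the bus side (one to send, one to receive) drive the hypothetical access-point ports from the previous sketch. The Bus2IP_*/IP2Bus_* names mimic the Xilinx IPIF user-logic convention in simplified form; the register map and all port names are assumptions, not NoCem's actual bridge.

    library ieee;
    use ieee.std_logic_1164.all;

    -- Hypothetical bus-to-NoC bridge core: register 0 = send, register 1 = receive.
    -- The bus side is 32b; the NoC datapath may be narrower, so writes are
    -- truncated and reads are zero-extended.
    entity noc_bus_bridge is
      generic (
        DATA_WIDTH : integer := 16           -- NoC datapath width
      );
      port (
        Bus2IP_Clk  : in  std_logic;
        Bus2IP_Data : in  std_logic_vector(31 downto 0);
        Bus2IP_WrCE : in  std_logic_vector(1 downto 0);
        Bus2IP_RdCE : in  std_logic_vector(1 downto 0);
        IP2Bus_Data : out std_logic_vector(31 downto 0);
        -- NoC access-point side (matches the earlier sketch)
        enq_data    : out std_logic_vector(DATA_WIDTH-1 downto 0);
        enq_wr_en   : out std_logic;
        enq_full    : in  std_logic;
        deq_data    : in  std_logic_vector(DATA_WIDTH-1 downto 0);
        deq_empty   : in  std_logic;
        deq_rd_en   : out std_logic
      );
    end entity noc_bus_bridge;

    architecture sketch of noc_bus_bridge is
    begin
      process (Bus2IP_Clk)
      begin
        if rising_edge(Bus2IP_Clk) then
          -- a bus write to register 0 enqueues one word into the NoC
          if Bus2IP_WrCE(0) = '1' and enq_full = '0' then
            enq_data  <= Bus2IP_Data(DATA_WIDTH-1 downto 0);
            enq_wr_en <= '1';
          else
            enq_wr_en <= '0';
          end if;
          -- a bus read of register 1 returns and pops the head of the receive FIFO
          if Bus2IP_RdCE(1) = '1' and deq_empty = '0' then
            IP2Bus_Data <= (others => '0');
            IP2Bus_Data(DATA_WIDTH-1 downto 0) <= deq_data;
            deq_rd_en <= '1';
          else
            deq_rd_en <= '0';
          end if;
        end if;
      end process;
    end architecture sketch;

Because both PLB and OPB are fronted by the same IPIF-style register view, the same bridge logic can sit behind either bus; only the IPIF wrapper differs.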

  8. How Big is NoCem? [Resource-utilization figure: mesh topology, 16-deep channel FIFOs, round-robin (RR) arbitration]

  9. Example Uses • Memory Architecture (in paper) • Various distributed cache configurations • Asymmetric Processor Configuration • Using MicroBlaze, PowerPC • Special Processor Offloads • Floating Point, Network Processing • All can be emulated over the NoC using NoCem…

  10. For Release • We want NoCem to be used! • Already in use at CU Boulder • Full source will be made available online • To do for release • Clean/zip up code • Some Documentation • ETA: April 2006

  11. Conclusions • NoCem as a research tool • Open source • Non-proprietary • Non-application-specific • NoCem for multicore processor research • Allows NoC exploration • Easy integration into the Xilinx EDK flow • Useful for a variety of research topics in this space

  12. Any Questions?
