
Group Discussion


Presentation Transcript


  1. Group Discussion (Hong Man, 07/21/2010)

  2. UMD DIF with GNU Radio (from Will Plishker’s presentation)
     [Flow diagram; recoverable structure:]
     • GRC path: XML flowgraph (.grc) → Python flowgraph (.py) → GNU Radio engine (Python/C++) with uniprocessor scheduling.
     • 1) Convert or generate a .dif file from the GRC flowgraph: DIF specification (.dif) for The DIF Package (TDP). (Complete)
     • 2) Execute static schedules from DIF: schedule (.dif, .sched) handed to DIF Lite inside the GNU Radio engine. (Complete)
     • 3a) Perform online scheduling. (Proposed)
     • 3b) Architecture specification (.arch?): processors, memories, interconnect. (Proposed)
     • 4) Architecture-aware multiprocessor scheduling (assignment, ordering, invocation). (Proposed)
     • Platform Retargetable Library targeting the platforms: multi-processors, GPUs, Cell, FPGA.
     • Legend: existing or completed vs. proposed.
     A minimal Python flowgraph of the kind this pipeline starts from is sketched below.
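To make the "Python Flowgraph (.py)" box concrete, here is a minimal sketch of the kind of flowgraph GRC generates and from which DIF would extract a dataflow model. It assumes the GNU Radio 3.x Python API (the analog/blocks/filter module split; older releases expose the same blocks under gr.*), and the block choices are illustrative rather than taken from the UMD example.

```python
from gnuradio import gr, blocks, analog, filter
from gnuradio.filter import firdes

class ToyFlowgraph(gr.top_block):
    """Tiny stand-in for a GRC-generated Python flowgraph (.py)."""
    def __init__(self):
        gr.top_block.__init__(self, "toy_flowgraph")
        samp_rate = 32000
        # Each block becomes an actor in the extracted dataflow model;
        # each connection becomes an edge.
        self.src = analog.sig_source_f(samp_rate, analog.GR_COS_WAVE, 1000, 1.0)
        self.fir = filter.fir_filter_fff(
            4,  # decimate by 4 -> a multi-rate edge in the dataflow graph
            firdes.low_pass(1.0, samp_rate, 3000, 1000))
        self.head = blocks.head(gr.sizeof_float, 100000)  # stop after 100k samples
        self.snk = blocks.null_sink(gr.sizeof_float)
        self.connect(self.src, self.fir, self.head, self.snk)

if __name__ == "__main__":
    ToyFlowgraph().run()  # GNU Radio's built-in (uniprocessor) scheduler
```

The decimating FIR makes one edge multi-rate, which is exactly the case where dataflow-based scheduling (slide 4) has something to optimize.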

  3. SSP Interface with DIF
     • Currently DIF extracts a dataflow model from the GRC representation of GNU Radio.
     • GRC is at the waveform level (component block diagram).
     • To interact with DIF, we need to construct CL models at the waveform level.
     • Our current work is mostly at the radio-primitive level.
     • We need to start waveform-level CL modeling.
     • Open questions:
       • Mapping “things” and “paths” in CL models to “actors” in dataflow models
       • Representing “data rates” (“tokens”) in CL models
       • “Processing delay” is missing in both models
     (One hypothetical representation covering these three points is sketched below.)
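One hypothetical way to hold the mapping discussed above: a small graph structure in which CL "things" become dataflow actors and "paths" become edges, with explicit token rates and a placeholder for the processing delay that both models currently lack. The class and field names here are ours, not part of DIF, TDP, or the SSP tooling.

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Actor:
    """Dataflow actor; would correspond to a 'thing' in a waveform-level CL model."""
    name: str
    processing_delay: float = 0.0  # placeholder: missing in both models today

@dataclass
class Edge:
    """Dataflow edge; would correspond to a 'path' in a CL model."""
    src: str
    dst: str
    produce: int = 1   # tokens written per source firing ("data rate")
    consume: int = 1   # tokens read per destination firing

@dataclass
class WaveformGraph:
    actors: Dict[str, Actor] = field(default_factory=dict)
    edges: List[Edge] = field(default_factory=list)

# Waveform-level example with one multi-rate path (a decimate-by-4 filter).
g = WaveformGraph()
for name in ("source", "decimating_fir", "sink"):
    g.actors[name] = Actor(name)
g.edges.append(Edge("source", "decimating_fir", produce=1, consume=4))
g.edges.append(Edge("decimating_fir", "sink"))
```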

  4. Scheduling with Dataflow Models
     • Scheduling based on dataflow models may achieve performance improvements with multi-rate processes (example from Will Plishker’s presentation; see the repetitions-vector sketch after this list).
     • SDR at the physical and MAC layers consists mostly of single-rate processes, and may not see significant performance improvement from dataflow-based scheduling.
     • Multicore scheduling is an interesting topic.
     • Currently the assignment of “actors” to processors is done manually.
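As a concrete illustration of why multi-rate graphs matter for scheduling, the sketch below solves the SDF balance equations for a toy decimate-by-4 graph and returns the repetitions vector (how many times each actor fires per iteration of a static schedule). The rates are invented for illustration; this is not the UMD example, and a real scheduler (e.g. in TDP) would also handle ordering and processor assignment.

```python
from fractions import Fraction
from functools import reduce
from math import lcm

# Edges of a small multi-rate SDF graph: (src, dst, produce, consume).
edges = [
    ("src", "fir_decim", 1, 4),   # decimate-by-4 filter
    ("fir_decim", "sink", 1, 1),
]

def repetitions_vector(edges):
    """Solve the SDF balance equations q[src]*produce == q[dst]*consume and
    return the smallest positive integer firing counts.
    Assumes the graph is connected and rate-consistent."""
    q = {edges[0][0]: Fraction(1)}  # seed with the first actor, then propagate
    changed = True
    while changed:
        changed = False
        for src, dst, prod, cons in edges:
            if src in q and dst not in q:
                q[dst] = q[src] * prod / cons
                changed = True
            elif dst in q and src not in q:
                q[src] = q[dst] * cons / prod
                changed = True
    # Scale the rational solution to the smallest integer vector.
    scale = reduce(lcm, (f.denominator for f in q.values()), 1)
    return {a: int(f * scale) for a, f in q.items()}

print(repetitions_vector(edges))  # {'src': 4, 'fir_decim': 1, 'sink': 1}
```

For single-rate graphs the vector is all ones, which is why dataflow-based scheduling buys little at the physical and MAC layers.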

  5. GPU and Multicore
     • Our findings on CUDA:
       • Many specialized library functions are optimized for GPUs.
       • Parallelization has to be implemented manually.
       • The UMD CUDA work (FIR and Turbo decoding) has not been connected to their dataflow work yet.
     • Some considerations:
       • Extend our investigation to OpenCL.
       • Focus on CL modeling for multicore systems.
       • Automatically parallelize certain common DSP operations (e.g. FIR, FFT) from CL models.
       • Operation recognition and rule-based mapping (a sketch follows this list).
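A small sketch of what "operation recognition and rule-based mapping" could do for a FIR: once the scalar loop nest is recognized, it is replaced by a single library call. np.convolve stands in here for a GPU-accelerated convolution in CUDA or OpenCL; the function names are ours, not part of any existing tool.

```python
import numpy as np

def fir_naive(x, taps):
    """Scalar FIR the way it typically appears in primitive-level code."""
    y = np.zeros(len(x))
    for n in range(len(x)):
        acc = 0.0
        for k in range(len(taps)):
            if n - k >= 0:
                acc += taps[k] * x[n - k]
        y[n] = acc
    return y

def fir_mapped(x, taps):
    """What a rule-based mapper could emit after recognizing the loop nest as
    a FIR: one library call (np.convolve standing in for a GPU convolution)."""
    return np.convolve(x, taps)[: len(x)]

x = np.random.randn(1024)
taps = np.array([0.25, 0.5, 0.25])
assert np.allclose(fir_naive(x, taps), fir_mapped(x, taps))
```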

  6. Next Step
     • Beyond rehosting – optimal code generation:
       • c/c++ → (CL model) → SPIRAL
       • c/c++ → (CL model) → CUDA or OpenCL (GPU and multicore)
       • c/c++ → (CL model) → c/c++ using SSE intrinsics
     • CL modeling tasks:
       • At both the primitive level and the waveform level
       • CL modeling from the AST
       • DSP operation (or primitive) recognition (a toy recognizer is sketched below)
       • Code segment extraction, validation and transformation
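A toy sketch of "DSP operation (or primitive) recognition" from an AST: it walks a parsed loop and flags a multiply-accumulate body, the kernel at the heart of a FIR. Python's ast module is used only to keep the example self-contained; the pipeline described above would work on c/c++ ASTs via a C front end, and a real recognizer would also check array indexing and accumulation patterns.

```python
import ast

SOURCE = """
for k in range(K):
    acc += taps[k] * x[n - k]
"""

def looks_like_mac_loop(tree):
    """Very rough recognizer: a for-loop whose body contains an augmented
    '+=' assignment with a multiplication on the right-hand side."""
    for node in ast.walk(tree):
        if isinstance(node, ast.For):
            for stmt in node.body:
                if (isinstance(stmt, ast.AugAssign)
                        and isinstance(stmt.op, ast.Add)
                        and isinstance(stmt.value, ast.BinOp)
                        and isinstance(stmt.value.op, ast.Mult)):
                    return True
    return False

print(looks_like_mac_loop(ast.parse(SOURCE)))  # True
```

A recognized segment would then be extracted, validated against the original semantics, and transformed to the SPIRAL, CUDA/OpenCL, or SSE-intrinsics back end listed above.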
