
Static Image Filtering on Commodity Graphics Processors


Presentation Transcript


  1. Static Image Filtering on Commodity Graphics Processors
  Peter Djeu, May 1, 2003

  2. Filters from Computer Vision
  • Mean (a.k.a. average) filter: each element in the neighborhood is given equal weight; a simple image smoother.
  • Gaussian: the neighborhood is weighted by a 2-D Gaussian with its peak at the center; a better image smoother.
  • Laplacian of Gaussian: the Gaussian filter is applied first, then the Laplacian (a spatial derivative); good for edge detection.
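To make the weighting concrete, here is a minimal C++ sketch (not from the slides; the function name and parameters are assumptions) that builds a normalized 2-D Gaussian kernel. The mean filter is the special case where every weight is simply 1 / (n * n).

```cpp
#include <cmath>
#include <vector>

// Build a normalized n x n Gaussian kernel (n odd) with standard deviation
// sigma; the peak of the 2-D Gaussian sits at the center of the window.
std::vector<float> makeGaussianKernel(int n, float sigma)
{
    std::vector<float> k(n * n);
    const int r = n / 2;                 // window radius
    float sum = 0.0f;
    for (int y = -r; y <= r; ++y)
        for (int x = -r; x <= r; ++x) {
            float w = std::exp(-(x * x + y * y) / (2.0f * sigma * sigma));
            k[(y + r) * n + (x + r)] = w;
            sum += w;
        }
    for (float& w : k) w /= sum;         // normalize so the weights sum to 1
    return k;
}
```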

  3. The Convolution Kernel
  • We want to transmit pixel information from neighbors to a central pixel.
  • Use the convolution kernel as a window to frame the work that needs to be done.
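For reference, a minimal CPU-side sketch of that sliding window (the names and the clamp-to-edge border handling are assumptions, not from the talk):

```cpp
#include <algorithm>
#include <vector>

// Slide the n x n kernel window over a grayscale image (row-major, w x h)
// and accumulate the weighted neighborhood at each pixel.
std::vector<float> convolve(const std::vector<float>& img, int w, int h,
                            const std::vector<float>& kernel, int n)
{
    std::vector<float> out(w * h, 0.0f);
    const int r = n / 2;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float acc = 0.0f;
            for (int ky = -r; ky <= r; ++ky)
                for (int kx = -r; kx <= r; ++kx) {
                    // clamp to the image border for out-of-range neighbors
                    int sx = std::min(std::max(x + kx, 0), w - 1);
                    int sy = std::min(std::max(y + ky, 0), h - 1);
                    acc += kernel[(ky + r) * n + (kx + r)] * img[sy * w + sx];
                }
            out[y * w + x] = acc;        // depends on the whole neighborhood
        }
    return out;
}
```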

  4. Filtering on a CPU vs. a GPU
  • CPU: sequential and straightforward.
  • GPU: not so straightforward if the goal is to exploit parallelism and maintain good locality; a pixel's output value depends on the weighted values of its neighbors, so there is a dependency across various elements.

  5. Pixel Buffers in GPUs
  • GPUs do not have indirect addressing to memory, so results have to be stored in pixel buffers. The card is really rendering to an off-screen frame buffer (writing).
  • The GPU can then treat the pixel buffer as a texture for rendering (reading).
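A rough WGL sketch of those two roles, assuming the WGL_ARB_pbuffer and WGL_ARB_render_texture entry points have already been loaded with wglGetProcAddress (the helper names are mine, not the project's):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

// Entry points assumed to be loaded elsewhere via wglGetProcAddress().
extern PFNWGLCREATEPBUFFERARBPROC   wglCreatePbufferARB;
extern PFNWGLBINDTEXIMAGEARBPROC    wglBindTexImageARB;
extern PFNWGLRELEASETEXIMAGEARBPROC wglReleaseTexImageARB;

// "Write" role: create an off-screen pbuffer the card renders into.
// pixelFormat must have been chosen with WGL_BIND_TO_TEXTURE_RGBA_ARB
// so the pbuffer can later double as a texture.
HPBUFFERARB createRenderTexturePbuffer(HDC hdc, int pixelFormat, int w, int h)
{
    const int attribs[] = {
        WGL_TEXTURE_FORMAT_ARB, WGL_TEXTURE_RGBA_ARB,
        WGL_TEXTURE_TARGET_ARB, WGL_TEXTURE_2D_ARB,
        0
    };
    return wglCreatePbufferARB(hdc, pixelFormat, w, h, attribs);
}

// "Read" role: bind the rendered pbuffer contents as the current 2-D texture.
void drawWithPbufferAsTexture(HPBUFFERARB pbuf, GLuint tex)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    wglBindTexImageARB(pbuf, WGL_FRONT_LEFT_ARB);
    // ... issue geometry that samples this texture ...
    wglReleaseTexImageARB(pbuf, WGL_FRONT_LEFT_ARB);
}
```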

  6. Proposal for the GPU Algorithm
  1. Store the original image into pb1.
  2. For each element ki in the convolution kernel {
  3.   Copy pb1 into pb2, scaling by ki in the process (use a Cg shader).
  4.   Based on the location of ki, render pb2 into pb3 with a certain offset. The blending is a single add.
     }
  5. Return pb3.
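A fixed-function OpenGL sketch of that loop, under stated assumptions: the talk proposes a Cg shader for the scaling step, but here glColor with GL_MODULATE stands in for it (which only covers non-negative weights), and the scale and offset passes are collapsed into a single draw per kernel element.

```cpp
#include <GL/gl.h>

// Assumes the original image (pb1) is bound as the current 2-D texture and
// the accumulation pbuffer (pb3) is the current render target.
void accumulateKernelPasses(const float* kernel, int n, int imgW, int imgH)
{
    const int r = n / 2;
    glEnable(GL_TEXTURE_2D);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);          // "the blending is a single add"
    glClearColor(0.0f, 0.0f, 0.0f, 0.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    for (int ky = -r; ky <= r; ++ky)
        for (int kx = -r; kx <= r; ++kx) {
            float ki = kernel[(ky + r) * n + (kx + r)];
            float du = (float)kx / imgW;  // kernel position -> texcoord offset
            float dv = (float)ky / imgH;
            glColor4f(ki, ki, ki, 1.0f);  // scale this whole pass by ki

            glBegin(GL_QUADS);            // full-screen quad, shifted texcoords
            glTexCoord2f(0.0f + du, 0.0f + dv); glVertex2f(-1.0f, -1.0f);
            glTexCoord2f(1.0f + du, 0.0f + dv); glVertex2f( 1.0f, -1.0f);
            glTexCoord2f(1.0f + du, 1.0f + dv); glVertex2f( 1.0f,  1.0f);
            glTexCoord2f(0.0f + du, 1.0f + dv); glVertex2f(-1.0f,  1.0f);
            glEnd();
        }
}
```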

  7. The Ups and Downs
  • This technique may be fast because:
    • the scaling and blending stages are fully parallelizable
    • since most convolution kernels have symmetry, a little preprocessing could mean fewer distinct passes
  • On the other hand:
    • as image size grows, cache misses may become more prominent, since we manipulate the whole image
    • when translating, coordinates are interpolated, not mapped one-to-one
    • Tiling? Can a good tile size be determined experimentally?
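One possible reading of the symmetry point, sketched in C++ (the grouping helper is my assumption, not from the slides): precompute, for each distinct weight, the offsets that share it, so the "scale pb1 into pb2" pass runs once per weight rather than once per kernel element.

```cpp
#include <map>
#include <utility>
#include <vector>

// Group the offsets of an n x n kernel by their weight value. A symmetric
// kernel has far fewer distinct weights than n*n elements, so the scaled
// copy pb2 can be reused for every offset that shares the same weight.
std::map<float, std::vector<std::pair<int, int>>>
groupOffsetsByWeight(const float* kernel, int n)
{
    std::map<float, std::vector<std::pair<int, int>>> groups;
    const int r = n / 2;
    for (int ky = -r; ky <= r; ++ky)
        for (int kx = -r; kx <= r; ++kx)
            groups[kernel[(ky + r) * n + (kx + r)]].push_back({kx, ky});
    return groups;
}
```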

  8. Current Progress
  • P-Buffers are frustrating
    • e.g., wglReleasePbufferDCARB() has the function-pointer type PFNWGLRELEASEPBUFFERDCARBPROC
  • Lots of low-level implementation and debugging, very close to the hardware
  • The (naïve) CPU implementation is complete and working, and P-Buffers are almost done
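For context, the teardown path those entry points belong to, fetched at runtime as the typedef above suggests (a sketch, assuming <GL/wglext.h> provides the typedefs and a GL context is current):

```cpp
#include <windows.h>
#include <GL/gl.h>
#include <GL/wglext.h>

// Release the pbuffer's device context and then destroy the pbuffer.
void destroyPbuffer(HPBUFFERARB pbuf, HDC pbufDC)
{
    PFNWGLRELEASEPBUFFERDCARBPROC pReleaseDC =
        (PFNWGLRELEASEPBUFFERDCARBPROC) wglGetProcAddress("wglReleasePbufferDCARB");
    PFNWGLDESTROYPBUFFERARBPROC pDestroy =
        (PFNWGLDESTROYPBUFFERARBPROC) wglGetProcAddress("wglDestroyPbufferARB");
    if (pReleaseDC && pDestroy) {
        pReleaseDC(pbuf, pbufDC);   // hand the device context back first
        pDestroy(pbuf);             // then destroy the pbuffer itself
    }
}
```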

  9. Results (wall-clock seconds): CPU, Gaussian filter, RGB, 24-bit Targa images [results chart]

  10. Time (s) versus Kernel Size (elements) [chart]

  11. Time (s) versus Image Size (x * y), using a 31 x 31 kernel [chart]

  12. Applications?
  • Super-fast filtering techniques on 2-D images may provide tools or insight for traditionally more complex problems involving 2-D images, like categorization / classification.
