
Parallel Integration of Video Modules



Presentation Transcript


  1. Parallel Integration of Video Modules T. Poggio, E.B. Gamble, J.J. Little 6.899 Paper Presentation Presenter: Brian Whitman

  2. Overview • Different cues make up a ‘reliable map’ • Edge • Stereo • Color • Motion • How can we integrate these cues to find surface discontinuities?

  3. Architecture

  4. Physical Discontinuities • Depth • Orientation • Albedo Edges • Specular Edges • Shadow Edges

  5. Implementation • The architecture was not fully implemented • Results in integrating brightness with: • Hue • Texture • Motion • Stereo • But separately – not together

  6. Smoothness • Physical processes behind the cues vary slowly: e.g. two adjacent points are unlikely to lie at vastly different depths • Need a representation that captures this

  7. Discontinuities • Cues are assumed smooth everywhere except at discontinuities • Each module needs to • interpolate under the smoothness assumption • detect edges and abrupt changes

  8. Dual Lattices • Circles are sites of the smooth (surface) process; crosses are sites of the line (discontinuity) process

  9. Neighborhoods

  10. Quickly, MRF (again) • Prior probability of depth in the lattice is: • Z: normalization, T is temperature, U is energy (sum of local contributions) • If we know g (observation) use it
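  The equation itself did not survive transcription; the standard Gibbs form matching these bullets would be:
      P(f) = (1/Z) exp( -U(f) / T ),   U(f) = Σ_C U_C(f)
  with Z the normalizing partition function, T the temperature, and U(f) the energy written as a sum of local (clique) contributions U_C.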

  11. Membrane Prior • Prior energy when surface is smooth:
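  A standard membrane-prior energy over neighboring lattice sites, presumably what the slide showed:
      U(f) = Σ_{neighbors i,j} (f_i - f_j)^2
  i.e. the prior energy is low when adjacent values (e.g. depths) are similar.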

  12. Gaussian Process • If we assume a Gaussian process generated the noise:
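  With Gaussian observation noise of variance σ^2, the posterior energy to be minimized takes the usual regularization form (a reconstruction, not a transcription of the slide):
      U(f | g) = Σ_{neighbors i,j} (f_i - f_j)^2 + (1 / 2σ^2) Σ_i (f_i - g_i)^2
  the first term enforcing smoothness, the second fidelity to the observations g.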

  13. Line Process • Where is the smoothness assumption broken? • l_ij: is there a line (discontinuity) between sites i and j? • V_C: different energies for different local line configurations
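  A common way to write the energy with an explicit line process, consistent with these bullets (the slide's exact constants are not recoverable):
      U(f, l) = Σ_{neighbors i,j} (f_i - f_j)^2 (1 - l_ij) + Σ_C V_C(l)
  where l_ij ∈ {0, 1} switches on a discontinuity between sites i and j (turning off the smoothness penalty there) and V_C assigns different energies to different local line configurations.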

  14. Integrated Process • Extend the energy function to tie together vision modules to brightness gradients • Assumption: changes in brightness guide our belief of the source of surface discontinuities

  15. High Brightness Gradients • Instead of energy terms based on line configuration, use strengths of brightness edges
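  One plausible way to write the coupling the slide describes (a sketch, not the paper's exact formulation): replace the configuration-based line energies V_C(l) with a term that depends on the brightness-edge strength e_ij,
      U(f, l | g, e) = Σ (f_i - f_j)^2 (1 - l_ij) + (1 / 2σ^2) Σ (f_i - g_i)^2 + Σ V(l_ij, e_ij)
  where V(l_ij, e_ij) makes it cheap to turn a line l_ij on where the brightness gradient e_ij is strong and expensive where it is weak, so that depth, motion, hue, and texture discontinuities are encouraged to align with intensity edges.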

  16. Low-level Modules • Paper mentions: • Edge detection • Stereo • Motion • Color • Texture • But gives detail only on the texture and color modules.

  17. Texture Module • Measures the local density of texture elements • ‘Blobs’ are extracted with a center-surround filter
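  A rough sketch of such a center-surround ('difference of Gaussians') blob filter, assuming SciPy and hypothetical parameter values; thresholding the response and counting blobs per window would give a local density measure:

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def center_surround(image, sigma_center=1.0, sigma_surround=3.0):
          # Difference of Gaussians: bright blobs give strong positive responses.
          img = np.asarray(image, dtype=np.float64)
          return gaussian_filter(img, sigma_center) - gaussian_filter(img, sigma_surround)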

  18. Color Module • Hue = R/(R+G) • Approximately independent of illumination intensity • The MRF uses this to segment the image into regions of ‘constant reflectance’
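  A minimal sketch of this hue measure (NumPy, hypothetical function name, with a small epsilon added to avoid division by zero in dark pixels):

      import numpy as np

      def hue_map(rgb):
          # Hue = R / (R + G) per pixel; roughly invariant to overall brightness.
          r = rgb[..., 0].astype(np.float64)
          g = rgb[..., 1].astype(np.float64)
          return r / (r + g + 1e-6)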

  19. Original image + brightness edges

  20. Stereo data, MRF generated depth

  21. Motion data, MRF generated flow

  22. Texture data, MRF generated texture regions

  23. Hue, MRF hue segmentation

  24. Parallelizing • The paper spends considerable space on specialized parallel architectures • Many small processes are better suited to mass computation • A ‘specialized experts’ model

  25. More Recent • Recent Mohan, Papageorgiou, Poggio paper: • “Example-Based Object Detection in Images by Components” • Trains an ACC (Adaptive Combination of Classifiers) using different ‘experts’

  26. Conclusions • All extracted surface discontinuities can be used in later stages of image understanding • “Do brightness edges aid human computation of surface discontinuities?” • Parallelizing image analysis…
