
Manifold Models for Signal Acquisition, Compression and Processing. Richard Baraniuk, Mark Davenport, Marco Duarte, Chinmay Hegde (Rice University), Michael Wakin (Colorado School of Mines). Supported by NSF, ONR, ARO, AFOSR, DARPA, Texas Instruments.


Presentation Transcript


  1. Manifold Models for Signal Acquisition, Compression and Processing Richard Baraniuk, Mark Davenport, Marco Duarte, Chinmay Hegde (Rice University), Michael Wakin (Colorado School of Mines). Supported by NSF, ONR, ARO, AFOSR, DARPA, Texas Instruments

  2. Digital Sensing Revolution

  3. Pressure is on Digital Sensors • Success of digital data acquisition is placing increasing pressure on signal/image processing hardware and software to support higher resolution / denser sampling • ADCs, cameras, imaging systems, microarrays, … × large numbers of sensors • image databases, camera arrays, distributed wireless sensor networks, … × increasing numbers of modalities • acoustic, RF, visual, IR, UV, x-ray, gamma ray, …

  4. Pressure is on Digital Sensors • Success of digital data acquisition is placing increasing pressure on signal/image processing hardware and software to support higher resolution / denser sampling • ADCs, cameras, imaging systems, microarrays, … × large numbers of sensors • image databases, camera arrays, distributed wireless sensor networks, … × increasing numbers of modalities • acoustic, RF, visual, IR, UV, … = deluge of sensor data • how to efficiently acquire, fuse, process, communicate?

  5. Sensing by Sampling • Long-established paradigm for digital data acquisition • uniformly sample data at Nyquist rate (2× Fourier bandwidth)

  6. Sensing by Sampling • Long-established paradigm for digital data acquisition • uniformly sample data at Nyquist rate (2× Fourier bandwidth): too much data!

  7. Sensing by Sampling • Long-established paradigm for digital data acquisition • uniformly sample data at Nyquist rate (2× Fourier bandwidth) • compress data using a sparse basis expansion [diagram: sample → compress (JPEG, JPEG2000, …) → transmit/store → receive → decompress]

  8. Sparsity / Compressibility • Sparse: K-term basis expansion yields exact representation • Compressible: K-term basis expansion yields close approximation [figure: image pixels and their large wavelet coefficients (blue = 0)]
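To make the sparsity/compressibility idea concrete, here is a minimal sketch of a K-term approximation in an orthonormal transform. It uses a 2-D DCT from SciPy as a stand-in for the wavelet transform shown on the slide, and the input image is just a random placeholder.

```python
# Hedged sketch: K-term approximation in an orthonormal transform (a 2-D DCT
# stands in for the wavelet transform on the slide). Keeping only the K largest
# coefficients illustrates "sparse/compressible" signals.
import numpy as np
from scipy.fft import dctn, idctn

def k_term_approx(image, K):
    """Keep the K largest-magnitude transform coefficients, zero the rest."""
    coeffs = dctn(image, norm="ortho")
    thresh = np.sort(np.abs(coeffs).ravel())[-K]          # K-th largest magnitude
    coeffs_k = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
    return idctn(coeffs_k, norm="ortho")

image = np.random.rand(64, 64)                            # placeholder image
approx = k_term_approx(image, K=400)
err = np.linalg.norm(image - approx) / np.linalg.norm(image)
print(f"relative error with K=400 of {image.size} coefficients: {err:.3f}")
```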

  9. Sample / Compress • Long-established paradigm for digital data acquisition • uniformly sample data at Nyquist rate • compress data using a sparse basis expansion [diagram: sample → compress (sparse/compressible wavelet transform) → transmit/store → receive → decompress]

  10. What’s Wrong with this Picture? • Why go to all the work to acquire N samples only to discard all but K pieces of data? [diagram: sample → compress (sparse/compressible wavelet transform) → transmit/store → receive → decompress]

  11. What’s Wrong with this Picture? • sample: linear processing, linear signal model (bandlimited subspace) • compress: nonlinear processing, nonlinear signal model (union of subspaces) [diagram: sample → compress (sparse/compressible wavelet transform) → transmit/store → receive → decompress]

  12. Compressive Sensing • Directly acquire “compressed” data • Replace samples by more general “measurements” [diagram: compressive sensing → transmit/store → receive → reconstruct]

  13. Compressive Sensing • When data is sparse/compressible, can directly acquire a condensed representation with no/little information loss • Dimensionality reduction [figure: measurements y = Φx of a sparse signal x with K nonzero entries]

  14. Compressive Sensing • When data is sparse/compressible, can directly acquire a condensed representation with no/little information loss • Random projection will work [figure: measurements y = Φx of a sparse signal x with K nonzero entries] [Candès-Romberg-Tao, Donoho, 2004]
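A minimal sketch of this measurement step, assuming a dense random Gaussian Φ (the slides do not commit to a particular random operator): generate a K-sparse x and compute y = Φx with M much smaller than N.

```python
# Hedged sketch: compressive measurements y = Phi @ x of a K-sparse signal,
# using a random Gaussian M x N matrix with M << N.
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 1024, 128, 10

x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)                 # K nonzero entries

Phi = rng.standard_normal((M, N)) / np.sqrt(M)      # random projection
y = Phi @ x                                         # M measurements, M << N
print(y.shape)                                      # (128,)
```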

  15. Why CS Works • Random projection Φ is not full rank, but it stably embeds signals with concise geometrical structure (with high probability, provided M is large enough) • sparse signal models: x is K-sparse • compressible signal models • Stable embedding: preserves structure • distances between points, angles between vectors, …

  16. Stable Embedding • K-sparse signals live on a union of K-dimensional hyperplanes aligned with the coordinate axes in R^N [figure: union of K-planes]

  17. Stable Embedding • For all K-sparse x1 and x2: (1 - δ) ||x1 - x2||² ≤ ||Φx1 - Φx2||² ≤ (1 + δ) ||x1 - x2||² [figure: K-planes and their images under Φ]
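A quick numerical check of the inequality above, under the assumption that Φ is an M×N Gaussian matrix normalized by 1/√M: for random K-sparse pairs, the squared-distance ratio should concentrate near 1.

```python
# Hedged numerical check of the stable-embedding inequality: for random K-sparse
# pairs x1, x2, the ratio ||Phi(x1 - x2)||^2 / ||x1 - x2||^2 should stay within
# roughly (1 - delta, 1 + delta) for a normalized M x N Gaussian Phi.
import numpy as np

rng = np.random.default_rng(1)
N, M, K = 1024, 128, 10
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

def random_k_sparse():
    x = np.zeros(N)
    x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
    return x

ratios = []
for _ in range(1000):
    d = random_k_sparse() - random_k_sparse()
    ratios.append(np.linalg.norm(Phi @ d) ** 2 / np.linalg.norm(d) ** 2)
print(f"min ratio {min(ratios):.2f}, max ratio {max(ratios):.2f}")
```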

  18. CS Signal Recovery • Recover sparse/compressible signal x from CS measurements y via optimization: min ||x||_1 subject to y = Φx (a linear program) • K-sparse model: union of K-dimensional planes
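A hedged sketch of the recovery step: the l1 minimization min ||x||_1 subject to Φx = y, posed as a linear program via the standard positive/negative split and solved with scipy.optimize.linprog. This illustrates the "linear program" remark on the slide, not the authors' own solver.

```python
# Hedged sketch: basis-pursuit recovery min ||x||_1 s.t. Phi x = y as a linear
# program, by writing x = u - v with u, v >= 0 and minimizing sum(u) + sum(v).
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, M, K = 256, 80, 8
x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.standard_normal(K)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x

c = np.ones(2 * N)                                  # objective: sum(u) + sum(v)
A_eq = np.hstack([Phi, -Phi])                       # Phi @ (u - v) = y
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]
print("relative recovery error:", np.linalg.norm(x_hat - x) / np.linalg.norm(x))
```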

  19. CS Imaging: Single-Pixel Camera [figures: target image (65536 pixels); reconstruction from 1300 measurements (2%); reconstruction from 11000 measurements (16%)]

  20. Stable Embedding • Random projection Φ is not full rank, but it stably embeds signals with concise geometrical structure (with high probability, provided M is large enough) • sparse signal models: x is K-sparse • compressible signal models • Q: What about other concise signal models? • Result: smooth K-dimensional manifolds in R^N

  21. Stable Manifold Embedding Theorem: Let M in R^N be a compact K-dimensional manifold with • condition number 1/τ (curvature, self-avoiding) • volume V

  22. Stable Manifold Embedding Theorem: Let M in R^N be a compact K-dimensional manifold with • condition number 1/τ (curvature, self-avoiding) • volume V Let Φ be a random M×N orthoprojector with M = O( K log(N V / (τ ε)) log(1/ρ) / ε² )

  23. Stable Manifold Embedding Theorem: Let M in R^N be a compact K-dimensional manifold with • condition number 1/τ (curvature, self-avoiding) • volume V Let Φ be a random M×N orthoprojector with M = O( K log(N V / (τ ε)) log(1/ρ) / ε² ) Then with probability at least 1 - ρ, the following statement holds: For every pair x1, x2 in M, (1 - ε) sqrt(M/N) ≤ ||Φx1 - Φx2|| / ||x1 - x2|| ≤ (1 + ε) sqrt(M/N) [Baraniuk and Wakin, FOCM, 2007]

  24. Stable Manifold Embedding Theorem: Let M in R^N be a compact K-dimensional manifold with • condition number 1/τ (curvature, self-avoiding) • volume V Let Φ be a random M×N orthoprojector with M = O( K log(N V / (τ ε)) log(1/ρ) / ε² ) Then with probability at least 1 - ρ, the following statement holds: For every pair x1, x2 in M, (1 - ε) sqrt(M/N) ≤ ||Φx1 - Φx2|| / ||x1 - x2|| ≤ (1 + ε) sqrt(M/N) [Baraniuk and Wakin, FOCM, 2007]

  25. Stable Manifold Embedding Sketch of proof: • construct a sampling of points • on the manifold at fine resolution • from local tangent spaces • apply JLL to these points (concentration of measure) • extend to the entire manifold Implication: Nonadaptive (even random) linear projections can efficiently capture & preserve the structure of a manifold See also: Indyk and Naor, Agarwal et al., Dasgupta and Freund
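The implication can be illustrated numerically. The sketch below samples points from a toy 1-D manifold in R^N (Gaussian pulses parameterized by their shift, an assumption chosen for illustration rather than the slides' image manifolds), projects them with a random Φ scaled like an orthoprojector, and checks that pairwise distances are preserved up to roughly sqrt(M/N).

```python
# Hedged illustration of the theorem on a toy manifold: shifted Gaussian pulses
# in R^N form a 1-D manifold; a random projection should preserve all pairwise
# distances up to a factor close to sqrt(M/N).
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(2)
N, M = 512, 40
t = np.arange(N)
shifts = np.linspace(100, 400, 60)
X = np.exp(-0.5 * ((t[None, :] - shifts[:, None]) / 10.0) ** 2)   # manifold samples

Phi = rng.standard_normal((M, N)) / np.sqrt(N)      # roughly a scaled orthoprojector
ratios = pdist(X @ Phi.T) / pdist(X)                # compressed vs. original distances
ratios /= np.sqrt(M / N)                            # should lie roughly in (1-eps, 1+eps)
print(f"normalized distance ratios: min {ratios.min():.2f}, max {ratios.max():.2f}")
```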

  26. Application: Compressive Detection/Classification via Smashed Filtering

  27. Information Scalability • Many applications involve signal inference and not reconstruction: detection < classification < estimation < reconstruction (in order of increasing computational complexity; full reconstruction requires linear programming)

  28. Information Scalability • Many applications involve signal inference and not reconstruction: detection < classification < estimation < reconstruction • Good news: CS supports efficient learning, inference, processing directly on compressive measurements • Random projections ~ sufficient statistics for signals with concise geometrical structure • Leverages stable embedding of smooth manifolds

  29. Matched Filter • Detection/classification with K unknown articulation parameters • Ex: position and pose of a vehicle in an image • Ex: time delay of a radar signal return • Matched filter: joint parameter estimation and detection/classification • compute sufficient statistic for each potential target and articulation • compare “best” statistics to detect/classify

  30. Matched Filter Geometry • Detection/classification with K unknown articulation parameters • Images are points in R^N • Classify by finding the closest target template to the data for each class (additive white Gaussian noise) • distance or inner product • target templates from a generative model or training data (points in R^N)

  31. Matched Filter Geometry • Detection/classification with K unknown articulation parameters • Images are points in R^N • Classify by finding the closest target template to the data • As the template articulation parameter changes, the points map out a K-dimensional nonlinear manifold • Matched filter classification = closest manifold search [figure: manifold of templates over the articulation parameter space]
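A minimal sketch of matched-filter classification as a closest-manifold (nearest-template) search, using a hypothetical toy example of two pulse shapes articulated by an unknown shift rather than the image templates on the slides.

```python
# Hedged sketch: classify by sweeping each class's templates over the articulation
# parameter (here a 1-D shift) and picking the class whose best-matching template
# is closest to the data.
import numpy as np

def classify(data, template_banks):
    """template_banks: list (one per class) of arrays, each row one articulation."""
    best = [np.min(np.linalg.norm(bank - data, axis=1)) for bank in template_banks]
    return int(np.argmin(best))                     # class with the closest manifold point

rng = np.random.default_rng(3)
N = 256
t = np.arange(N)
def bank(width):                                    # manifold of shifted pulses
    return np.array([np.exp(-0.5 * ((t - s) / width) ** 2) for s in range(40, 216)])

banks = [bank(6.0), bank(14.0)]
data = bank(14.0)[90] + 0.05 * rng.standard_normal(N)   # class 1 at shift 130, noisy
print("predicted class:", classify(data, banks))        # expect 1
```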

  32. Recall: CS for Manifolds • Recall the Theorem: random measurements preserve manifold structure • Enables parameter estimation and MF detection/classification directly on compressive measurements • K very small in many applications

  33. Example: Matched Filter • Detection/classification with K=3 unknown articulation parameters • horizontal translation • vertical translation • rotation

  34. Smashed Filter • Detection/classification with K=3 unknown articulation parameters (manifold structure) • Dimensionally reduced matched filter directly on compressive measurements

  35. Smashed Filter • Random shift and rotation (K=3 dimensional manifold) • Noise added to measurements • Goal: identify the most likely position for each image class and the most likely class using a nearest-neighbor test [plots: avg. shift estimate error and classification rate (%) vs. number of measurements M, for increasing noise levels]
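A hedged sketch of the smashed filter on the same kind of toy pulse templates used above: project both the templates and the noisy data with one random Φ and run the nearest-template search entirely in the M-dimensional measurement space, recovering the class and the articulation (shift) estimate.

```python
# Hedged sketch of a smashed filter: matched filtering performed directly on
# compressive measurements by projecting the templates with the same Phi.
import numpy as np

rng = np.random.default_rng(4)
N, M = 256, 40
t = np.arange(N)
def bank(width):                                    # templates over the shift parameter
    return np.array([np.exp(-0.5 * ((t - s) / width) ** 2) for s in range(40, 216)])
banks = [bank(6.0), bank(14.0)]

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x = banks[0][25] + 0.02 * rng.standard_normal(N)    # class 0 at shift 65, noisy
y = Phi @ x                                         # only M compressive measurements

dists = [np.min(np.linalg.norm(bank @ Phi.T - y, axis=1)) for bank in banks]
print("predicted class:", int(np.argmin(dists)))    # expect 0
best_shift = 40 + int(np.argmin(np.linalg.norm(banks[0] @ Phi.T - y, axis=1)))
print("estimated shift:", best_shift)               # should be close to 65
```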

  36. Application: Compressive Manifold Learning

  37. Manifold Learning • Given training points in R^N, learn the mapping to the underlying K-dimensional articulation manifold • ISOMAP, LLE, HLLE, … • Ex: images of a rotating teapot; articulation space = circle

  38. Compressive Manifold Learning • ISOMAP algorithm is based on geodesic distances between points • Random measurements preserve these distances • Theorem: If M is large enough (as in the stable embedding theorem), then the ISOMAP residual variance in the projected domain is bounded by an additive error factor [Hegde et al., NIPS ’08] [figures: translating disk manifold (K=2); embeddings learned from the full data (N=4096) and from M = 100, 50, 25 random measurements]
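A minimal sketch of compressive manifold learning with scikit-learn's Isomap, assuming a toy manifold of shifted pulses instead of the translating-disk images: the embedding learned from M random projections should track the articulation parameter about as well as the embedding learned from the full data.

```python
# Hedged sketch: run ISOMAP on random projections of the data and compare with
# ISOMAP on the full data for a toy 1-D manifold of shifted pulses.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(5)
N, M = 1024, 50
t = np.arange(N)
shifts = np.linspace(100, 900, 200)
X = np.exp(-0.5 * ((t[None, :] - shifts[:, None]) / 20.0) ** 2)   # points in R^N

Phi = rng.standard_normal((M, N)) / np.sqrt(M)
emb_full = Isomap(n_neighbors=8, n_components=1).fit_transform(X)
emb_comp = Isomap(n_neighbors=8, n_components=1).fit_transform(X @ Phi.T)

# both embeddings should recover the shift parameter up to sign and scale,
# so both correlation magnitudes should be close to 1
print(np.abs(np.corrcoef(emb_full[:, 0], shifts)[0, 1]),
      np.abs(np.corrcoef(emb_comp[:, 0], shifts)[0, 1]))
```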

  39. Application: Scalable Data Fusion

  40. Multisensor Inference • Example: Network of J cameras observing an articulating object • Each camera’s images lie on a K-dimensional manifold in R^N • How to efficiently fuse imagery from J cameras to solve an inference problem while minimizing network communication?

  41. Multisensor Fusion • Fusion: stack corresponding image vectors taken at the same time • Fused images still lie on a K-dimensional manifold in R^{JN}: the “joint manifold”

  42. Joint Manifolds • Given J submanifolds M_1, …, M_J • each K-dimensional • homeomorphic (we can continuously map between any pair) • Define the joint manifold as the concatenation of M_1, …, M_J • Example: joint articulation
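A small sketch of the joint-manifold construction, with hypothetical per-camera views (each camera sees the same shift articulation through a different gain and blur, an assumption made only for illustration): stacking the J views of one articulation gives a point on the joint manifold in R^{JN}.

```python
# Hedged illustration: each of J sensors observes its own K-dimensional manifold
# (here K = 1, a shift); concatenating the J corresponding views forms one point
# on the joint manifold in R^{J*N}.
import numpy as np

J, N = 4, 256
t = np.arange(N)

def view(j, theta):
    """Image seen by camera j when the object is at articulation theta (a shift)."""
    return (1.0 + 0.3 * j) * np.exp(-0.5 * ((t - theta) / (8.0 + 2.0 * j)) ** 2)

def joint_point(theta):
    """Concatenate the J views: a point on the joint manifold in R^{J*N}."""
    return np.concatenate([view(j, theta) for j in range(J)])

p = joint_point(120.0)
print(p.shape)                                      # (1024,) = (J * N,)
```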

  43. Joint Manifolds: Properties • Joint manifold inherits properties from the component manifolds • compactness • smoothness • volume • condition number (1/τ) • These translate into algorithm performance gains • Bounds are often loose in practice

  44. Manifold Learning via Joint Manifolds • Goal: Learn the embedding of a 2D translating ellipse (with noise) • N = 45×45 pixels • J = 20 views at different angles

  45. Manifold Learning via Joint Manifolds • Goal: Learn the embedding of a 2D translating ellipse (with noise) • N = 45×45 pixels • J = 20 views • Embeddings learned separately • Embedding learned jointly

  46. Manifold Learning via JM+CS • Goal: Learn the embedding via random compressive measurements • N = 45×45 pixels • J = 20 views • M = 100 measurements per view • Embeddings learned separately • Embedding learned jointly

  47. Multisensor Fusion via JM+CS • Can take random CS measurements of the stacked images and process or make inferences directly • compare with unfused CS and with unfused, non-CS processing

  48. Multisensor Fusion via JM+CS • Can compute CS measurements in-place • ex: as we transmit to collection/processing point
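A minimal sketch of the in-place computation, assuming each camera applies its own block of a dense random matrix (the slides do not specify the measurement operator, and the image size here is shrunk for illustration): since Φ acts on the stacked vector, y equals the sum of the per-camera measurements Φ_j x_j, so partial sums can be accumulated as data is relayed toward the collection point.

```python
# Hedged sketch: CS measurements of the stacked (joint) image computed "in place".
# Because Phi acts on the concatenation, y = Phi @ stack(x_1..x_J) = sum_j Phi_j @ x_j,
# so each camera applies its own block locally and partial sums add up in transit.
import numpy as np

rng = np.random.default_rng(6)
J, N, M = 3, 64 * 64, 200                           # small N for illustration
X = [rng.random(N) for _ in range(J)]               # one image per camera
Phi_blocks = [rng.standard_normal((M, N)) / np.sqrt(M) for _ in range(J)]

# centralized: measure the stacked vector with the full M x (J*N) matrix
y_central = np.hstack(Phi_blocks) @ np.concatenate(X)

# distributed: each camera computes Phi_j @ x_j; partial sums accumulate en route
y = np.zeros(M)
for Phi_j, x_j in zip(Phi_blocks, X):
    y += Phi_j @ x_j                                # local measurement, added in transit
print(np.allclose(y, y_central))                    # True
```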

  49. Simulation Results • J = 3 CS cameras, each with N = 320×240 resolution • M = 200 random measurements per camera • Two classes: truck w/ cargo, truck w/ no cargo • Goal: classify a test image [figures: example images of class 1 and class 2]

  50. Simulation Results • J = 3 CS cameras, each with N = 320×240 resolution • M = 200 random measurements per camera • Two classes: truck w/ cargo, truck w/ no cargo • Smashed filtering: independent, majority vote, joint manifold [figure: classification performance comparison]
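A hedged sketch of the three decision rules listed above, with made-up per-camera smashed-filter distances d[j, c, a] to the class-c template at candidate articulation a; the joint-manifold rule sums squared distances across cameras before minimizing over a shared articulation.

```python
# Hedged sketch of the three fusion rules: independent per-camera decisions,
# majority vote over those decisions, and a joint-manifold decision in which all
# cameras share one articulation. Distances here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(7)
# d[j, c, a]: camera j's distance to class c's template at articulation a
d = rng.random((3, 2, 5)) + np.array([0.0, 0.4])[None, :, None]   # class 0 slightly closer

independent = d.min(axis=2).argmin(axis=1)          # per camera: best articulation, then class
majority = np.bincount(independent, minlength=2).argmax()

# joint manifold: cameras must agree on one articulation, so sum squared distances
# across cameras first, then minimize over articulation and class
joint = (d ** 2).sum(axis=0).min(axis=1).argmin()

print("independent decisions:", independent)
print("majority vote:", majority, " joint manifold:", joint)
```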
