

  1. Lecture 2: Math & Probability Background, ch. 1-2 of Machine Vision by Wesley E. Snyder & Hairong Qi. Spring 2014 BioE 2630 (Pitt) : 16-725 (CMU RI) : 18-791 (CMU ECE) : 42-735 (CMU BME). Dr. John Galeotti & Prahlad G. Menon

  2. Two themes in Text Book • Consistency • A conceptual tool implemented in many/most algorithms • Often must fuse information from local measurements to make global conclusions about the image • Is the fused input information conflicting? • Optimization • Mathematical mechanism used to solve matrix equations • The “workhorse” of machine vision

  3. Image Processing Topics • Enhancement • Coding • Compression • Restoration • “Fix” an image • Requires model of image degradation • Reconstruction

  4. Machine Vision Topics (our focus) • Pipeline: Original Image → Feature Extraction → Classification & Further Analysis • AKA: computer vision, image analysis, image understanding • Pattern recognition: measurement of features; features characterize the image, or some part of it • Pattern classification: requires knowledge about the possible classes

  5. [Course roadmap diagram (Ch. 6-7, Ch. 8, Ch. 9, Ch. 10-11, Ch. 12-16): from the Original Image through Restoration / Noise removal (varies greatly), Segmentation, Shape Analysis, Consistency Analysis, Matching, and Feature measurement.]

  6. Linear Operations • What is a linear operation? How to represent it as a matrix operator? • Rules for basic matrix multiplication • Matrix inverse & unitary matrices • What is linear independence? What is a basis? • What is an orthonormal basis set? Why is it useful? • What is function minimization? Why is it useful? • Examples of useful linear operations / operators • What is an inverse linear transform? Example.

  7. Linear algebra • Unit vector: ||x|| = 1 • Orthogonal vectors: x^T y = 0 • Orthogonal square matrices: A^T = A^(-1) • Orthonormal: orthogonal unit vectors • Inner product of continuous functions: ⟨f, g⟩ = ∫ f(x) g(x) dx • Orthogonality & orthonormality apply here too
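
A minimal NumPy sketch of these definitions; the vectors, the rotation matrix, and the sin/cos pair are illustrative choices, not from the lecture:

```python
import numpy as np

x = np.array([1.0, 0.0])
y = np.array([0.0, 1.0])
print(np.linalg.norm(x))                    # unit vector: ||x|| = 1
print(x @ y)                                # orthogonal vectors: x^T y = 0

A = np.array([[0.0, -1.0],
              [1.0,  0.0]])                 # a rotation is an orthogonal matrix
print(np.allclose(A.T, np.linalg.inv(A)))   # A^T = A^(-1)

# Inner product of continuous functions, approximated on a grid:
t = np.linspace(0.0, 2.0 * np.pi, 1000)
f, g = np.sin(t), np.cos(t)
print(np.trapz(f * g, t))                   # ~0: sin and cos are orthogonal on [0, 2*pi]
```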

  8. Linear transforms • For a matrix A • Quadratic form: x^T A x • Positive definite: applies to A if x^T A x > 0 for all x ≠ 0

  9. Linear independence • No one vector is a linear combination of the others: x_j ≠ Σ_i a_i x_i for any choice of the a_i across all i ≠ j • Any linearly independent set of d vectors {x_i, i = 1…d} is a basis set that spans the space ℝ^d • Any other vector in ℝ^d may be written as a linear combination of {x_i} • Often convenient to use orthonormal basis sets • Projection: if y = Σ_i a_i x_i then a_i = y^T x_i (for an orthonormal basis)
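
A small sketch of projection onto an orthonormal basis, using an assumed basis of ℝ^2 rather than anything from the slides:

```python
import numpy as np

x1 = np.array([1.0,  1.0]) / np.sqrt(2)
x2 = np.array([1.0, -1.0]) / np.sqrt(2)   # {x1, x2} is an orthonormal basis of R^2

y = np.array([3.0, 5.0])
a1, a2 = y @ x1, y @ x2                   # projection: a_i = y^T x_i
print(np.allclose(a1 * x1 + a2 * x2, y))  # y = sum_i a_i x_i  -> True
```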

  10. Vector norms
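
The slide's formulas did not survive in the transcript; as a standard recap (not necessarily the slide's exact list), the commonly used vector norms are:

```latex
\|x\|_1 = \sum_i |x_i|, \qquad
\|x\|_2 = \Big(\sum_i x_i^2\Big)^{1/2}, \qquad
\|x\|_\infty = \max_i |x_i|, \qquad
\|x\|_p = \Big(\sum_i |x_i|^p\Big)^{1/p}.
```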

  11. Examples of linear operators • Rotation, translation, scaling – spatial transforms • Convolution • Wolfram's definition of a linear operator / transformation: L is linear if L(ax + by) = a L(x) + b L(y) for all vectors x, y and scalars a, b

  12. The Convolution Operator – Example… • Simple example of convolution of an input image (matrix S) and an impulse response (kernel h) in the 2D spatial domain: Y[n] = (S * h)[n] • Notice that the kernel matrix is flipped in both the horizontal and vertical directions before multiplying the overlapped input data.
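
A small sketch of this kernel-flipping view of 2D convolution; the image S and kernel h below are made-up examples, and SciPy is assumed to be available:

```python
import numpy as np
from scipy.signal import convolve2d
from scipy.ndimage import correlate

S = np.arange(25, dtype=float).reshape(5, 5)   # input "image"
h = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 0.0],
              [0.0, 0.0, 4.0]])                # impulse response (kernel), asymmetric on purpose

Y = convolve2d(S, h, mode='same')              # convolution flips h internally

# Equivalent view: correlation with the kernel flipped horizontally and vertically.
h_flipped = h[::-1, ::-1]
Y2 = correlate(S, h_flipped, mode='constant', cval=0.0)
print(np.allclose(Y, Y2))                      # True
```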

  13. Eigenvalues & Eigenvectors • Let A be an n x n matrix • The number λ is an eigenvalue of A if there exists a non-zero vector v such that Av = λv (λ solves the characteristic equation det(A − λI) = 0) • In this case, vector v is called an eigenvector of A corresponding to λ • What does this mean physically? A maps v onto itself with only a change in length.
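
A quick numerical check of Av = λv; the matrix here is an arbitrary example, not from the lecture:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigvals, eigvecs = np.linalg.eig(A)       # columns of eigvecs are the eigenvectors v

for lam, v in zip(eigvals, eigvecs.T):
    print(np.allclose(A @ v, lam * v))    # True: A only rescales v
```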

  14. Derivatives • Of a scalar function of x: • Called the gradient • Really important! • Of a vector function of x • Called the Jacobian • Hessian = matrix of 2nd derivatives of a scalar function
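
A hedged sketch of finite-difference gradients and Jacobians matching these definitions; the test functions and evaluation point are illustrative, not from the lecture:

```python
import numpy as np

def grad(f, x, eps=1e-6):
    """Numerical gradient of a scalar function f at point x."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def jacobian(F, x, eps=1e-6):
    """Numerical Jacobian of a vector function F at point x."""
    F0 = F(x)
    J = np.zeros((F0.size, x.size))
    for i in range(x.size):
        e = np.zeros_like(x); e[i] = eps
        J[:, i] = (F(x + e) - F(x - e)) / (2 * eps)
    return J

f = lambda x: x[0]**2 + 3 * x[1]                      # scalar function of x
F = lambda x: np.array([x[0] * x[1], np.sin(x[0])])   # vector function of x
x0 = np.array([1.0, 2.0])
print(grad(f, x0))        # ~[2, 3]
print(jacobian(F, x0))    # ~[[2, 1], [cos(1), 0]]
```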

  15. Linear INVERSE problems – de-blurring an image. Given measured data g and information about the system H and the noise model, recover an estimate f̃ that closely approximates f. When H is square, solve g = Hf via f = H^(-1) g directly (exactly), or via Fourier Transform methods. E.g., Lucy–Richardson deconvolution is an iterative procedure for recovering a latent image that has been blurred by a known point spread function.
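
A minimal sketch of the Fourier-domain route when H is a circular convolution; the 5x5 box-blur PSF, the random test image, and the regularization constant eps are all assumptions for illustration (this is a naive regularized inverse filter, not Lucy–Richardson):

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.random((64, 64))                       # "true" image (synthetic)

psf = np.zeros((64, 64))
psf[:5, :5] = 1.0 / 25.0                       # known 5x5 box-blur point spread function

H = np.fft.fft2(psf)                           # the blur is diagonalized by the FFT
g = np.real(np.fft.ifft2(H * np.fft.fft2(f)))  # measured data: g = H f (circular blur)

eps = 1e-3                                     # regularization: avoid dividing by ~0 frequencies
F_hat = np.fft.fft2(g) * np.conj(H) / (np.abs(H) ** 2 + eps)
f_hat = np.real(np.fft.ifft2(F_hat))           # estimate of f

# Residual error: frequencies the blur nearly destroyed are not recoverable.
print(np.abs(f_hat - f).max())
```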

  16. Least Squares Fitting – solving as a linear system (assignment). What kind of basis is ideal for A here?
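
One way this setup might look, sketched with synthetic data (the line model and noise level are assumptions): solve A x ≈ b by least squares.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 20)
b = 2.0 + 3.0 * t + 0.05 * np.random.default_rng(1).standard_normal(t.size)

A = np.column_stack([np.ones_like(t), t])      # columns of A = basis functions
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
print(x)                                       # ~[2, 3]

# Equivalent normal equations: (A^T A) x = A^T b.  An orthonormal column basis
# would make A^T A = I, so x would simply be A^T b.
x_ne = np.linalg.solve(A.T @ A, A.T @ b)
print(np.allclose(x, x_ne))                    # True
```

The comment on the normal equations hints at the slide's question: an orthonormal basis for the columns of A makes the system trivial to solve.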

  17. Function minimization • Find the vector x which produces a minimum of some function f(x) • x is a parameter vector • f(x) is a scalar function of x, the “objective function” • The minimum value of f is denoted: f(x̂) = min_x f(x) • The minimizing value of x is denoted: x̂ = argmin_x f(x)

  18. Other Numerical minimization techniques • Gradient descent • The derivative points away from the minimum • Take small steps, each one in the “down-hill” direction • Local vs. global minima • Combinatorial optimization: • Use simulated annealing • Image optimization: • Use mean field annealing
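
A minimal gradient-descent sketch matching the bullets above; the objective function, step size, and starting point are made-up examples:

```python
import numpy as np

f      = lambda x: (x[0] - 1.0)**2 + 10.0 * (x[1] + 2.0)**2
grad_f = lambda x: np.array([2.0 * (x[0] - 1.0), 20.0 * (x[1] + 2.0)])

x = np.array([5.0, 5.0])       # starting guess
step = 0.04                    # small steps
for _ in range(500):
    x = x - step * grad_f(x)   # move against the gradient ("down-hill")

print(x)                       # ~[1, -2], the minimum (here also the global one)
```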

  19. Fourier Orthonormal Basis • Many imaging problems are easier to understand if we choose the building blocks (Φ_k) carefully • Orthogonality: ⟨Φ_k, Φ_l⟩ = Σ_n Φ*_k[n] Φ_l[n] = 0 if k ≠ l • Then the expansion (1/N) A c = s is inverted simply as c = A* s: the coefficients c[k] are the Discrete Fourier Transform of the data s[n].

  20. Orthonormal Expansion • Relate the problem of expanding these approximations to solving Ax = b • Now x holds the expansion coefficients, b is an image, and A = [Φ_1, Φ_2, …], where each Φ_i is a column vector representing an orthonormal basis element • When the columns of A are orthonormal: A*A = I, so x = A*b (which reduces to x = A^T b for a real basis) • b[p] = Σ_k x_k A_{p,k}, or s[p] = Σ_k c_k Φ_k[p], where the c_k may be termed Fourier coefficients if Φ_k refers to the Fourier basis • Φ_k[n] = e^(j2πkn/N) = cos(2πkn/N) + j sin(2πkn/N), and ⟨Φ_k, Φ_k⟩ = N for the Fourier basis.
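
A sketch tying this expansion to the DFT: build A from the Fourier basis vectors Φ_k[n] = e^(j2πkn/N) and check that c = A*s matches NumPy's FFT (the test signal is arbitrary):

```python
import numpy as np

N = 8
n = np.arange(N)
A = np.exp(2j * np.pi * np.outer(n, n) / N)   # column k is Phi_k[n] = e^(j*2*pi*k*n/N)

s = np.random.default_rng(2).standard_normal(N)
c = A.conj().T @ s                            # c = A* s  (the DFT coefficients)
print(np.allclose(c, np.fft.fft(s)))          # True

s_back = (A @ c) / N                          # s = (1/N) A c  (the inverse transform)
print(np.allclose(s_back, s))                 # True
```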

  21. Many useful properties of the Fourier basis… Interesting fact about k-space: MRI data is acquired in frequency space (or Fourier space) and then converted back to image space.

  22. Probability • Probability of an event a occurring: • Pr(a) • Independence • Pr(a) does not depend on the outcome of event b, and vice-versa • Joint probability • Pr(a,b) = Prob. of both a and b occurring • Conditional probability • Pr(a|b) = Prob. of a if we already know the outcome of event b • Read “probability of a given b”
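
These definitions connect through the standard identities (a recap, not from the slide):

```latex
\Pr(a \mid b) = \frac{\Pr(a, b)}{\Pr(b)}, \qquad
\Pr(a, b) = \Pr(a \mid b)\,\Pr(b), \qquad
\text{independence: } \Pr(a, b) = \Pr(a)\,\Pr(b).
```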

  23. Probability for continuously-valued functions • Probability distribution function: P(x) = Pr(z < x) • Probability density function: p(x) = dP(x)/dx • Expected value: E[x] = ∫ x p(x) dx

  24. Markov models • For temporal processes: • The probability of something happening is dependent on a thing that just recently happened. • For spatial processes • The probability of something being in a certain state is dependent on the state of something nearby. • Example: The value of a pixel is dependent on the values of its neighboring pixels.

  25. Markov chain • Simplest Markov process: random transitions among a finite set of states, with determinable initial probabilities and transition probabilities • Example: symbols transmitted one at a time. What is the probability that the next symbol will be w? • For a Markov chain: “The probability conditioned on all of history is identical to the probability conditioned on the last symbol received,” i.e. Pr(x_t | x_{t-1}, x_{t-2}, …) = Pr(x_t | x_{t-1})
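
A toy Markov-chain sketch; the symbol alphabet, initial probabilities, and transition matrix are invented for illustration:

```python
import numpy as np

symbols = ['a', 'w', 'z']
pi0 = np.array([0.5, 0.3, 0.2])          # initial probabilities
P = np.array([[0.7, 0.2, 0.1],           # P[i, j] = Pr(next = j | current = i)
              [0.3, 0.4, 0.3],
              [0.5, 0.1, 0.4]])

# Probability that the next symbol is 'w' given only the last symbol 'a'
# (conditioning on all of history gives the same answer -- Markov property):
print(P[symbols.index('a'), symbols.index('w')])   # 0.2

# Generate a short symbol sequence from the chain.
rng = np.random.default_rng(3)
state = rng.choice(len(symbols), p=pi0)
seq = [symbols[state]]
for _ in range(9):
    state = rng.choice(len(symbols), p=P[state])
    seq.append(symbols[state])
print(''.join(seq))
```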

  26. Hidden Markov models (HMMs)

  27. HMM switching • Governed by a finite state machine (FSM) • [Diagram: the FSM switches between the output of a 1st process and the output of a 2nd process; the HMM generates the observation sequence.]

  28. The HMM Task • Given only the output f(t), determine: • The most likely state sequence of the switching FSM • Use the Viterbi algorithm (sketched below) • Computational complexity = (# state changes) × (# state values)^2 • Much better than brute force, which = (# state values)^(# state changes) • The parameters of each hidden Markov model • Use the iterative process in the book • Better, use someone else’s debugged code that they’ve shared
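
A compact, log-domain Viterbi sketch for this task; the two-state model parameters and the observation sequence are toy values, not course data:

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Most likely hidden-state sequence for observations `obs`.
    log_pi[i]   : initial log-probability of state i
    log_A[i, j] : log-probability of transition i -> j
    log_B[i, o] : log-probability of emitting observation o from state i
    """
    n_states = log_pi.size
    T = len(obs)
    delta = np.full((T, n_states), -np.inf)   # best log-prob of a path ending in each state
    back = np.zeros((T, n_states), dtype=int)

    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):                     # cost: (# state changes) * (# state values)^2
        for j in range(n_states):
            scores = delta[t - 1] + log_A[:, j]
            back[t, j] = np.argmax(scores)
            delta[t, j] = scores[back[t, j]] + log_B[j, obs[t]]

    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):             # backtrack along the stored predecessors
        path.append(back[t, path[-1]])
    return path[::-1]

pi = np.array([0.6, 0.4])
A = np.array([[0.9, 0.1],
              [0.2, 0.8]])
B = np.array([[0.8, 0.2],
              [0.3, 0.7]])
obs = [0, 0, 1, 1, 1, 0]
print(viterbi(obs, np.log(pi), np.log(A), np.log(B)))
```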

  29. Markov models for Image Segmentation http://www.stat.ucla.edu/~sczhu/papers/DDMCMC_reprint.pdf Code: http://pages.ucsd.edu/~ztu/Download_ddmcmc.htm
