
Image Modeling & Segmentation


Presentation Transcript


  1. Image Modeling & Segmentation. Aly Farag and Asem Ali, Lecture #3

  2. Parametric methods
  • These methods are useful when the underlying distribution is known in advance, or is simple enough to be modeled by a simple distribution function or a mixture of such functions.
  • The parametric model is very compact (low memory and CPU usage), since only a few parameters need to be fitted.
  • The model's parameters are estimated using methods such as maximum likelihood estimation, Bayesian estimation, and expectation maximization.
  • A location parameter simply shifts the graph left or right on the horizontal axis.
  • A scale parameter stretches (> 1) or compresses (< 1) the pdf.
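The last two bullets can be summarized by a standard identity (not reproduced on the slide): a density with location parameter μ and scale parameter σ is obtained from a standard density f0 by shifting and rescaling.

```latex
% Location-scale family built from a standard density f_0:
% changing \mu shifts the graph; \sigma > 1 stretches it, \sigma < 1 compresses it.
f(x;\mu,\sigma) = \frac{1}{\sigma}\, f_0\!\left(\frac{x-\mu}{\sigma}\right)
```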

  3. Parametric methods (1- Maximum Likelihood Estimator: MLE)
  • Suppose that n samples x1, x2, …, xn are drawn independently and identically distributed (i.i.d.) from a distribution φ(θ) with parameter vector θ = (θ1, …, θr).
  • Known: the data samples and the distribution type. Unknown: θ.
  • The MLE method estimates θ by maximizing the log-likelihood of the data. We write p(x | θ) to show the dependence of p on θ explicitly; by the i.i.d. assumption the likelihood factorizes over the samples, and by monotonicity of the logarithm, maximizing the log-likelihood is equivalent to maximizing the likelihood itself.
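The slide's equations are not reproduced in the transcript; in the notation above, the log-likelihood and the maximum-likelihood estimate take the standard form:

```latex
% Log-likelihood of the i.i.d. sample and the maximum-likelihood estimate:
L(\theta) \;=\; \ln p(x_1,\dots,x_n \mid \theta)
          \;=\; \sum_{k=1}^{n} \ln p(x_k \mid \theta),
\qquad
\hat{\theta}_{\mathrm{MLE}} \;=\; \arg\max_{\theta} L(\theta)
```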

  4. Parametric methods (1- Maximum Likelihood Estimator: MLE)
  • Let l(θ) be the log-likelihood of the data.
  • Then calculate its gradient with respect to θ.
  • Find θ by setting this gradient to zero and solving.
  • Coin example ………………..
  • In some cases we can find a closed form for θ. Example: suppose that n samples x1, x2, …, xn are drawn independently and identically distributed (i.i.d.) from a 1D N(μ, σ); find the MLE of μ and σ. MATLAB demo (a Python sketch of the same computation follows below).
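The original MATLAB demo is not included in the transcript; the following is a minimal Python/NumPy sketch of the same idea (the sample sizes and true parameter values are illustrative assumptions): draw i.i.d. samples from a 1D normal distribution and evaluate the closed-form MLEs of μ and σ.

```python
import numpy as np

# Minimal sketch (not the original MATLAB demo): closed-form MLE of mu and
# sigma for 1D Gaussian data. mu_mle is the sample mean; sigma_mle divides
# by n (not n - 1), which is what makes it biased (see slide 5).
rng = np.random.default_rng(0)
n, mu_true, sigma_true = 1000, 2.0, 1.5
x = rng.normal(mu_true, sigma_true, size=n)      # i.i.d. samples ~ N(mu, sigma)

mu_mle = x.mean()                                # closed-form MLE of mu
sigma_mle = np.sqrt(np.mean((x - mu_mle) ** 2))  # closed-form MLE of sigma

print(f"mu_mle = {mu_mle:.3f}, sigma_mle = {sigma_mle:.3f}")
```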

  5. Parametric methods (1- Maximum Likelihood Estimator: MLE)
  • An estimator of a parameter is unbiased if the expected value of the estimate is the same as the true value of the parameter. Example: the sample mean as an estimator of μ.
  • An estimator of a parameter is biased if the expected value of the estimate is different from the true value of the parameter. Example: the MLE of the variance, which divides by n rather than n − 1; the difference becomes negligible once n is large.
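To make the bias statement concrete (a standard calculation, assuming the slide's examples are the sample mean and the MLE of the variance), with x̄ the sample mean:

```latex
% The sample mean is unbiased, while the MLE of the variance is biased by a
% factor (n-1)/n; dividing by n-1 instead of n removes the bias.
\mathbb{E}[\bar{x}] = \mu, \qquad
\mathbb{E}\!\left[\frac{1}{n}\sum_{k=1}^{n}(x_k-\bar{x})^2\right]
  = \frac{n-1}{n}\,\sigma^2, \qquad
\mathbb{E}\!\left[\frac{1}{n-1}\sum_{k=1}^{n}(x_k-\bar{x})^2\right] = \sigma^2
```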

  6. Parametric methods
  What if there are distinct subpopulations in the observed data?
  Example
  • In 1894, Pearson tried to model the distribution of the ratio between forehead and body-length measurements on crabs.
  • He used a two-component mixture.
  • It was hypothesized that the two-component structure was related to the possibility of this particular population of crabs evolving into two new subspecies.
  Mixture model
  • The underlying density is assumed to be a weighted sum of component densities: the components of the mixture are densities parameterized by θj, and the weights αj are constrained to sum to one (see the formula below).
  • What is the difference between a mixture model and the kernel-based estimator?
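The mixture-density formula itself is not reproduced in the transcript; the standard M-component form is:

```latex
% M-component mixture: the components p_j are densities parameterized by
% \theta_j, and the weights \alpha_j are non-negative and sum to one.
p(x \mid \Theta) = \sum_{j=1}^{M} \alpha_j\, p_j(x \mid \theta_j),
\qquad \alpha_j \ge 0, \quad \sum_{j=1}^{M} \alpha_j = 1
```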

  7. Parametric methods
  Example
  • Given n samples {xi, Ci1, Ci2} (complete data) drawn i.i.d. from two normal distributions:
  • xi is the observed value of the ith instance.
  • Ci1 and Ci2 indicate which of the two normal distributions was used to generate xi: Cij = 1 if the jth distribution was used to generate xi, and 0 otherwise.
  • MLE: with the indicators known, the parameters of each component can be estimated directly (see the sketch below).
  • How can we estimate the parameters given incomplete data (when we don't know Ci1 and Ci2)?
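The complete-data MLE on the slide is not reproduced in the transcript; for this two-Gaussian example, with the indicators Cij observed, the estimate of each component mean reduces to the per-component sample mean (a standard result):

```latex
% Complete-data MLE of the j-th component mean: average the samples that the
% indicators assign to component j.
\hat{\mu}_j = \frac{\sum_{i=1}^{n} C_{ij}\, x_i}{\sum_{i=1}^{n} C_{ij}},
\qquad j = 1, 2
```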

  8. Parametric methods (2- Expectation Maximization: EM)
  • The EM algorithm is a general method for finding the maximum-likelihood estimate of the parameters of an underlying distribution from a given data set when the data is incomplete or has missing values.
  EM algorithm:
  • Given initial parameters Θ0,
  • repeatedly re-estimate the expected values of the hidden binary variables Cij,
  • then recalculate the MLE of Θ using these expected values for the hidden variables.
  Note:
  • EM is an unsupervised method, whereas MLE with complete data (as above) is supervised.
  • To use EM you must know: the number of classes K, and the parametric form of the distribution.
  (A runnable Python sketch of this loop for a two-component Gaussian mixture appears after slide 14 below.)

  9. Illustrative example: figures contrasting complete data vs. incomplete data.

  10. Illustrative example

  11. Parametric methods (2- Expectation Maximization: EM)
  • Assume a joint density function for the complete data set, p(x, C | Θ).
  • The EM algorithm first finds the expected value of the complete-data log-likelihood with respect to the unknown data C, given the observed data x and the current parameter estimates Θ(i−1). In the resulting function Q(Θ, Θ(i−1)), Θ(i−1) denotes the current parameter estimates used to evaluate the expectation, and Θ denotes the new parameters that we optimize to maximize Q.
  • The evaluation of this expectation is called the E-step of the algorithm.
  • The second step (the M-step) of the EM algorithm is to maximize the expectation we computed in the first step.
  • These two steps are repeated as necessary. Each iteration is guaranteed not to decrease the log-likelihood, and the algorithm is guaranteed to converge to a local maximum of the likelihood function.
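In symbols, the expected complete-data log-likelihood computed in the E-step has the standard form matching the description above:

```latex
% Q is the expectation of the complete-data log-likelihood over the hidden
% data C, conditioned on the observed data x and the current estimates.
Q\!\left(\Theta, \Theta^{(i-1)}\right) =
  \mathbb{E}\!\left[\, \ln p(x, C \mid \Theta) \;\middle|\; x,\ \Theta^{(i-1)} \right]
```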

  12. Parametric methods (2- Expectation Maximization: EM)
  • The mixture-density parameter estimation problem.
  • E-step: using Bayes' rule, we can compute the posterior probability that component j generated sample xi, given the current parameter estimates (see the formula below).
  • These posterior probabilities then enter the expected complete-data log-likelihood, which is maximized in the M-step.
  • Grades example ………………..
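The Bayes'-rule computation referred to above has the standard form (the slide's own equation is not reproduced in the transcript):

```latex
% E-step responsibility: posterior probability that component j generated x_i,
% evaluated at the current parameter estimates \Theta^{(i-1)}.
p\!\left(j \mid x_i, \Theta^{(i-1)}\right) =
  \frac{\alpha_j^{(i-1)}\, p_j\!\left(x_i \mid \theta_j^{(i-1)}\right)}
       {\sum_{k=1}^{M} \alpha_k^{(i-1)}\, p_k\!\left(x_i \mid \theta_k^{(i-1)}\right)}
```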

  13. Parametric methods (2- Expectation Maximization: EM)
  • For some distributions, it is possible to obtain analytical expressions for the parameter updates.
  • For example, if we assume d-dimensional Gaussian component distributions, both the E-step (the posterior probabilities above) and the M-step (the weights, means, and covariances) have closed forms (see below).
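For Gaussian components, the closed-form M-step updates are standard; they are written here with the responsibilities p(j | xi) from the E-step above, since the slide's own equations are not reproduced in the transcript:

```latex
% M-step for d-dimensional Gaussian components: update weights, means, and
% covariances using the responsibilities as soft counts.
\alpha_j^{\mathrm{new}} = \frac{1}{n}\sum_{i=1}^{n} p(j \mid x_i), \qquad
\mu_j^{\mathrm{new}} = \frac{\sum_{i=1}^{n} p(j \mid x_i)\, x_i}
                            {\sum_{i=1}^{n} p(j \mid x_i)}, \qquad
\Sigma_j^{\mathrm{new}} = \frac{\sum_{i=1}^{n} p(j \mid x_i)\,
      \left(x_i - \mu_j^{\mathrm{new}}\right)\left(x_i - \mu_j^{\mathrm{new}}\right)^{\!\top}}
     {\sum_{i=1}^{n} p(j \mid x_i)}
```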

  14. Parametric methods (2- Expectation Maximization: EM)
  Example:
  • MATLAB demo (a Python stand-in is sketched below).
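The MATLAB demo itself is not part of the transcript; below is a minimal Python/NumPy stand-in (sample sizes, initial values, and variable names are illustrative assumptions) that fits a two-component 1D Gaussian mixture by iterating the E-step and M-step formulas above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "incomplete" data: a mixture of N(0, 1) and N(4, 0.5^2),
# with the component indicators hidden.
x = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(4.0, 0.5, 700)])
n, K = x.size, 2

# Initial parameter guesses Theta^0
alpha = np.full(K, 1.0 / K)           # mixing weights
mu = np.array([x.min(), x.max()])     # component means
sigma = np.full(K, x.std())           # component standard deviations

def gauss(x, mu, sigma):
    """1D normal density, broadcast over components."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (np.sqrt(2.0 * np.pi) * sigma)

for iteration in range(200):
    # E-step: responsibilities r[i, j] = p(j | x_i, Theta)
    weighted = alpha * gauss(x[:, None], mu, sigma)         # shape (n, K)
    r = weighted / weighted.sum(axis=1, keepdims=True)

    # M-step: closed-form updates (1D versions of the formulas above)
    nk = r.sum(axis=0)                                      # soft counts per component
    alpha = nk / n
    mu = (r * x[:, None]).sum(axis=0) / nk
    sigma = np.sqrt((r * (x[:, None] - mu) ** 2).sum(axis=0) / nk)

print("weights:", alpha)
print("means:  ", mu)
print("sigmas: ", sigma)
```

The recovered weights, means, and standard deviations should approach the values used to generate the synthetic data, illustrating the convergence behavior described on slide 11.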
