
Understanding Information Bottleneck vs. Maximum Likelihood: A Deep Dive into Probability Estimation

This article compares the Information Bottleneck method with maximum likelihood estimation in statistical modeling. It begins with a simple example: estimating the probability of heads for a biased coin tossed three times, computing the likelihood under several candidate probabilities. The discussion then expands to a richer scenario, a mixture model in which colored balls are drawn from baskets, and explains the roles of hidden and observed variables, the log-likelihood, and the model parameters. Finally, the Expectation-Maximization (EM) algorithm is introduced as a method for maximizing the likelihood.



Presentation Transcript


  1. Information Bottleneck versus Maximum Likelihood Felix Polyakov

  2. Likelihood of the Data: probability of a head
  A simple example: a coin is known to be biased. The coin is tossed three times, giving two heads and one tail. Use ML to estimate the probability P of throwing a head.
  • Model: p(head) = P, p(tail) = 1 − P
  • Try P = 0.2: L(O) = 0.2 · 0.2 · 0.8 = 0.032
  • Try P = 0.4: L(O) = 0.4 · 0.4 · 0.6 = 0.096
  • Try P = 0.6: L(O) = 0.6 · 0.6 · 0.4 = 0.144
  • Try P = 0.8: L(O) = 0.8 · 0.8 · 0.2 = 0.128
  The likelihood L(O) = P²(1 − P) is in fact maximized at P = 2/3 ≈ 0.667, which none of the tried values hits (see the sketch below).
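
A minimal Python sketch of this grid search and of the analytic maximizer:

    # Likelihood of observing two heads and one tail for a candidate P
    def likelihood(P):
        return P * P * (1 - P)

    for P in (0.2, 0.4, 0.6, 0.8):
        print(f"P = {P}: L(O) = {likelihood(P):.3f}")

    # Analytic maximum: d/dP [P^2 (1 - P)] = 2P - 3P^2 = 0  =>  P = 2/3
    print(f"ML estimate: P = {2/3:.3f}, L(O) = {likelihood(2/3):.3f}")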

  3. A bit more complicated example: a mixture model
  • Three baskets with white (O = 1), grey (O = 2), and black (O = 3) balls: B1, B2, B3.
  • 15 balls were drawn as follows: choose basket i according to p(i) = αᵢ, then draw a ball of color j from basket i with probability bᵢ(j).
  • Use ML to estimate θ = {αᵢ, bᵢ(j)} given the observations: the sequence of the balls' colors.
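
The generative process above is easy to simulate. A short Python sketch follows; the specific values of αᵢ and bᵢ(j) are illustrative placeholders, since the slide does not give them:

    import random

    alpha = [0.3, 0.5, 0.2]   # p(i): probability of choosing basket i (assumed values)
    b = [[0.7, 0.2, 0.1],     # b_i(j): p(color j | basket i); rows = baskets,
         [0.2, 0.6, 0.2],     # columns = colors (white, grey, black) (assumed values)
         [0.1, 0.2, 0.7]]

    def draw_ball():
        i = random.choices(range(3), weights=alpha)[0]    # hidden: the basket
        return random.choices(range(3), weights=b[i])[0]  # observed: the color

    observations = [draw_ball() for _ in range(15)]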

  4. Likelihood, log-likelihood, and maximal likelihood of the observations (the slide's formulas are reconstructed below).
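
The slide's formulas did not survive transcription; under the notation of slide 3 they would take the standard mixture-model form (a reconstruction, not a verbatim copy):

    L(O \mid \theta) = \prod_{t=1}^{15} p(y_t \mid \theta)
                     = \prod_{t=1}^{15} \sum_{i=1}^{3} \alpha_i \, b_i(y_t),
    \qquad
    \log L(O \mid \theta) = \sum_{t=1}^{15} \log \sum_{i=1}^{3} \alpha_i \, b_i(y_t),
    \qquad
    \theta^{*} = \arg\max_{\theta} \log L(O \mid \theta).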

  5. Likelihood of the observed data
  • x – hidden random variables [e.g., basket]
  • y – observed random variables [e.g., color]
  • θ – model parameters [e.g., they define p(y|x)]
  • θ⁰ – current estimate of the model parameters
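
Because x is hidden, the likelihood of the observed data marginalizes over it; this is the quantity EM maximizes (standard form, consistent with the variables above):

    p(y \mid \theta) = \sum_{x} p(x, y \mid \theta),
    \qquad
    \theta^{*} = \arg\max_{\theta} \log p(y \mid \theta).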

  6. Expectation-maximization algorithm (I)
  • Expectation: compute p(x | y, θ⁰) and get Q(θ | θ⁰) = Σₓ p(x | y, θ⁰) log p(x, y | θ)
  • Maximization: set θ to arg maxθ Q(θ | θ⁰)
  • The EM algorithm converges to a local maximum.
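
A runnable Python sketch of these two steps for the basket example; the function name, the random initialization, and the fixed iteration count are choices of this sketch rather than part of the slides:

    import numpy as np

    def em_mixture(y, n_baskets=3, n_colors=3, n_iter=50, seed=0):
        """EM for the basket/color mixture of slide 3:
        alpha[i] = p(basket i), b[i, j] = p(color j | basket i)."""
        y = np.asarray(y)
        rng = np.random.default_rng(seed)
        alpha = rng.dirichlet(np.ones(n_baskets))
        b = rng.dirichlet(np.ones(n_colors), size=n_baskets)
        for _ in range(n_iter):
            # E-step: responsibilities r[t, i] = p(basket i | color y_t, theta_0)
            r = alpha * b[:, y].T
            r /= r.sum(axis=1, keepdims=True)
            # M-step: re-estimate the parameters from the expected counts
            alpha = r.mean(axis=0)
            for j in range(n_colors):
                b[:, j] = r[y == j].sum(axis=0)
            b /= b.sum(axis=1, keepdims=True)
        return alpha, b

    # e.g. alpha_hat, b_hat = em_mixture(observations)  # from the sketch after slide 3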

  7. Log-likelihood is non-decreasing: examples (shown as plots on the slide).
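
The standard one-line argument behind this slide (a reconstruction; the slide illustrates it graphically): decompose the change in log-likelihood between iterations as

    \log p(y \mid \theta) - \log p(y \mid \theta^{0})
    = \big[\, Q(\theta \mid \theta^{0}) - Q(\theta^{0} \mid \theta^{0}) \,\big]
    + \mathrm{KL}\!\left( p(x \mid y, \theta^{0}) \,\|\, p(x \mid y, \theta) \right),

where the bracket is non-negative by the M-step and the KL divergence is always non-negative, so the log-likelihood cannot decrease.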

  8. EM – another approach
  • Jensen's inequality for a concave function f: f(E[X]) ≥ E[f(X)]
  • Goal: maximize log p(y | θ) = log Σₓ p(x, y | θ)
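
Applying Jensen's inequality with the concave logarithm and any distribution q(x) over the hidden variable gives the lower bound that this second derivation of EM maximizes (standard derivation, matching the notation above):

    \log p(y \mid \theta)
    = \log \sum_{x} q(x) \, \frac{p(x, y \mid \theta)}{q(x)}
    \;\ge\; \sum_{x} q(x) \log \frac{p(x, y \mid \theta)}{q(x)},

with equality when q(x) = p(x \mid y, \theta); choosing q(x) = p(x \mid y, \theta^{0}) recovers the E-step of formulation (I).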

  9. Expectation-maximization algorithm (II)
  • Expectation
  • Maximization
  • Formulations (I) and (II) are equivalent.

  10. Scheme of the approach: alternate the Expectation and Maximization steps until convergence (shown as a diagram on the slide).
