
Understanding Tail Inequalities: Markov, Chebyshev, and Chernoff Bounds in Probability Theory

This document outlines essential concepts in probability theory, focusing on tail inequalities such as Markov's inequality, Chebyshev's inequality, and Chernoff bounds. Markov's inequality bounds the probability of a non-negative random variable exceeding a given threshold, while Chebyshev's inequality bounds deviations from the mean in units of the standard deviation. Chernoff bounds deliver much sharper, exponentially decaying results, especially as the number of independent trials grows. The document includes examples showing how increasing the number of trials tightens these probability bounds.


Presentation Transcript


  1. Advanced Algorithms (6311), Gautam Das. Notes: 04/28/2009, Ranganath M R.

  2. Outline • Tail Inequalities • Markov's inequality • Chebyshev’s inequality • Chernoff bounds

  3. Tail Inequalities Markov's inequality says that for a non-negative random variable X with mean µ, the probability of X exceeding a threshold t is bounded by P(X > t) ≤ µ/t.
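
A minimal sketch (not from the original notes) that checks Markov's inequality numerically. The choice of an Exponential(1) variable and the threshold values are illustrative assumptions.

```python
import random

# Numerical check of Markov's inequality P(X > t) <= mu / t for a
# non-negative X. Here X ~ Exponential(rate 1), so mu = E[X] = 1;
# the distribution is an illustrative choice, not from the notes.
random.seed(0)
n_samples = 100_000
samples = [random.expovariate(1.0) for _ in range(n_samples)]
mu = sum(samples) / n_samples

for t in [2, 4, 8]:
    empirical = sum(1 for x in samples if x > t) / n_samples
    markov_bound = mu / t
    print(f"t={t}: empirical P(X>t)={empirical:.4f}  Markov bound={markov_bound:.4f}")
```

The empirical tail probabilities come out well below µ/t, which is expected: Markov's bound is loose but holds for every non-negative distribution.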

  4. Chebyshev’s inequality: for a random variable X with mean µ and standard deviation σ, P(|X − µ| > t·σ) ≤ 1/t².
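
A second small sketch (again an illustrative assumption, not from the notes) checking Chebyshev's inequality for the number of heads in 100 fair coin flips.

```python
import random
import statistics

# Numerical check of Chebyshev's inequality P(|X - mu| > t*sigma) <= 1/t^2.
# X = number of heads in 100 fair coin flips (illustrative choice).
random.seed(0)
n_samples = 20_000
samples = [sum(random.random() < 0.5 for _ in range(100)) for _ in range(n_samples)]
mu = statistics.mean(samples)
sigma = statistics.pstdev(samples)

for t in [1.5, 2, 3]:
    empirical = sum(1 for x in samples if abs(x - mu) > t * sigma) / n_samples
    print(f"t={t}: empirical={empirical:.4f}  Chebyshev bound={1 / t**2:.4f}")
```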

  5. Chernoff bounds • This is a setting in which we can obtain much sharper tail inequalities (exponentially sharp). The more the trials are repeated, the higher the chance of very accurate results; let us see how this is possible. • Example: imagine we have n coins with indicator variables X1, …, Xn (Xi = 1 if coin i comes up heads), and let the head probabilities be p1, …, pn. The random variable of interest is X = ∑i Xi, and in general µ = E[X] = ∑i pi (a small simulation below checks this numerically).
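
A quick sketch verifying the setup on the slide: for independent coins with different biases, the mean of X = ∑i Xi is ∑i pi. The particular bias values are illustrative assumptions.

```python
import random

# X = X_1 + ... + X_n, where X_i = 1 (heads) with probability p_i.
# Check empirically that E[X] = sum(p_i); the specific p_i values
# below are illustrative, not from the notes.
random.seed(0)
p = [0.1, 0.25, 0.5, 0.6, 0.9]   # per-coin head probabilities p_1..p_n
mu = sum(p)                      # mu = E[X] = sum of the p_i

n_trials = 100_000
total = 0
for _ in range(n_trials):
    total += sum(random.random() < pi for pi in p)

print(f"theoretical mu = {mu}, empirical mean = {total / n_trials:.3f}")
```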

  6. Some special cases • All coins are fair, i.e. pi = ½ for every i. • Then µ = n·pi = n·(1/2) = n/2 and σ = √n/2. For example, for n = 100, σ = √100/2 = 5.

  7. The Chernoff bounds are given by (for δ > 0 and µ = E[X]) • Upper tail: P(X − µ ≥ δµ) ≤ [e^δ/(1 + δ)^(1 + δ)]^µ ----------------- eqn 1 • Lower tail: P(µ − X ≥ δµ) ≤ e^(−µδ²/2) • Here µ appears as the exponent on the right-hand side, and the base e^δ/(1 + δ)^(1 + δ) is less than 1 for δ > 0. Hence if more trials are taken, µ (= n/2 in the fair-coin case) increases and the bound shrinks exponentially, i.e. we get sharper guarantees (both bounds are implemented as small functions below). • Example problem to illustrate this: • The probability of a team winning a game is 1/3. • What is the probability that the team wins at least 50 out of 100 games?
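
A hedged sketch (not from the original notes) writing the two bounds from this slide as plain functions; δ plays the role of the slide's deviation parameter, and the test values of µ are illustrative.

```python
import math

def chernoff_upper(mu: float, delta: float) -> float:
    """Upper-tail bound on P(X - mu >= delta * mu), i.e. eqn 1."""
    return (math.e ** delta / (1 + delta) ** (1 + delta)) ** mu

def chernoff_lower(mu: float, delta: float) -> float:
    """Lower-tail bound on P(mu - X >= delta * mu)."""
    return math.exp(-mu * delta ** 2 / 2)

# The base e^delta / (1+delta)^(1+delta) is < 1 for delta > 0, so the
# upper-tail bound decays exponentially as mu (the number of trials) grows.
for mu in [10, 50, 100]:
    print(mu, chernoff_upper(mu, 0.5), chernoff_lower(mu, 0.5))
```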

  8. µ = n·p = 100·(1/3) = 100/3 • δ = (number of games to win − µ)/µ = (50 − 100/3)/(100/3) = 1/2. • Substituting these into eqn 1 bounds the probability of winning at least 50 games: • [e^(1/2)/(3/2)^(3/2)]^(100/3) ≈ 0.027. • If we increase the number of games (in general, the number of trials), µ increases, and since the base e^δ/(1 + δ)^(1 + δ) is less than 1, the bound [e^δ/(1 + δ)^(1 + δ)]^µ shrinks. Hence we get sharper results when more trials are done.
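
A short numerical check of this worked example (the comparison with the exact binomial tail is an addition for illustration, not part of the slide).

```python
import math

# The slide's example: 100 games, win probability 1/3,
# Chernoff bound on the probability of winning at least 50 games.
n, p = 100, 1 / 3
mu = n * p                  # 100/3
delta = (50 - mu) / mu      # 1/2

chernoff = (math.e ** delta / (1 + delta) ** (1 + delta)) ** mu
exact = sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(50, n + 1))

print(f"Chernoff bound on P(X >= 50): {chernoff:.4f}")   # ~0.027, as on the slide
print(f"Exact binomial tail P(X >= 50): {exact:.6f}")    # much smaller: the bound is only an upper bound
```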

  9. Derivation • Let X = ∑i Xi and let Y = e^(tX) for some t > 0. • P(X − µ ≥ δµ) = P(X ≥ (1 + δ)µ) = P(Y ≥ e^(t(1 + δ)µ)) ≤ E[Y]/e^(t(1 + δ)µ), by Markov's inequality applied to Y. • Now E[Y] = E[e^(tX)] = E[e^(t(X1 + X2 + … + Xn))] = E[e^(tX1)]·E[e^(tX2)]·…·E[e^(tXn)], by independence of the Xi.

  10. Now let us consider E[e^(tXi)]. • Xi is either 0 or 1. • Xi is 0 with probability 1 − pi and 1 with probability pi.

  11. E[e^(tXi)] = pi·e^t + (1 − pi)·1 [if Xi = 1 then e^(tXi) = e^t; if Xi = 0 then e^(tXi) = 1]. To be continued in the next class.
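
A tiny check of this last step (the values of t and pi are illustrative assumptions, not from the notes): the simulated mean of e^(tXi) should match pi·e^t + (1 − pi).

```python
import math
import random

# For a single coin X_i in {0, 1} with P(X_i = 1) = p_i, the slide's step
# says E[e^{t X_i}] = p_i * e^t + (1 - p_i). Check it by simulation.
random.seed(0)
t, p_i = 0.3, 0.4          # illustrative values

closed_form = p_i * math.exp(t) + (1 - p_i)

n_samples = 200_000
empirical = 0.0
for _ in range(n_samples):
    x = 1 if random.random() < p_i else 0
    empirical += math.exp(t * x)
empirical /= n_samples

print(f"closed form: {closed_form:.4f}")
print(f"simulation:  {empirical:.4f}")
```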
