This lecture from Farinaz Koushanfar explores the concepts of conditional probability and independence as they relate to random signals, providing a thorough breakdown of foundational principles such as the total probability theorem and Bayes' rule. Key examples include the analysis of rare diseases, event coupling, and binomial probabilities. Attendees will benefit from visual aids and illustrations that clarify complex ideas, making them accessible for students in electrical and computer engineering.
ELEC 303 – Random Signals Lecture 3 – Conditional probability Farinaz Koushanfar ECE Dept., Rice University Sept 1, 2009
Lecture outline • Reading: Sections 1.4, 1.5 • Review • Independence: of a pair of events, of multiple events, conditional independence • Independent trials and binomial probabilities
Bayes rule Let A1, A2, …, An be disjoint events that form a partition of the sample space, and assume that P(Ai) > 0 for all i. Then, for any event B such that P(B) > 0, we have P(Ai|B) = P(Ai)P(B|Ai) / P(B). The total probability theorem is used in conjunction with the Bayes rule to expand the denominator: P(B) = P(A1)P(B|A1) + … + P(An)P(B|An).
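As a quick illustration, here is a minimal Python sketch of the two formulas for a partition A1, …, An; the priors and likelihoods are hypothetical numbers chosen only for this example:

```python
# A minimal sketch: total probability theorem and Bayes' rule for a
# partition A1, ..., An. The numbers below are hypothetical.
priors = [0.5, 0.3, 0.2]       # P(Ai); a partition, so they sum to 1
likelihoods = [0.9, 0.5, 0.1]  # P(B|Ai)

# Total probability theorem: P(B) = sum_i P(Ai) P(B|Ai)
p_b = sum(p * l for p, l in zip(priors, likelihoods))

# Bayes' rule: P(Ai|B) = P(Ai) P(B|Ai) / P(B)
posteriors = [p * l / p_b for p, l in zip(priors, likelihoods)]

print(p_b)         # 0.62
print(posteriors)  # [0.726..., 0.242..., 0.032...]
```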
Bayes rule inference Figure courtesy of Bertsekas & Tsitsiklis, Introduction to Probability, 2008
The false positive puzzle A test for a certain rare disease is correct 95% of the time: if a person has the disease, the test is positive with probability 0.95, and if the person does not have the disease, the test is negative with probability 0.95. A person drawn at random from the population has probability 0.001 of having the disease. Given that a person has just tested positive, what is the probability that they actually have the disease?
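A minimal Python sketch of the solution, applying the total probability theorem and Bayes' rule to the numbers above (the variable names are ours):

```python
# Sketch of the puzzle's solution via Bayes' rule (variable names ours).
p_disease = 0.001         # P(D): prior probability of the disease
p_pos_given_d = 0.95      # P(+|D): test positive given disease
p_neg_given_not_d = 0.95  # P(-|not D): test negative given no disease

# Total probability: P(+) = P(D)P(+|D) + P(not D)P(+|not D)
p_pos = p_disease * p_pos_given_d + (1 - p_disease) * (1 - p_neg_given_not_d)

# Bayes' rule: P(D|+) = P(D)P(+|D) / P(+)
p_d_given_pos = p_disease * p_pos_given_d / p_pos
print(p_d_given_pos)  # ~0.0187: surprisingly small, despite the 95% accuracy
```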
Independence • The occurrence of B does not change the probability of A, i.e., P(A|B) = P(A), or equivalently P(AB) = P(A)P(B) • The definition is symmetric in A and B • Can we visualize this in terms of the sample space?
Independence? • If A and B are disjoint, are they independent? • If A and B are independent, are A and Bc independent? • For independent A and B, does P(A|B) = P(A|Bc)?
Effect of conditioning on independence Assume that A and B are independent. If we are told that C occurred, are A and B still independent? The events A and B are called conditionally independent given C if P(AB|C) = P(A|C)P(B|C)
Conditional independence Independence is defined based on a probability law. Conditional probability defines a new probability law, so conditional independence is simply independence under the conditional law. Example courtesy of Prof. Dahleh, MIT
Example • Rolling 2 dice: let A = {first roll is a 1}, B = {second roll is a 4}, C = {the sum of the first and second rolls is 7} • Are A and B conditionally independent given C?
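Since the sample space has only 36 equally likely outcomes, the question can be settled by exhaustive enumeration; a short Python sketch (the helper prob is ours):

```python
from itertools import product

# All 36 equally likely outcomes of two fair dice rolls.
outcomes = list(product(range(1, 7), repeat=2))

def prob(event):
    # Probability of an event under the uniform law on `outcomes`.
    return sum(1 for w in outcomes if event(w)) / len(outcomes)

A = lambda w: w[0] == 1         # first roll is a 1
B = lambda w: w[1] == 4         # second roll is a 4
C = lambda w: w[0] + w[1] == 7  # the two rolls sum to 7

# Unconditionally, A and B are independent: 1/36 == (1/6)(1/6)
print(prob(lambda w: A(w) and B(w)), prob(A) * prob(B))

# Conditioned on C they are not: P(AB|C) = 0, since (1, 4) sums to 5,
# while P(A|C) P(B|C) = (1/6)(1/6) = 1/36.
p_c = prob(C)
p_ab_c = prob(lambda w: A(w) and B(w) and C(w)) / p_c
p_a_c = prob(lambda w: A(w) and C(w)) / p_c
p_b_c = prob(lambda w: B(w) and C(w)) / p_c
print(p_ab_c, p_a_c * p_b_c)    # 0.0 vs 0.0277...
```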
Independence vs. pairwise independence • Another example: 2 independent fair coin tosses, A = {first toss is H}, B = {second toss is H}, C = {the two outcomes are different} • P(C) = P(A) = P(B) = 1/2 • P(CA) = ? • P(C|AB) = ? Example courtesy of Prof. Dahleh, MIT
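The same enumeration idea settles this example; the sketch below (helper names are ours) verifies that every pair of events factors while the triple does not:

```python
from itertools import product

# All 4 equally likely outcomes of two fair coin tosses.
outcomes = list(product("HT", repeat=2))

def prob(event):
    return sum(1 for w in outcomes if event(w)) / len(outcomes)

A = lambda w: w[0] == "H"   # first toss is heads
B = lambda w: w[1] == "H"   # second toss is heads
C = lambda w: w[0] != w[1]  # the two outcomes are different

# Pairwise independence: every pair factors (each product is 1/4).
print(prob(lambda w: C(w) and A(w)), prob(C) * prob(A))
print(prob(lambda w: C(w) and B(w)), prob(C) * prob(B))
print(prob(lambda w: A(w) and B(w)), prob(A) * prob(B))

# But the three events are not independent:
# P(C|AB) = 0 while P(C) = 1/2, and P(ABC) = 0 != P(A)P(B)P(C) = 1/8.
p_ab = prob(lambda w: A(w) and B(w))
print(prob(lambda w: A(w) and B(w) and C(w)) / p_ab, prob(C))
```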
Independence of a collection of events • Events A1, A2, …, An are independent if P(Ai1 Ai2 … Aim) = P(Ai1)P(Ai2)…P(Aim) for every subset {i1, …, im} of {1, 2, …, n} • For 3 events, this amounts to 4 conditions: • P(A1A2) = P(A1)P(A2) • P(A1A3) = P(A1)P(A3) • P(A2A3) = P(A2)P(A3) • P(A1A2A3) = P(A1)P(A2)P(A3)
Reliability example: network connectivity Suppose component i succeeds with probability pi, independently of the others. P(series subsystem succeeds) = p1 p2 … pm (every component must work). P(parallel subsystem succeeds) = 1 − (1−p1)(1−p2)…(1−pm) (at least one component must work). Figure courtesy of Bertsekas & Tsitsiklis, Introduction to Probability, 2008
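A minimal Python sketch of the two rules, assuming components succeed independently with the (hypothetical) probabilities listed:

```python
from math import prod  # Python 3.8+

# Component i succeeds independently with probability p[i] (hypothetical).
def series_reliability(p):
    # A series subsystem succeeds only if every component succeeds.
    return prod(p)

def parallel_reliability(p):
    # A parallel subsystem fails only if every component fails.
    return 1 - prod(1 - pi for pi in p)

p = [0.9, 0.8, 0.95]
print(series_reliability(p))    # 0.684
print(parallel_reliability(p))  # 0.999
```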
Independent trials and the Binomial probabilities Independent trials: a sequence of independent, identical stages. Bernoulli trials: there are 2 possible results at each stage. Figure courtesy of Bertsekas & Tsitsiklis, Introduction to Probability, 2008
Bernoulli trial P(k) = P(k heads in an n-toss sequence). From the previous page, the probability of any given sequence with k heads is p^k(1−p)^(n−k). The total number of such sequences is C(n,k), where we define the notation C(n,k) = n! / (k!(n−k)!), for k = 0, 1, …, n.
Binomial formula P(k) = C(n,k) p^k(1−p)^(n−k), for k = 0, 1, …, n. The probabilities have to add up to 1: summing over k = 0, …, n gives Σ C(n,k) p^k(1−p)^(n−k) = 1.
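A short Python sketch, using the standard-library math.comb, that evaluates the binomial formula and checks the normalization numerically (n and p are illustrative values):

```python
from math import comb  # Python 3.8+

def binomial_pmf(k, n, p):
    # P(k) = C(n, k) p^k (1-p)^(n-k)
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.3  # illustrative values
pmf = [binomial_pmf(k, n, p) for k in range(n + 1)]

print(binomial_pmf(3, n, p))  # ~0.2668
print(sum(pmf))               # ~1.0: the probabilities add up to 1
```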