Learn about random processes, experiments, events, sample spaces, discrete vs continuous events, probability axioms, conditional probability, Bayes' Rule, sequence models, and Markov models.
Reading
• Chapter 1 in the book.
• Chapter 3, pages 46-51.
Definition: Random Process
A RANDOM PROCESS is something that has a “random” outcome:
• Roll a die, flip a coin, roll 2 dice
• Observe an orthologous base pair in 2 sequences
• Measure an mRNA level
• Weigh a person
Definition: Experiment
In probability theory, an EXPERIMENT is a single observation of a random process.
Definition: Event
An EVENT is a set of possible outcomes of an experiment.
• An ELEMENTARY EVENT is a single outcome at whatever granularity you choose to model. For example:
  • The outcome of 1 roll of a die
  • The outcomes of n rolls of a die
  • The residue at position 237 in a protein
  • The residues at position 237 in a family of proteins
  • The weight of a person
• Elementary events must be non-overlapping!
Compound Events
A COMPOUND EVENT is a set of one or more elementary events. For example, you might define two compound events in a die-rolling experiment: E = “roll less than 3” and F = “roll greater than or equal to 3”. Then E = {1, 2} and F = {3, 4, 5, 6}.
Definition: Sample Space
The SAMPLE SPACE is the set of all ELEMENTARY EVENTS.
• So the sample space is the “universe” of all possible outcomes of the experiment.
• This is written: Ω = {Ei}
• For example, for rolls of a die, you might have: Ω = {1, 2, 3, 4, 5, 6}
Discrete vs. Continuous Events
• The sample space might be INFINITE. For example, the weight of a person can be any real number greater than 0.
• Some events are DISCRETE: countable
  • Base pairs, residues, die rolls
• Other events are CONTINUOUS: e.g., real numbers
  • Weights, alignment scores, mRNA levels
The Axioms of Probability
[Venn diagrams: a sample space Ω containing events E and F, illustrating E ∪ F and E ∩ F]
Let E and F be events. Then the axioms of probability are:
• Pr(E) ≥ 0
• Pr(Ω) = 1
• Pr(E ∪ F) = Pr(E) + Pr(F) if E ∩ F = ∅
• Pr(E | F) Pr(F) = Pr(E ∩ F)
Probability is like “area” in Venn diagrams.
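These axioms can be checked mechanically on the die-rolling example above. Below is a minimal sketch in Python, assuming a fair die (the uniform 1/6 probabilities are an assumption; E and F are the compound events defined under Compound Events):

```python
# Check the probability axioms on the fair-die example.
from fractions import Fraction

omega = {1, 2, 3, 4, 5, 6}                   # sample space
pr = {e: Fraction(1, 6) for e in omega}      # Pr of each elementary event (fair die)

def prob(event):
    """Probability of a compound event = sum over its elementary events."""
    return sum(pr[e] for e in event)

E = {1, 2}            # "roll less than 3"
F = {3, 4, 5, 6}      # "roll greater than or equal to 3"

assert prob(E) >= 0                          # Axiom 1: Pr(E) >= 0
assert prob(omega) == 1                      # Axiom 2: Pr(Omega) = 1
assert E & F == set()                        # E and F are disjoint, so...
assert prob(E | F) == prob(E) + prob(F)      # Axiom 3: additivity
```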
Notation
• Joint probability: Pr(E, F) is the probability of E and F.
• Conditional probability: Pr(E | F) is the probability of E given F.
Conditional Probability and Bayes’ Rule
Conditional probability can be defined as:
Pr(E | F) = Pr(E, F) / Pr(F)
Bayes’ Rule can be used to reverse the roles of E and F:
Pr(F | E) = Pr(E | F) Pr(F) / Pr(E)
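Both formulas can be verified on the same fair-die example. In the sketch below, the event G = “roll is even” is a hypothetical extra event introduced purely for illustration:

```python
# Verify the definition of conditional probability and Bayes' Rule.
from fractions import Fraction

pr = {e: Fraction(1, 6) for e in range(1, 7)}    # fair die, as above

def prob(event):
    return sum(pr[e] for e in event)

F = {3, 4, 5, 6}                                 # "roll >= 3", from earlier
G = {2, 4, 6}                                    # "roll is even" (hypothetical)

pr_G_given_F = prob(G & F) / prob(F)             # Pr(G|F) = Pr(G,F) / Pr(F)
pr_F_given_G = pr_G_given_F * prob(F) / prob(G)  # Bayes' Rule

assert pr_F_given_G == prob(F & G) / prob(G)     # agrees with the definition
print(pr_G_given_F, pr_F_given_G)                # 1/2 and 2/3
```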
Sequence Models
• Observed biological sequences (DNA, RNA, protein) can be thought of as the outcomes of random processes.
• So, it makes sense to model sequences using probabilistic models.
• You can think of a sequence model as a little machine that randomly generates sequences.
A Simple Sequence Model
• Imagine a tetrahedral (four-sided) die with the letters A, C, G, and T on its sides.
• You roll the die 100 times and write down the letters that come up (down, actually, since a tetrahedron at rest has no upward face).
• This is a simple random sequence model.
Zero-order Markov Model
• The four-sided die model is called a 0-order Markov model.
• It can be drawn thus:
[Diagram: 0-order Markov sequence model: a single state M with self-transition probability p = 1 (the transition probability) and emission probabilities qA, qC, qG, qT]
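As a sketch of the “little machine” view, the 0-order model amounts to repeated weighted sampling. The uniform emission probabilities q below are an assumption (a fair four-sided die); any distribution over A, C, G, T would do:

```python
# Roll the four-sided die `length` times: a 0-order Markov sequence model.
import random

q = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}   # emission probabilities (assumed)

def sample_0order(length, q):
    letters = list(q)
    weights = [q[x] for x in letters]
    return "".join(random.choices(letters, weights=weights, k=length))

print(sample_0order(100, q))   # a random 100-letter DNA sequence each run
```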
Complete 0-order Markov Model
• To model the length of the sequences that the model can generate, we need to add “start” and “end” states.
[Diagram: complete 0-order Markov sequence model: start state S goes to M with probability 1; M loops to itself with probability p, emitting letters with probabilities qA, qC, qG, qT, and goes to end state E with probability 1-p]
Generating a Sequence
This Markov model can generate any DNA sequence. Associated with each sequence is a path and a probability:
1. Start in state S: P = 1
2. Move to state M: P = 1·P
3. Print “x”: P = qX·P
4. Move to state M: P = p·P, or to state E: P = (1-p)·P
5. If in state M, go to 3. If in state E, stop.
Example: Sequence: GCAGCT
Path: S, M, M, M, M, M, M, E
P = 1·qG·p·qC·p·qA·p·qG·p·qC·p·qT·(1-p)
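This procedure translates almost line for line into code. A minimal sketch, assuming illustrative values for p and the emission probabilities q (the model itself doesn’t fix either):

```python
# Generate a sequence from the complete 0-order model, tracking the
# path probability exactly as in the numbered steps above.
import random

q = {"A": 0.25, "C": 0.25, "G": 0.25, "T": 0.25}   # emission probabilities (assumed)
p = 0.9                                             # Pr(M -> M), assumed

def generate(q, p):
    seq, P = [], 1.0                     # step 1: start in state S, P = 1
    while True:                          # steps 2/4: we are now in state M
        x = random.choices(list(q), weights=list(q.values()))[0]
        seq.append(x)                    # step 3: print "x"
        P *= q[x]                        #         P = qX * P
        if random.random() < 1 - p:      # step 4: move to E with prob 1 - p
            P *= 1 - p
            return "".join(seq), P       # step 5: in state E, stop
        P *= p                           # otherwise stay in M; go to step 3

sequence, P = generate(q, p)
print(sequence, P)
```

For a sequence of length L this gives P = qx1·…·qxL·p^(L-1)·(1-p), matching the GCAGCT example above.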
Using a 0-order Markov Model
This model can generate any DNA sequence, so it can be used to model DNA.
• We used it as the background model when we created scoring matrices for sequence alignment.
• It’s a pretty dumb model, though.
• DNA is not very well modeled by a 0-order Markov model because the probability of seeing, say, a “G” following a “C” is usually different from that of a “G” following an “A” (e.g., in CpG islands). The sketch below illustrates this with dinucleotide counts.
• So we need better models: higher-order Markov models.
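A minimal sketch of that point, estimating Pr(next letter | previous letter) from dinucleotide counts; the input string is a made-up toy sequence, not real genomic data:

```python
# Estimate conditional letter probabilities from dinucleotide counts.
from collections import Counter

seq = "CGCGCGATATATCGCGAT"                 # hypothetical toy sequence
pairs = Counter(zip(seq, seq[1:]))         # dinucleotide counts

def cond(prev, nxt):
    """Estimate Pr(nxt | prev) as count(prev, nxt) / count(prev, anything)."""
    total = sum(n for (a, _), n in pairs.items() if a == prev)
    return pairs[(prev, nxt)] / total if total else 0.0

print(cond("C", "G"), cond("A", "G"))      # very different: 0-order is too weak
```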
Markov Model “Order”
This simple sequence model is called a 0-order Markov model because the probability distribution of the next letter to be generated doesn’t depend on any (zero) of the letters preceding it.
The Markov Property: Let X = X1X2…XL be a sequence. In an n-order Markov sequence model, the probability distribution of the next letter depends on the previous n letters generated:
• 0-order: Pr(Xi|X1X2…Xi-1) = Pr(Xi)
• 1-order: Pr(Xi|X1X2…Xi-1) = Pr(Xi|Xi-1)
• n-order: Pr(Xi|X1X2…Xi-1) = Pr(Xi|Xi-1Xi-2…Xi-n)
[Diagram: the complete 0-order model with states S, M, E, as above]
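A minimal sketch of how the Markov property factors a sequence probability into one conditional term per letter. Here cond_prob is a hypothetical callable standing in for whatever model supplies Pr(letter | context):

```python
# log Pr(X1...XL) = sum over i of log Pr(Xi | previous n letters).
import math

def sequence_log_prob(seq, n, cond_prob):
    total = 0.0
    for i, x in enumerate(seq):
        context = seq[max(0, i - n):i]    # previous n letters (fewer at the start)
        total += math.log(cond_prob(context)[x])
    return total

uniform = lambda context: {x: 0.25 for x in "ACGT"}   # placeholder 0-order model
print(sequence_log_prob("GCAGCT", 0, uniform))        # 6 * log(1/4)
```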
A 1-order Markov Sequence Model
In a first-order Markov sequence model, the probability of the next letter depends on what the previously generated letter was. We can model this by making a state for each letter. Each state always emits the letter it is labeled with.
[Diagram: states S and E plus one state per letter (A, C, G, T), with initial transitions such as Pr(A) and Pr(G), and letter-to-letter transitions such as Pr(A|A), Pr(C|C), Pr(G|G), Pr(T|T), Pr(T|A), Pr(T|G), Pr(C|A), Pr(C|G). Not all transitions are shown.]
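A minimal sketch of sampling from such a model. The initial and transition probabilities are illustrative assumptions (the inflated Pr(G|C) echoes the CpG point above), not values fitted to data:

```python
# Sample from a 1-order Markov model: the state is the previous letter.
import random

init = {x: 0.25 for x in "ACGT"}                            # Pr(first letter), assumed
trans = {prev: {x: 0.25 for x in "ACGT"} for prev in "ACGT"}
trans["C"] = {"A": 0.15, "C": 0.15, "G": 0.55, "T": 0.15}   # toy CpG-rich row

def sample_1order(length):
    seq = random.choices(list(init), weights=list(init.values()))[0]
    while len(seq) < length:
        row = trans[seq[-1]]                                # state = previous letter
        seq += random.choices(list(row), weights=list(row.values()))[0]
    return seq

print(sample_1order(50))
```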
A 2-order Markov Model
• To make a second-order Markov sequence model, each state is labeled with two letters and emits the second letter in its label.
• There would have to be sixteen states: AA, AC, AG, AT, CA, CC, CG, CT, etc., plus four states for the first letter in the sequence: A, C, G, T.
• Each state would have transitions only to states whose first letter matches its own second letter (e.g., AC can go only to CA, CC, CG, or CT).
Part of a 2-order Model
• Each state’s label “remembers” what the previously emitted letter was.
[Diagram: state AA with transitions Pr(A|AA) to AA, Pr(C|AA) to AC, Pr(G|AA) to AG, and Pr(T|AA) to AT; the end state E is also shown]
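The same bookkeeping generalizes to any order: the state is simply the last n letters emitted. A minimal sketch, where cond_prob is again a hypothetical callable and the uniform placeholder stands in for probabilities that would normally be estimated from data:

```python
# Sample from an n-order Markov model: the state is the last n letters.
import random

def sample_norder(length, n, cond_prob):
    """cond_prob(context) -> {letter: Pr(letter | context)}. The context is
    the up-to-n most recent letters (shorter at the start, matching the
    extra A, C, G, T start states described above)."""
    seq = ""
    while len(seq) < length:
        dist = cond_prob(seq[-n:] if n else "")       # state = last n letters
        seq += random.choices(list(dist), weights=list(dist.values()))[0]
    return seq

uniform = lambda context: {x: 0.25 for x in "ACGT"}   # placeholder model
print(sample_norder(30, 2, uniform))
```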