PROLOGUE: Pitfalls of standard alignments
Scoring a pairwise alignment
A: ALAEVLIRLITKLYP
B: ASAKHLNRLITELYP
(Blosum62 substitution matrix)
Alignment of a family (globins)
[Figure: multiple alignment of globin sequences.]
Different positions are not equivalent.
Sequence logos (http://weblogo.berkeley.edu/cache/file5h2DWc.png)
Within a family, the substitution score should depend on the position (and the same holds for gaps). For modelling families we need more flexible tools.
Probabilistic Models for Biological Sequences • What are they?
Probabilistic models for sequences
• Generative definition: objects producing different outcomes (sequences) with different probabilities. The probability distribution over the sequence space determines the model specificity.
[Figure: the model M generates the sequence s_i with probability P( s_i | M ); the curve is the probability distribution over the sequence space.]
E.g.: M is the representation of the family of globins.
Probabilistic models for sequences
• We don't need a generator of new biological sequences; the generative definition is useful as an operative definition.
• Associative definition: objects that, given an outcome (sequence), compute a probability value.
[Figure: the model M associates the probability P( s_i | M ) to the sequence s_i.]
E.g.: M is the representation of the family of globins.
Probabilistic models for sequences
Most useful probabilistic models are trainable systems: the probability density function over the sequence space is estimated from known examples by means of a learning algorithm.
[Figure: known examples in the sequence space → estimated pdf (generalization).]
E.g.: writing a generic representation of the sequences of globins starting from a set of known globins.
Probabilistic Models for Biological Sequences • What are they? • Why use them?
Modelling a protein family
Given a protein class (e.g. Globins), a probabilistic model trained on this family can compute a probability value for each new sequence. This value measures the similarity between the new sequence and the family described by the model.
[Figure: sequences Seq1…Seq6 are fed to the probabilistic model, which returns scores such as 0.98, 0.21, 0.12, 0.89, 0.47, 0.78.]
Probabilistic Models for Biological Sequences • What are they? • Why use them? • Which probabilities do they compute?
P( s | M ) or P( M | s ) ?
A model M associates to a sequence s the probability P( s | M ). This probability answers the question: what is the probability that a model describing the Globins generates the sequence s?
The question we want to answer is: given a sequence s, is it a Globin? We need to compute P( M | s )!
Bayes' theorem
Joint probability: P( X, Y ) = P( X | Y ) P( Y ) = P( Y | X ) P( X )
So:
P( Y | X ) = P( X | Y ) P( Y ) / P( X )
and, for a model M and a sequence s,
P( M | s ) = P( s | M ) P( M ) / P( s )
where P(M) and P(s) are the a priori probabilities. P( s | M ) goes from the conclusion M to the evidence s; P( M | s ) goes from the evidence s to the conclusion M.
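As a minimal numeric sketch of the theorem (all numbers below are invented for illustration):

```python
# Hypothetical numbers: the class covers 1% of the sequence space (P(M) = 0.01),
# the model gives the sequence a high likelihood (P(s|M) = 0.8), while any
# other model gives it P(s|not M) = 0.05.
p_M, p_s_M, p_s_notM = 0.01, 0.8, 0.05

# Evidence: P(s) = P(s|M) P(M) + P(s|not M) P(not M)
p_s = p_s_M * p_M + p_s_notM * (1 - p_M)

# Posterior: P(M|s) = P(s|M) P(M) / P(s)
print(p_s_M * p_M / p_s)  # ~0.139: the evidence raises P(M) from 1% to ~14%
```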
The a priori probabilities
P( M | s ) = P( s | M ) P( M ) / P( s )
P(M) is the probability of the model (i.e. of the class described by the model) BEFORE we know the sequence: it can be estimated as the abundance of the class.
P(s) is the probability of the sequence in the sequence space. It cannot be reliably estimated!!
Comparison between models
We can overcome the problem by comparing the probability of generating s from different models:
P( M1 | s ) / P( M2 | s ) = [ P( s | M1 ) P( M1 ) ] / [ P( s | M2 ) P( M2 ) ]
The unknown P(s) cancels out, and P(M1)/P(M2) is the ratio between the abundances of the two classes.
Null model
Otherwise we can score a sequence for a model M by comparing it to a null model N: a model that generates ALL the possible sequences with probabilities depending ONLY on the statistical amino acid abundance:
S( M, s ) = log [ P( s | M ) / P( s | N ) ]
[Figure: distributions of S(M, s) for sequences belonging and not belonging to model M.]
In this case we need a threshold and a statistic for evaluating the significance (E-value, P-value).
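A minimal sketch of such a score, assuming for simplicity that both M and the null model N emit positions independently (the toy tables below are invented):

```python
import math

def log_odds(seq, p_model, p_null):
    """S(M, s) = log[ P(s|M) / P(s|N) ] for position-independent models."""
    return sum(math.log(p_model[c] / p_null[c]) for c in seq)

# Toy two-letter alphabet: M prefers 'A', the null model N is uniform.
p_model = {'A': 0.8, 'B': 0.2}
p_null = {'A': 0.5, 'B': 0.5}
print(log_odds('AABA', p_model, p_null))  # positive: s is closer to M than to N
```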
The simplest probabilistic models: Markov Models • Definition
Markov models. Example: the weather
Register the weather conditions day by day: as a first hypothesis, the weather conditions on a day depend ONLY on the weather conditions of the day before.
Define the conditional probabilities P(C|C), P(C|R), …, P(R|C), …
States: C (clouds), R (rain), F (fog), S (sun).
The probability of the 5-day registration CRRCS is
P(CRRCS) = P(C) · P(R|C) · P(R|R) · P(C|R) · P(S|C)
Markov model
A stochastic generator of sequences in which the probability of the state in position i depends ONLY on the state in position i-1.
Given an alphabet C = {c1, c2, c3, …, cN}, a Markov model is described by N×(N+2) parameters {a_rt, a_BEGIN,t, a_r,END : r, t ∈ C}:
a_rq = P( s_i = q | s_i-1 = r )
a_BEGIN,q = P( s_1 = q )
a_r,END = P( s_T = END | s_T-1 = r )
Constraints:
Σ_t a_rt + a_r,END = 1 for every r
Σ_t a_BEGIN,t = 1
[Figure: states c1 … cN fully connected, plus BEGIN and END states.]
Markov models
Given the sequence s = s_1 s_2 s_3 s_4 … s_T, with s_i ∈ C = {c1, c2, c3, …, cN}:
P( s | M ) = P( s_1 ) · Π_{i=2..T} P( s_i | s_i-1 ) = a_BEGIN,s_1 · Π_{i=2..T} a_s_i-1,s_i · a_s_T,END
E.g.: P("ALKALI") = a_BEGIN,A · a_AL · a_LK · a_KA · a_AL · a_LI · a_I,END
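A direct transcription of this factorisation as a sketch; the dictionaries a_begin, a and a_end stand for the model parameters and are assumed inputs:

```python
def markov_prob(seq, a_begin, a, a_end):
    """P(s|M) = a_BEGIN,s1 * prod_i a_{s_{i-1} s_i} * a_{s_T,END}."""
    p = a_begin[seq[0]]
    for prev, cur in zip(seq, seq[1:]):
        p *= a[prev][cur]
    return p * a_end[seq[-1]]

# Usage (with suitable tables): markov_prob("ALKALI", a_begin, a, a_end)
```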
Markov models: exercise
1) Fill in the undefined transition probabilities (marked "??"), using the constraint that the outgoing probabilities of each state must sum to 1.
[Figure: two weather models over the states C, R, F, S, with some transition probabilities left blank.]
Markov models: exercise
2) Which model better describes the weather in summer? Which one describes the weather in winter?
[Figure: the two completed weather models over the states C, R, F, S, with all transition probabilities filled in.]
Markov models: exercise
3) Given the sequence CSSSCFS, which model gives the higher probability? [Consider the starting probabilities: P(X|BEGIN) = 0.25.]
[Figure: the "Winter" model and the "Summer" model with their transition probabilities.]
Markov models: exercise
P( CSSSCFS | Winter ) = 0.25 × 0.1 × 0.2 × 0.2 × 0.3 × 0.2 × 0.2 = 1.2 × 10⁻⁵
P( CSSSCFS | Summer ) = 0.25 × 0.4 × 0.8 × 0.8 × 0.1 × 0.1 × 1.0 = 6.4 × 10⁻⁴
4) Can we conclude that the observation sequence refers to a summer week?
Markov models: exercise
P( Seq | Winter ) = 1.2 × 10⁻⁵
P( Seq | Summer ) = 6.4 × 10⁻⁴
By Bayes' theorem:
P( Summer | Seq ) / P( Winter | Seq ) = [ P( Seq | Summer ) P( Summer ) ] / [ P( Seq | Winter ) P( Winter ) ]
so the conclusion also depends on the a priori probabilities of the two seasons.
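Redoing the slide's arithmetic as a quick check:

```python
# P(CSSSCFS | model), with the starting probability fixed at 0.25 as stated.
winter = 0.25 * 0.1 * 0.2 * 0.2 * 0.3 * 0.2 * 0.2   # = 1.2e-05
summer = 0.25 * 0.4 * 0.8 * 0.8 * 0.1 * 0.1 * 1.0   # = 6.4e-04

# The likelihood ratio is ~53 in favour of Summer; by Bayes' theorem the
# posterior ratio also multiplies in the prior ratio P(Summer)/P(Winter).
print(summer / winter)  # ~53.3
```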
Simple Markov model for DNA sequences
DNA: C = {Adenine, Cytosine, Guanine, Thymine}
[Figure: four-state Markov model over A, C, G, T.]
16 transition probabilities (12 of which are independent) + 4 BEGIN probabilities + 4 END probabilities.
The parameters of the model are different in different zones of DNA: they describe the overall composition and the dinucleotide recurrences.
Example of Markov models: GpC islands
[Figure: two four-state Markov models over A, C, G, T, one for GpC islands and one for non-GpC islands; example sequences GATGCGTCGC (GpC island) and CTACGCAGCG (non-GpC island).]
In the Markov model of GpC islands, a_GC is higher than in the Markov model of non-GpC islands. Given a sequence s we can evaluate:
P( GpC | s ) = P( s | GpC ) P( GpC ) / [ P( s | GpC ) P( GpC ) + P( s | nonGpC ) P( nonGpC ) ]
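A sketch of this computation; the transition tables below are invented, chosen only so that a_GC is higher inside islands, and the END state is ignored for brevity:

```python
def chain_likelihood(seq, a_begin, a):
    """P(s | first-order Markov model), END transition omitted for brevity."""
    p = a_begin[seq[0]]
    for x, y in zip(seq, seq[1:]):
        p *= a[x][y]
    return p

def p_island(seq, island, non_island, prior=0.5):
    """P(GpC | s) by Bayes' theorem; island/non_island are (a_begin, a) pairs."""
    li = chain_likelihood(seq, *island) * prior
    ln = chain_likelihood(seq, *non_island) * (1 - prior)
    return li / (li + ln)

# Invented tables: uniform background vs. a model with a boosted a_GC.
uniform = {x: {y: 0.25 for y in 'ACGT'} for x in 'ACGT'}
island_a = {x: dict(uniform[x]) for x in 'ACGT'}
island_a['G'] = {'A': 0.15, 'C': 0.55, 'G': 0.15, 'T': 0.15}
begin = {x: 0.25 for x in 'ACGT'}
print(p_island('GATGCGTCGC', (begin, island_a), (begin, uniform)))  # ~0.64
```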
The simplest probabilistic models: Markov Models • Definition • Training
Probabilistic training of a parametric method
Generally speaking, a parametric model M, with parameters θ, aims to reproduce a set of known data D.
[Figure: model M with parameters θ generates modelled data, to be compared with the real data D.]
How to compare them?
Training of Markov models
Let θ_M be the set of parameters of model M. During the training phase, the parameters are estimated from the set of known data D.
Maximum Likelihood estimation (ML):
θ_ML = argmax_θ P( D | M, θ )
It can be proved that the ML estimate is the frequency of occurrence as counted in the data set D:
a_ik = n_ik / Σ_j n_ij
Maximum A Posteriori estimation (MAP):
θ_MAP = argmax_θ P( θ | M, D ) = argmax_θ [ P( D | M, θ ) P( θ ) ]
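A counting sketch of the ML formula (BEGIN and END transitions are omitted for brevity):

```python
from collections import Counter, defaultdict

def train_transitions(sequences):
    """ML estimate a[r][t] = n_rt / sum_j n_rj, counted over the data set D."""
    counts = defaultdict(Counter)
    for s in sequences:
        for prev, cur in zip(s, s[1:]):
            counts[prev][cur] += 1
    return {r: {t: n / sum(c.values()) for t, n in c.items()}
            for r, c in counts.items()}

print(train_transitions(["CRRCS", "CSSRC"]))  # e.g. a['C'] == {'R': 1/3, 'S': 2/3}
```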
Example (coin tossing)
Given N tosses of a coin (our data D), the outcomes are h heads and t tails (N = t + h).
ASSUME the model: P( D | M ) = p^h (1 - p)^t
Computing the maximum likelihood of P( D | M ):
d P(D|M) / dp = p^(h-1) (1 - p)^(t-1) [ h(1 - p) - tp ] = 0
We obtain that our estimate of p is p = h / (h + t) = h / N.
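A brute-force numeric check of this result, with invented counts:

```python
# With h = 7 heads and t = 3 tails, the likelihood p**h * (1-p)**t
# should peak at p = h/N = 0.7.
h, t = 7, 3
grid = [i / 1000 for i in range(1, 1000)]
print(max(grid, key=lambda p: p**h * (1 - p)**t))  # 0.7
```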
Example (error measure)
Suppose you think that your data are affected by a Gaussian error, so that they are distributed according to
F(x_i) = A · exp[ -(x_i - m)² / (2 s²) ]
with A = 1 / (s √(2π)).
If your measures are independent, the data likelihood is
P( Data | model ) = Π_i F(x_i)
Find the m and s that maximize P( Data | model ).
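Maximising this likelihood gives the sample mean for m and the 1/N sample standard deviation for s; a minimal sketch on invented data:

```python
import math

data = [1.2, 0.8, 1.1, 0.9, 1.0]  # invented measurements
m = sum(data) / len(data)                                  # ML estimate of m
s = math.sqrt(sum((x - m)**2 for x in data) / len(data))   # ML estimate of s
print(m, s)  # 1.0, ~0.141
```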
Maximum likelihood training: proof (sketch)
Given a sequence s contained in D, s = s_1 s_2 s_3 … s_T, we can count the number of transitions n_jk between any two states j and k, where states 0 and N+1 are BEGIN and END. Maximising P( D | M, θ ) under the normalisation constraints, taken into account using the Lagrange multipliers λ_k, yields the frequency-of-occurrence estimate a_jk = n_jk / Σ_l n_jl quoted above.
Hidden Markov Models • Preliminary examples
Loaded dice
We have 99 regular dice (R) and 1 loaded die (L).

Die   P(1)   P(2)   P(3)   P(4)   P(5)   P(6)
R     1/6    1/6    1/6    1/6    1/6    1/6
L     1/10   1/10   1/10   1/10   1/10   1/2

Given a sequence:
4156266656321636543662152611536264162364261664616263
we don't know the sequence of dice that generated it:
RRRRRLRLRRRRRRRLRRRRRRRRRRRRLRLRRRRRRRRLRRRRLRRRRLRR
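A simulation sketch of this game; the 0.01 switching probability matches the model drawn on the next slides:

```python
import random

EMIT = {'R': [1/6] * 6, 'L': [1/10] * 5 + [1/2]}  # P(1)...P(6) for each die

def roll_sequence(n, p_switch=0.01):
    """Generate n rolls, occasionally swapping the regular/loaded die."""
    state, rolls, path = 'R', [], []
    for _ in range(n):
        rolls.append(random.choices(range(1, 7), weights=EMIT[state])[0])
        path.append(state)
        if random.random() < p_switch:          # hidden Markov transition
            state = 'L' if state == 'R' else 'R'
    return rolls, path

rolls, path = roll_sequence(52)
```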
Loaded dice
Hypothesis: a die is chosen for each roll, so two stochastic processes give origin to the sequence of observations:
1) choosing the die (R or L);
2) rolling the die.
The sequence of dice is hidden. The first process is assumed to be Markovian (in this case a 0-order MM); the outcome of the second process depends only on the state reached in the first process (that is, the chosen die).
Casino
Model:
• Each state (R and L) generates a character of the alphabet C = {1, 2, 3, 4, 5, 6}.
• The emission probabilities depend only on the state.
• The transition probabilities describe a Markov model that generates a state path: the hidden sequence (π).
• The observation sequence (s) is generated by two concomitant stochastic processes.
[Figure: states R and L, self-transitions 0.99, cross-transitions 0.01.]
The observation sequence (s) is generated by two concomitant stochastic processes. Extending the path shown above by one step:
Choose the state: R (probability 0.99, staying on the regular die).
Choose the symbol: 1 (probability 1/6, given R).
s: 4156266656321636543662152611
π: RRRRRLRLRRRRRRRLRRRRRRRRRRRR
At the next step the die switches:
Choose the state: L (probability 0.01, moving from R to L).
Choose the symbol: 5 (probability 1/10, given L).
s: 41562666563216365436621526115
π: RRRRRLRLRRRRRRRLRRRRRRRRRRRRL
Some not-so-serious examples
1) DEMOGRAPHY
Observable: number of births and deaths in a year in a village.
Hidden variable: economic conditions (as a first approximation we can consider the success in business as a random variable and, as a consequence, the wealth as a Markov variable).
---> Can we deduce the economic conditions of a village during a century by means of the register of births and deaths?
2) THE METEOROPATHIC TEACHER
Observable: average of the marks that a meteoropathic teacher gives to his or her students during a day.
Hidden variable: weather conditions.
---> Can we deduce the weather conditions during a year by means of the class register?
To be more serious
1) SECONDARY STRUCTURE
Observable: protein sequence.
Hidden variable: secondary structure.
---> Can we deduce (predict) the secondary structure of a protein given its amino acid sequence?
2) ALIGNMENT
Observable: protein sequence.
Hidden variable: position of each residue along the alignment of a protein family.
---> Can we align a protein to a family, starting from its amino acid sequence?
Hidden Markov Models • Preliminary examples • Formal definition
Formal definition of Hidden Markov Models
An HMM is a stochastic generator of sequences characterised by:
• N states
• A set of transition probabilities between states {a_kj}: a_kj = P( π(i) = j | π(i-1) = k )
• A set of starting probabilities {a_0k}: a_0k = P( π(1) = k )
• A set of ending probabilities {a_k0}: a_k0 = P( π(i) = END | π(i-1) = k )
• An alphabet C with M characters
• A set of emission probabilities for each state {e_k(c)}: e_k(c) = P( s_i = c | π(i) = k )
Constraints:
Σ_k a_0k = 1
a_k0 + Σ_j a_kj = 1 for every k
Σ_{c∈C} e_k(c) = 1 for every k
(s: sequence; π: path through the states)
Generating a sequence with an HMM
1) Set i = 1 and choose the initial state π(1) following the probabilities a_0k.
2) Choose the character s_i from the alphabet C following the probabilities e_k(c).
3) Choose the next state following the probabilities a_kj and a_k0.
4) If the END state is chosen, stop; otherwise set i ← i + 1 and go back to step 2.
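The loop above, transcribed as a sketch; here the label '0' plays the role of both BEGIN (row a['0']) and END (column a[k]['0']), mirroring the a_0k / a_k0 notation:

```python
import random

def generate(a, e):
    """Sample (sequence, path) from an HMM with transitions a and emissions e."""
    states = list(e)
    # 1) initial state, following a_0k
    state = random.choices(states, weights=[a['0'][k] for k in states])[0]
    seq, path = [], []
    while True:
        # 2) emit a character, following e_k(c)
        chars = list(e[state])
        seq.append(random.choices(chars, weights=[e[state][c] for c in chars])[0])
        path.append(state)
        # 3) choose the next state (or END = '0'), following a_kj and a_k0
        nxt = random.choices(states + ['0'],
                             weights=[a[state][j] for j in states] + [a[state]['0']])[0]
        if nxt == '0':                      # 4) END chosen: stop
            return ''.join(seq), ''.join(path)
        state = nxt
```

With the GpC-island tables of the next slides, generate(a, e) returns pairs such as ('AGCG…', 'YYYN…').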
GpC island, simple model
BEGIN: a_0Y = 0.2, a_0N = 0.8
Y (GpC island): a_YY = 0.7, a_YN = 0.2, a_Y0 = 0.1
  e_Y(A) = 0.1, e_Y(C) = 0.4, e_Y(G) = 0.4, e_Y(T) = 0.1
N (non-GpC island): a_NY = 0.1, a_NN = 0.8, a_N0 = 0.1
  e_N(A) = 0.25, e_N(C) = 0.25, e_N(G) = 0.25, e_N(T) = 0.25
s: AGCGCGTAATCTG
π: YYYYYYYNNNNNN
P( s, π | M ) can be easily computed.
GpC island, simple model (parameters as above)
P( s, π | M ) can be easily computed:
s:          A    G    C    G    C    G    T    A    A    T    C    T    G
π:          Y    Y    Y    Y    Y    Y    Y    N    N    N    N    N    N
Emission:   0.1  0.4  0.4  0.4  0.4  0.4  0.1  0.25 0.25 0.25 0.25 0.25 0.25
Transition: 0.2  0.7  0.7  0.7  0.7  0.7  0.7  0.2  0.8  0.8  0.8  0.8  0.8  0.1
(The transition row has one extra entry: it starts with the BEGIN→Y transition and ends with the final N→END transition.)
Multiplying all the probabilities gives the probability of having the sequence AND the path through the states.
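The same product written out in a sketch, with the parameters copied from the slide:

```python
a = {'0': {'Y': 0.2, 'N': 0.8},                       # BEGIN row
     'Y': {'Y': 0.7, 'N': 0.2, '0': 0.1},             # '0' column = END
     'N': {'Y': 0.1, 'N': 0.8, '0': 0.1}}
e = {'Y': {'A': 0.1, 'C': 0.4, 'G': 0.4, 'T': 0.1},
     'N': {'A': 0.25, 'C': 0.25, 'G': 0.25, 'T': 0.25}}

def joint_prob(s, pi):
    """P(s, pi | M): start transition, then emission x transition per position."""
    p = a['0'][pi[0]]
    for i in range(len(s)):
        p *= e[pi[i]][s[i]]
        p *= a[pi[i]][pi[i + 1]] if i + 1 < len(s) else a[pi[i]]['0']
    return p

print(joint_prob('AGCGCGTAATCTG', 'YYYYYYYNNNNNN'))  # ~3.9e-12
```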
Evaluation of the joint probability of the sequence and the path:
P( s, π | M ) = a_0,π(1) · Π_{i=1..T} [ e_π(i)(s_i) · a_π(i),π(i+1) ]
where the last factor a_π(T),π(T+1) stands for the ending transition a_π(T),0.
Hidden Markov Models • Preliminary examples • Formal definition • Three questions
GpC island, simple model (parameters as above)
s: AGCGCGTAATCTG
π: ?????????????
P( s, π | M ) can be easily computed for any given path, but the path is hidden. How do we evaluate P( s | M )?