
12TH INTERNATIONAL WORKSHOP IN PHYLOGENETIC METHODS SUPPORTED BY THE WILLI HENNIG SOCIETY






Presentation Transcript


  1. 12TH INTERNATIONAL WORKSHOP IN PHYLOGENETIC METHODS SUPPORTED BY THE WILLI HENNIG SOCIETY, INSTITUTO DE ECOLOGIA, A.C., MAY 22‐28, 2011. Model Based Phylogenetics I: Maximum Likelihood. Christopher P. Randle, Sam Houston State University, Department of Biological Sciences.

  2. Regarding this week’s workshop “I imagine that students will receive a first-rate introduction to the latest advances in parsimony, which seem to focus mostly on improvements to tree searches.” -John Huelsenbeck, Dechronization blog But this is the central problem of phylogenetics, isn’t it? Model-based approaches are not magical, and they are no more difficult to understand than parsimony.

  3. Part 1: What is Likelihood? -Likelihood of a hypothesis: L(h) = p(data | hypothesis) -The probability of the data (the evidence, or series of observations) if that hypothesis is true. • The likelihood of a hypothesis is proportional to the probability that it is true (Fisher 1922). (Photo: Ronald Fisher.)

  4. Part 1: What is Likelihood? Example: An unfortunate story that would initially seem to be about fishing, but is ultimately about a friendship that depends on a coin flip. Given a coin, how can you tell if it is fair? Hypothesis A: The coin is unbiased (50% heads). Hypothesis B: The coin is biased (90% heads). Data: In four flips, you obtain 3 heads and 1 tail. What is the probability of observing those data given either hypothesis?

  5. HA: 50% heads. From an intuitive approach we can see that there are 2^4 = 16 possible, equally probable outcomes if the hypothesis is true. The probability of any particular sequence of heads and tails in four flips = (0.5)(0.5)(0.5)(0.5) = 0.0625, or 1/16. Of these sixteen, four outcomes have exactly three heads: p(3 heads) = p(HHHT) + p(HHTH) + p(HTHH) + p(THHH) = 4(0.5)^4 = 0.25. So p(3 heads | unbiased coin) = 0.25, and L(unbiased coin | 3 heads) = 0.25.

  6. Binomial distribution. General formulation: L(Hx) = p(K | N, Hx) = C(N, K) x^K (1 − x)^(N − K), where N = number of trials, x = probability of heads under Hx, and K = number of heads in N trials, K ∈ {0, 1, 2, …, N}. L(HA) = p(3 | 4, HA) = C(4, 3)(0.5)^3(0.5)^1 = 0.25.

  7. L(HA) = p(3 | 4, HA) = C(4, 3)(0.5)^3(0.5)^1 = 0.25. L(HB) = p(3 | 4, HB) = C(4, 3)(0.9)^3(0.1)^1 ≈ 0.29. Four flips of a coin do not allow easy differentiation between Hypothesis A and Hypothesis B!

  8. L(HA) = p(7 | 8, HA) = C(8, 7)(0.5)^7(0.5)^1 ≈ 0.031. L(HB) = p(7 | 8, HB) = C(8, 7)(0.9)^7(0.1)^1 ≈ 0.38. Eight flips (in this case) allow considerably greater power to differentiate.
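
The coin calculations above can be reproduced in a few lines. This is a minimal sketch (not part of the original slides) using only Python's standard library; the function name binom_likelihood is ours.

```python
from math import comb

def binom_likelihood(k, n, p):
    """L(H) = p(K = k | N = n) under a binomial model with heads probability p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Four flips, three heads: the two hypotheses are hard to tell apart.
print(binom_likelihood(3, 4, 0.5))   # H_A (fair coin): 0.25
print(binom_likelihood(3, 4, 0.9))   # H_B (90% heads): ~0.29

# Eight flips, seven heads: the likelihoods separate much more clearly.
print(binom_likelihood(7, 8, 0.5))   # H_A: ~0.031
print(binom_likelihood(7, 8, 0.9))   # H_B: ~0.38
```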

  9. Part 2: Phylogeny and Likelihood. For character j, the likelihood contribution L(j) for this tree is the sum of the probabilities of all possible character-state assignments at its z internal nodes. How many unique state assignments are possible for this tree, given 4 nucleotide character states? -There are two internal nodes with four possible character states each. -There are 4^2 = 16 unique state assignments.
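
As a quick check of the counting argument, the internal-node state assignments can simply be enumerated; a small illustrative sketch (not from the slides):

```python
from itertools import product

states = "ACGT"
n_internal = 2                          # the four-taxon tree on the slide

assignments = list(product(states, repeat=n_internal))
print(len(assignments))                 # 4**2 = 16 unique assignments
```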

  10. In general, the number of unique state assignments for r character states at the z = n − 2 internal nodes (of an unrooted tree of n taxa) is r^z.

  11. The likelihood of the entire data set is the product of the likelihoods for each character j given tree (((1,2)3)4), i.e. a joint probability: L = L(1) × L(2) × L(3) × … × L(n) = ∏_{j=1}^{n} L(j). Since these probabilities are very small numbers, it is better to work with the natural log of these likelihoods, in which case ln L = ln L(1) + ln L(2) + ln L(3) + … + ln L(n) = Σ_{j=1}^{n} ln L(j).
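
A toy sketch of why log likelihoods are used: the per-character values below are invented, but they show how the raw product shrinks toward numerical underflow while the log-likelihood sum stays manageable.

```python
import math

# hypothetical per-character likelihoods L(j) for some tree
char_likelihoods = [0.012, 0.0034, 0.0008, 0.021, 0.0065]

L = math.prod(char_likelihoods)                        # tiny; underflows for real data sets
lnL = sum(math.log(Lj) for Lj in char_likelihoods)     # equivalent, but well behaved

print(L)      # ~4.5e-12
print(lnL)    # ~ -26.1
```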

  12. Trees by themselves imply nothing regarding the probability of state change. A process model of evolution is required to assess the likelihood of a tree.

  13. Part 3: What is a Model? A statistical hypothesis in which essential aspects are represented by parameters. Parameters specify all the ways in which predictions of the hypothesis may vary. Hypothesis 1: Coin A is fair. Parameter: p(heads). -One parameter is sufficient to represent hypotheses about the fairness of coins. -In likelihood analysis (and in Bayesian analysis) the values of parameters must be estimated.

  14. Evaluating Models with Likelihood: Another Trivial Example. Ron has been abducted by aliens and suspects that he has been dropped off somewhere in Switzerland. Fact: 65% of Swiss speak German as a first language. Fact: 69% of Swiss work in the service industry. Parameter 1: p(German speaker). Parameter 2: p(service-industry worker). Given a sample of K German speakers and X service-industry workers among N interviewees, we can ask the following: what is the probability of observing this sample if Ron's hunch is true? -Each type of data should be binomially distributed. -We may obtain the likelihood that Ron is in Switzerland by 1. estimating the probability for each character separately, with parameter values chosen to reflect the hypothesis, and 2. taking the product of the likelihoods for each character (or the sum of the log likelihoods), as in the sketch below.
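
A sketch of the two-character calculation. The interview counts are invented for illustration; the 65% and 69% parameter values come from the hypothesis on the slide.

```python
from math import comb, log

def binom_lnL(k, n, p):
    """Log likelihood of k 'successes' in n trials with success probability p."""
    return log(comb(n, k)) + k * log(p) + (n - k) * log(1 - p)

# Hypothetical sample: 30 interviewees, 21 German speakers, 20 service-industry workers
lnL_switzerland = binom_lnL(21, 30, 0.65) + binom_lnL(20, 30, 0.69)
print(lnL_switzerland)   # sum of log likelihoods = log of the product of likelihoods
```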

  15. Evaluating Models with Likelihood: More Trivial Examples. There may be more than one way to specify parameters for the same hypothesis. Hypothesis: Ron is in Switzerland. Parameterization 1: parameter 1 = latitude; parameter 2 = longitude. Parameterization 2: parameter 1 = distance from Xalapa; parameter 2 = direction from Xalapa. How parameters are specified should depend strongly on the data that you hope to obtain and the inference that you want to make.

  16. Part 4: Continuous Markov Models in Phylogenetics. Trees themselves imply NOTHING regarding probability of state change. Models of change are needed! Some general assumptions of evolutionary models:* Time reversibility: the direction of change is immaterial for the calculation, e.g. prob{A→C} = prob{C→A}. Markov process: changes on one branch occur independently of changes on another. Homogeneity: the process is homogeneous across characters and throughout the branching process. *All evolutionary models in ML are continuous Markov models; for now we will only discuss those that model changes in nucleotide sequences.

  17. Model format: each model must include an instantaneous rate matrix that specifies the relative probabilities of each type of substitution over an infinitesimal increment of time dt. The instantaneous rate matrix is also known as a Q-matrix, and in many ways it is analogous to a Sankoff matrix, but rather than specifying cost in number of steps, it specifies the instantaneous probability of change.

  18. Each element of the Q-matrix is termed an instantaneous rate parameter and must be estimated before probability can be assessed. The instantaneous rate of a change from A to T is Q(A→T) = μ·c·πT, where μ = the mean instantaneous substitution rate, c = the relative rate parameter for that substitution type, and πT = the frequency of T in the data matrix. Because μ appears in every element, the product of μ and the relative rate parameter is treated as a single parameter.

  19. Generalized Instantaneous Rate Matrix. • This matrix is symmetrical, reflecting the assumption of time reversibility. • Each row sums to 0; this is because the instantaneous rates will ultimately appear in an exponent of e, and e^0 = 1 (100%): the probability that a change either occurs or does not occur in any time interval must sum to one.
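
A small sketch of how such a rate matrix can be assembled, with invented relative rates and base frequencies; the point is only that each off-diagonal element is a relative rate times a base frequency, and that each row ends up summing to zero.

```python
import numpy as np

pi = np.array([0.3, 0.2, 0.2, 0.3])            # base frequencies for A, C, G, T (invented)
rel_rate = {"AC": 1.0, "AG": 4.0, "AT": 1.0,   # relative rate parameters a-f (invented)
            "CG": 1.0, "CT": 4.0, "GT": 1.0}
bases = "ACGT"

Q = np.zeros((4, 4))
for i, x in enumerate(bases):
    for j, y in enumerate(bases):
        if i != j:
            Q[i, j] = rel_rate["".join(sorted(x + y))] * pi[j]   # rate(x -> y) = c * pi_y
np.fill_diagonal(Q, -Q.sum(axis=1))            # diagonal chosen so each row sums to zero

print(Q.sum(axis=1))                           # ~[0. 0. 0. 0.]
```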

  20. Conventional models are specialized (constrained) cases of the General Time Reversible (GTR) model, with some parameters held equal, reducing the number of parameters to be estimated. Transversions: changes from a purine (A, G) to a pyrimidine (C, T) or vice versa. Transitions: changes within a nucleotide class (A <--> G), (C <--> T). -Constrained versions of GTR are formulated by holding relative rate parameters (a-f) equal for certain types of changes. Base frequencies can be made equal by specifying the same value of π for the elements of Q.

  21. So, how is GTR converted to HKY85,* for instance? Q = (matrix shown on slide). *Hasegawa, Kishino & Yano 1985: unequal base frequencies; separate transition and transversion rates.

  22. 1. Make all transversion types the same, using relative rate parameter "a". Q = (matrix shown on slide). *Hasegawa, Kishino & Yano 1985: unequal base frequencies; separate transition and transversion rates.

  23. 2. Make all transition types the same, using relative rate parameter "b". Q = (matrix shown on slide). *Hasegawa, Kishino & Yano 1985: unequal base frequencies; separate transition and transversion rates.

  24. How can we convert HKY85 into K2P?* Q = (matrix shown on slide). *Kimura 2-parameter: equal base frequencies; separate transition and transversion rates.

  25. Make all base-frequency parameters equal: this results in two instantaneous rate values. Q = (matrix shown on slide). *Kimura 2-parameter: equal base frequencies; separate transition and transversion rates.

  26. The most restrictive model is Jukes-Cantor.* Q = (matrix shown on slide). *Jukes-Cantor (JC69): equal base frequencies; all substitutions equally probable.

  27. Part 4: Estimating likelihood given a model. • To determine the actual probability of change, branch lengths (t) must be specified. -Branch lengths represent the expected number of substitutions on a branch. -The rate of change on a branch and the duration of the branch (in time) are not distinguishable without making additional assumptions. • The Q-matrix is an estimate of instantaneous rates, i.e. rates of change over an infinitesimal amount of time. • The probability matrix of change for any given character over a branch of length t is P(t) = e^(Qt). For JC69: Pij(t) = 1/4 + (3/4)e^(−μt) if i = j, and Pij(t) = 1/4 − (1/4)e^(−μt) if i ≠ j.
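
A sketch of the relationship P(t) = e^(Qt) for JC69, assuming the parameterization in which every off-diagonal rate is μπ_j = μ/4 (the assumption under which the closed-form expressions on the slide come out as written). It compares the numerical matrix exponential with the closed form.

```python
import numpy as np
from scipy.linalg import expm

mu = 1.0                                              # mean instantaneous substitution rate
Q = (mu / 4.0) * np.ones((4, 4)) - mu * np.eye(4)     # JC69: Q_ij = mu/4, Q_ii = -3mu/4

t = 0.5                                               # branch length
P = expm(Q * t)                                       # P(t) = e^(Qt)

# Closed-form JC69 probabilities from the slide
p_same = 0.25 + 0.75 * np.exp(-mu * t)                # i = j
p_diff = 0.25 - 0.25 * np.exp(-mu * t)                # i != j

print(np.allclose(np.diag(P), p_same))                # True
print(np.allclose(P[0, 1], p_diff))                   # True
```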

  28. The actual P calculations on this slide have been abbreviated for simplicity, but you get the picture.

  29. Additional parameter: Γ (rate heterogeneity). Up until this point it has been assumed that all characters evolve at the same underlying base rate μ. This is unrealistic, in that for many data sets the evolution of some characters will be constrained by natural selection. Gamma distribution: a representation of moderately to very asymmetrical probability densities. Shape parameter = α; scale parameter = β = 1.

  30. Additional parameter: Γ (rate heterogeneity). Gamma correction: usually, once the shape and scale parameters are determined, the gamma distribution is broken up into rate classes (rj), and each character is assigned a rate parameter value proportional to the probability density of the gamma. This parameter is added to the estimate of the probability of the character-state distribution for each character j. The probability matrix for tree hypothesis i with associated branch lengths t for character j is then P(i, j) = e^(Q·t·rj). A common way of building the rate classes is sketched below.
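
A simplified sketch of one way to build the discrete rate classes: take the median of each equal-probability slice of a mean-one gamma (scale = 1/α is the mean-one convention assumed here; programs following Yang's 1994 approach more often use category means).

```python
import numpy as np
from scipy.stats import gamma

alpha = 0.5                                   # gamma shape parameter
ncat = 4                                      # number of discrete rate classes

# median of each of the ncat equal-probability slices of the gamma density
quantiles = (2 * np.arange(ncat) + 1) / (2.0 * ncat)          # 0.125, 0.375, 0.625, 0.875
rates = gamma.ppf(quantiles, a=alpha, scale=1.0 / alpha)      # mean-one gamma
rates *= ncat / rates.sum()                   # rescale so the class rates average to 1

print(rates)    # slow classes near 0, one or two fast classes well above 1
```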

  31. Additional parameter: invariant sites. Obviously, some characters in a data set do not change and are invariant. The proportion of invariant sites is parameterized as q, so that Lj = q·LI + (1 − q)·LV, where Lj = the likelihood of the tree given character j, LI = the likelihood for the tree if character j does not change, and LV = the likelihood for the tree if character j does change. Ping-pong effect: if one of the rate classes assigned to the gamma is sufficiently close to zero, this may effectively incorporate I, so gamma (α) and invariants (q) may be difficult to estimate simultaneously.

  32. How are parameters estimated during a likelihood search? Parameters that must be estimated: instantaneous rates and nucleotide frequencies (the Q-matrix), branch lengths, and the gamma shape and scale parameters. At every stage during a tree search, parameter values may be readjusted to maximize likelihood. Parameter values that maximize likelihood are retained until better values are found. -This is why ML searches take so long. -In reality, Q-matrix parameters are often held fairly constant, and most change occurs in the topology and branch lengths. Starting values are often estimated at the beginning of the tree search from a most parsimonious tree. A toy example of one such optimization step follows.
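
A toy sketch of one such optimization step (not a full tree search): finding the value of μt that maximizes a JC69 likelihood for a single pair of sequences, given invented counts of identical and differing sites.

```python
import numpy as np
from scipy.optimize import minimize_scalar

n_same, n_diff = 90, 10     # invented: 90 identical sites, 10 differing sites
mu = 1.0

def neg_lnL(t):
    # JC69 transition probabilities over the whole path between the two sequences
    p_same = 0.25 + 0.75 * np.exp(-mu * t)
    p_diff = 0.75 - 0.75 * np.exp(-mu * t)    # probability of ending in any of the 3 other states
    return -(n_same * np.log(p_same) + n_diff * np.log(p_diff))

fit = minimize_scalar(neg_lnL, bounds=(1e-8, 10.0), method="bounded")
print(fit.x)   # ~0.143 for 10% observed differences
```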

  33. Part 5:Choosing the appropriate model Likelihood models are chosen that provide the best fit (highest likelihood) without adding too many parameters.

  34. Likelihood Ratio Test. Pairs of nested models can be compared using a likelihood ratio test: Δ = 2(ln L1 − ln L0), or equivalently Δ = −2 ln(L0 / L1), where Δ should approximate the χ² distribution, with degrees of freedom (k) equal to the difference in the number of free parameters between model0 and model1.
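
A minimal sketch of the test with invented log likelihoods; the χ² tail probability comes from scipy.

```python
from scipy.stats import chi2

lnL0 = -3456.2    # invented log likelihood of the simpler (null) model
lnL1 = -3441.7    # invented log likelihood of the more parameter-rich model
k = 2             # difference in number of free parameters

delta = 2 * (lnL1 - lnL0)
p_value = chi2.sf(delta, df=k)       # upper-tail probability of the chi-square distribution
print(delta, p_value)                # reject the simpler model if p_value < 0.05
```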

  35. Hierarchical LRT. Hierarchical likelihood ratio tests: here likelihood ratio tests are used to select the best-fit model among a set of 24 candidate models. At each test, two models differing in a set of parameter constraints are contrasted. The null or simple model (above) can be accepted (A) or rejected (R) against the alternative or complex model (below). In the former case the null model, and in the latter case the alternative model, becomes the null model of the next test. In red, a potential sequence of test outcomes, leading to the selection of the HKY+G model. Many other hierarchies (orders of addition/removal of parameters) could be constructed. From: Posada & Crandall, 2001. Syst. Biol. 50(4): 580-601.

  36. Dynamic LRT. The LRT may begin anywhere in the hierarchy and will move up or down depending on the likelihood ratios. From: Posada & Crandall, 2001. Syst. Biol. 50(4): 580-601.

  37. Information criteria quantify the information lost by removing parameters and do not require models to be nested. Akaike Information Criterion: AIC = −2 ln L + 2n, where n is the number of free parameters. Smaller AIC indicates better fit. Bayesian Information Criterion (BIC): estimates how well the data improve belief that the model is the "true" model. BIC = −2 ln L + n ln(X), where n is the number of free parameters and X is the number of characters. Smaller BIC indicates better fit.
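
A sketch of the two criteria as defined on the slide, applied to invented model fits.

```python
import math

def aic(lnL, n_free_params):
    return -2.0 * lnL + 2.0 * n_free_params

def bic(lnL, n_free_params, n_characters):
    return -2.0 * lnL + n_free_params * math.log(n_characters)

# invented log likelihoods for a richer and a simpler model, 1,200 characters
print(aic(-3441.7, 10), aic(-3456.2, 8))              # smaller AIC = better fit
print(bic(-3441.7, 10, 1200), bic(-3456.2, 8, 1200))  # smaller BIC = better fit
```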

  38. Part 6: Complex Modeling • Mixed model analysis: Different models are applied to different data partitions: Stems v. loops, synonymous vs. non-synonymous sites, introns vs. exons, etc. • Covarion models: Models allow changes (within sites) in the instantaneous rate matrix across a tree. Such changes are autocorrelated, meaning that the rate parameter changes in association with the branching process.

  39. Part 6: Complex Modeling. NCM (Tuffley and Steel 1997): no common mechanism. Each character is parameterized separately using JC69, and branch lengths vary between characters. The number of parameters to be estimated equals the product of the number of branches and the number of characters. MK (Lewis 2001): Markov k-states. A generalization of JC69 that allows the number of observed character states to vary across characters. MK requires many fewer parameters to be estimated than NCM, leading to greater statistical consistency.

  40. Maximum Likelihood: the tree hypothesis (topology + branch lengths) that maximizes the probability of having observed the data is the tree of maximum likelihood, and is to be preferred over less "likely" hypotheses. What's wrong with parsimony? 1) Parsimony yields logical rather than probabilistic inferences, which are more difficult to apply to secondary problems. -By being non-distributional, the results of parsimony analyses do not allow straightforward testing of hypotheses of relationship or character evolution, e.g. is the evolution of stem succulence concomitant with the modification of leaves into spines?

  41. Two competing ideologies. Is the evolution of stem succulence concomitant with the modification of leaves into spines? -Parsimony: in several cases it is, and in other cases it isn't. -Statistical: the null hypothesis that these two events are not connected (or are distributed randomly with regard to each other given phylogeny j) can be rejected with a probability of XX%.

  42. Maximum Likelihood: the tree hypothesis (topology + branch lengths) that maximizes the probability of having observed the data is the tree of maximum likelihood, and is to be preferred over less "likely" hypotheses. What's wrong with parsimony? 2) When framed in a statistical setting, parsimony can be inconsistent. This means that as the quantity of data increases, parsimony will prefer an incorrect topology with increasing support. (Figure: MP tree vs. actual branching process.)

  43. Inconsistency in parsimony results in something called long-branch attraction. (Figure: the true underlying phylogeny.) Long-branch attraction: when two branches that are not sisters each exhibit many autapomorphies (A and B), parsimony will assume that some of them are synapomorphies, resulting in these two branches incorrectly forming a clade. This problem is exacerbated by the addition of data from the same distribution.
