
Motif Refinement using Hybrid Expectation Maximization Algorithm

This paper presents a hybrid approach for motif refinement, combining global and local solvers to efficiently detect conserved patterns in DNA and protein sequences. The hybrid algorithm uses a random projection method for neighborhood estimation and expectation maximization for local optimization. The approach improves upon existing methodologies by pairing the global solver's neighborhood estimates with a systematic search for additional locally optimal solutions.


Presentation Transcript


  1. Motif Refinement using Hybrid Expectation Maximization Algorithm. Chandan Reddy, Yao-Chung Weng, Hsiao-Dong Chiang. School of Electrical and Computer Engineering, Cornell University, Ithaca, NY 14853.

  2. Motif Finding Problem • Motifs are patterns in DNA and protein sequences that are strongly conserved because they carry important biological functions such as gene regulation and gene interaction • Finding these conserved patterns can be very useful for controlling the expression of genes • The motif finding problem is to detect novel, over-represented, unknown signals in a set of sequences (e.g. transcription factor binding sites in a genome).

  3. Motif Finding Problem Consensus pattern: 'CCGATTACCGA', an (l, d) = (11, 2) consensus pattern, i.e. a pattern of length l = 11 whose planted instances each differ from it by up to d = 2 mutations.
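
As a concrete illustration (not part of the slides), the short Python sketch below plants an (l, d) = (11, 2) instance of the consensus above by introducing exactly two point mutations; the function name plant_instance is hypothetical.

import random

def plant_instance(consensus: str, d: int, alphabet: str = "ACGT") -> str:
    """Return a copy of `consensus` with exactly d positions mutated."""
    instance = list(consensus)
    for p in random.sample(range(len(consensus)), d):
        # Substitute a base different from the original one at position p.
        instance[p] = random.choice([b for b in alphabet if b != instance[p]])
    return "".join(instance)

# Example: one (11, 2) instance of the consensus pattern from the slide.
print(plant_instance("CCGATTACCGA", d=2))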

  4. Problem Definition Without any prior knowledge of the consensus pattern, discover all instances (alignment positions) of the motif and then recover the final pattern from which all these instances differ by at most a given number of mutations.

  5. Complexity of the Problem Let n be the length of each DNA sequence, l the length of the motif, t the number of sequences, and d the number of mutations allowed in a motif instance. The running time of a brute-force approach: there are (n - l + 1) l-mers in each of the t sequences, so there are (n - l + 1)^t combinations of starting positions across the t sequences. Typically, n is much larger than l, e.g. n = 600, t = 20.
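
To make the count explicit (this arithmetic is mine; the slide's own running-time formula is not reproduced in the transcript): choosing one starting position in each of the t sequences gives

\[ \underbrace{(n-l+1) \times \cdots \times (n-l+1)}_{t \ \text{sequences}} = (n-l+1)^{t} \]

combinations, and for n = 600, l = 11, t = 20 this is 590^20, roughly 2.6 x 10^55 combinations of starting positions, which is why brute force is infeasible.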

  6. Existing Methodologies
  Generative probabilistic representation (continuous): Gibbs Sampling, Expectation Maximization, Greedy CONSENSUS, HMM based.
  Mismatch representation (discrete): Consensus, Projection Methods, Multiprofiler, Suffix Trees.

  7. Existing Methodologies
  Global solvers (e.g. Random Projection, Pattern Branching). Advantage: locate the neighborhood of globally optimal solutions. Disadvantage: may miss better solutions locally.
  Local solvers (e.g. EM, Gibbs Sampling, Greedy CONSENSUS). Advantage: return the best solution within a neighborhood. Disadvantage: rely heavily on the initial conditions.

  8. Our Approach Run a global solver (Random Projection) to estimate the neighborhood of a promising solution. Using this neighborhood as the initial guess, apply a local solver (Expectation Maximization) to refine the solution. Then perform an efficient neighborhood search to jump out of the current convergence region and find other local solutions systematically. The hybrid approach combines the advantages of both the global and local solvers.

  9. Random Projection Implements a hash function h(x) that maps each l-mer onto a k-dimensional space. All possible l-mers in the t sequences are hashed into 4^k buckets, where each bucket corresponds to a unique k-mer. After imposing certain conditions and setting a reasonable bucket threshold S, the buckets that exceed S are returned as the solution.
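
A minimal sketch of this bucketing step, assuming the projection simply selects k of the l positions at random; the function and variable names are mine, not the authors' implementation.

import random
from collections import defaultdict

def random_projection(sequences, l, k, threshold):
    """Hash every l-mer onto k randomly chosen positions and return the
    buckets whose occupancy exceeds the threshold S."""
    positions = sorted(random.sample(range(l), k))        # the projection h(x)
    buckets = defaultdict(list)
    for seq_idx, seq in enumerate(sequences):
        for start in range(len(seq) - l + 1):
            lmer = seq[start:start + l]
            key = "".join(lmer[p] for p in positions)     # projected k-mer
            buckets[key].append((seq_idx, start))
    # Buckets exceeding the threshold mark promising motif neighborhoods.
    return {key: hits for key, hits in buckets.items() if len(hits) > threshold}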

  10. Expectation Maximization Expectation Maximization is a local solver with which we refine the solution yielded by the random projection step. The EM method iteratively updates the solution until it converges to a locally optimal one. Follow these steps: • Compute the scoring function • Iterate the Expectation step and the Maximization step
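
A compressed sketch of one EM iteration, assuming a 4-letter DNA alphabet, one motif occurrence per sequence, and a zero-order background model; the exact update rules in the paper may differ.

import numpy as np

BASES = "ACGT"
IDX = {b: i for i, b in enumerate(BASES)}

def em_step(sequences, profile, background):
    """One EM iteration. profile is a 4 x l matrix of base probabilities,
    background a length-4 vector; returns the re-estimated profile."""
    l = profile.shape[1]
    counts = np.full((4, l), 0.1)                      # pseudocounts
    for seq in sequences:
        starts = range(len(seq) - l + 1)
        # E-step: posterior weight of each candidate start position.
        weights = np.array([
            np.prod([profile[IDX[seq[s + j]], j] / background[IDX[seq[s + j]]]
                     for j in range(l)])
            for s in starts
        ])
        weights /= weights.sum()
        # M-step: accumulate expected base counts, weighted by the posteriors.
        for w, s in zip(weights, starts):
            for j in range(l):
                counts[IDX[seq[s + j]], j] += w
    return counts / counts.sum(axis=0)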

  11. Profile Space A profile is a matrix of probabilities in which the rows represent the possible bases and the columns represent consecutive sequence positions. • Applying the profile space to the coefficient formula constructs the PSSM.
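
One standard way to turn such a profile into a position-specific scoring matrix, assuming a background base distribution q_b (the slide's exact coefficient formula is not shown in the transcript), is the log-odds form

\[ \mathrm{PSSM}_{b,j} = \log_{2} \frac{p_{b,j}}{q_{b}}, \]

where p_{b,j} is the profile probability of base b in column j.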

  12. Scoring Function: Maximum Likelihood
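
The equation on this slide is not reproduced in the transcript; a standard maximum-likelihood scoring function for an alignment with column base counts c_{b,j}, evaluated under profile p, is

\[ \log L(p) = \sum_{j=1}^{l} \sum_{b \in \{A,C,G,T\}} c_{b,j} \, \log p_{b,j}, \]

which the EM iterations drive toward a local optimum.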

  13. Basic Idea A one-to-one correspondence between the critical points of the objective function and the equilibrium points of the corresponding gradient system:
  Local minimum ↔ stable equilibrium point
  Saddle point ↔ decomposition point
  Local maximum ↔ source

  14. Theoretical Background: Practical Stability Boundary. The problem of finding all the Tier-1 stable equilibrium points of x_s is the problem of finding all the decomposition points on its stability boundary.

  15. Theoretical Background Theorem (unstable manifold of a type-1 equilibrium point): Let x_s1 be a stable e.p. of the gradient system (2) and x_d a type-1 e.p. on the practical stability boundary ∂A_p(x_s1). Assume that there exist ε and δ such that |∇f(x)| > ε unless x ∈ {x : ∇f(x) = 0}. If every e.p. of (2) is hyperbolic and its stable and unstable manifolds satisfy the transversality condition, then there exists another stable e.p. x_s2 to which the one-dimensional unstable manifold of x_d converges. Our method finds the stability boundary between the two local minima and traces it to locate the saddle point; we use a new trajectory adjustment procedure to move along the practical stability boundary.

  16. Definitions Def. 1: x̄ is said to be a critical point of (1) if it satisfies the condition ∇f(x̄) = 0, where f(x) is the objective function, assumed to be in C²(ℝⁿ, ℝ). The corresponding nonlinear dynamical (gradient) system is ẋ = -∇f(x) (Eq. (2)). The solution curve of Eq. (2) starting from x at time t = 0 is called a trajectory and is denoted by Φ(x, ·) : ℝ → ℝⁿ. A state vector x̄ is called an equilibrium point (e.p.) of Eq. (2) if ∇f(x̄) = 0.

  17. Our Method

  18. Search Directions

  19. Search Directions

  20. Our Method The exit point method is implemented so that EM can move out of its convergence region and seek other locally optimal solutions (a schematic sketch follows):
  1. Construct a PSSM from the initial alignments.
  2. Calculate the eigenvectors of the Hessian matrix.
  3. Find exit points (or saddle points) along each eigenvector direction.
  4. Apply EM from the new stability/convergence region.
  5. Repeat from the first step.
  6. Return the maximum-score solution among {A, a1i, a2j}.
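
A high-level sketch of this loop; build_pssm, em_refine, hessian_eigenvectors, find_exit_point, and score are hypothetical placeholders passed in as callables, standing in for the corresponding routines described on the slides.

def hybrid_refine(initial_alignments, sequences, build_pssm, em_refine,
                  hessian_eigenvectors, find_exit_point, score):
    """Exit-point search around an EM-refined solution (schematic only).
    All callables are placeholders for the slide's routines."""
    pssm = build_pssm(initial_alignments)               # step 1: initial PSSM
    best = em_refine(pssm, sequences)                   # local solver (EM)
    candidates = [best]
    for direction in hessian_eigenvectors(best):        # step 2: eigenvectors of the Hessian
        exit_point = find_exit_point(best, direction)   # step 3: saddle point on the stability boundary
        if exit_point is not None:
            # step 4: re-apply EM from the neighboring convergence region
            candidates.append(em_refine(exit_point, sequences))
    return max(candidates, key=score)                   # return the highest-scoring solution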

  21. Results

  22. Improvements in the Alignment Scores

  23. Improvements in the Alignment Scores: Random Projection method results

  24. Performance Coefficient K is the set of residue positions of the planted motif instances, and P is the corresponding set of predicted positions.
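
For reference, this coefficient is commonly computed as the overlap between the two sets,

\[ \mathrm{PC} = \frac{|K \cap P|}{|K \cup P|}, \]

so a value of 1 means the predicted positions coincide exactly with the planted ones.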

  25. Results Different motifs and the average score using random starts, with the first-tier and second-tier improvements on synthetic data.

  26. Results Different motifs and the average score using random projection, with the first-tier and second-tier improvements on synthetic data.

  27. Results Different motifs and the average score using random projections, with the first-tier and second-tier improvements on real human sequences.

  28. Results on Real data

  29. Concluding Discussion Using a dynamical-systems approach, we have shown that the EM algorithm can be improved significantly. In the context of motif finding, there are many locally optimal solutions, and it is important to search the neighborhood space. Future work: try different global methods and other techniques such as GibbsDNA.

  30. Questions and suggestions!
