

  1. CS5263 Bioinformatics RNA Secondary Structure Prediction

  2. Outline • Biological roles for RNA • RNA secondary structure • What’s “secondary structure”? • How is it represented? • Why is it important? • How to predict?

  3. Central dogma • The flow of genetic information: DNA -(transcription)-> RNA -(translation)-> Protein • DNA itself is copied by replication

  4. Classical Roles for RNA • mRNA (messenger RNA) • tRNA (transfer RNA) • rRNA (ribosomal RNA, the RNA component of the ribosome)

  5. “Semi-classical” RNA • snRNA - small nuclear RNA (60-300 nt), involved in splicing (removing introns), etc. • RNase P - tRNA processing (~300 nt) • SRP RNA - signal recognition particle RNA: membrane targeting (~100-300 nt) • tmRNA - rescuing stalled ribosomes, destroying aberrant mRNAs • Telomerase RNA (200-400 nt) • snoRNA - small nucleolar RNA (many varieties; 80-200 nt)

  6. Non-coding RNAs • Dramatic discoveries in the last 10 years • Hundreds of new families • Many roles: regulation, transport, stability, catalysis, … • siRNAs (small interfering RNAs, Nobel Prize 2006) and miRNAs: both are ~21-23 nt • Regulate gene expression • Evidence of disease association • ~1% of DNA codes for protein, but ~30% of it is copied into RNA, i.e. ncRNA >> mRNA

  7. Take-home message • RNAs play many important roles in the cell beyond the classical ones • Many of these roles are yet to be discovered • RNA function is determined by structure

  8. Example: Riboswitch • A riboswitch is a structured element within an mRNA that regulates the mRNA's own activity

  9. RNA structure • Primary: sequence • Secondary: base-pairing • Tertiary: 3D shape

  10. RNA base-pairing • Watson-Crick pairing • C-G: ~3 kcal/mol • A-U: ~2 kcal/mol • “Wobble” pair G-U: ~1 kcal/mol • Non-canonical pairs also occur

  11. tRNA structure

  12. Secondary structure prediction • Given: CAUUUGUGUACCU…. • Goal: the set of base pairs (the secondary structure) into which the sequence folds • How can we compute that?

  13. Terminology • Stems • Hairpin loops • Interior loops • Bulge loops • Multi-branched loops

  14. Pseudoknot • A pseudoknot occurs when two base-paired regions cross each other (the pairs are not nested) • Makes structure prediction hard; not considered in most algorithms • [Figure: a pseudoknotted structure for the sequence ucgacuguaaaaaagcgggcgacuuucagucgcucuuuuugucgcgcgc, positions numbered 5' to 3']

  15. The Nussinov algorithm • Goal: maximize the number of base pairs • Idea: dynamic programming (“loop matching”) • Nussinov, Pieczenik, Griggs, Kleitman ’78 • Too simple for accurate prediction, but a stepping-stone for later algorithms

  16. The Nussinov algorithm • Problem: find the RNA structure with the maximum (weighted) number of nested pairings • Nested: no pseudoknots • Example sequence: ACCACGCUUAAGACACCUAGCUUGUGUCCUGGAGGUCUAUAAGUCAGACCGCGAGAGGGAAGACUCGUAUAAGCG • [Figure: the sequence drawn as a folded, fully nested secondary structure]

  17. The Nussinov algorithm • Given a sequence X = x1…xN • Define the DP matrix: F(i, j) = maximum number of base pairs when the subsequence xi…xj is folded optimally • The matrix is symmetric, so consider only i < j

  18. The Nussinov algorithm • Two cases to consider: • xi and xj are paired with each other: the optimal score is 1 + F(i+1, j-1) • xi and xj are not paired with each other: the optimal score is max over k = i..j-1 of F(i, k) + F(k+1, j)

  19. The Nussinov algorithm • Base case: F(i, i) = 0 • Recurrence: F(i, j) = max of F(i+1, j-1) + S(xi, xj) and max over k = i..j-1 of F(i, k) + F(k+1, j) • S(xi, xj) = 1 if xi, xj can form a base pair, and 0 otherwise • Generalization: S(A, U) = 2, S(C, G) = 3, S(G, U) = 1, or other types of scores (later) • F(1, N) gives the optimal score for the whole sequence

  20.-24. How to fill in the DP matrix? • F(i, j) = max of F(i+1, j-1) + S(xi, xj) and max over k = i..j-1 of F(i, k) + F(k+1, j) • Every term on the right refers to a shorter subsequence, so fill the matrix diagonal by diagonal: first all cells with j - i = 1, then j - i = 2, j - i = 3, ..., up to j - i = N - 1 (the cell F(1, N) for the whole sequence)

  25. Minimum loop length • Sharp turns are unlikely, so a hairpin loop cannot be arbitrarily short • Let the minimum hairpin loop length be 1 (about 3 in real predictions) • Hence F(i, i+1) = 0: adjacent bases are never paired • [Figure: a small hairpin stem]

  26. Algorithm • Initialization: F(i, i) = 0 for i = 1 to N; F(i, i+1) = 0 for i = 1 to N-1 • Iteration: for L = 2 to N-1, for i = 1 to N-L: j = i + L; F(i, j) = max of F(i+1, j-1) + S(xi, xj) and max over i <= k < j of F(i, k) + F(k+1, j) • Termination: the best score is given by F(1, N) • (For traceback, refer to the Durbin book)

  27. Complexity • The iteration fills O(N^2) cells F(i, j), and each cell takes O(N) work for the max over k • Time complexity: O(N^3) • Memory: O(N^2)
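
The fill above is compact enough to write out in full. Below is a minimal Python sketch of the Nussinov algorithm as described on slides 25-27 (score 1 per allowed pair, minimum hairpin loop length 1); the function and variable names are illustrative, and the traceback is the simplest possible version rather than the one in the Durbin book.

```python
# Minimal Nussinov fill + traceback sketch (illustrative, not production code).
# Assumptions: score 1 per allowed pair, minimum hairpin loop length 1.

ALLOWED = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}

def pair_score(a, b):
    """S(xi, xj): 1 if the two bases can pair, else 0."""
    return 1 if (a, b) in ALLOWED else 0

def nussinov(seq, min_loop=1):
    """Return (max number of base pairs, one optimal list of (i, j) pairs, 0-based)."""
    n = len(seq)
    F = [[0] * n for _ in range(n)]           # F[i][j] = best score on seq[i..j]
    for span in range(min_loop + 1, n):       # span = j - i, filled diagonal by diagonal
        for i in range(n - span):
            j = i + span
            best = F[i + 1][j - 1] + pair_score(seq[i], seq[j])   # (i, j) paired
            for k in range(i, j):                                  # bifurcation
                best = max(best, F[i][k] + F[k + 1][j])
            F[i][j] = best

    # Simple recursive traceback to recover one optimal set of pairs.
    pairs = []
    def trace(i, j):
        if j - i <= min_loop:
            return
        s = pair_score(seq[i], seq[j])
        if s and F[i][j] == F[i + 1][j - 1] + s:   # take the pair (i, j)
            pairs.append((i, j))
            trace(i + 1, j - 1)
            return
        for k in range(i, j):                      # otherwise find a matching split
            if F[i][j] == F[i][k] + F[k + 1][j]:
                trace(i, k)
                trace(k + 1, j)
                return
    trace(0, n - 1)
    return F[0][n - 1], pairs
```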

  28. Example • RNA sequence: GGGAAAUCC • Only count # of base-pairs • A-U = 1 • G-C = 1 • G-U = 1 • Minimum hairpin loop length = 1
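
Running the sketch above on this example reproduces the answer worked out on the following slides (three base pairs):

```python
score, pairs = nussinov("GGGAAAUCC", min_loop=1)
print(score)                                 # 3
print([(i + 1, j + 1) for i, j in pairs])    # [(1, 9), (2, 8), (3, 7)] in 1-based coordinates,
                                             # i.e. a G-C, G-C, G-U stem closing the AAA loop
```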

  29.-32. [Figures: the DP matrix for GGGAAAUCC, rows and columns labeled by the sequence, filled in diagonal by diagonal]

  33.-36. [Figures: traceback through the filled matrix for GGGAAAUCC, recovering optimal structures with three base pairs, e.g. a G-C, G-C, G-U stem closing an AAA hairpin loop, and alternative three-pair arrangements such as A-U, G-C, G-C]

  37. Energy minimization • Same fill as before, but with energies: E(i, j) = min of E(i+1, j-1) + e(xi, xj) and min over i <= k < j of E(i, k) + E(k+1, j) • e(xi, xj) is the energy of the xi-xj base pair • Stabilizing energies are negative, so we minimize rather than maximize • More complex energy rules: the energy also depends on neighboring bases
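
A minimal sketch of the energy-minimizing variant, reusing the structure of the nussinov function above; the per-pair energies are the rough values from slide 10 (C-G about -3, A-U about -2, G-U about -1 kcal/mol) used purely for illustration, since real energy rules depend on neighboring bases rather than on isolated pairs.

```python
import math

# Rough, context-free per-pair energies (kcal/mol), for illustration only.
PAIR_ENERGY = {("C", "G"): -3.0, ("G", "C"): -3.0,
               ("A", "U"): -2.0, ("U", "A"): -2.0,
               ("G", "U"): -1.0, ("U", "G"): -1.0}

def fold_min_energy(seq, min_loop=1):
    """Return the minimum total energy E(1, N) over nested structures."""
    n = len(seq)
    E = [[0.0] * n for _ in range(n)]
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            best = math.inf
            pair = (seq[i], seq[j])
            if pair in PAIR_ENERGY:                        # (i, j) paired
                best = E[i + 1][j - 1] + PAIR_ENERGY[pair]
            for k in range(i, j):                          # bifurcation
                best = min(best, E[i][k] + E[k + 1][j])
            E[i][j] = best
    return E[0][n - 1]
```

For GGGAAAUCC this fill would give -8.0 (two G-C pairs plus an A-U pair), showing that weighting pairs can favor a different structure than simply counting them.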

  38. More realistic energy rules • [Figure: a small stem-loop annotated with individual free-energy contributions, in kcal/mol] • 4 nt hairpin loop: +5.9 • terminal mismatch of the hairpin: -1.1 • two stacked pairs: -2.9 each (one using a special rule for the 1 nt bulge) • 1 nt bulge (A): +3.3 • stacks in the lower helix: -1.8, -0.9, -1.8, -2.1 • 5'-dangle: -0.3 • 3' unstructured tail: 0 • Overall ΔG = -4.6 kcal/mol • Complete energy rules at http://www.bioinfo.rpi.edu/zukerm/cgi-bin/efiles.cgi

  39. The Zuker algorithm – main ideas • Score pairs of adjacent base pairs (stacking) instead of individual base pairs (more accurate) • Separate scores for bulges • Separate scores for loops of different sizes and compositions • Separate scores for interactions between a stem and the beginning of a loop • Use an additional matrix to remember the current state, e.g. to model stacking energy: • W(i, j): energy of the best structure on i..j • V(i, j): energy of the best structure on i..j given that i and j are paired • Similar to affine-gap alignment (see the sketch below)
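
A toy sketch of the two-matrix bookkeeping: V(i, j) requires that i and j pair, W(i, j) does not. Only a flat hairpin penalty and a flat stacking bonus are modeled (both hypothetical numbers); the real Zuker/mfold recurrences add bulge, interior-loop, and multibranch terms, but the W/V split is the same idea.

```python
import math

PAIRS = {("A", "U"), ("U", "A"), ("C", "G"), ("G", "C"), ("G", "U"), ("U", "G")}

def two_matrix_fold(seq, min_loop=3):
    """Toy W/V fold: V requires (i, j) paired, W does not (energies in arbitrary units)."""
    HAIRPIN = 4.0   # hypothetical flat penalty for closing a hairpin loop
    STACK = -2.0    # hypothetical flat bonus for stacking on the pair (i+1, j-1)
    n = len(seq)
    V = [[math.inf] * n for _ in range(n)]   # best energy with (i, j) paired
    W = [[0.0] * n for _ in range(n)]        # best energy, no constraint
    for span in range(min_loop + 1, n):
        for i in range(n - span):
            j = i + span
            if (seq[i], seq[j]) in PAIRS:
                v = HAIRPIN                                   # (i, j) closes a hairpin
                if V[i + 1][j - 1] < math.inf:
                    v = min(v, STACK + V[i + 1][j - 1])       # (i, j) stacks on (i+1, j-1)
                V[i][j] = v
            best = min(V[i][j], W[i + 1][j], W[i][j - 1])     # pair (i, j), or leave an end unpaired
            for k in range(i, j):                             # or split into two substructures
                best = min(best, W[i][k] + W[k + 1][j])
            W[i][j] = best
    return W[0][n - 1]
```

The analogy with affine-gap alignment is that V plays the role of the "already in a helix" state, so extending a helix (stacking) can be scored differently from starting one (closing a hairpin).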

  40. Two popular implementations • mfold (Zuker) http://mfold.bioinfo.rpi.edu/ • RNAfold in the Vienna package (Hofacker) http://www.tbi.univie.ac.at/~ivo/RNA/

  41. Accuracy • 50-70% for sequences up to 300 nt • Not perfect, but useful • Possible reasons: • The energy rules are not perfect: 5-10% error • Many alternative structures lie within this error range • Alternative structures do exist • The structure may change in the presence of other molecules

  42. Comparative structure prediction • To maintain structure, two nucleotides that form a base pair tend to mutate together (compensatory mutations) • Given K homologous aligned RNA sequences: Human aagacuucggaucuggcgacaccc Mouse uacacuucggaugacaccaaagug Worm aggucuucggcacgggcaccauuc Fly ccaacuucggauuuugcuaccaua Orc aagccuucggagcgggcguaacuc • If the ith and jth positions covary so that they always remain complementary, they are likely to be base-paired

  43. Mutual information • f_ab(i, j): probability (frequency) that bases a and b occur at positions i and j • f_a(i): probability that base a occurs at position i aagacuucggaucuggcgacaccc uacacuucggaugacaccaaagug aggucuucggcacgggcaccauuc ccaacuucggauuuugcuaccaua aagccuucggagcgggcguaacuc • Example for positions 3 and 13: f_c(13) = 3/5, f_g(13) = 1/5, f_u(13) = 1/5; f_g(3) = 3/5, f_c(3) = 1/5, f_a(3) = 1/5; f_gc(3,13) = 3/5, f_cg(3,13) = 1/5, f_au(3,13) = 1/5

  44. Mutual information • Also called the covariance score • M(i, j) is high if base a at position i is always accompanied by base b at position j • Does not require a to base-pair with b • Advantage: can detect non-canonical base pairs • However, M = 0 if there is no mutation at all, even for perfectly conserved base pairs aagacuucggaucuggcgacaccc uacacuucggaugacaccaaagug aggucuucggcacgggcaccauuc ccaacuucggauuuugcuaccaua aagccuucggagcgggcguaacuc • One way to get around this is to combine covariance and energy scores
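
The score itself is not spelled out in the transcript; the usual definition, in terms of the frequencies from slide 43, is M(i, j) = sum over bases a, b of f_ab(i, j) * log2( f_ab(i, j) / (f_a(i) * f_b(j)) ). A small Python sketch (the function name and column indices are illustrative):

```python
import math
from collections import Counter

def mutual_information(alignment, i, j):
    """M(i, j) for 0-based columns i and j of a list of equal-length aligned sequences."""
    n = len(alignment)
    col_i = [s[i] for s in alignment]
    col_j = [s[j] for s in alignment]
    f_i = Counter(col_i)                  # single-column counts for f_a(i)
    f_j = Counter(col_j)                  # single-column counts for f_b(j)
    f_ij = Counter(zip(col_i, col_j))     # joint counts for f_ab(i, j)
    m = 0.0
    for (a, b), c in f_ij.items():
        p_ab = c / n
        m += p_ab * math.log2(p_ab / ((f_i[a] / n) * (f_j[b] / n)))
    return m

alignment = ["aagacuucggaucuggcgacaccc",
             "uacacuucggaugacaccaaagug",
             "aggucuucggcacgggcaccauuc",
             "ccaacuucggauuuugcuaccaua",
             "aagccuucggagcgggcguaacuc"]
print(mutual_information(alignment, 2, 12))   # columns 3 and 13 (1-based): about 1.37 bits
```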

  45. Comparative structure prediction • Given a multiple alignment, a structure that maximizes the sum of mutual information can be inferred by DP • However, building the alignment is itself hard, since structure is often more conserved than sequence

  46. Comparative structure prediction • In practice: 1. Get a multiple alignment 2. Find covarying bases and deduce a structure 3. Improve the multiple alignment (by hand) 4. Go to 2 • A manual EM process!

  47. Comparative structure prediction • Align then fold • Fold then align • Align and fold

  48. Context-free Grammar for RNA Secondary Structure • S = SS | aSu | cSg | uSa | gSc | L • L = aL | cL | gL | uL | ε • [Figure: a parse tree for a short sequence, with S productions generating the base pairs of the stems and L productions generating unpaired loop bases]
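
As a concrete example of how the grammar generates a stem-loop, the (hypothetical) 7-nt sequence gcaaagc can be derived as S => gSc => gcSgc => gcLgc => gcaLgc => gcaaLgc => gcaaaLgc => gcaaagc (finishing with L => ε): the two nested S productions generate the stem pairs g-c and c-g, and the L productions generate the unpaired aaa loop.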

  49. Stochastic Context-free Grammar (SCFG) • A probabilistic context-free grammar: each production carries a probability • Probabilities can be converted into weights, e.g.: S = SS (0), S = aSu | uSa (2), S = cSg | gSc (3), S = uSg | gSu (1), S = L (0), L = aL | cL | gL | uL | ε (0) • CFG vs. SCFG is analogous to regular grammar vs. HMM • The corresponding DP: S(i, j) = max of e(xi, xj) + S(i+1, j-1), L(i, j), and max over k of S(i, k) + S(k+1, j), with L(i, j) = 0

  50. SCFG Decoding • Decoding: given a grammar (SCFG/HMM) and a sequence, find the best parse (highest probability or score) • The Cocke-Younger-Kasami (CYK) algorithm (analogous to Viterbi for HMMs) • The Nussinov and Zuker algorithms are essentially special cases of CYK • CYK and SCFGs are also used in other domains (NLP, compilers, etc.)
