
Computational Systems Biology of Cancer:






Presentation Transcript


  1. (II) Computational Systems Biology of Cancer:

  2. Bud Mishra • Professor of Computer Science, Mathematics and Cell Biology • Courant Institute, NYU School of Medicine, Tata Institute of Fundamental Research, and Mt. Sinai School of Medicine

  3. The New Synthesis (figure: DNA → RNA by transcription, RNA → Protein by translation; Genotype → Phenotype) • Part-lists, annotation, ontologies: • micro-environment • epigenomics • transcriptomics • proteomics • metabolomics • signaling • Genome evolution and selection; perturbed pathways; genetic instability

  4. Is the Genomic View of Cancer Necessarily Accurate ? • “If I said yes, that would then suggest that that might be the only place where it might be done which would not be accurate, necessarily accurate. It might also not be inaccurate, but I'm disinclined to mislead anyone.” • US Secretary of Defense, Mr. Donald Rumsfeld, Once again quoted completely out of context.

  5. Cancer Initiation and Progression • Genomics (mutations, translocations, amplifications, deletions) • Epigenomics (hyper- and hypo-methylation) • Transcriptomics (alternative splicing, mRNA) • Proteomics (synthesis, post-translational modification, degradation) • Signaling • Consequences: proliferation, motility, immortality, metastasis

  6. Mishra’s Mystical 3 M’s: Measure, Mine, Model • Rapid and accurate solutions • Bioinformatic, statistical, systems, and computational approaches • Approaches that are scalable, agnostic to technologies, and widely applicable • Promises, challenges, and obstacles

  7. “Measure” What we can quantify and what we cannot

  8. Microarray Analysis of Cancer Genome (workflow: normal DNA and tumor DNA → low-complexity representations (LCRs) → label → hybridize) • Representations are reproducible samplings of DNA populations in which the resulting DNA has a new format and reduced complexity. • We array probes derived from low-complexity representations of the normal genome. • We measure differences in gene copy number between samples ratiometrically. • Since representations have a lower nucleotide complexity than total genomic DNA, we obtain a stronger specific hybridization signal relative to non-specific hybridization and noise.
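The ratiometric idea on this slide can be sketched numerically. Everything below is an invented illustration (the copy-number profile, the deletion region, and the log-normal noise level are assumptions, not data from the deck): unchanged regions give a log2 tumor/normal ratio near 0, while a hemizygous deletion shows up near −1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-probe copy numbers: normal is diploid; the tumor
# carries a single-copy deletion over probes 40-60.
normal_copies = np.full(100, 2.0)
tumor_copies = normal_copies.copy()
tumor_copies[40:60] = 1.0

# Multiplicative log-normal noise stands in for hybridization variability.
normal_signal = normal_copies * rng.lognormal(0.0, 0.1, 100)
tumor_signal = tumor_copies * rng.lognormal(0.0, 0.1, 100)

# Ratiometric measurement: log2 ratio is ~0 where nothing changed
# and ~ -1 across the hemizygous deletion.
log_ratio = np.log2(tumor_signal / normal_signal)
```

Averaging the ratio over neighboring probes (as the segmentation slides later do) is what makes the −1 shift stand out from per-probe noise.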

  9. Minimizing Cross-Hybridization (Complexity Reduction)

  10. Copy Number Fluctuation (figure: probe intensity grid, positions A1–C3)

  11. Critical Innovations • Data Normalization and Background Correction for Affy-Chips • 10K, 100K, 500K (Affy); Generalized RMA • Multi-Experiment-Based Probe-Characterization (Kalman + EM) • A novel genome segmenter algorithm • Empirical Bayes Approach; Maximum A Posteriori (MAP) • Generative Model (Hierarchical, Heteroskedastic) • Dynamic Programming Solution • Cubic-Time; Linear-time Approximation using Beam-Search Heuristic • Single Molecule Technologies • Optical and Nanotechnologies • Sequencing: SMASH • Epigenomics • Transcriptomics

  12. Background Correction & Normalization

  13. DNA 25-mer Oligo Arrays: SNP Genotyping • Given 500K human SNPs to be measured, select 10 25-mers that overlap each SNP location for allele A. • Select another 10 25-mers corresponding to SNP allele B. • Problem: cross-hybridization.

  14. Using SNP Arrays to Detect Genomic Aberrations • Each SNP “probeset” measures the absence/presence of one of two alleles. • If a region of DNA is deleted by cancer, one or both alleles will be missing! • If a region of DNA is duplicated/amplified by cancer, one or both alleles will be amplified. • Problem: oligo arrays are noisy.

  15. (Figure: 90 humans, 1 SNP, allele-A frequency = 0.48; scatter of Allele A vs. Allele B intensities.)

  16. (Figure: 90 humans, 1 SNP, allele-A frequency = 0.24; scatter of Allele A vs. Allele B intensities.)

  17. (Figure: 90 humans, 1 SNP, allele-A frequency = 0.96; scatter of Allele A vs. Allele B intensities.)

  18. Background Correction & Normalization • Consider a genomic location L and two “similar” nucleotide sequences sL,x and sL,y starting at that location in the two copies of a diploid genome… • E.g., they may differ in one SNP. • Let qx and qy be their respective copy numbers in the whole genome, and assume all copies are selected in the reduced-complexity representation. The gene chip contains four probes: px ∈ sL,x; py ∈ sL,y; px′, py′ ∈ G. • After PCR amplification, we have some Kx·qx amount of DNA that is complementary to the probe px, etc., plus some K′ (≈ K′x) amount of DNA that is additionally approximately complementary to the probe px.

  19. Normalize using a Generalized RMA • I′ = U − μn − [ασn² − φ(a′/b′)/Φ(a′/b′)] × {1 + b′ Bsn/Φ(a′/b′)}⁻¹ + [bsn/Bsn] × {1 + Φ(a′/b′)/(b′ Bsn)}⁻¹, • where a′ = U − μn − ασn²; b′ = σn; and • bsn = Σi,j [Ii,j − U + μn] φ(Ii,j − U + μn) • Bsn = Σi,j φ(Ii,j − U + μn) • (φ and Φ denote the standard normal density and distribution functions.)

  20. Background Correction & Normalization • If the probe has an affinity fx, then the measured intensity can be expressed as [Kx qx + K′] fx + noise = [qx + K′/Kx] f′x + noise, • with exp(μ1 + εσ1) a multiplicative log-normal noise, [μ2 + εσ2] an additive Gaussian noise, and f′x = Kx fx an amplified affinity. • A more general model: Ix = [qx + K′/Kx] f′x exp(μ1 + εσ1) + μ2 + εσ2.

  21. Mathematical Model • In particular, we have four values of measured intensities: • Ix = [qx f′x + Nx] exp(μ1 + εσ1) + μ2 + εσ2 • Ix′ = [Nx] exp(μ1 + εσ1) + μ2 + εσ2 • Iy = [qy f′y + Ny] exp(μ1 + εσ1) + μ2 + εσ2 • Iy′ = [Ny] exp(μ1 + εσ1) + μ2 + εσ2

  22. Bioinformatics: Data modeling • Good news: For each 25-bp probe, the fluorescent signal increases linearly with the amount of complementary DNA in the sample (up to some limit where it saturates). • Bad news: The linear scaling and offset differ for each 25-bp probe. Scaling varies by factors of more than 10x. • Noise : Due to PCR & cross hybridization and measurement noise.

  23. Scaling & Offset differ • Scaling varies across probes: • Each 25-bp sequence has different thermodynamic properties. • Scaling varies across samples: • The scanning laser for different samples may have different levels. • The starting DNA concentrations may differ; PCR may amplify differently. • Offset varies across probes: • Different levels of Cross Hybridization with the rest of the Genome. • Offset varies across samples: • Different sample genomes may differ slightly (sample degradation; impurities, etc.)
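The per-probe scaling and offset problem described on this slide can be made concrete with a small simulation. The probe count, scaling range, offsets, and noise level below are invented for illustration; the point is that probes in the same probeset see the same copy number q yet report incomparable raw intensities until (scale, offset) are estimated per probe.

```python
import numpy as np

rng = np.random.default_rng(1)
n_probes, n_samples = 5, 90

# Hypothetical per-probe scaling (thermodynamics/affinity) and offset
# (cross-hybridization); scaling spans more than a 10x range, as stated above.
scale = rng.uniform(0.5, 8.0, size=(n_probes, 1))
offset = rng.uniform(0.0, 2.0, size=(n_probes, 1))

# The true copy number q is shared by all probes in a probeset,
# but the raw intensities differ probe to probe.
q = rng.choice([0, 1, 2], size=(1, n_samples)).astype(float)
intensity = scale * q + offset + rng.normal(0.0, 0.2, (n_probes, n_samples))
```

Probes remain strongly correlated across samples (they track the same q), which is what makes multi-experiment probe characterization, as listed in slide 11, feasible.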

  24. Linear Model + Noise

  25. Noise minimization

  26. Final Data Model

  27. MLE using gradients

  28. Data Outliers • Our data model fails for a few data points (“bad probes”) • Solution (1): Improve the model… • Solution (2): Discard the outliers • Solution (3): Alternate model for the outliers… Weight the data appropriately.
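Solution (3) can be sketched as a two-component mixture: an inlier Gaussian plus a much broader outlier component, with each point down-weighted by its posterior probability of being an inlier. The mixture weights and the 10x outlier scale below are illustrative assumptions, not the exact model used in the slides.

```python
import numpy as np

def outlier_weights(residuals, sigma, p_out=0.05, outlier_scale=10.0):
    """Posterior weight that each residual came from the inlier Gaussian
    rather than a broad outlier Gaussian (hypothetical mixture sketch)."""
    norm = np.sqrt(2 * np.pi) * sigma
    inlier = (1 - p_out) * np.exp(-residuals**2 / (2 * sigma**2)) / norm
    outlier = p_out * np.exp(
        -residuals**2 / (2 * (outlier_scale * sigma)**2)
    ) / (norm * outlier_scale)
    return inlier / (inlier + outlier)

# Small residuals keep weight ~1; a gross outlier is driven toward 0.
r = np.array([0.1, 0.2, 5.0])
w = outlier_weights(r, sigma=0.3)
```

Multiplying each data point's contribution to the likelihood by its weight gives the "weight the data appropriately" behavior without discarding anything outright.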

  29. Outlier Model

  30. Problem with MLE: no unique maximum

  31. Scaling of MLE estimate

  32. Segmentation to reduce noise • The true copy number (Allele A+B) is normally 2 and does not vary across the genome, except at a few locations (breakpoints). • Segmentation can be used to estimate the location of breakpoints and then we can average all estimated copy number values between each pair of breakpoints to reduce noise.
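The averaging step described above is simple once breakpoints are known. Here is a minimal sketch (the helper name, the breakpoint positions, and the noise level are invented for illustration): replace each probe's value by the mean of its segment.

```python
import numpy as np

def denoise_by_segments(values, breakpoints):
    """Average noisy copy-number estimates within each segment.
    `breakpoints` are hypothetical indices where the true copy
    number changes; segment means replace per-probe values."""
    out = np.empty_like(values, dtype=float)
    edges = [0] + list(breakpoints) + [len(values)]
    for lo, hi in zip(edges[:-1], edges[1:]):
        out[lo:hi] = values[lo:hi].mean()
    return out

rng = np.random.default_rng(2)
truth = np.concatenate([np.full(50, 2.0), np.full(30, 3.0), np.full(20, 2.0)])
noisy = truth + rng.normal(0.0, 0.4, truth.size)
smoothed = denoise_by_segments(noisy, [50, 80])
```

Averaging n probes shrinks the noise standard deviation by a factor of √n, which is why long segments give very precise copy-number estimates.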

  33. Allelic Frequencies: Cancer & Normal

  34. Allelic Frequencies: Cancer & Normal

  35. Segmentation & Break-Point Detection

  36. Algorithmic Approaches • Local Approach • Change-point Detection • (QSum, KS-Test, Permutation Test) • Global Approach • HMM models • Wavelet Decomposition • Bayesian & Empirical Bayes Approach • Generative Models • (One- or Multi-level Hierarchical) • Maximum A Posteriori
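The local change-point approach listed above can be illustrated with the classic CUSUM statistic: scan the index k maximizing |S_k − (k/n)·S_n|, where S_k is the prefix sum. The function name and the simulated data are invented for illustration; in practice significance would be assessed, e.g., by the permutation test mentioned on the slide.

```python
import numpy as np

def cusum_changepoint(values):
    """Return the most likely single change-point index and its CUSUM
    statistic: argmax over k of |S_k - (k/n) S_n| (illustrative sketch)."""
    n = len(values)
    s = np.cumsum(values)
    k = np.arange(1, n)
    stat = np.abs(s[:-1] - (k / n) * s[-1])
    return int(np.argmax(stat)) + 1, float(stat.max())

# Simulated copy-number signal with a mean shift after probe 60.
rng = np.random.default_rng(6)
v = np.concatenate([rng.normal(2.0, 0.3, 60), rng.normal(3.5, 0.3, 40)])
idx, stat = cusum_changepoint(v)
```

Applied recursively to the two halves, this yields a simple binary-segmentation baseline against which the global Bayesian segmenter can be compared.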

  37. HMM (figure: copy-number states 0–6) • A model with a very high degree of freedom, but not enough data points. • Small-sample statistics ⇒ overfitting, convergence to local maxima, etc.

  38. HMM, finally… • A model with a very high degree of freedom, but not enough data points. • Small-sample statistics ⇒ overfitting, convergence to local maxima, etc. • (Figure: collapsed copy-number states ≥ 3, 2, and ≤ 1.)

  39. HMM, last time • Advantages: • Small number of parameters; can be optimized by a MAP estimator (EM has difficulties). • Easy to model deviations from Markovian properties (e.g., polymorphisms, power laws, Pólya’s-urn-like processes, local properties of chromosomes, etc.). • We simply model the number of break-points by a Poisson process and the lengths of the aberrational segments by an exponential process. • Two-parameter model (pb, pe): from state “= 2” jump to “≠ 2” with probability pb (stay with 1 − pb); from “≠ 2” return to “= 2” with probability pe (stay with 1 − pe).

  40. Generative Model • Breakpoints: Poisson, pb • Segmental length: exponential, pe • Copy number: empirical distribution • Noise: Gaussian, μ, σ • (Figure: example segments with amplifications c = 3, c = 4 and deletions c = 0, c = 1.)
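The generative model above can be sampled directly. This is a simplified sketch: geometric gaps stand in for the discretized exponential segment lengths, and the copy-number distribution below is a made-up stand-in for the empirical distribution named on the slide.

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_profile(n_probes=200, p_b=0.02, mu=0.0, sigma=0.3):
    """Sample a copy-number profile: breakpoints arrive with geometric
    (discretized exponential) gaps governed by p_b; each segment draws a
    copy number from a hypothetical empirical distribution; Gaussian
    noise with mean mu and std sigma is added."""
    copy_choices = [0, 1, 2, 2, 2, 3, 4]   # mostly normal (c = 2)
    truth = np.empty(n_probes)
    i = 0
    while i < n_probes:
        length = rng.geometric(p_b)          # segment length >= 1
        truth[i:i + length] = rng.choice(copy_choices)
        i += length
    observed = truth + rng.normal(mu, sigma, n_probes)
    return truth, observed

truth, observed = simulate_profile()
```

Simulated profiles like this are also the natural test bed for the segmenter: since the truth is known, breakpoint recall and copy-number error can be measured exactly.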

  41. A reasonable choice of priors yields good segmentation.

  42. A reasonable choice of priors yields good segmentation.

  43. MAP (Maximum A Posteriori) Estimators • Priors: deletion + amplification. • Data: priors + noise. • Goal: find the most plausible hypothesis of regional changes and their associated copy numbers. • Generalizes HMM: the prior depends on two parameters, pe and pb. pe is the probability of a particular probe being “normal”; pb is the average number of intervals per unit length. • (Figure: (pe, pb) maximized at (0.55, 0.01).)

  44. Likelihood Function • The likelihood function for the first n probes: L(⟨i1, μ1, …, ik, μk⟩) = exp(−pb n) (pb n)^k × (2πσ²)^(−n/2) ∏i=1..n exp[−(vi − μj)² / 2σ²] × pe^(#global) (1 − pe)^(#local), • where ik = n and probe i belongs to the j-th interval. • A Maximum A Posteriori algorithm (implemented as a dynamic-programming solution) optimizes L to get the best segmentation L(⟨i*1, μ*1, …, i*k, μ*k⟩).
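The likelihood can be evaluated numerically for a candidate segmentation. The sketch below keeps the Poisson breakpoint term and the Gaussian data term; the pe prior factor from the slide is omitted for brevity, and the function name and test data are invented for illustration.

```python
import numpy as np

def neg_log_likelihood(values, breakpoints, means, p_b, sigma):
    """Negative log-likelihood of a segmentation: Poisson penalty on the
    number of segments plus Gaussian residuals around segment means
    (simplified sketch; the pe term is dropped)."""
    n, k = len(values), len(breakpoints)
    nll = p_b * n - k * np.log(p_b * n)              # -log Poisson part
    nll += (n / 2) * np.log(2 * np.pi * sigma**2)    # Gaussian normalizer
    edges = [0] + list(breakpoints) + [n]
    for (lo, hi), m in zip(zip(edges[:-1], edges[1:]), means):
        nll += np.sum((values[lo:hi] - m)**2) / (2 * sigma**2)
    return nll

# A profile with a clear jump: the true segmentation should score better
# (lower negative log-likelihood) than fitting one flat segment.
rng = np.random.default_rng(4)
v = np.concatenate([rng.normal(2.0, 0.2, 50), rng.normal(4.0, 0.2, 50)])
good = neg_log_likelihood(v, [50], [2.0, 4.0], p_b=0.01, sigma=0.2)
bad = neg_log_likelihood(v, [], [3.0], p_b=0.01, sigma=0.2)
```

The Poisson term charges a price per breakpoint, so extra segments are only worth adding when the residual reduction outweighs it.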

  45. Dynamic Programming Algorithm • Generalizes Viterbi and extends it. • Uses the optimal parameters for the generative model. • Adds a new interval to the end: ⟨i1, μ1, …, ik, μk⟩ ⊕ ⟨ik+1, μk+1⟩ = ⟨i1, μ1, …, ik, μk, ik+1, μk+1⟩ • Incremental computation of the likelihood function: −log L(⟨i1, μ1, …, ik, μk, ik+1, μk+1⟩) = −log L(⟨i1, μ1, …, ik, μk⟩) + (new residual)/2σ² − log(pb n) + (ik+1 − ik) log(2πσ²) − (ik+1 − ik)[Iglobal log pe + Ilocal log(1 − pe)]
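The incremental structure above yields a simple O(n²) dynamic program: the best score for a prefix is the best score of a shorter prefix plus the cost of one appended segment. The sketch below replaces the full −log prior with a flat per-segment penalty, so it is a simplified stand-in for the slides' MAP recurrence, not the authors' implementation; the beam-search heuristic mentioned on slide 11 would prune the inner loop to get near-linear time.

```python
import numpy as np

def dp_segment(values, penalty=10.0):
    """O(n^2) DP segmenter: minimize within-segment squared error plus a
    per-segment penalty (illustrative stand-in for the -log prior).
    Returns sorted breakpoint indices."""
    n = len(values)
    # Prefix sums give O(1) segment-cost queries.
    s = np.concatenate([[0.0], np.cumsum(values)])
    s2 = np.concatenate([[0.0], np.cumsum(values**2)])

    def sse(i, j):  # squared error of values[i:j] around its mean
        return s2[j] - s2[i] - (s[j] - s[i])**2 / (j - i)

    best = np.full(n + 1, np.inf)
    best[0] = 0.0
    back = np.zeros(n + 1, dtype=int)
    for j in range(1, n + 1):
        for i in range(j):                    # last segment is values[i:j]
            c = best[i] + sse(i, j) + penalty
            if c < best[j]:
                best[j], back[j] = c, i
    cuts, j = [], n                           # trace back breakpoints
    while j > 0:
        j = back[j]
        if j > 0:
            cuts.append(j)
    return sorted(cuts)

rng = np.random.default_rng(7)
v = np.concatenate([rng.normal(2.0, 0.2, 50), rng.normal(4.0, 0.2, 50)])
cuts = dp_segment(v)
```

With the jump well above the noise level, the recovered cut lands at (or immediately next to) the true breakpoint.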

  46. Prior Selection: F criterion • For each break we have a T² statistic and the appropriate tail probability (p-value) calculated from the distribution of the statistic; in this case, an F distribution. • The best (pe, pb) is the pair that leads to the maximum min p-value. • (Figure: (pe, pb) maximized at (0.55, 0.01).)

  47. Segmentation Analysis

  48. Comparative Analysis: BAC Array

  49. Comparative Analysis: Nimblegen

  50. Comparative Analysis: Affy 10K
