
Gene Set Enrichment Analysis Microarray Classification

Gene Set Enrichment Analysis Microarray Classification. STAT115 Jun S. Liu and Xiaole Shirley Liu. Outline. Gene ontology Check differential expression and clustering results Gene set enrichment analysis Unsupervised learning for classification Clustering and KNN




Presentation Transcript


  1. Gene Set Enrichment Analysis / Microarray Classification STAT115 Jun S. Liu and Xiaole Shirley Liu

  2. Outline • Gene ontology • Check differential expression and clustering results • Gene set enrichment analysis • Unsupervised learning for classification • Clustering and KNN • PCA (dimension reduction) • Supervised learning for classification • CART, SVM • Expression and genome resources

  3. GO • Relationships: • Subclass: Is_a • Membership: Part_of • Topological: adjacent_to; Derivation: derives_from • E.g. 5_prime_UTR is part_of a transcript, and mRNA is_a kind of transcript • Same term could be annotated at multiple branches • Directed acyclic graph

  4. Evaluate Differentially Expressed Genes • NetAffx maps GO terms for all probesets • Example counts: GO term X annotates 100 of the 20K probesets genome-wide, and 80 of the 200 up-regulated genes • Statistical significance? • Binomial proportion test • Background proportion p = 100 / 20K = 0.005 • Compute the z-statistic and check the z table
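
The binomial proportion test on this slide can be sketched in a few lines (a minimal sketch using the slide's counts; the variable names are mine):

```python
import math

# Counts from the slide: GO term X is on 100 of 20K probesets genome-wide,
# and on 80 of the 200 up-regulated genes
p0 = 100 / 20000            # background proportion = 0.005
n, k = 200, 80
p_hat = k / n               # observed proportion among up genes = 0.4

# One-sample binomial proportion test (normal approximation):
# z = (p_hat - p0) / sqrt(p0 * (1 - p0) / n)
z = (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)
```

Here z is astronomically large, so GO term X is dramatically enriched among the up-regulated genes.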

  5. Evaluate Differentially Expressed Genes • Same counts: GO term X on 100 of 20K probesets, 80 of the 200 up genes • Chi-sq test, observed (expected under independence):

            Up          !Up               Total
  GO:       80 (1)      20 (99)           100
  !GO:      120 (199)   19780 (19701)     20K-100
  Total:    200         20K-200           20K

  • Check Chi-sq table
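
The chi-square statistic for this 2×2 table can be computed directly (a sketch from the slide's observed counts; note the expected count of 1 in one cell, where an exact test would normally be preferred):

```python
# Observed 2x2 counts from the slide: rows = GO / not GO, cols = up / not up
obs = [[80, 20],
       [120, 19780]]

row = [sum(r) for r in obs]                 # [100, 19900]
col = [sum(c) for c in zip(*obs)]           # [200, 19800]
total = sum(row)                            # 20000

# Expected counts under independence, summed into the chi-square statistic
chi2 = 0.0
for i in range(2):
    for j in range(2):
        exp = row[i] * col[j] / total
        chi2 += (obs[i][j] - exp) ** 2 / exp
# chi2 is huge; compare against the chi-square table with 1 df
```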

  6. GO Tools for Microarray Analysis • 40 tools

  7. GO on Clustering • Evaluate and refine clustering • Check GO terms for members of the cluster • Are GO terms significantly enriched? • Can we summarize what this cluster of genes does? • Are there conflicting members in the cluster? • Annotate unknown genes • After clustering, check GO terms • Can we infer an unknown gene's function from the GO terms of cluster members?

  8. Gene Set Enrichment Analysis • In some microarray experiments comparing two conditions, no single gene may be significantly differentially expressed, but a group of genes may be slightly differentially expressed • Check a set of genes with similar annotation (e.g. GO) and examine their expression values • Kolmogorov-Smirnov test • One sample z-test • GSEA at the Broad Institute

  9. Gene Set Enrichment Analysis • Kolmogorov-Smirnov test • Determine if two datasets differ significantly • Cumulative fraction function • What fraction of genes are below this fold change?
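
A minimal sketch of the two-sample KS statistic as the maximum gap between the two cumulative fraction curves (the fold-change values here are simulated, purely for illustration):

```python
import random

random.seed(0)
# Hypothetical fold changes: genes in the set are shifted slightly down
background = [random.gauss(0.0, 1.0) for _ in range(1000)]
gene_set   = [random.gauss(-0.5, 1.0) for _ in range(100)]

def ks_statistic(a, b):
    """Max distance between the two empirical cumulative fraction curves."""
    a, b = sorted(a), sorted(b)
    d = 0.0
    for x in a + b:
        # fraction of each sample at or below this fold change
        fa = sum(v <= x for v in a) / len(a)
        fb = sum(v <= x for v in b) / len(b)
        d = max(d, abs(fa - fb))
    return d

d = ks_statistic(background, gene_set)
```

A large d relative to the sample sizes indicates the gene set's fold changes are distributed differently from the background.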

  10. Gene Set Enrichment Analysis • Set of genes with specific annotation involved in coordinated down-regulation • Need to define the set before looking at the data • Can only see the significance by looking at the whole set

  11. Gene Set Enrichment Analysis • Alternative to KS: one sample z-test • Population of all the genes follows a normal ~ N(μ, σ²) • Average X̄ of the n genes with a specific annotation: under H0, X̄ ~ N(μ, σ²/n), so z = (X̄ − μ) / (σ / √n)
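
The one-sample z-test can be sketched as follows (the log-ratio values and the N(0, 1) population parameters are hypothetical):

```python
import math

# Hypothetical population parameters for all genes' log-ratios
mu, sigma = 0.0, 1.0

# Hypothetical log-ratios for the genes carrying a specific annotation
annotated = [-0.8, -0.5, -1.2, -0.3, -0.9, -0.6, -1.1, -0.4]
n = len(annotated)
x_bar = sum(annotated) / n

# Under H0 the set average follows N(mu, sigma^2 / n)
z = (x_bar - mu) / (sigma / math.sqrt(n))
```

A z this far below zero suggests coordinated down-regulation of the set even though no single value is extreme.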

  12. Dimension Reduction • High dimensional data points are difficult to visualize • Always good to plot data in 2D • Easier to detect or confirm the relationship among data points • Catch stupid mistakes (e.g. in clustering) • Two ways to reduce: • By genes: some experiments are similar or have little information • By experiments: some genes are similar or have little information

  13. Principal Component Analysis • Optimal linear transformation: choose a new coordinate system whose axes (the principal components), taken in order, each capture the maximum remaining variance when the data are projected onto them • Components are orthogonal (mutually uncorrelated) • A few PCs may capture most of the variation in the original data • E.g. reduce 2D into 1D data

  14. Principal Component Analysis • Achieved by singular value decomposition (SVD): X = U D Vᵀ • X is the original N × p data • E.g. N genes, p experiments • V is p × p projection directions • Orthogonal matrix: VᵀV = I_p • v1 is the direction of the first projection • Each direction is a linear combination (relative importance) of each experiment (or gene, if PCA is done on samples)

  15. PCA • U is N × p, relative projection of points • D is p × p scaling factor • Diagonal matrix, d1 ≥ d2 ≥ … ≥ dp ≥ 0 • ui1 d1 is the distance along v1 from the origin (first principal component) • Expression value projected onto v1 • v2 is the 2nd projection direction; ui2 d2 is the 2nd principal component, and so on • Variance captured by the first m principal components: (d1² + … + dm²) / (d1² + … + dp²)
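
The SVD-based PCA of slides 14-15 can be sketched with NumPy (the toy expression matrix is simulated; `pcs` holds UD, the principal components):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy expression matrix: N = 50 genes x p = 4 experiments,
# where experiments 0 and 1 are strongly correlated
N, p = 50, 4
base = rng.normal(size=N)
X = np.column_stack([base,
                     base + 0.1 * rng.normal(size=N),
                     rng.normal(size=N),
                     rng.normal(size=N)])
X = X - X.mean(axis=0)              # center columns before PCA

# Thin SVD: X = U diag(d) Vt, with d1 >= d2 >= ... >= dp >= 0
U, d, Vt = np.linalg.svd(X, full_matrices=False)
pcs = U * d                          # principal components: UD = XV
var_explained = d**2 / np.sum(d**2)  # fraction of variance per component
```

Because two of the four experiments are nearly identical, the first component captures most of the variance and a 2D plot of `pcs[:, :2]` loses little information.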

  16. PCA • X (N × p original data) × V (p × p projection directions) = UD (N × p projected values, scaled) • 1st principal component: X11 V11 + X12 V21 + X13 V31 + … = X'11 = U11 D11, and X21 V11 + X22 V21 + X23 V31 + … = X'21 = U21 D11 • 2nd principal component: X11 V12 + X12 V22 + X13 V32 + … = X'12 = U12 D22, and X21 V12 + X22 V22 + X23 V32 + … = X'22 = U22 D22

  17. PCA [figure: data scatter plots with projection directions v1 and v2]

  18. PCA on Genes Example • Cell cycle genes, 13 time points, reduced to 2D • Genes: 1: G1; 4: S; 2: G2; 3: M

  19. PCA Example • Variance in the data explained by the first n principal components

  20. PCA Example • The weights of the first 8 principal directions (v1, v2, v3, v4, …) • This is an example of PCA to reduce samples • Can do PCA to reduce the genes as well • Using the first 2-3 PCs to plot the samples, with more weight given to the more differentially expressed genes, can often reveal the sample classification

  21. Microarray Classification ?

  22. Classification • A core machine learning task: assign an object to a class based on measurements on the object • E.g. is a sample normal or cancer, based on its expression profile? • Unsupervised learning • Ignores known class labels, e.g. cluster analysis or KNN • Sometimes can't separate even the known classes • Supervised learning • Extracts useful features based on known class labels to best separate the classes • Can overfit the data, so separate training and test sets (e.g. cross-validation)

  23. Clustering Classification • Which known samples does the unknown sample cluster with? • No guarantee that the known samples will cluster together • Try different clustering methods (semi-supervised) • E.g. change the linkage, use a subset of genes

  24. K Nearest Neighbor • Used in missing value estimation • For an observation X with unknown label, find the K observations in the training data closest (e.g. by correlation) to X • Predict the label of X by majority vote among the K nearest neighbors • K can be chosen by predictability of known samples, semi-supervised again! • Offers little insight into mechanism
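
A minimal KNN sketch with majority vote (the two-gene profiles and labels are made up for illustration; the slide's correlation distance is swapped for Euclidean distance to keep the code short):

```python
from collections import Counter

def knn_predict(train, labels, x, k=3):
    """Majority vote among the k training points closest to x (Euclidean)."""
    nearest = sorted(range(len(train)),
                     key=lambda i: sum((a - b) ** 2
                                       for a, b in zip(train[i], x)))[:k]
    return Counter(labels[i] for i in nearest).most_common(1)[0][0]

# Hypothetical 2-gene expression profiles for labeled samples
train = [(0.1, 0.2), (0.0, 0.3), (0.2, 0.1),      # normal samples
         (2.1, 1.9), (1.8, 2.2), (2.0, 2.0)]      # cancer samples
labels = ["normal"] * 3 + ["cancer"] * 3

pred = knn_predict(train, labels, (1.9, 2.1))     # near the cancer cluster
```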

  25. Supervised Learning Performance Assessment • If error rate is estimated from whole learning data set, it will be over-optimistic (do well now, but poorly in future observations) • Divide observations into L1 and L2 • Build classifier using L1 • Compute classifier error rate using L2 • Requirement: L1 and L2 are iid (independent & identically-distributed) • N-fold cross validation • Divide data into N subsets (equal size), build classifier on (N-1) subsets, compute error rate on left out subset STAT115 03/18/2008
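
The N-fold split described above can be sketched as follows (a minimal index-splitting helper; assigning folds by stride is an arbitrary choice):

```python
def n_fold_indices(n_samples, n_folds):
    """Yield (train, test) index lists for N-fold cross-validation."""
    folds = [list(range(i, n_samples, n_folds)) for i in range(n_folds)]
    for i in range(n_folds):
        test = folds[i]
        train = [j for k, fold in enumerate(folds) if k != i for j in fold]
        yield train, test

# 12 samples, 4 folds: build the classifier on 9 samples, test on the
# held-out 3, and rotate so every sample is tested exactly once
splits = list(n_fold_indices(12, 4))
```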

  26. Classification And Regression Tree • Split data using set of binary (or multiple value) decisions • Root node (all data) has certain impurities, need to split the data to reduce impurities

  27. CART • Measures of impurity • Entropy • Gini index impurity • Example with Gini: multiply the impurity by the number of samples in the node • Root node (e.g. 8 normal & 14 cancer) • Try a split by gene xi (xi ≥ 0: 13 cancer; xi < 0: 1 cancer & 8 normal) • Split at the gene with the biggest reduction in impurity
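
The Gini calculation on this slide can be reproduced directly (impurity times node size, using the slide's 8-normal / 14-cancer example):

```python
def gini(counts):
    """Gini impurity of a node given its class counts."""
    n = sum(counts)
    return 1.0 - sum((c / n) ** 2 for c in counts)

# Root node from the slide: 8 normal, 14 cancer (22 samples)
root = gini([8, 14]) * 22

# Split by gene x_i: x_i >= 0 -> 13 cancer; x_i < 0 -> 8 normal, 1 cancer
left  = gini([0, 13]) * 13      # pure node, impurity 0
right = gini([8, 1]) * 9
reduction = root - (left + right)
```

The candidate gene with the largest `reduction` is chosen for the split.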

  28. CART • Assume independence of partitions; the same level may split on different genes • Stop splitting • When the impurity is small enough • When the number of nodes is small • Pruning to reduce overfitting • Training set to split, test set for pruning • Each split has a cost, compared to the gain at that split

  29. Support Vector Machine • SVM • Which hyperplane is the best?

  30. Support Vector Machine • SVM finds the hyperplane that maximizes the margin • The margin is determined by the support vectors (samples lying on the class edge); the other samples are irrelevant

  31. Support Vector Machine • SVM finds the hyperplane that maximizes the margin • The margin is determined by the support vectors; the other samples are irrelevant • Extensions: • Soft edge: support vectors with different weights • Non-separable case: slack variables ξ > 0, maximize (margin − λ × # bad)

  32. Nonlinear SVM • Project the data into a higher-dimensional space with a kernel function, so the classes can be separated by a hyperplane • A few implemented kernel functions are available in Matlab & BioConductor; the choice is usually trial and error and personal experience • K(x,y) = (x·y)²
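
The quadratic kernel K(x,y) = (x·y)² on this slide can be checked against its explicit feature map for 2-D inputs (a small sketch; φ is the standard feature map for this kernel):

```python
import math

def kernel(x, y):
    """Quadratic kernel: square of the ordinary dot product."""
    return (x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    """Explicit feature map: (x1^2, sqrt(2)*x1*x2, x2^2)."""
    return (x[0] ** 2, math.sqrt(2) * x[0] * x[1], x[1] ** 2)

# The kernel trick: K(x, y) equals the dot product in feature space,
# so the data never need to be mapped explicitly
x, y = (1.0, 2.0), (3.0, 0.5)
lhs = kernel(x, y)
rhs = sum(a * b for a, b in zip(phi(x), phi(y)))
```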

  33. Most Widely Used Sequence IDs • GenBank: all submitted sequences • EST: Expressed Sequence Tags (mRNA), some redundancy, might have contamination • UniGene: computationally derived gene-based transcribed sequence clusters • Entrez Gene: comprehensive catalog of genes and associated information, ~ the traditional concept of "gene" • RefSeq: reference sequences for mRNAs and proteins, one per individual transcript (splice variant)

  34. UCSC Genome Browser • Can display custom tracks

  35. Entrez: Main NCBI Search Engine

  36. Public Microarray Databases • SMD: Stanford Microarray Database, most Stanford and collaborators' cDNA arrays • GEO: Gene Expression Omnibus, an NCBI repository for gene expression and hybridization data, growing quickly • Oncomine: Cancer Microarray Database • Published cancer-related microarrays • Raw data all processed, nice interface

  37. Outline • Gene ontology • Check diff expr and clustering, GSEA • Microarray clustering: • Unsupervised • Clustering, KNN, PCA • Supervised learning for classification • CART, SVM • Expression and genome resources

  38. Acknowledgment • Kevin Coombes & Keith Baggerly • Darlene Goldstein • Mark Craven • George Gerber • Gabriel Eichler • Ying Xie • Terry Speed & Group • Larry Hunter • Wing Wong & Cheng Li • Ping Ma, Xin Lu, Pengyu Hong • Mark Reimers • Marco Ramoni • Jenia Semyonov
