
Examples of Classifying Expression Data


  1. Examples of Classifying Expression Data 6.892 / 7.90 Computational Functional Genomics Spring 2002

  2. Interpreting patterns of gene expression with self-organizing maps: Methods and application to hematopoietic differentiation Tamayo, Slonim, Mesirov, Zhu, Kitareewan, Dmitrovsky, Lander, Golub PNAS 96, pp. 2907-2912, March 1999

  3. Hierarchical clustering problems • Not designed to reflect the multiple ways expression patterns can be similar • Clusters may not be robust or unique • Points can be clustered based on local decisions that lock in structure

  4. Self-Organizing Maps (SOMs) • Mathematical space for SOMs: n genes with k samples define n points in k-dimensional space • Impose partial structure on the clusters to start • Choose a geometry of nodes – e.g. a 3 x 2 grid • Nodes are mapped into k-dimensional space at random • Each iteration moves nodes in the direction of a randomly selected point • The closest node is moved the most • After 20,000 – 50,000 iterations, the genes are clustered

  5. Example SOM iteration

  6. Iterative point moving f_{i+1}(N) = f_i(N) + L(d(N, N_P), i) · (P − f_i(N)), where P is the observation used in iteration i to update the map, N is the map point being updated, and N_P is the point in the map closest to P. The learning rate L decreases with distance and with i. With T the total number of iterations, L(x, i) = 0.02T / (T + 100i) for x ≤ ρ(i) and L(x, i) = 0 otherwise; ρ(i) decreases linearly with i, with ρ(0) = 3.
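
A minimal NumPy sketch of this update loop, assuming Manhattan distance on the node grid and random node initialization (the slide specifies neither); parameter names mirror the slide, not the authors' code:

```python
import numpy as np

def som_cluster(data, grid=(3, 2), T=20000, seed=0):
    """Cluster the rows of `data` (genes x samples) with a simple SOM."""
    rng = np.random.default_rng(seed)
    n_genes, k = data.shape
    # Grid coordinates of each node in the chosen geometry (e.g. 3 x 2).
    coords = np.array([(r, c) for r in range(grid[0])
                              for c in range(grid[1])], dtype=float)
    # Nodes start at random positions in k-dimensional expression space.
    nodes = rng.standard_normal((len(coords), k))

    for i in range(T):
        P = data[rng.integers(n_genes)]                      # random observation P
        winner = np.argmin(((nodes - P) ** 2).sum(axis=1))   # closest node N_P
        # d(N, N_P): Manhattan distance on the node grid (an assumption).
        d = np.abs(coords - coords[winner]).sum(axis=1)
        rho = 3.0 * (1 - i / T)        # neighborhood radius: rho(0) = 3, linear decay
        L = np.where(d <= rho, 0.02 * T / (T + 100 * i), 0.0)
        nodes += L[:, None] * (P - nodes)   # f_{i+1}(N) = f_i(N) + L * (P - f_i(N))

    # Each gene joins the cluster of its nearest node.
    return ((data[:, None, :] - nodes[None, :, :]) ** 2).sum(-1).argmin(axis=1)
```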

  7. Data normalization • Genes were eliminated if they did not change significantly (prevents nodes from being attracted to invariant genes) • Expression levels are normalized to have mean 0 and variance 1 (focus on the shape of the expression pattern) • Yeast data – levels were normalized within each of the two cell cycles • Human data – expression levels were normalized within the time points
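
A sketch of the two preprocessing steps in Python; the variation-filter cutoffs shown are placeholders, since the slide does not give the thresholds actually used:

```python
import numpy as np

def variation_filter(data, min_fold=2.0, min_delta=35.0):
    """Drop genes that never change significantly across samples.
    The fold-change and absolute-change cutoffs are placeholders;
    the slide does not state the thresholds actually used."""
    lo, hi = data.min(axis=1), data.max(axis=1)
    keep = (hi / np.maximum(lo, 1.0) >= min_fold) & (hi - lo >= min_delta)
    return data[keep]

def normalize_shape(data):
    """Scale each gene to mean 0, variance 1, so that clustering
    responds to the shape of the expression pattern, not its level."""
    mu = data.mean(axis=1, keepdims=True)
    sd = data.std(axis=1, keepdims=True)
    return (data - mu) / sd
```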

  8. SOM computation • Computation time is about 1 minute; 20,000 – 50,000 iterations for 416 to 1,036 genes • Web-based interface used to visualize the data • The average expression pattern of each cluster is displayed with error bars • Can also overlay members of a cluster on a single plot • Yeast cell cycle • 6 x 5 SOM • 416 genes • Computed in 82 seconds

  9. Cluster 29 detail – 76 members exhibiting periodic behavior in late G1

  10. G1, S, G2, and M phase related clusters (C29, C14, C1, C5)

  11. Centroids for groups of genes identified by visual inspection by Cho et al.

  12. PMA-treated HL-60 cells SOM 567 genes passing the variation filter were grouped into a 4 x 3 SOM. PMA (phorbol 12-myristate 13-acetate) causes macrophage differentiation. Cluster 11 – PMA-induced genes

  13. Hematopoietic differentiation across four cell lines (HL-60, U937, Jurkat, NB4) n = 17 1,036 genes 6 x 4 SOM

  14. SOM conclusion • Successful at finding new structure • Inspection still necessary to find insights • Able to recover temporal response to perturbation • Can provide richer topology than linear ordering • However, topology needs to be provided in advance

  15. Plan • Overview of classification techniques • Mixture Model Clustering • Alon - Colon tumors • Weighted Voting of Selected Genes • Golub – Leukemia (ALL, AML) • Hierarchical Clustering • Alizadeh – Diffuse large B-cell lymphoma

  16. Statistical Pattern Recognition • A classifier is an algorithm that assigns an observation to a class • A class can be a letter (handwriting recognition), a person (face recognition), a type of cell, a diagnosis, or a prognosis • Data set – data with known classes, used for training • Generalize knowledge from the data set to new observations • Classification is based on features • Feature selection is key

  17. Model Complexity • A model describes a data set and is used to make future decisions • If a model is too simple it gives a poor fit to the data set • If a model is too complex, it gives a poor representation of the systematic aspects of the data (overfit to data set)

  18. Types of classifiers • Discriminative • No assumptions about underlying model • Generative • Assumptions made about form of underlying model (e.g. variables are Gaussian) • Assumptions cause performance advantages – and disadvantages if the assumptions are incorrect

  19. Mixture Models for Clustering Alon, U. et al., Broad patterns of gene expression revealed by clustering analysis of tumor and normal colon tissues probed by oligonucleotide arrays, PNAS 96, pp. 6745-6750, June 1999

  20. Problem Definition • 40 colon adenocarcinoma biopsy specimens • 22 normal tissue specimens • Cell lines derived from colon carcinoma (EB and EB-1) • Can we tell the cancer specimens from the normal specimens by expression analysis?

  21. Gene Pair Correlations For each gene: 30 genes with significant positive correlation, 10 genes with significant negative correlation. Dashed line is the correlation with the data set randomized 10^4 times. Shaded area: P < 10^-3
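
A sketch of the randomization test the figure describes: compare observed gene-gene correlations against correlations in data whose samples have been shuffled 10^4 times (the paper's exact randomization scheme may differ):

```python
import numpy as np

def correlation_cutoff(data, n_perm=10_000, alpha=1e-3, seed=0):
    """Find the |correlation| that randomized data exceeds with
    probability < alpha, as a significance cutoff for gene pairs."""
    rng = np.random.default_rng(seed)
    null = np.empty(n_perm)
    for b in range(n_perm):
        i, j = rng.choice(len(data), 2, replace=False)
        shuffled = rng.permutation(data[i])     # break any real association
        null[b] = np.corrcoef(shuffled, data[j])[0, 1]
    return np.quantile(np.abs(null), 1 - alpha)

# Observed correlations are then compared against this cutoff:
# obs = np.corrcoef(data); significant = np.abs(obs) > correlation_cutoff(data)
```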

  22. Mixture Model • Each gene is represented by a vector that has been normalized so that its sum is 0 and its magnitude is 1 • The mixture model used assumes two distributions with centroids C_j • P_j(V_k) is the probability that V_k is in class j • C_j = Σ_k V_k P_j(V_k) / Σ_k P_j(V_k)

  23. Mixture Model is used for top-down clustering • After the final iteration, each gene is assigned to the cluster with the highest probability • This makes a hard boundary between clusters • The process is repeated on both subclusters • Both genes and tissues are clustered using the same algorithm
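
A minimal Python sketch of slides 22-23; the form of P_j(V_k) (a softmax over distances) is an assumption, since the slides give only the centroid update:

```python
import numpy as np

def soft_two_means(V, n_iter=100, beta=10.0, seed=0):
    """Two-centroid soft clustering of normalized gene vectors V.
    P_j(V_k) is modeled as a softmax over negative squared distances
    (an assumption; the slide gives only the centroid update)."""
    rng = np.random.default_rng(seed)
    C = V[rng.choice(len(V), 2, replace=False)]          # initial centroids C_j
    for _ in range(n_iter):
        d = ((V[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        P = np.exp(-beta * d)
        P /= P.sum(axis=1, keepdims=True)                # P_j(V_k)
        # Slide 22: C_j = sum_k V_k P_j(V_k) / sum_k P_j(V_k)
        C = (P.T @ V) / P.sum(axis=0)[:, None]
    return P.argmax(axis=1)          # hard assignment at the end (slide 23)

def top_down_cluster(V, depth):
    """Recursively split each cluster into two, as in slide 23."""
    if depth == 0 or len(V) < 2:
        return [V]
    labels = soft_two_means(V)
    return (top_down_cluster(V[labels == 0], depth - 1)
            + top_down_cluster(V[labels == 1], depth - 1))
```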

  24. Results of clustering algorithm

  25. Excerpt from ribosomal gene cluster

  26. Expanded view of clustering Tumor tissues have arrows at left ** are the EB and EB-1 colon carcinoma cell lines

  27. Five of the 20 most informative genes are muscle genes The muscle index is the normalized average intensity of 17 muscle-related ESTs

  28. Sensitivity of clustering to the genes used Genes sorted by t-test

  29. Conclusion • Tumors of epithelial origin distinguished from muscle-rich normal tissue samples • Tumor cell lines distinguished • In vivo samples need high tissue purity

  30. Weighted Voting for Classification Golub, T. et al., Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring, Science 286, pp. 531-537, October 15, 1999

  31. Two challenges • Class discovery – defining previously unrecognized tumor subtypes • Class prediction – assignment of tumor samples to already defined classes

  32. Data source • 38 bone marrow samples • 27 acute lymphoblastic leukemia (ALL) • 11 acute myeloid leukemia (AML) • Hybridized to Affymetrix arrays • 6,817 human genes

  33. Classifier architecture

  34. Pick informative feature set

  35. Correlation function • All expression values are first log-transformed • g is a vector of expression values across the samples [e_1 .. e_n] • c tells us the class of each sample [1 0 .. 0] • Thus we can compute μ_1(g), μ_2(g), σ_1(g), σ_2(g), the class means and standard deviations • P(g, c) = (μ_1(g) − μ_2(g)) / (σ_1(g) + σ_2(g)) • N_1(c, r): all genes g such that P(g, c) = r • N_2(c, r): all genes g such that P(g, c) = −r
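
The metric in Python, assuming genes in rows and a boolean class indicator per sample; names are illustrative, and the paper's thresholding of raw values before the log is omitted:

```python
import numpy as np

def signal_to_noise(X, in_class1):
    """P(g, c) = (mu_1(g) - mu_2(g)) / (sigma_1(g) + sigma_2(g)),
    computed for every gene at once. X: genes x samples (positive
    values assumed). in_class1: boolean mask over the samples."""
    logX = np.log10(X)                           # log-transform first
    m1 = logX[:, in_class1].mean(axis=1)
    m2 = logX[:, ~in_class1].mean(axis=1)
    s1 = logX[:, in_class1].std(axis=1)
    s2 = logX[:, ~in_class1].std(axis=1)
    return (m1 - m2) / (s1 + s2)
```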

  36. ~1,100 genes are informative – number of genes within neighborhoods

  37. Weighted voting for features

  38. Weighted voting • v_i = x_i − (μ_AML + μ_ALL) / 2 • w_i = P(g, c) • Total votes • Class 1 – sum of all positive w_i v_i • Class 2 – sum of all negative w_i v_i

  39. Prediction Strength • PS = (V_win − V_lose) / (V_win + V_lose) • V_win and V_lose are the vote totals for the winning and losing classes, respectively • Gives a “margin of victory” • Sample assigned to the winning class if PS > 0.3
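
A sketch of slides 38-39 combined, a weighted vote plus prediction strength; the array names and the gene-selection step are assumptions:

```python
import numpy as np

def weighted_vote(x, informative, w, m1, m2, threshold=0.3):
    """Classify one (log-transformed) sample x by weighted voting.
    informative: indices of the selected genes; w: their P(g, c)
    weights; m1, m2: per-gene class means from the training set.
    Returns (class, or None if the margin is too small, and PS)."""
    v = x[informative] - (m1 + m2) / 2      # v_i = x_i - (mu_1 + mu_2)/2
    votes = w * v                           # w_i * v_i
    V1 = votes[votes > 0].sum()             # total vote for class 1
    V2 = -votes[votes < 0].sum()            # total vote for class 2
    Vwin, Vlose = max(V1, V2), min(V1, V2)
    PS = (Vwin - Vlose) / (Vwin + Vlose)    # prediction strength (slide 39)
    winner = 1 if V1 > V2 else 2
    return (winner if PS > threshold else None), PS
```

For the 50-gene predictor, `informative` would hold the 25 genes most positively and the 25 most negatively correlated with the class distinction.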

  40. Performance of the 50-gene predictor – 100% accuracy

  41. Genes most correlated with AML/ALL class distinction

  42. Feature sets • All predictors that used between 10 and 200 genes were 100% accurate

  43. Using SOM to discover classes

  44. Bayesian perspective • Assuming class distributions are normal with equal variances • The weight for a gene is (μ_1 − μ_2) / σ²
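
The step the slide leaves implicit: for two Gaussian class densities with shared variance σ², the per-gene log-likelihood ratio is linear in the expression value x, and its slope is the voting weight (notation mine):

```latex
\log\frac{p(x \mid 1)}{p(x \mid 2)}
  = \frac{(x-\mu_2)^2 - (x-\mu_1)^2}{2\sigma^2}
  = \underbrace{\frac{\mu_1-\mu_2}{\sigma^2}}_{\text{gene weight}}\, x
  \;+\; \frac{\mu_2^2-\mu_1^2}{2\sigma^2}
```

Summing these per-gene log-likelihood ratios is exactly a weighted vote with weight (μ_1 − μ_2)/σ²; note this differs from the (μ_1 − μ_2)/(σ_1 + σ_2) metric the paper actually uses for gene selection.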

  45. Conclusion • Can classify AML and ALL with as few as 10 genes • “Many other gene selection metrics could be used; we considered several …. The best performance was obtained with the relative class separation metric defined above”

  46. Discovering new types of cancer • Alizadeh, A. et al., Distinct types of diffuse large B-cell lymphoma identified by gene expression profiling, Nature 403, pp. 503-511, February 3, 2000

  47. Goal • Discover cause for different disease courses for diffuse large B-cell lymphoma (DLBCL) • 40% of patients respond to therapy • 60% succumb to disease • Provide diagnostic / prognostic tool • DLBCL is most common subtype of non-Hodgkin’s lymphoma

  48. Questions • Can we create a molecular portrait of distinct types of B-cell malignancy? • Can we identify types of malignancy not yet recognized? • Can we relate malignancy to normal stages in B-cell development and physiology?

  49. Lymphochip • 17,856 cDNA clones • 12,069 from germinal B-cell library • 2,338 from DLBCL, follicular lymphoma (FL), mantle cell lymphoma, and chronic lymphocytic leukaemia (CLL) • 3,186 genes important to lymphocyte and/or cancer biology • B- and T-lymphocyte genes that respond to mitogens or cytokines

  50. Data sources • Rearranged immunoglobulin genes in DLBCL are characteristic of germinal center of secondary lymphoid organs • 96 normal and malignant lymphocyte samples
