
Supervised Classification




Presentation Transcript


  1. Supervised Classification

  2. Selection bias in gene extraction on the basis of microarray gene-expression data. Ambroise and McLachlan, Proceedings of the National Academy of Sciences, Vol. 99, Issue 10, 6562-6566, May 14, 2002. http://www.pnas.org/cgi/content/full/99/10/6562

  3. Supervised Classification of Tissue Samples. We OBSERVE the CLASS LABELS $z_1, \ldots, z_n$, where $z_j = i$ if the $j$th tissue sample comes from the $i$th class ($i = 1, \ldots, g$). AIM: TO CONSTRUCT A CLASSIFIER $c(y)$ FOR PREDICTING THE UNKNOWN CLASS LABEL $z$ OF A TISSUE SAMPLE $y$. E.g. $g = 2$ classes: C1 (DISEASE-FREE), C2 (METASTASES).

  4. [Diagram: the gene-expression data matrix, with rows Gene 1, Gene 2, …, Gene N and columns Sample 1, Sample 2, …, Sample M. A column (one sample across all genes) is an expression signature; a row (one gene across all samples) is an expression profile.]

  5. Supervised Classification (Two Classes). [Diagram: a p × n expression matrix with rows Gene 1, …, Gene p and columns Sample 1, …, Sample n, the columns split into Class 1 (good prognosis) and Class 2 (poor prognosis).]

  6. Microarray to be used as routine clinical screen, by C. M. Schubert, Nature Medicine 9, 9, 2003. The Netherlands Cancer Institute in Amsterdam is to become the first institution in the world to use microarray techniques for the routine prognostic screening of cancer patients. Aiming for a June 2003 start date, the center will use a panoply of 70 genes to assess the tumor profile of breast cancer patients and to determine which women will receive adjuvant treatment after surgery.

  7. Selection Bias. Bias that occurs when a subset of the variables is selected (dimension reduction) in some “optimal” way, and the predictive capability of this subset is then assessed in the usual way, i.e., using an ordinary error-rate measure as if the subset had been fixed in advance.
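A minimal sketch of this bias in action, assuming scikit-learn is available; the gene counts, fold counts, and synthetic data are illustrative, not those of the paper. Because the features are pure noise, the true error rate is 50%, and any estimate far below that is an artifact of selecting genes outside the cross-validation loop:

    # Pure-noise data: the true error rate of any rule is 50%.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.model_selection import StratifiedKFold, cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n, p = 60, 2000                        # n samples, p >> n genes
    X = rng.standard_normal((n, p))        # noise: no real class structure
    z = np.repeat([0, 1], n // 2)          # arbitrary class labels
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

    # BIASED: select the 20 "best" genes on ALL the data, then cross-validate.
    X_sel = SelectKBest(f_classif, k=20).fit_transform(X, z)
    biased = 1 - cross_val_score(SVC(kernel="linear"), X_sel, z, cv=cv).mean()

    # UNBIASED: redo the gene selection inside every training fold.
    pipe = make_pipeline(SelectKBest(f_classif, k=20), SVC(kernel="linear"))
    unbiased = 1 - cross_val_score(pipe, X, z, cv=cv).mean()

    print(f"internal-selection CV error: {biased:.2f}")   # well below 0.5
    print(f"external-selection CV error: {unbiased:.2f}") # near 0.5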

  8. Selection Bias. Discriminant Analysis: McLachlan (1992 & 2004, Wiley, Chapter 12). Regression: Breiman (1992, JASA): “This usage (i.e., the use of residual SS’s, etc.) has long been a quiet scandal in the statistical community.”

  9. Nature Reviews Cancer, Feb. 2005

  10. LINEAR CLASSIFIER FORM: $c(y) = \beta_0 + \beta^T y$, for the prediction of the group label $z$ of a future entity with feature vector $y$ (the entity is assigned on the basis of the sign of $\beta_0 + \beta^T y$).

  11. FISHER’S LINEAR DISCRIMINANT FUNCTION:
      $r(y) = (\bar{y}_1 - \bar{y}_2)^T S^{-1}\{y - \tfrac{1}{2}(\bar{y}_1 + \bar{y}_2)\},$
  where $\bar{y}_1$, $\bar{y}_2$, and $S$ are the sample means and pooled sample covariance matrix found from the training data.

  12. Microarrays also to be used in the prediction of breast cancer, by Mike West (Duke University) and the Koo Foundation Sun Yat-Sen Cancer Centre, Taipei: Huang et al. (2003, The Lancet, Gene expression predictors of breast cancer).

  13. LINEAR CLASSIFIER FORM: $c(y) = \beta_0 + \beta^T y$, for the prediction of the group label $z$ of a future entity with feature vector $y$ (the entity is assigned on the basis of the sign of $\beta_0 + \beta^T y$).

  14. FISHER’S LINEAR DISCRIMINANT FUNCTION:
      $r(y) = (\bar{y}_1 - \bar{y}_2)^T S^{-1}\{y - \tfrac{1}{2}(\bar{y}_1 + \bar{y}_2)\},$
  where $\bar{y}_1$, $\bar{y}_2$, and $S$ are the sample means and pooled sample covariance matrix found from the training data.
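A short NumPy sketch of Fisher's rule as just defined; the function name and the synthetic two-class data are illustrative assumptions (note that $S$ must be nonsingular, so this plain form needs p < n):

    import numpy as np

    def fisher_rule(X1, X2):
        """Fisher's rule from training matrices X1 (n1 x p) and X2 (n2 x p)."""
        m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
        n1, n2 = len(X1), len(X2)
        # Pooled sample covariance matrix S.
        S = ((n1 - 1) * np.cov(X1, rowvar=False)
             + (n2 - 1) * np.cov(X2, rowvar=False)) / (n1 + n2 - 2)
        beta = np.linalg.solve(S, m1 - m2)       # S^{-1}(ybar1 - ybar2)
        beta0 = -0.5 * beta @ (m1 + m2)
        # r(y) = beta0 + beta'y > 0  =>  assign to class 1, else class 2.
        return lambda y: 1 if beta0 + beta @ y > 0 else 2

    rng = np.random.default_rng(1)
    X1 = rng.normal(+1.0, 1.0, size=(25, 5))     # class 1 training sample
    X2 = rng.normal(-1.0, 1.0, size=(25, 5))     # class 2 training sample
    c = fisher_rule(X1, X2)
    print(c(np.ones(5)), c(-np.ones(5)))         # expected: 1 2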

  15. SUPPORT VECTOR CLASSIFIER (Vapnik, 1995): $c(y) = \beta_0 + \beta^T y$ (with the class labels coded as $z_j = \pm 1$), where $\beta_0$ and $\beta$ are obtained as follows:
      $\min_{\beta_0,\,\beta} \ \tfrac{1}{2}\|\beta\|^2 + \gamma \sum_{j=1}^{n} \xi_j$
  subject to
      $\xi_j \ge 0, \quad z_j(\beta_0 + \beta^T y_j) \ge 1 - \xi_j \quad (j = 1, \ldots, n),$
  where the $\xi_j$ are the slack variables ($\xi_j \equiv 0$ in the separable case).

  16. The solution has the form $\hat{\beta} = \sum_{j=1}^{n} \hat{\alpha}_j z_j y_j$, with $\hat{\alpha}_j$ non-zero only for those observations $j$ for which the constraints are exactly met (the support vectors).

  17. Support Vector Machine (SVM): obtained by replacing the inner product $y^T y_j$ by $K(y, y_j) = \langle h(y), h(y_j) \rangle$, where the kernel function $K$ is the inner product in the transformed feature space defined by the mapping $h$.
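A quick numeric check of the kernel idea, using the degree-2 polynomial kernel as an illustration (this particular kernel and the explicit map h below are my choices, not the slide's):

    import numpy as np

    def h(y):
        """Explicit feature map for K(u, v) = (1 + u'v)^2 with 2-d inputs."""
        y1, y2 = y
        return np.array([1.0,
                         np.sqrt(2) * y1, np.sqrt(2) * y2,
                         y1 ** 2, y2 ** 2,
                         np.sqrt(2) * y1 * y2])

    u, v = np.array([0.3, -1.2]), np.array([2.0, 0.5])
    print((1 + u @ v) ** 2)   # kernel value K(u, v)
    print(h(u) @ h(v))        # the same number via the explicit map

The two printed values agree for any u and v, which is the point: the SVM never needs h explicitly.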

  18. HASTIE et al. (2001, Chapter 12). The Lagrange (primal) function is
      $L_P = \tfrac{1}{2}\|\beta\|^2 + \gamma \sum_{j=1}^{n} \xi_j - \sum_{j=1}^{n} \alpha_j\{z_j(\beta_0 + \beta^T y_j) - (1 - \xi_j)\} - \sum_{j=1}^{n} \mu_j \xi_j, \quad (1)$
  which we minimize w.r.t. $\beta$, $\beta_0$, and $\xi_j$. Setting the respective derivatives to zero, we get
      $\beta = \sum_{j=1}^{n} \alpha_j z_j y_j, \quad (2)$
      $0 = \sum_{j=1}^{n} \alpha_j z_j, \quad (3)$
      $\alpha_j = \gamma - \mu_j \ (j = 1, \ldots, n), \quad (4)$
  with $\alpha_j \ge 0$, $\mu_j \ge 0$, and $\xi_j \ge 0$.

  19. By substituting (2) to (4) into (1), we obtain the Lagrangian dual function
      $L_D = \sum_{j=1}^{n} \alpha_j - \tfrac{1}{2} \sum_{j=1}^{n} \sum_{k=1}^{n} \alpha_j \alpha_k z_j z_k\, y_j^T y_k. \quad (5)$
  We maximize (5) subject to $0 \le \alpha_j \le \gamma$ and $\sum_{j=1}^{n} \alpha_j z_j = 0$. In addition to (2) to (4), the constraints include
      $\alpha_j\{z_j(\beta_0 + \beta^T y_j) - (1 - \xi_j)\} = 0, \quad (6)$
      $\mu_j \xi_j = 0, \quad (7)$
      $z_j(\beta_0 + \beta^T y_j) - (1 - \xi_j) \ge 0, \quad (8)$
  for $j = 1, \ldots, n$. Together, equations (2) to (8) uniquely characterize the solution to the primal and dual problem.
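Equation (2) can be checked numerically: scikit-learn's SVC stores the products $\hat{\alpha}_j z_j$ for the support vectors in dual_coef_, so the $\beta$ recovered from the dual should match coef_. A small sketch with illustrative data:

    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(2)
    X = np.vstack([rng.normal(+1, 1, (20, 4)), rng.normal(-1, 1, (20, 4))])
    z = np.array([+1] * 20 + [-1] * 20)

    svm = SVC(kernel="linear", C=1.0).fit(X, z)
    beta = svm.dual_coef_ @ svm.support_vectors_   # sum_j alpha_j z_j y_j
    print(np.allclose(beta, svm.coef_))            # True: equation (2)
    print("support vectors:", len(svm.support_))   # typically few of n = 40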

  20. Leo Breiman (2001). Statistical modeling: the two cultures (with discussion). Statistical Science 16, 199-231. Discussants include Brad Efron and David Cox.

  21. GUYON, WESTON, BARNHILL & VAPNIK (2002, Machine Learning). LEUKAEMIA DATA: only 2 genes are needed to obtain a zero CVE (cross-validated error rate). COLON DATA: using only 4 genes, the CVE is 2%.

  22. Since p >> n, consideration is given to the selection of suitable genes. SVM: FORWARD or BACKWARD selection (in terms of the magnitude of the weights βi), notably RECURSIVE FEATURE ELIMINATION (RFE), sketched below. FISHER: FORWARD selection ONLY (in terms of CVE).
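A sketch of RFE for a linear SVM via scikit-learn's RFE helper; the planted informative genes, the sample sizes, and the elimination step are illustrative assumptions:

    import numpy as np
    from sklearn.feature_selection import RFE
    from sklearn.svm import SVC

    rng = np.random.default_rng(3)
    n, p = 62, 2000
    X = rng.standard_normal((n, p))
    z = np.repeat([0, 1], n // 2)
    X[z == 1, :4] += 1.5      # plant 4 genuinely informative "genes"

    # Drop the half of the genes with the smallest |beta_i| at each step,
    # refit, and repeat until 4 genes remain.
    rfe = RFE(SVC(kernel="linear"), n_features_to_select=4, step=0.5).fit(X, z)
    print(np.where(rfe.support_)[0])   # indices of the surviving genes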

  23. GUYON et al. (2002). LEUKAEMIA DATA: only 2 genes are needed to obtain a zero CVE (cross-validated error rate). COLON DATA: using only 4 genes, the CVE is 2%.

  24. GUYON et al. (2002) “The success of the RFE indicates that RFE has a built in regularization mechanism that we do not understand yet that prevents overfitting the training data in its selection of gene subsets.”

  25. Example: Microarray Data. Colon data of Alon et al. (1999): n = 62 (40 tumours; 22 normals) tissue samples of p = 2,000 genes in a 2,000 × 62 matrix.

  26. Figure 1: Error rates of the SVM rule with RFE procedure averaged over 50 random splits of colon tissue samples

  27. Figure 2: Error rates of the SVM rule with RFE procedure averaged over 50 random splits of leukemia tissue samples

  28. Figure 3: Error rates of Fisher’s rule with stepwise forward selection procedure using all the colon data

  29. Figure 4: Error rates of Fisher’s rule with stepwise forward selection procedure using all the leukemia data

  30. Figure 5: Error rates of the SVM rule averaged over 20 noninformative samples generated by random permutations of the class labels of the colon tumor tissues

  31. ADDITIONAL REFERENCES. Selection bias ignored: XIONG et al. (2001, Molecular Genetics and Metabolism); XIONG et al. (2001, Genome Research); ZHANG et al. (2001, PNAS). Aware of selection bias: SPANG et al. (2001, In Silico Biology); WEST et al. (2001, PNAS); NGUYEN and ROCKE (2002).

  32. Error Rate Estimation. Suppose there are two groups, G1 and G2. c(y) is a classifier formed from the data set (y1, y2, …, yn). The apparent error is the proportion of the data set misallocated by c(y).

  33. Cross-Validation. From the original data set, remove y1 to give the reduced set (y2, y3, …, yn). Then form the classifier c(1)(y) from this reduced set. Use c(1)(y1) to allocate y1 to either G1 or G2.

  34. Repeat this process for the second data point, y2, so that this point is assigned to either G1 or G2 on the basis of the classifier c(2)(y2). And so on, up to yn.
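The procedure of slides 33-34 as a short sketch (NumPy only; the nearest-mean rule is just an illustrative stand-in for c(y)):

    import numpy as np

    def loocv_error(X, z, fit):
        """fit(X, z) -> classifier; returns the leave-one-out error rate."""
        errors = 0
        for j in range(len(X)):
            keep = np.arange(len(X)) != j     # the reduced set without y_j
            c_j = fit(X[keep], z[keep])       # the classifier c^(j)
            errors += c_j(X[j]) != z[j]       # allocate the held-out y_j
        return errors / len(X)

    def fit_nearest_mean(X, z):
        m0, m1 = X[z == 0].mean(axis=0), X[z == 1].mean(axis=0)
        return lambda y: int(np.linalg.norm(y - m1) < np.linalg.norm(y - m0))

    rng = np.random.default_rng(4)
    X = np.vstack([rng.normal(0, 1, (15, 3)), rng.normal(2, 1, (15, 3))])
    z = np.repeat([0, 1], 15)
    print(loocv_error(X, z, fit_nearest_mean))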

  35. Ten-Fold Cross-Validation. [Diagram: the data are divided into ten blocks; on each of ten passes, one block serves as the Test set and the remaining nine form the Training set.]

  36. BOOTSTRAP APPROACH. Efron's (1983, JASA) .632 estimator:
      $B_{.632} = .368\,A + .632\,B_1,$
  where $A$ is the apparent error rate and $B_1$ is the bootstrap error when the rule is applied to a point not in the training sample. A Monte Carlo estimate of $B_1$ is
      $\hat{B}_1 = \frac{1}{n} \sum_{j=1}^{n} \frac{\sum_{k=1}^{K} I_{jk} Q_{jk}}{\sum_{k=1}^{K} I_{jk}},$
  where $I_{jk} = 1$ if $y_j$ is absent from the $k$th bootstrap sample (0 otherwise) and $Q_{jk} = 1$ if the rule formed from the $k$th bootstrap sample misallocates $y_j$ (0 otherwise).

  37. Toussaint & Sharpe (1975) proposed the ERROR RATE ESTIMATOR
      $A(w) = (1 - w)A + w\,A^{(CV)},$
  where $A$ is the apparent error rate and $A^{(CV)}$ the cross-validated error rate. McLachlan (1977) proposed $w = w_0$, where $w_0$ is chosen to minimize the asymptotic bias of $A(w)$ in the case of two homoscedastic normal groups. The value of $w_0$ was found to range between 0.6 and 0.7, depending on the values of the underlying parameters.

  38. The .632+ estimate of Efron & Tibshirani (1997, JASA):
      $B_{.632+} = (1 - w)A + w\,B_1, \quad w = \frac{.632}{1 - .368\,r},$
  where
      $r = \frac{B_1 - A}{\gamma - A}$
  is the relative overfitting rate and $\gamma$ is an estimate of the no-information error rate. If $r = 0$, then $w = .632$, and so $B_{.632+} = B_{.632}$; if $r = 1$, then $w = 1$, and so $B_{.632+} = B_1$.
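A sketch of these estimators from the quantities defined above; the Monte Carlo loop follows the I_jk / Q_jk recipe of slide 36, and the no-information rate gamma is taken as an input here:

    import numpy as np

    def b632(A, B1):
        # Efron's (1983) .632 estimator.
        return 0.368 * A + 0.632 * B1

    def b632_plus(A, B1, gamma):
        # Efron & Tibshirani's (1997) .632+ estimator.
        B1 = min(B1, gamma)               # cap B1 at the no-information rate
        r = (B1 - A) / (gamma - A) if gamma > A else 0.0
        r = min(max(r, 0.0), 1.0)         # relative overfitting rate in [0, 1]
        w = 0.632 / (1 - 0.368 * r)
        return (1 - w) * A + w * B1

    def bootstrap_B1(X, z, fit, K=200, seed=0):
        """Monte Carlo estimate of B1; fit(X, z) -> classifier."""
        rng = np.random.default_rng(seed)
        n = len(X)
        miss = np.zeros(n)                # sum over k of I_jk * Q_jk
        outs = np.zeros(n)                # sum over k of I_jk
        for _ in range(K):
            idx = rng.integers(0, n, n)   # bootstrap sample, with replacement
            c = fit(X[idx], z[idx])
            for j in np.setdiff1d(np.arange(n), idx):
                outs[j] += 1              # y_j absent from this sample
                miss[j] += c(X[j]) != z[j]
        ok = outs > 0
        return np.mean(miss[ok] / outs[ok])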

  39. Ten-Fold Cross-Validation. [Diagram repeated from slide 35: one block as the Test set, the other nine as the Training set.]

  40. MARKER GENES FOR HARVARD DATA. For an SVM based on 64 genes, and using 10-fold CV, we noted the number of times a gene was selected.

      No. of genes   Times selected
           55              1
           18              2
           11              3
            7              4
            8              5
            6              6
           10              7
            8              8
           12              9
           17             10

  41. MARKER GENES FOR HARVARD DATA (table repeated):

      No. of genes   Times selected
           55              1
           18              2
           11              3
            7              4
            8              5
            6              6
           10              7
            8              8
           12              9
           17             10

  42. Breast cancer data set of van 't Veer et al. (2002, Gene Expression Profiling Predicts Clinical Outcome of Breast Cancer, Nature 415). These data were the result of microarray experiments on three patient groups with different classes of breast cancer tumours. The overall goal was to identify a set of genes that could distinguish between the different tumour groups based on the gene-expression information for these groups.

  43. Breast tumours have a genetic signature. The expression pattern of a set of 70 genes can predict whether a tumour is going to prove lethal, despite treatment, or not. “This gene expression profile will outperform all currently used clinical parameters in predicting disease outcome.” van ’t Veer et al. (2002), van de Vijver et al. (2002)

  44. van de Vijver et al. (2002) considered a further 234 breast cancer tumours, but made available only the data for the top 70 genes, based on the previous study of van 't Veer et al. (2002).

  45. Nearest-Shrunken Centroids (Tibshirani et al., 2002). The usual estimates of the class means are shrunk toward the overall mean of the data: the shrunken centroid for gene $i$ in class $k$ is
      $\bar{x}'_{ik} = \bar{x}_i + m_k (s_i + s_0)\, d'_{ik},$
  where
      $d'_{ik} = \mathrm{sign}(d_{ik})\,(|d_{ik}| - \Delta)_+$
  is the soft-thresholded version of the standardized difference $d_{ik} = (\bar{x}_{ik} - \bar{x}_i) / \{m_k (s_i + s_0)\}$. Here $\bar{x}_{ik}$ is the mean of gene $i$ in class $k$, $\bar{x}_i$ is its overall mean, $s_i$ is its pooled within-class standard deviation, $s_0$ is a small positive constant, $m_k$ standardizes for class size, and $\Delta$ controls the amount of shrinkage.
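A NumPy sketch of the shrinkage step; the exact form of the standardizing factor m_k is an assumption here, while the rest follows the formulas above:

    import numpy as np

    def shrunken_centroids(X, z, delta, s0=None):
        """X: n x p expression matrix, z: class labels, delta: shrinkage.
        Returns the shrunken class centroids, one row per class."""
        classes = np.unique(z)
        n, p = X.shape
        overall = X.mean(axis=0)               # overall mean, per gene
        # Pooled within-class standard deviation s_i, per gene.
        s = np.sqrt(sum(((X[z == k] - X[z == k].mean(axis=0)) ** 2).sum(axis=0)
                        for k in classes) / (n - len(classes)))
        s0 = np.median(s) if s0 is None else s0
        centroids = []
        for k in classes:
            nk = (z == k).sum()
            mk = np.sqrt(1.0 / nk - 1.0 / n)   # standardizing factor (assumed form)
            d = (X[z == k].mean(axis=0) - overall) / (mk * (s + s0))
            d = np.sign(d) * np.maximum(np.abs(d) - delta, 0.0)  # soft threshold
            centroids.append(overall + mk * (s + s0) * d)
        return np.vstack(centroids)

A new sample is then assigned to the class whose shrunken centroid is nearest (in the standardized metric); with Δ large enough, many genes contribute nothing to any centroid and drop out of the rule.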
