
Development and Validation of Prognostic Classifiers using High Dimensional Data


Presentation Transcript


  1. Development and Validation of Prognostic Classifiers using High Dimensional Data Richard Simon, D.Sc. Chief, Biometric Research Branch National Cancer Institute http://brb.nci.nih.gov

  2. Class Prediction • Predict which tumors will respond to a particular treatment • Predict which patients will relapse after a particular treatment

  3. Class Prediction • A set of genes is not a classifier • Testing whether analysis of independent data results in selection of the same set of genes is not an appropriate test of predictive accuracy of a classifier

  4. Components of Class Prediction • Feature (gene) selection • Which genes will be included in the model • Select model type • E.g. Diagonal linear discriminant analysis, Nearest-Neighbor, … • Fitting parameters (regression coefficients) for model • Selecting value of tuning parameters • Estimating prediction accuracy

  5. Class Prediction ≠ Class Comparison • The criteria for gene selection for class prediction and for class comparison are different • For class comparison false discovery rate is important • For class prediction, predictive accuracy is important • Demonstrating statistical significance of prognostic factors is not the same as demonstrating predictive accuracy. • Statisticians are used to inference, not prediction • Most statistical methods were not developed for p>>n prediction problems

  6. Feature (Gene) Selection • Select genes that are differentially expressed among the classes at a significance level α (e.g. 0.01) • The α level is a tuning parameter which can be optimized by “inner” cross-validation • For class comparison, false discovery rate is important • For class prediction, predictive accuracy is important • For prediction it is usually more serious to exclude an informative variable than to include some noise variables • L1 penalized likelihood or predictive likelihood can be used for gene selection, with the penalty value selected by cross-validation • Select the smallest number of genes consistent with accurate prediction
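
A minimal sketch of the univariate selection step described on this slide, assuming a log-expression matrix X (samples by genes), binary class labels y, and a cutoff alpha; these names are illustrative rather than from the slides, and in practice alpha would be tuned by the inner cross-validation mentioned above.

```python
import numpy as np
from scipy import stats

def select_genes(X, y, alpha=0.01):
    """Indices of genes differentially expressed between the two classes
    at significance level alpha (two-sample t-test computed gene by gene)."""
    t, p = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
    return np.where(p < alpha)[0]
```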

  7. Optimal significance level cutoffs for gene selection [table omitted]: 50 differentially expressed genes out of 22,000, on n arrays

  8. Complex Gene Selection • Small subset of genes which together give most accurate predictions • Genetic algorithms • Little evidence that complex feature selection is useful in microarray problems • Apparent benefits often reflect failure to compare to simpler methods • or improper use of cross-validation

  9. Linear Classifiers for Two Classes

  10. Linear Classifiers for Two Classes • Fisher linear discriminant analysis • Diagonal linear discriminant analysis (DLDA) assumes features are uncorrelated • Compound covariate predictor (Radmacher) • Golub’s weighted voting method • Support vector machines with inner product kernel

  11. Fisher LDA

  12. The Compound Covariate Predictor (CCP) • Motivated by J. Tukey, Controlled Clinical Trials, 1993 • A compound covariate is built from the basic covariates (log-ratios): c_i = Σ_j t_j x_ij, where t_j is the two-sample t-statistic for gene j, x_ij is the log-expression measure of sample i for gene j, and the sum is over the selected genes. • Threshold of classification: midpoint of the CCP means for the two classes.

  13. Linear Classifiers for Two Classes • Compound covariate predictor uses weights w_j = t_j • Instead of w_j = (mean1_j - mean2_j) / s_j^2 as for DLDA
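
The compound covariate predictor of slides 12 and 13 can be written in a few lines. This is an illustrative sketch, not BRB-ArrayTools code: the t-statistic weights and the midpoint threshold follow the CCP definition above, the commented DLDA weight is the usual diagonal-covariance form, and the X, y conventions reuse the hypothetical ones from the gene-selection sketch.

```python
import numpy as np
from scipy import stats

def ccp_fit(X, y):
    """Compound covariate predictor on the selected genes.
    Weights are the per-gene two-sample t-statistics; the classification
    threshold is the midpoint of the CCP means of the two classes.
    (DLDA would instead use w_j = (mean0_j - mean1_j) / var_j.)"""
    t, _ = stats.ttest_ind(X[y == 0], X[y == 1], axis=0)
    c = X @ t                                   # compound covariate c_i = sum_j t_j x_ij
    threshold = 0.5 * (c[y == 0].mean() + c[y == 1].mean())
    return t, threshold

def ccp_predict(X, t, threshold):
    """Class 0 if the compound covariate exceeds the threshold, else class 1
    (t was computed as class 0 minus class 1, so class 0 has the larger CCP mean)."""
    return np.where(X @ t > threshold, 0, 1)
```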

  14. Support Vector Machine

  15. Other Simple Methods • Nearest neighbor classification • Nearest k-neighbors • Nearest centroid classification • Shrunken centroid classification

  16. Nearest Neighbor Classifier • To classify a sample in the validation set as being in outcome class 1 or outcome class 2, determine which sample in the training set its gene expression profile is most similar to. • Similarity measure used is based on genes selected as being univariately differentially expressed between the classes • Correlation similarity or Euclidean distance generally used • Classify the sample as being in the same class as its nearest neighbor in the training set
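
A sketch of the 1-nearest-neighbor rule over the pre-selected genes, using Euclidean distance; correlation similarity could be substituted. Array names follow the earlier hypothetical sketches.

```python
import numpy as np

def nearest_neighbor_predict(X_train, y_train, x_new):
    """Assign x_new the class of its nearest training sample
    (Euclidean distance over the pre-selected genes)."""
    d = np.linalg.norm(X_train - x_new, axis=1)
    return y_train[np.argmin(d)]
```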

  17. When p>>n • It is always possible to find a set of features and a weight vector for which the classification error on the training set is zero. • Why consider more complex models?

  18. Some predictive classification problems are easy • Toy problems • Almost all methods work well • It is possible to predict accurately with few variables • Many real classification problems are difficult • Difficult to distinguish informative variables from the extreme order statistics of noise variables • Comparative studies generally indicate that simpler methods work as well or better for microarray problems because they avoid overfitting the data.

  19. Other Methods • Top-scoring pairs • CART • Random Forest

  20. Dimension Reduction Methods • Principal component regression • Supervised principal component regression • Partial least squares • Compound covariate and DLDA • Supervised clustering • L1 penalized logistic regression

  21. When There Are More Than 2 Classes • Nearest neighbor type methods • Decision tree of binary classifiers

  22. Decision Tree of Binary Classifiers • Partition the set of classes {1,2,…,K} into two disjoint subsets S1 and S2 • e.g. S1={1}, S2={2,3,4} • Develop a binary classifier for distinguishing the composite classes S1 and S2 • Compute the cross-validated classification error for distinguishing S1 and S2 • Repeat the above steps for all possible partitions in order to find the partition S1 and S2 for which the cross-validated classification error is minimized • If S1 and S2 are not singleton sets, then repeat all of the above steps separately for the classes in S1 and S2 to optimally partition each of them
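
A compact sketch of the top-level step only: enumerate partitions of the class set into S1 and S2, cross-validate a binary classifier for each, and keep the partition with the smallest error. The scikit-learn classifier and 5-fold CV are stand-ins chosen for brevity (the slide fixes neither), and the recursion into non-singleton subsets is only indicated by a comment.

```python
import numpy as np
from itertools import combinations
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def best_binary_partition(X, y):
    """Find the partition of the class labels into S1/S2 with the smallest
    cross-validated error of a binary classifier for the composite classes."""
    classes = sorted(set(y))
    best = None
    for r in range(1, len(classes) // 2 + 1):
        for S1 in combinations(classes, r):
            y_bin = np.isin(y, S1).astype(int)      # composite classes S1 vs S2
            err = 1.0 - cross_val_score(LinearDiscriminantAnalysis(),
                                        X, y_bin, cv=5).mean()
            if best is None or err < best[0]:
                best = (err, set(S1), set(classes) - set(S1))
    # If S1 or S2 is not a singleton, repeat the procedure recursively within
    # that subset to grow the full decision tree of binary classifiers.
    return best
```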

  23. cDNA Microarrays (Parallel Gene Expression Analysis): Gene-Expression Profiles in Hereditary Breast Cancer • Breast tumors studied: • 7 BRCA1+ tumors • 8 BRCA2+ tumors • 7 sporadic tumors • Log-ratio measurements of 3226 genes for each tumor after initial data filtering • RESEARCH QUESTION: Can we distinguish BRCA1+ from BRCA1– cancers and BRCA2+ from BRCA2– cancers based solely on their gene expression profiles?

  24. BRCA1

  25. BRCA2

  26. Classification of BRCA2 Germline Mutations

  27. Evaluating a Classifier • “Prediction is difficult, especially about the future.” • Niels Bohr

  28. Evaluating a Classifier • Fit of a model to the same data used to develop it is no evidence of prediction accuracy for independent data • Goodness of fit vs prediction accuracy

  29. Hazard ratios and statistical significance levels are not appropriate measures of prediction accuracy • A hazard ratio is a measure of association • Large values of HR may correspond to small improvement in prediction accuracy • Kaplan-Meier curves for predicted risk groups within strata defined by standard prognostic variables provide more information about improvement in prediction accuracy • Time dependent ROC curves within strata defined by standard prognostic factors can also be useful

  30. Time-Dependent ROC Curve • Sensitivity vs 1-Specificity at a landmark time T • Sensitivity = prob{M=1 | S<T} • Specificity = prob{M=0 | S>T} • (M = binary classifier prediction, 1 = predicted high risk; S = survival time)
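
A minimal sketch of the landmark-time quantities on this slide, reading M as the binary classifier output (1 = predicted high risk), S as survival time, and T as the landmark time; censoring is ignored here for simplicity, which is an extra assumption not made on the slide.

```python
import numpy as np

def landmark_sens_spec(M, S, T):
    """Sensitivity = P(M=1 | S < T), specificity = P(M=0 | S > T),
    with censoring ignored (all survival times assumed fully observed)."""
    M, S = np.asarray(M), np.asarray(S)
    sens = M[S < T].mean()         # early failures flagged as high risk
    spec = 1.0 - M[S > T].mean()   # long survivors not flagged
    return sens, spec
```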

  31. Validation of a Predictor • Internal validation • Re-substitution estimate • Very biased • Split-sample validation • Cross-validation • Independent data validation

  32. Validating a Predictive Classifier • Fit of a model to the same data used to develop it is no evidence of prediction accuracy for independent data • Goodness of fit is not prediction accuracy • Demonstrating statistical significance of prognostic factors is not the same as demonstrating predictive accuracy • Demonstrating stability of selected genes is not demonstrating predictive accuracy of a model for independent data

  33. Split-Sample Evaluation • Training-set • Used to select features, select model type, determine parameters and cut-off thresholds • Test-set • Withheld until a single model is fully specified using the training-set. • Fully specified model is applied to the expression profiles in the test-set to predict class labels. • Number of errors is counted • Ideally test set data is from different centers than the training data and assayed at a different time
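
A sketch of the split-sample workflow, reusing the hypothetical select_genes/ccp_fit/ccp_predict helpers sketched above and assuming a full matrix X and labels y; the split fraction and seed are arbitrary. The key point is that every development step touches only the training half.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# X: full log-expression matrix (samples x genes), y: binary class labels
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=0, stratify=y)

genes = select_genes(X_train, y_train, alpha=0.01)   # gene selection on training data only
t, thr = ccp_fit(X_train[:, genes], y_train)         # model fully specified from training data
errors = int(np.sum(ccp_predict(X_test[:, genes], t, thr) != y_test))
print(f"{errors} misclassifications out of {len(y_test)} test samples")
```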

  34. Cross-Validated Prediction (Leave-One-Out Method) [diagram omitted: matrix of log-expression ratios by specimens, split into training set and test set] 1. Full data set is divided into training and test sets (test set contains 1 specimen). 2. Prediction rule is built from scratch using the training set. 3. Rule is applied to the specimen in the test set for class prediction. 4. Process is repeated until each specimen has appeared once in the test set.

  35. Leave-one-out Cross Validation • Omit sample 1 • Develop multivariate classifier from scratch on training set with sample 1 omitted • Predict class for sample 1 and record whether prediction is correct

  36. Leave-one-out Cross Validation • Repeat analysis for training sets with each single sample omitted one at a time • e = number of misclassifications determined by cross-validation • Subdivide e for estimation of sensitivity and specificity

  37. Cross validation is only valid if the test set is not used in any way in the development of the model. Using the complete set of samples to select genes violates this assumption and invalidates cross-validation. • With proper cross-validation, the model must be developed from scratch for each leave-one-out training set. This means that feature selection must be repeated for each leave-one-out training set. • Simon R, Radmacher MD, Dobbin K, McShane LM. Pitfalls in the analysis of DNA microarray data. Journal of the National Cancer Institute 95:14-18, 2003. • The cross-validated estimate of misclassification error is an estimate of the prediction error for the model obtained by applying the specified algorithm to the full dataset
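
A sketch of leave-one-out cross-validation done as this slide requires: gene selection and classifier fitting are redone from scratch inside every fold, so the held-out specimen never influences the model that predicts it. The helper functions and the alpha cutoff are the illustrative ones sketched earlier.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut

def loocv_error(X, y, alpha=0.01):
    """Number of misclassifications e under leave-one-out cross-validation,
    with gene selection repeated from scratch inside every fold."""
    errors = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        genes = select_genes(X[train_idx], y[train_idx], alpha)  # training fold only
        t, thr = ccp_fit(X[train_idx][:, genes], y[train_idx])
        pred = ccp_predict(X[test_idx][:, genes], t, thr)
        errors += int(pred[0] != y[test_idx][0])
    return errors
```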

  38. Permutation Distribution of Cross-validated Misclassification Rate of a Multivariate Classifier (Radmacher, McShane & Simon, J Comp Biol 9:505, 2002) • Randomly permute class labels and repeat the entire cross-validation • Re-do for all (or 1000) random permutations of class labels • Permutation p value is fraction of random permutations that gave as few misclassifications as e in the real data
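
A sketch of the permutation test described on this slide, assuming the loocv_error helper above: labels are permuted, the entire cross-validation is rerun, and the p value is the fraction of permutations giving no more misclassifications than the observed e.

```python
import numpy as np

def permutation_p_value(X, y, n_perm=1000, alpha=0.01, seed=0):
    """Fraction of label permutations whose cross-validated error is
    no larger than the error e observed with the true labels."""
    rng = np.random.default_rng(seed)
    e_obs = loocv_error(X, y, alpha)
    hits = sum(loocv_error(X, rng.permutation(y), alpha) <= e_obs
               for _ in range(n_perm))
    return hits / n_perm
```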

  39. Prediction on Simulated Null Data • Generation of Gene Expression Profiles • 14 specimens (P_i is the expression profile for specimen i) • Log-ratio measurements on 6000 genes • P_i ~ MVN(0, I_6000) • Can we distinguish between the first 7 specimens (Class 1) and the last 7 (Class 2)? • Prediction Method • Compound covariate prediction (slides 12 and 13) • Compound covariate built from the log-ratios of the 10 most differentially expressed genes.
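
A sketch of the null-data setup on this slide: 14 profiles of 6,000 independent standard-normal log-ratios, first 7 versus last 7. The slide builds the compound covariate from the 10 most differentially expressed genes; for brevity this sketch reuses the alpha-cutoff selection from the earlier helpers, which is a substitution rather than the slide's exact recipe, and the seed is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
X_null = rng.standard_normal((14, 6000))    # P_i ~ MVN(0, I_6000): pure noise
y_null = np.array([0] * 7 + [1] * 7)        # first 7 = class 1, last 7 = class 2

# With cross-validation redone from scratch in every fold (loocv_error above),
# the estimated error on this null data should be near chance (about 50%),
# whereas the resubstitution error on the training data can easily be zero.
e = loocv_error(X_null, y_null, alpha=0.01)
print(f"{e} of 14 specimens misclassified under leave-one-out cross-validation")
```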
