
Model Selection




Presentation Transcript


  1. Model Selection Manu Chandran

  2. Outline • Background and motivation • Overview of techniques • Cross-validation • Bootstrap method • Setting up the problem • Comparing AIC, BIC, cross-validation, and the bootstrap • For a small data set - the iris data set • For a large data set - the ellipse data set • Finding the number of relevant parameters - the cancer data set (from the class text) • Conclusion

  3. Background and Motivation • Model selection • Parameters to change • Overview of error measures and when each is used • AIC -> suits smaller samples; mild penalty on model complexity • BIC -> suits larger samples; stronger penalty on model complexity • Cross-validation • Bootstrap methods

  4. Motivation for Cross-Validation • Small number of data points • Enables reuse of the data • Basic idea of cross-validation • K-fold cross-validation • K = 5 in this example, as in the sketch below
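
A minimal sketch of 5-fold cross-validation, assuming Python with scikit-learn and the iris data set mentioned in the outline; the logistic-regression classifier is an illustrative choice, not one named on the slides:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score

    X, y = load_iris(return_X_y=True)

    # Split the data into 5 folds; each fold serves once as the test set
    # while the remaining 4 folds are used for training.
    cv = KFold(n_splits=5, shuffle=True, random_state=0)
    scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

    # The CV estimate of the test error is the average over the 5 folds.
    print("per-fold accuracy:", scores)
    print("estimated test error:", 1 - scores.mean())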

  5. Simple enough! What more? • Points to consider • Why is it important? • How do we estimate the test error? • Selection of the K in K-fold • What K is good enough for a given data set? • Why it matters - bias and variance • Selection of features in "low data, high feature" problems • Important do's and don'ts in feature selection when using cross-validation • Finds application in bioinformatics, where the number of parameters often far exceeds the number of samples

  6. Overview of error terms • Recap from the last class • In-sample error: Err_in • Expected error: Err • Training error: err • Test error conditional on the training set: Err_T • AIC and BIC attempt to estimate Err_in • Cross-validation attempts to estimate the expected error Err (see the sketch below)
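
As a reminder of what AIC and BIC trade off, a small sketch under a Gaussian linear model, where -2 log-likelihood reduces to N log(RSS/N) up to an additive constant; the helper function and the example numbers are illustrative assumptions, not values from the slides:

    import numpy as np

    def aic_bic(rss, n, d):
        """AIC and BIC for a Gaussian linear model with d parameters."""
        neg2_loglik = n * np.log(rss / n)   # -2 log-likelihood, up to a constant
        aic = neg2_loglik + 2 * d           # penalty fixed at 2 per parameter
        bic = neg2_loglik + d * np.log(n)   # penalty grows with n, so BIC
        return aic, bic                     # favors smaller models for large n

    # Example: compare a 3-parameter and a 10-parameter fit on 50 points.
    for d, rss in [(3, 12.0), (10, 9.5)]:
        aic, bic = aic_bic(rss, n=50, d=d)
        print(f"d={d}: AIC={aic:.1f}, BIC={bic:.1f}")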

  7. Selection of K • K = N: N-fold CV, or leave-one-out • Nearly unbiased • High variance, since the N training sets overlap almost completely • K = 5: 5-fold CV • Lower variance • Higher bias, since each model trains on only 80% of the data • "Subset size p" means the best set of p linear predictors • The two choices of K are contrasted in the sketch below
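
A sketch contrasting the two choices of K, reusing the iris setup from above; the data set and classifier are again illustrative assumptions:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=1000)

    # K = N: nearly unbiased, but the N training sets overlap heavily,
    # so the resulting error estimate has high variance.
    loo = cross_val_score(model, X, y, cv=LeaveOneOut())

    # K = 5: lower variance, but each model trains on only 80% of the
    # data, which tends to bias the error estimate upward.
    five = cross_val_score(model, X, y,
                           cv=KFold(n_splits=5, shuffle=True, random_state=0))

    print("leave-one-out error:", 1 - loo.mean())
    print("5-fold error:", 1 - five.mean())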

  8. Selection of features using CV • Often finds application in bioinformatics • One way of selecting predictors: • Screen for predictors that show high correlation with the class labels • Build a multivariate classifier • Use CV to find the tuning parameter • Estimate the prediction error of the final model

  9. The problem with this method • CV is done after feature selection, so the test samples influenced which predictors were chosen • The right way to do cross-validation (sketched below): • Divide the samples into K cross-validation folds at random • Say K = 5 • Find predictors based on the 4 training folds only • Using these predictors, tune the classifier on those 4 folds • Test on the held-out 5th fold
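
A sketch of both procedures in scikit-learn; the synthetic data (random labels, so the true error is 0.5), the SelectKBest screening, and the logistic-regression classifier are all illustrative assumptions. Wrapping the screening in a Pipeline redoes it inside every CV split, so the held-out fold never influences which predictors are chosen:

    import numpy as np
    from sklearn.feature_selection import SelectKBest, f_classif
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import KFold, cross_val_score
    from sklearn.pipeline import Pipeline

    rng = np.random.default_rng(0)
    X = rng.standard_normal((50, 1000))   # 50 samples, 1000 noise features
    y = rng.integers(0, 2, 50)            # labels independent of X

    cv = KFold(n_splits=5, shuffle=True, random_state=0)

    # Right way: screening happens inside each split, on training folds only.
    pipe = Pipeline([
        ("screen", SelectKBest(f_classif, k=20)),  # 20 most correlated features
        ("clf", LogisticRegression(max_iter=1000)),
    ])
    honest = cross_val_score(pipe, X, y, cv=cv)
    print("honest CV error:", 1 - honest.mean())   # close to the true 0.5

    # Wrong way: screen on ALL the data first, then cross-validate. The
    # held-out folds leaked into feature selection, so the error estimate
    # is misleadingly optimistic.
    X_leaky = SelectKBest(f_classif, k=20).fit_transform(X, y)
    leaky = cross_val_score(LogisticRegression(max_iter=1000), X_leaky, y, cv=cv)
    print("leaky CV error:", 1 - leaky.mean())     # far below 0.5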

  10. Correlation of predictors with outcome

  11. Bootstrapping • Explanation of bootstrapping

  12. Probability of having the ith sample in a bootstrap sample • For large N, the number of times sample i appears is approximately Poisson with λ = 1, so P(sample i is excluded) ≈ e^(-1) ≈ 0.368 • So the expected error estimate = 0.5 × 0.368 = 0.184 • Far below the true error of 0.5 • To avoid this optimism, the leave-one-out bootstrap is suggested (see the check below)
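
A quick numerical check of these figures; the 1-nearest-neighbor, labels-independent-of-features setup assumed in the comments is the class-text example the 0.184 figure comes from:

    n = 1000
    # Probability that sample i appears at least once in a bootstrap
    # sample of size n: 1 - (1 - 1/n)^n -> 1 - e^(-1) ≈ 0.632 for large n.
    p_in = 1 - (1 - 1 / n) ** n
    p_out = 1 - p_in  # ≈ e^(-1) ≈ 0.368
    print(f"P(in) = {p_in:.3f}, P(out) = {p_out:.3f}")

    # For a 1-nearest-neighbor classifier whose labels are independent of
    # the features (true error 0.5), the error on sample i is 0 whenever i
    # is in the bootstrap sample and 0.5 otherwise:
    print(f"bootstrap error estimate: {0.5 * p_out:.3f}")  # ≈ 0.184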
