This research aims to improve the accuracy of predicting surgical outcomes by leveraging machine learning techniques and feature selection methods. We explore the effectiveness of various features, including joint positions, velocities, and angles, in predicting outcomes for knee surgery. By combining traditional approaches with algorithms such as boosting and deep learning, we address challenges in feature representation and dimensionality, aiming to improve predictive performance and our understanding of the data. This approach promises faster, more cost-effective, and more insightful predictions.
Playing with Features for Learning and Prediction
Jongmin Kim, Seoul National University
Problem statement • Predicting outcome of surgery
Predicting outcome of surgery • Ideal approach [diagram: training data of surgery examples → predicting the outcome for a new case]
Predicting outcome of surgery • Initial approach • Predicting partial features • Which features should we predict?
Predicting outcome of surgery • Four combined surgeries: DHL+RFT+TAL+FDO • Target features: flexion of the knee (min/max), rotation of the foot (min/max), dorsiflexion of the ankle (min)
Predicting outcome of surgery • Are these good features? • Number of training samples • DHL+RFT+TAL: 35 samples • FDO+DHL+TAL+RFT: 33 samples
Machine learning and features • Data → Feature representation → Learning algorithm
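This pipeline can be made concrete with a minimal scikit-learn sketch. The feature matrix `X`, outcome vector `y`, and the scaler/PCA/ridge choices below are illustrative assumptions, not the method used in this work.

```python
# Minimal sketch of the Data -> Feature representation -> Learning
# algorithm pipeline. X and y are synthetic placeholders.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 60))   # e.g. 35 patients, 60 raw motion features
y = rng.normal(size=35)         # e.g. a post-surgery gait measurement

model = Pipeline([
    ("scale", StandardScaler()),          # normalize raw features
    ("represent", PCA(n_components=10)),  # feature representation
    ("learn", Ridge(alpha=1.0)),          # learning algorithm
])
model.fit(X, y)
print(model.predict(X[:3]))
```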
Features in motion • Joint position / angle • Velocity / acceleration • Distance between body parts • Contact status • …
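As a rough illustration of how such motion features could be computed, here is a hedged NumPy sketch; the capture rate, joint indices, and contact threshold are all hypothetical placeholders.

```python
# Computing the motion features listed above from a hypothetical
# joint-position array `pos` of shape (frames, joints, 3).
import numpy as np

fps = 120.0                       # assumed capture rate
dt = 1.0 / fps
rng = np.random.default_rng(0)
pos = rng.normal(size=(300, 20, 3))  # placeholder motion clip

# Velocity and acceleration by finite differences along the time axis.
vel = np.gradient(pos, dt, axis=0)
acc = np.gradient(vel, dt, axis=0)

# Distance between two body parts (hypothetical joint indices 3 and 7).
dist = np.linalg.norm(pos[:, 3] - pos[:, 7], axis=-1)

# Contact status: a simple height threshold on an assumed foot joint.
contact = pos[:, 19, 1] < 0.05   # True when the foot is near the ground
```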
Features in computer vision • SIFT • Spin image • HoG • RIFT • GLOH • Textons
Outline • Feature selection • - Feature ranking • - Subset selection: wrapper, filter, embedded • - Recursive Feature Elimination • - Combination of weak priors (boosting) • - AdaBoost (classification) / joint boosting (classification) / gradient boosting (regression) • Prediction results with feature selection • Feature learning?
Feature selection • Alleviates the curse of dimensionality • Improves prediction performance • Is faster and more cost-effective • Provides a better understanding of the data
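Since the outline lists Recursive Feature Elimination, a minimal scikit-learn sketch may help; `X` and `y` here are synthetic placeholders, not the surgery dataset.

```python
# Recursive Feature Elimination: repeatedly fit a model and drop the
# weakest feature until the desired number remains.
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 30))    # 35 training samples, 30 candidate features
y = X[:, 0] - 2 * X[:, 4] + rng.normal(scale=0.1, size=35)

selector = RFE(LinearRegression(), n_features_to_select=5, step=1)
selector.fit(X, y)
print(np.flatnonzero(selector.support_))  # indices of the kept features
```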
Subset selection • Wrapper • Filter • Embedded
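One hedged scikit-learn example per family, on synthetic data: a filter scores each feature independently of the learner, a wrapper searches subsets by repeatedly refitting the learner, and an embedded method selects features as a by-product of training itself.

```python
import numpy as np
from sklearn.feature_selection import (SelectFromModel, SelectKBest,
                                       SequentialFeatureSelector, f_regression)
from sklearn.linear_model import Lasso, LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 30))
y = X[:, 0] - 2 * X[:, 4] + rng.normal(scale=0.1, size=35)

# Filter: rank features by a univariate score, independent of any learner.
filt = SelectKBest(score_func=f_regression, k=5).fit(X, y)

# Wrapper: greedily grow a subset by refitting the learner at each step.
wrap = SequentialFeatureSelector(LinearRegression(),
                                 n_features_to_select=5).fit(X, y)

# Embedded: the L1 penalty zeroes out weights during fitting itself.
emb = SelectFromModel(Lasso(alpha=0.1)).fit(X, y)

for name, sel in [("filter", filt), ("wrapper", wrap), ("embedded", emb)]:
    print(name, np.flatnonzero(sel.get_support()))
```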
Feature learning? • Can we automatically learn a good feature representation? • Known as: unsupervised feature learning, feature learning, deep learning, representation learning, etc. • Hand-designed features (by humans): • 1. require expert knowledge • 2. require time-consuming hand-tuning • When it is unclear how to hand-design features: automatically learn features (by machine)
Learning Feature Representations • Key idea: • –Learn statistical structure or correlation of the data from unlabeled data • –The learned representations can be used as features in supervised and semi-supervised settings
Learning Feature Representations [diagram: input (image/features) → encoder (feed-forward/bottom-up path) → output features; a decoder provides the feed-back/generative/top-down path]
Learning Feature Representations • Predictive Sparse Decomposition [Kavukcuoglu et al., '09] [diagram: input patch x → encoder filters W with sigmoid σ(·) giving σ(Wx); sparse features z under an L1 sparsity penalty; decoder filters D reconstruct x as Dz]
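A hedged NumPy sketch of the PSD objective as the diagram suggests: a reconstruction term through the decoder Dz, an L1 sparsity penalty on z, and a term tying z to the encoder prediction σ(Wx). Shapes, weights, and loss coefficients are illustrative, not the paper's settings.

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def psd_loss(x, z, D, W, lam=0.1, alpha=1.0):
    recon = np.sum((x - D @ z) ** 2)            # decoder: D z should match x
    sparsity = lam * np.sum(np.abs(z))          # L1 sparsity on features z
    predict = alpha * np.sum((z - sigmoid(W @ x)) ** 2)  # encoder prediction
    return recon + sparsity + predict

rng = np.random.default_rng(0)
x = rng.normal(size=64)          # input patch (flattened)
D = rng.normal(size=(64, 128))   # decoder filters
W = rng.normal(size=(128, 64))   # encoder filters
z = sigmoid(W @ x)               # initialize features at the encoder output
print(psd_loss(x, z, D, W))
```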
Stacked Auto-Encoders [Hinton & Salakhutdinov, Science '06] [diagram: input image → stacked encoder/decoder pairs, each producing features for the next layer → class label]
At Test Time [Hinton & Salakhutdinov, Science '06] • Remove decoders • Use the feed-forward path only • Gives a standard (convolutional) neural network • Can fine-tune with backprop [diagram: input image → encoder → features → encoder → class label]
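A hedged PyTorch sketch of this recipe: greedily pretrain each encoder/decoder pair, then discard the decoders and fine-tune the encoder stack plus a classifier head with backprop. Layer sizes, data, and iteration counts are placeholders.

```python
import torch
import torch.nn as nn

x = torch.randn(256, 784)               # placeholder input images
labels = torch.randint(0, 10, (256,))   # placeholder class labels

sizes = [784, 256, 64]
encoders = []
h = x
for d_in, d_out in zip(sizes[:-1], sizes[1:]):
    enc, dec = nn.Linear(d_in, d_out), nn.Linear(d_out, d_in)
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()),
                           lr=1e-3)
    for _ in range(100):                 # greedy layer-wise pretraining
        opt.zero_grad()
        loss = nn.functional.mse_loss(dec(torch.sigmoid(enc(h))), h)
        loss.backward()
        opt.step()
    encoders.append(enc)
    h = torch.sigmoid(enc(h)).detach()   # features feed the next layer

# Test time: keep only the feed-forward path plus a classifier head,
# giving a standard neural network fine-tuned end-to-end with backprop.
net = nn.Sequential(encoders[0], nn.Sigmoid(), encoders[1], nn.Sigmoid(),
                    nn.Linear(sizes[-1], 10))
opt = torch.optim.Adam(net.parameters(), lr=1e-4)
for _ in range(100):
    opt.zero_grad()
    nn.functional.cross_entropy(net(x), labels).backward()
    opt.step()
```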
Status & plan • Understanding the data / surveying learning techniques… • Plan: finish experiments in November • Write the paper in December • Submit to SIGGRAPH in January • Present in the US in August • But before all of that….
Deep neural nets vs. boosting • Deep nets: • - a single, highly non-linear system • - a "deep" stack of simpler modules • - all parameters are subject to learning • Boosting & forests: • - a sequence of "weak" (simple) classifiers that are linearly combined into a powerful classifier • - subsequent classifiers do not exploit the representations of earlier ones; it is a "shallow" linear mixture • - typically the features are not learned
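To ground the comparison, a minimal scikit-learn sketch of gradient boosting for regression, the "gradient boosting (regression)" item from the outline; the dataset is synthetic, not the surgery data.

```python
# Gradient boosting: a linear combination of sequentially fitted weak
# regression trees, each correcting the residuals of the previous ones.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(35, 30))   # e.g. 35 patients, 30 gait features
y = X[:, 2] + 0.5 * X[:, 7] ** 2 + rng.normal(scale=0.1, size=35)

gbr = GradientBoostingRegressor(n_estimators=200, max_depth=2,
                                learning_rate=0.05)
gbr.fit(X, y)
print(gbr.feature_importances_.argsort()[-5:])  # most used features
```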