
A Bayesian Approach to Localized Multi-Kernel Learning Using the Relevance Vector Machine


Presentation Transcript


  1. A Bayesian Approach to Localized Multi-Kernel Learning Using the Relevance Vector Machine R. Close, J. Wilson, P. Gader

  2. Outline • Benefits of kernel methods • Multi-kernels and localized multi-kernels • Relevance Vector Machines (RVM) • Localized multi-kernel RVM (LMK-RVM) • Application of LMK-RVM to landmine detection • Conclusions

  3. Kernel Methods Overview Using a non-linear mapping a decision surface can become linear in a transformed space

  4. Kernel Methods Overview If the non-linear mapping satisfies Mercer's condition (i.e., it is finitely positive semi-definite), then it corresponds to an inner-product kernel k(x, x') = φ(x)ᵀφ(x')
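A minimal numpy sketch of this condition (my illustration, not from the slides): a valid kernel's Gram matrix on any finite set of points is symmetric positive semi-definite, which can be checked through its eigenvalues.

```python
import numpy as np

def rbf_kernel(X, Z, sigma=1.0):
    # K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

X = np.random.randn(20, 2)              # 20 arbitrary points in 2-D
K = rbf_kernel(X, X)                    # Gram matrix of a valid (Mercer) kernel
eigvals = np.linalg.eigvalsh(K)
print(eigvals.min() >= -1e-10)          # True: no genuinely negative eigenvalues
```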

  5. Kernel Methods • Feature transformations increase dimensionality to create a linear separation between classes • Utilizing the kernel trick, kernel methods construct these feature transformations in a possibly infinite-dimensional space that can still be finitely characterized • The accuracy and robustness of the model become directly dependent on the kernel's ability to represent the correlation between data points • A side benefit is an increased understanding of the latent relationships between data points once the kernel parameters are learned

  6. Multi-Kernel Learning • When using kernel methods, a specific form of kernel function is chosen (e.g., a radial basis function). • Multi-kernel learning instead uses a linear combination of kernel functions (see the sketch below). • The weights may be constrained if desired. • As the model is trained, the weights yielding the best input-space to kernel-space mapping are learned. • Any kernel function whose weight approaches 0 is pruned out of the multi-kernel function.
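A hedged sketch of that combination (the RBF base kernels, the weight values, and the spreads below are illustrative choices of mine, not taken from the presentation): the combined kernel is a weighted sum of base kernels, and components whose weight is effectively zero are dropped.

```python
import numpy as np

def rbf(X, Z, sigma):
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def multi_kernel(X, Z, weights, sigmas):
    # k(x, x') = sum_m beta_m * k_m(x, x'); components with beta_m ~ 0 are pruned
    return sum(b * rbf(X, Z, s) for b, s in zip(weights, sigmas) if b > 1e-8)

X = np.random.randn(15, 3)
# Illustrative learned weights: the third base kernel has been pruned away.
K = multi_kernel(X, X, weights=[0.7, 0.3, 0.0], sigmas=[0.5, 2.0, 10.0])
```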

  7. Localized Multi-Kernel Learning • Localized multi-kernel (LMK) learning allows different kernels (or different kernel parameters) to be used in separate areas of the feature space. Thus the model is not limited to the assumption that one kernel function can effectively map the entire feature space. • Many LMK approaches attempt to simultaneously partition the feature space and learn the multi-kernel. [Figure: different multi-kernels applied in different regions of the feature space]
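One common way to make the combination local is to scale each base kernel by data-dependent gating weights, in the spirit of Gönen and Alpaydın's localized MKL. The sketch below assumes a softmax gate with illustrative parameters V; it is not necessarily the form used by the presenters.

```python
import numpy as np

def rbf(X, Z, sigma):
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def gate(X, V):
    # Softmax gating: eta[n, m] is the weight of base kernel m at point x_n.
    logits = X @ V
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def localized_multi_kernel(X, sigmas, V):
    # k(x_i, x_j) = sum_m eta_m(x_i) * k_m(x_i, x_j) * eta_m(x_j)
    eta = gate(X, V)
    K = np.zeros((len(X), len(X)))
    for m, sigma in enumerate(sigmas):
        K += np.outer(eta[:, m], eta[:, m]) * rbf(X, X, sigma)
    return K

X = np.random.randn(30, 2)
V = np.random.randn(2, 2)                       # gating parameters (to be learned)
K = localized_multi_kernel(X, sigmas=[0.5, 3.0], V=V)
```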

  8. LMK-RVM • A localized multi-kernel relevance vector machine (LMK-RVM) uses the ARD (automatic relevance determination) prior of the RVM to select the kernels to use over a given feature-space. • This allows greater flexibility in the localization of the kernels and increased sparsity

  9. RVM Overview The posterior over the weights is proportional to the likelihood times the ARD prior: p(w | t, α, σ²) ∝ p(t | w, σ²) p(w | α)

  10. RVM Overview The same decomposition, noting that the hyper-parameter α is a vector with one element per weight, so the ARD prior is p(w | α) = ∏ᵢ N(wᵢ | 0, αᵢ⁻¹)
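For reference, the standard regression-RVM forms behind these labels (reconstructed from Tipping's formulation, not copied from the slide graphics):

```latex
% Likelihood, ARD prior, and the resulting Gaussian weight posterior
p(\mathbf{t}\mid\mathbf{w},\sigma^{2}) = \prod_{n=1}^{N}\mathcal{N}\!\bigl(t_{n}\mid \mathbf{w}^{\top}\boldsymbol{\phi}(\mathbf{x}_{n}),\,\sigma^{2}\bigr),
\qquad
p(\mathbf{w}\mid\boldsymbol{\alpha}) = \prod_{i}\mathcal{N}\!\bigl(w_{i}\mid 0,\,\alpha_{i}^{-1}\bigr)

p(\mathbf{w}\mid\mathbf{t},\boldsymbol{\alpha},\sigma^{2}) = \mathcal{N}(\mathbf{w}\mid\mathbf{m},\boldsymbol{\Sigma}),
\quad
\boldsymbol{\Sigma} = \bigl(\sigma^{-2}\boldsymbol{\Phi}^{\top}\boldsymbol{\Phi} + \operatorname{diag}(\boldsymbol{\alpha})\bigr)^{-1},
\quad
\mathbf{m} = \sigma^{-2}\,\boldsymbol{\Sigma}\,\boldsymbol{\Phi}^{\top}\mathbf{t}
```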

  11. Automatic Relevance Determination • Values for the α hyper-parameters and the noise variance are determined by integrating over the weights and maximizing the resulting marginal distribution. • Those training samples that do not help predict the output of other training samples have α values that tend toward infinity. Their associated weight priors become δ functions with mean 0; that is, their weight in predicting outcomes at other points should be exactly 0. Thus, these training vectors can be removed. • We can use the remaining, relevant, vectors to estimate the outputs associated with new data. • The design matrix K = Φ is now N×M, where M ≪ N. A sketch of this re-estimation loop follows.
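A minimal numpy sketch of the re-estimation just described, assuming the standard Tipping-style updates for regression (the function and variable names here are mine): α values that blow up mark basis functions, i.e. columns of Φ, to prune.

```python
import numpy as np

def rvm_regression_ard(Phi, t, n_iter=100, prune_at=1e6):
    # Sketch of RVM evidence maximization with Tipping-style re-estimation.
    # Phi: (N, M) design matrix, t: (N,) targets.  Returns the posterior mean
    # weights of the surviving columns and their original indices.
    N, M = Phi.shape
    alpha = np.ones(M)                      # ARD precision for each basis function
    beta = 1.0 / (np.var(t) + 1e-12)        # noise precision 1 / sigma^2
    keep = np.arange(M)
    for _ in range(n_iter):
        Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
        m = beta * Sigma @ Phi.T @ t
        gamma = 1.0 - alpha * np.diag(Sigma)      # how well-determined each weight is
        alpha = gamma / (m ** 2 + 1e-12)          # alpha_i -> infinity for irrelevant w_i
        beta = (N - gamma.sum()) / (np.sum((t - Phi @ m) ** 2) + 1e-12)
        mask = alpha < prune_at                   # prune columns whose prior collapses to 0
        Phi, alpha, keep = Phi[:, mask], alpha[mask], keep[mask]
    Sigma = np.linalg.inv(beta * Phi.T @ Phi + np.diag(alpha))
    m = beta * Sigma @ Phi.T @ t
    return m, keep
```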

  12. RVM for Classification • Start with a two-class problem • t ∈ {0, 1} • σ(·) is the logistic sigmoid • Same as the RVM for regression, except IRLS must be used to calculate the mode of the posterior distribution, since the posterior is no longer Gaussian in closed form (see the sketch below)
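A hedged sketch of that IRLS step, assuming the ARD precisions α are held fixed while Newton iterations locate the posterior mode (the names and fixed iteration count are illustrative):

```python
import numpy as np

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

def irls_posterior_mode(Phi, t, alpha, n_iter=25):
    # Newton / IRLS search for the mode of the RVM classification posterior,
    # with the ARD precisions alpha held fixed.  Phi: (N, M), t in {0, 1}.
    A = np.diag(alpha)
    w = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        y = sigmoid(Phi @ w)                 # predicted class-1 probabilities
        B = np.diag(y * (1.0 - y))           # IRLS weighting matrix
        grad = Phi.T @ (t - y) - A @ w       # gradient of the log posterior
        H = Phi.T @ B @ Phi + A              # negative Hessian (Laplace precision)
        w = w + np.linalg.solve(H, grad)     # Newton update
    return w
```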

  13. LMK-RVM • Using the multi-kernel with the RVM model, we start with y(x) = Σ_n w_n Σ_i w_i k_i(x, x_n), where w_n is the weight on the multi-kernel associated with vector n and w_i is the weight on the i-th component of each multi-kernel. • Unlike some kernel methods (e.g., the SVM), the RVM is not constrained to use a positive-definite kernel matrix; thus, there is no requirement that the weights be factorized as w_n w_i. So, in this setting, y(x) = Σ_n Σ_i w_{n,i} k_i(x, x_n), with one free weight per (vector, kernel) pair. • We show a sample application of LMK-RVM using two radial basis kernels at each training point with different spreads.
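In practice this amounts to stacking one kernel column per (training point, kernel) pair into the RVM design matrix; a sketch under that reading, with two RBF spreads chosen arbitrarily:

```python
import numpy as np

def rbf(X, Z, sigma):
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * sigma ** 2))

def lmk_design_matrix(X, sigmas=(0.5, 2.0)):
    # One column per (training point, kernel) pair: the ARD prior can then keep
    # a wide kernel at one point and a narrow kernel at another.
    return np.hstack([rbf(X, X, s) for s in sigmas])

X = np.random.randn(40, 2)
Phi = lmk_design_matrix(X)       # shape (40, 80): two candidate kernels per point
```

The resulting Phi could then be handed to an RVM trainer such as the ARD sketch above, which prunes (point, kernel) columns individually.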

  14. Toy Dataset Example [Figure: relevance vectors on a toy dataset, contrasting kernels with larger σ and kernels with smaller σ]

  15. GPR Data Experiments GPR experiments using data with 120-dimensional spectral features. [Results table] Improvements in classification happen off-diagonal.

  16. GPR ROC [Figure: ROC curves for the GPR experiments]

  17. WEMI Data Experiments WEMI experiments using data with 3-dimensional GRANMA features. [Results table] Improvements in classification happen off-diagonal.

  18. WEMI ROC [Figure: ROC curves for the WEMI experiments]

  19. Number of Relevant Vectors Number of relevant vectors averaged over all ten folds. [Tables: GPR and WEMI] The off-diagonal shows a potentially sparser model.

  20. Conclusions • The experiment using GPR data features showed that LMK-RVM can provide a definite improvement in SSE, AUC, and the ROC • The experiment using the lower-dimensional WEMI GRANMA features showed that the same LMK-RVM method provided some improvement in SSE and AUC and an inconclusive ROC • Both sets of experiments show the potential for sparser models when using the LMK-RVM • Question: is there an effective way to learn values for the spreads in our simple class of localized multi-kernels?


  23. Backup Slides: Expanded Kernels Discussion

  24. Kernel Methods Example: The Masked Class Problem • In both of these problems, linear classification methods have difficulty discriminating the blue class from the others! • What is the actual problem here? • No one line can separate the blue class from the other data points! Similar to the single-layer perceptron problem (the XOR problem)!

  25. Decision Surface in Feature Space We can classify the green and black classes with no problem, but we run into problems when we try to classify the blue class!

  26. Revisit Masked Class Problem Are linear methods completely useless on this data? - No, we can perform a non-linear transformation on the data via fixed basis functions! - Often, when we perform this transformation, features that were not linearly separable in the original feature space become linearly separable in the transformed feature space.

  27. Basis Functions Models can be extended by using fixed basis functions, which allows for linear combinations of nonlinear functions of the input variables • Gaussian (or RBF) basis function: φ_j(x) = exp(−‖x − μ_j‖² / (2s²)) • Basis vector: φ(x) = (φ_0(x), φ_1(x), …, φ_M(x))ᵀ • Dummy basis function used for bias parameter: φ_0(x) = 1 • Basis function center (μ_j) governs location in input space • Scale parameter (s) determines spatial scale
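A small numpy sketch of these definitions (the choice of centers and scale below is illustrative): build the design matrix of Gaussian basis responses plus the constant dummy basis that carries the bias.

```python
import numpy as np

def gaussian_basis(X, centers, s):
    # phi_j(x) = exp(-||x - mu_j||^2 / (2 s^2)), plus the constant dummy basis
    # phi_0(x) = 1 for the bias parameter.
    sq = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-sq / (2.0 * s ** 2))
    return np.hstack([np.ones((len(X), 1)), Phi])

X = np.random.randn(50, 2)
centers = X[:10]                            # e.g. a subset of the data as centers
Phi = gaussian_basis(X, centers, s=1.0)     # shape (50, 11)
```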

  28. Features in Transformed Space are Linearly Separable Transformed datapoints are plotted in the new feature space

  29. Transformed Decision Surface in Feature Space Again, we can classify the green and black classes with no problem, and now we can classify the blue class with no problem too!

  30. Common Kernels • Squared Exponential: k(x, x') = exp(−‖x − x'‖² / (2ℓ²)) • Gaussian Process Kernel • Automatic Relevance Determination (ARD) kernel • Other kernels: Neural Network, Matérn, γ-exponential, etc.
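A brief numpy sketch of two of these, using the standard length-scale parameterization I assume here (not necessarily the slide's): the squared exponential with a single length-scale, and its ARD variant with one length-scale per dimension, where a very large length-scale effectively switches that dimension off.

```python
import numpy as np

def squared_exponential(X, Z, length_scale=1.0):
    # k(x, x') = exp(-||x - x'||^2 / (2 l^2))
    sq = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2.0 * length_scale ** 2))

def ard_kernel(X, Z, length_scales):
    # One length-scale per input dimension; a very large l_d effectively
    # switches dimension d off (hence "automatic relevance determination").
    d = (X[:, None, :] - Z[None, :, :]) / length_scales
    return np.exp(-0.5 * (d ** 2).sum(-1))

X = np.random.randn(10, 3)
K_se = squared_exponential(X, X, length_scale=2.0)
K_ard = ard_kernel(X, X, length_scales=np.array([0.5, 5.0, 50.0]))
```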
