This continuation on Support Vector Machines (SVMs) delves into the concept of hinge loss and its role as a penalty for misclassifications. We explore how the hinge loss function penalizes points according to their distance from the separating hyperplane, influencing model performance. Additionally, we discuss the dual representation of SVMs and introduce the kernel trick, which enables efficient computation in high-dimensional feature spaces without requiring the full weight vector or feature mapping to be stored in memory. Understanding these principles is crucial for effectively utilizing SVMs in various machine learning tasks.
Hinge Loss
[Figure: positive (+) and negative (-) points around a separating hyperplane H; the penalty reflects how far a misclassified point lies from the separating plane.]
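The hinge loss described above can be sketched as follows. This is a minimal illustration, not the slides' own code: it assumes labels in {-1, +1} and a raw classifier score (e.g. w·x + b), and returns zero for points classified with a margin of at least 1, growing linearly otherwise.

```python
import numpy as np

def hinge_loss(y, score):
    """Hinge loss: max(0, 1 - y * score).

    Zero when a point is classified correctly with margin >= 1;
    grows linearly with how far the point falls on the wrong side.
    """
    return np.maximum(0.0, 1.0 - y * score)

# y = +1 with score 2.0  -> margin satisfied, loss 0
# y = +1 with score 0.5  -> inside the margin, loss 0.5
# y = -1 with score 0.5  -> misclassified, loss 1.5
print(hinge_loss(np.array([1, 1, -1]), np.array([2.0, 0.5, 0.5])))
```

The third case shows the "penalty" behavior from the figure: the further a misclassified point sits past the separating plane, the larger the loss.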
Recall the perceptron learning rule (PLR).
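As a reminder of the perceptron learning rule the slide refers back to, here is a hedged one-step sketch (the function name and learning-rate parameter are illustrative, not from the original): on a misclassified point, the weight vector is nudged toward the example.

```python
import numpy as np

def perceptron_update(w, x, y, lr=1.0):
    """One step of the perceptron learning rule.

    If y * (w . x) <= 0 the point is misclassified (or on the
    boundary), so move w toward y * x; otherwise leave w unchanged.
    """
    if y * np.dot(w, x) <= 0:
        w = w + lr * y * x
    return w

w = perceptron_update(np.zeros(2), np.array([1.0, 1.0]), y=1)
print(w)  # the zero vector is updated toward the positive example
```

The contrast with SVMs is that the perceptron stops penalizing as soon as a point is on the correct side, while the hinge loss still penalizes correctly classified points that fall inside the margin.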
Dual Representation: instead of holding the weight vector in memory, hold the dual coefficients (one per support vector).
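The dual idea above can be sketched in a few lines. This is an illustrative example, not the lecture's code: it assumes an RBF kernel and precomputed dual coefficients `alphas`, and evaluates the decision function f(x) = Σᵢ αᵢ yᵢ K(xᵢ, x) + b using only the support vectors, never forming a weight vector in feature space.

```python
import numpy as np

def rbf_kernel(x, z, gamma=0.5):
    # Kernel trick: an inner product in an implicit high-dimensional
    # feature space, computed directly from the original inputs.
    return np.exp(-gamma * np.sum((x - z) ** 2))

def dual_decision(x, support_vectors, alphas, labels, b=0.0, gamma=0.5):
    # Dual form of the SVM decision function: only the support
    # vectors and their alpha coefficients are kept in memory.
    s = sum(a * y * rbf_kernel(sv, x, gamma)
            for a, y, sv in zip(alphas, labels, support_vectors))
    return s + b

# A query point near the positive support vector is scored positive.
svs = [np.array([0.0]), np.array([2.0])]
print(dual_decision(np.array([0.0]), svs, alphas=[1.0, 1.0], labels=[1, -1]))
```

Swapping `rbf_kernel` for a plain dot product recovers the linear SVM, which is what makes the dual representation the natural place to introduce kernels.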