This presentation by Xianwang Wang explores Local Fisher Discriminant Analysis (LFDA) as a vital method for supervised dimensionality reduction. The approach focuses on embedding high-dimensional data into a lower-dimensional space while preserving its intrinsic information. Key comparisons are drawn between LFDA, Fisher Discriminant Analysis (FDA), and Locality-Preserving Projection (LPP), emphasizing the advantages of LFDA in maintaining class separability compared to traditional FDA. Examples, including the Iris dataset, illustrate the practical application of these techniques in a real-world context.
Local Fisher Discriminant Analysis for Supervised Dimensionality Reduction Masashi Sugiyama Presented by Xianwang Wang
Dimensionality Reduction • Goal • Embed high-dimensional data into a low-dimensional space • Preserve its intrinsic information • Example (figure): high-dimensional data embedded into a 3-dimensional space
Categories • Nonlinear • ISOMAP • Locally Linear Embedding (LLE) • Laplacian Eigenmap (LE) • Linear • Principal Components Analysis (PCA) • Locality-Preserving Projection (LPP) • Fisher Discriminant Analysis (FDA) • Unsupervised • ISOMAP, LLE, PCA, LPP • Supervised • S-ISOMAP, S-LLE, FDA
Formulation • Number of samples: $n$ • $d$-dimensional samples: $x_i \in \mathbb{R}^d$, $i = 1, \ldots, n$ • Class labels: $y_i \in \{1, 2, \ldots, c\}$ • Number of samples in class $\ell$: $n_\ell$ • Data matrix: $X = (x_1 \,|\, x_2 \,|\, \cdots \,|\, x_n) \in \mathbb{R}^{d \times n}$ • Embedded samples: $z_i = T^\top x_i \in \mathbb{R}^r$ with $r < d$
Goal for Linear Dimensionality Reduction • Find a transformation matrix $T \in \mathbb{R}^{d \times r}$ such that $z_i = T^\top x_i$ • Use the Iris data for demos (http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data) • Attribute information: • sepal length in cm • sepal width in cm • petal length in cm • petal width in cm • class: • Iris Setosa; Iris Versicolour; Iris Virginica
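Not part of the original slides: a minimal Python sketch of loading the Iris data from the URL above and forming the data matrix and label vector, assuming NumPy and pandas are available (the column names are my own labels for the UCI attributes).

```python
import numpy as np
import pandas as pd

# Iris data file referenced on the slide: 150 samples, d = 4 features, c = 3 classes.
URL = "http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data"
cols = ["sepal_length", "sepal_width", "petal_length", "petal_width", "species"]

df = pd.read_csv(URL, header=None, names=cols)

X = df[cols[:4]].to_numpy()                      # samples as rows: shape (n, d) = (150, 4)
classes, y = np.unique(df["species"], return_inverse=True)  # integer labels 0, 1, 2

# A transformation matrix T of shape (d, r) embeds each sample as z_i = T.T @ x_i,
# i.e. Z = X @ T with the row-wise convention used here.
```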
FDA (1) • Mean of samples in class $\ell$: $\mu_\ell = \frac{1}{n_\ell} \sum_{i : y_i = \ell} x_i$ • Mean of all samples: $\mu = \frac{1}{n} \sum_{i=1}^{n} x_i$ • Within-class scatter matrix: $S^{(w)} = \sum_{\ell=1}^{c} \sum_{i : y_i = \ell} (x_i - \mu_\ell)(x_i - \mu_\ell)^\top$ • Between-class scatter matrix: $S^{(b)} = \sum_{\ell=1}^{c} n_\ell (\mu_\ell - \mu)(\mu_\ell - \mu)^\top$
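A sketch (my own code, not from the slides) of the within-class and between-class scatter matrices defined above, assuming X with samples as rows and integer labels y as in the previous snippet; the helper name scatter_matrices is mine.

```python
import numpy as np

def scatter_matrices(X, y):
    """Within-class scatter S^(w) and between-class scatter S^(b)."""
    n, d = X.shape
    mu = X.mean(axis=0)                          # mean of all samples
    S_w = np.zeros((d, d))
    S_b = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]                           # samples in class c
        mu_c = Xc.mean(axis=0)                   # class mean
        S_w += (Xc - mu_c).T @ (Xc - mu_c)       # scatter around the class mean
        S_b += len(Xc) * np.outer(mu_c - mu, mu_c - mu)
    return S_w, S_b
```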
FDA (2) • Maximize the objective $J(T) = \operatorname{tr}\!\left( (T^\top S^{(w)} T)^{-1} T^\top S^{(b)} T \right)$ • Equivalently, solve the constrained optimization problem $\max_T \operatorname{tr}(T^\top S^{(b)} T)$ subject to $T^\top S^{(w)} T = I$ • Form the Lagrangian and apply the KKT conditions, which yields the generalized eigenvalue problem $S^{(b)} \varphi = \lambda S^{(w)} \varphi$; $T$ consists of the eigenvectors with the largest eigenvalues • Demo
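The generalized eigenvalue problem above can be solved directly with SciPy; a sketch assuming the scatter_matrices helper from the previous snippet (for $c$ classes, at most $c - 1$ eigenvalues are non-zero, so $r \le c - 1$ is the useful range).

```python
from scipy.linalg import eigh

def fda(X, y, r=2):
    """FDA transformation matrix T (d x r) from the leading generalized eigenvectors."""
    S_w, S_b = scatter_matrices(X, y)
    # eigh(a, b) solves a v = lambda b v; eigenvalues are returned in ascending order.
    eigvals, eigvecs = eigh(S_b, S_w)
    return eigvecs[:, ::-1][:, :r]               # r eigenvectors with largest eigenvalues

# Z = X @ fda(X, y, r=2)                         # 2-D FDA embedding of the Iris data
```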
LPP • Minimize $\frac{1}{2} \sum_{i,j=1}^{n} A_{ij} \left\| T^\top x_i - T^\top x_j \right\|^2$, where $A_{ij}$ is the affinity between $x_i$ and $x_j$ • Equivalently, $\min_T \operatorname{tr}(T^\top X L X^\top T)$ subject to $T^\top X D X^\top T = I$, with degree matrix $D = \operatorname{diag}\!\big(\textstyle\sum_j A_{1j}, \ldots, \sum_j A_{nj}\big)$ and graph Laplacian $L = D - A$ • We obtain the generalized eigenvalue problem $X L X^\top \varphi = \lambda X D X^\top \varphi$; $T$ consists of the eigenvectors with the smallest eigenvalues • Demo
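A sketch of LPP under a plain heat-kernel affinity (one possible choice; the slides do not specify how $A$ is built), solving $X L X^\top \varphi = \lambda X D X^\top \varphi$ for the smallest eigenvalues. The function name lpp and the parameter sigma are mine; X here has samples as rows, so the slide's $X(\cdot)X^\top$ products become `X.T @ (...) @ X`.

```python
import numpy as np
from scipy.linalg import eigh

def lpp(X, r=2, sigma=1.0):
    """LPP transformation matrix (d x r); the heat-kernel affinity is an assumption."""
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.exp(-sq_dists / (2 * sigma ** 2))     # affinity matrix A_ij
    D = np.diag(A.sum(axis=1))                   # degree matrix
    L = D - A                                    # graph Laplacian
    # Generalized eigenvectors with the *smallest* eigenvalues (ascending order).
    eigvals, eigvecs = eigh(X.T @ L @ X, X.T @ D @ X)
    return eigvecs[:, :r]

# Z = X @ lpp(X, r=2)                            # unsupervised 2-D LPP embedding
```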
Local Fisher Discriminant Analysis (LFDA) • FDA can perform poorly if the samples in a class form several separate clusters • LPP can make samples of different classes overlap if they are close in the original high-dimensional space • LFDA combines the ideas of FDA and LPP
LFDA (1) • Reformulating FDA in a pairwise form: $S^{(w)} = \frac{1}{2} \sum_{i,j=1}^{n} W^{(w)}_{ij} (x_i - x_j)(x_i - x_j)^\top$ and $S^{(b)} = \frac{1}{2} \sum_{i,j=1}^{n} W^{(b)}_{ij} (x_i - x_j)(x_i - x_j)^\top$ • where $W^{(w)}_{ij} = 1/n_\ell$ if $y_i = y_j = \ell$ and $0$ otherwise, and $W^{(b)}_{ij} = 1/n - 1/n_\ell$ if $y_i = y_j = \ell$ and $1/n$ otherwise
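This pairwise reformulation can be checked numerically. The sketch below (my own check, not from the slides) builds $W^{(w)}$ and $W^{(b)}$, uses the identity $\frac{1}{2}\sum_{i,j} W_{ij}(x_i - x_j)(x_i - x_j)^\top = X^\top (D - W) X$ for symmetric $W$ (with samples as rows of X), and verifies the result against the direct definitions, assuming X, y, and scatter_matrices from the earlier snippets.

```python
import numpy as np

def pairwise_scatter(X, W):
    """1/2 * sum_ij W_ij (x_i - x_j)(x_i - x_j)^T for a symmetric weight matrix W."""
    D = np.diag(W.sum(axis=1))
    return X.T @ (D - W) @ X                     # X has samples as rows

n = len(y)
W_w = np.zeros((n, n))                           # pairwise within-class weights
W_b = np.full((n, n), 1.0 / n)                   # pairwise between-class weights
for c in np.unique(y):
    idx = np.where(y == c)[0]
    n_c = len(idx)
    W_w[np.ix_(idx, idx)] = 1.0 / n_c
    W_b[np.ix_(idx, idx)] = 1.0 / n - 1.0 / n_c

S_w, S_b = scatter_matrices(X, y)
assert np.allclose(pairwise_scatter(X, W_w), S_w)
assert np.allclose(pairwise_scatter(X, W_b), S_b)
```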
LFDA (2) • Definition of LFDA: local within-class and between-class scatter matrices $\tilde{S}^{(w)} = \frac{1}{2} \sum_{i,j=1}^{n} \tilde{W}^{(w)}_{ij} (x_i - x_j)(x_i - x_j)^\top$ and $\tilde{S}^{(b)} = \frac{1}{2} \sum_{i,j=1}^{n} \tilde{W}^{(b)}_{ij} (x_i - x_j)(x_i - x_j)^\top$ • where $\tilde{W}^{(w)}_{ij} = A_{ij}/n_\ell$ if $y_i = y_j = \ell$ and $0$ otherwise, and $\tilde{W}^{(b)}_{ij} = A_{ij}(1/n - 1/n_\ell)$ if $y_i = y_j = \ell$ and $1/n$ otherwise • $A_{ij}$ is the affinity between $x_i$ and $x_j$
LFDA (3) • Maximize the objective $\operatorname{tr}\!\left( (T^\top \tilde{S}^{(w)} T)^{-1} T^\top \tilde{S}^{(b)} T \right)$ • Equivalently, $\max_T \operatorname{tr}(T^\top \tilde{S}^{(b)} T)$ subject to $T^\top \tilde{S}^{(w)} T = I$ • Similarly, we obtain the generalized eigenvalue problem $\tilde{S}^{(b)} \varphi = \lambda \tilde{S}^{(w)} \varphi$ and take the eigenvectors with the largest eigenvalues • Demo
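Putting the pieces together, a sketch of LFDA that weights the pairwise terms with an affinity $A_{ij}$; the heat kernel here is an assumption (the paper uses a local-scaling affinity), and the names lfda, sigma, and the small ridge added for numerical stability are mine.

```python
import numpy as np
from scipy.linalg import eigh

def lfda(X, y, r=2, sigma=1.0):
    """LFDA transformation matrix (d x r); X has samples as rows, y holds integer labels."""
    n, d = X.shape
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.exp(-sq_dists / (2 * sigma ** 2))     # affinity between sample pairs

    W_lw = np.zeros((n, n))                      # local within-class weights
    W_lb = np.full((n, n), 1.0 / n)              # local between-class weights
    for c in np.unique(y):
        block = np.ix_(y == c, y == c)
        n_c = int(np.sum(y == c))
        W_lw[block] = A[block] / n_c
        W_lb[block] = A[block] * (1.0 / n - 1.0 / n_c)

    def local_scatter(W):
        D = np.diag(W.sum(axis=1))
        return X.T @ (D - W) @ X                 # pairwise form of the scatter matrix

    S_lw = local_scatter(W_lw) + 1e-9 * np.eye(d)   # tiny ridge for numerical stability
    S_lb = local_scatter(W_lb)
    eigvals, eigvecs = eigh(S_lb, S_lw)          # generalized eigenproblem, ascending order
    return eigvecs[:, ::-1][:, :r]               # eigenvectors with largest eigenvalues

# Z = X @ lfda(X, y, r=2)                        # 2-D LFDA embedding of the Iris data
```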
Conclusion • LFDA gives better-separated embeddings than FDA and LPP • FDA evaluates class scatter globally, while LFDA evaluates it locally • Efficient computation of the LFDA transformation matrix and kernel LFDA are discussed further in the paper