
Hubness in the Context of Feature Selection and Generation


Presentation Transcript


  1. Hubness in the Context of Feature Selection and Generation
  Miloš Radovanović¹, Alexandros Nanopoulos², Mirjana Ivanović¹
  ¹ Department of Mathematics and Informatics, Faculty of Science, University of Novi Sad, Serbia
  ² Institute of Computer Science, University of Hildesheim, Germany
  Presented at FGSIR'10

  2. k-occurrences (Nk)
  • Nk(x), the number of k-occurrences of point x, is the number of times x occurs among the k nearest neighbors of all other points in a data set
  • Nk(x) is the in-degree of node x in the k-NN digraph
  • It was observed that the distribution of Nk can become skewed, resulting in the emergence of hubs – points with high Nk:
    • Music retrieval [Aucouturier 2007]
    • Speech recognition [Doddington 1998]
    • Fingerprint identification [Hicklin 2005]
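  To make the definition concrete, here is a minimal sketch of how Nk could be computed in Python (assumptions: X is a NumPy array of data points, k_occurrences is a hypothetical helper name, and scikit-learn's NearestNeighbors handles the k-NN queries):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def k_occurrences(X, k):
    """N_k(x) for every point x: how often x appears among the
    k nearest neighbors of the other points in the data set."""
    # Query k+1 neighbors, since each point is returned as its own
    # nearest neighbor and must be discarded.
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)      # idx has shape (n, k+1)
    neighbors = idx[:, 1:]         # drop the self-neighbor column
    # N_k(x) is the in-degree of x in the k-NN digraph.
    return np.bincount(neighbors.ravel(), minlength=X.shape[0])
```

  A histogram of the returned counts makes the hubs – points with unusually high Nk – directly visible.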

  3. Skewness of Nk
  What causes the skewness of Nk?
  • An artefact of the data?
    • Are some songs more similar to others?
    • Do some people have fingerprints or voices that are harder to distinguish from other people’s?
  • Specifics of the modeling algorithms?
    • An inadequate choice of features?
  • Something more general?

  4. [Figure-only slide; no transcript text]

  5. Contributions – Outline
  • Demonstrate the phenomenon
    • Skewness in the distribution of k-occurrences
  • Explain its main reasons
    • Not an artifact of the data
    • Not specific to the models (inadequate features, etc.)
    • A new aspect of the “curse of dimensionality”
  • Impact on feature selection and generation

  6. Outline
  • Demonstrate the phenomenon
  • Explain its main reasons
  • Impact on FSG
  • Conclusions

  7. Collection of 23 real text data sets
  • SNk is the standardized 3rd moment of Nk
  • If SNk = 0 there is no skew; positive (negative) values signify right (left) skew
  • High skewness indicates hubness
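  Written out, SNk = E[(Nk − μNk)³] / σNk³, where μNk and σNk are the mean and standard deviation of Nk. As a sketch, scipy.stats.skew computes exactly this standardized third moment, so the measurement reduces to one line on top of the k_occurrences helper above:

```python
from scipy.stats import skew

def hubness_skewness(X, k=10):
    """S_Nk of a data set: 0 means no skew; large positive
    values indicate the presence of hubs."""
    return skew(k_occurrences(X, k))
```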

  8. Collection of 14 real UCI data sets + microarray data

  9. Outline
  • Demonstrate the phenomenon
  • Explain its main reasons
  • Impact on FSG
  • Conclusions

  10. Where are the hubs located?
  • Spearman correlation between N10 and distance from the data set mean
  • Hubs are closer to the data center
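  A sketch of how this correlation can be measured (centrality_correlation is a hypothetical helper name, reusing k_occurrences from above; a strongly negative value means high-Nk points sit near the data center):

```python
import numpy as np
from scipy.stats import spearmanr

def centrality_correlation(X, k=10):
    """Spearman correlation between N_k and each point's
    Euclidean distance from the data set mean."""
    nk = k_occurrences(X, k)
    dist_to_mean = np.linalg.norm(X - X.mean(axis=0), axis=1)
    rho, _ = spearmanr(nk, dist_to_mean)
    return rho
```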

  11. Centrality and its amplification
  • Hubs arise due to centrality:
    • vectors closer to the center tend to be closer to all other vectors
    • and are thus more frequently k-NNs
  • Centrality is amplified by dimensionality: if point A is closer to the center than point B, then ∑_x sim(A, x) > ∑_x sim(B, x), and the gap ∑_x sim(A, x) − ∑_x sim(B, x) widens as dimensionality grows

  12. Concentration of similarity
  • Concentration: as dimensionality grows to infinity, the ratio between the standard deviation of pairwise similarities (distances) and their expectation shrinks to zero
    • Shown for Minkowski distances [François 2007, Beyer 1999, Aggarwal 2001]
    • Calls the meaningfulness of nearest neighbors into question
  • Analytical proof for cosine similarity [Radovanović 2010]
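  The concentration effect is easy to reproduce empirically. A minimal sketch with i.i.d. uniform data and Euclidean distance (the std/mean ratio of pairwise distances shrinks as the dimensionality d grows):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
for d in (3, 30, 300, 3000):
    X = rng.random((500, d))   # 500 i.i.d. uniform points in [0, 1]^d
    dists = pdist(X)           # all pairwise Euclidean distances
    # The relative spread of distances shrinks as d grows.
    print(f"d = {d:4d}   std/mean = {dists.std() / dists.mean():.4f}")
```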

  13. The hyper-sphere view
  • Most vectors are about equidistant from the center and from each other, and lie on the surface of a hyper-sphere
  • Few vectors lie in the inner part of the hyper-sphere, closer to its center and thus closer to all others
  • This is expected for large but finite dimensionality, since the ratio √V / E (standard deviation over expectation of distances) is still non-negligible

  14. What happens with real data?
  • Spearman correlation between N10 and distance from the data/cluster center
  • Real text data are usually clustered (a mixture of distributions)
  • Cluster with k-means (#clusters = 3 × Cls) and compare with the distance from the nearest cluster center – a generalization of the hyper-sphere view to clustered data
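  A sketch of the clustered variant (cluster_centrality_correlation is a hypothetical helper; it reuses k_occurrences and measures each point's distance to its own k-means centroid instead of the global mean):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.cluster import KMeans

def cluster_centrality_correlation(X, k=10, n_clusters=3):
    """Spearman correlation between N_k and the distance to
    the point's own k-means cluster center."""
    nk = k_occurrences(X, k)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    # Distance of each point to the centroid of its assigned cluster.
    dist = np.linalg.norm(X - km.cluster_centers_[km.labels_], axis=1)
    rho, _ = spearmanr(nk, dist)
    return rho
```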

  15. UCI data

  16. Can dimensionality reduction help?
  • Hubness persists until the intrinsic dimensionality is reached

  17. UCI data

  18. Outline
  • Demonstrate the phenomenon
  • Explain its main reasons
  • Impact on FSG
  • Conclusions

  19. “Bad” hubs as obstinate results
  • Based on information about classes, k-occurrences can be distinguished into:
    • “Bad” k-occurrences, BNk(x) – occurrences among the k-NN lists of points with a different class label
    • “Good” k-occurrences, GNk(x) – occurrences among the k-NN lists of points with the same class label
  • Nk(x) = BNk(x) + GNk(x)
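  A sketch of the decomposition (good_bad_k_occurrences is a hypothetical helper name; y holds the class labels):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def good_bad_k_occurrences(X, y, k):
    """Split N_k into GN_k (occurrences among the k-NN lists of
    same-labeled points) and BN_k (label mismatches), so that
    N_k(x) = GN_k(x) + BN_k(x) for every x."""
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)
    gnk = np.zeros(len(X), dtype=int)
    bnk = np.zeros(len(X), dtype=int)
    for i, row in enumerate(idx[:, 1:]):   # x_i's k nearest neighbors
        for j in row:                      # x_j occurs as a k-NN of x_i
            if y[i] == y[j]:
                gnk[j] += 1                # "good" occurrence of x_j
            else:
                bnk[j] += 1                # "bad" occurrence of x_j
    return gnk, bnk
```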

  20. How do “bad” hubs originate?
  • The mixture (cluster structure) is also important: high dimensionality and skewness of Nk do not automatically induce “badness”
  • “Bad” hubs originate from a combination of high dimensionality and violation of the CA
  • Cluster Assumption (CA): most pairs of vectors in a cluster should be of the same class [Chapelle 2006]

  21. Skewness of Nk vs. #features
  • As features are removed, skewness stays relatively constant
  • It abruptly drops when the intrinsic dimensionality is reached
  • Further feature selection past this point may incur loss of information
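  A sketch of such a sweep (skewness_vs_num_features is a hypothetical helper; the variance-based feature ranking is only a stand-in for whatever feature-selection criterion is actually used):

```python
import numpy as np
from scipy.stats import skew

def skewness_vs_num_features(X, k=10, steps=10):
    """Track S_Nk as the number of retained features shrinks;
    features are dropped lowest-variance first."""
    order = np.argsort(X.var(axis=0))[::-1]   # highest variance first
    results = []
    for m in np.linspace(1, X.shape[1], steps, dtype=int):
        nk = k_occurrences(X[:, order[:m]], k)
        results.append((m, skew(nk)))
    return results
```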

  22. Badness vs. #features
  • Similar observations: when the intrinsic dimensionality is reached, the BNk ratio increases
  • The representation then ceases to reflect the information provided by the labels very well

  23. Feature generation
  When adding features to bring new information to the data:
  • The representation will ultimately increase SNk and, thus, produce hubs
  • The reduction of the BNk ratio “flattens out” fairly quickly, limiting the usefulness of adding new features in the sense of being able to express the “ground truth”
  • If classifier error rate is used instead of the BNk ratio, the results are similar

  24. Conclusion
  • Research in feature selection/generation has paid little attention to the fact that, in intrinsically high-dimensional data, hubs will:
    • result in an uneven distribution of cluster-assumption violation (hubs will emerge that attract more label mismatches with neighboring points)
    • result in an uneven distribution of responsibility for classification or retrieval error among data points
  • Investigating further the interaction between hubness and different notions of CA violation promises important new insights into feature selection/generation

  25. Thank You!
  Alexandros Nanopoulos – nanopoulos@ismll.de
