
Extending metric multidimensional scaling with Bregman divergences



Presentation Transcript


  1. Extending metric multidimensional scaling with Bregman divergences Jigang Sun and Colin Fyfe

  2. Visualising 18 dimensional data

  3. Outline • Bregman divergence. • Multidimensional scaling (MDS). • Extending MDS with Bregman divergences. • Relating the Sammon mapping to mappings with Bregman divergences: comparison of effects and explanation. • Conclusion

  4. Strictly Convex function Pictorially, the graph of a strictly convex function F(x) lies below the segment connecting any two points p and q on the graph.

  5. Bregman Divergences The Bregman divergence between x and y based on a strictly convex function φ is dφ(x, y) = φ(x) − φ(y) − ∇φ(y)·(x − y), i.e. the difference between φ(x) and its first-order Taylor series expansion around y.
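As a concrete sketch of the definition (the function names are illustrative, not from the slides), the divergence can be computed directly for scalar arguments, given a convex φ and its derivative:

```python
def bregman(phi, dphi, x, y):
    """Bregman divergence d_phi(x, y) = phi(x) - phi(y) - dphi(y) * (x - y)
    for scalar x, y, given a convex function phi and its derivative dphi."""
    return phi(x) - phi(y) - dphi(y) * (x - y)

# With phi(z) = z^2 the divergence is the squared difference:
d = bregman(lambda z: z * z, lambda z: 2 * z, 3.0, 1.0)
# d = 9 - 1 - 2*(3 - 1) = 4 = (3 - 1)^2
```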

  6. Bregman Divergences

  7. Euclidean distance is a Bregman divergence Choosing φ(z) = ‖z‖² gives dφ(x, y) = ‖x − y‖², the squared Euclidean distance.
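A quick numerical check of this special case (a sketch, with ∇φ(y) = 2y):

```python
# With phi(x) = ||x||^2 (so grad phi(y) = 2y) the Bregman divergence
# reduces to the squared Euclidean distance ||x - y||^2.

def sq_euclidean_as_bregman(x, y):
    phi = lambda v: sum(vi * vi for vi in v)
    grad_phi_y = [2.0 * yi for yi in y]
    return phi(x) - phi(y) - sum(g * (xi - yi)
                                 for g, xi, yi in zip(grad_phi_y, x, y))

d = sq_euclidean_as_bregman([1.0, 2.0], [3.0, 0.0])
# equals (1-3)^2 + (2-0)^2 = 8
```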

  8. Kullback-Leibler Divergence Choosing φ(z) = Σi zi log zi (the negative entropy) gives, for probability vectors x and y, dφ(x, y) = Σi xi log(xi/yi), the Kullback-Leibler divergence.

  9. Generalised Information Divergence • φ(z) = z log(z) gives dφ(x, y) = x log(x/y) − x + y.
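A sketch of this divergence in code; the closed form follows from φ'(z) = log(z) + 1:

```python
import math

# With phi(z) = z*log(z) and phi'(z) = log(z) + 1, the Bregman divergence
# works out to the generalised information divergence:
#   d(x, y) = x*log(x/y) - x + y

def gen_info_divergence(x, y):
    return x * math.log(x / y) - x + y

# Zero when the arguments coincide, positive otherwise:
print(gen_info_divergence(2.0, 2.0))  # 0.0
```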

  10. Other Divergences • Itakura-Saito Divergence • Mahalanobis distance • Logistic loss • Any convex function

  11. Some Properties • dφ(x, y) ≥ 0, with equality iff x = y. • Not a metric, since in general dφ(x, y) ≠ dφ(y, x). • (Though d(x, y) = dφ(x, y) + dφ(y, x) is symmetric.) • Convex in the first argument. • Linear: d{φ+aγ}(x, y) = dφ(x, y) + a·dγ(x, y).
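The nonnegativity and asymmetry can be checked numerically; here is a sketch using the generalised information divergence from slide 9:

```python
import math

# Checking two of the listed properties with the generalised information
# divergence d(x, y) = x*log(x/y) - x + y (generator phi(z) = z*log(z)).

def d(x, y):
    return x * math.log(x / y) - x + y

forward, backward = d(1.0, 2.0), d(2.0, 1.0)
# Both are positive (nonnegativity) but unequal (not a metric):
# forward ~ 0.307, backward ~ 0.386.
symmetrised = forward + backward  # symmetric in x and y, as the slide notes
```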

  12. Multidimensional Scaling • Creates one latent point for each data point. • The latent space is often 2 dimensional. • Positions the latent points so that they best represent the data distances. • Two latent points are close if the two corresponding data points are close. • Two latent points are distant if the two corresponding data points are distant.

  13. Classical/Basic Metric MDS • We minimise the stress function E = Σi<j (dij − δij)², where δij is the distance between points i and j in data space and dij the distance between the corresponding latent points.
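A minimal sketch of this stress, assuming the pairwise distances have already been flattened into lists (δij from data space, dij from latent space):

```python
# Basic metric MDS stress: sum of squared differences between data-space
# distances (delta) and latent-space distances (d) over all pairs.

def basic_mds_stress(data_dists, latent_dists):
    return sum((d - delta) ** 2
               for delta, d in zip(data_dists, latent_dists))

stress = basic_mds_stress([1.0, 2.0, 3.0], [1.0, 2.5, 2.0])
# 0^2 + 0.5^2 + 1^2 = 1.25
```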

  14. Sammon Mapping (1969) E = (1 / Σi<j δij) Σi<j (δij − dij)² / δij. Focuses on small distances: for the same error, the smaller distance is given the bigger stress.
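A sketch of Sammon's stress; the 1/δij weighting is what makes equal absolute errors cost more on short distances:

```python
# Sammon stress: each squared error is weighted by 1/delta_ij and the whole
# sum is normalised by the total of the data-space distances.

def sammon_stress(data_dists, latent_dists):
    c = sum(data_dists)  # Sammon's normalising constant
    return sum((delta - d) ** 2 / delta
               for delta, d in zip(data_dists, latent_dists)) / c

# Same absolute error 0.5 on distances 1.0 and 5.0: the short one dominates.
short_term = (0.5 ** 2) / 1.0  # 0.25
long_term = (0.5 ** 2) / 5.0   # 0.05
```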

  15. Possible Extensions Use Bregman divergences in both data space and latent space, or even a Bregman divergence between the data-space and latent-space distances.

  16. Metric MDS with a Bregman divergence between distances Euclidean distance between the latent points, any divergence on the data, and an Itakura-Saito divergence between the two sets of distances, minimised in a Sammon-like fashion.
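A sketch of this idea: sum an Itakura-Saito divergence (generator φ(z) = −log z) over pairs of data-space and latent-space distances. The argument order below is my assumption, not taken from the slides.

```python
import math

# Itakura-Saito divergence, the Bregman divergence of phi(z) = -log(z):
#   d_IS(x, y) = x/y - log(x/y) - 1

def itakura_saito(x, y):
    return x / y - math.log(x / y) - 1.0

# Candidate BMMDS-style stress: divergence between each data-space distance
# (delta) and its latent-space counterpart (d).
def bmmds_stress(data_dists, latent_dists):
    return sum(itakura_saito(delta, d)
               for delta, d in zip(data_dists, latent_dists))

# Perfectly preserved distances give zero stress:
print(bmmds_stress([1.0, 2.0], [1.0, 2.0]))  # 0.0
```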

  17. Moving the Latent Points F1 for the Itakura-Saito divergence, F2 for the Euclidean distance, F3 for any divergence.

  18. The algae data set

  19. The algae data set

  20. Two representations The standard Bregman representation: Concentrating on the residual errors:

  21. Basic MDS is a special BMMDS • The base convex function is chosen as F(x) = x². • Its second derivative is the constant 2 and all higher-order derivatives vanish. • So the divergence reduces to dF(δ, d) = (δ − d)². • The basic MDS stress is derived as the sum of these terms over all pairs.

  22. Sammon Mapping Select Then

  23. Example 2: Extended Sammon • Base convex function • This is equivalent to • The Sammon mapping is rewritten as

  24. Sammon and Extended Sammon • The two stress functions share a common term. • The Sammon mapping is thus an approximation to the Extended Sammon mapping via this common term. • The Extended Sammon mapping makes further adjustments on the basis of the higher-order terms.

  25. An Experiment on Swiss roll data set

  26. Distance preservation

  27. Relative standard deviation

  28. Relative standard deviation • On short distances, the Sammon mapping has smaller variance than the Basic MDS, and the Extended Sammon has smaller variance than Sammon; i.e., control of small distances is progressively enhanced. • Large distances are given progressively more freedom in the same order.

  29. LCMC: local continuity meta-criterion (L. Chen, 2006) • A common measure for assessing the projection quality of different MDS methods. • Measures neighbourhood preservation. • Takes values between 0 and 1; the higher, the better.
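A simplified sketch of the LCMC idea, assuming full pairwise distance matrices: average the overlap of k-nearest-neighbour sets in data space and latent space, minus Chen's adjustment term k/(n−1). The details here are my reading of the criterion, not taken from the slides.

```python
def knn(dists, i, k):
    """Indices of the k nearest neighbours of point i, given a full
    distance matrix (list of rows)."""
    order = sorted((d, j) for j, d in enumerate(dists[i]) if j != i)
    return {j for _, j in order[:k]}

def lcmc(data_dists, latent_dists, k):
    n = len(data_dists)
    overlap = sum(len(knn(data_dists, i, k) & knn(latent_dists, i, k))
                  for i in range(n))
    return overlap / (n * k) - k / (n - 1)

# A perfect mapping (identical neighbourhoods) scores 1 - k/(n-1):
m = [[0, 1, 2, 4], [1, 0, 1, 3], [2, 1, 0, 2], [4, 3, 2, 0]]
```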

  30. Quality assessed by LCMC

  31. Why Extended Sammon outperforms Sammon • Stress formation

  32. Features of the base convex function • Recall the base convex function for the Extended Sammon mapping and its higher-order derivatives. • The even-order derivatives are positive and the odd-order ones are negative.

  33. Stress comparison between Sammon and Extended Sammon

  34. Stress configured by Sammon, calculated and mapped by Extended Sammon

  35. Stress configured by Sammon, calculated and mapped by Extended Sammon • The Extended Sammon mapping calculates stress on the basis of the configuration found by the Sammon mapping. • For short distances, the mean stresses calculated by the Extended Sammon mapping are much higher than those of the Sammon mapping. • For long distances, the calculated mean stresses are clearly lower than those of the Sammon mapping. • The Extended Sammon mapping makes shorter mapped distances even shorter and longer ones even longer.

  36. Stress formation by items

  37. Generalisation: from MDS to Bregman divergences • A group of MDS methods is generalised as • C is a normalisation scalar used for quantitative comparison purposes; it does not affect the mapping results. • Weight function for missing samples. • The Basic MDS and the Sammon mapping belong to this group.

  38. Generalisation: from MDS to Bregman divergences • If C=1, then set • Then the generalised MDS is the first term of BMMDS and BMMDS is an extension of MDS. • Recall that BMMDS is equivalent to

  39. Criterion for base convex function selection • To focus on local distances and concentrate less on long ones, the base convex function must satisfy certain conditions. • Not every convex function qualifies; F(x) = exp(x), for example, does not. • The second-order derivative is the primary consideration: we want it to be large for small distances and small for long distances, since it represents the focusing power on local distances.

  40. Two groups of Convex functions • The even-order derivatives are positive, the odd-order ones negative. • No. 1 is that of the Extended Sammon mapping.

  41. Focusing power

  42. Different strategies for focusing power • The vertical axis is the logarithm of the second-order derivative. • The groups use different strategies for increasing focusing power. • In the first group, the second-order derivatives grow ever larger for small distances and ever smaller for long distances. • In the second group, the second-order derivatives have a bounded maximum for very small distances, but fall off drastically for long distances as λ increases.

  43. Two groups of Bregman divergences • Elastic scaling (Victor E. McGee, 1966)

  44. Experiment on Swiss roll: The First Group

  45. Experiment on Swiss roll: First Group • For the Extended Sammon, Itakura-Saito, … mappings, local distances are mapped better and better, and long distances are stretched such that the unfolding trend is obvious.

  46. Distance mapping: First Group

  47. Standard deviation: First Group

  48. LCMC measure: First Group

  49. Experiment on Swiss roll: Second Group
