
Semi-Supervised Learning Using Randomized Mincuts


Presentation Transcript


  1. Semi-Supervised Learning Using Randomized Mincuts Avrim Blum, John Lafferty, Raja Reddy, Mugizi Rwebangira

  2. Outline • Often have little labeled data but lots of unlabeled data. • We want to use the relationships between the unlabeled examples to guide our predictions. • Idea: “Similar examples should generally be labeled similarly.”

  3. Learning using Graph Mincuts: Blum and Chawla (ICML 2001)

  4. Construct a Graph

  5. Add source (+) and sink (-)

  6. Obtain the s-t mincut

  7. Classification: examples on the source (+) side of the cut are labeled positive, those on the sink (-) side negative.
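
The procedure on slides 4-7 can be sketched in a few lines of code. The following is a minimal illustration, not the authors' implementation: it assumes a precomputed weighted similarity graph and uses networkx's minimum_cut; the node ids, weights, and the '+' / '-' terminal names are illustrative.

```python
import networkx as nx

def mincut_classify(edges, pos_labeled, neg_labeled):
    """edges: iterable of (u, v, weight) over the examples.
    pos_labeled / neg_labeled: ids of the labeled +/- examples.
    Returns a dict mapping every example to +1 or -1."""
    G = nx.Graph()
    for u, v, w in edges:
        G.add_edge(u, v, capacity=w)
    # Tie labeled examples to artificial source (+) and sink (-) nodes with
    # infinite capacity, so no finite cut can separate them from their label.
    for u in pos_labeled:
        G.add_edge('+', u, capacity=float('inf'))
    for v in neg_labeled:
        G.add_edge('-', v, capacity=float('inf'))
    # Minimum s-t cut: examples on the source side are labeled positive.
    _, (source_side, _) = nx.minimum_cut(G, '+', '-')
    return {n: (+1 if n in source_side else -1)
            for n in G.nodes if n not in ('+', '-')}
```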

  8. Confidence in the predictions? • Plain mincut gives no indication of which examples it is most confident about. Solution: • Add random noise to the edges. • Run mincut several times. • For each unlabeled example, take a majority vote.
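
A hedged sketch of this voting scheme, reusing mincut_classify from the previous sketch. The noise model, its scale, and the number of rounds are illustrative assumptions, not the paper's exact choices.

```python
import random
from collections import Counter

def randomized_mincut(edges, pos_labeled, neg_labeled, rounds=20, noise=0.1):
    """Perturb edge weights, recompute the mincut, take a majority vote."""
    votes = Counter()
    for _ in range(rounds):
        # Illustrative noise model: scale each weight by a random factor.
        noisy = [(u, v, w * (1.0 + random.uniform(0.0, noise)))
                 for u, v, w in edges]
        for node, y in mincut_classify(noisy, pos_labeled, neg_labeled).items():
            votes[node] += y
    # Sign of the vote is the prediction; |vote| / rounds is a confidence margin.
    return {n: (1 if s >= 0 else -1, abs(s) / rounds) for n, s in votes.items()}
```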

  9. Motivation • The margin of the vote gives a measure of confidence. • Ideally we would like to assign a weight to each cut in the graph (higher weight to smaller cuts) and then take a vote over all the cuts in the graph according to their weights. • We don’t know how to do this, but randomized mincut can be viewed as an approximation to this ideal.
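
Purely for illustration, one way to write the ideal "vote over all cuts" down (the specific exponential weighting below is an assumption, not taken from the slides): give each cut S consistent with the labeled data a weight that decreases with its size, and predict by the weighted majority.

```latex
f(x) \;=\; \operatorname{sign}\Big(\sum_{S} w(S)\, y_S(x)\Big),
\qquad w(S) \;\propto\; e^{-|\mathrm{cut}(S)|},
```

where y_S(x) is the +1/-1 label that cut S assigns to x. Randomized mincut approximates this vote by sampling a handful of small cuts instead of enumerating all of them.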

  10. Related Work – Gaussian Processes • Zhu, Ghahramani and Lafferty (ICML 2003). • Each unlabeled example receives a label that is the (weighted) average of its neighbors' labels. • Equivalent to minimizing the sum of squared differences between the labels of neighboring examples.
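
A minimal sketch of the "average of its neighbors" computation referred to above (the harmonic-function solution of Zhu, Ghahramani and Lafferty), done by simple iterative averaging. The data structures, iteration count and tolerance are illustrative assumptions.

```python
def harmonic_labels(neighbors, weights, labeled, iters=1000, tol=1e-6):
    """neighbors: dict node -> list of neighboring nodes.
    weights: dict (u, v) -> edge weight (assumed symmetric).
    labeled: dict node -> +1 / -1 for the labeled examples.
    Returns soft labels in [-1, 1] for every node."""
    f = {u: float(labeled.get(u, 0.0)) for u in neighbors}
    for _ in range(iters):
        change = 0.0
        for u in neighbors:
            if u in labeled:
                continue  # labeled examples stay clamped to their labels
            total = sum(weights[(u, v)] for v in neighbors[u])
            new = sum(weights[(u, v)] * f[v] for v in neighbors[u]) / total
            change = max(change, abs(new - f[u]))
            f[u] = new
        if change < tol:
            break
    return f
```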

  11. How to construct the graph? • k-NN • Graph may not have small separators. • How to learn k? • Connect all points within distance δ • Can have disconnected components. • δ is hard to learn. • Minimum Spanning Tree • No parameters to learn. • Gives a connected, sparse graph. • Seems to work well on most datasets (a construction sketch follows below).
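
A sketch of the minimum-spanning-tree construction favoured on this slide, using scipy. Euclidean distance and the distance-to-similarity conversion are illustrative assumptions, not prescriptions from the paper.

```python
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def mst_graph(X):
    """X: (n, d) array of examples. Returns (i, j, weight) edges of a
    minimum spanning tree over the complete Euclidean-distance graph."""
    D = squareform(pdist(X))              # dense pairwise distances
    T = minimum_spanning_tree(D).tocoo()  # sparse MST over those distances
    # Convert distances to similarity-style edge weights for the learner
    # (the 1 / (eps + d) form is just one reasonable choice).
    return [(int(i), int(j), 1.0 / (1e-9 + d))
            for i, j, d in zip(T.row, T.col, T.data)]
```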

  12. Experiments • ONE VS TWO: 1128 examples (8 × 8 array of integers, Euclidean distance). • ODD VS EVEN: 4000 examples (16 × 16 array of integers, Euclidean distance). • PC VS MAC: 1943 examples (20 Newsgroups dataset, TFIDF distance).

  13. ONE VS TWO

  14. ODD VS EVEN

  15. PC VS MAC

  16. Accuracy-Coverage: PC VS MAC (12 labeled)
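
For reference, an accuracy-coverage curve like the one on this slide can be computed from the vote margins produced by randomized mincut: sort predictions by confidence and report accuracy on the most confident fraction. This is a generic sketch with illustrative names, not the authors' evaluation code.

```python
import numpy as np

def accuracy_coverage(margins, preds, truth):
    """margins: vote margins (confidence) for the unlabeled examples.
    preds, truth: predicted and true labels, in the same order.
    Returns (coverage, accuracy) arrays for the most-confident fractions."""
    order = np.argsort(-np.asarray(margins))   # most confident first
    correct = (np.asarray(preds)[order] == np.asarray(truth)[order])
    k = np.arange(1, len(order) + 1)
    coverage = k / len(order)                  # fraction of examples covered
    accuracy = np.cumsum(correct) / k          # accuracy on that fraction
    return coverage, accuracy
```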

  17. Conclusions • We can get useful estimates of the confidence of our predictions. • Often get better accuracy than plain mincut. • Minimum spanning tree gives good results across different datasets.

  18. Future Work • Sample complexity lower bounds (i.e., how much unlabeled data do we need to see?). • Better way of sampling mincuts? Reference • A. Blum, J. Lafferty, M. R. Rwebangira and R. Reddy, “Semi-supervised Learning Using Randomized Mincuts”, ICML 2004 (to appear).

  19. Questions?
