
Multiple Instance Learning for Sparse Positive Bags

Razvan C. Bunescu and Raymond J. Mooney, Machine Learning Group, Department of Computer Sciences, University of Texas at Austin. razvan@cs.utexas.edu, mooney@cs.utexas.edu

Presentation Transcript


  1. Multiple Instance Learning for Sparse Positive Bags Razvan C. Bunescu and Raymond J. Mooney Machine Learning Group, Department of Computer Sciences, University of Texas at Austin razvan@cs.utexas.edu mooney@cs.utexas.edu

  2. Two Types of Supervision • Single Instance Learning (SIL): • the traditional type of supervision in machine learning. • a dataset of positive and negative training instances. • Multiple Instance Learning (MIL): • a dataset of positive and negative training bags of instances. • a bag is positive if at least one instance in the bag is positive. • a bag is negative if all instances in the bag are negative. • the bag instance labels are hidden.
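
The bag-level supervision is easy to pin down in code. Below is a minimal sketch (not from the paper) of the MIL labeling rule: per-instance labels exist but are hidden, and only their disjunction is observed as the bag label.

```python
# A minimal sketch (not the paper's code) of MIL supervision: only the
# bag-level label is observed; the per-instance labels stay hidden.
from dataclasses import dataclass
from typing import List

@dataclass
class Bag:
    instances: List[List[float]]  # feature vectors
    label: int                    # +1 / -1, the only supervision available

def bag_label(hidden_instance_labels: List[int]) -> int:
    """A bag is positive iff at least one (hidden) instance is positive."""
    return +1 if any(y == +1 for y in hidden_instance_labels) else -1

# A sparse positive bag: one positive instance among negatives.
assert bag_label([-1, -1, +1, -1]) == +1
assert bag_label([-1, -1, -1]) == -1
```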

  3. MIL Background: Domains • Originally introduced to solve a Drug Activity prediction problem in biochemistry [Dietterich et al., 1997] • Content Based Image Retrieval [Zhang et al., 2002] • Text categorization [Andrews et al., 2003], [Ray et al., 2005].

  4. MIL Background: Algorithms • Axis Parallel Rectangles [Dietterich et al., 1997] • Diverse Density [Maron, 1998] • Multiple Instance Logistic Regression [Ray & Craven, 2005] • Multi-Instance SVM kernels [Gartner et al., 2002]: • Normalized Set Kernel. • Statistic Kernel.

  5. Outline • Introduction • MIL as SIL with one-side noise • The Normalized Set Kernel (NSK) • Three SVM approaches to MIL: • An SVM approach to sparse MIL (sMIL) • A transductive SVM approach to sparse MIL (stMIL) • A balanced SVM approach to MIL (sbMIL) • Experimental Results • Future Work & Conclusion

  6–11. SIL Approach to MIL • Apply the bag label to all of the bag's instances: let $X_n$ be the instances from negative bags, $X_p$ the instances from positive bags, and $X = X_n \cup X_p$. • Formulate as an SVM problem:

  minimize: $\frac{1}{2}\|w\|^2 + \frac{C}{|X|}\sum_{x \in X} \xi_x$

  subject to: $w \cdot \phi(x) + b \le -1 + \xi_x\ \ \forall x \in X_n$ (negative bags); $w \cdot \phi(x) + b \ge +1 - \xi_x\ \ \forall x \in X_p$ (positive bags); $\xi_x \ge 0$.

  • Here $\frac{1}{2}\|w\|^2$ is the regularization term, the slacks over $X_n$ measure the error on negative bags, and the slacks over $X_p$ measure the error on positive bags.
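
The SIL reduction is straightforward to implement. The sketch below (assumed details; scikit-learn's SVC stands in for the paper's SVM solver) copies each bag's label onto its instances and trains an ordinary soft-margin SVM, accepting the one-side noise on positive bags.

```python
# SIL reduction sketch: copy each bag's label onto all of its instances and
# train a standard soft-margin SVM (scikit-learn's SVC as a stand-in solver).
import numpy as np
from sklearn.svm import SVC

def solve_sil(bags, bag_labels, C=1.0):
    """bags: list of (n_i, d) arrays; bag_labels: +1/-1 per bag."""
    X = np.vstack(bags)
    # Instances from positive bags all inherit +1: the "one-side noise".
    y = np.concatenate([[lab] * len(bag) for bag, lab in zip(bags, bag_labels)])
    return SVC(C=C, kernel="linear").fit(X, y)

# Toy usage: one negative and one positive bag.
rng = np.random.default_rng(0)
bags = [rng.normal(0.0, 1.0, (4, 3)), rng.normal(2.0, 1.0, (5, 3))]
clf = solve_sil(bags, [-1, +1])
```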

  12. Outline • Introduction • MIL as SIL with one-side noise • The Normalized Set Kernel (NSK) • Three SVM approaches to MIL: • An SVM approach to sparse MIL (sMIL) • A transductive SVM approach to sparse MIL (stMIL) • A balanced SVM approach to MIL (sbMIL) • Experimental Results • Future Work & Conclusion

  13–18. From SIL to the Normalized Set Kernel • Start from the SIL formulation: apply the bag label to all bag instances and formulate as an SVM problem. • For a positive bag $X$, sum its $|X|$ instance constraints $w \cdot \phi(x) + b \ge 1 - \xi_x$ to obtain $w \cdot \sum_{x \in X} \phi(x) + |X|\,b \ge |X| - \sum_{x \in X} \xi_x$. • Divide by $|X|$ and set $\phi(X) = \sum_{x \in X} \phi(x)$ and $\xi_X = \frac{1}{|X|}\sum_{x \in X} \xi_x$ to obtain a single bag-level constraint, $w \cdot \frac{\phi(X)}{|X|} + b \ge 1 - \xi_X$: the Normalized Set Kernel representation of the bag.

  19–20. The Normalized Set Kernel [Gartner et al., 2002] • A bag is represented as the normalized sum of its instances, $\phi(X) = \sum_{x \in X} \phi(x)$ scaled by $1/|X|$ (or by $1/\|\phi(X)\|$). • Use bags as examples in an SVM formulation (see the bag-feature sketch below):

  minimize: $\frac{1}{2}\|w\|^2 + \frac{C}{|B|}\sum_{X \in B} \xi_X$

  subject to: $Y_X \left( w \cdot \frac{\phi(X)}{|X|} + b \right) \ge 1 - \xi_X$ and $\xi_X \ge 0$, for every training bag $X \in B$ with bag label $Y_X \in \{-1, +1\}$.
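
A minimal sketch of the NSK bag representation, assuming a linear feature map and scikit-learn's SVC as the bag-level SVM; `norm` selects between normalizing by bag size and by the feature-space 2-norm.

```python
# NSK sketch: represent each bag by the normalized sum of its instance
# vectors, then train a standard SVM over the bag-level vectors.
import numpy as np
from sklearn.svm import SVC

def nsk_features(bags, norm="size"):
    """Map each (n_i, d) bag to sum(phi(x)) / |X| (or / its 2-norm)."""
    feats = []
    for bag in bags:
        s = bag.sum(axis=0)
        feats.append(s / len(bag) if norm == "size" else s / np.linalg.norm(s))
    return np.vstack(feats)

rng = np.random.default_rng(0)
bags = [rng.normal(0.0, 1.0, (4, 3)), rng.normal(2.0, 1.0, (6, 3))]
clf = SVC(kernel="linear").fit(nsk_features(bags), [-1, +1])
```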

  21. The Normalized Set Kernel (NSK) • A positive bag is represented as the normalized sum of its instances. • Use positive bags and negative instances as examples:

  minimize: $\frac{1}{2}\|w\|^2 + \frac{C}{|X_n|}\sum_{x \in X_n} \xi_x + \frac{C}{|X_p|}\sum_{X \in X_p} \xi_X$

  subject to: $w \cdot \phi(x) + b \le -1 + \xi_x\ \ \forall x \in X_n$; $w \cdot \frac{\phi(X)}{|X|} + b \ge 1 - \xi_X\ \ \forall X \in X_p$; $\xi \ge 0$.

  22. Outline • Introduction • MIL as SIL with one-side noise • The Normalized Set Kernel (NSK) • Three SVM approaches to MIL: • An SVM approach to sparse MIL (sMIL) • A transductive SVM approach to sparse MIL (stMIL) • A balanced SVM approach to MIL (sbMIL) • Experimental Results • Future Work & Conclusion

  23. The Normalized Set Kernel (NSK) • A positive bag is the normalized sum of its instances; use positive bags and negative instances as examples, as on slide 21. • The positive-bag constraint $w \cdot \frac{\phi(X)}{|X|} + b \ge 1 - \xi_X$ is too strong, especially when positive bags are sparse.

  24. Inequality Constraints for Positive Bags • The NSK constraint $w \cdot \frac{\phi(X)}{|X|} + b \ge 1 - \xi_X$ is the normalized sum of the per-instance balancing constraints $w \cdot \phi(x) + b \ge 1 - \xi_x$, so it implicitly assumes that all instances inside the bag $X$ are positive.

  25. Inequality Constraints for Positive Bags • We want the balancing constraint to express only that at least one instance in the bag $X$ is positive: one instance satisfies $w \cdot \phi(x) + b \ge 1$, while the other $|X| - 1$ instances need only satisfy $w \cdot \phi(x) + b \ge -1$. • Summing and dividing by $|X|$ gives the sparse MIL constraint: $w \cdot \frac{\phi(X)}{|X|} + b \ge \frac{2 - |X|}{|X|} - \xi_X$.

  26. The Sparse MIL (sMIL)

  minimize: $\frac{1}{2}\|w\|^2 + \frac{C}{|X_n|}\sum_{x \in X_n} \xi_x + \frac{C}{|X_p|}\sum_{X \in X_p} \xi_X$

  subject to: $w \cdot \phi(x) + b \le -1 + \xi_x\ \ \forall x \in X_n$; $w \cdot \frac{\phi(X)}{|X|} + b \ge \frac{2 - |X|}{|X|} - \xi_X\ \ \forall X \in X_p$; $\xi \ge 0$.

  • The right-hand side $\frac{2 - |X|}{|X|}$ is larger for smaller bags, so small positive bags are more informative than large positive bags.
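
A minimal sketch of the sMIL primal as a quadratic program, using cvxpy as a stand-in solver (the paper works with an equivalent kernelized dual; this sketch is linear-kernel only). The bag constraint uses the $(2-|X|)/|X|$ target derived above.

```python
# sMIL primal sketch (linear kernel), solved as a QP with cvxpy.
import cvxpy as cp
import numpy as np

def solve_smil(neg_instances, pos_bags, C=1.0):
    """neg_instances: (n, d) array; pos_bags: list of (n_i, d) arrays."""
    d = neg_instances.shape[1]
    w, b = cp.Variable(d), cp.Variable()
    xi_n = cp.Variable(len(neg_instances), nonneg=True)
    xi_p = cp.Variable(len(pos_bags), nonneg=True)

    cons = [neg_instances @ w + b <= -1 + xi_n]   # negative instances
    for j, bag in enumerate(pos_bags):            # sparse-MIL bag constraints
        n = len(bag)                              # mean feature = phi(X)/|X|
        cons.append(bag.mean(axis=0) @ w + b >= (2 - n) / n - xi_p[j])

    obj = 0.5 * cp.sum_squares(w) \
        + C / len(neg_instances) * cp.sum(xi_n) \
        + C / len(pos_bags) * cp.sum(xi_p)
    cp.Problem(cp.Minimize(obj), cons).solve()
    return w.value, b.value
```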

  27. Outline • Introduction • MIL as SIL with one-side noise • The Normalized Set Kernel (NSK) • Three SVM approaches to MIL: • An SVM approach to sparse MIL (sMIL) • A transductive SVM approach to sparse MIL (stMIL) • A balanced SVM approach to MIL (sbMIL) • Experimental Results • Future Work & Conclusion

  28. Inequality Constraints for Positive Bags • sMIL is closer than NSK to expressing the constraint that at least one instance from a positive bag is positive. • However, sMIL does not guarantee that at least one instance is positive: • Problem: the constraint may be satisfied when all instances have negative scores that are very close to zero. • Solution: force all negative instances to score at most $-1 + \xi_x$ by adding, to the sparse MIL constraint, the transductive constraint $|w \cdot \phi(x) + b| \ge 1 - \xi_x$ for every instance $x$ in a positive bag.

  29. Inequality Constraints for Positive Bags • Combining the sparse MIL constraint with the transductive constraint expresses that at least one instance is positive. • With slacks shared between the two constraints, this becomes a mixed integer programming problem.

  30. Inequality Constraints for Positive Bags • With independent slacks for the sparse MIL constraint and the transductive constraint, the problem becomes easier and can be solved with CCCP [Yuille et al., 2002].

  31. The Sparse Transductive MIL (stMIL) • Let $X_p^I = \{x \mid x \in X,\ X \in X_p\}$ be the instances from positive bags.

  minimize: $\frac{1}{2}\|w\|^2 + \frac{C}{|X_n|}\sum_{x \in X_n} \xi_x + \frac{C}{|X_p|}\sum_{X \in X_p} \xi_X + \frac{C}{|X_p^I|}\sum_{x \in X_p^I} \xi_x$

  subject to: $w \cdot \phi(x) + b \le -1 + \xi_x\ \ \forall x \in X_n$; $w \cdot \frac{\phi(X)}{|X|} + b \ge \frac{2 - |X|}{|X|} - \xi_X\ \ \forall X \in X_p$; $|w \cdot \phi(x) + b| \ge 1 - \xi_x\ \ \forall x \in X_p^I$; $\xi \ge 0$.

  • The absolute-value constraints are non-convex; solve with CCCP, as in [Collobert et al., 2006].
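
A rough sketch of the CCCP-style alternation for the non-convex constraint $|w \cdot \phi(x) + b| \ge 1 - \xi_x$, reusing `solve_smil` from the sketch above. The structure (freeze the sign of $f(x)$ per instance, solve the resulting convex problem, repeat) is an assumption for illustration, not the paper's exact solver.

```python
# CCCP-style stMIL sketch: alternate between freezing the sign of f(x) for
# every positive-bag instance and solving the resulting convex problem.
import cvxpy as cp
import numpy as np

def solve_stmil(neg_instances, pos_bags, C=1.0, max_iters=10):
    w, b = solve_smil(neg_instances, pos_bags, C)   # convex initialization
    pos_inst = np.vstack(pos_bags)
    signs = np.where(pos_inst @ w + b >= 0, 1.0, -1.0)
    for _ in range(max_iters):
        d = neg_instances.shape[1]
        wv, bv = cp.Variable(d), cp.Variable()
        xi_n = cp.Variable(len(neg_instances), nonneg=True)
        xi_b = cp.Variable(len(pos_bags), nonneg=True)
        xi_u = cp.Variable(len(pos_inst), nonneg=True)
        cons = [neg_instances @ wv + bv <= -1 + xi_n,
                # with signs frozen, |f(x)| >= 1 - xi becomes linear:
                cp.multiply(signs, pos_inst @ wv + bv) >= 1 - xi_u]
        for j, bag in enumerate(pos_bags):
            n = len(bag)
            cons.append(bag.mean(axis=0) @ wv + bv >= (2 - n) / n - xi_b[j])
        obj = 0.5 * cp.sum_squares(wv) \
            + C / len(neg_instances) * cp.sum(xi_n) \
            + C / len(pos_bags) * cp.sum(xi_b) \
            + C / len(pos_inst) * cp.sum(xi_u)
        cp.Problem(cp.Minimize(obj), cons).solve()
        w, b = wv.value, bv.value
        new_signs = np.where(pos_inst @ w + b >= 0, 1.0, -1.0)
        if np.array_equal(new_signs, signs):
            break
        signs = new_signs
    return w, b
```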

  32. Outline • Introduction • MIL as SIL with one-side noise • The Normalized Set Kernel (NSK) • Three SVM approaches to MIL: • An SVM approach to sparse MIL (sMIL) • A transductive SVM approach to sparse MIL (stMIL) • A balanced SVM approach to MIL (sbMIL) • Experimental Results • Future Work & Conclusion

  33. A Balanced SVM Approach to MIL • SIL is ideal when bags are dense in positive instances. • sMIL is ideal when bags are sparse in positive instances. • If the expected density $\eta$ of positive instances is known, design a method that: • converges to SIL when $\eta \to 1$. • converges to sMIL when $\eta \to 0$. • If $\eta$ is unknown, it can be set using cross-validation.

  34. The Balanced MIL (sbMIL) • Input: • Training negative bags $\mathbf{X}_n$; define $X_n = \{x \mid x \in X,\ X \in \mathbf{X}_n\}$. • Training positive bags $\mathbf{X}_p$; define $X_p = \{x \mid x \in X,\ X \in \mathbf{X}_p\}$. • Features $\phi(x)$, or kernel $K(x,y)$. • Capacity parameter $C \ge 0$ and balance parameter $\eta \in [0,1]$. • Output: • Decision function $f(x) = w \cdot \phi(x) + b$. • Algorithm (see the sketch below): • $(w,b) \leftarrow$ solve_sMIL($X_n$, $X_p$, $\phi$, $C$). • Order all instances $x \in X_p$ by $f(x)$. • Label the top $\eta\,|X_p|$ instances as positive and the remaining $(1-\eta)\,|X_p|$ as negative. • $(w,b) \leftarrow$ solve_SIL($X_n$, $X_p$, $\phi$, $C$).
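
A minimal sketch of sbMIL's two stages, reusing `solve_smil` from the sMIL sketch above and scikit-learn's SVC as a stand-in SIL solver; `eta` is the balance parameter $\eta$.

```python
# sbMIL sketch: score positive-bag instances with sMIL, label the top
# eta-fraction positive, then retrain with plain SIL on the labeled instances.
import numpy as np
from sklearn.svm import SVC

def solve_sbmil(neg_instances, pos_bags, eta=0.5, C=1.0):
    w, b = solve_smil(neg_instances, pos_bags, C)   # stage 1: sMIL scoring
    pos_inst = np.vstack(pos_bags)
    scores = pos_inst @ w + b
    k = int(round(eta * len(pos_inst)))             # top eta*|Xp| are positive
    y_pos = -np.ones(len(pos_inst))
    y_pos[np.argsort(-scores)[:k]] = +1
    X = np.vstack([neg_instances, pos_inst])        # stage 2: retrain with SIL
    y = np.concatenate([-np.ones(len(neg_instances)), y_pos])
    return SVC(C=C, kernel="linear").fit(X, y)
```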

  35. Outline • Introduction • MIL as SIL with one-side noise • The Normalized Set Kernel (NSK) • Three SVM approaches to MIL: • An SVM approach to sparse MIL (sMIL) • A transductive SVM approach to sparse MIL (stMIL) • A balanced SVM approach to MIL (sbMIL) • Experimental Results • Future Work & Conclusion

  36. Experimental Results: Datasets • [AIMed] An artificial, maximally sparse dataset: • Created from AIMed [Bunescu et al., 2005], a dataset of documents annotated for protein interactions; • A sentence example contains a pair of proteins – the sentence is positive iff it asserts an interaction between the two proteins. • Create positive bags of sentences: choose the bag size randomly between Smin and Smax, start with exactly one positive instance, and randomly add negative instances. • Create negative bags of sentences: choose the bag size randomly between Smin and Smax and randomly add negative instances (see the toy generator below). • Use the subsequence kernel from [Bunescu & Mooney, 2005].
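
A toy generator mirroring the described bag construction (assumed details such as the pool representation): positive bags start with exactly one positive instance and are padded with negatives; negative bags contain only negatives.

```python
# Toy bag generator for the AIMed-style construction (assumed pool layout):
# a positive bag holds exactly one positive instance plus random negatives.
import random

def make_bag(pos_pool, neg_pool, positive, s_min=2, s_max=8, rng=random):
    size = rng.randint(s_min, s_max)                 # inclusive bounds
    bag = [rng.choice(pos_pool)] if positive else [rng.choice(neg_pool)]
    while len(bag) < size:
        bag.append(rng.choice(neg_pool))
    return bag, (+1 if positive else -1)
```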

  37. Experimental Results: Datasets • [CBIR] Content Based Image Retrieval: • Categorize images as to whether they contain an object of interest. • An image is a bag of image regions. • The number of regions varies widely between images. • For every image, expect that relatively few regions contain the object of interest, so positive bags are naturally sparse. • Evaluate on the [Tiger], [Elephant], and [Fox] datasets from [Andrews et al., 2003]. • Use a quadratic kernel with the original feature vectors.

  38. Experimental Results: Datasets • [TST] Text categorization datasets: • Medline articles are bags of overlapping text passages. • Articles are annotated with MeSH terms – use them as classes. • Use [TST1] and [TST2] from [Andrews et al., 2003]. • [MUSK] Drug Activity prediction: • Bags of 3D low-energy conformations for every molecule. • A bag is positive if at least one conformation binds to the target, i.e. if the molecule smells “musky”. • [MUSK1] and [MUSK2] datasets from [Dietterich et al., 1997]. • Use a quadratic kernel with the original feature vectors.

  39. Experimental Results: Systems • [SIL] The MIL as SIL with one-side noise. • [NSK] The Normalized Set Kernel. • [STK] The Statistic Kernel. • [sMIL] The SVM approach to sparse MIL. • [stMIL] The transductive SVM approach to sparse MIL. • [sbMIL] The balanced SVM approach to MIL.

  40–42. Experimental Results [results figures not preserved in the transcript]

  43. Future Work • Capture distribution imbalance in the MIL model: • instances belonging to the same bag are, in general, more similar than instances belonging to different bags. • Incorporate estimates of bag-level density in the MIL model: • in some applications, estimates of the density of positive instances are available for every bag.

  44. Conclusion • Proposed an SVM approach to MIL that is particularly effective when bags are sparse in positive instances. • Modeling a global density of positive instances in positive bags further improves the accuracy. • Treating instances from positive bags as unlabeled data in a transductive setting is useful when negative instances in positive and negative bags come from the same distribution.

  45. Questions?
