
Data Mining (and machine learning)


Presentation Transcript


  1. Data Mining (and machine learning): ROC curves, Rule Induction, Basics of Text Mining

  2. Two classes is a common and special case

  3. Two classes is a common and special case. Medical applications: cancer, or not? Computer Vision applications: landmine, or not? Security applications: terrorist, or not? Biotech applications: gene, or not? …


  5. Two classes is a common and special case. True Positive: these are ideal, e.g. we correctly detect cancer.

  6. Two classes is a common and special case. True Positive: these are ideal, e.g. we correctly detect cancer. False Positive: to be minimised – these cause false alarms – it can be better to be safe than sorry, but they can be very costly.

  7. Two classes is a common and special case. True Positive: these are ideal, e.g. we correctly detect cancer. False Positive: to be minimised – these cause false alarms – it can be better to be safe than sorry, but they can be very costly. False Negative: also to be minimised – missing a landmine / cancer is very bad in many applications.

  8. Two classes is a common and special case. True Positive: these are ideal, e.g. we correctly detect cancer. False Positive: to be minimised – these cause false alarms – it can be better to be safe than sorry, but they can be very costly. False Negative: also to be minimised – missing a landmine / cancer is very bad in many applications. True Negative?

  9. Sensitivity and Specificity: common measures of accuracy in this kind of 2-class task

  10. Sensitivity and Specificity: common measures of accuracy in this kind of 2-class task. Sensitivity = TP/(TP+FN) – how many of the real ‘Yes’ cases are detected? How well can it detect the condition? Specificity = TN/(FP+TN) – how many of the real ‘No’ cases are correctly classified? How well can it rule out the condition?
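
In code, these two measures are just ratios of the four counts. A minimal Python sketch (the function name and toy labels below are illustrative, not from the slides):

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity and specificity for binary labels (1 = YES, 0 = NO)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)

# Toy example: 4 real YES cases, 4 real NO cases
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 1, 1]
print(sensitivity_specificity(y_true, y_pred))   # (0.75, 0.5)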

  11. [Plot: scatter of YES and NO cases]

  12. [Plot: the same YES/NO scatter with a decision line separating predicted YES from predicted NO]

  13. Sensitivity: 100%, Specificity: 25% [Plot: YES/NO scatter with the decision line at this setting]

  14. Sensitivity: 93.8%, Specificity: 50% [Plot: YES/NO scatter with the decision line at this setting]

  15. Sensitivity: 81.3%, Specificity: 83.3% [Plot: YES/NO scatter with the decision line at this setting]

  16. Sensitivity: 56.3%, Specificity: 100% [Plot: YES/NO scatter with the decision line at this setting]

  17. Sensitivity: 100%, Specificity: 25% [Plot: YES/NO scatter with the decision line at this setting] 100% Sensitivity means: detects all cancer cases (or whatever) but possibly with many false positives

  18. Sensitivity: 56.3%, Specificity: 100% [Plot: YES/NO scatter with the decision line at this setting] 100% Specificity means: misses some cancer cases (or whatever) but no false positives

  19. Sensitivity and Specificity: common measures of accuracy in this kind of 2-class task. Sensitivity = TP/(TP+FN) – how many of the real TRUE cases are detected? How sensitive is the classifier to TRUE cases? A highly sensitive test for cancer: if it says “NO”, you can be sure it’s “NO”. Specificity = TN/(TN+FP) – how sensitive is the classifier to the negative cases? A highly specific test for cancer: if it says “Y”, you can be sure it’s “Y”. With many trained classifiers, you can ‘move the line’ in this way. E.g. with Naive Bayes (NB), we could use a threshold indicating how much higher the log likelihood for Y should be than for N.
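
One way to ‘move the line’ in code – a hedged sketch, assuming the classifier produces a log-likelihood score per case (the threshold values and function name are mine):

def predict_with_threshold(log_lik_yes, log_lik_no, threshold=0.0):
    """Predict YES only if the YES log likelihood beats NO by at least
    `threshold`. Raising the threshold trades sensitivity for specificity."""
    return "YES" if (log_lik_yes - log_lik_no) >= threshold else "NO"

print(predict_with_threshold(-2.0, -2.5, threshold=0.0))   # YES
print(predict_with_threshold(-2.0, -2.5, threshold=1.0))   # NO: stricter line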

  20. ROC curves David Corne and Nick Taylor, Heriot-Watt University - dwcorne@gmail.com These slides and related resources: http://www.macs.hw.ac.uk/~dwcorne/Teaching/dmml.html
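
The ROC figures themselves are images, but the construction follows directly from slides 11-19: sweep the decision threshold and record (1 - specificity, sensitivity) at each setting. A sketch in plain Python (the score values are invented for illustration):

def roc_points(scores_yes, scores_no):
    """Sweep a decision threshold over all scores; return (FPR, TPR) pairs.
    scores_yes / scores_no: classifier scores for the real YES / NO cases."""
    thresholds = sorted(set(scores_yes + scores_no), reverse=True)
    points = [(0.0, 0.0)]
    for t in thresholds:
        tpr = sum(s >= t for s in scores_yes) / len(scores_yes)  # sensitivity
        fpr = sum(s >= t for s in scores_no) / len(scores_no)    # 1 - specificity
        points.append((fpr, tpr))
    return points

# e.g. made-up log-likelihood-ratio scores from a trained classifier
print(roc_points([0.9, 0.8, 0.55, 0.4], [0.7, 0.3, 0.2, 0.1]))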

  21. Rule Induction
• Rules are useful when you want to learn a clear / interpretable classifier, and are less worried about squeezing out as much accuracy as possible
• There are a number of different ways to ‘learn’ rules or rulesets
• Before we go there, what is a rule / ruleset?

  22. Rules: IF Condition … Then Class Value is …

  23. Rules are Rectangular: IF (X>0)&(X<5)&(Y>0.5)&(Y<5) THEN YES [Plot: this rule’s rectangle drawn over the YES/NO scatter; X axis 0-12, Y axis 0-5]

  24. Rules are Rectangular: IF (X>5)&(X<11)&(Y>4.5)&(Y<5.1) THEN NO [Plot: this rule’s rectangle drawn over the YES/NO scatter; X axis 0-12, Y axis 0-5]
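
To see why such rules are ‘rectangular’, here is a direct Python translation of the two rules above (the function names are mine): each rule is a conjunction of interval tests, one per axis, so the region where it fires is an axis-aligned rectangle.

def rule_23(x, y):
    """Slide 23's rule: IF (X>0)&(X<5)&(Y>0.5)&(Y<5) THEN YES."""
    return (0 < x < 5) and (0.5 < y < 5)

def rule_24(x, y):
    """Slide 24's rule: IF (X>5)&(X<11)&(Y>4.5)&(Y<5.1) THEN NO."""
    return (5 < x < 11) and (4.5 < y < 5.1)

print(rule_23(2, 3))   # True  -> predict YES
print(rule_24(2, 3))   # False -> this rule does not fire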

  25. A Ruleset:
IF Condition1 … Then Class = A
IF Condition2 … Then Class = A
IF Condition3 … Then Class = B
IF Condition4 … Then Class = C
…

  26. What’s wrong with this ruleset? (two things) [Plot: a ruleset’s rectangles drawn over the YES/NO scatter]

  27. What about this ruleset? [Plot: another ruleset’s rectangles drawn over the YES/NO scatter]

  28. Two ways to interpret a ruleset:

  29. Two ways to interpret a ruleset: as a Decision List
IF Condition1 … Then Class = A
ELSE IF Condition2 … Then Class = A
ELSE IF Condition3 … Then Class = B
ELSE IF Condition4 … Then Class = C
…
ELSE … predict Background Majority Class

  30. Two ways to interpret a ruleset: as an unordered set
IF Condition1 … Then Class = A
IF Condition2 … Then Class = A
IF Condition3 … Then Class = B
IF Condition4 … Then Class = C
Check each rule and gather votes for each class. If no winner, predict the background majority class.
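
A sketch of both interpretations in Python (the toy rules and the default class are illustrative, not from the slides):

from collections import Counter

# Toy rules: (condition over (x, y), class)
RULES = [
    (lambda x, y: 0 < x < 5 and 0.5 < y < 5, "A"),
    (lambda x, y: x >= 5 and y > 4.5,        "B"),
    (lambda x, y: y <= 0.5,                  "C"),
]

def decision_list(rules, x, y, default="A"):
    """Ordered reading: the first rule that fires wins."""
    for cond, label in rules:
        if cond(x, y):
            return label
    return default                  # background majority class (assumed "A")

def unordered_vote(rules, x, y, default="A"):
    """Unordered reading: every firing rule votes; the majority class wins."""
    votes = Counter(label for cond, label in rules if cond(x, y))
    if not votes:
        return default
    top = votes.most_common(2)
    if len(top) == 2 and top[0][1] == top[1][1]:   # tie -> no winner
        return default
    return top[0][0]

print(decision_list(RULES, 2, 3))    # A
print(unordered_vote(RULES, 2, 3))   # A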

  31. Three broad ways to learn rulesets

  32. Three broad ways to learn rulesets 1. Just build a decision tree with ID3 (or something else) and you can translate the tree into rules!
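
For example, each root-to-leaf path of a fitted tree becomes one rule. A sketch assuming scikit-learn is available (sklearn.tree._tree is a non-public but commonly used API; the helper name and toy data are mine):

import numpy as np
from sklearn.tree import DecisionTreeClassifier, _tree

def tree_to_rules(clf, feature_names):
    """Print each root-to-leaf path of a fitted tree as an IF ... THEN rule."""
    t = clf.tree_
    def walk(node, conds):
        if t.feature[node] == _tree.TREE_UNDEFINED:          # leaf node
            cls = clf.classes_[t.value[node][0].argmax()]
            print("IF", " & ".join(conds) if conds else "TRUE", "THEN", cls)
            return
        name, thr = feature_names[t.feature[node]], t.threshold[node]
        walk(t.children_left[node],  conds + [f"({name}<={thr:.2f})"])
        walk(t.children_right[node], conds + [f"({name}>{thr:.2f})"])
    walk(0, [])

# Toy 2-D data in the spirit of the earlier plots
X = np.array([[1, 1], [2, 4], [8, 5], [10, 4.8]])
y = np.array(["NO", "YES", "YES", "NO"])
tree_to_rules(DecisionTreeClassifier().fit(X, y), ["X", "Y"])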

  33. Three broad ways to learn rulesets 2. Use any good search/optimisation algorithm. Evolutionary (genetic) algorithms are the most common. You will do this in Coursework 3. This means simply guessing a ruleset at random, and then trying mutations and variants, gradually improving them over time.
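
A minimal sketch of that idea – mutation-based hill climbing over rectangular rules. All names, ranges and the fitness measure here are illustrative, not the coursework specification:

import random

def random_rule():
    """A random rectangular rule: (x_lo, x_hi, y_lo, y_hi, class)."""
    x_lo, y_lo = random.uniform(0, 12), random.uniform(0, 5)
    return (x_lo, x_lo + random.uniform(0, 6),
            y_lo, y_lo + random.uniform(0, 3),
            random.choice(["YES", "NO"]))

def predict(ruleset, x, y, default="NO"):
    for x_lo, x_hi, y_lo, y_hi, label in ruleset:   # decision-list reading
        if x_lo <= x <= x_hi and y_lo <= y <= y_hi:
            return label
    return default

def accuracy(ruleset, data):    # data: list of (x, y, true_class)
    return sum(predict(ruleset, x, y) == c for x, y, c in data) / len(data)

def hill_climb(data, n_rules=4, iters=2000):
    """Guess a ruleset, then repeatedly mutate one rule, keeping improvements."""
    best = [random_rule() for _ in range(n_rules)]
    for _ in range(iters):
        cand = list(best)
        cand[random.randrange(n_rules)] = random_rule()   # mutate one rule
        if accuracy(cand, data) >= accuracy(best, data):
            best = cand
    return best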

  34. Three broad ways to learn rulesets 3. A number of ‘old’ AI algorithms exist that still work well, and/or can be engineered to work with an evolutionary algorithm. The basic idea is iterated coverage, illustrated on the next slides (a code sketch follows slide 43).

  35. Take each class in turn… [Plot: YES/NO scatter; X axis 0-12, Y axis 0-5]

  36. Pick a random member of that class in the training set [Plot: one point of the current class highlighted]

  37. Extend it as much as possible without including another class [Plot: a small rectangle grown around the chosen point]

  38. Extend it as much as possible without including another class [Plot: the rectangle extended further]

  39. Extend it as much as possible without including another class [Plot: the rectangle extended further]

  40. Extend it as much as possible without including another class [Plot: the rectangle at its largest extent]

  41. Next class [Plot: a rectangle started for the other class]

  42. Next class [Plot: that rectangle extended]

  43. And so on… [Plot: the accumulated rectangles covering the data]
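
Putting slides 35-43 together, a hedged sketch of iterated coverage in Python. All names are mine, and ‘extend as much as possible’ is realised here by greedily growing each side in fixed steps – one simple choice among many:

import random

def grow_rectangle(seed, other, step=0.5, xmax=12, ymax=5):
    """Grow an axis-aligned rectangle from one seed point, one side at a
    time, refusing any extension that would take in another class's point."""
    x_lo = x_hi = seed[0]
    y_lo = y_hi = seed[1]
    def clean(r):
        return not any(r[0] <= x <= r[1] and r[2] <= y <= r[3] for x, y in other)
    grew = True
    while grew:
        grew = False
        for d in ((-step, 0, 0, 0), (0, step, 0, 0),
                  (0, 0, -step, 0), (0, 0, 0, step)):
            r = (x_lo + d[0], x_hi + d[1], y_lo + d[2], y_hi + d[3])
            if r[0] >= 0 and r[1] <= xmax and r[2] >= 0 and r[3] <= ymax and clean(r):
                x_lo, x_hi, y_lo, y_hi = r
                grew = True
    return x_lo, x_hi, y_lo, y_hi

def iterated_coverage(points_by_class):
    """points_by_class: e.g. {"YES": [(x, y), ...], "NO": [(x, y), ...]}"""
    rules = []
    for cls, pts in points_by_class.items():          # slide 35: each class in turn
        other = [p for c, ps in points_by_class.items() if c != cls for p in ps]
        uncovered = list(pts)
        while uncovered:
            seed = random.choice(uncovered)           # slide 36: random member
            rect = grow_rectangle(seed, other)        # slides 37-40: extend
            rules.append((rect, cls))
            uncovered = [(x, y) for x, y in uncovered
                         if not (rect[0] <= x <= rect[1] and rect[2] <= y <= rect[3])]
    return rules                                      # slides 41-43: next class…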

  44. Text as Data: what and why?

  45. [Word clouds: students’ implementation choices for DMML CW1, in 2012 and 2014] “Word Clouds” – word frequency patterns provide useful information

  46. Classify sentiment: “Word Clouds” – word frequency patterns provide useful information … which can be used to predict a class value / category / signal. In this case the document(s) are “tweets mentioning our airline over the past few hours” and the class value is a satisfaction score, between 0 and 1. [Chart: ACS Index vs Twitter sentiment] http://www.inside-r.org/howto/mining-twitter-airline-consumer-sentiment
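
The linked R tutorial scores each tweet by counting words from positive and negative word lists. A minimal Python equivalent – the tiny word lists here are placeholders for a real lexicon such as Hu & Liu’s opinion lexicon:

POSITIVE = {"good", "great", "thanks", "love", "smooth"}
NEGATIVE = {"bad", "delayed", "lost", "awful", "cancelled"}

def sentiment(tweet):
    """Score = (# positive words) - (# negative words) in the tweet."""
    words = tweet.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(sentiment("great flight thanks"))          #  2
print(sentiment("bag lost and flight delayed"))  # -2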

  47. Sentiment map of NYC – more info from tweets, this time a “happiness” score. http://necsi.edu/research/social/newyork/sentimentmap/

  48. “Similar pages” – based on distances between word frequency patterns
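
A common distance for word frequency patterns is cosine similarity between term-frequency vectors. A self-contained sketch (function name and example texts are mine):

from collections import Counter
from math import sqrt

def cosine_similarity(doc_a, doc_b):
    """Cosine similarity between the word-frequency vectors of two texts."""
    a, b = Counter(doc_a.lower().split()), Counter(doc_b.lower().split())
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("data mining and machine learning",
                        "machine learning mines data"))   # ~0.67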

  49. Predicting relationship between two people based on their text messages

  50. Can you predict the class (Desktop, Laptop or LED-TV) from the word frequencies of a product description on Amazon?
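
One standard answer is Naive Bayes over word counts (mentioned on slide 19). A hedged sketch with made-up toy descriptions – a real classifier would need far more training text:

from collections import Counter, defaultdict
from math import log

def train_nb(docs):
    """Multinomial Naive Bayes over word counts, with add-one smoothing.
    docs: list of (text, class) pairs."""
    counts, class_docs = defaultdict(Counter), Counter()
    for text, cls in docs:
        class_docs[cls] += 1
        counts[cls].update(text.lower().split())
    vocab = {w for c in counts.values() for w in c}
    return counts, class_docs, vocab

def classify(model, text):
    counts, class_docs, vocab = model
    total = sum(class_docs.values())
    def log_score(cls):
        n = sum(counts[cls].values())
        return (log(class_docs[cls] / total)
                + sum(log((counts[cls][w] + 1) / (n + len(vocab)))
                      for w in text.lower().split() if w in vocab))
    return max(class_docs, key=log_score)

docs = [("quad core desktop tower pc", "Desktop"),
        ("slim laptop with long battery life", "Laptop"),
        ("55 inch led tv with hd screen", "LED-TV")]
model = train_nb(docs)
print(classify(model, "compact desktop pc quad core"))   # Desktop (toy data)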
