
I256: Applied Natural Language Processing


Presentation Transcript


  1. I256: Applied Natural Language Processing Marti Hearst Sept 27, 2006

  2. Evaluation Measures

  3. Evaluation Measures • Precision: • Proportion of those you labeled X that the gold standard thinks really is X • #correctly labeled by alg / all labels assigned by alg • #True Positive / (#True Positive + #False Positive) • Recall: • Proportion of those items that are labeled X in the gold standard that you actually label X • #correctly labeled by alg / all possible correct labels • #True Positive / (#True Positive + #False Negative)
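
A minimal sketch of these two measures as plain Python functions (the function names and example counts are mine, not the lecture's code):

    def precision(tp, fp):
        """Proportion of the algorithm's X labels that the gold standard agrees with."""
        return tp / (tp + fp) if (tp + fp) else 0.0

    def recall(tp, fn):
        """Proportion of the gold-standard X items that the algorithm actually labeled X."""
        return tp / (tp + fn) if (tp + fn) else 0.0

    # Example: 8 correct X labels, 2 spurious labels, 4 gold X items missed.
    print(precision(8, 2))  # 0.8
    print(recall(8, 4))     # 0.666...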

  4. F-measure • Can “cheat” with precision scores by labeling (almost) nothing with X. • Can “cheat” on recall by labeling everything with X. • The better you do on precision, the worse on recall, and vice versa • The F-measure is a balance between the two. • 2*precision*recall / (recall+precision)
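
Continuing the sketch above, the balanced F-measure is the harmonic mean of precision and recall:

    def f_measure(p, r):
        """Balanced F-measure (F1): 2*p*r / (p + r)."""
        return 2 * p * r / (p + r) if (p + r) else 0.0

    # A labeler that tags everything X gets perfect recall but poor
    # precision, so its F-measure stays low.
    print(f_measure(0.1, 1.0))  # ~0.18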

  5. Evaluation Measures • Accuracy: • Proportion that you got right • (#True Positive + #True Negative) / N, where N = TP + TN + FP + FN • Error: • (#False Positive + #False Negative) / N
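
And the corresponding accuracy and error rates over all N = TP + TN + FP + FN decisions, in the same illustrative style:

    def accuracy(tp, tn, fp, fn):
        """Proportion of all decisions that were right."""
        n = tp + tn + fp + fn
        return (tp + tn) / n if n else 0.0

    def error(tp, tn, fp, fn):
        """Proportion of all decisions that were wrong; equals 1 - accuracy."""
        n = tp + tn + fp + fn
        return (fp + fn) / n if n else 0.0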

  6. Prec/Recall vs. Accuracy/Error • When to use Precision/Recall? • Useful when there are only a few positives and many, many negatives • Also good for ranked ordering • Search results ranking • When to use Accuracy/Error? • When every item has to be judged, and it's important that every item be correct. • Error is better when the differences between algorithms are very small; lets you focus on small improvements. • Speech recognition

  7. Evaluating Partial Parsing • How do we evaluate it?

  8. Evaluating Partial Parsing
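
The figures and code on these slides are not in the transcript. Below is a minimal sketch, assuming NLTK's RegexpParser and the CoNLL-2000 chunked corpus, of how a partial (chunk) parser can be scored; the grammar and variable names are mine, and the NLTK API has changed since 2006:

    import nltk
    from nltk.corpus import conll2000

    # nltk.download('conll2000')  # fetch the corpus once, if needed

    # A single-rule NP chunker in the spirit of the lecture's "simple rule":
    # optional determiner, any number of adjectives, then one or more nouns.
    grammar = "NP: {<DT>?<JJ>*<NN.*>+}"
    chunker = nltk.RegexpParser(grammar)

    test_sents = conll2000.chunked_sents('test.txt', chunk_types=['NP'])
    score = chunker.evaluate(test_sents)  # returns a ChunkScore
    print(score)  # IOB accuracy, precision, recall, and F-measure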

  9. Testing our Simple Rule • Let’s see where we missed:

  10. Update rules; Evaluate Again

  11. Evaluate on More Examples

  12. Incorrect vs. Missed • Add code to print out which were incorrect
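
Building on the ChunkScore returned by chunker.evaluate(...) in the sketch above, NLTK's ChunkScore object can list both kinds of mistakes; the lecture's own printing code is not in the transcript, so this is only one way to do it:

    # score is the ChunkScore from the evaluation sketch above.
    print("Missed (in the gold standard but not found by our rule):")
    for chunk in score.missed()[:10]:
        print("  ", chunk)

    print("Incorrect (found by our rule but not in the gold standard):")
    for chunk in score.incorrect()[:10]:
        print("  ", chunk)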

  13. Missed vs. Incorrect

  14. What is a good Chunking Baseline?
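
The baseline code itself does not survive in the transcript. Purely as an illustrative stand-in, one very simple baseline chunks every maximal run of noun-like tags as an NP and is scored the same way as the rule above:

    # Illustrative baseline only, not the lecture's code: any run of
    # determiners, adjectives, numbers, pronouns, or nouns becomes one NP.
    baseline_grammar = "NP: {<DT|JJ|CD|PRP|NN.*>+}"
    baseline = nltk.RegexpParser(baseline_grammar)
    print(baseline.evaluate(test_sents))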

  15. The Tree Data Structure
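
Only the slide title survives here. For reference, NLTK represents chunk structures with its Tree class; a small sketch of building and walking one (the sentence is made up):

    from nltk.tree import Tree

    # A chunk structure is a Tree: an S node whose children are either
    # chunk subtrees (here one NP) or plain (word, tag) leaves.
    sent = Tree('S', [
        Tree('NP', [('the', 'DT'), ('cat', 'NN')]),
        ('sat', 'VBD'),
        ('down', 'RP'),
    ])

    print(sent.label())  # 'S'
    for child in sent:
        if isinstance(child, Tree):
            print('chunk:', child.label(), child.leaves())
        else:
            print('token:', child)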

  16. Baseline Code (continued)

  17. Evaluating the Baseline

  18. Cascaded Chunking
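
A sketch of a cascaded chunker, following the multi-stage grammar style of the NLTK book rather than the lecture's slides; the rules and the example sentence are illustrative:

    # Each rule is one stage; loop=2 re-applies the stages so that clauses
    # found on the first pass can feed the VP rule on the second pass.
    cascade_grammar = r"""
        NP: {<DT|JJ|NN.*>+}           # chunk determiner/adjective/noun runs
        PP: {<IN><NP>}                # chunk prepositions followed by NP
        VP: {<VB.*><NP|PP|CLAUSE>+$}  # chunk verbs and their arguments
        CLAUSE: {<NP><VP>}            # chunk NP + VP into a clause
        """
    cascade = nltk.RegexpParser(cascade_grammar, loop=2)

    tagged = [('Mary', 'NN'), ('saw', 'VBD'), ('the', 'DT'), ('cat', 'NN'),
              ('sit', 'VB'), ('on', 'IN'), ('the', 'DT'), ('mat', 'NN')]
    print(cascade.parse(tagged))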

  19. Next Time • Summarization
