
Unsupervised and Supervised Tracking



  1. UMASS-Amherst at TDT 2004 • Unsupervised and Supervised Tracking • Hema Raghavan

  2. Outline • Create a training corpus • Unsupervised tracking • Supervised Tracking • Discussion

  3. Creating a training corpus • For tracking, 50% of the topics are English and 50% are multilingual • Created a training corpus (used for both supervised and unsupervised runs) • 30 topics from TDT4 • 50% primarily English topics • 50% multilingual topics • A sampling sketch follows
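
A minimal sketch of how that 50/50 split might be assembled; the `build_training_topics` helper and the `primarily_english` field are hypothetical, not part of the real TDT4 metadata.

```python
import random

def build_training_topics(tdt4_topics, n=30, seed=0):
    """Sample n training topics with a 50/50 English/multilingual split.

    Each topic is assumed to be a dict carrying a boolean
    'primarily_english' flag; that schema is an illustration only.
    """
    english = [t for t in tdt4_topics if t["primarily_english"]]
    multi = [t for t in tdt4_topics if not t["primarily_english"]]
    rng = random.Random(seed)
    return rng.sample(english, n // 2) + rng.sample(multi, n - n // 2)
```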

  4. Unsupervised Tracking Ideas • Models • Vector Space • Relevance Models • Adaptation • Native Language comparisons

  5. Unsupervised Tracking Models • Vector Space • TF-IDF • IDF is incremental (sketched below) • Relevance Models • State-of-the-art, high-performance system • Adaptation
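
A minimal sketch of a vector-space tracker whose IDF statistics grow with the story stream; the class, the smoothing constants, and the threshold are assumptions for illustration, not the UMass implementation.

```python
import math
from collections import Counter

class IncrementalTfIdfTracker:
    """TF-IDF tracking with incrementally updated IDF (hypothetical)."""

    def __init__(self, topic_terms, threshold=0.2):
        self.topic = Counter(topic_terms)  # terms of the on-topic training story
        self.df = Counter()                # document frequency per term so far
        self.n_docs = 0
        self.threshold = threshold         # assumed YES/NO cutoff

    def _idf(self, term):
        # Smoothed IDF over only the documents seen so far
        return math.log((self.n_docs + 1) / (self.df[term] + 0.5))

    def _weights(self, counts):
        return {t: tf * self._idf(t) for t, tf in counts.items()}

    def score(self, story_terms):
        # Update the incremental collection statistics first
        self.n_docs += 1
        for term in set(story_terms):
            self.df[term] += 1
        # Cosine similarity between the topic and story TF-IDF vectors
        q = self._weights(self.topic)
        d = self._weights(Counter(story_terms))
        dot = sum(w * d.get(t, 0.0) for t, w in q.items())
        nq = math.sqrt(sum(w * w for w in q.values()))
        nd = math.sqrt(sum(w * w for w in d.values()))
        return dot / (nq * nd) if nq and nd else 0.0
```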

  6. Native Language Hypothesis • TDT tasks involve comparisons of models: • Story link detection: sim(Si, Sj) • Topic tracking: sim(Si, Tj) • It is more effective to measure similarity between models in the original language of the stories than after machine translation into English • Quality of translation • Differences in score distributions • Trivially obvious? Hard to demonstrate in tracking (a scoring sketch follows)
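
One way to read the hypothesis in code: score a story against a per-language topic model when one exists for the story's source language, and fall back to the machine-translated English view otherwise. The `cosine` helper and the data layout are assumptions, not the actual SIGIR 2004 system.

```python
import math
from collections import Counter

def cosine(a, b):
    dot = sum(v * b.get(k, 0) for k, v in a.items())
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def native_track_score(story, topic_models):
    """story: dict with 'lang', 'native_terms', 'english_terms' (assumed schema).
    topic_models: per-language term Counters, e.g. {'en': ..., 'ar': ...}."""
    if story["lang"] in topic_models:
        # Compare in the original language of the story
        return cosine(Counter(story["native_terms"]),
                      topic_models[story["lang"]])
    # Fall back: machine-translated English against the English topic model
    return cosine(Counter(story["english_terms"]), topic_models["en"])
```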

  7. Topic tracking with Native Models [SIGIR 2004]

  8. Unsupervised Tracking Results (training set: nwt+TDT4)

  9. Submitted Runs • TF-IDF (UMASS4) • TF-IDF + adaptation (UMASS1) • TF-IDF + adaptation + native models (UMASS2) • Relevance Models + adaptation (UMASS5) • All submissions are for the primary evaluation condition (adaptation is sketched below)
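
Several of these runs add unsupervised adaptation. A minimal sketch, assuming the tracker interface from the earlier TF-IDF example: when a story scores above a separate, stricter adaptation threshold, its terms are folded into the topic model so the topic can follow the event over time. The threshold value is illustrative, not a tuned parameter from the submissions.

```python
ADAPT_THRESHOLD = 0.35  # assumed; stricter than the YES/NO decision threshold

def track_with_adaptation(tracker, stream):
    """Yield (score, decision) per story, adapting the topic model on
    high-confidence matches. No human judgments are used here: this is
    the unsupervised setting."""
    for story_terms in stream:
        score = tracker.score(story_terms)
        on_topic = score >= tracker.threshold   # YES/NO tracking decision
        if score >= ADAPT_THRESHOLD:
            tracker.topic.update(story_terms)   # fold the story into the topic
        yield score, on_topic
```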

  10. Unsupervised Tracking Results

  11. Supervised Tracking • Creating a newswire-only training corpus • Ideas • Models • Vector Space • Relevance Models • Native Language comparisons • Incremental Thresholds • Negative Feedback

  12. Incremental Thresholds • Utility • Relevance judgments are returned for both hits and false alarms • Increment the YES/NO threshold by a fixed step when utility falls below zero (sketched below)
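
A sketch of that rule, with an assumed running utility of 2·hits − false alarms and an assumed step size, since the exact values are not given above; both are placeholders.

```python
THRESHOLD_STEP = 0.05  # assumed increment

class IncrementalThreshold:
    """Raise the YES/NO threshold whenever running utility goes negative."""

    def __init__(self, start=0.2):
        self.threshold = start
        self.utility = 0

    def decide(self, score):
        return score >= self.threshold

    def feedback(self, relevant):
        # Judgments arrive only for stories the system said YES to
        self.utility += 2 if relevant else -1
        if self.utility < 0:
            self.threshold += THRESHOLD_STEP  # become more conservative
            self.utility = 0                  # restart the tally (assumed)
```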

  13. Negative Feedback • Relevance judgments are returned for both hits and false alarms • Positive model update for a hit • Negative model update for a false alarm (see the sketch below)
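
One standard way to realize positive and negative feedback is a Rocchio-style update of the topic's term weights; the `ALPHA`/`BETA` weights below are assumptions, not the formulas from the talk.

```python
from collections import Counter

ALPHA, BETA = 0.5, 0.25  # assumed feedback weights

def apply_feedback(topic, story_terms, relevant):
    """Add the judged story's terms to the topic model for a hit,
    subtract them for a false alarm, flooring weights at zero.
    topic: dict mapping term -> weight."""
    story = Counter(story_terms)
    sign = ALPHA if relevant else -BETA
    for term, tf in story.items():
        topic[term] = max(0.0, topic.get(term, 0.0) + sign * tf)
```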

  14. From Unsupervised to Supervised

  15. Native Language Comparisons

  16. Submitted Runs • Rel. Models (UMASS-2), optimized for TDT cost • Rel. Models + inc. thresholds (UMASS-1) • TF-IDF + adaptation + neg. feedback + inc. thresholds (UMASS-3) • TF-IDF + adaptation + native models (UMASS-4) • TF-IDF + adaptation + native models + neg. feedback + inc. thresholds (UMASS-7), optimized for T11SU

  17. Supervised Tracking Results • Cost: 0.0467

  18. Results and Discussion • Supervision clearly helps • Relevance models are a clear winner • Negative feedback helps • The training set did not reflect the test set very well • Min-cost versus T11SU optimization

  19. Future Work • Exploration/exploitation trade-off • What about feedback that is less on-demand? • More realistic • Can add costs for judgments • What about feedback like the HARD task's clarification forms?
