
Experiments for the CL-SR task at CLEF 2006





Presentation Transcript


  1. Experiments for the CL-SR task at CLEF 2006
  Muath Alzghool and Diana Inkpen
  University of Ottawa, Canada
  Track: Cross-Language Speech Retrieval (CL-SR)

  2. Experiments
  • Results for submitted runs - English collection
  • Results for submitted runs - Czech collection
  • Segmentation issues, evaluation score
  • Results for different systems: SMART, Terrier
  • Query expansion
  • Log-likelihood collocation scores (a sketch follows this slide)
  • Terrier: divergence from randomness
  • Small improvements
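The slides mention log-likelihood collocation scores for query expansion but do not give the formula. Below is a minimal sketch, assuming Dunning's log-likelihood ratio over bigram counts; the function and the example counts are illustrative, not the authors' actual code.

    import math

    def ll_score(c12, c1, c2, n):
        """Dunning log-likelihood ratio for a candidate collocation (w1, w2).
        c12: bigram count, c1/c2: unigram counts, n: total bigram count."""
        def ll(k, m, p):
            # binomial log-likelihood of k successes in m trials,
            # with p clamped away from 0 and 1 to avoid log(0)
            p = min(max(p, 1e-12), 1 - 1e-12)
            return k * math.log(p) + (m - k) * math.log(1 - p)

        p = c2 / n                    # P(w2) under independence
        p1 = c12 / c1                 # P(w2 | w1)
        p2 = (c2 - c12) / (n - c1)    # P(w2 | not w1)
        return 2 * (ll(c12, c1, p1) + ll(c2 - c12, n - c1, p2)
                    - ll(c12, c1, p) - ll(c2 - c12, n - c1, p))

    # Rank candidate expansion terms by association with a topic word;
    # all counts here are made up for illustration.
    print(ll_score(c12=30, c1=500, c2=1000, n=100000))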

  3. Results for the submitted runs for the English collection

  4. Results for the submitted runs for the Czech collection

  5. MAP scores for Terrier and SMART, with or without relevance feedback, for English topics

  6. Experiments
  • Various ASR transcripts (2003, 2004, 2006)
  • New ASR 2006 transcripts do not help
  • Combinations do not help
  • Automatic keywords help
  • Cross-language
    • Results good for French-to-English topic translations
    • Not for Spanish, German, or Czech
  • Manual summaries and manual keywords
    • Best results

  7. MAP scores for Terrier, with various ASR transcript combinations

  8. MAP scores for SMART, with various ASR transcript combinations

  9. Results of the cross-language experiments
  • Indexed fields: ASRTEXT2004 and autokeywords
  • Using SMART with the lnn.ntn weighting scheme (sketched below)
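In SMART's standard triple notation, lnn.ntn reads as: documents weighted with lnn (1 + ln tf, no idf, no length normalization) and queries with ntn (raw tf times idf, no normalization). A minimal scoring sketch under that reading; the token lists, df table, and collection size are illustrative.

    import math
    from collections import Counter

    def lnn(tf):
        # document term weight: 1 + ln(tf); no idf, no length normalization
        return 1.0 + math.log(tf)

    def ntn(tf, df, n_docs):
        # query term weight: raw tf times idf = ln(N / df); no normalization
        return tf * math.log(n_docs / df)

    def lnn_ntn_score(doc_tokens, query_tokens, df, n_docs):
        d, q = Counter(doc_tokens), Counter(query_tokens)
        # retrieval score is the inner product of the two weighted vectors
        return sum(lnn(d[t]) * ntn(q[t], df[t], n_docs)
                   for t in q if t in d and t in df)

    # Illustrative document-frequency table and segment
    df = {"survivor": 120, "camp": 300}
    print(lnn_ntn_score(["survivor", "camp", "camp"],
                        ["survivor", "camp"], df, n_docs=10000))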

  10. Results of indexing the manual keywords and summaries, using SMART with the lnn.ntn weighting scheme and Terrier with In(exp)C2 (a configuration sketch follows)
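The slides do not show how Terrier was configured. As a rough modern illustration (not the 2006 setup), the same In(exp)C2 divergence-from-randomness model can be selected by name in PyTerrier; the index path and query text below are placeholders.

    import pyterrier as pt

    pt.init()
    # Load a previously built Terrier index (path is a placeholder).
    index = pt.IndexFactory.of("./terrier_index/data.properties")
    # Select the In(exp)C2 divergence-from-randomness weighting model.
    retr = pt.BatchRetrieve(index, wmodel="In_expC2")
    print(retr.search("child survivors eyewitness accounts").head())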

  11. Conclusion and future work
  • Low retrieval results, except when using manual summaries and keywords
  • Future work
    • Filter out potential speech recognition errors: semantic outliers whose PMI scores with neighboring words (computed over a large Web corpus) are low (toy sketch below)
    • Index using speech lattices
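The PMI filter is only sketched on the slide. The toy code below illustrates the idea, flagging words whose average PMI with nearby words falls below a threshold; all counts, the two-word window, and the threshold are hypothetical stand-ins for statistics from a large Web corpus.

    import math

    def pmi(c_xy, c_x, c_y, n):
        # pointwise mutual information: ln( P(x,y) / (P(x) * P(y)) )
        return math.log((c_xy * n) / (c_x * c_y))

    def flag_outliers(words, counts, cooc, n, threshold=0.0):
        """Flag words whose average PMI with neighbouring words is low:
        likely ASR errors, per the future-work idea above."""
        flagged = []
        for i, w in enumerate(words):
            neighbours = words[max(0, i - 2):i] + words[i + 1:i + 3]
            scores = [pmi(cooc.get((w, v), cooc.get((v, w), 1)),
                          counts[w], counts[v], n)
                      for v in neighbours]
            if scores and sum(scores) / len(scores) < threshold:
                flagged.append(w)
        return flagged

    # Toy counts: "zebra" is unrelated to its context, so its PMI is low.
    words = ["survivor", "camp", "zebra", "liberation"]
    counts = {"survivor": 20000, "camp": 30000,
              "liberation": 15000, "zebra": 500}
    cooc = {("survivor", "camp"): 5000, ("camp", "liberation"): 3000}
    print(flag_outliers(words, counts, cooc, n=10_000_000))  # ['zebra']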
