
Automatic indexing


Presentation Transcript


  1. Automatic indexing Salton: When the assignment of content identifiers is carried out with the aid of modern computing equipment, the operation becomes automatic indexing.

  2. Approaches and Methods • Initial approach • Create an inverted file • On-the-fly (natural language processing) • Methods • All words, remove stop words • Word frequencies (Wilson’s objective method of determining aboutness) • More sophisticated IR methods • Semantic/linguistic analysis, co-occurrence/similarity measures, etc.
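
As a hedged illustration of the simplest method listed above ("all words, remove stop words, count word frequencies"), the sketch below tokenizes a document, drops a small stop-word list, and counts term frequencies. The stop-word list and sample sentence are assumptions for illustration, not part of the presentation.

```python
# Minimal sketch: keep all words, remove stop words, count term frequencies.
# The stop-word list and tokenizer are illustrative assumptions.
from collections import Counter
import re

STOP_WORDS = {"the", "a", "an", "of", "and", "in", "to", "is", "for", "with"}

def index_terms(text):
    """Tokenize, drop stop words, and count term frequencies."""
    tokens = re.findall(r"[a-z0-9]+", text.lower())
    terms = [t for t in tokens if t not in STOP_WORDS]
    return Counter(terms)

doc = "Automatic indexing is the assignment of content identifiers with the aid of computing equipment."
print(index_terms(doc).most_common(5))
```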

  3. Basic arrangement of automatic indexes • Inverted file: contains all the index terms automatically drawn from the document records according to the indexing technique used. • Each entry records the position of a term: • Record number • Field number • Number of occurrences • Position in the field (digits 45-57)
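
A minimal sketch of how such an inverted file might be populated is shown below; the posting layout (record number, field, occurrence count, word positions) follows the slide's description, while the sample records and field names are invented for illustration.

```python
# Hedged sketch of an inverted file whose postings record, for each term,
# the record number, field, occurrence count, and word positions in the field.
# The records dict and field names are invented for illustration.
from collections import defaultdict

records = {
    1: {"title": "automatic indexing methods", "abstract": "indexing with stop word removal"},
    2: {"title": "inverted file access", "abstract": "binary search and hashing for index terms"},
}

inverted = defaultdict(list)  # term -> list of postings
for rec_no, fields in records.items():
    for field, text in fields.items():
        positions = defaultdict(list)
        for pos, term in enumerate(text.split()):
            positions[term].append(pos)
        for term, pos_list in positions.items():
            inverted[term].append(
                {"record": rec_no, "field": field,
                 "occurrences": len(pos_list), "positions": pos_list}
            )

print(inverted["indexing"])
```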

  4. Access of Inverted Files • Sequential access - alphabetical ordering • Binary chain access - binary search tree • Hashing
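
The three access methods named on this slide can be contrasted in a short sketch, assuming a small alphabetically sorted term list; the terms themselves are made up.

```python
# Illustrative comparison of the three access methods: sequential scan,
# binary search over the sorted term list, and hashing.
from bisect import bisect_left

terms_sorted = ["access", "binary", "file", "hashing", "index", "inverted", "term"]

def sequential_lookup(term):
    # Sequential access: scan the alphabetically ordered file, O(n).
    for t in terms_sorted:
        if t == term:
            return True
    return False

def binary_lookup(term):
    # Binary search over the sorted term list, O(log n).
    i = bisect_left(terms_sorted, term)
    return i < len(terms_sorted) and terms_sorted[i] == term

hash_table = set(terms_sorted)  # Hashing: average O(1) membership test.

print(sequential_lookup("index"), binary_lookup("index"), "index" in hash_table)
```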

  5. Pros and Cons of Automatic Indexing • Pros • Consistency • Cost reduction • Time reduction • Cons / limitations • Lacks human intellectual judgment • Weak handling of term relationships • Can be misleading in retrieval • Good algorithms exist, but they are generally domain-specific

  6. Natural language vs. Controlled Vocabulary • Natural language continuum: basic keyword <------------ IR ------------> full NLP

  7. Natural language vs. Controlled Vocabulary • Pros and cons compared along several dimensions: • Production cost • Cost to the end-user • Specificity in terms of access • Exhaustivity of indexing • Handling of errors

  8. What is Automatic Classification? • Automatic manipulation of a document’s contents to support logical grouping with other similar documents for organization and/or retrieval activities. Can include the assignment or manipulation of classification notation.
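
One common way such grouping can be implemented, offered here only as an illustrative sketch rather than the method the slides describe, is to represent documents as term-frequency vectors and assign each new document to its most similar labelled example; the two tiny "classes" below are invented.

```python
# Sketch: classify a document by cosine similarity of term-frequency vectors
# against a tiny, invented set of labelled examples (nearest-neighbour style).
from collections import Counter
import math

def vec(text):
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm

labelled = {
    "indexing": vec("inverted file index term stop word frequency"),
    "classification": vec("class group notation dewey decimal category"),
}

def classify(text):
    v = vec(text)
    return max(labelled, key=lambda label: cosine(v, labelled[label]))

print(classify("assigning dewey notation to group similar documents"))
```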

  9. Why Automatic Classification? • Classification is time consuming and expensive • Knowledge structuring • Too much information • Status of automatic classification • Fairly experimental, although not completely… • Operational systems for e-mail • Web retrieval harvesting METADATA (semi-automatic)

  10. Automatic Classification on the Web • Automatic Classification of Web resources using Java and Dewey Decimal Classification http://www.scit.wlv.ac.uk/~ex1253/classifier/ • SOSIG: Associated Research and Development http://www.sosig.ac.uk/about_us/research.html

  11. Why Automatic Classification? • Your articles… • How was it defined? • What was the purpose? • How was the automatic classification done or discussed? • What was the outcome?

  12. Recall: the number of relevant documents retrieved out of all the relevant documents in the system [quantity: did you get it all?] • Precision: the percentage of retrieved documents that were relevant [quality of what you found]
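
A small worked example of these two definitions, with invented document IDs:

```python
# Worked example of recall and precision; the document IDs are invented.
relevant = {"d1", "d2", "d3", "d4", "d5"}      # all relevant documents in the system
retrieved = {"d1", "d2", "d6", "d7"}           # what the search returned

precision = len(relevant & retrieved) / len(retrieved)   # quality: 2/4 = 0.50
recall    = len(relevant & retrieved) / len(relevant)    # quantity: 2/5 = 0.40
print(f"precision={precision:.2f} recall={recall:.2f}")
```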

  13. Tradeoff between Recall and Precision • We can easily recall everything that matches a particular text string or pattern; however, we cannot search through all the matching results (there are too many). • We can do a reasonable job of limiting results to the most relevant, but as we “tune” the results to be more relevant, we leave out more and more matching results.
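
The tradeoff can be seen in a toy ranking example: raising the score cutoff ("tuning" for relevance) tends to raise precision while lowering recall. The scores and relevance labels below are invented.

```python
# Sketch of the precision/recall tradeoff as a score cutoff is raised.
ranked = [("d1", 0.9, True), ("d2", 0.8, True), ("d3", 0.7, False),
          ("d4", 0.6, True), ("d5", 0.4, False), ("d6", 0.2, True)]
total_relevant = sum(1 for _, _, rel in ranked if rel)

for cutoff in (0.0, 0.5, 0.75):
    kept = [(doc, rel) for doc, score, rel in ranked if score >= cutoff]
    hits = sum(1 for _, rel in kept if rel)
    print(f"cutoff={cutoff:.2f} precision={hits/len(kept):.2f} recall={hits/total_relevant:.2f}")
```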

  14. Major Issues • Information is mostly online • Information is increasingly available in full text (full content) • There is an explosion in the amount of information being produced • So much so that even in fields like medical literature, where there are major efforts like NLM Medline to index content, we cannot keep up.

  15. What this means • Need ways to index without requiring paid experts • Automatic indexing, classification, keyword extraction, and even relationship and fact extraction • Need to take advantage of experts who are reading the materials to comment on them and provide rankings, summaries, keywords, and “factoids” (like Amazon)

  16. Future Search • Full-text searching of content, of associated annotations on content, and of metadata (including reader rankings, tags, etc.), like Connotea, NeoNote, etc. • Facet-based searching (Endeca, e.g. Home Depot, NCSU library) • Cluster-based searching (Clusty)

  17. Study on gene name searching • Looks at full text searching • Tradeoff between precision and recall • (Hemminger 2007).

  18. Article Discovery Study

  19. Article Review Study • Two literature cohorts: • Schizophrenia (Pat Sullivan) • Arabidopsis (Todd Vision) • Each cohort had three readers • Readers were asked to “review the article and judge its relevance to them as someone new to the gene in this biological setting, trying to build an understanding of the state of knowledge in that research area.”

  20. Metadata Articles More Valuable • In both cases and for all observers, the mean quality rating values were lower (more useful) for the metadata-discovered articles. There were statistically significant differences between the mean quality ratings for the metadata-discovered articles versus the full-text-discovered articles for both the Arabidopsis and Schizophrenia sets at the p < 0.05 level.

  21. Precision and Recall

  22. Article Features that correlate with Value: Number of Hits • The number of hits, or matches of the search term within the returned document, is a commonly used feature for ranking returned articles. To test the value of this feature, the number of hits was correlated with the mean quality ranking for each article (averaged across all observers). The results clearly show a relationship: articles with many matches of the search term tend to be much more highly valued.

  23. Improving Relevance for Metadata Searching • Repeating the calculations on the schizophrenia and Arabidopsis article review sets, but limiting them to matches with high hit counts (schizophrenia ≥ 20 hits and Arabidopsis ≥ 15 hits), shows that precision for full text is now the same as (100% in Arabidopsis) or slightly better than that of the metadata-retrieved articles (95% versus 94.4% in schizophrenia). However, the number of additional cases discovered by full-text searching is now only slightly higher, with 5% more cases found in schizophrenia and 28% more in Arabidopsis.
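
A hedged sketch of this hit-count filtering idea is shown below; the articles, hit counts, and threshold are invented for illustration and are not the study's data.

```python
# Sketch: keep only full-text matches whose hit count meets a per-corpus
# threshold, then compute precision over the filtered set.
articles = [
    {"id": "a1", "hits": 32, "relevant": True},
    {"id": "a2", "hits": 18, "relevant": False},
    {"id": "a3", "hits": 25, "relevant": True},
    {"id": "a4", "hits": 4,  "relevant": False},
]

def filtered_precision(articles, min_hits):
    kept = [a for a in articles if a["hits"] >= min_hits]
    return sum(a["relevant"] for a in kept) / len(kept) if kept else 0.0

print(filtered_precision(articles, min_hits=20))  # e.g. a schizophrenia-style cutoff
```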

  24. Conclusions • This suggests that rather than accepting metadata searching as a surrogate for full-text searching, it may be time to make the transition to direct full-text searching as the standard. This could be accomplished by using certain features of the full-text article, such as the number of hits of the search string or whether the search string is found in the metadata (i.e., our current metadata search), as filters that increase the precision of our results and put the user in control of the filtering.
