Explore the combination of controlled and social tagging for resource discovery, investigating the impact of an established vocabulary on the effectiveness, consistency, and relevance of social tagging for improved retrieval, tested in two contexts.
An evaluation of enhancing social tagging with a knowledge organization system
Brian Matthews, STFC
K. Golub, C. Jones, J. Moon, M. L. Nielsen, B. Puzoń, D. Tudhope
Science and Technology Facilities Council
• Provides large-scale scientific facilities for UK science
  • particularly in physics and astronomy
• E-Science Centre – at RAL and DL
  • provides advanced IT development and services to the STFC Science Programme
STFC e-Science and KOS
• Remit includes:
  • library and institutional publication repository
  • management of scientific data
  • data interoperability
  • digital preservation – keeping the results alive and available
• Requires semantically rich metadata
• Research into the Semantic Web
SKOS (a minimal example sketch follows this slide)
• Involved early on in the SKOS activity in the W3C
  • in the SWAD-Europe project
  • then led by Alistair Miles, STFC
• Through the Recommendation process
  • now at Proposed Recommendation stage (15/06/09)
  • out to vote – last chance to comment
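To make the SKOS mention above concrete, here is a minimal sketch, assuming rdflib is available, of building and serialising a tiny two-concept vocabulary; the example concepts and URIs are invented for illustration and are not drawn from any STFC or W3C vocabulary.

```python
# Minimal illustrative SKOS graph; concepts and URIs are invented.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, SKOS

EX = Namespace("http://example.org/vocab/")  # hypothetical namespace
g = Graph()
g.bind("skos", SKOS)

# A broader/narrower pair, the core thesaurus relation that SKOS encodes.
g.add((EX.controlledVocabularies, RDF.type, SKOS.Concept))
g.add((EX.controlledVocabularies, SKOS.prefLabel,
       Literal("Controlled vocabularies", lang="en")))
g.add((EX.thesauri, RDF.type, SKOS.Concept))
g.add((EX.thesauri, SKOS.prefLabel, Literal("Thesauri", lang="en")))
g.add((EX.thesauri, SKOS.broader, EX.controlledVocabularies))
g.add((EX.controlledVocabularies, SKOS.narrower, EX.thesauri))

print(g.serialize(format="turtle"))
```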
EnTag Project: Enhanced Tagging for Discovery
• JISC-funded project
• Partners:
  • UKOLN – Koraljka Golub
  • University of Glamorgan – Doug Tudhope, Jim Moon
  • STFC – Brian Matthews, Cathy Jones, Bartek Puzoń
  • Intute
• Non-funded partners:
  • OCLC Office of Research, USA
  • Danish Royal School of Library and Information Science – Marianne Lykke Nielsen
• Period: 1 Sep 2007 – 30 Sep 2008
• http://www.ukoln.ac.uk/projects/enhanced-tagging/
Controlled Vocabulary
• The traditional way of providing subject classification:
  • for shelf-marking
  • for searching
  • for association of resources
• Different types used, such as:
  • subject classification
  • keyword lists
  • thesaurus
HASSET
• UK Data Archive, University of Essex
• Humanities and Social Science Electronic Thesaurus
• Several thousand terms
• Structure based on British Standard 5723:1987 / ISO 2788:1986 (establishment and development of monolingual thesauri)
• Preferred terms, broader-narrower relations, associated terms
• http://www.data-archive.ac.uk/search/hassetSearch.asp
Observations on Controlled Vocabularies
• Precise classification of resources
  • good for precision and recall
• Can exploit the hierarchy to modify a query
  • using the broader/narrower/related terms (see the sketch below)
• Expensive
  • requires investment in specialist expertise to devise the vocabulary
  • requires investment in specialist expertise to classify resources
• Hard to maintain currency
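As a concrete illustration of that query modification, here is a minimal sketch of hierarchy-based query expansion; the toy thesaurus entries and the expand_query helper are hypothetical and not taken from HASSET or the EnTag demonstrators.

```python
# Toy thesaurus and query expansion; terms and structure are invented.
THESAURUS = {
    "elections": {
        "narrower": ["general elections", "local elections"],
        "related": ["voting behaviour"],
    },
}

def expand_query(term, include_related=False):
    """Return the term plus its narrower (and optionally related) terms."""
    entry = THESAURUS.get(term, {})
    expanded = [term] + entry.get("narrower", [])
    if include_related:
        expanded += entry.get("related", [])
    return expanded

print(expand_query("elections"))
# ['elections', 'general elections', 'local elections']
print(expand_query("elections", include_related=True))
# ['elections', 'general elections', 'local elections', 'voting behaviour']
```

A real system would walk the full broader/narrower graph rather than a single level, but the principle is the same.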
Social Tagging
• The Web 2.0 way of providing search terms
• People "tag" resources with free-text terms of their own choosing
• Tags are used to associate resources together
  • e.g. del.icio.us, flickr
• "Folksonomy": the terms a community chooses to use to tag its resources
Observations on Social Tagging
• People often use the same tags or keywords (e.g. preservation, digital library)
  • this makes things that mean the same to people easier to find
• A cheap way of getting a very large number of resources classified
• Represents the "community consensus" in some sense
  • "the wisdom of crowds"
• Has currency as people update
• Tag clouds of popular tags
• However, people often use similar but not the same tags
  • e.g. Semantic Web, SemanticWeb, SemWeb, SWeb (see the normalisation sketch below)
• People make mistakes in tags
  • misspellings, using spaces incorrectly
• Some tags are more specific than others – a tendency towards the shallow?
  • e.g. controlled vocabulary, thesaurus, HASSET
• Personal meanings
  • mine, favourite, useful
• People often associate the same words with particular ideas
  • these associations are captured in clusters
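The near-duplicate tags listed above are the kind of variation that simple normalisation can reduce. The sketch below is an illustrative assumption about what such normalisation might look like (splitting camel case, lower-casing, collapsing separators); it is not part of the EnTag demonstrators.

```python
# Illustrative tag normalisation; the rules here are assumptions.
import re

def normalise_tag(tag: str) -> str:
    """Split camel case, then lower-case and collapse separators."""
    tag = re.sub(r"(?<=[a-z])(?=[A-Z])", " ", tag)  # "SemanticWeb" -> "Semantic Web"
    return re.sub(r"[\s_\-]+", " ", tag).strip().lower()

for raw in ("Semantic Web", "SemanticWeb", "semantic-web", "semantic_web"):
    print(raw, "->", normalise_tag(raw))
# All four collapse to "semantic web"; abbreviations such as "SemWeb"
# would still need an explicit mapping or a controlled vocabulary lookup.
```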
EnTag Purpose
Investigate the combination of controlled and social tagging approaches to support resource discovery in repositories and digital collections.
Aim: to investigate
• whether use of an established controlled vocabulary can help move social tagging beyond personal bookmarking to aid resource discovery
To improve tagging:
• relevance of tags
• consistency
• efficiency
To improve retrieval:
• effectiveness (degree of match between user and system terminology)
In two different contexts:
• tagging by readers
• tagging by authors
Testing Approach
Main focus:
• free tagging with no instructions
versus
• tagging using a combined system and guidance for users
Two demonstrators:
• Intute digital collection – http://www.intute.ac.uk
  • major development
  • tagging by readers
  • using a cohort of students to evaluate
• STFC repository – http://epubs.stfc.ac.uk/
  • complementary development
  • tagging by authors
  • a more qualitative approach
Intute Study
Demonstrator:
• 11,042 stripped records in Politics
• free tagging or DDC / LCSH / Relative Index
• searching, with simple and enhanced interfaces
Questions:
• choice of tag
• retrieval implications
Participants:
• 28 UK politics students with little tagging experience
• thus the subjects were searchers
Data collection:
• logging
• three questionnaires
Four tagging tasks:
• two controlled, two free
• tag 15 documents in each task
• 5 to 10 minutes per document
• open document, but with focus
• try to consider enhanced suggestions where appropriate
Paper at JCDL 09
STFC Author Study
• A study of authors of papers
  • smaller numbers – c. 10–12
  • regular depositors (> 10 papers each)
  • subject experts
• Expectation that they would want their papers accurately tagged so that they are precisely found
• A more qualitative study
Study Approach
Questions:
• Do authors appreciate the purpose and use of tags?
• Value of the controlled vocabulary – better tags?
• User interface and ease of use
Supervised sessions:
• 40-minute observed trial
• statistics logging
• task worksheet
• tagging their own papers – a number of their choice
• tag cloud, own tags, controlled vocabulary
• questions afterwards
Controlled vocabulary:
• ACM Computing Classification Scheme used
• imported in SKOS (see the sketch below)
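As a rough sketch of how a SKOS-encoded scheme such as an ACM CCS export could drive vocabulary suggestions in a tagging tool, the code below loads a SKOS/RDF file with rdflib and matches user input against preferred labels. The file name and the suggest helper are assumptions for illustration, not the EnTag implementation.

```python
# Hypothetical suggestion lookup over a SKOS file; the file name is assumed.
from rdflib import Graph
from rdflib.namespace import SKOS

g = Graph()
g.parse("acm-ccs.rdf")  # assumed local SKOS/RDF export of the scheme

def suggest(query: str):
    """Return (concept URI, label) pairs whose skos:prefLabel contains the query."""
    q = query.lower()
    return [(str(concept), str(label))
            for concept, label in g.subject_objects(SKOS.prefLabel)
            if q in str(label).lower()]

print(suggest("information retrieval"))
```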
Limitations
A number of limitations of this approach:
• small sample size
• small number of papers tagged
• inappropriate controlled vocabulary
• computing and IT specialists are already very familiar with the concept of semantic annotation
• single, observed use of the tool – not real life
Nevertheless, the results of the study were felt to be illuminating and useful.
Some Statistics
[Figures 4–6: tagging statistics]
• An average of 6 terms per item, two-thirds being free text
• Little correlation with experience
• Tagging time reduces with practice
Term Choice
• Chose terms from the bottom of the hierarchy if possible
• Often preferred an appropriate term from the thesaurus over their own
  • appreciated the better IR properties
• Would like definitions of terms to be available
• Would like automatic suggestions
• Very little use of the tag cloud
  • presentation of the cloud?
  • unfamiliarity?
  • limited population?
User Interface
• The tool was generally (though not universally) thought to be easy to use
• Some wanted it to be simpler
  • more suited to a library professional?
• Wanted more automation
• The tag cloud interface was not right
• Would be willing to use it
  • especially if a benefit in improved retrieval could be established
Preferred Style
Most depositors had a strong preference for the way they interact with the system:
• Free-text taggers
  • enter tags, don't really use the vocabulary
• Thesaurus browsers
  • systematically browse the controlled vocabulary
• Thesaurus searchers
  • use the vocabulary search tool for preference
  • only enter a free-text term when they can't find an appropriate one
We speculate that there would also be those who prefer to start from the tag cloud.
This contrasts with the Intute study.
ACM Computing Classification Scheme
• General recognition of this scheme
  • used in journals to classify papers
• This meant there was acceptance of its authority and a willingness to use it
• A feeling that it was abstract and academic
• A feeling that it was not up to date and had much missing
Comparison of Intute and STFC Results
Different user groups and approaches to the studies, but similarities between the Intute and STFC users could be identified:
• users appreciated the benefits of consistency and vocabulary control
• users were willing to engage with the tagging system
• support for automated suggestions
• the appropriateness of the controlled vocabulary is important
• the tag cloud was hard to use effectively
• the user interface and interaction are important
Observations
Users are willing to add tags using a controlled vocabulary in conjunction with free text:
• by and large they understand why it is useful
  • good search terms = good retrieval
• but they need help
  • automation, suggestions, good interfaces
  • support for different styles of interaction
  • to produce "better" tags (?)
• flexible and targeted controlled vocabularies are also needed
• "Web 2.0" features need to be thought through very carefully
  • tag clouds were not a success
  • they need much better structuring and presentation
• Integrated interaction between tag clouds and structured vocabularies needs further investigation
  • develop a flexible, user-focused vocabulary from tags: a "structured folksonomy"
Conclusions
• Controlled vocabulary and tags complement each other
• Controlled vocabulary suggestions are valued if appropriate
• Future work:
  • qualitative analysis
  • enhancements: controlled vocabulary, auto-suggestions, interface
  • motivation for tagging – would users tag (with enhancements) in "real life"?
Questions? brian.matthews@stfc.ac.uk