
Assessing Digital Output in New Ways

Mike Taylor, Research Specialist (http://orcid.org/0000-0002-8534-5985, mi.taylor@elsevier.com)


Presentation Transcript


  1. Assessing Digital Output in New Ways Mike Taylor, Research Specialist, http://orcid.org/0000-0002-8534-5985, mi.taylor@elsevier.com

  2. Looking at emerging alternative metrics for measuring author impact and usage, this presentation focuses on methods for capturing more granular data around researchers and topics, including new assessment tools and usage data sets, and on how these change the way author contributions are understood.

  3. Some words on terms • Alternative metrics, altmetrics, article data, usage data, assessments, metrics, impact, understanding, attention, reach… • If this seems confusing… • Altmetrics is at the big bang stage – this universe has not yet cooled down and coalesced

  4. What is the data? • A set of altmetric data is about a common document and represents usage, recommendations, shares and re-use • Identified by DOI, URL, ID • It does not show common intent: a tweet is not the same as a Mendeley share is not the same as a Data Dryad data download is not the same as mass media coverage or a blog • Although I talk about journal and article data, this data can be derived from any digital output • Books, conference presentations, policy papers, patents
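
To make slide 4 concrete, here is a minimal sketch of how a single altmetric event might be represented. The field names and the example DOI are illustrative assumptions, not any provider's schema.

```python
from dataclasses import dataclass

@dataclass
class AltmetricEvent:
    """One recorded interaction with a digital output (article, book, dataset, ...)."""
    target_id: str   # DOI, URL, or other identifier of the output
    id_type: str     # "doi", "url", "pmid", ...
    source: str      # e.g. "twitter", "mendeley", "dryad", "news", "blog"
    event_type: str  # "tweet", "save", "download", "coverage", "post"
    timestamp: str   # ISO 8601 date of the event

# A tweet and a Mendeley save refer to the same document (same identifier)
# but do not share intent, so they stay as separate events.
events = [
    AltmetricEvent("10.1000/example.doi", "doi", "twitter", "tweet", "2014-06-01"),
    AltmetricEvent("10.1000/example.doi", "doi", "mendeley", "save", "2014-06-02"),
]
```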

  5. What are metrics? Metrics are an interpretive layer derived from this data • Usage • Attention • Engagement • Scholarly impact • Social impact
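
As a rough illustration of that interpretive layer, the sketch below rolls raw event sources up into the five categories named on slide 5. The source-to-category mapping is my own assumption for the example, not a standard.

```python
from collections import Counter

# Assumed mapping from raw sources to interpretive categories (illustrative only).
CATEGORY = {
    "download": "usage",
    "twitter": "attention",
    "facebook": "attention",
    "mendeley": "engagement",
    "citation": "scholarly impact",
    "news": "social impact",
}

def metrics_from_sources(sources):
    """Count events per interpretive category rather than per raw source."""
    return Counter(CATEGORY.get(s, "other") for s in sources)

print(metrics_from_sources(["twitter", "mendeley", "citation", "twitter"]))
# Counter({'attention': 2, 'engagement': 1, 'scholarly impact': 1})
```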

  6. Various providers… • Altmetric.com • Plum Analytics • PLOS / PLOS code • GrowKudos • Impactstory.org • Altmetrics is not Altmetric.com • Each has strengths and weaknesses; there is no canonical source
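
For a sense of what "no canonical source" means in practice, here is a sketch of pulling counts for one DOI from one provider. It assumes Altmetric.com's free per-DOI v1 endpoint, which may have changed since this talk; check the provider's current documentation, and note that other providers expose different data and different APIs.

```python
import requests

def altmetric_counts(doi):
    """Fetch per-source counts for a DOI from Altmetric.com (assumed v1 endpoint)."""
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:
        return None  # no attention recorded for this DOI
    resp.raise_for_status()
    return resp.json()  # JSON with per-source counts and a composite score

counts = altmetric_counts("10.1000/example.doi")  # substitute a real DOI
```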

  7. Bringing together sources… • Altmetrics isn’t one thing, so attempting to express it as one thing will fail. • Elsevier (and others) favour intelligent clusters of data: social activity, mass media, scholarly activity, scholarly comment, re-use • Elsevier believes that more research is needed, and that the best indicators for scholarly impact are scholarly activity and scholarly comment

  8. Different data have different characteristics. Example from 13,500 papers: • Highly tweeted stories focus on policy, gender, funding and ‘contentious science’ issues, mostly summaries on Nature News • Highly shared papers in Mendeley are hard-core original research • Different platforms have a discipline bias • Scholarly blogs both lead interest and respond • Data from Altmetric.com

  9. The importance of openness • Communities have to agree to agree • Innovation and co-operation • When deriving metrics from data, there needs to be broad consensus that what we say we’re measuring is what’s being measured • We need to reflect and adapt

  10. Gaming / cheating • If people take this data seriously, will they cheat? • E.g., the Brazilian citation scandal, and strategies used to increase the impact factor (IF) of journals • Expertise in detecting fraudulent downloads (e.g., SSRN), self-tweeting – when is ‘normal’ corrupt? • It’s one thing to buy 1,000 tweets, another to buy 10 blogs or mass media coverage • Do those Twitter accounts have scholarly followers? • Pattern analysis, usage analysis, network analysis • Public data = public analysis = public response
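
In the spirit of the "pattern analysis" bullet, here is a toy heuristic for flagging suspicious tweet patterns around a paper. The thresholds, field names and the very idea of reducing this to two ratios are illustrative assumptions; real detection work combines usage and network analysis.

```python
def looks_gamed(tweets, author_handles, min_distinct_ratio=0.5, max_self_ratio=0.5):
    """Flag a paper whose tweets come from very few distinct accounts,
    or mostly from its own authors (tweets: dicts with a 'handle' key)."""
    if not tweets:
        return False
    handles = [t["handle"] for t in tweets]
    distinct_ratio = len(set(handles)) / len(handles)
    self_ratio = sum(h in author_handles for h in handles) / len(handles)
    # Many tweets from few accounts, or mostly self-tweets, deserves a closer look.
    return distinct_ratio < min_distinct_ratio or self_ratio > max_self_ratio
```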

  11. Other criticisms • The biggest criticisms come when people try to conflate all the data into a single thing • An easy point of attack – tweets are all about “sex and drugs and rock ‘n’ roll papers”* • Using clusters is more intelligible to the academic community – e.g., re-use, scholarly activity, scholarly comment (blogs, reviews, discussions) • * This isn’t true anyway

  12. Making altmetrics work • Altmetrics has got to where it is today on the basis of standards • Without ISSNs and DOIs, the world is a harder place x 1000 • Elsevier is supporting research to discover scholarly impact in areas that don’t use DOIs • (Other standards exist: PubMed IDs, arXiv IDs)
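
A small example of why those identifier standards make the world easier: normalising the DOI strings that turn up in links and reference lists. The regex is a loose pattern for illustration, not the full DOI syntax.

```python
import re

_DOI = re.compile(r"(10\.\d{4,9}/\S+)", re.IGNORECASE)

def normalise_doi(text):
    """Strip resolver prefixes such as 'https://doi.org/' or 'doi:' and return
    the bare DOI in lower case, or None if nothing DOI-like is found."""
    match = _DOI.search(text)
    return match.group(1).rstrip(".,;").lower() if match else None

normalise_doi("https://doi.org/10.1016/j.example.2014.01.001")
# -> '10.1016/j.example.2014.01.001'  (illustrative DOI)
```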

  13. Expanding views of altmetrics • Increasingly, we’re seeing altmetrics being used to describe not only articles but also institutions, journals, data and people • For institutions, Snowball Metrics has recently adopted the same formulation for grouping altmetrics as Elsevier (www.snowballmetrics.com)

  14. Making data count • More funders are insisting on open data • And the way to understand whether it’s being used … is data metrics – combining altmetrics and traditional (web-o-)metrics • Downloads, citations, shares, re-uses… • Downside: the data repository landscape is fragmented, with 600+ repositories registered at databib.org • Upside: DataCite, ODIN, ORCID, DOI, RDA, Draft Declaration of Data Citation Principles
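
A minimal sketch of "data metrics" as described above: totalling the signals for a dataset across whichever repositories report them. The record fields are assumptions for the example, not a DataCite or repository schema.

```python
def data_metrics(records):
    """records: dicts such as {"doi": ..., "downloads": 12, "citations": 1,
    "shares": 3, "re_uses": 0}, one per reporting repository."""
    totals = {"downloads": 0, "citations": 0, "shares": 0, "re_uses": 0}
    for record in records:
        for key in totals:
            totals[key] += record.get(key, 0)
    return totals
```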

  15. Measuring the effect of research on society • Governments don’t operate like scholars • Rhetoric, argument, polemics • Personal reputation is important • Laws don’t contain citations • The relationship is fuzzy – less a chain of evidence, more a miasma of influence • Elsevier is sponsoring work to understand this relationship

  16. Shaping communications • Standards are vital to altmetrics • NISO is involved in shaping the conversation around what implicit standards need to be developed • (My example) – is a retweet the same as a tweet? Do we count replies or favourites? And how about modified tweets and conversations?
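
To show why the retweet question matters, the sketch below scores Twitter activity with explicit per-event weights. The weights are placeholders for exactly the decision the slide says still needs a standard; they are not a proposal.

```python
# Placeholder weights: the open question is what these values should be.
WEIGHTS = {"tweet": 1.0, "modified_tweet": 0.75, "retweet": 0.5,
           "reply": 0.25, "favourite": 0.1}

def weighted_twitter_score(event_types):
    """event_types: e.g. ['tweet', 'retweet', 'reply']; unknown types score 0."""
    return sum(WEIGHTS.get(e, 0.0) for e in event_types)

weighted_twitter_score(["tweet", "retweet", "retweet", "favourite"])  # -> 2.1
```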

  17. NISO’s White Paper • www.niso.org • Comments requested until July 18th • End of stage 1 • Some of the observations:
     1. Develop specific definitions for alternative assessment metrics.
     2. Agree on proper usage of the term “Altmetrics,” or on using a different term.
     3. Define subcategories for alternative assessment metrics, as needed.
     4. Identify research output types that are applicable to the use of metrics.
     5. Define relationships between different research outputs and develop metrics for this aggregated model.

  18. NISO’s White Paper observations, continued:
     6. Define appropriate metrics and calculation methodologies for specific output types, such as software, datasets, or performances.
     7. Agree on main use cases for alternative assessment metrics and develop a needs-assessment based on those use cases.
     8. Develop statement about role of alternative assessment metrics in research evaluation.
     9. Identify specific scenarios for the use of altmetrics in research evaluation (e.g., research data, social impact) and what gaps exist in data collection around these scenarios.
     10. Promote and facilitate use of persistent identifiers in scholarly communications.
     11. Research issues surrounding the reproducibility of metrics across providers.
     12. Develop strategies to improve data quality through normalization of source data across providers.
     13. Explore creation of standardized APIs or download or exchange formats to facilitate data gathering.
     14. Develop strategies to increase trust, e.g., openly available data, audits, or a clearinghouse.
     15. Study potential strategies for defining and identifying systematic gaming.
     16. Identify best practices for grouping and aggregating multiple data sources.
     17. Identify best practices for grouping and aggregation by journal, author, institution, and funder.
     18. Define and promote the use of contributorship roles.
     19. Establish a context and normalization strategy over time, by discipline, country, etc.
     20. Describe how the main use cases apply to and are valuable to the different stakeholder groups.
     21. Identify best practices for identifying contributor categories (e.g., scholars vs. general public).
     22. Identify organizations to include in further discussions.
     23. Identify existing standards that need to be applied in the context of further discussions.
     24. Identify and prioritize further activities.
     25. Clarify researcher strategy (e.g., driven by researcher uptake vs. mandates by funders and institutions).

  19. Your role in improving the (altmetrics) world • Use DOIs when you communicate • Use ORCIDs • Develop, deploy and document APIs • (that use DOIs, that use ORCIDs) • Tell the world about your #altmetrics
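
A last, minimal sketch of the "develop, deploy and document APIs" point: whatever you expose, carry the persistent identifiers with the metadata. The payload shape is hypothetical; the ORCID shown is the speaker's, from the title slide.

```python
import json

def article_record(doi, orcids, title):
    """Build an API payload that keeps DOIs and ORCIDs alongside the metadata."""
    return json.dumps({
        "doi": doi,
        "authors": [{"orcid": orcid} for orcid in orcids],
        "title": title,
    })

article_record("10.1000/example.doi",          # illustrative DOI
               ["0000-0002-8534-5985"],        # ORCID from the title slide
               "Assessing Digital Output in New Ways")
```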
