
Bibliometrics and science evaluation


Presentation Transcript


  1. Bibliometrics and science evaluation. Olle Persson, Inforsk, Dept of Sociology, Umeå University. Roadshow 2006

  2. General background

  3. 1610 Anagram. Galileo was continuing his observations using the telescope. He did not publish his results in book form, but in August and December 1610 Kepler received information about discoveries concerning the possibility of moons around Saturn and the phases of Venus, respectively. The message about Saturn read: "smaismrmilmepoetaleumibunenugttauiras" = "Altissimum planetam tergeminum observavi" ["I have observed the highest of the planets three-formed"]

  4. Intellectual Property Rights

  5. 1665 The scholarly journal

  6. "Science is public knowledge" John Ziman

  7. Main functions of scholarly publishing
  • Priority of discovery
  • Quality assessment
  • Communication

  8. Norms of Science - CUDOS
  • Communism
  • Universalism
  • Disinterestedness
  • Organized Scepticism
  Robert K. Merton

  9. Little Science, Big Science. Derek J. de Solla Price, 1963

  10. Institute for Scientific Information. Eugene Garfield. Science Citation Index, Social Sciences Citation Index, Arts & Humanities Citation Index

  11. Bibliometric design issues

  12. Building bibliometric data for performance studies
  • Coverage
    • Journal market approach
    • Self-reporting
  • Data quality
    • Addresses
    • Author names
    • Journal names, etc.

  13. Counting Schemes
  What to count?
  • All publications
  • Articles, reviews, notes, letters, meeting abstracts
  • Self-citations
  How to count?
  • Whole counts
  • Fractional counts
  • Time window
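
To make the two counting schemes concrete, here is a minimal sketch; the papers and author names are invented for illustration, not data from the talk:

```python
# Whole counts: every co-author gets full credit for a paper.
# Fractional counts: credit 1/n is shared among the n co-authors.
from collections import defaultdict

papers = [
    {"id": "p1", "authors": ["Persson", "Glänzel"]},
    {"id": "p2", "authors": ["Persson"]},
    {"id": "p3", "authors": ["Glänzel", "Schubert", "Thijs"]},
]

whole = defaultdict(float)
fractional = defaultdict(float)

for paper in papers:
    n = len(paper["authors"])
    for author in paper["authors"]:
        whole[author] += 1.0           # whole count: 1 credit per authorship
        fractional[author] += 1.0 / n  # fractional count: shared credit

for author in sorted(whole):
    print(f"{author}: whole = {whole[author]:.0f}, fractional = {fractional[author]:.2f}")
```

Note how the two schemes rank authors differently: a frequent co-author can lead on whole counts while a solo author leads on fractional counts.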

  14. Impact measures
  • Cites/paper
  • Journal impact factor
  • Normalized citation rate
  • Citations during years 0, 1, 2
  • Relative to journal
  • Relative to subfield ("crown indicator")
  • Benchmarking
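
As a hedged illustration of the subfield-normalized "crown indicator" idea (observed mean citations per paper divided by the expected subfield mean, often written CPP/FCSm), with invented numbers:

```python
# Each paper carries its observed citation count and the expected citation
# rate for its subfield, year, and document type. All values are invented.
papers = [
    # (citations received, expected cites for subfield/year/type)
    (12, 8.0),
    (3, 8.0),
    (25, 10.5),
    (0, 10.5),
]

cpp = sum(cites for cites, _ in papers) / len(papers)  # observed mean
fcsm = sum(exp for _, exp in papers) / len(papers)     # expected mean
print(f"CPP = {cpp:.2f}, FCSm = {fcsm:.2f}, crown = {cpp / fcsm:.2f}")
# crown > 1.0: impact above the subfield world average; < 1.0: below it
```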

  15. A few critical examples

  16. Quantity vs Quality. Linda Butler: http://www.indicators-conference.isi.fhg.de/Datei/Butler.pdf

  17. Predictive value of bibliometric indicators. Using author performance data from 1996-1998 to predict author performance during 2000-2002, with the prediction made in 2001: at that point in time we only have knowledge of NMCR (normalized mean citation rate) for the years 1996-1998. Data: UmU WoS papers 1996-2002.

  18. Then, to evaluate the predictive value of this excellence approach, we estimate the size of these four groups:
  1. Strong before and strong after
  2. Weak before and weak after
  3. Strong before and weak after
  4. Weak before and strong after
  Researchers in groups 1 and 2 are successfully identified. Group 3 consists of people not living up to the expectations. Group 4 are those researchers assessed as less good who turn out to be among the strong researchers after a period of time.

  19. However, when following up the papers of a research organization, we will also find a group of authors who were unknown before, some of them strong and some weak performers (groups 5 and 6):
  5. Unknown before and strong after
  6. Unknown before and weak after
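
A sketch of this six-group bookkeeping, assuming hypothetical NMCR scores and an arbitrary strong/weak cutoff (neither taken from the Umeå study):

```python
# Authors are labelled strong or weak in each period; authors absent from
# the before-period are "unknown". Scores and cutoff are hypothetical.
CUTOFF = 1.0  # assumed threshold separating "strong" from "weak"

before = {"A": 1.8, "B": 0.4, "C": 1.5, "D": 0.6}                     # 1996-1998
after = {"A": 2.1, "B": 0.5, "C": 0.7, "D": 1.9, "E": 1.6, "F": 0.3}  # 2000-2002

def label(score):
    return "strong" if score >= CUTOFF else "weak"

groups = {}
for author, score in after.items():
    prior = label(before[author]) if author in before else "unknown"
    key = f"{prior} before, {label(score)} after"
    groups.setdefault(key, []).append(author)

for key in sorted(groups):
    print(f"{key}: {sorted(groups[key])}")
```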

  20. Wherever we put the cutoff we will have problems predicting high-impact authors. [Scatter plot of before- vs after-period performance with quadrants labelled Strong to Strong, Weak to Strong, Weak to Weak, and Strong to Weak; fitted line y = 0.5204x + 0.589, r² = 0.2444, r = 0.49]
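
To reproduce this kind of check, the correlation can be computed directly; the before/after scores below are invented, not the data behind the slide:

```python
# Pearson r between before-period and after-period scores; r^2 is the
# share of variance in later performance explained by earlier performance.
import statistics

before = [1.8, 0.4, 1.5, 0.6, 1.1, 2.3]
after = [2.1, 0.5, 0.7, 1.9, 1.0, 1.7]

r = statistics.correlation(before, after)  # requires Python 3.10+
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
```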

  21. Conclusions
  • The main conclusion of this exercise is that past bibliometric performance predicts future bibliometric performance reasonably well.
  • However, one must be open to the fact that some authors, for various reasons, perform better or worse than expected.
  • Using aggregates of authors, such as those formed by co-authorship links, improves the prediction somewhat.
  • The strength of the correlations found in this study is of the same magnitude as found in a study of field-normalized citation impact and peer evaluation among research groups in physics (Rinia et al., 1998).
  • If bibliometric performance predicts peer rating as well as future bibliometric impact, the combined use of peer review and bibliometric assessment would be the ideal approach: what peer reviews fail to see, bibliometrics may discover, and vice versa.

  22. Some recommendations
  • Use compound bibliometric indicators:
    • publication activity
    • normalized citation impact
    • “sustainable” activity and impact
  • … and also use them in combination with:
    • other, and future-oriented, performance indicators
    • peer reviews

  23. Online Bibliometrics

  24. Hirsch index (WoS 1986-2006): Nurse P (Nobel Prize 2002), h-index = 74; Blobel G (Nobel Prize 1999), h-index = 79
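
For reference, the h-index is the largest h such that the author has h papers with at least h citations each. A minimal sketch, using invented citation counts rather than the WoS records behind the slide:

```python
# Rank papers by citations (descending); h is the last rank at which the
# paper at that rank still has at least `rank` citations.
def h_index(citations):
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # still have `rank` papers with >= rank citations
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # -> 4
print(h_index([25, 8, 5, 3, 3]))  # -> 3
```

The second example shows why the h-index rewards sustained output over a single highly cited paper: one paper with 25 citations cannot raise h beyond the number of other well-cited papers.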

  25. Mapping knowledge domains

  26. Most cited universities and authors in Library & Information Science 1990-2004
  Data: Genuine articles from 27 LIS journals were downloaded from Web of Science (1990 to April 2005). Citation links among the articles were created using Bibexcel. The main organization of each author address was extracted. Self-citations for institutions and authors were eliminated in cases where the sets of citing and cited institutions/authors overlapped. This follows Glänzel et al. (2004): "self-citation occurs whenever the set of co-authors of the citing paper and that of the cited one are not disjoint, that is, if these sets share at least one author."
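
The quoted rule maps directly onto a set operation. A minimal sketch, with hypothetical author sets rather than the actual LIS data:

```python
# Glänzel et al. (2004): a citation is a self-citation when the citing
# and cited papers' co-author sets are not disjoint.
def is_self_citation(citing_authors, cited_authors):
    # "not disjoint" <=> the two sets share at least one author
    return not set(citing_authors).isdisjoint(cited_authors)

print(is_self_citation({"Persson", "Danell"}, {"Glänzel", "Persson"}))  # True
print(is_self_citation({"Persson"}, {"Schubert", "Thijs"}))             # False
```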

  27. First-author co-citation map of LIS 1990-2004

  28. Citations among Swedish sociologists 1990-2005

  29. Citation-based mapping. Figure 1. The reference-network graph for Scientometrics 1978-2004

  30. Figure 2. Some highlighted features of the reference-network graph for Scientometrics 1978-2004
