
Round Table Discussion: Bibliometric Indicators in Research Evaluation and Policy





Presentation Transcript


  1. Colloque Evolution des publications scientifiques, Académie des sciences, 14-15 May 2007. Pierre Braunstein, Académie des sciences, Institut de Chimie (UMR 7177 CNRS), Université Louis Pasteur - Strasbourg. Round Table Discussion: Bibliometric Indicators in Research Evaluation and Policy

  2. CHEMISTRY
  • Core chemistry
  • Numerous interfaces: chemistry/mathematics, chemistry/biology
  • Increasing number of international collaborations
  • The increasing number of journals leads authors to select among them according to criteria such as tradition, a user-friendly submission process, rapidity, etc.
  • The highest Impact Factor in chemistry is 20 (for journals publishing review articles); see the formula sketch after this list
  • The ranking of authors on a publication is highly variable
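
The slides quote Impact Factor values without defining the quantity, so as a point of reference, here is the standard two-year Journal Impact Factor as computed by ISI; the symbols C and N are my notation, not the slide's.

```latex
% Two-year Journal Impact Factor of a journal for year y:
% citations received in year y to items the journal published in the
% two preceding years, divided by the number of citable items it
% published in those two years.
\[
  \mathrm{IF}_y = \frac{C_y(y-1) + C_y(y-2)}{N_{y-1} + N_{y-2}}
\]
% C_y(k): citations received in year y by items published in year k
% N_k:    citable items (articles, reviews) published in year k
```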

  3. Do impact factors reflect what is important or what is fashionable?
  • Areas in which much is published generate more citations and therefore higher impact factors.
  • The period covered (2 years) is too short to be really significant (compare, e.g., an article published in December of year N vs. one published in January of year N+1).
  • The ISI classification into "General" vs. "Specialized" journals is very far from satisfactory; any ranking based on this split is rather useless.
  • Total citation counts, or the h-index (the largest h such that an author has h articles each cited at least h times), provide a useful image of the impact of an author's scientific contributions, in particular a senior one (but remember the "cold fusion" effect!); a minimal computation is sketched after this list. However, since practices and numbers vary considerably from one discipline to another, comparisons become difficult or impossible at scientific interfaces.
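
For concreteness, here is a minimal sketch of how the h-index mentioned above can be computed from a list of per-article citation counts; the function name and the sample numbers are illustrative, not from the slides.

```python
def h_index(citations):
    """Largest h such that the author has h papers each cited at least h times."""
    # Rank papers by citation count, highest first, and find the last
    # rank at which the citation count still meets or exceeds the rank.
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Made-up citation counts: sorted they are [48, 33, 12, 6, 5, 3, 0];
# the 5th-ranked paper has exactly 5 citations while the 6th has only 3,
# so the h-index is 5.
print(h_index([12, 5, 48, 0, 3, 33, 6]))  # -> 5
```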

  4. Indicators can be useful, and they will be used anyway; we had better understand their meaning, strengths and weaknesses in order to improve the system. A European model?
  • A calibration system is required, and various benchmarking procedures will minimize misinterpretations (one common normalization is sketched after this list).
  • Indicators can assist in identifying performance within a similar area, at the national and international levels, particularly for more senior scientists (e.g., the 1000 most-cited chemists). They have no absolute value.
  • Other criteria are needed to identify the younger, most promising scientists: the community plays a key role here!
  • There must be feedback between the evaluators and the individuals, labs or institutions being evaluated, to explain and communicate the source of the indicators. This must be a multi-criteria and transparent mechanism.
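
One widely used benchmarking device of the kind called for above is field normalization: dividing a paper's citation count by the average count for papers of the same field and publication year, so that scores from different disciplines become comparable. A minimal sketch, assuming the evaluator supplies the field/year baselines; all numbers below are invented.

```python
def normalized_citation_score(citations, field, year, baselines):
    """Citations divided by the mean citation count for the same field and year.

    A score of 1.0 means "cited exactly as much as the average paper
    in its field and year"; above 1.0 means above average.
    """
    return citations / baselines[(field, year)]

# Invented baselines: mean citations per paper, by (field, year).
baselines = {
    ("chemistry", 2005): 10.0,
    ("mathematics", 2005): 2.5,
}

# Raw counts of 12 vs. 4 look unequal, yet relative to their fields the
# mathematics paper (4 / 2.5 = 1.6) outperforms the chemistry one
# (12 / 10.0 = 1.2), the kind of correction needed at interfaces.
print(normalized_citation_score(12, "chemistry", 2005, baselines))   # -> 1.2
print(normalized_citation_score(4, "mathematics", 2005, baselines))  # -> 1.6
```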

  5. BIBLIOMETRICS • BIBLIO-MIXI-CATORS
