
wolfgang glänzel, koenraad debackere: on the ‘multidimensionality’ of ranking and the role of bibliometrics in university ranking. k.u.leuven, steunpunt o&o indicatoren, leuven (belgium); hungarian academy of sciences, iprs, budapest (hungary)



Presentation Transcript


  1. wolfgang glänzel, koenraad debackere on the ‘multidimensionality’ of ranking and the role of bibliometrics in university ranking k.u.leuven, steunpunt o&o indicatoren, leuven (belgium); hungarian academy of sciences, iprs, budapest (hungary)

  2. STRUCTURE OF THE PRESENTATION • What is ranking? • University ranking. Ranking of selected activities vs. integrated ranking • Bibliometrics and the “multi-dimensionality” of research activity • Conclusions glänzel & debackere: the ‘multidimensionality’ of ranking 2

  3. WHAT IS RANKING? Ranking is positioning comparable objects on an ordinal scale based on a (non-strict) weak order relation among (statistical) functions, or combinations of functions, of measures or scores associated with those objects. These functions, usually based on evaluation, are called indicators. Different indicators X_i representing different aspects usually form components of a composite indicator Y, which is the basis of the ranking; in particular, Y = Σ λ_i·X_i, with weights λ_i such that Σ λ_i = 1.
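The weighted-sum formula Y = Σ λ_i·X_i on this slide can be sketched in a few lines of Python. The function name, the component scores and the weights below are invented for illustration:

```python
# Composite indicator as a weighted sum Y = sum(lambda_i * X_i),
# with the weights lambda_i summing to 1 (hypothetical sketch).

def composite_indicator(scores, weights):
    """Combine component indicator scores X_i into one composite score Y."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(w * x for w, x in zip(weights, scores))

# Two hypothetical universities scored on three components
# (e.g. publications, citations, teaching survey), all scaled to [0, 100].
uni_a = [80.0, 60.0, 70.0]
uni_b = [60.0, 90.0, 65.0]
weights = [0.5, 0.3, 0.2]

y_a = composite_indicator(uni_a, weights)  # 0.5*80 + 0.3*60 + 0.2*70 = 72.0
y_b = composite_indicator(uni_b, weights)  # 0.5*60 + 0.3*90 + 0.2*65 = 70.0
```

Under these (arbitrary) weights, university A ranks ahead of B; the next slide shows why this conclusion is fragile.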

  4. Problems in using composite indicators • Possible interdependence of components • Altering the weights can result in a different ranking • Results might be obscure and irreproducible • Random errors of the statistical functions are usually ignored • The multi-dimensional space is crushed into linearity • Valuable information is irretrievably lost
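The second problem above (altering the weights can result in a different ranking) can be illustrated with a hypothetical two-indicator example; the score profiles and weight vectors are invented:

```python
# Weight-sensitivity sketch: the same two score profiles swap ranks
# when the weights lambda_i of the composite indicator are changed.

def composite(scores, weights):
    return sum(w * x for w, x in zip(weights, scores))

uni_a = [80.0, 60.0]   # strong on indicator X1
uni_b = [60.0, 85.0]   # strong on indicator X2

w1 = [0.7, 0.3]        # weights emphasising X1
w2 = [0.3, 0.7]        # weights emphasising X2

a1, b1 = composite(uni_a, w1), composite(uni_b, w1)  # 74.0 vs 67.5: A ahead
a2, b2 = composite(uni_a, w2), composite(uni_b, w2)  # 66.0 vs 77.5: B ahead
```

Neither ranking is "wrong"; the order is an artefact of the weight choice, which is exactly the arbitrariness the slide criticises.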

  5. Problems in using composite indicators • Besides the aforementioned statistical and methodological problems, several data-related issues are relevant as well: • The “cleanness”, and hence the reliability, of the data used (correct address information, correct country allocation, correct institutional allocation …) • The types of bibliographic documents selected (article, note, letter, review …) in case bibliometric data are used • The time-variant nature of the underlying data sources

  6. UNIVERSITY RANKING. RANKING OF SELECTED ACTIVITIES VS. INTEGRATED RANKING • 1. Evaluation of education • In 1993 a national education-related university ranking was published in Germany. The ranking was survey-based: questionnaires were sent to students and professors. • A breakdown by fields was presented as well to give a more differentiated picture, to reveal “strengths and weaknesses” and to help students and academic staff make a selection.

  7. The Spiegel Ranking of German Universities in 1993 [figure]

  8. Because of the differences and peculiarities of national education and accreditation systems, such endeavours are practically restricted to the national level. • 2. Research performance • With the “Shanghai Ranking”, first published in 2003, the focus shifted to research assessment. This world-wide ranking was to a large extent facilitated by the availability of the multidisciplinary bibliographic databases SCIE, SSCI and their derivatives.

  9. 3. “Holistic approach” • The broader approach chosen by THES-QS, which relies largely on peer review scores, could not overcome the limitations of previous attempts and has remained controversial as well. • An integrated quantification of university performance and a world-wide ranking based on all HEI activities, including education, research and third-stream activities, remains, however, at least for the present, utopian.

  10. The question arises whether there is really a need for an integrated ranking. The evaluation of selected activities within the HEI missions (such as the quality of education, research performance or the assessment of important third-stream activities) might provide more valuable information for the interested users in all relevant sectors and domains.

  11. BIBLIOMETRICS AND THE “MULTI-DIMENSIONALITY” OF RESEARCH ACTIVITY [diagram: university activities • education • research • third mission; research output → scientific communication → bibliometrics]

  12. Although it measures only one, however important, part of research activities, bibliometrics has proved an efficient tool in research assessment. • As in the case of all HEI rankings, first and foremost the following two issues have to be solved for the bibliometric approach as well: • Correct institutional assignment • A selection of standard indicators that guarantee robustness and reproducibility of results

  13. In order to obtain a more realistic and differentiated picture of research at higher-education institutions, the following three scenarios are suggested. The proposed issues are preconditions for the correct use and interpretation of bibliometrics-based indicators and are best applied in combination. • I. Breakdown by fields • II. Clustering of similar objects • III. Standardisation of indicators

  14. I. Breakdown by fields • The research performance of a university might differ among its faculties and departments, and thus across science fields. • Publication and citation behaviour also generally differs among individual fields. Example: While in the biosciences a paper published in 2004 received, on average, 8.21 citations during the first three years, the citation mean in engineering amounted to 1.66 in the same period. • Similar deviations between the disciplines can be found in publication activity as well.
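Using the two field baselines quoted on this slide, a relative citation rate (observed citations divided by the field mean, a common bibliometric normalisation) shows how the same raw count means different things in different fields. The sample citation count of 4 is hypothetical:

```python
# Relative citation rate: raw citations normalised by the field's
# mean 3-year citation rate (1.0 = field average).
# Baselines are the two values quoted on the slide.

FIELD_BASELINE = {"biosciences": 8.21, "engineering": 1.66}

def relative_citation_rate(citations, field):
    """Observed citations divided by the field's expected citation rate."""
    return citations / FIELD_BASELINE[field]

# The same raw count of 4 citations reads very differently per field:
bio = relative_citation_rate(4, "biosciences")   # ~0.49: below field average
eng = relative_citation_rate(4, "engineering")   # ~2.41: well above field average
```

This is why gross citation counts favour institutions with large biomedical output, and why a breakdown by fields is a precondition for fair comparison.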

  15. Overall gross publication and citation counts can be misleading when ranking institutions with multidisciplinary research profiles. This effect can be reduced by breaking down their research activity by fields. • The breakdown might also help reveal institutional “strengths and weaknesses”.

  16. II. Clustering of similar objects • Ranking HEIs with different profiles, for instance comparing medical schools with business schools, still remains an exercise in “comparing apples with pears”, even if their publication output is broken down by fields. • More than 2000 European research institutions have therefore been clustered according to their publication profiles. The stopping rule introduced by Duda & Hart (1973) was applied to determine and optimise the number of clusters.
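A minimal sketch of the profile-based clustering idea, assuming institutions are represented by publication-share vectors over fields. Plain k-means with hand-picked starting centroids stands in for the slides' actual method; the Duda & Hart stopping rule is not reproduced here, and all profile data are invented:

```python
# Cluster institutions by the similarity of their publication profiles
# (shares of output per field). Illustrative sketch, not the original method.
import math

def dist(p, q):
    """Euclidean distance between two profile vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def kmeans(points, centroids, iterations=20):
    """Assign points to the nearest centroid, then recompute centroids."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            i = min(range(len(centroids)), key=lambda i: dist(p, centroids[i]))
            clusters[i].append(p)
        centroids = [
            tuple(sum(c) / len(c) for c in zip(*cl)) if cl else cen
            for cl, cen in zip(clusters, centroids)
        ]
    return clusters

# Publication shares (medicine, natural sciences, engineering) per institution
profiles = [
    (0.70, 0.20, 0.10),  # medical-school-like profile
    (0.65, 0.25, 0.10),
    (0.10, 0.50, 0.40),  # technical-university-like profile
    (0.05, 0.45, 0.50),
]
clusters = kmeans(profiles, centroids=[profiles[0], profiles[2]])
# The two medicine-heavy profiles group together; the other two form
# a second cluster, echoing the medical / non-medical split on slide 18.
```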

  17. Optimum number of classes according to Duda & Hart [figure] Source: Thijs & Glänzel, 2008, based on WoS, Thomson Scientific

  18. Two optimum solutions were found: • 2 clusters (i.e. medical and non-medical institutions) and the following 8 clusters. Source: Thijs & Glänzel, 2008, based on WoS, Thomson Scientific

  19. The following figure and table show that the breakdown by fields and clustering are useful scenarios but might not suffice alone. • Even so, gross counts can still be subject to biases caused by institutional activity profiles: universities with medical faculties usually have a larger publication output and higher citation impact than universities focused on the natural and applied sciences. • A possible solution is described in the third section.

  20. Examples of different national cluster profiles [figure] Source: Thijs & Glänzel, 2008, based on WoS, Thomson Scientific

  21. Examples of the deviating field structure of different clusters [figure; significant deviations based on a χ²-test] C1 - analytical, inorganic & nuclear chemistry • C2 - applied chemistry & chemical engineering • C3 - organic & medicinal chemistry • C4 - physical chemistry • C5 - polymer science • C6 - materials science. Source: Thijs & Glänzel, 2008, based on WoS, Thomson Scientific

  22. III. Standardisation of indicators • In order to overcome profile-specific biases (which might occur even within the same profile cluster), a strict standardisation and normalisation of indicators should be applied. This effect is demonstrated in the following example: the observed citation impact of 39 European universities from 13 selected countries (the 3 biggest HEIs of each country) is plotted against its expectation, once without and once with subject normalisation. (Publication period: 2001-2003, with a 3-year citation window for each publication year.) The changing “ranks” of medical and technical universities in the two-dimensional plot are quite striking.
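The subject normalisation described here can be sketched as observed impact divided by a publication-weighted expectation built from field baselines. The field means, publication counts and observed impact below are invented for illustration:

```python
# Field-normalised citation impact: observed mean citation rate divided
# by its expectation, where the expectation is the publication-weighted
# mean of field citation baselines. All numbers are hypothetical.

def expected_impact(pub_counts, field_means):
    """Publication-weighted average of field citation baselines."""
    total = sum(pub_counts.values())
    return sum(n * field_means[f] for f, n in pub_counts.items()) / total

field_means = {"medicine": 7.0, "engineering": 2.0}

# A medicine-heavy and an engineering-heavy institution
med_uni = {"medicine": 900, "engineering": 100}
tech_uni = {"medicine": 100, "engineering": 900}

e_med = expected_impact(med_uni, field_means)    # (900*7 + 100*2)/1000 = 6.5
e_tech = expected_impact(tech_uni, field_means)  # (100*7 + 900*2)/1000 = 2.5

# Suppose both observe a raw mean of 5.0 citations per paper:
norm_med = 5.0 / e_med    # ~0.77: below its field-specific expectation
norm_tech = 5.0 / e_tech  # 2.0: well above its expectation
```

On raw counts the two institutions look identical; after normalisation the technical university ranks far higher, mirroring the rank changes shown on the next slide.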

  23. Real vs. expected performance (not normalised) and real vs. expected performance (field-normalised) [two figures] Source: WoS, Thomson Scientific

  24. CONCLUSIONS • The idea of ranking HEIs according to simple, seemingly objective and robust indicators is perhaps tempting; but robustness is easily lost by building composite indicators with partially interdependent or even incompatible components and arbitrary weights. Reality is too complex to be described this way. • Instead of any linear ranking of HEIs, a more detailed, complex analysis is necessary to grasp and reflect the several important aspects of performance among the manifold of university activities.

  25. Bibliometrics can contribute to evaluating at least one of these aspects. One lesson from bibliometrics is that standardisation and normalisation help eliminate biases and also facilitate longitudinal ranking analyses. • Another lesson from bibliometrics is that even the normalisation of indicators cannot disguise the fact that comparing HEIs with completely different profiles still remains an exercise in “comparing apples with pears”.
