
GLOBAL UNIVERSITY RANKINGS AND THEIR IMPACT


Presentation Transcript


  1. GLOBAL UNIVERSITY RANKINGS AND THEIR IMPACT EUA Rankings Review Lesley Wilson EUA Secretary General BFUG Transparency Tools WG, 12/10/2011, Krakow

  2. I. Purpose and principles of the review • Analyse and compare the most popular global university rankings that have emerged since 2003 • Analyse the methodologies used, their limitations and their impact • Use only publicly available and freely accessible information on each ranking • Efforts were made to discover • what is actually measured, • how the scores for indicators are calculated, • how the final scores are calculated, and • what the results actually mean.

  3. II. Rankings included in the first Review Academic rankings with the main purpose of producing university league tables • Academic Ranking of World Universities (Shanghai) • Times Higher Education World University Ranking – • with Quacquarelli Symonds (until 2009) • with Thomson Reuters • Best Universities Ranking – US News & World Report with Quacquarelli Symonds • Global Universities Ranking – Reitor (Рейтор, Russia)

  4. Selection of rankings (contd.) Rankings concentrating on research only • Leiden Ranking (Leiden University) • Ranking of Research Papers for World Universities (HEEACT, Taiwan) • Assessment of University-Based Research (AUBR) – EU Multi-rankings – using a number of indicators without the intention of producing league tables • CHE/die Zeit University Ranking (CHE, Germany) • CHE Excellence Ranking • U-Map classification (CHEPS) • European Multidimensional University Ranking System (U-Multirank)

  5. Selection of rankings (contd.) Web rankings • Webometrics Ranking of World Universities (Webometrics, Spain) Benchmarking based on learning outcomes • Assessment of Higher Education Learning Outcomes Project (AHELO – OECD)

  6. III. Characteristics – 1. Global rankings cover no more than 3–5% of the world’s universities

  7. 2. The scores almost disappear by around the 400th university. What does this mean for the scores of the remaining 16,600 universities?

  8. 3. Main Considerations emerging from the analysis • The indicators used cover elite universities only • Indicator scores are not the raw indicator values themselves but the results of mathematical operations on them • Composite scores always contain subjective elements (reflecting qualitative choices made by each ranking provider), as the sketch below illustrates • Choosing between simple counts and relative values is not neutral (it favours either size or concentration)
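To make the point about composite scores concrete, here is a minimal sketch of the generic recipe behind a league-table score: rescale each indicator so the best performer gets 100, then combine the rescaled scores with weights chosen by the ranking provider. The indicator values and weights below are invented for illustration and do not reproduce any actual ranking.

```python
# Illustrative composite-score calculation (invented numbers, not any real ranking).
# Each indicator is rescaled so the best performer scores 100, then the rescaled
# values are combined with provider-chosen weights into a single composite score.

indicators = {  # raw indicator values for three hypothetical universities
    "publications":        {"Uni A": 12000, "Uni B": 4000, "Uni C": 800},
    "citations_per_paper": {"Uni A": 9.5,   "Uni B": 14.0, "Uni C": 6.0},
    "reputation_survey":   {"Uni A": 80,    "Uni B": 35,   "Uni C": 10},
}

weights = {  # the subjective element: set by the ranking provider
    "publications": 0.4,
    "citations_per_paper": 0.3,
    "reputation_survey": 0.3,
}

def composite_scores(indicators, weights):
    universities = next(iter(indicators.values())).keys()
    scores = {}
    for uni in universities:
        total = 0.0
        for name, values in indicators.items():
            best = max(values.values())
            normalised = 100 * values[uni] / best  # best performer gets 100
            total += weights[name] * normalised
        scores[uni] = round(total, 1)
    return scores

print(composite_scores(indicators, weights))
# Changing only the weights, or replacing an absolute count with a per-staff
# value, reorders the table without any change in the underlying data -
# which is the sense in which these choices are not neutral.
```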

  9. 4. The dominance of research: indicators used • Number of publications (or publications/staff): • ARWU – 2 indicators, total weight 40%: publications in Nature and Science & publications indexed in SCI & SSCI • HEEACT – 4 indicators, overall weight of 50% • THE-QS – none; THE-TR – 1 indicator, weight of 4.5% • Number of citations (or citations/publication) • Other research indicators: • Research reputation surveys • Research income, intensity of PhD production

  10. 5. Rankings & the teaching mission of universities • ARWU – quality of education = alumni who have been awarded a Nobel Prize or Fields Medal • THE-QS – staff/student ratio, employer survey (answers worldwide: 3,281 in 2009, 2,339 in 2008) • CHE – a number of indicators based on student opinion + reputation surveys of professors • THE-TR – reputational survey on teaching, student/staff ratio, income per academic and, partly, PhDs awarded per academic and PhDs vs. bachelor degrees awarded • U-Map and U-Multirank – a number of indicators characterising the learning environment rather than its quality

  11. IV - Main Biases and Flaws identified 1. Natural sciences and medicine vs. social sciences bias • Bibliometric indicators primarily cover journal publications • Natural and life scientists primarily publish in journals, • Engineering scientists - in conference proceedings, • Social scientists and humanists – in books

  12. Example: 21 broad subject areas defined by ISI

  13. 2. Different publication and citation cultures in different fields • Table from a presentation by Cheng at the IREG 2010 conference in Berlin

  14. 3. Field-normalisation – solutions and issues • Field-normalised citations per publication indicator (the Leiden ‘crown indicator’) • A newer attempt (2010) – the mean-normalised citation score (MNCS), sketched below
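As a reference point, the MNCS averages, over a university's publications, each publication's citation count divided by the expected (world-average) citation rate for its field, publication year and document type, roughly:

```latex
\mathrm{MNCS} \;=\; \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i}
```

where c_i is the number of citations received by publication i and e_i is the expected number for comparable publications; a value of 1 corresponds to world-average impact. The earlier ‘crown indicator’ instead took the ratio of the sums, Σc_i / Σe_i, which weights fields differently – one source of the ‘issues’ the slide refers to.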

  15. 4. Impact factor – to be used with care • ... especially in social sciences and humanities, expert rankings do not correlate very well with impact factors. (EC WG on assessment of university research, 2010) • “the impact factor should not be used without careful attention to the many phenomena that influence citation rates, as for example the average number of references cited in the average article. The impact factor should be used with informed peer review” (Garfield, 1994) • “by quantifying research activity and impact solely in terms of peer-publication and citations, rankings narrowly define ‘impact’ as something which occurs only between academic ‘peers’” (Hazelkorn, 2011).
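For context, the journal impact factor discussed above is a two-year average citation rate per citable item, conventionally written along these lines (the symbols here are illustrative notation, not from the slide):

```latex
\mathrm{IF}_{y} \;=\; \frac{C_{y}(y-1) + C_{y}(y-2)}{P_{y-1} + P_{y-2}}
```

where C_y(y-k) is the number of citations received in year y by items the journal published in year y-k, and P_{y-k} is the number of citable items published in year y-k. Because it is an average over articles, article-level habits such as the typical number of references per paper in a field can inflate or deflate it, which is exactly the caution in the Garfield quote.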

  16. 5. ‘Peer review’ biases and flaws • Why are reputation surveys called “peer reviews”? • ‘Peers’ are influenced by the previous reputation of the institution, i.e. a university which ranks highly in one ranking is likely to obtain a high reputation score • Limiting the number of universities nominated (THE rankings) makes the approach elitist – and probably dependent on previous reputation • Using pre-selected lists rather than allowing ‘peers’ free choice means that large numbers of institutions are left out • Is a 5% response rate a sufficient result?

  17. V. Consequences: even keeping one’s current position requires great effort (1) • Emphasis on the phenomenon of the “world-class research university” • The ‘Red Queen effect’ (Salmi, 2010): “In this place it takes all the running you can do, to keep in the same place”, says the Red Queen in Lewis Carroll’s ‘Through the Looking Glass’ • League tables are seen as a simple way to judge countries and universities, also responding to the public demand for information that neither HEIs nor governments are able to provide

  18. 2. The risks of overemphasising rankings • Rankings encourage universities to improve their scores. • Universities are strongly tempted to improve performance specifically in areas that are measured in rankings, e.g. research in sciences and medicine. • By concentrating funds and efforts on rankings, universities pay less attention to issues not ‘rewarded’ by global rankings, e.g. quality of teaching, regional involvement, widening access, lifelong learning, etc. • Similarly, rankings threaten to undermine other policy objectives

  19. 3. Improving rankings: an option? • Improvements will not come from extending 5 distant proxies to 25 – they will still remain proxies... • Improve coverage of teaching – most probably through measuring learning outcomes • Remove the biases and eradicate the flaws of bibliometric indicators (field, language, regional), but first of all address non-journal publications properly! • Rankings often claim that they help students to make their choices – but the student body is highly diverse: which students, and what do they need? Is this what is measured? • Rankings cover only top institutions but impact all universities – can rankings be changed to address a larger number?

  20. Improving rankings (contd.): recent developments • EC WG report on the assessment of university-based research (AUBR): analysis of research indicators and their suitability, proposals for a methodology • U-Map uses indicators that characterise the focus and intensity of various aspects of HEIs’ activity • U-Multirank – a multidimensional ranking covering all aspects of an HEI’s work: education, research, knowledge exchange and regional involvement. No composite score is produced. • OECD’s AHELO project attempts to compare HEIs internationally on the basis of learning outcomes • Three testing instruments are being developed: one measuring generic skills and two measuring discipline-specific skills, in economics and engineering.

  21. VI - Main conclusions 1. The arrival on the scene of global rankings has galvanised the world of higher education. Since then universities cannot avoid national and international comparisons, and this has caused changes in the way universities function. 2. De facto, the methodologies of global rankings give stable results for only 700-1000 universities. The majority of universities are left out of the equation – but the result is that all HEIs are often judged according to criteria that are appropriate for the top research universities only. 3. Rankings so far cover only some university missions.

  22. Main conclusions (contd.) 4. Rankings, it is claimed, make universities more ‘transparent’. However, the methodologies, especially those of the most popular league tables, still lack transparency themselves. 5. The lack of suitable indicators is most apparent in measuring teaching performance. The situation is better when evaluating research, but even bibliometric indicators have their biases and flaws. The real problem is the use of inadequate proxies, or the partial omission of information due to methodological constraints.

  23. Main conclusions (contd.) 6. At present, it is difficult to argue that the benefits offered by the information that rankings provide, as well as the increased ‘transparency’, are greater than the negative effects of the ‘unwanted consequences’ of rankings. 7. New developments, e.g. U-Map, U-Multirank and AHELO, all aim to improve the situation. These new tools are still at various stages of development and still have to overcome difficult issues, particularly problems of data collection and the development of new proxies. 8. Higher education policy decisions should not be based solely on rankings data.

  24. “Each in his own opinion / Exceeding stiff and strong, / Though each was partly in the right, / And all were in the wrong!” – John Godfrey Saxe (1816–1887)
