
Measuring Academic Research in Canada: Alex Usher, Higher Education Strategy Associates



Presentation Transcript


  1. Measuring Academic Research in Canada: Alex Usher, Higher Education Strategy Associates. IREG-7 Warsaw, Poland – May 17, 2013

  2. The Problem: When making institutional comparisons, biases can arise both from institutional size and from the distribution of fields of study. Can we find a way to compare institutional research output that controls for both size and field of study?

  3. YES

  4. Basic Methodology
  • Simple two-indicator system: publications (H-index) and research income (granting councils)
  • Data gathered at the level of the individual researcher, not the institution
  • Every researcher is given a score for his/her performance relative to the average of his/her discipline; scores are then summed and averaged
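A minimal sketch of this scoring scheme (not the published HiBar code); the researcher records, discipline names, and the equal weighting of the two indicators are invented for illustration:

from collections import defaultdict
from statistics import mean

# (name, institution, discipline, h_index, grant_income) -- illustrative data
researchers = [
    ("A", "U1", "Physics",   12, 300_000),
    ("B", "U1", "Sociology",  6,  80_000),
    ("C", "U2", "Physics",    9, 150_000),
    ("D", "U2", "Sociology",  8, 120_000),
]

def field_normalize(records, idx):
    """Score each researcher relative to the average of his/her discipline."""
    by_field = defaultdict(list)
    for rec in records:
        by_field[rec[2]].append(rec[idx])
    avg = {field: mean(vals) for field, vals in by_field.items()}
    return [rec[idx] / avg[rec[2]] for rec in records]

h_scores = field_normalize(researchers, 3)
income_scores = field_normalize(researchers, 4)

# Institutional score: average of its researchers' normalized scores.
by_inst = defaultdict(list)
for rec, h, inc in zip(researchers, h_scores, income_scores):
    by_inst[rec[1]].append((h + inc) / 2)  # equal weighting is an assumption
for inst, scores in sorted(by_inst.items()):
    print(inst, round(mean(scores), 3))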

  5. Publication Metric: H-Index
  “A scientist has index h if h of his/her Np papers have at least h citations each, and the other (Np − h) papers have no more than h citations each.”
  (i.e., the largest possible number N where a scientist has N papers with N or more citations)
  Ex. 1: publications with 5, 4, 3, and 2 citations → H-index: 3
  Ex. 2: publications with 10, 2, 2, and 2 citations → H-index: 2
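The definition translates directly into a few lines of Python; this sketch reproduces the two examples above:

def h_index(citations):
    # Largest h such that h papers have at least h citations each.
    cites = sorted(citations, reverse=True)
    h = 0
    while h < len(cites) and cites[h] >= h + 1:
        h += 1
    return h

print(h_index([5, 4, 3, 2]))   # Ex. 1 -> 3
print(h_index([10, 2, 2, 2]))  # Ex. 2 -> 2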

  6. H-Index (Pros and Cons)
  Pros:
  • Discounts publications with little or no impact
  • Discounts single publications with very high impact
  Cons:
  • Requires a large, accurate, cross-referenced database (labour-intensive)
  • Age bias (less of a concern in aggregates)
  • Differences in publication cultures (can be corrected for)
  • Not very useful in disciplines with low publication cultures

  7. The HiBar Database
  • Faculty lists
  • Standardized discipline names

  8. Example: Dr. Joshua Barker

  9. The Canadian Prestige Hierarchy

  10. Science-Engineering H-Index

  11. Arts H-Index

  12. Medicine
  We did not cover medical fields: the manner in which certain institutions choose to list staff at associated teaching hospitals made it impossible to generate equivalent staff lists.

  13. Research Income
  Collected data on peer-evaluated individual grants (i.e., excluding major institutional allocations for equipment, etc.) made by the two main granting councils (SSHRC and NSERC) over a period of three years. Data were then field-normalized using the same process as for the H-index.
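A hedged sketch of that pre-processing step; the grant records, program labels, and the specific three-year window (2010–2012) are invented for illustration:

# (researcher_id, council, program_type, year, amount) -- illustrative data
grants = [
    ("r1", "NSERC", "individual", 2010,  40_000),
    ("r1", "NSERC", "individual", 2011,  40_000),
    ("r1", "NSERC", "equipment",  2011, 500_000),  # institutional allocation: excluded
    ("r2", "SSHRC", "individual", 2012,  55_000),
]

WINDOW = range(2010, 2013)  # assumed three-year window

totals = {}
for rid, council, kind, year, amount in grants:
    # Keep only peer-evaluated individual grants inside the window.
    if kind == "individual" and year in WINDOW:
        totals[rid] = totals.get(rid, 0) + amount

print(totals)  # {'r1': 80000, 'r2': 55000}
# These per-researcher totals are then field-normalized exactly as the
# H-index scores were (divide by the discipline average).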

  14. Research Income (Pros and Cons)
  Pros:
  • Publicly available, third-party data with personal identifiers
  • Based on a peer-review system designed to reward excellence
  Cons:
  • Issues with respect to cross-institutional awards
  • Ignores income from private sources, which may be substantial

  15. Science-Engineering Income

  16. Arts Income

  17. Science-Engineering Total

  18. Arts Total

  19. Controversies (1)
  • The double-count issue: in an initial draft, we reported a record count of staff rather than a head count (the former is higher because of cross-appointments), which led to questions. The distinction is illustrated in the sketch below.
  • The part-time professor issue: many objected to our inclusion of part-time staff in the totals, so we re-did the numbers without them…
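A tiny sketch of the record-count vs. head-count distinction, with invented appointments:

# (name, institution, department) -- one row per appointment record
appointments = [
    ("Dr. X", "U1", "Sociology"),
    ("Dr. X", "U1", "Criminology"),  # cross-appointment: same person, second record
    ("Dr. Y", "U1", "Sociology"),
]

record_count = len(appointments)                           # 3 (inflated)
head_count = len({name for name, _, _ in appointments})    # 2 unique people

print(record_count, head_count)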

  20. NSERC Scores (revised)

  21. SSHRC Scores (revised)

  22. The Philosophical Part

  23. Who Is a University?
  • Whose performance gets included in a ranking says something about who one believes embodies a university. Should it include:
  • FT faculty only?
  • PT faculty? Emeritus faculty?
  • Graduate students?
  • At the moment, most ranking systems' decisions on this are driven by data-collection methodology.

  24. Do all subjects matter equally? • Field-normalization implies that they do. But is this correct? Are some fields more central to the creation of knowledge than others? Should some fields be privileged when making inter-institutional comparisons?

  25. Does Size Matter? • Does aggregation of talent bring benefits of its own, independent of the quality of people being aggregated?

  26. Where Does Greatness Lie?
  • On whose work should institutional reputation be based: its best scholars, or all of its scholars?
  • Norming for size implicitly rewards schools with good average professors; failing to norm is more likely to reward a few “top” professors
