
Measuring differentiation in knowledge production at SA universities


Presentation Transcript


  1. Johann Mouton, CREST. HESA Conference on Research and Innovation, 4 April 2012. Measuring differentiation in knowledge production at SA universities

  2. Preliminary comments
  • Given the different institutional histories, missions and capacities, a high degree of differentiation in terms of key research production dimensions is only to be expected
  • The differentiation constructs and associated indicators presented and discussed here are not independent of each other (in statistical terms there are multiple “interaction effects”)

  3. Unpacking the concept of “research differentiation” We still need a proper conceptualisation of the notion of research differentiation. As a first attempt I would distinguish the following six types or categories of differentiation:
  • Volume of research production
  • Shape of research production (differences in the distribution of output by scientific field)
  • Site of publication (comparative journal indexes)
  • Research collaboration
  • Research impact (high or low visibility or recognition)
  • Demographics (differences in the distribution of output by gender, race, qualification and age)

  4. Research differentiation indicators

  5. Proposition 1 Since the introduction of a national research subsidy scheme in 1987, university research production initially remained quite stable (ranging between 5000 and 5500 article units per year from 1988 to 2003), but then increased dramatically to reach more than 8000 units in 2010. The best explanation for this dramatic increase is the new research funding framework introduced in 2003 (in effect from 2005), which attached a much more significant financial reward to research units and clearly gave institutions a huge incentive to increase their output.
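
A brief note on the “article unit” measure used in Proposition 1: subsidy units are typically awarded fractionally, with each institution earning a share of an accredited article proportional to its affiliated authors. A minimal sketch of such fractional counting follows; the data and function names are invented, and the exact DHET counting rules are not given in the presentation.

```python
# Illustrative sketch of fractional "article unit" counting: an institution
# earns a share of each accredited paper proportional to how many of the
# paper's authors it employs. All data here are hypothetical.
from collections import defaultdict

def article_units(papers):
    """papers: list of dicts mapping institution -> number of its authors."""
    units = defaultdict(float)
    for affiliations in papers:
        total_authors = sum(affiliations.values())
        for institution, n_authors in affiliations.items():
            units[institution] += n_authors / total_authors
    return dict(units)

papers = [
    {"UCT": 2, "WITS": 1},  # UCT earns 2/3 of a unit, WITS 1/3
    {"UCT": 1},             # sole-institution paper: one full unit
]
print(article_units(papers))  # ~ {'UCT': 1.67, 'WITS': 0.33}
```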

  6. Output of article equivalents: 1987 - 2009

  7. Proposition 2 But the recent increase in absolute output has not affected the institutional distribution. The huge differences between the most productive and least productive universities that were evident 25 years ago have remained mostly unchanged. A few universities have managed to improve their position in the ranking (UWC is a good example), but the vast inequalities in knowledge production between the top and bottom universities have not disappeared.

  8. Total research publications output (1990 – 2009)

  9. Total research publications output (1990 – 2009) – The top five. Rule: universities producing more than 10% of total university output

  10. Total research publications output (1990 – 2009) – The middle seven. Rule: universities producing at least 1% of total sector output

  11. Total research publications output (1990 – 2009) – The bottom eleven
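
Slides 9 to 11 apply a simple share-of-total rule to tier the universities: the top tier produces more than 10% of total sector output, the middle tier at least 1%, and the remainder form the bottom tier. A minimal sketch of that rule (the output figures below are invented):

```python
# Sketch of the tiering rule from slides 9-11: top tier > 10% of total
# sector output, middle tier >= 1%, the rest form the bottom tier.
# The output figures are invented for illustration.
outputs = {"Univ A": 21000, "Univ B": 18000, "Univ C": 2500, "Univ D": 300}
total = sum(outputs.values())

tiers = {"top": [], "middle": [], "bottom": []}
for university, units in outputs.items():
    share = units / total
    if share > 0.10:
        tiers["top"].append(university)
    elif share >= 0.01:
        tiers["middle"].append(university)
    else:
        tiers["bottom"].append(university)

print(tiers)  # {'top': ['Univ A', 'Univ B'], 'middle': ['Univ C'], 'bottom': ['Univ D']}
```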

  12. Total DHET research output: 1990 - 2009

  13. Highest versus lowest output institutions

  14. A comment on institutional differentiation The statistics presented thus far on institutional output refer only to absolute output and have not been normalized for the size (i.e. academic capacity) of institutions. In the following two graphs we first present the rankings i.t.o. research output (normalized for the number of permanent staff) and then the rankings i.t.o. knowledge output (Masters and Doctoral graduates included), also normalized for the size of the academic staff. A comparison of the two rankings reveals some interesting shifts (most notably for NMMU, UNISA and some of the UoTs), but the overall difference in normalized output between the top and bottom universities remains huge.
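
The normalization described here divides output by the number of permanent academic staff; the “knowledge output” variant also folds in Masters and Doctoral graduates. A minimal sketch, with invented figures and graduate weights (the presentation does not specify the actual weighting scheme behind its “normed knowledge production” ranking):

```python
# Sketch of size-normalized output. The staff/graduate figures and the
# graduate weights are invented; the actual weighting scheme used in the
# presentation's rankings is not specified there.
universities = {
    # name: (article units, masters graduates, doctoral graduates, permanent staff)
    "Univ A": (1500.0, 400, 120, 900),
    "Univ B": (300.0, 150, 20, 450),
}

def per_capita_output(units, staff):
    """Research output per permanent academic staff member."""
    return units / staff

def normed_knowledge_output(units, masters, doctorates, staff,
                            w_masters=1.0, w_doctorate=3.0):
    """Weighted publications-plus-graduates per permanent staff member."""
    return (units + w_masters * masters + w_doctorate * doctorates) / staff

for name, (units, m, d, staff) in universities.items():
    print(name,
          round(per_capita_output(units, staff), 2),            # e.g. 1.67
          round(normed_knowledge_output(units, m, d, staff), 2))  # e.g. 2.51
```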

  15. Ranking of universities i.t.o. per capita research output (2009)

  16. Ranking of universities according to average normed knowledge production (2007 – 2009)

  17. Proposition 3 SA universities vary hugely in terms of the “shape” of their knowledge production. The big differences in the scientific field profiles of the universities are clearly a function of institutional histories (e.g. having a medical school or a faculty of theology) and institutional missions (research-intensive universities versus more teaching-oriented universities and ex-technikons).

  18. Shape of knowledge production (1990 – 2005)

  19. Proposition 4 The distribution of research output by journal index (ISI, IBSS and “SA”) varies hugely. The differences between the universities on this dimension are mainly a function of the shape of knowledge production at each university, but clearly also of other factors such as institutional histories, language of publishing, and so on. One of the immediate results of these differences is their impact on university rankings.

  20. Distribution of university output by journal index (1990 – 2005)

  21. Proposition 5 University research output has become significantly more “international” and “collaborative” over the past 10 – 15 years. South African academics collaborate much more than before; after the isolation of the apartheid sanctions era this was to be expected. But we also collaborate more in fields (such as infectious diseases) where international teams receive huge international funding. Interestingly, there is nothing in the funding framework that actively encourages collaborative research; if anything, the opposite. But one has to add immediately that this “negative” feature of the framework is offset by the positive effects of collaborative publishing, as demonstrated in higher citations and greater visibility.

  22. Co-authorship trends by university for the period 1996 to 2007 (ISI-papers only)
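
Co-authorship trends of the kind plotted in slide 22 are typically measured as the share of an institution's papers with authors at more than one institution, or with at least one foreign address. A minimal sketch of the international variant (the paper records are invented):

```python
# Sketch: share of papers in a given year that have at least one
# non-South African author address. Records are hypothetical.
papers = [
    {"year": 1996, "countries": ["ZA"]},
    {"year": 1996, "countries": ["ZA", "US"]},
    {"year": 2007, "countries": ["ZA", "GB", "ZA"]},
    {"year": 2007, "countries": ["ZA", "ZA"]},
]

def international_share(papers, year):
    year_papers = [p for p in papers if p["year"] == year]
    intl = [p for p in year_papers
            if any(c != "ZA" for c in p["countries"])]
    return len(intl) / len(year_papers)

for year in (1996, 2007):
    print(year, international_share(papers, year))  # 0.5 for both years here
```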

  23. Proposition 6 The impact of SA’s research production has increased significantly over the past 15 years, mostly because of collaborative publishing (in high-impact journals) and possibly also because of increased research in highly visible research areas. This is true at the country level, but impact levels differ greatly between institutions.

  24. South African ISI publication output Source: Robert Tijssen (CWTS, Leiden University, Netherlands); CWTS WoS database

  25. South African citation impact (ISI: 2000 – 2010) Source: Robert Tijssen (CWTS, Leiden University, Netherlands); CWTS WoS database

  26. Trends in field-normalised citation impact (“top five”)
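
Field-normalised citation impact of the kind CWTS computes compares each paper's citations with the world average for papers in the same field and year, then averages those ratios (a score of 1.0 means world average). A minimal sketch of that calculation (the baseline values and papers are invented):

```python
# Sketch of a field-normalised citation score: each paper's citation count
# is divided by the world average for its field and publication year, and
# the ratios are averaged. All numbers below are invented.
world_baseline = {
    ("Infectious Diseases", 2004): 12.0,  # world avg citations per paper
    ("Mathematics", 2004): 3.0,
}

papers = [
    {"field": "Infectious Diseases", "year": 2004, "citations": 24},  # ratio 2.0
    {"field": "Mathematics", "year": 2004, "citations": 3},           # ratio 1.0
]

def field_normalised_impact(papers, baseline):
    ratios = [p["citations"] / baseline[(p["field"], p["year"])]
              for p in papers]
    return sum(ratios) / len(ratios)

print(field_normalised_impact(papers, world_baseline))  # 1.5: 50% above world average
```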

  27. Trends in citation impact of other ‘second tier’ research active universities

  28. Distribution of citation impact across fields of science (2004 – 2007)

  29. Enablers of productivity and impact

  30. Explaining the differences in research production
  • International trends: the “demands” created by international rankings
  • National steering instruments (the revised funding scheme, the expanded SA presence in ISI, and the NRF rating system), which have led to:
  - an increase in research output
  - an increase in ISI production
  • Institutional capacities (Merton and cumulative advantage theory)
  • Institutional histories and structures
  • Institutional strategies (overleaf)

  31. Institutional enablers We have seen how the institutional differences in research productive capacity have remained pretty much unchanged for the past 20 years. But how have the most productive universities (the top 5 – 7) managed to increase their absolute output so much more than some of the weakest institutions? How have some universities managed to increase their international visibility and impact so much more significantly than others? There are at least two plausible (and complementary) explanations; both relate to the human capital base. The first is evidence showing that the top universities are not necessarily more productive at the individual level: they simply manage to broaden the active research base within the institution (cf. next slide). The second is the very persuasive evidence of a very strong correlation between the proportion of staff with doctorates and per capita research output (cf. following slide).

  32. Depth of the human knowledge base (WITS, UCT and UKZN)

  33. Comparison of WITS, UCT and UKZN i.t.o. research productivity

  34. Productivity (average number of papers per permanent academic staff member) and the percentage of permanent academic staff with PhDs, by individual university and total headcount of permanent academic staff
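
The relationship shown in slide 34, between the percentage of staff holding PhDs and papers per staff member, is the kind of pattern a simple correlation coefficient quantifies. A minimal sketch computing a Pearson correlation over invented institutional data points:

```python
# Sketch: Pearson correlation between the % of permanent academic staff
# holding a PhD and papers per staff member. Data points are invented.
import statistics

pct_phd = [25, 40, 55, 60, 70]                 # % of permanent staff with a PhD
papers_per_capita = [0.2, 0.4, 0.7, 0.8, 1.1]  # papers per staff member

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

print(round(pearson(pct_phd, papers_per_capita), 3))  # ~0.988: strongly positive
```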

  35. In conclusion
  • We undoubtedly have a highly differentiated university sector when assessed in terms of key and relevant indicators
  • Some of the “causes” of these differences reflect the path-dependency of historical factors, missions and structures. Other differences are the result of more recent institutional responses to international and national policies, strategies and incentives.
  • I have argued that the trends presented show that there are identifiable enabling mechanisms and drivers that promote greater productivity and international impact, even within a differentiated system.

  36. The end
