

Concepts, scope and limitations of evaluating E-science policy. Kate Barker, Jakob Edler, Kieron Flanagan, Manchester Institute of Innovation Research, Manchester Business School. UK e-Science ALL HANDS MEETING 2008 - "Crossing Boundaries", 10 September, Edinburgh, UK.





Presentation Transcript


  1. Concepts, scope and limitations of evaluating E-science policy. Kate Barker, Jakob Edler, Kieron Flanagan, Manchester Institute of Innovation Research, Manchester Business School, UK. e-Science ALL HANDS MEETING 2008 - "Crossing Boundaries", 10 September, Edinburgh, UK

  2. Content
  • E-science as investment in infrastructure
  • Evaluation of e-science – starting from first principles
  • Impact dimensions and examples from impact assessment of EC e-infrastructures
  • Knowledge dynamics and e-science
  • Possible ways forward

  3. Infrastructure as an element of ‘research capacity’
  • Capacity is mobilised in the research process to create various outcomes:
  • cognitive developments (new concepts and theories)
  • new data sets
  • impacts upon teaching and learning
  • wider impacts upon research ‘users’ and practice
  • impacts on infrastructure innovation

  4. Special features of research infrastructures – what about e-science?
  • Distinctive innovation dynamics
  • Most high-tech innovation is supplier-driven
  • Much research infrastructure innovation is user-led and involves a complex interaction between demanding users and potential suppliers
  • Expectations of economic effects usually too high
  • Distinctive funding dynamics
  • Challenges of assembling infrastructure
  • Small items funded through project grants
  • Very large-scale infrastructure through special funding
  • Researchers often complain of a ‘missing middle’ – lack of funding for mid-range items (for e-science – middleware)

  5. Special features of research infrastructures 2 – relevance for e-science?
  • Challenges of funding
  • tendency to underfund the ongoing support costs of infrastructure, and to underestimate the complementary investment needed in order to maximise the original investment (physical access)
  • opportunity costs of major investments – see above with e-science
  • Challenges of management and provision
  • often limited scope for pooling/sharing
  • sensitivity to different access and charging models
  • public vs. private users
  • Some relevance of these generic issues to e-science

  6. Special features of research infrastructures 3 – relevance for e-science?
  • Dynamics of scientific production – impacts of research infrastructures upon this
  • Detailed histories and some anthropologies of science performed on major research infrastructures (classic histories of CERN)
  • Work on peculiar scientific output patterns from research infrastructures, e.g. telescopes
  • Evaluation/policy studies need these insights to avoid ‘blind’ evaluations
  • For e-science – transformations seem to be profound and are likely to differ between disciplines – but there is a lack of fundamental sociology/anthropology of e-science to provide grounding for evaluations

  7. Evaluation of e-science – first principles...
  • Evaluation has a purpose... what purpose? Justify funding, measure impacts, learn lessons for improvement, map outcomes and transformations, benchmark performance, inform a continue/terminate decision (classical evaluation)
  • What audience/customer?
  • Which level of intervention, which unit of analysis?
  • A specific infrastructure, specific project or programme?
  • Funding system (within one discipline), or overall funding system?
  • National infrastructure policy (how well does the UK provide scientists with what they need to deliver)?
  • Infrastructure policy as part of sectoral/technological/issue-driven policy

  8. Evaluation principles – evaluability
  • Are there objectives and goals to evaluate against?
  • Do these need to be constructed/reconstructed?
  • Are they concrete, or aspirational and vague?
  • Are there data, or possibilities to collect data, to measure progress against goals?
  • Later on – evaluation would question whether the goals were the right ones

  9. Impact dimensions – first principles
  • Evaluating what impact, and on whom? Impact of e-science on... science
  • Mode of knowledge production: change in the ways knowledge is produced – following a certain type of model that dominates the interpretation of what e-science is and can be?
  • Organisational and behavioural adaptations (or the failure to adapt)
  • Adaptations for better or worse – are e-scientists only part of the story?
  • Direction(s) of knowledge production: does e-infrastructure influence what is researched – and/or what is funded ("we do X and not Z because for X we need the large infrastructure and can use our data...")?
  • Socio-economic impacts
  • Second order impacts of new knowledge for the creation of socio-economic effects
  • Second order impacts on ICT suppliers, spin-offs from the technology and software

  10. Impact dimensions – first principles contd
  • Efficiency of knowledge production (costs): does e-science enable cost-sharing, scale and scope effects to deliver more efficiently what needs to be delivered? How to weigh this against enormous investment and opportunity costs? How to find the optimal level of investment (why 250 million on e-science?) – a toy illustration follows below
  • Dissemination and impact of the knowledge produced, societal and economic effects: did e-science enable or improve delivery?
  • Diffusion of the e-science technology and its complementary technologies, spillover effects beyond (public) science
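To make the cost-sharing question concrete, here is a toy calculation (all figures hypothetical, not drawn from the talk): suppose n research groups could each build a standalone capability for cost C_own, or instead share one e-infrastructure costing C_shared plus a per-group coordination/support cost c. Sharing is the cheaper option when

```latex
\frac{C_{\mathrm{shared}}}{n} + c < C_{\mathrm{own}}
\quad\Longleftrightarrow\quad
n > \frac{C_{\mathrm{shared}}}{C_{\mathrm{own}} - c}
```

For example, with C_shared = 100, C_own = 15 and c = 2 (arbitrary units), sharing pays off from n = 8 groups onward (100/8 + 2 = 14.5 < 15). The slide's further point is that a real evaluation must also net off the opportunity cost of tying up the 100 units elsewhere, which this toy comparison ignores.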

  11. Experience – evaluation of the impacts of the EC Research Infrastructures Programme
  • Study by Ramboll-Matrix consultancies, MIoIR advising, 2007-2008 – still in progress – covering research infrastructures and e-infrastructures funded under FP6
  • Purpose of evaluation – policy learning for future programmes; evidence of impacts and effects upon European research, in the policy context of aims to make progress in the European Research Area and the standing of European research vis-à-vis the rest of the world

  12. Ramboll-Matrix evaluation of e-infrastructures
  • Collection of data on new networks created, new areas of research opened up, new users of e-science, new standards and protocols
  • Using a structured questionnaire (online) and personal interviews in which reported impacts are explored and the degree of attribution to the EC programme is estimated; economic spin-offs and social impacts are also asked about (a sketch of this tabulation logic follows below)
  • We avoided publication counting
  • Case studies to provide more flesh and stories of paths to impact
  • Typical approaches in S&T evaluation to get over the problems of under-reporting and missing paths to impact
  • Typical evaluation problems encountered – going outside science becomes tenuous, lack of reliable hard data, most impacts are on the science
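A minimal sketch of how "reported impact discounted by attribution" might be tabulated. This is purely illustrative: the dimension names, 0-4 scoring scale and attribution shares are assumptions, not the actual Ramboll-Matrix instrument.

```python
from dataclasses import dataclass

@dataclass
class ReportedImpact:
    """One survey/interview response about one impact dimension (hypothetical schema)."""
    project: str
    dimension: str        # e.g. "new networks", "new standards and protocols"
    reported_score: int   # self-reported strength of impact, say 0-4
    attribution: float    # share attributable to the EC programme (0.0-1.0, from interview)

def attributed_totals(responses: list[ReportedImpact]) -> dict[str, float]:
    """Aggregate reported impact per dimension, discounted by attribution to the programme."""
    totals: dict[str, float] = {}
    for r in responses:
        totals[r.dimension] = totals.get(r.dimension, 0.0) + r.reported_score * r.attribution
    return totals

# Invented example data, purely to show the shape of the calculation.
responses = [
    ReportedImpact("GridProjectA", "new networks", 4, 0.8),
    ReportedImpact("GridProjectA", "new standards", 3, 0.5),
    ReportedImpact("DataProjectB", "new networks", 2, 0.3),
]
print(attributed_totals(responses))  # {'new networks': 3.8, 'new standards': 1.5}
```

The design point is that self-reported impacts are never taken at face value: each is multiplied by an interview-derived attribution share before aggregation, so impacts that would have happened anyway do not inflate the programme's measured effect.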

  13. Policy level evaluation for e-science
  • Needs to differentiate and clarify with policy makers and stakeholders: the explicit goals of the e-science policy (first order, second order) – (re)construct them? Cannot evaluate against vague aspirational policy statements
  • Look for secondary and unexpected effects (on beneficiaries, on other stakeholders)
  • Look for changes in behaviour (changes of practice)
  • Link to other policy goals, e.g. internationalisation of science, innovation, performance of the ICT sector, creation of highly skilled people (knowledge produced as input somewhere else? Innovation?)
  • Meaning of e-science for the research and innovation system

  14. Impact on knowledge production: keep in mind the specific "knowledge configurations" for different areas
  • Premise: the need for policy instruments (and infrastructures) differs greatly between different scientific fields and their "knowledge configurations". Knowledge configurations are determined by:
  • the specific characteristics of the knowledge dynamics of different research and innovation themes / topics / areas;
  • the institutional set-up (markets; industry structures; regulation; organisation; tradition);
  • the involved actors, their ambition, strategy and power.
  • When evaluating e-science: take those dimensions not only as context variables, but as analytical units.
  • Differentiate for the different science fields and their dynamics!
  • Understand the changes in configurations and the meaning of the infrastructure (before new, critical e-infrastructures were used)
  • Source: Project Group within PRIME Network of Excellence on "ERA-Dynamics", www.prime-noe.org

  15. Limits and drawbacks of evaluating e-science policies
  • Complexity: differences in knowledge configurations between areas are hard to pin down, but probably essential not to miss important effects and changes
  • Data and methods: can indicator analysis do the job? – time lag, boundaries, relative weight of infrastructure (see the sketch of the time-lag problem below)
  • Much more...
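To make the time-lag point concrete, a synthetic sketch (all numbers invented; the point is the shape, not the values): if infrastructure effects on a publication indicator only materialise after a delay, an evaluation run too early will wrongly conclude there was no effect.

```python
# Synthetic illustration of the time-lag problem in indicator analysis.
baseline = 100          # annual publications before the e-infrastructure investment
true_uplift = 0.30      # eventual relative effect of the infrastructure (invented)
lag_years = 3           # years before the effect shows up in publication counts

def indicator(years_since_investment: int) -> float:
    """Publication count: the uplift only appears after the lag has elapsed."""
    if years_since_investment < lag_years:
        return baseline
    return baseline * (1 + true_uplift)

# An evaluation run 2 years in measures no effect; one run 4 years in sees it.
for t in (2, 4):
    change = indicator(t) / baseline - 1
    print(f"year {t}: measured uplift = {change:.0%}")
# year 2: measured uplift = 0%
# year 4: measured uplift = 30%
```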

  16. Thank you
  Contact: Kate Barker, Senior Lecturer
  Manchester Institute of Innovation Research (MIoIR)
  Manchester Business School, University of Manchester
  Harold Hankins Building, Manchester, UK, M13 9PL
  0044 (0) 161 275-0919 5932
  Kate.Barker@mbs.ac.uk
