New models for evaluation for research and researchers


Presentation Transcript


  1. New models for evaluation for research and researchers. Beyond the PDF 2 Panel, 19-20 March 2013. Carole Goble

  2. Why evaluate proposed research? Research value. Gate-keep. Rank. Impact. Novel? Valid & reliable? Useful?

  3. Why evaluate published research? Novel? Defend. Review, test, verify. Transfer. Contribution. Good? Repeatable? Reproducible? Reusable? Comparable?

  4. 47 of 53 “landmark” publications could not be replicated; inadequate cell lines and animal models. Preparing for & supporting reproducibility is HARD: “blue collar” burden, constraints, stealth & graft. Nature, 483, 2012. http://www.reuters.com/article/2012/03/28/us-science-cancer-idUSBRE82R12P20120328

  5. Galaxy Luminosity Profiling: 35 different kinds of annotations; 5 main workflows, 14 nested workflows, 25 scripts, 11 configuration files, 10 software dependencies, 1 web service; dataset: 90 galaxies observed in 3 bands. José Enrique Ruiz (IAA-CSIC)

  6. Contribution Activity – Review Correlation. Accountability & Coercion

  7. What are we evaluating? (Nature, vol. 493, p. 159, 10 January 2013) • Article? • Interpretation? Argument? Ideas? • Instrument? Cell lines? Antibodies? • Data? Software? Method? • Metadata? • Access? Availability? • Blog? Review? • Citizenship?

  8. Why evaluate researchers? • Recognise contribution • To the whole of scholarly life • Track/Measure Quality & Impact • Ideas, Results, Funds, Value for money, Dissemination • Discriminate: Rank and filter • Individual, Institution, Country • Country Club -> Sweatshop • Reputation • Productivity

  9. How do we evaluate? Peer review. Best effort. Re-produce / Re-peat / Re-*. Rigour. Popularity contests. Rigour vs Relevance.

  10. Panelists + Roles: Carole Goble (Manchester): Chair; Steve Pettifer (Utopia): Academic, s/w innovator; Scott Edmunds (GigaScience): Publisher; Jan Reichelt (Mendeley): New scholarship vendor; Christine Borgman (UCLA): Digital librarian/Scholar; Victoria Stodden (Columbia): Policymaker, funder; Phil Bourne (PLoS, UCSD): Institution deans. All are researchers and reviewers.

  11. Disclaimer: The views presented may not be those genuinely held by the person espousing them.

  12. Panel Questions: What does evaluation mean to you? What evaluation would be effective and fair? What responsibility do you bear?

  13. Notes: We didn’t have to use any of the following slides, as the audience asked all the questions or the chair prompted them.

  14. Reproduce mandate. Open solves it. Infrastructure. Another panelist. Conflicting evaluation. Qualitative and quantitative. Faculty promotion. Right time. Convince policy makers. $10K challenge. Who. Johan Bollen.

  15. A Funding Council / Top Journal decrees (without additional resources) that all published research objects must be “reproducible”. How? Is it possible? Necessary? How do we “evaluate” reproducibility? Preparing data sets. Time. Wet science, observation science, computational (data) science, social science.

  16. In a new promotion review, researchers have to show that at least one of their research objects has been used by someone else. Maybe cited. Preferably Used. How will you help?

  17. Do we have the technical infrastructure to reproduce research? Is the research platform linked to the communication platform? Or the incentives?

  18. What is the one thing someone else on the panel could do to support a new model of evaluation? And the one thing they should stop doing?

  19. Should research be evaluated on rigour, reproducibility, discoverability or popularity? Qualitative and Quantitative

  20. When is the right time to evaluate research? During execution? At peer review time? 5 years later? Should we bother to evaluate “grey” scholarship?

  21. What will convince policy makers / funders / publishers to widen their focus from the impact factor to other researcher metrics and other scholarly units? • How will the digital librarian / academic convince them?

  22. Who should evaluate research? And who should not?

  23. Johan Bollen, Indiana University, suggests in a new study of NSF-funded research that we might as well abandon grant peer evaluation and just give everyone a budget, with the provision that recipients must contribute some of their budget to someone they nominate. • Why don’t we do that?
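To make the mechanism in slide 23 concrete, here is a minimal sketch of one round of that kind of self-organised fund allocation: everyone receives the same base budget and must pass a fixed share of it on to peers they nominate. The researcher names, the 0.5 pass-on fraction, and the nomination lists are illustrative assumptions, not details from the slide or from Bollen's study.

# Minimal sketch (illustrative only) of one round of self-organised fund
# allocation: every researcher gets an equal base budget and must
# redistribute a fixed fraction of it to peers they nominate.
# Names, nominations, and the 0.5 fraction are hypothetical assumptions.

BASE_BUDGET = 100_000      # equal allocation per researcher (hypothetical)
PASS_ON_FRACTION = 0.5     # share each researcher must give away (hypothetical)

# Who each researcher chooses to fund (hypothetical nomination lists).
nominations = {
    "alice": ["bob", "carol"],
    "bob": ["carol"],
    "carol": ["alice"],
}

def allocate(nominations, base_budget, pass_on_fraction):
    """Return each researcher's budget after one redistribution round."""
    budgets = {name: base_budget for name in nominations}
    for donor, recipients in nominations.items():
        if not recipients:
            continue
        budgets[donor] -= base_budget * pass_on_fraction
        share = base_budget * pass_on_fraction / len(recipients)
        for recipient in recipients:
            budgets[recipient] += share
    return budgets

print(allocate(nominations, BASE_BUDGET, PASS_ON_FRACTION))
# carol ends up with the largest budget because two peers nominated her.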

  24. If you had $10K what would you spend it on?

  25. Make Everything Open. That solves evaluation, right?

  26. Joined-up evaluation across the scholarly lifecycle, or conflict? Strategy vs Operation.
