This document outlines a set of best practices recommended by a breakout group of experts, including strategies for authors, referees, and editors to improve reproducibility in research publications. It advocates a robust rubric for rating papers, encourages the submission of reproducibility reviews, and suggests integrating test suites and well-documented code. While addressing potential challenges around proprietary software and the difficulty of installing source code, it emphasizes the importance of clear instructions and supportive reviewer feedback, ultimately striving to establish a standard for reproducibility.
Publication Policy Breakout
Peter Olver • Matthias Troyer • Ron Boisvert • Carol Woodward • Neil Calkin • Judy Borwein • Nicolas Limare • Ian Mitchell • Randy LeVeque
Recommendations
• Put forth a set of best practices for what authors should do for reproducibility
• Define a set of procedures for authors, referees, and editors
• Put forth a rubric for rating papers that all can use (a hypothetical sketch follows this list)
  • Individual journals could adapt this as appropriate
• Pull examples from venues where reproducibility is already encouraged:
  • SIGMOD
  • IPOL
  • Ian's conference (he has a draft set of recommendations)
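To make the rubric idea concrete, here is a minimal sketch in Python. The level names and criteria are illustrative assumptions, not something the breakout agreed on; a journal adopting this would substitute its own checklist.

```python
# Hypothetical reproducibility rubric sketch. Level names and criteria
# are illustrative assumptions, not an agreed standard.
from dataclasses import dataclass, field


@dataclass
class RubricLevel:
    name: str
    criteria: list = field(default_factory=list)


RUBRIC = [
    RubricLevel("Documented", [
        "paper states which code, data, and environment produced the results",
    ]),
    RubricLevel("Artifacts available", [
        "source code (or relevant excerpts) and data are archived",
        "build instructions (compilers, make procedure) are included",
    ]),
    RubricLevel("Results replicated", [
        "a reviewer re-ran the code and obtained the published results",
    ]),
]


def rate(paper_properties: set) -> str:
    """Return the highest rubric level whose criteria the paper meets."""
    achieved = "Not reproducible"
    for level in RUBRIC:
        if all(c in paper_properties for c in level.criteria):
            achieved = level.name
        else:
            break
    return achieved
```

The specific criteria matter less than the fact that the levels are ordered and shared, so that ratings are comparable across papers and journals.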
What would best practices include?
• VMs (fully supplied by authors, reference VMs, and partial VMs)
  • Pros: can execute anywhere
  • Cons: big (IPOL does not allow VMs for this reason; reference and partial VMs would make submissions smaller); proprietary software is an issue
• All source code
  • Pros: all source is present
  • Cons: hard to install and run in general (specifying compilers and make procedures can help with this)
• Code excerpts implementing the relevant algorithms (ETH requires this; Science requires this and has retracted papers over it)
  • Pros: protects development investment; lower barrier to submission
  • Cons: hard to run for testing
• Documentation (in and out of code) and instructions for running the code
• Test suites for submitted code and/or algorithms (a minimal sketch follows this list)
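As a sketch of the test-suite item, below is the kind of smoke test a submission might ship alongside its code. The entry point run_experiment.py, its --quick flag, the expected value, and the tolerance are all hypothetical; a real submission would document its own entry point and expected outputs.

```python
# Minimal smoke test a submission might include with its artifacts.
# Assumes a hypothetical entry point "run_experiment.py" that prints a
# single headline number as its last line of output.
import subprocess
import sys

EXPECTED = 0.8734   # headline value reported in the (hypothetical) paper
TOLERANCE = 1e-3    # allow for floating-point / platform variation


def test_headline_result():
    # Re-run the paper's main experiment in a reduced "quick" mode.
    out = subprocess.run(
        [sys.executable, "run_experiment.py", "--quick"],
        capture_output=True, text=True, check=True,
    )
    value = float(out.stdout.strip().splitlines()[-1])
    assert abs(value - EXPECTED) < TOLERANCE, (
        f"got {value}, expected {EXPECTED} +/- {TOLERANCE}"
    )


if __name__ == "__main__":
    test_headline_result()
    print("smoke test passed")
```

Even a single check like this lets a referee confirm the code runs and reproduces a reported number without understanding the full pipeline.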
How to introduce this expectation into the published literature
• Invite accepted papers to submit to a reproducibility review
  • If a paper is not reproducible as submitted, ask for more information to bring it up to the standard
• Develop a special issue where papers undergo a reproducibility review (like an editors' choice issue)
• Overlays for arXiv and other archives or journals (like a certification webpage) with pointers to "certified" papers
• Supplementary journals like SIAM Imaging Science and IPOL
• Certifying journals on sites like RoMEO or Thomson ISI
Other items discussed
• Awards: we generally all thought awards for excellent papers meeting reproducibility criteria would be good. However, we thought it a bit premature now. Let's introduce criteria and standards of best practice first, then introduce awards in 3-5 years.
• Who would do reproducibility reviews? Will senior researchers respect the review?
  • For SIGMOD it has been students and postdocs.
  • Accepted papers have been asked to go through the process, so senior researchers have generally been OK with it.
• How to bring in standards for reproducibility?
  • Make them optional, but bring in a certification system for papers that meet them
  • Positive encouragement from referees and editors: "This is a good paper, but it would be better if..."
  • The key is to ensure editors make referees aware of the expectation and the opportunities
More points
• Refereeing this will take time. How do we handle that?
  • One certification level could certify that the paper contains the information needed for reproduction, even though this has not been checked.
• Results on supercomputers could be problematic. Here, details but not executables could be provided. SIGMOD testing used donated computer time.
• Remaining questions:
  • How to deal with release restrictions from industry and labs?
  • What about papers from numerical analysts with small problems for numerical results?