This paper presents a comprehensive framework for assessing the true openness of open source software (OSS) through an Openness Factor Rating (OFR) scheme. It identifies key categories and traits essential for genuine openness, differentiating between projects that merely claim to be open and those that meet rigorous standards. The OFR is based on an analysis of reproducibility, documentation, and community contributions. Case studies illustrate the application of the rating system across various OSS projects, offering insights into their openness and areas for improvement.
Towards an Openness Rating System for Open Source Software • Wolf Bein, UNLV • Clint Jeffery, University of Idaho
Outline • Motivation & Background • The Open Source Social Contract • The Openness Factor Rating Scheme • Openness Categories • Case Studies • Openness Classes • Conclusions
Motivation • Many “open source” projects are open in name only. • The formal definitions of OSS state only what a project’s license must say. • We rate the traits a project must deliver in order to be truly open.
Cathedral and Bazaar • What is “openness”? • “read” open vs. “write” open • Open code vs. open process and control • Cathedral projects can go to great lengths to provide openness of code
Social Contract of Open Source • Open source projects are science experiments • First principle of openness: reproducibility (buildable and runnable) • Second principle of openness: documentation (Markle’s corollary)
Openness Factor Rating Scheme • Categories and rubric for assessing the openness of OSS • Geometric average of the n category ratings • OFR = (C1 × C2 × … × Cn)^(1/n) • In version 1, n = 9 (see the sketch below)
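To make the formula concrete, here is a minimal Python sketch of the geometric average; the function name and the nine example scores are hypothetical illustrations, not values from the paper.

```python
from math import prod

def ofr(ratings):
    """Openness Factor Rating: the geometric mean of the n category ratings."""
    return prod(ratings) ** (1.0 / len(ratings))

# Hypothetical scores for the nine version-1 categories.
scores = [1.0, 0.9, 0.8, 0.9, 1.0, 0.9, 0.8, 1.0, 0.9]
print(round(ofr(scores), 2))  # -> 0.91
```

The geometric (rather than arithmetic) mean means a single very low category score drags the whole rating down, which matches the scheme's intent of penalizing projects that fail badly in any one area.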
Openness Categories • Language portability • Library portability • Users • Contributors • User documentation • Build documentation • Source documentation • Repository permanence • License
Portability • Language portability • 1.0 for C/C++/Java • Other languages rated by market share • Includes language version dependence • Library portability • 1.0 = no non-ANSI/non-ISO libraries • Other libraries rated by market share • Penalties for compiler limitations and excessive size
Population • Rationale: more eyes = more open • Users: generate (or reflect) openness, accessibility, internationalization • 1.0 = 100K+; 0.9 = 10K+; 0.8 = 1K+; 0.7 = 100+ • Contributors: a form of proof that the software is built successfully by many • 1.0 = 1000+; 0.9 = 100+; 0.8 = 10+ (see the sketch below)
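As an illustration, the population thresholds above can be encoded as small lookup tables; the bracket values mirror the slide, but the function name and the 0.5 floor below the smallest bracket are assumptions.

```python
def population_score(count, brackets):
    """Return the score of the largest bracket the count reaches."""
    for threshold, score in brackets:  # brackets listed high to low
        if count >= threshold:
            return score
    return 0.5  # assumed floor below the smallest bracket (not stated on the slide)

USER_BRACKETS = [(100_000, 1.0), (10_000, 0.9), (1_000, 0.8), (100, 0.7)]
CONTRIBUTOR_BRACKETS = [(1_000, 1.0), (100, 0.9), (10, 0.8)]

print(population_score(25_000, USER_BRACKETS))     # 0.9 (10K+ users)
print(population_score(42, CONTRIBUTOR_BRACKETS))  # 0.8 (10+ contributors)
```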
Documentation • User documentation • 1.0 = good books; 0.9 = TRs, articles • 0.8 = website, online help; 0.5 = documented • Build documentation • 1.0 = automated or well documented • 0.5 = build requires training • Source documentation • 1.0 = books on the implementation • 0.9 = design docs; 0.8 = code comments
Misc • License • 1.0 = public domain; 0.9 = liberal; 0.8 = GNU • Repository permanence • 1.0 = many mirrors • 0.9 = major third-party repository • 0.8 = 1-2 major institutional sites • 0.7 = canonical but obscure site
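For ordinal rubrics like the license and repository tiers, plain lookup tables suffice; this sketch paraphrases the slide's tier labels as dictionary keys, an encoding chosen for illustration rather than anything prescribed by the paper.

```python
# Ordinal rubric tiers from the slide, encoded as lookup tables.
# The key strings are paraphrases of the tiers, chosen for illustration.
LICENSE_SCORES = {"public domain": 1.0, "liberal": 0.9, "GNU": 0.8}
REPOSITORY_SCORES = {
    "many mirrors": 1.0,
    "major third-party repository": 0.9,
    "1-2 major institutional sites": 0.8,
    "canonical but obscure site": 0.7,
}

print(LICENSE_SCORES["GNU"], REPOSITORY_SCORES["many mirrors"])  # 0.8 1.0
```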
Case Studies • Galib 0.77: cathedral, institutional • Unicon 0.85: niche, library pains • Alice 0.49: no build doc, cathedral • Linux 0.95: assembler • Open Office 0.93: repository • LaTeX 0.90: cathedral, LPPL license • OpenSolaris 0.90: compiler sensitive • SecondLife 0.68: major library issues • Freespire 0.69: non-open mix-ins • MediaPortal 0.72: build challenges
Openness Classes • Class I: >= 0.9 • Class II: >= 0.75 • Class III: < 0.75 • Two “failing” category scores cap a project at Class II; four cap it at Class III (see the sketch below)
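A minimal sketch of the class assignment, combining the OFR thresholds with the failing-score demotion rule; the cutoff that makes a category score "failing" (below 0.75 here) is an assumption, since the slide does not define it.

```python
from math import prod

def openness_class(ratings, failing_cutoff=0.75):
    """Assign an openness class from the category ratings.

    Assumes a category "fails" when it falls below failing_cutoff;
    the slide states the demotion rule but not the cutoff itself.
    """
    ofr = prod(ratings) ** (1.0 / len(ratings))
    failing = sum(1 for r in ratings if r < failing_cutoff)

    if ofr >= 0.9:
        cls = "I"
    elif ofr >= 0.75:
        cls = "II"
    else:
        cls = "III"

    # Demotion rule: two failing scores cap at Class II, four at Class III.
    if failing >= 4:
        cls = "III"
    elif failing >= 2 and cls == "I":
        cls = "II"
    return cls

print(openness_class([1.0, 0.9, 0.8, 0.9, 1.0, 0.9, 0.8, 1.0, 0.9]))  # I
```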
Conclusions • OFR version 1 is simplistic • Future work: new categories, or better measures for the existing ones • Even so, OFR1 separates the sheep from the goats • We are looking for posers • We hope such a measure will encourage open source projects to become more open