
Reuse Working Group Breakouts #1 & #3 – Reuse Readiness Levels




  1. Reuse Working Group Breakouts #1 & #3 – Reuse Readiness Levels. 9th Earth Science Data Systems Working Group Meeting, New Orleans, LA, October 20–22, 2010

  2. Part 1: Overview of RRLs. Version 1.0 (April 30, 2010). http://www.esdswg.org/softwarereuse/Resources/rrls/

  3. RRL v1.0 Topic Area Levels

  4. Overall RRLs v1.0

  5. Part 1: RRL Calculator Tools

  6. Background. Earlier discussions on RRLs indicated that using a (weighted) average of topic area levels could be a useful way of determining an overall RRL for a reusable asset. The WG thought an RRL calculator would be a useful tool, so one was developed. The RRL calculator web page:
  • Is written in PHP
  • Provides descriptions of the topic area levels and overall RRLs to assist users in making assessments
  • Input: topic area level scores (1–9) and, optionally, a weighting factor (0–100, 0 = do not use) for each topic area
  • Output: a (possibly weighted) average of the topic area level scores, which can be used as the overall RRL (a sketch of this calculation follows)
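As a rough illustration of the calculation described above (the calculator's PHP source is not reproduced here, so the function name, data layout, and error handling below are assumptions), a weighted average in which a weight of 0 excludes a topic area might look like this in Python, the language mentioned later as a porting target:

```python
def overall_rrl(scores, weights=None):
    """Compute an overall RRL as the (weighted) average of topic area scores.

    scores:  dict mapping topic area -> level score (1-9)
    weights: optional dict mapping topic area -> weight (0-100);
             a weight of 0 means "do not use" that topic area.
    """
    if weights is None:
        weights = {topic: 1 for topic in scores}  # plain average
    total = 0.0
    weight_sum = 0.0
    for topic, score in scores.items():
        w = weights.get(topic, 0)
        if w == 0:  # 0 = exclude this topic area from the average
            continue
        if not 1 <= score <= 9:
            raise ValueError(f"{topic}: score must be 1-9, got {score}")
        total += w * score
        weight_sum += w
    if weight_sum == 0:
        raise ValueError("every topic area was excluded")
    return total / weight_sum

# Unweighted example over the nine RRL topic areas:
scores = {"Documentation": 5, "Extensibility": 4, "Licensing": 9,
          "Modularity": 6, "Packaging": 3, "Portability": 5,
          "Standards compliance": 7, "Support": 2,
          "Verification/Testing": 4}
print(overall_rrl(scores))  # 5.0
```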

  7. Screenshots. As scores and weights are changed, the short description following each topic title updates, as does the overall RRL calculated by (weighted) average. Longer descriptions of the topic area levels are displayed by clicking a topic area title.

  8. Points for Discussion.
  • How can we improve this RRL calculator tool? Create some type of summary report that the user can save for future reference and/or posting?
  • Is it a useful/valuable tool, something we should continue to work on?
  • Can (or should) it be copied/ported to other web sites for use on other systems? We have started looking into how to port the PHP code into the Python-based portal web site.
  • Suggestions for integration of this tool into the RES?

  9. Contributed Excel Calculator. Frédéric Coisman of the DCNS Group (dcnsgroup.com) was interested in the RRLs and wanted a calculator tool. Ours is not publicly available, so he developed an Excel-based tool using the AFRL Technology Readiness Level Calculator V2.2 by William L. Nolte as a guide. We asked that the tool carry a license under which we could distribute it and users could modify it for their needs, so he placed it under a Creative Commons Attribution-ShareAlike 3.0 Unported License. Files available on WebDrive until MM/DD/YYYY URL

  10. Readme File Tool Views (1 of 2)

  11. Readme File Tool Views (2 of 2). The tool reports three marks (sketched in code below):
  • Strict Calculation Mark: integer part = highest level completed; decimal part = %count value of the next level
  • Compound Average Mark: average of the unitary topic RRL values according to the selected topic weights
  • Global Mark: computes a %topic_count for each topic (the number of checked assessments for the topic divided by the total number of assessments for the topic); those %topic_counts are then summed up according to topic filtering and the compound weights
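As an illustration only (the workbook's actual formulas are not reproduced here, so the data layout, function names, and the normalization in the Global Mark are assumptions), the three marks might be computed along these lines:

```python
def strict_calculation_mark(levels):
    """Strict Calculation Mark. levels is a list of (checked, total)
    assessment counts, index 0 = level 1. The integer part is the
    highest level fully completed; the decimal part is the fraction
    (%count) of the next level's assessments that are checked."""
    completed = 0
    for checked, total in levels:
        if checked == total:
            completed += 1                      # level fully completed
        else:
            return completed + checked / total  # partial next level
    return float(completed)

def compound_average_mark(topic_rrls, weights):
    """Compound Average Mark: weighted average of the unitary
    per-topic RRL values, using the selected topic weights."""
    total_weight = sum(weights.values())
    return sum(topic_rrls[t] * w for t, w in weights.items()) / total_weight

def global_mark(checklists, weights):
    """Global Mark: per-topic completion ratio (%topic_count =
    checked assessments / total assessments for the topic), then a
    weighted sum of those ratios. Normalizing by the total weight is
    an assumption here."""
    total_weight = sum(weights.values())
    return sum((checked / total) * weights[topic]
               for topic, (checked, total) in checklists.items()) / total_weight
```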

  12. Points for Discussion.
  • How well does this tool capture the RRLs? Is the interpretation of the levels appropriate? Are the breakdowns and individual questions appropriate? Are the calculations appropriate?
  • Are there suggestions for modifications/improvements that we should pass along to the creator?
  • Should we distribute this tool through our web site?

  13. Part 2: Updates and Planning for v1.1

  14. Possible Updates. Received some comments and feedback about the RRLs, from WG members and others, after v1.0 was released. Collected these into a draft v1.1 document to make it easier for us to address them and develop an update to the RRLs. Some of the suggestions included:
  • Group multiple RRLs together into categories of readiness
  • Check that all topic area levels are combined into each RRL
  • Consider whether 9 levels are needed
  • Are the levels quantitative enough to allow consistent assessments?
  • Consider improving definitions/descriptions of levels and terms
  • Consider use vs. reuse, defining them and how each is included
  • How does open source play into the RRLs?
  • More clearly define the target audience
  • Consider how white box reuse, black box reuse, VMware, reuse after long periods of time, etc. are addressed
  • Consider how cost and risk are factored into levels
  There were also some suggestions specific to certain topic areas.

  15. Planning for v1.1. Need to consider all of the comments received and how we are going to address them, as necessary. Some comments are fairly general; others are specific to topic areas or even certain topic area levels.
  • Can we get the original topic area level authors to help on the more specific feedback?
  • Should we get new volunteers to help review (fresh eyes)?
  • How are we going to get to a v1.1 update release?

  16. Part 2: Integration and Use Cases in DSMs

  17. RRLs and the DSMs. Bob has been working with the ICESat-2 mission to develop a procedure/template for making RRL assessments. Also received some interest from people related to GPM about making RRL assessments.
  • How to develop and release the assessment template?
  • Will probably want to use these initial assessments by users to check level consistency and help refine the RRLs. How to use that feedback for improvements?

  18. Backup Slides

  19. Introduction to RRLs. Having a measure of the reusability of an asset:
  • Provides potential reusers with additional information about the reuse maturity of the asset: lets them know what they're getting and gives them a basic feel for what modifications may be needed
  • Helps potential reusers make better informed choices about what to reuse and what best meets their needs
  • Can be used as a piece of metadata for assets placed in the proposed RES (or anywhere else)
  In an iterative process, volunteers from the WG:
  • Wrote an initial set of levels for each topic,
  • Drafted summaries of each RRL, looking across all topic areas at each level,
  • Created a set of summary RRLs with descriptions by combining information from all topics at the same level, and
  • Made suggested revisions to RRLs and topic area levels based on feedback received from the community.
  The WG is reviewing/revising use cases that have been developed. The WG plans to use the RRLs to assess some reusable assets and use the results to revise the levels as needed to ensure consistent assessments can be made.

  20. RRL Topic Areas and Levels. Topic areas included: Documentation, Extensibility, Licensing, Modularity, Packaging, Portability, Standards compliance, Support, and Verification/Testing. A scale of 1–9 was used to match the Technology Readiness Level (TRL) scale. Topic levels were combined into a single RRL scale. A sample RRL calculator was also developed (currently only on the prototype RES) as a means to help providers and consumers rate the reusability of software assets by calculating weighted averages of topic area levels.
  Example from Verification/Testing, RRL 4 – Software application tested and validated in laboratory environment: Following successful testing of inputs and outputs, the testing would include integrating an application to establish that the "pieces" will work together to achieve concept-enabling levels. This validation must be devised to support the concept that was formulated earlier and should also be consistent with the requirements of potential system applications. The validation is relatively "low-fidelity" compared to the eventual system: it could be composed of ad hoc discrete components in a laboratory; for example, an application tested with simulated inputs.

  21. Summary of RRL Use Cases

  22. Current Draft RRL Use Cases.
  Software Contribution:
  • Planning Software Development
  • Identifying Software Development Components to be Created
  • Identifying Standards for Creating Software Development Components
  • Creating Deliverables of Software Development Teams
  • Evaluating Individual Software Components for Integration
  • Integrating Software Components
  Software Sponsorship:
  • Planning Software Sponsorship
  • Determining the Cost of Software Projects to be Sponsored
  • Establishing Criteria to Assess Software Proposals for Sponsorship
  • Identifying Potential Software Proposals for Sponsorship
  • Evaluating the Quality of Software Proposals for Sponsorship
  • Evaluating Progress of Sponsored Software Projects
  Software Adoption:
  • Planning Software Adoption
  • Identifying Criteria to Assess Software Candidates for Adoption
  • Identifying Potential Software Candidates for Adoption
  • Assessing the Cost of Software Components for Adoption
  • Evaluating the Quality of Software Components for Adoption
  • Determining Progress on Integrating Adopted Components

  23. Points for Discussion. Feedback on the categories of the use cases? Feedback on the use cases mentioned?

  24. Process for Assessing Assets with RRLs

  25. Idea Behind Assessments. RRLs and topic area levels have been revised over a long period of time based on community feedback. RRLs are intended to be used as a measure, and assessments are meant to ensure precision of measurement. This includes consistency of topic areas across a given level and consistency of assessments by different people.
  • Plan to use some other work to inform possible revisions to topic area levels (example: packaging/distribution work for the packaging topic area)
  • Also plan to perform assessments of some reusable assets to check if RRLs need revisions to ensure consistent assessments
  The initial idea is a four-step process:
  1. Selecting reusable assets to assess
  2. Performing the assessments
  3. Analyzing the results of the assessments
  4. Revising the RRLs as necessary

  26. Discussion.
  Selecting reusable assets to assess:
  • What types of assets should we assess?
  • How do we select appropriate assets for refining the RRLs?
  Performing the assessments:
  • How do we ensure that not having a reuse purpose in mind as an end result will not affect the assessments?
  • Should we provide any guidance on how to do assessments?
  Analyzing the results of the assessments:
  • How consistent were the results of different reviewers? (one way to quantify this is sketched below)
  • How can the RRLs be updated to achieve better consistency?
  Revising the RRLs as necessary:
  • How can we use cost/effort, as a fraction or percentage of the total work, to help distinguish or move between RRLs?
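One simple way to quantify reviewer consistency, offered as a sketch rather than anything the WG has specified, is to look at the spread of the scores that different reviewers assign to the same topic area of the same asset (all names and data below are hypothetical):

```python
from statistics import mean, stdev

def consistency_report(assessments):
    """Summarize reviewer agreement per topic area for one asset.

    assessments: dict mapping reviewer -> {topic area: level score (1-9)}.
    Returns {topic area: (mean score, standard deviation)}; a large
    standard deviation flags a topic area whose level definitions may
    need revision to support consistent assessments.
    """
    topics = next(iter(assessments.values())).keys()
    report = {}
    for topic in topics:
        scores = [by_topic[topic] for by_topic in assessments.values()]
        report[topic] = (mean(scores),
                         stdev(scores) if len(scores) > 1 else 0.0)
    return report

# Hypothetical example: three reviewers assess the same asset.
assessments = {
    "reviewer_a": {"Documentation": 5, "Packaging": 3},
    "reviewer_b": {"Documentation": 6, "Packaging": 3},
    "reviewer_c": {"Documentation": 2, "Packaging": 4},
}
for topic, (avg, sd) in consistency_report(assessments).items():
    print(f"{topic}: mean={avg:.1f}, stdev={sd:.2f}")
```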
