
MOGENTES 3rd and Final Review, Reporting Period January 2010 – March 2011, Cologne, 26 May 2011






Presentation Transcript


  1. MOGENTES 3rd and Final Review
  Reporting Period January 2010 – March 2011, Cologne, 26 May 2011
  WP0 – Project Overview and Progress
  Manfred Gruber, AIT

  2. Agenda I

  3. Agenda II

  4. Overview: Objectives, Progress, Deviations, Planning

  5. Objectives of the Project
  • Methods and tools for the generation of efficient test cases from system and fault models
  • Framework for the integration of tools that can be easily used by domain experts
  • Increase confidence in safety-relevant embedded systems by improving their testing (incl. conformance with safety standards)
  • Foster application of automated testing for satisfying functional safety standards requirements
  • Reduce testing effort by at least 20%

  6. Progress towards Objectives
  • Methods and tools: theory, coverage criteria, and fault models for mutation-based test case generation (MBTCG) and fault injection (FI); MBTCG for embedded systems; first versions of all TCG tools available; test cases for the demonstration systems have been created, applied, and assessed
  • Framework: improved framework is available (service orientation, distributed execution, traceability, …)
  • Increase confidence in safety-relevant embedded systems: model-based TCG provides better coverage than manual testing
  • Foster application of MBTCG for satisfying safety standards: model-based testing is now (highly) recommended in IEC 61508; 37 conference and 5 journal papers, special MOGENTES workshops
  • Reduce testing effort by at least 20%: automated TCG is (much) less expensive than manual TCG; mutation-based TCG promises increased efficiency of the generated test cases (no irrelevant test cases)
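The mutation-based TCG idea summarized above — keep only test cases that "kill" (distinguish) some mutant of the specification model, so that no irrelevant test cases remain — can be sketched roughly as follows. This is a minimal illustration, not the MOGENTES tooling: the saturation model, the three mutants, and the greedy selection loop are all invented for the example.

```python
# Hypothetical specification model: saturate the input at 10.
def original(x):
    return min(x, 10)

# Hand-written mutants of the model (in real MBTCG these are
# generated automatically from the model by mutation operators).
mutants = [
    lambda x: min(x, 9),   # boundary mutant
    lambda x: max(x, 10),  # operator mutant (min -> max)
    lambda x: x,           # missing-saturation mutant
]

def generate_tests(candidates):
    """Keep only candidates that kill a not-yet-killed mutant."""
    tests, alive = [], set(range(len(mutants)))
    for x in candidates:
        killed = {i for i in alive if mutants[i](x) != original(x)}
        if killed:                 # this input distinguishes a mutant
            tests.append(x)
            alive -= killed        # drop redundant (irrelevant) tests
    return tests, alive

tests, surviving = generate_tests(range(0, 21))
print(tests)      # -> [0, 10, 11]
print(surviving)  # -> set()  (all mutants killed)
```

The "no irrelevant test cases" claim corresponds to the greedy filter: an input is only promoted to a test case if it kills a mutant that no earlier test case killed.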

  7. Project Breakdown & Workflow (workflow diagram; layout lost in transcript)
  WP 1 Requirements · WP 2 Framework · WP 3 Modelling / Test Theory · WP 4 Algorithms, Tools, FI · WP 5 Test-Case Generation · WP 6 Rail Demonstrators · WP 7 Automotive Demonstrators
  Diagram annotations: application elements, requirements, faults to be modelled; needed tools and interfaces; interfaces and transformations; needed conversions; theoretical foundations, language(s), needed modelling aspects; faults (for MCSBFI); test generation method(s) and tool(s); examples to be generated; proposal for final demonstration; demonstration setting

  8. Progress – Framework
  • General tool integration framework that can be used easily by domain experts in industrial environments
  • Service-oriented integration of tools – integration of heterogeneous tools
  • Model-based construction of toolchains that are easy to create
  • Toolchain models are deployed automatically to the workflow engine
  • Uniform, centralized view of different processes
  • Data integration and certification support
  • Traceability of information and execution
  • Extensible framework architecture
  • Easy adaptation to industrial environments
  • Support for load balancing and distributed execution

  9. Progress – Modelling and Testing Theory
  Action-system notation (internal model):
  |[ var v : T := init; methods M1; ...; Mn; actions A1 = g1 -> v := e1; ...; Am = gm -> Mi(ei); do A1 ☐ A2 ; A3 // Am od ]| : MI
  • Modelling languages: UML and Simulink for target applications – core modelling sub-language; Object-Oriented Action Systems – internal model
  • Ontology-based model verification: conformance check against all syntactic and semantic design and testability constraints
  • Fault models: definition of domain-specific fault models
  • Testing theory: Action Systems using ioco and Timed Automata for generating test cases from UML models; mutation-based test case generation for Simulink models; formal concept analysis supporting mutation testing; automated and precise verification of floating-point arithmetic
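The |[ var … do A1 ☐ A2 … od ]| form above describes a do-od loop that repeatedly fires any enabled guarded action until no guard holds. A tiny interpreter can mimic this execution model; this is a sketch only — the (guard, effect) encoding and the deterministic pick of the first enabled action are simplifying assumptions (a real action system chooses nondeterministically among all enabled actions).

```python
# Minimal action-system execution sketch: an action is a
# (guard, effect) pair over a mutable state dictionary.
def run(state, actions):
    while True:
        enabled = [(g, e) for g, e in actions if g(state)]
        if not enabled:
            return state          # do-od terminates: all guards false
        g, e = enabled[0]         # deterministic pick (simplification)
        e(state)

# Hypothetical system: A1 = (v < 3) -> v := v + 1
actions = [
    (lambda s: s["v"] < 3, lambda s: s.update(v=s["v"] + 1)),
]
print(run({"v": 0}, actions))     # -> {'v': 3}
```

Test-case generation in the ioco setting then amounts to exploring such runs of a (mutated) model and extracting input/output sequences that expose the mutation.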

  10. Progress – Algorithms, Tools and Fault Injection
  • Development of methods and test case generation tools based on formal and experimental methods
  • Implemented tool chains (5 tracks): UML – OOAS; UML – UPPAAL; Simulink – CBMC; Simulink – Fault Injection; Prover iLock extensions
  • Framework for model-implemented fault injection in Simulink models: minimal cut sets based on FI experiments; comparison with HW-implemented fault injection; integrated in the tool integration framework
  • Application to demonstration systems, assessment of results
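The "minimal cut sets based on FI experiments" step can be illustrated as follows: treat each fault-injection experiment as an injected fault set plus a pass/fail verdict, keep the failing sets, and discard any failing set that strictly contains another failing set. This is a generic sketch of the minimal-cut-set notion, not the MOGENTES implementation; the fault names and verdicts are invented.

```python
# Hypothetical fault-injection campaign: injected fault set -> did the
# system-level failure occur?
experiments = {
    frozenset({"f1"}): False,
    frozenset({"f2"}): False,
    frozenset({"f1", "f2"}): True,
    frozenset({"f3"}): True,
    frozenset({"f1", "f3"}): True,   # superset of {f3}: not minimal
}

def minimal_cut_sets(experiments):
    """Failing fault sets with no failing strict subset."""
    failing = [s for s, failed in experiments.items() if failed]
    return [s for s in failing
            if not any(t < s for t in failing)]  # strict-subset check

mcs = minimal_cut_sets(experiments)
print(sorted(sorted(s) for s in mcs))  # -> [['f1', 'f2'], ['f3']]
```

In practice the campaign only samples the space of fault combinations, so the computed sets are minimal with respect to the experiments actually run.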

  11. Progress – Proof-of-Concept Demonstrators
  • Railway: interlocking / state analyser
  • Automotive: car alarm, steering anti-catchup
  • Off-highway: bucket control of a wheel loader
  • All methods and tools could be applied and assessed successfully, but there are clear limitations depending on the complexity of the systems under test
  • Follow-up research is needed to solve the state-space explosion problem (one project has already started)
  • Cost / effort reduction estimates, comparison with manual creation
  • Improved test quality (e.g. coverage)

  12. Deviations and Corrective Actions
  • Extension of the project duration by three months
  • Reason: creation of test cases took significantly longer than expected due to the state-space explosion caused by the high inner complexity of some of the demonstrator models; major adaptations to the methods and tools developed were required; the allowed complexity of UML application models had to be constrained in order to show the applicability of the methods and tools
  • Apart from the extension, there were no changes to major project milestones or to the objectives of the project

  13. Costs and Effort
  • Prepayment by the Commission: 2.627.814 € (+ 7.178 € interest generated; 85% of maximum funding)
  • Transfer to Guarantee Fund: 154.999 € (will be released when the final payment is made)
  • For the details of each partner, please refer to the periodic report

  14. Statistics
  • Deliverables planned (M1 – M39): 55
  • Delivered to the Commission in time: 44
  • Deliverables delayed: 10 (3 months), 1 (6 months)
  • Milestones planned (M1 – M39): 39
  • Milestones reached in time: 33
  • Milestones delayed: 2 (3 months), 4 (6 months)

  15. Meetings

  16. Communication – MOGENTES Document Repository (https://www.mogentes.eu)

  17. Review Recommendations*
  On methodologies and tools: The final plan for each tool track (see table 3.1 in D2.2b) should be clarified in the still-under-discussion cases and confirmed in the already-decided cases. Additionally, for each track, a comprehensive list of the modelling language features supported by the test generation tools should be provided, in order to better assess the developed methods and tools and their applicability.
  Response: The final plan for each tool track has been updated in D2.2c (table 3-1), according to their application to the demonstrator use cases. The set of modelling features supported by the test case generation tools is described in deliverable D3.2b.
  * Recommendations made by the reviewers after the 2010 review meeting

  18. Review Recommendations
  On methodologies and tools: The description of the methodologies and tools developed within the project for model-based automatic TCG should include a discussion of the limitations of their applicability (if any) in relation to the complexity of the system under evaluation, as well as of the cost (in terms of time and resources) induced by their application. For example, the qualitative modelling approach, presented in deliverable D3.b with the aim of diminishing the time-consuming testing activity, needs to be better discussed in terms of applicability conditions and cost-benefit aspects.
  Response: In deliverable D5.6, i.e. the assessment report, all test case generation methodologies of MOGENTES are discussed with respect to benefits, limitations, and estimation of the costs (and potential for cost savings) induced by their application. Deliverable D5.6 also contains a discussion of the benefits and implications of qualitative (fault) modelling (clause 7.3.6).

  19. Review Recommendations
  On demonstrators: The four demonstrators set up by the industrial partners are individually excellent proofs of concept for the methods and tools developed by MOGENTES. However, a final discussion should be provided on:
  • how each demonstrator contributed as a proof of concept for the techniques developed during the project, and
  • how they, in combination, provide evidence that the methods and tools developed by the project constitute a satisfactory set for model-based automatic TCG for dependable embedded systems.
  Response: The deliverables D6.3, D6.4, D7.3, and D7.4 discuss these points for the different demonstrator use cases, and D5.6 discusses these outcomes from a more application-independent view.

  20. Review Recommendations
  On demonstrators: Concerning the demonstrator on the railway interlocking system, it should be demonstrated that the virtualization of some of the hardware used by Thales will not have any adverse implications on the real-time behaviour of the system under analysis.
  Response: This is addressed in deliverable D6.3. However, due to the limited number of test cases that could be generated for ELEKTRA, this point could not be demonstrated in depth.

  21. Review Recommendations
  General: Dissemination through the website should be improved, highlighting the most important and appealing achievements of the project. (On the homepage of the website there should be single-click links to the most important / most appealing achievements and public deliverables of the project.)
  Response: The website has been improved as recommended, in particular to provide easier access to the public project deliverables.

  22. Review Recommendations
  General: The consortium should clarify how they will convincingly demonstrate the achievement of at least a 20% reduction in testing time.
  Response: The achievement of the project objectives is described in deliverable D5.6, which covers this topic in particular (as objective 1).
