
Industrial Project 234313: Advanced Tool for Automatic Testing. Final Presentation.

Students: Nadia Goshmir, Yulia Koretsky. Supervisor: Shai Rozenrauch.



Presentation Transcript


  1. Industrial Project 234313: Advanced Tool for Automatic Testing. Final Presentation. Students: Nadia Goshmir, Yulia Koretsky. Supervisor: Shai Rozenrauch

  2. Introduction • RADAR (Remote Analysis, Diagnostics and Reporting) is a system developed at Philips to monitor CT scanner performance across the installed base. • The monitoring results are verified semi-manually by V&V (Verification & Validation) engineers; testing coverage is rather low, and the results depend strongly on the available human resources.

  3. Goals • The goal of this project is to create an automated testing environment that allows configuring numerous scenarios, running them automatically, and reporting the testing results without human intervention.

  4. Goals (cont.) • The tool will receive a set of parameters for predefined tests, run them sequentially through the CT RADAR (simulator), and generate reports of the comparison results. If a test did not pass, it will display an explanation when one exists. • Advanced: a user interface (UI) that lets users add new tests and parameters easily. • Delighter: support for trend tests (based on several days of data per test).

  5. Methodology • The tool is written in C#. • Microsoft's .NET Windows Forms was used to create the UI. • Multithreading enables tests to run simultaneously. • Microsoft SQL Server stores all entered tests and their runs. • SQL queries and an XML analyzer compare the expected values (entered by the user) against the actual values received from the updated DB or XML file. • A tool called "FullPathGenerator" runs the user-selected log files through the processes. • Crystal Reports was used to create the final test report.
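The comparison step above matches user-entered expected values against actual values parsed from a result XML file. A minimal sketch of that idea follows; the real tool is written in C# and its element and parameter names are not public, so everything here (the `<result>` layout, the parameter names) is a hypothetical illustration:

```python
import xml.etree.ElementTree as ET

def compare_expected(xml_text, expected):
    """Return a list of (parameter, expected, actual) mismatches."""
    root = ET.fromstring(xml_text)
    mismatches = []
    for param, exp_value in expected.items():
        node = root.find(param)                      # look up the parameter element
        actual = node.text if node is not None else None
        if actual != exp_value:
            mismatches.append((param, exp_value, actual))
    return mismatches

# hypothetical RADAR result: one matching and one mismatched parameter
xml_text = "<result><TubeCurrent>200</TubeCurrent><Voltage>120</Voltage></result>"
print(compare_expected(xml_text, {"TubeCurrent": "200", "Voltage": "140"}))
# -> [('Voltage', '140', '120')]
```

An empty result list would mean the test passed; each tuple in a non-empty list is exactly the "expected vs. actual" pair that later feeds the final report.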

  6. Achievements An efficiently working tool that addresses all initial requirements and user needs.

  7. Flowchart

  8. Achievements (cont.) • UI (advanced) - a friendly environment in which the user can create and run tests (in batch mode) easily. The opening window lets the user choose whether to create a new test or run saved tests.

  9. Achievements (cont.) If the user chooses to create a new "MultiTest", he enters a window where he can choose the log files (each single file is considered a "Test") to be tested. The user can choose a single test, multiple tests, or a persistent test. In the persistence option, the log files are chosen by a range of dates or a range of zips, and the tool finds all the relevant files in the chosen range. The user can also enter attributes, such as a name or group, that are attached to the test and can later be used to search for it.
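The date-range selection described above can be sketched as follows. This is an illustrative Python fragment, not the tool's C# code, and the file-naming scheme (a `YYYY-MM-DD` date embedded in each log name) is an assumption made for the example:

```python
from datetime import date

def select_logs(filenames, start, end):
    """Keep files whose embedded YYYY-MM-DD date falls in [start, end]."""
    selected = []
    for name in filenames:
        stem = name.rsplit(".", 1)[0]            # e.g. "scanner_2013-05-02"
        d = date.fromisoformat(stem.split("_")[-1])
        if start <= d <= end:
            selected.append(name)
    return sorted(selected)

logs = ["scanner_2013-05-01.log", "scanner_2013-05-02.log", "scanner_2013-06-01.log"]
print(select_logs(logs, date(2013, 5, 1), date(2013, 5, 31)))
# -> ['scanner_2013-05-01.log', 'scanner_2013-05-02.log']
```

The same shape works for a zip range: replace the date parse with whatever ordering key the zips carry.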

  10. Achievements (cont.) • After the user has chosen the files to be tested, he enters a window in which he can enter the expected values for each log file, scrolling back and forth between the files. When finished entering the parameters, he can either run the test or only save it for a future run; both options save the test in the DB.

  11. Achievements (cont.) • If the user initially chose to run old tests, he enters a "Saved Tests" window showing all the MultiTests saved in the database. The user can run multiple MultiTests of his choice, remove them, edit them (which opens the "New Test" window with all fields already loaded), and filter them by attributes.

  12. Achievements (cont.) • Final report - at the end of a test run, in case of failure, a detailed report is created that lists all the failed parts: the MultiTests that failed; for each one, the tests that failed; and for each test, the exact parameters that were mismatched, with their expected versus actual values.

  13. Achievements (cont.) • Low complexity - when a (multi)test is run, all the chosen files are sent to the DFL. Once all are sent, polling starts: the tool checks whether RADAR has updated the DB or an XML file has arrived. When either occurs, a new thread is created to run the DB or XML comparison, respectively, independently of the other tests. Therefore many tests run simultaneously and independently. Locks were used for synchronization.
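The thread-per-arrival pattern with a shared lock can be sketched like this (illustrative Python, not the tool's C# implementation; in the real tool each thread would run a DB or XML comparison rather than the stub shown here):

```python
import threading

results = []
results_lock = threading.Lock()   # locks were used for synchronization

def run_compare(test_id):
    # In the real tool this is the DB/XML comparison for one arrived result.
    outcome = (test_id, "PASS")
    with results_lock:            # serialize access to the shared result list
        results.append(outcome)

# one thread per arrived result, started independently of the others
threads = [threading.Thread(target=run_compare, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))
# -> [(0, 'PASS'), (1, 'PASS'), (2, 'PASS')]
```

The lock matters because appends from several comparison threads would otherwise race on the shared report data; sorting at the end hides the nondeterministic completion order.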

  14. Achievements (cont.) • Easy future development - the tool is programmed to be as object-oriented as possible. Future additions can be conducted very easily thanks to the virtual base classes.
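One way to picture the virtual-base-class extension point is a common comparer base with interchangeable subclasses. The class and method names below are purely illustrative (the slide does not name the tool's actual classes), and the sketch is Python rather than the tool's C#:

```python
from abc import ABC, abstractmethod

class ResultComparer(ABC):
    """Base class: a new kind of result check only needs a new subclass."""
    @abstractmethod
    def compare(self, expected, actual):
        """Return True when actual matches expected."""

class ExactComparer(ResultComparer):
    def compare(self, expected, actual):
        return expected == actual

class ToleranceComparer(ResultComparer):
    def __init__(self, tol):
        self.tol = tol
    def compare(self, expected, actual):
        return abs(expected - actual) <= self.tol

print(ExactComparer().compare(120, 120),
      ToleranceComparer(0.1).compare(1.0, 1.3))
# -> True False
```

Code that holds a `ResultComparer` reference never needs to change when a new comparison style is added, which is the "easy future development" property the slide claims.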

  15. Achievements (cont.) • Trend support (Delighter) - the tool supports a "persistence test": the user can choose multiple zips, which are sorted and sent to the DFL in the right order, so that RADAR can analyze them properly and a test of this sort runs as expected. • Batch-mode support - the user can create tests during the day, select in the "Saved Tests" window all the tests he wants to run (hundreds), and go home; in the morning a detailed report will be waiting for him. If a test fails for an unexpected reason, exception handling prevents it from ruining the whole run and allows the run to finish.
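The per-test exception handling that keeps an overnight batch alive can be sketched as follows (illustrative Python with made-up test names; the real tool records these outcomes in its Crystal Reports output):

```python
def run_batch(tests):
    """Run every (name, fn) test; one crash must not stop the batch."""
    report = []
    for name, fn in tests:
        try:
            report.append((name, "PASS" if fn() else "FAIL"))
        except Exception as e:                 # unexpected failure: record and move on
            report.append((name, f"ERROR: {e}"))
    return report

tests = [
    ("t1", lambda: True),
    ("t2", lambda: 1 / 0),   # blows up, but the run continues
    ("t3", lambda: False),
]
print(run_batch(tests))
# -> [('t1', 'PASS'), ('t2', 'ERROR: division by zero'), ('t3', 'FAIL')]
```

Catching per test rather than around the whole loop is exactly what lets hundreds of queued tests finish by morning even when one of them hits an unexpected error.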

  16. Achievements (cont.) • Database - all the tests and runs are stored in the DB. • Documentation - a user manual, a developers' manual, and inline documentation of the code.

  17. Conclusions • Design is a crucial part of working on a large program, and well worth the time invested. • All sides involved should be coordinated. • The course is a good way to gain experience with "real" software projects, become familiar with a project's life cycle, and learn how to conduct oneself in a team.

  18. The end
