
Software Testing Strategies


alangley


Presentation Transcript


  1. Software Testing Strategies

  2. Agenda What is testing? What is quality? Why is testing necessary? What are defects and what causes errors? Types of testing

  3. What is Testing? - Exercise Audition Breathalyzer test Driving test Eye test Final exam at university IQ test Spelling test Test driving a car Test paper What do these tests tell us about the software test process?

  4. What is Testing? These tests have different elements of: Planning and preparation Known goals including pass/fail criteria and risk assessment Judging and evaluation Measurement during a controlled test Report back on the outcome Action as result of the outcome

  5. What is Software testing? Software testing is a process of verifying and validating that a software application: - meets the business and technical requirements that guided its design and development - works as expected Software testing has three main purposes: verification, validation, and defect finding. - Verification confirms that the software meets its technical specifications: compliance with requirements. - Validation confirms that the software meets the business requirements: fitness for expected use. - A defect is a variance between the expected and the actual result. It may be a bug (an error in the code) or a fault introduced in the specification or design phase. Above all, a tester's main goal should be to provide information to the people in charge of the project: test plans, test results, test coverage, data from stress, performance and experience-based tests, etc. Software testing is a measurement of software quality.
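The verification idea on this slide can be sketched in code: a defect is simply a variance between the expected result (from the spec) and the actual result. The `is_leap_year` function and its deliberate bug below are hypothetical illustrations, not part of the slides.

```python
def is_leap_year(year):
    # Deliberately buggy illustration: it forgets the 400-year rule,
    # so is_leap_year(2000) returns False although 2000 is a leap year.
    return year % 4 == 0 and year % 100 != 0

# Verification compares the actual result against the expected result from the spec.
test_cases = [(1996, True), (1900, False), (2000, True)]
for year, expected in test_cases:
    actual = is_leap_year(year)
    status = "PASS" if actual == expected else f"DEFECT: expected {expected}, got {actual}"
    print(f"is_leap_year({year}) -> {status}")
```

Running this flags the 2000 case as a defect: the variance between expected and actual is exactly what testing is designed to surface.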

  6. What is Quality? If testing measures software quality, then... what is quality?

  7. What is Quality? Possible definitions: Product based: quality characteristics User based: "fitness" for use Manufacturing based: conformance to requirements Value based: balancing quality with time and money

  8. Software quality • According to ISO/IEC 9126, software quality consists of: - Functionality (correctness and completeness) - Reliability (fault tolerance, recovery after failure) - Usability (intuitive handling, easy to learn) - Efficiency (e.g. the system requires minimal resources such as CPU time) - Maintainability (how easily the system can be maintained and improved) - Portability (ability to transfer the software to a new environment) • Types of Quality Assurance (QA): - constructive activities to prevent defects, e.g. through appropriate software engineering methods - analytical activities to find defects, e.g. through testing, leading to defects being corrected and failures prevented, hence increasing software quality

  9. QA vs. Testing • Software Quality Assurance - Software QA involves the entire software development PROCESS: monitoring and improving the process, making sure agreed-upon standards and procedures are followed, and ensuring that problems are found and dealt with. It is oriented towards 'prevention'. • Software Testing - Testing involves operating a system or application under controlled conditions and evaluating the results. The controlled conditions should include both normal and abnormal conditions. It is oriented towards 'detection'. • Testing means creating, running and verifying test cases to see whether the proper behavior is observed. • QA examines the whole process to minimize risk – risk arising from specs/requirements, implementation and follow-up. • Testing is a part of the QA role, but the QA role extends much further than just testing.

  10. Why do we need testing? We all make mistakes... Crucial for business survival: saves money in the long run (maintenance etc.) reputation and reliability lawsuits Increasing importance and size of software in society as a whole Defect rates are hardly decreasing

  11. What are the causes of software failures? • Human error A defect was introduced into the software code, the data or the configuration parameters Causes of human error: time pressure, excessive demands because of complexity, distractions, lack of knowledge, bad communication etc. • Environmental conditions Changes in environmental conditions Causes of negative environmental conditions: radiation, magnetism, electronic fields and pollution, sun spots, hard disk crashes, power fluctuations etc.

  12. What are Software Defects? Bugs vs. Defects Bug: (1) things the software does that it is not supposed to do, [or] something the software doesn't do that it is supposed to; (2) a fault in a program which causes the program to perform in an unintended or unanticipated manner. Defect: (1) (IEEE) a product anomaly; (2) synonymous with error; implies a quality problem discovered after the software has been released to end users (or to another activity in the software process); (3) fault; (4) error, fault, or failure; (5) non-conformance to requirements. Question: What do we mean by zero defects?

  13. Test levels • During the software life cycle several test levels are found • Component testing (also known as module, unit, class or developer's test) • Integration testing (also interface testing) • System testing • Acceptance testing

  14. Component testing • Test of each component after realization – performed by the developer • Every component is tested on its own • finding failures caused by internal defects • cross-effects between components are not within the scope of this test • Test cases may be derived from • component specification • software design • data model • Testing components often requires drivers and stubs Drivers handle the interface of the component (simulate inputs, record outputs) Stubs replace or simulate components that are not yet available or not part of the test • Knowledge of the source code allows white-box methods to be used for component tests
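The driver/stub idea can be shown in a rough sketch; all names here (`price_with_discount`, `stub_rate_service`, `driver`) are hypothetical and not taken from the slides.

```python
def price_with_discount(price, rate_service):
    """Component under test: applies the discount rate supplied by a service."""
    rate = rate_service()  # dependency on a component that may not exist yet
    return round(price * (1 - rate), 2)

# Stub: replaces the missing rate service with a fixed, predictable answer.
def stub_rate_service():
    return 0.10

# Driver: exercises the component's interface (simulates input, records output).
def driver():
    result = price_with_discount(200.0, stub_rate_service)
    print(f"input=200.0, output={result}, expected=180.0")
    return result

driver()
```

The stub isolates the component so that a failure here points to an internal defect, not to a problem in the (absent) dependency.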

  15. Component testing – 2 • Testing functionality • Every function must be tested with at least one test case • are the functions working correctly, are all specifications met? • Defects commonly found are • defects in processing data, often near boundary values • missing functions • Testing robustness (resistance to invalid input data) • Test cases representing invalid inputs are called negative tests • A robust system provides appropriate handling of wrong inputs • Wrong inputs accepted by the system may produce failures in further processing (wrong output, system crash) • Other non-functional attributes may be tested • e.g. performance and stress testing, reliability
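The negative-test idea can be illustrated with a small, hypothetical component (the `withdraw` function and its rules are invented for this sketch): invalid inputs must be rejected before they can corrupt further processing.

```python
def withdraw(balance, amount):
    """Hypothetical component: returns the new balance after a withdrawal."""
    if not isinstance(amount, (int, float)) or amount <= 0:
        raise ValueError(f"invalid amount: {amount!r}")
    if amount > balance:
        raise ValueError("insufficient funds")
    return balance - amount

# Negative tests: each invalid input must be rejected, not silently accepted.
for bad in (-50, 0, "ten", None):
    try:
        withdraw(100, bad)
        print(f"ROBUSTNESS DEFECT: {bad!r} was accepted")
    except ValueError:
        print(f"OK: {bad!r} rejected")
```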

  16. Integration testing • Examines the interaction of software components after system integration • Assumes that the components have already been tested • Integration tests examine the interaction of software components (subsystems) with each other: • interfaces with other components • interfaces among GUIs / MMIs • Integration tests aim at finding defects in interfaces • Test cases may be derived from • interface specifications • architectural design • data models • Different strategies are used for integration testing; bottom-up and top-down are the most common

  17. System test • Testing the integrated software to prove compliance with the specified requirements • System tests cover: • functional and non-functional requirements (functionality, reliability, usability, efficiency etc.) • Test of the integrated system from the user's point of view • complete and correct implementation of the requirements • deployment in the real system environment with real-life data • The test environment should match the true environment • no test drivers are needed • all external interfaces are tested under true conditions • close representation of the later true environment • Do not test in the live environment • defects could damage the live environment • deployed software is constantly changing, so most tests would not be reproducible

  18. System test – 2 • Testing functionality • Prove that the implemented functionality exposes the required characteristics: • Suitability - are the implemented functions suitable for their expected use • Accuracy - do the functions produce correct results? • Interoperability - does interaction with the system environment show any problems? • Compliance - does the system comply with applicable norms and regulations? • Security - are data / programs protected against unwanted access or loss?

  19. Acceptance testing • A formal test performed to verify compliance with the user requirements • The first test in which the customer is involved • Customers use the software to run their daily business processes at the supplier's location (alpha testing) or at their own location (beta testing) • Advantages of alpha and beta tests • reduce the cost of acceptance testing • use different user environments • involve a high number of users

  20. Principles of all testing levels • each development activity must be tested • no piece of software may be left untested • each test level should be tested specifically • each test level has its own test objectives • the test performed at each level must reflect these objectives • testing begins long before test execution • as soon as development begins the preparation of the corresponding tests can start • this is also the case for document reviews starting with concepts, specification and overall design

  21. Roles in testing • Testing is so complex that several roles have been defined, each with different responsibilities • Roles: • Developers – developers, too, have to test their code • Test lead/manager – effectively lead the testing team • Tester – execute the tests and record the test results • Test designer – write test cases, develop the test structure, perform test coverage analysis • Test automation developer – create automated test scripts that follow the designed test cases • Besides these, every person involved in the PLC process has a role in the testing process - developers have to test their code - project leaders have to test the requirements by reviewing them

  22. Testing process @ Continental [Diagram: Discipline tests (Entertainment, Navi etc.) → System Integration → PVV (Product Verification and Validation); failed test results are fed back to Requirements / Architecture, passed results proceed to PRODUCTION]

  23. What elements does the testing process contain? • Test Plan • Test Cases • Test Suites • Test Techniques

  24. Test plan A test plan is a formal document that describes: • Scope, objectives, and the approach to testing – why, what and how we have to test • People and equipment dedicated/allocated to testing – who is responsible for the tests • Tools that will be used – what kind of tools to use, whether we need automated tests, and where we will use them • Dependencies and risks – what else must be taken care of before testing starts, and what the risks are • Categories of defects – what the defect categories are and which kinds of defects each of them includes • Test entry and exit criteria – when testing starts and when it ends • Measurements to be captured – what kind of results we need • Reporting and communication processes – the procedure for reporting problems and for communication • Schedules and milestones – the milestones of the project and when testing should be performed Basically, the test plan describes what you are going to test and how. It is used to keep track of each testing activity. If the test plan is followed, a quality product can be delivered.

  25. Test cases • A test case is a document that describes the process and expected result required to determine whether a requirement has been satisfied. Sometimes several test cases are covered in the same document. • A requirement may need several test cases to cover it completely. A test case typically has the following elements: - Test case ID - Preconditions for the test case - Test case description – what are we trying to test - Requirement that will be covered - Test procedure - Expected results - Actual results Often a test case has more than one test step, each step having its own expected result. The process of developing test cases can help find problems in the requirements of the application, so it is suggested that writing test cases should start as early in the development cycle as possible.
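As a minimal sketch, the elements listed above can be mapped onto a test written with Python's standard `unittest` module; the function under test, the test case ID (TC-001) and the requirement ID (REQ-TEMP-01) are hypothetical.

```python
import unittest

def celsius_to_fahrenheit(c):
    """Hypothetical function under test."""
    return c * 9 / 5 + 32

class TestTemperature(unittest.TestCase):
    def test_tc001_freezing_point(self):
        """TC-001 (hypothetical ID): freezing point of water.
        Precondition: none. Covers requirement REQ-TEMP-01 (hypothetical).
        """
        expected = 32.0                     # expected result
        actual = celsius_to_fahrenheit(0)   # test procedure
        self.assertEqual(expected, actual)  # compare actual vs expected

# Run the test case and record the actual results.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TestTemperature)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```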

  26. Test suites • A test suite is a collection of test cases grouped by one or more criteria. • A system can have an unlimited number of test suites based on different criteria: - A test suite for HMI tests - A test suite for Performance tests - A test suite for Navigation tests - A test suite for Entertainment tests - etc.
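A minimal sketch of grouping test cases into suites, again using Python's standard `unittest` module (the HMI and Navigation cases below are trivial placeholders):

```python
import unittest

# Hypothetical test cases for two areas of the system.
class HMITests(unittest.TestCase):
    def test_button_label(self):
        self.assertEqual("ok".upper(), "OK")

class NavigationTests(unittest.TestCase):
    def test_route_has_waypoints(self):
        self.assertGreater(len(["A", "B", "C"]), 0)

# Group the cases into suites by area, then combine the suites.
hmi_suite = unittest.defaultTestLoader.loadTestsFromTestCase(HMITests)
nav_suite = unittest.defaultTestLoader.loadTestsFromTestCase(NavigationTests)
all_tests = unittest.TestSuite([hmi_suite, nav_suite])

result = unittest.TextTestRunner(verbosity=0).run(all_tests)
print(f"ran {result.testsRun} tests, success={result.wasSuccessful()}")
```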

  27. Testing techniques • Static code analysis – mainly used by developers to analyze part of the program without executing it • Requirements Based Tests – the most used technique – tests are based on the specification • Scenario Based Tests – based on the scenarios a real user would follow when performing a function on the system • Equivalence Partitioning – used to reduce the number of tests to the minimum necessary (one test per partition), e.g. values 1..12 for a month • Boundary-value Analysis – tests use the limit values, e.g. for a month, test with 0, 1, 12, 13 • All-pairs testing – used to minimize the tests when two or more variables have to be taken into consideration http://www.pairwise.org/articles.asp • Error Guessing – think about where the application may have problems (also known as risk-based testing) • Output Forcing – design tests to obtain a specific output, regardless of the inputs • Ad-hoc tests/Exploratory tests – tests performed without planning/documentation (usually done only once unless bugs are found); used to find problems quickly
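Equivalence partitioning and boundary-value analysis can be sketched with the month example from the list above (the `is_valid_month` validator is a hypothetical name for this sketch):

```python
def is_valid_month(m):
    """Hypothetical validator: month numbers 1..12 are valid."""
    return 1 <= m <= 12

# Equivalence partitioning: one representative value per partition is enough.
partitions = {"below range": -3, "valid": 6, "above range": 20}
for name, value in partitions.items():
    print(f"{name}: is_valid_month({value}) = {is_valid_month(value)}")

# Boundary-value analysis: test at and just beyond each boundary (0, 1, 12, 13).
boundary_results = {v: is_valid_month(v) for v in (0, 1, 12, 13)}
print(boundary_results)
```

Three partition tests plus four boundary tests replace exhaustive testing of every integer, which is the point of both techniques.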

  28. Relation with the developers • "The best tester is not the one who finds the most bugs or who embarrasses the most developers. The best tester is the one who gets the most bugs fixed." – Cem Kaner, Testing Computer Software • Strategies for a good relationship with the developers: • Be cordial and patient – convincing a developer that his program has a bug is not an easy job • Be diplomatic – present your findings with tact, without accusing anyone; remember that testers exist in the company because of developers, and because of us their jobs are kept safe • Don't embarrass – nobody is perfect and nobody wants their mistakes pointed out. Just as a tester can't test a program completely, developers can't design programs without mistakes. We are human, after all. • Be cautious – write your bug reports and test documents in a way that clearly lays out the risks and seriousness of issues, so a developer can't later say they didn't solve the bug because they were not aware of it • Don't consider your work less important – it has the same importance as theirs • "A smart tester is one who keeps a balance between listening and implementing. If a developer can't convince you a bug shouldn't be fixed, it's your duty to convince him to fix it."

  29. Attributes of a good tester • Curious, perceptive, attentive to detail • to comprehend the practical scenarios of the customer • to be able to analyze the structure of the test • to discover details where failures might show • Skepticism and a critical eye • test objects contain defects – you just have to find them • do not believe everything the developers tell you • must not fear that serious defects may be found which will affect the course of the project • Good communication skills • to bring bad news to the developers • to overcome states of frustration • positive communication can help to avoid or to ease difficult situations • to quickly establish a working relationship with the developers • Experience • experience helps in identifying where errors might accumulate

  30. What is the problem with testing? It takes longer than people expect It is not as successful as people expect (Is this so? Why?) It does not prove the absence of faults; it can only show the presence of faults It is more difficult than people expect

  31. The End – The Test Paradox Imagine the following scenario: • You've just finished a test run • You have found 150 problems • How do you feel now? OR • You've just finished a test run • You have found 0 problems • How do you feel now?

  32. Thank you! Thank you for your participation. Questions?
