
Lecture 14 Software Testing

This article provides an overview of software testing, including the process, fundamental concepts, and different types of testing. It covers manual testing, automation testing, black box testing, and white box testing.



  1. CSC291 - Software Engineering Concepts (Fall 2018) Lecture 14 Software Testing

  2. Software Testing The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

  3. Software Testing According to the IEEE standard, testing can be defined as "a process of analyzing a software item to detect the differences between existing and required conditions and to evaluate the features of the software item".

  4. What Testing shows • Errors • Requirements Conformance • Performance • An Indication of Quality

  5. Who Does Testing? In most cases, the following professionals are involved in testing a system within their respective capacities: • Software Tester • Software Developer • Project Leader/Manager • End User

  6. Testing Fundamentals • You should design and implement a system or product with "testability" in mind • Testability • Simply how easily a computer program can be tested • Operability • The better it works, the more efficiently it can be tested • Observability • What you see is what you test • Controllability • The better we can control the software, the more testing can be automated and optimized • All possible outputs can be generated through some combination of inputs • Software and hardware states and variables can be controlled directly by the test engineer

  7. When to Start Testing? • An early start to testing reduces cost and rework time and helps ensure that error-free software is delivered to the client. • In the SDLC, testing can start from the Requirements Gathering phase and last until the deployment of the software. • It also depends on the development model being used.

  8. When to Stop Testing? The following aspects should be considered when deciding to stop testing: • Testing deadlines. • Completion of test case execution. • Completion of functional and code coverage to a certain point. • Bug rate falls below a certain level and no high priority bugs are identified. • Management decision.

  9. Testing and Debugging Testing: • It involves the identification of bugs/errors/defects in the software without correcting them. • Normally, professionals with a Quality Assurance background are involved in the identification of bugs.

  10. Testing and Debugging Debugging: • It involves identifying, isolating and fixing the problems/bugs. • Developers who code the software conduct debugging upon encountering an error in the code. • Debugging is part of white box testing or unit testing, and can be performed at any point during development.

  11. Types of Testing Manual testing • This type involves testing the software manually, i.e. without using any automated tool or script. • There are different stages of manual testing, such as unit testing, integration testing, system testing and user acceptance testing. • Testers use test plans, test cases or test scenarios to test the software and ensure the completeness of testing.

  12. Automation Testing • Automation testing, also known as test automation, is when the tester uses other software to test the software under test. • This process involves automating a manual process. • Integration of computerized tools into the process of software development • Code auditing • Coverage monitoring • Load tests, etc.

  13. Classification According to Testing Concept • Black Box Testing • White Box Testing

  14. Black Box Testing • Black box testing is the technique of testing without any knowledge of the interior workings of the application. • The tester does not have access to the source code. • A tester interacts with the system's user interface by providing inputs and examining outputs, without knowing how and where the inputs are processed.
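As a sketch, a black box test derives its inputs and expected outputs purely from the specification. The `shipping_cost` function and its pricing rules below are hypothetical, chosen only to illustrate testing through the public interface:

```python
def shipping_cost(weight_kg: float) -> float:
    """Implementation under test (opaque to the black box tester)."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    # Spec: flat 5.0 up to 1 kg, then 2.0 per extra kg.
    return 5.0 if weight_kg <= 1.0 else 5.0 + 2.0 * (weight_kg - 1.0)

def test_black_box():
    # Test values come from the spec alone: a boundary, a typical case,
    # and an invalid input. The implementation is never inspected.
    assert shipping_cost(1.0) == 5.0    # boundary: flat rate up to 1 kg
    assert shipping_cost(3.0) == 9.0    # typical: 5.0 + 2.0 * 2
    try:
        shipping_cost(0)                # invalid input per spec
        assert False, "expected ValueError"
    except ValueError:
        pass

test_black_box()
```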

  15. Black Box Testing Advantages • Well suited and efficient for large code segments. • Code Access not required. • Clearly separates user's perspective from the developer's perspective. • Large numbers of moderately skilled testers can test the application with no knowledge of implementation, programming language or operating systems.

  16. Black Box Testing Disadvantages • Inefficient testing, since the tester has only limited knowledge of the application. • Blind coverage, since the tester cannot target specific code segments or error-prone areas. • The test cases are difficult to design.

  17. White Box Testing • White box testing is the detailed investigation of the internal logic and structure of the code. • White box testing is also called glass box testing or open box testing. • In order to perform white box testing on an application, the tester needs knowledge of the internal workings of the code. • The tester needs to look inside the source code and find out which unit/chunk of code is behaving inappropriately.
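A minimal white box sketch: with the source in view, the tester designs one input per branch of the code. The `classify` function is a hypothetical unit, used only to show branch-driven test design:

```python
def classify(n: int) -> str:
    """Hypothetical unit under test with three branches."""
    if n < 0:
        return "negative"   # branch 1
    elif n == 0:
        return "zero"       # branch 2
    return "positive"       # branch 3

# Reading the source shows three branches, so three inputs
# (one per branch) achieve full branch coverage.
assert classify(-5) == "negative"
assert classify(0) == "zero"
assert classify(7) == "positive"
```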

  18. White Box Testing Advantages • As the tester has knowledge of the source code, it becomes very easy to determine which type of data can help in testing the application effectively. • Extra lines of code, which can harbor hidden defects, can be identified and removed. • Due to the tester's knowledge of the code, maximum coverage is attained during test scenario writing.

  19. White Box Testing Disadvantages • Because a skilled tester is needed to perform white box testing, costs are increased. • It is sometimes impossible to look into every nook and corner to find hidden errors. • White box testing is difficult to maintain, as specialized tools like code analyzers and debugging tools are required.

  20. Black Box vs White Box

  21. Testing strategies • There are basically two testing strategies: • Big bang testing: tests the software as a whole, once the completed package is available. • Incremental testing: • Tests the software piecemeal – software modules are tested as they are completed (unit tests), • followed by groups of modules composed of tested modules integrated with newly completed modules (integration tests). • Once the entire package is completed, it is tested as a whole (system test).

  22. Unit Testing • Unit testing is performed by the respective developers on the individual units of source code in their assigned areas. • The goal of unit testing is to isolate each part of the program and show that the individual parts are correct in terms of requirements and functionality.
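A sketch of developer-written unit tests using Python's standard `unittest` module. The `Cart` class and its rules are hypothetical; the point is that each test isolates one behavior of one unit and checks it against its requirement:

```python
import unittest

class Cart:
    """Hypothetical unit under test."""
    def __init__(self):
        self.items = []

    def add_item(self, name, qty):
        # Requirement: quantities must be positive.
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.items.append((name, qty))

class TestCart(unittest.TestCase):
    def test_add_valid_item(self):
        cart = Cart()
        cart.add_item("pen", 2)
        self.assertEqual(cart.items, [("pen", 2)])

    def test_rejects_non_positive_quantity(self):
        with self.assertRaises(ValueError):
            Cart().add_item("pen", 0)
```

Run with `python -m unittest` to execute both tests in isolation.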

  23. Integration Testing / System Testing • Integration testing is the testing of combined parts of an application to determine whether they function correctly together. System Testing • This is the next level of testing and tests the system as a whole. • Once all the components are integrated, the application as a whole is tested to see that it meets quality standards. • This type of testing is performed by a specialized testing team.

  24. Testing Strategies Incremental testing is also performed according to two basic strategies: • bottom-up and top-down • In top-down testing, the first module tested is the main module, the highest level module in the software structure; the last modules to be tested are the lowest level modules. • In bottom-up testing, the order of testing is reversed: the lowest level modules are tested first, with the main module tested last.

  25. Bottom-up Testing

  26. Top-down Testing

  27. Stubs And Drivers For Incremental Testing • Stubs and drivers are software replacement simulators required for modules not available when performing a unit or an integration test. • A stub (often termed a "dummy module") replaces an unavailable lower level module. • Stubs are used in the top-down testing approach • when the major module is ready to test, but its sub modules are not ready yet. • In simple terms, stubs are "called" programs: they are called in to test the major module's functionality.

  28. Stubs And Drivers For Incremental Testing • A driver is also a substitute module, but for an upper level module. • Drivers are required in bottom-up testing until the upper level modules are developed (coded). • They are used when the sub modules are ready but the main module is not.

  29. Example of Stubs and Drivers • For example, suppose we have three modules: Login, Home, and User. The Login module is ready and needs to be tested, but it calls functions from Home and User, which are not ready. To test this selected module, we write short dummy pieces of code that simulate Home and User and return values to Login. These pieces of dummy code are called stubs, and they are used in top-down integration.

  30. Example of Stubs and Drivers • Considering the same example: if the Home and User modules are ready but the Login module is not, and we need to test Home and User, which receive values from the Login module, then we write a short piece of dummy code for Login that returns values to Home and User. These pieces of code are called drivers, and they are used in bottom-up integration.
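The Login/Home/User example above can be sketched in code. All function names and return values here are hypothetical, invented purely to show where a stub and a driver sit:

```python
# Top-down case: Login is ready, Home and User are not, so stubs
# stand in for them and return canned values to Login.

def home_get_greeting(user):            # stub for the unfinished Home module
    return f"Welcome, {user}!"          # canned value returned to Login

def user_lookup(name):                  # stub for the unfinished User module
    return {"name": name, "role": "member"}

def login(name):                        # the module actually under test
    profile = user_lookup(name)
    return home_get_greeting(profile["name"])

assert login("alice") == "Welcome, alice!"

# Bottom-up case: Home and User are ready but Login is not, so a short
# driver plays the role of Login and feeds them test values directly.

def driver():
    profile = user_lookup("bob")               # exercise User directly
    return home_get_greeting(profile["name"])  # exercise Home directly

assert driver() == "Welcome, bob!"
```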

  31. Regression Testing • Whenever a change is made in a software application, it is quite possible that other areas within the application have been affected by this change. • The intent of regression testing is to ensure that a change, such as a bug fix, has not introduced another fault in the application.

  32. Acceptance Testing • The most important type of testing, as it is conducted by the Quality Assurance team, who will gauge whether the application meets the intended specifications and satisfies the client's requirements. • The QA team will have a set of pre-written scenarios and test cases that will be used to test the application. • Acceptance tests are intended not only to point out simple spelling mistakes and interface gaps, but also to point out any bugs in the application that would result in a system crash or major errors.

  33. Alpha Testing • This test is the first stage of testing and will be performed amongst the teams (developer and QA teams). • Unit testing, integration testing and system testing when combined are known as alpha testing.

  34. Beta Testing • This test is performed after alpha testing has been successfully completed. • In beta testing, a sample of the intended audience tests the application. • Beta testing is also known as pre-release testing. In this phase, the audience does the following: • Users install and run the application and send their feedback to the project team. • Using this feedback, the project team can fix the problems before releasing the software to the actual users.

  35. Test Documentation • Test Plan • What: a document describing the scope, approach, resources and schedule of intended testing activities; identifies test items, the features to be tested, the testing tasks, who will do each task and any risks requiring contingency planning; • Who: QA; • When: (planning)/design/coding/testing stage(s);

  36. Contd.. • Why: • Divide responsibilities between the teams involved; if more than one QA team is involved (i.e., manual / automation, or English / Localization), divide responsibilities between the QA teams; • Plan for test resources / timelines; • Plan for test coverage; • Plan for OS / DB / software deployment and configuration model coverage. • QA role: • Create and maintain the document; • Analyze for completeness; • Have it reviewed and signed by Project Team leads/managers.

  37. Test Case • What: a set of inputs, execution preconditions and expected outcomes developed for a particular objective, such as exercising a particular program path or verifying compliance with a specific requirement. • Five required elements of a Test Case: • ID – unique identifier of a test case; • Features to be tested / steps / input values – what you need to do; • Expected result / output values – what you are supposed to get from application; • Actual result – what you really get from application; • Pass / Fail.
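The five required elements above can be recorded as a simple data structure. This is only a sketch; the field values describe a hypothetical login test case, and the Pass/Fail field is derived by comparing expected and actual results:

```python
# One test case record with the five required elements:
# ID, steps/inputs, expected result, actual result, Pass/Fail.
test_case = {
    "id": "TC-6.1.4",
    "steps": "Open login page; enter valid username and password; click Login",
    "expected_result": "User is redirected to the home page",
    "actual_result": "User is redirected to the home page",
}

# Pass/Fail is not recorded by hand: it follows from the comparison.
test_case["status"] = (
    "Pass" if test_case["actual_result"] == test_case["expected_result"]
    else "Fail"
)
```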

  38. Contd.. • Inputs: • Through the UI; • From interfacing systems or devices; • Files; • Databases; • State; • Environment. • Outputs: • To UI; • To interfacing systems or devices; • Files; • Databases; • State; • Response time.

  39. Contd.. • Format– follow company standards; if no standards – choose the one that works best for you: • MS Word document; • MS Excel document; • Memo-like paragraphs (MS Word, Notepad, Wordpad).

  40. Example

  41. Test Suite • A document specifying a sequence of actions for the execution of multiple test cases; • Purpose: to put the test cases into an executable order, although individual test cases may have an internal set of steps or procedures; • Is typically manual; if automated, it is typically referred to as a test script (though manual procedures can also be a type of script); • Multiple test suites need to be organized into some sequence; this defines the order in which the test cases or scripts are to be run, what the timing considerations are, who should run them, etc.

  42. Traceability matrix • What: document tracking each software feature from PRD to FS to Test docs (Test cases, Test suites); • Who: Engineers, QA; • When: (design)/coding/testing stage(s); • Why: we need to make sure each requirement is covered in FS and Test cases; • QA role: • Analyze for completeness; • Make sure each feature is represented; • Highlight gaps.

  43. Example • PRD Section 1.1. Validation of user login credentials → FS Section 4.1. User login validation → Test cases: 6.1.4. User login with proper credentials; 6.1.5. User login with invalid username; 6.1.6. User login with invalid password. • PRD Section 1.2. Validation of credit card information → FS Section 7.2.4. Credit card information verification → Test cases: 10.1.1. Valid credit card information input; 10.1.2. Invalid credit card number; 10.1.3. Invalid credit card name.

  44. PATH TESTING • Flow Graph Notation

  45. Cyclomatic Complexity • Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program • The value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program • It provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once • An independent path is any path through the program that introduces at least one new set of processing statements or a new condition

  46. Flow chart and flow graph

  47. Flow chart and flow graph • Compound Logic

  48. Independent program paths • An independent path is any path through the program that introduces at least one new set of processing statements or a new condition • path 1: 1-11 • path 2: 1-2-3-4-5-10-1-11 • path 3: 1-2-3-6-8-9-10-1-11 • path 4: 1-2-3-6-7-9-10-1-11 • The number of regions of the flow graph corresponds to the cyclomatic complexity • Cyclomatic complexity, V(G), for a flow graph, G, is defined as V(G) = E - N + 2 • where E is the number of flow graph edges and N is the number of flow graph nodes

  49. Independent program paths • Cyclomatic complexity, V(G), for a flow graph, G, is also defined as • V(G) = P + 1 • where P is the number of predicate nodes contained in the flow graph G • The cyclomatic complexity can be computed using each of the algorithms just noted: • The flow graph has four regions • V(G) = 11 edges - 9 nodes + 2 = 4 • V(G) = 3 predicate nodes + 1 = 4 • The value for V(G) provides us with an upper bound for the number of independent paths that form the basis set • an upper bound on the number of tests that must be designed and executed to guarantee coverage of all program statements
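The two formulas above can be checked against the example flow graph from the slides (11 edges, 9 nodes, 3 predicate nodes); both must agree:

```python
def vg_edges_nodes(edges: int, nodes: int) -> int:
    """Cyclomatic complexity from edges and nodes: V(G) = E - N + 2."""
    return edges - nodes + 2

def vg_predicates(predicates: int) -> int:
    """Cyclomatic complexity from predicate nodes: V(G) = P + 1."""
    return predicates + 1

# The example flow graph: 11 edges, 9 nodes, 3 predicate nodes.
# Both formulas give V(G) = 4, matching the four independent paths
# and the four regions of the graph.
assert vg_edges_nodes(11, 9) == 4
assert vg_predicates(3) == 4
```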
