
Software Quality Assurance



  1. Software Quality Assurance Training Course – DAY 5 Neven Dinev By courtesy of

  2. Review from day 4 • Testing techniques • Requirement coverage • Equivalence class partitioning • Boundary value analysis • Domain Testing • Decision Tables • Path testing • State transition diagrams • Pairwise – Orthogonal arrays and All Pairs • Risk and defect techniques
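
As a small refresher on two of the day-4 techniques, here is a minimal Python sketch of equivalence class partitioning and boundary value analysis; `accepts_age` is a hypothetical function under test invented for the example.

```python
def accepts_age(age: int) -> bool:
    """Stand-in for the real validation logic: valid ages are 18..65."""
    return 18 <= age <= 65

# Equivalence classes: one representative value per class is enough.
# Boundary values: just outside, on, and just inside each boundary.
cases = {
    10: False,   # representative of "below range" class
    40: True,    # representative of "inside range" class
    80: False,   # representative of "above range" class
    17: False, 18: True, 19: True,   # lower boundary
    64: True, 65: True, 66: False,   # upper boundary
}

for age, expected in cases.items():
    assert accepts_age(age) == expected, f"unexpected result for age={age}"
print("all partition and boundary checks passed")
```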

  3. Test case design • If the purpose of testing is to gain information about the product, then a test case’s function is to elicit information quickly and efficiently. • In information theory, we define “information” in terms of reduction of uncertainty. If there is little uncertainty, there is little information to be gained. • A test case that promises no information is poorly designed. • A good test case will provide information of value whether the program passes the test or fails it.

  4. Good Test Case design • Neither Too Simple Nor Too Complex • What makes test cases simple or complex? (A simple test manipulates one variable at a time.) • Advantages of simplicity? • Advantages of complexity? • Transition from simple cases to complex cases (You should increase the power and complexity of tests over time.)

  5. The excellent test case • Reasonable probability of catching an error • Exercises an area of interest • Does interesting things • Doesn’t do unnecessary things • Neither too simple nor too complex • Not redundant with other tests • Makes failures obvious • Allows isolation and identification of errors

  6. Making a good test • Start with a known state • Design variation into the tests • configuration variables • specifiable (e.g. table-loadable) data values • Check for errors • Put your analysis into the test itself • Capture information when the error is found (not later) • test results • environment results • Don’t encourage error cascades
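
A minimal Python sketch of several of these points (a known starting state, table-loadable data values, and capturing information at the moment a check fails); `reset_environment` and `transfer` are hypothetical stand-ins for the system under test.

```python
# Sketch: start from a known state, drive the test from table-loadable data,
# and capture diagnostic information the moment a check fails.
import datetime, platform

def reset_environment() -> dict:
    """Bring the (toy) system under test to a known starting state."""
    return {"balance": 100}

def transfer(state: dict, amount: int) -> int:
    """Toy operation under test: withdraw `amount` and return the new balance."""
    state["balance"] -= amount
    return state["balance"]

# Data values kept in a table so variations can be added without touching the logic.
test_table = [
    # (amount, expected_balance)
    (10, 90),
    (100, 0),
    (0, 100),
]

for amount, expected in test_table:
    state = reset_environment()            # known state for every case
    actual = transfer(state, amount)
    if actual != expected:
        # Capture information immediately, while the failing context still exists.
        print("FAIL", {
            "amount": amount, "expected": expected, "actual": actual,
            "state": state, "host": platform.node(),
            "time": datetime.datetime.now().isoformat(),
        })
    else:
        print(f"PASS amount={amount}")
```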

  7. Scripted/planned testing • Review documentation • Write detailed test cases • Execute them step by step and record the result of each step • Scripted tests do not vary from run to run – how, then, do they uncover new defects?
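
One way to picture a scripted test is as a fixed sequence of steps, each with an expected result and a recorded outcome; the sketch below is a generic illustration rather than the format of any particular tool.

```python
# Sketch of a scripted test: fixed, pre-designed steps, each with an expected
# result, executed in order with a recorded outcome per step.
steps = [
    {"action": "open login page",       "expected": "login form is shown"},
    {"action": "enter valid user/pass", "expected": "dashboard is shown"},
    {"action": "click logout",          "expected": "login form is shown again"},
]

results = []
for number, step in enumerate(steps, start=1):
    # In a real run a tester (or a tool) performs the action and checks the outcome;
    # here every step is simply marked as passed to show the record-keeping.
    results.append({"step": number, "action": step["action"],
                    "expected": step["expected"], "status": "passed"})

for row in results:
    print(row)
```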

  8. Exploratory testing Definition • “Any testing to the extent that the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.”

  9. Exploratory Testing Definition 2 • Simultaneously: • Learn about the product • Learn about the market • Learn about the ways the product could fail • Learn about the weaknesses of the product • Learn about how to test the product • Test the product • Report the problems • Advocate for repairs • Develop new tests based on what you have learned so far.

  10. Choosing Exploratory Approach: Tester Domain Knowledge • Simply put, how well does an individual test engineer understand the operation of the system and the business processes that system is supposed to support? • If a tester does not understand the system or the business processes, it would be very difficult for them to use, let alone test, the application without the aid of test scripts and cases. Likewise, in user acceptance testing (UAT), where business users are conducting testing, a new interface may require the use of formal scripts or training to orient the users before they can test the system. • A simple system such as an informational Web site, on the other hand, is well understood by a professional tester, making it a prime candidate for exploratory testing.

  11. Choosing Exploratory Approach: System Complexity • The complexity of the system can also affect its suitability for exploratory testing. The user needs to understand how to use the system. • Where there is a high level of dependency between functions for end-to-end testing (i.e., one process depends on the data created by another), detailed test planning is required. End-to-end testing can be accomplished with exploratory testing; however, the capabilities and skill sets required are typically those of a more experienced test engineer.

  12. Choosing Exploratory Approach: Level of documentation • Scripted testing generally flows from business requirements documents and functional specifications. • When these documents do not exist or are deficient, it is very difficult to conduct scripted testing in a meaningful way. The shortcuts that would be required, such as scripting from the application as it is built, can be accomplished just as efficiently using exploratory testing.

  13. Choosing Exploratory Approach: Timeframe and deadlines • The lead-in time to test execution determines whether you can conduct test design before execution. • When there is little or no time, you might at least need to start with exploratory testing while documented tests are prepared after the specifications become available.

  14. Choosing Exploratory Approach: Available resources • The total number of person-days of effort can determine which approach you should take. • Formal test documentation has significant overhead that can be prohibitive where resources and budgets are tight.

  15. Choosing Exploratory Approach: Skills required • The skill sets of your team members can affect your choices. • Good test analysts may not necessarily be effective exploratory testers and vice versa. • A nose for finding bugs quickly and efficiently is not a skill that is easily learned. • Rigorous analysis and test design are skills that, while sharing similar characteristics, differ significantly in their application. Good documentation makes it easier for a test analyst to come up to speed during the design phase; however, good documentation may not help an exploratory tester become an expert user of a system during testing. So there are many considerations when evaluating the skill set you have available.

  16. Choosing Exploratory Approach: Coverage • Some systems and applications are such that the level of test coverage must be accurately measurable and demonstrable. • Formal testing through a documented test design has a clear path from each requirement or specification to a test case that is created, reviewed, signed off, and executed (with a signed record of execution). The degree to which a function or process has been tested is therefore known. This detailed documentation of test coverage adds overhead that detracts from exploratory testing’s efficiency advantage.
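
The clear path from each requirement to executed test cases can be pictured as a simple traceability mapping; a minimal Python sketch with made-up requirement and test-case IDs.

```python
# Sketch: requirement-to-test traceability, the basis of measurable coverage.
traceability = {
    "REQ-001": ["TC-001", "TC-002"],
    "REQ-002": ["TC-003"],
    "REQ-003": [],                       # not yet covered
}

covered = sum(1 for tests in traceability.values() if tests)
print(f"coverage: {covered}/{len(traceability)} requirements have at least one test")
for req, tests in traceability.items():
    print(req, "->", tests or "NOT COVERED")
```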

  17. Choosing Exploratory Approach: Verification • Verification requires something to compare against. • Formal test scripts describe an expected outcome that is drawn from the requirements and specifications documents. As such, we can verify compliance. • Exploratory testing is compared to the test engineer’s expectations of how the application should work.

  18. Choosing Exploratory Approach: Acceptable risk levels • The degree of acceptable risk for a function is directly related to how critical it is to the business. • When assessing the test approach (and the amount of test effort) for a particular requirement or area of the system, you must assess the level of risk. • Exploratory testing carries a risk in that the coverage of testing cannot be guaranteed. However, this can be mitigated.

  19. Choosing Exploratory Approach: Reproducibility • Testing comprises essentially two components that can require reproduction: • 1) the defects found must be able to be reproduced; and • 2) the test itself may need to be reproduced such that the same sets of tests are conducted to give the same result. • This second component of reproducibility is less critical but still important for some projects and is a requirement of some standards.
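
Where a test includes any randomized element, fixing the seed is one simple way to make the same set of tests give the same result across runs; a minimal sketch, with the seed value and generator being illustrative assumptions.

```python
# Sketch: pinning the random seed so that a randomized test generates the same
# inputs on every run, which keeps both defects and the test itself reproducible.
import random

SEED = 20240101          # assumed constant; record it with the test results

def generate_inputs(count):
    rng = random.Random(SEED)
    return [rng.randint(-1000, 1000) for _ in range(count)]

# Two independent runs produce identical input sets.
assert generate_inputs(5) == generate_inputs(5)
print(generate_inputs(5))
```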

  20. Scripted vs Exploratory Testing

  21. Scripted vs Exploratory Testing

  22. Sample Exploratory procedure • Identify the purpose of the product. • Identify functions. • Identify areas of potential instability. • Test each function and record problems. • Design and record a consistency verification test.

  23. Identify the purpose of the product 1. Review the product and determine what fundamental service it’s supposed to provide. To the extent feasible, define the audience for the product. 2. Write (or edit) a paragraph that briefly explains the purpose of the product and the intended audience.

  24. Identify functions 1. Walk through the product and discover what it does. 2. Make an outline of all primary functions. 3. Record contributing functions that are interesting or borderline primary. 4. Escalate any functions to the Test Manager that you do not know how to categorize, or that you are unable to test.

  25. Identify areas of potential instability 1. As you explore the product, notice functions that seem more likely than most to violate the stability standards. 2. Select five to ten functions or groups of functions for focused instability testing. You may select contributing functions, if they seem especially likely to fail, but instability in primary functions is more important. 3. Determine what you could do with those functions that would potentially destabilize them. Think of large, complex, or otherwise challenging input. 4. List the areas of instability you selected, along with the kind of data or strategies you’ll use to test them.
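
As an illustration of “large, complex, or otherwise challenging input”, the sketch below builds a small list of such values in Python; the specific categories are examples, not a complete catalogue from the course.

```python
# Sketch: building a list of deliberately challenging inputs for instability testing.
challenging_inputs = [
    "",                          # empty input
    " " * 10_000,                # very long, whitespace-only input
    "A" * 1_000_000,             # very large input
    "'; DROP TABLE users; --",   # input that looks like an injection attempt
    "名前\u202e\u0000",           # non-ASCII, control and bidi characters
    "-1", "0", "2147483648",     # numeric edge values passed as text
]

for value in challenging_inputs:
    preview = value[:30].encode("unicode_escape").decode()
    print(f"{len(value):>8} chars  {preview}")
```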

  26. Test each function and record results 1. Test all the primary functions you can in the time available. 2. Test all the areas of potential instability you identified. 3. Test a sample of interesting contributing functions. 4. Record any failures you encounter. 5. Record any product notes you encounter. Notes are comments about quirky, annoying, erroneous, or otherwise concerning behavior exhibited by the product that are not failures.

  27. Design and record a consistency verification test. Record a procedure for exercising the most important primary functions of the product to assure that the product behaves consistently on other Windows platforms and configurations

  28. Exercise Do the first two steps of the exploratory procedure for the sample application • Identify Purpose • Identify Functions

  29. Automated Testing • Why automate? • repeatability, • leverage, • accumulation.

  30. Automation Benefits: Repeatability • Repeatability means that automated tests can be executed more than once, consistently each time. This leads to time savings as well as predictability. • But in order to realize this benefit, the application must be stable enough that the same tests can be repeated without excessive maintenance.

  31. Automation Benefits: Leverage • True leverage from automated tests comes not only from repeating a test that was captured while performed manually, but from executing tests that were never performed manually at all. • For example, by generating test cases programmatically, you could yield thousands or more - when only hundreds might be possible with manual resources. • Enjoying this benefit requires the proper test case and script design.
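
A minimal Python sketch of programmatic test case generation; the value lists are made up, but even these three small lists yield 48 combinations.

```python
# Sketch: programmatic generation of test cases (the "leverage" benefit).
# Three modest value lists already produce 4 * 3 * 4 = 48 combinations.
from itertools import product

browsers   = ["Chrome", "Firefox", "Edge", "Safari"]
languages  = ["en", "de", "fr"]
user_roles = ["guest", "member", "admin", "disabled"]

generated_cases = list(product(browsers, languages, user_roles))
print(f"{len(generated_cases)} generated test cases, e.g.:")
for case in generated_cases[:5]:
    print("  ", case)
```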

  32. Automation Benefits: Accumulation • The third benefit, accumulation, is the most critical for the long term. It is a fact that applications change and gain complexity over their useful life. • Constant modifications and enhancements are typical; rarely does the functionality decline or even freeze. • Therefore, the number of tests which are needed for coverage is also constantly increasing. But, if the automated tests are not designed to be maintainable as the application changes, the test library will be fighting just to stay even. • Therefore, it is critical to adopt an approach to test library design that supports maintainability over the life of the application.

  33. When not to automate: Instability • There are certain applications that are inherently unstable by design. • For example, a weather-mapping system or one which relies on real-time data will not demonstrate sufficiently predictable results for automation. • Applications whose data is not stable enough to produce consistent results are not good candidates for automation. • The investment required to develop and maintain the automated tests will not be offset by the benefits, since repeatability will be doubtful.

  34. When not to automate: Inexperienced testers • If the person(s) automating the tests are not sufficiently experienced with the application to know the expected behavior, automating their tests is also of doubtful value. • Their tests may not accurately reflect the correct behavior, causing later confusion and wasted effort. • Remember, an automated test is only as good as the person who created it.

  35. When not to automate: Temporary testers • In other cases, the test team may consist primarily of personnel from other areas, such as users or consultants, who will not be involved over the long term. • Because of the initial investment in training and the short payback period, it is probably not effective to automate with a temporary team.

  36. When not to automate: Insufficient time and resources • If you don't have enough time or resources to get your testing done manually in the short term, don't expect a tool to help you. • The initial investment for planning, training and implementation will take more time in the short term than the tool can save you. • Get through the current crisis, then look at automation for the longer term.

  37. When not to automate: Short projects • Short projects are not good candidates for automation. • Huge up-front investment • Low ROI • Frequent changes in the UI

  38. Price of automation • Can be estimated as: price_for_tool + price_to_automate - price_of_test_run * num_of_runs • We hope that the total price of the test runs will be more than the investment
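
The same break-even arithmetic as a small Python sketch; all figures are made up for illustration.

```python
# Sketch of the automation cost formula: the investment pays off once the cost
# of the test runs it replaces exceeds (tool price + cost to automate).

def automation_net_cost(price_for_tool: float, price_to_automate: float,
                        price_of_test_run: float, num_of_runs: int) -> float:
    return price_for_tool + price_to_automate - price_of_test_run * num_of_runs

# Example with made-up figures: tool 2000, automation effort 3000,
# one test run costs 50, executed 200 times over the life of the tests.
net = automation_net_cost(2000, 3000, 50, 200)
print(f"net cost: {net}")        # negative means the automation paid for itself
```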

  39. Who should automate? • Developers, because it is an extensive development task, even if it sometimes seems easy • The work can combine the QA manager with developers.

  40. How to manage automation? • Approach automation as a project • plans, • verification and validation, • source control, • defect tracking, etc. • When to start? • The earlier you start, the more time you will have to prepare libraries and learn the application • Wait for stable interfaces

  41. Automated tools selection • Record & Playback • Web Testing • Database tests • Data functions • Object Mapping • Image testing • Test/Error recovery • Object Name Map

  42. Automated tools selection (cont) • Object Identity Tool • Extensible Language • Environment support • Integration • Cost • Ease of use • Support • Object Tests

  43. Automated GUI testing tools • TestComplete – AutomatedQA • QuickTest Pro – Mercury • WinRunner – Mercury (older) • Visual Test – Rational • Silk

  44. TestComplete Review (live demonstration) • Script panel • Suite panel • Result panel • Recording • Object Browser • Execution • Log Panel • Sample test demonstrations

  45. Exercise • Using TestComplete, design 4 test cases for the Calculator • To test add • To test subtract • To test multiply • To test divide Organize the tests into a set and execute them
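
The exercise itself is done in TestComplete's own scripting environment; as a language-neutral sketch of the same four cases organized into one set, here is an equivalent using Python's unittest, with `Calculator` as a hypothetical stand-in for the application under test.

```python
# Sketch of the exercise in plain Python unittest (the real exercise uses
# TestComplete). `Calculator` is a hypothetical stand-in for the application.
import unittest

class Calculator:
    def add(self, a, b): return a + b
    def subtract(self, a, b): return a - b
    def multiply(self, a, b): return a * b
    def divide(self, a, b): return a / b

class CalculatorTests(unittest.TestCase):
    def setUp(self):
        self.calc = Calculator()

    def test_add(self):
        self.assertEqual(self.calc.add(2, 3), 5)

    def test_subtract(self):
        self.assertEqual(self.calc.subtract(5, 3), 2)

    def test_multiply(self):
        self.assertEqual(self.calc.multiply(4, 3), 12)

    def test_divide(self):
        self.assertAlmostEqual(self.calc.divide(10, 4), 2.5)

if __name__ == "__main__":
    # The four cases form one test set and are executed together.
    unittest.main()
```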

  46. Mercury QuickTest Review (demonstration) • Record a simple test in “Keyword Driven Mode” • Execute the test and review the results

  47. Performance testing • What is a Virtual user? • A virtual user is a simulation of a real user created by software
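
A minimal sketch of the virtual-user idea: each thread plays one simulated user repeating a scripted action; the action here is only a sleep standing in for a real request.

```python
# Sketch: virtual users simulated as threads, each repeating a scripted action.
# The "request" here is only a sleep; a real load tool would issue HTTP calls,
# database queries, etc.
import threading, time, random

def virtual_user(user_id: int, iterations: int, results: list) -> None:
    for _ in range(iterations):
        start = time.perf_counter()
        time.sleep(random.uniform(0.01, 0.05))     # placeholder for one user action
        results.append((user_id, time.perf_counter() - start))

results: list = []
users = [threading.Thread(target=virtual_user, args=(i, 5, results)) for i in range(10)]
for u in users: u.start()
for u in users: u.join()

timings = [t for _, t in results]
print(f"{len(users)} virtual users, {len(results)} actions, "
      f"avg {sum(timings) / len(timings):.3f}s per action")
```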

  48. Performance testing: How to plan a performance test? • First, ask the client • Design the most common use cases • Start as early as you can • Plan for hardware. What kind of testers do we need? • Programming skills are required • Expect almost full-time work on performance scripts and updates

  49. Executing performance tests: How to execute a performance test? • Start with a simple test • Research the application • Find weak points and focus on them. Monitoring tools in Windows 2000 & XP: • Use the standard Performance Monitor

  50. Performance test tools • OpenSTA • LoadRunner • JMeter • SilkPerformer
