
High Volume Test Automation in Practice


Presentation Transcript


  1. High Volume Test Automation in Practice Andy Tinkham Principal Lead Consultant, QAT Magenic Technologies

  2. Acknowledgements • This presentation draws on the knowledge shared by the attendees of WTST 12 in Melbourne, FL (Jan 25-27, 2013, hosted by the Harris Institute for Assured Information at the Florida Institute of Technology and Kaner, Fiedler & Associates, LLC) • Cem Kaner, Catherine Karena, Michael Kelly, Rebecca Fiedler, Janaka Balasooriyi, Thomas Bedran, Jared Demott, Keith Gallagher, Doug Hoffman, Dan Hoffman, Harry Robinson, Rob Sabourin, Andy Tinkham, Thomas Vaniotis, Tao Xie, Casey Doran, Mark Fiorvanti, Michal Frystacky, Scott Fuller, Nawwar Kabbani, Carol Oliver, Vadym Tereschenko • This material draws heavily on Cem Kaner’s blog posts on kaner.com and context-driven-testing.com, referenced at the end of this slide deck

  3. About me 17 years in the testing industry Principal Lead Consultant at Magenic Technologies Doctoral student at Florida Tech Host free virtual office hours roughly weekly: http://ohours.org/andytinkham http://magenic.com/Blog.aspx http://testerthoughts.com http://twitter.com/andytinkham

  4. What is High Volume Test Automation (HiVAT)? -- WTST 12 working definition

  5. Let’s break that down…

  6. Manual & automated tests Every test falls somewhere on a continuum between the two extremes

  7. HiVAT tests tend toward the automated side • Human still designs overall tests (possibly very high-level) • Computer may determine inputs, paths and expected results • Computer evaluates individual results • Human determines stopping criteria • Number of tests • Time • First bug • Human analyzes overall results

  8. …but are different from “traditional” automation Include many iterations of execution May run for longer periods of time Sometimes involve more randomness Can be focused on looking for unknown risks rather than identified risks

  9. Why do HiVAT? Find problems that occur in only a small subset of input values Find difficult-to-encounter bugs such as race conditions or corrupted state Catch intermittent failures Leverage idle hardware Address risks and provide value in ways that traditional automation and manual testing normally don’t

  10. How do we do HiVAT? Lots of ways! Kaner gives this classification scheme which covers many techniques (including the ones we’re about to talk about)

  11. Methods that focus on inputs Testers usually divide inputs into equivalence classes and pick high-value representative values For reasonably-sized datasets, automation doesn’t need to do this! Run all (or at least many of) the values through the automation Alternatively, use random input generation to get a stream of input values to use for testing

  12. Parametric Variation • Replace small equivalence class representative sets • Some input sets may allow running the total set of inputs • Doug Hoffman’s MASPAR example • Others may still require sampling • Valid passwords example • Sampling can be optimized if data is well understood • Can generate random values
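
A rough illustration of parametric variation is sketched below in Python. The calculate_shipping function and its input range are placeholders, not anything from the presentation; the point is that a small input domain can be run exhaustively, while a huge one can be covered with a large random sample rather than a handful of hand-picked representatives, with only a cheap sanity oracle on each result.

```python
import random

def calculate_shipping(weight_grams):
    """Stand-in for the real system under test; replace with your own call."""
    if weight_grams <= 500:
        return 3.50
    return 3.50 + 0.01 * ((weight_grams - 500) // 100)

def check(weight):
    """Cheap oracle: the call should not crash and should never return a negative cost."""
    try:
        return calculate_shipping(weight) >= 0
    except Exception:
        return False

# Small domain: skip equivalence-class sampling entirely and run every value.
failures = [w for w in range(0, 100_001) if not check(w)]

# Huge domain: use a large random sample instead of a few hand-picked representatives.
failures += [w for w in (random.randint(0, 10**9) for _ in range(100_000)) if not check(w)]

print(f"{len(failures)} failing inputs found")
```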

  13. High-Volume Combination Testing • Testers often use combinatorial test techniques to get a workable set of combinations to cover interactions • These techniques leave combinations uncovered • If we know which uncovered combinations are more important or risky, we can add them to the test set • What about when we don’t know which ones are of interest? • HiVAT tests can run many more combinations through than are usually done • Sampling can be same as Parametric Variation • Retail POS system example
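
A minimal sketch of the idea, assuming made-up factors for a retail POS system and a stand-in checkout driver (neither comes from the slides): a combinatorial tool would cover these factors with a small fraction of the full cross-product, while the HiVAT loop simply executes every combination and applies a cheap oracle to each result.

```python
import itertools

# Hypothetical factors for a retail POS system; substitute your own.
payment_types = ["cash", "credit", "debit", "gift_card"]
discount_pcts = [0, 5, 10, 25, 50]
tax_regions   = ["MN", "FL", "CA", "NY", "TX"]
item_counts   = range(1, 51)

def checkout_total(payment, discount, region, items):
    """Stand-in for driving the real POS checkout; replace with real automation."""
    subtotal = items * 9.99 * (1 - discount / 100)
    return round(subtotal * 1.07, 2)  # placeholder flat tax

# A pairwise tool would cover these factors with far fewer tests than the
# 5,000 full combinations; the HiVAT loop just runs them all.
for combo in itertools.product(payment_types, discount_pcts, tax_regions, item_counts):
    total = checkout_total(*combo)
    assert total >= 0, f"negative total for {combo}"
```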

  14. Input Fuzzing/Hostile Data Stream Testing Given a known good set of inputs Make changes to the input and run each changed value through the system Watch for buffer overruns, stack corruption, crashes, and other system-level problems Expression Blend example Alan Jorgensen’s Acrobat Reader work
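
One possible shape for such a run, assuming a hypothetical command-line viewer (viewer_under_test) and a known-good file on disk; the byte-corruption and crash-detection logic is the generic part, and everything named here is a placeholder rather than the tooling used in the examples above.

```python
import random
import subprocess

# "viewer_under_test" is a hypothetical command; point this at your own parser or viewer.
GOOD_FILE = "known_good.pdf"
TARGET_CMD = ["viewer_under_test", "--headless"]

good_bytes = open(GOOD_FILE, "rb").read()

for i in range(10_000):
    mutated = bytearray(good_bytes)
    for _ in range(random.randint(1, 8)):                 # corrupt a few random bytes
        mutated[random.randrange(len(mutated))] = random.randrange(256)
    with open("fuzz_case.pdf", "wb") as f:
        f.write(mutated)
    try:
        result = subprocess.run(TARGET_CMD + ["fuzz_case.pdf"],
                                capture_output=True, timeout=30)
        failed = result.returncode < 0                    # killed by a signal on POSIX
    except subprocess.TimeoutExpired:
        failed = True                                     # a hang is also worth investigating
    if failed:
        with open(f"crash_{i}.pdf", "wb") as f:
            f.write(mutated)
        print(f"iteration {i}: crash or hang, input saved as crash_{i}.pdf")
```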

  15. Automated Security Vulnerability Checking Scan an application for input fields For each input field, try a variety of common SQL Injection and Cross-Site Scripting attacks to detect vulnerabilities Mark Fiorvanti’s WTST paper (see references)
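
A rough sketch of the probing loop, assuming a hypothetical search form at test-env.example.com with a single field named q; the payload list and the detectors are deliberately crude, and probes like this should only ever be pointed at systems you are authorized to test.

```python
import requests

# Hypothetical target: a search form at test-env.example.com with one field named "q".
TARGET = "http://test-env.example.com/search"
FIELDS = ["q"]

payloads = [
    "' OR '1'='1",
    "'; --",
    "<script>alert(1)</script>",
    "\"><img src=x onerror=alert(1)>",
]

for field in FIELDS:
    for payload in payloads:
        resp = requests.post(TARGET, data={field: payload}, timeout=10)
        body = resp.text.lower()
        # Crude detectors: a reflected payload suggests XSS; database error text
        # or a 500 response suggests unhandled input reaching the SQL layer.
        if payload.lower() in body or "sql syntax" in body or resp.status_code >= 500:
            print(f"possible vulnerability: field={field!r} payload={payload!r} "
                  f"status={resp.status_code}")
```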

  16. One problem with input focused tests • We need an oracle! • It can be hard to verify the correctness of the results without duplicating the functionality we’re testing • Input-focused tests may look for more obvious errors • Crashes • Memory problems • Simple calculations

  17. Methods that exploit oracles Sometimes we already have an oracle available If so, we can take advantage of it!

  18. Functional Equivalence Run lots of inputs through the SUT and another system that does the same thing, then compare outputs FIT Testing 2 exam example
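
A minimal functional-equivalence loop, using the standard library's math.sqrt as the reference oracle and a stand-in sut_sqrt for the implementation under test; both names are illustrative only.

```python
import math
import random

def sut_sqrt(x):
    """Stand-in for the implementation under test; replace with the real SUT call."""
    return x ** 0.5

mismatches = []
for _ in range(1_000_000):
    x = random.uniform(0, 1e12)
    expected = math.sqrt(x)                    # the reference implementation is the oracle
    if not math.isclose(sut_sqrt(x), expected, rel_tol=1e-9):
        mismatches.append(x)

print(f"{len(mismatches)} mismatches in 1,000,000 random inputs")
```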

  19. Constraint Checks • Look for obviously bad data • US ZIP codes that aren’t 5 or 9 digits long • End dates that occur before start dates • Pictures that don’t look right
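
A sketch of constraint checking against a stand-in lookup_customer call (a hypothetical name, not part of the presentation): the oracle never knows the fully correct answer, only that certain outputs are obviously wrong.

```python
import datetime
import random
import re

def lookup_customer(customer_id):
    """Stand-in for querying the real system; replace with a real lookup."""
    return {"zip": "55401",
            "start_date": datetime.date(2012, 1, 1),
            "end_date": datetime.date(2013, 1, 1)}

def constraint_violations(record):
    problems = []
    if not re.fullmatch(r"\d{5}(-\d{4})?", record["zip"]):
        problems.append("ZIP is not 5 or 9 digits")
    if record["end_date"] < record["start_date"]:
        problems.append("end date before start date")
    return problems

for _ in range(500_000):
    rec = lookup_customer(random.randint(1, 10**7))
    for problem in constraint_violations(rec):
        print(problem, rec)
```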

  20. State-Model Walking • 3 things required • State model of the application • A way to drive the application • A way to determine what state we’re in
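
A bare-bones random walk over a toy state model; the model, the perform driver, and the observed_state probe below are placeholders for the three things the slide lists, and in a real harness the probe would query the application rather than agree by construction.

```python
import random

# Toy state model: state -> {action: resulting state}
MODEL = {
    "logged_out": {"log_in": "home"},
    "home":       {"open_settings": "settings", "log_out": "logged_out"},
    "settings":   {"go_back": "home"},
}

def perform(action):
    """Drive the application (placeholder)."""

def observed_state():
    """Ask the application which state it is in (placeholder: always agrees)."""
    return expected

expected = "logged_out"
for step in range(100_000):
    action, next_state = random.choice(list(MODEL[expected].items()))
    perform(action)
    expected = next_state
    actual = observed_state()
    if actual != expected:
        print(f"step {step}: model expected {expected!r} but application is in {actual!r}")
        break
```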

  21. Methods that exploit existing tests or tools • Existing artifacts can be used in high-volume testing • Tests • Load Generators

  22. Long-Sequence Regression Testing • Take a set of individually passing automated regression tests • Run them together in long chains over extended periods of time • Watch for failures • Actions may leave corrupted state that only later appears • Sequence of actions may be important • Mentsville example
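
One way such a run might be wired up, with do-nothing placeholder tests standing in for your real regression suite and a wall-clock deadline as the stopping criterion; none of the test names below come from the Mentsville work itself.

```python
import random
import time
import traceback

# Placeholder tests standing in for individually passing automated regression tests.
def test_create_order(): pass
def test_edit_profile(): pass
def test_run_report():   pass

tests = [test_create_order, test_edit_profile, test_run_report]
deadline = time.time() + 8 * 60 * 60            # e.g. run overnight for 8 hours
iteration = 0

while time.time() < deadline:
    iteration += 1
    test = random.choice(tests)
    try:
        test()
    except Exception:
        # A failure here, from a test that passes in isolation, points at state
        # corrupted by the earlier sequence of actions.
        print(f"iteration {iteration}: {test.__name__} failed")
        traceback.print_exc()
        break
```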

  23. High-Volume Protocol Testing • Send a string of commands to a protocol handler • Web service method calls • API calls • Protocols with defined order
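
A sketch of the idea against a hypothetical cart/checkout web service; the endpoints and payloads are assumptions, and the only oracle is that the handler should reject out-of-order commands cleanly rather than return server errors.

```python
import random
import requests

# Hypothetical cart/checkout web service; the endpoints and payloads are assumptions.
BASE_URL = "http://test-env.example.com/api"

def new_cart():
    return requests.post(f"{BASE_URL}/cart", json={}, timeout=10).json()["cart_id"]

COMMANDS = [
    lambda cid: requests.post(f"{BASE_URL}/cart/{cid}/items",
                              json={"sku": "ABC", "qty": 1}, timeout=10),
    lambda cid: requests.delete(f"{BASE_URL}/cart/{cid}/items/ABC", timeout=10),
    lambda cid: requests.post(f"{BASE_URL}/cart/{cid}/checkout", json={}, timeout=10),
]

for run in range(1_000):
    cart_id = new_cart()
    for step in range(random.randint(10, 500)):   # a long, randomized command sequence
        resp = random.choice(COMMANDS)(cart_id)
        # Out-of-order commands should be rejected cleanly, not crash the handler.
        if resp.status_code >= 500:
            print(f"run {run}, step {step}: server error {resp.status_code}")
            break
```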

  24. Load-enhanced Functional Testing Run your existing automated functional tests AND your automated load generation at the same time Add in additional diagnostic monitoring if available Systems behave differently under load System resource problems may not be visible when resources are plentiful Timing issues
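
A simple way to combine the two in one process, with placeholder URLs and a trivial functional check; in practice the background load would more likely come from a dedicated load generator running alongside your existing functional suite.

```python
import threading
import requests

# Placeholder URLs; substitute your own environment and load tooling.
BASE_URL = "http://test-env.example.com"
stop = threading.Event()

def background_load():
    while not stop.is_set():
        try:
            requests.get(f"{BASE_URL}/catalog", timeout=10)
        except requests.RequestException:
            pass                                   # the load thread just keeps going

def functional_test_search():
    resp = requests.get(f"{BASE_URL}/search", params={"q": "widget"}, timeout=10)
    assert resp.status_code == 200

for _ in range(20):                                # 20 threads of steady background load
    threading.Thread(target=background_load, daemon=True).start()

try:
    for _ in range(10_000):                        # the same functional check, now under load
        functional_test_search()
finally:
    stop.set()
```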

  25. Starting HiVAT in your organization • Inventory what you already have • Existing tests you can chain together (Preferably without intervening clean-up code) • Tools you can put to additional uses • Oracles you can use • Places where small samples have been chosen from a larger data set • Hardware that is sometimes sitting idle

  26. Starting HiVAT in your organization Match your inventory up to techniques that can take advantage of them Think about what sorts of risks and problems a technique could reveal in your application For each risk, do you have other tests that can be reasonably expected to cover that issue? How much value is there in getting information about the risk? How much effort is required to get the information? What other tasks could you do in the same time? Is the value of the information ≥ the cost to implement + the value of the other tasks?

  27. Summary High volume automated testing is a family of test techniques focused on running an arbitrary number of tests The number of tests is often defined by an amount of time or coverage of a set of values rather than trying for a minimal set Some high-volume techniques focus on covering a set of inputs Some take advantage of an accessible oracle Some reuse existing artifacts in new ways Determining what makes sense for you is a matter of risk and value

  28. References Cem Kaner’s High Volume Test Automation Overview: http://kaner.com/?p=278 Cem’s WTST 12 write-up: http://context-driven-testing.com/?p=69 WTST 12 home page (with links to papers and slides, including Mark Fiorvanti’s): http://wtst.org Doug Hoffman’s MASPAR example: http://www.testingeducation.org/BBST/foundations/Hoffman_Exhaust_Options.pdf Alan Jorgensen’s “Testing With Hostile Data Streams” paper: https://www.cs.fit.edu/media/TechnicalReports/cs-2003-03.pdf Pat McGee & Cem Kaner’s Long-Sequence Regression Test (Mentsville) plan: http://www.kaner.com/pdfs/MentsvillePM-CK.pdf

  29. Contact Information Andy Tinkham Magenic Technologies andyt@magenic.com http://magenic.com http://ohours.org/andytinkham http://testerthoughts.com http://twitter.com/andytinkham
