
The Axioms of Testing: Advancing Testing Using Axioms. Paul Gerrard.


Presentation Transcript


  1. THE AXIOMS OF TESTING Advancing Testing Using Axioms Paul Gerrard

  2. Agenda • Axioms – a Brief Introduction • Advancing Testing Using Axioms • First Equation of Testing • Test Strategy and Approach • Testing Improvement • A Skills Framework for Testers • Quantum Theory for Testing • Close

  3. Some testers disagree with other testers... Truly, madly, deeply! Surely, there must be SOME things that ALL testers can AGREE ON? Or are we destined to argue FOREVER?

  4. Test axioms – a little history • Started as a ‘thought experiment’ in my blog in February 2008 • Some quite vigorous debate on the web: ‘great idea’, ‘axioms don’t exist’, ‘Paul has his own testing school’ • Initial 12 ideas evolved to 16 test axioms • Testers Pocketbook: testers-pocketbook.com • Test Axioms website: test-axioms.com

  5. If Axioms are “common ground” for ALL testers... • Some very useful by-products: test strategy, improvement, skills framework • Interesting research areas: First Equation of Testing, Testing Uncertainty Principle, Quantum Theory, Relativity, Exclusion Principle... • You can tell I like physics

  6. There is no agreed set of testing laws There are no agreed definitions of test or testing!

  7. Our definitions MUST be context-neutral to support the testing of ANY SYSTEM. The words software, IT, program, technology, methodology, v-model, entry/exit criteria, risk – do not appear in the definitions.

  8. The selected definition of ‘test’ American Heritage Dictionary: Test: (noun) A procedure for critical evaluation; A means of determining the presence, quality, or truth of something; A trial.

  9. ‘Stakeholder Obsessed’ A testing stakeholder is someone who is interested in the outcome of testing; You can be your OWN stakeholder (e.g. dev and users)

  10. 16 Proposed Axioms (in three groups) Let’s look at a few of the test axioms

  11. Stakeholder Axiom Testing needs stakeholders

  12. Test Model Axiom Test design is based on models

  13. Test Basis Axiom Testers need sources of knowledge to select things to test

  14. Coverage Axiom Testing needs a test coverage model or models
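A hedged illustration of the Coverage Axiom (my sketch, not the speaker's): a coverage model can be as simple as a set of items the tests are expected to exercise, with coverage reported as the fraction of model items touched. The partition names and test names below are invented for the example.

```python
# Minimal sketch of a coverage model: a set of model items
# (here, invented equivalence partitions) and the tests that exercise them.
coverage_model = {"empty input", "single item", "many items", "max size", "over max"}

tests_run = {
    "test_empty":   {"empty input"},
    "test_typical": {"single item", "many items"},
}

# Union of everything the executed tests touched, measured against the model.
covered = set().union(*tests_run.values())
coverage = len(covered & coverage_model) / len(coverage_model)

print(f"Model coverage: {coverage:.0%}")            # 3 of 5 items -> 60%
print("Uncovered:", sorted(coverage_model - covered))
```

The point of the sketch is that "coverage" is only meaningful relative to a declared model; a different model would yield a different number for the same tests.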

  15. Fallibility Axiom Our sources of knowledge are fallible and incomplete

  16. Confidence Axiom The value of testing is measured by the confidence of stakeholder decision making

  17. Event Axiom Testing never goes as planned; evidence arrives in discrete quanta (cartoon: “Dogs are so cute when they try to comprehend quantum mechanics”)

  18. Never-Finished Axiom Testing never finishes; it stops

  19. Consider Axioms as thinking tools

  20. Advancing Testing Using Axioms

  21. First Equation of Testing Axioms + Context + Values + Thinking = Approach

  22. Why is the equation useful? • Separation of Axioms, context, values and thinking • Tools, methodologies, certification, maturity models promote approaches without reference to your context or values • No thinking is required! • Without a unifying test theory you have no objective way of assessing these products.
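The separation the equation insists on can be sketched as data plus a function. This is purely illustrative (the field names, axiom list and example values are mine, not the speaker's): the axioms are fixed inputs, while context, values and thinking must come from your own situation.

```python
from dataclasses import dataclass

@dataclass
class TestApproach:
    """Approach = Axioms + Context + Values + Thinking (illustrative sketch)."""
    axioms: list    # universal: the same for every project
    context: list   # yours: domain, timescales, skills, environment...
    values: list    # yours: what your stakeholders actually care about
    thinking: str   # yours: nobody else can supply this part

def derive_approach(context, values, thinking,
                    axioms=("Stakeholder", "Test Model", "Coverage", "Fallibility")):
    # The axioms are constant; everything else must be supplied by *you* --
    # which is why a tool or maturity model cannot hand you an approach.
    return TestApproach(list(axioms), context, values, thinking)

approach = derive_approach(
    context=["safety-critical", "regulated", "6-month release cycle"],
    values=["evidence for the regulator outweighs speed"],
    thinking="risk-based scope, early static testing",
)
```

The design point is simply that three of the four terms are parameters with no defaults: omitting any of them is an error, mirroring the slide's claim that approaches promoted without reference to context or values require no thinking.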

  23. Test Strategy and Approach Strategy is a thought process not a document

  24. Contexts of Test Strategy (mind-map slide): the test strategy sits among Axioms, Communication, Early Testing, Risks, De-Duplication, Opportunities, Goals, Automation, Culture, Contract, User involvement, Constraints, Human resource, Artefacts, Skills, Environment, Process (lack of?), Timescales

  25. Testing needs stakeholders (p64) Summary: Identify and engage the people or organisations that will use and benefit from the test evidence we are to provide. Consequence if ignored or violated: There will be no mandate or authority for testing; reports of passes, fails or enquiries have no audience. Questions: • Who are they? • Whose interests do they represent? • What evidence do they want? • What do they need it for? • When do they want it? • In what format? • How often?

  26. Test design is based on models (p68) Summary: Choose test models to derive tests that are meaningful to stakeholders. Recognise the models’ limitations and the assumptions that the models make. Consequence if ignored or violated: Test design will be meaningless and not credible to stakeholders. Questions: • Are design models available to use as test models? Are they mandatory? • What test models could be used to derive tests from the Test Basis? • Which test models will be used? • Are test models to be documented or are they purely mental models? • What are the benefits of using these models? • What simplifying assumptions do these models make? • How will these models contribute to the delivery of evidence useful to the acceptance decision makers? • How will these models combine to provide sufficient evidence without excessive duplication? • How will the number of tests derived from models be bounded?
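To make the axiom concrete (my example, not the speaker's): even a tiny boundary-value model of a numeric input range is enough to derive tests mechanically, and it also bounds the number of tests and makes its simplifying assumption explicit, as the questions above demand.

```python
def boundary_tests(lo, hi):
    """Derive test inputs from a simple boundary-value model of a valid range.

    The model's simplifying assumption -- that failures cluster at the edges
    of the range -- should be stated, per the Test Model Axiom.  The set also
    bounds the number of derived tests (at most six per range).
    """
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

# A percentage field valid from 0 to 100 yields six test inputs:
print(boundary_tests(0, 100))   # [-1, 0, 1, 99, 100, 101]
```

Whether this particular model is adequate is exactly the kind of question the slide's checklist is for: it says nothing, for instance, about values far outside the range or non-numeric input.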

  27. IEEE 829 Test Plan Outline • Test Plan Identifier • Introduction • Test Items • Features to be Tested • Features not to be Tested • Approach • Item Pass/Fail Criteria • Suspension Criteria and Resumption Requirements • Test Deliverables • Testing Tasks • Environmental Needs • Responsibilities • Staffing and Training Needs • Schedule • Risks and Contingencies • Approvals Based on IEEE Standard 829-1998

  28. IEEE 829 Plan and Axioms • Items 1, 2 – Administration • Items 3+4+5 – Scope Management, Prioritisation • Item 6 – All the Axioms are relevant • Items 7+8 – Good-Enough, Value • Item 9 – Stakeholder, Value, Confidence • Item 10 – All the Axioms are Relevant • Item 11 – Environment • Item 12 – Stakeholder • Item 13 – All the Axioms are Relevant • Item 14 – All the Axioms are Relevant • Item 15 – Fallibility, Event • Item 16 – Stakeholder Axiom

  29. A better Test Strategy and Plan? • Stakeholder Objectives • Stakeholder management • Goal and risk management • Decisions to be made and how (acceptance) • How testing will provide confidence and be assessed • How scope will be determined • Design approach • Sources of knowledge (bases and oracles) • Sources of uncertainty • Models to be used for design and coverage • Prioritisation approach • Delivery approach • Test sequencing policy • Repeat test policies • Environment requirements • Information delivery approach • Incident management approach • Execution and end-game approach • Plan (high or low-level) • Scope • Tasks • Responsibilities • Schedule • Approvals • Risks and contingencies

  30. Testing Improvement Test process improvement is a waste of time

  31. The delusion of ‘best practice’ • There are no “practice” Olympics to determine the best • There is no consensus about which practices are best, unless consensus means “people I respect also say they like it” • There are practices that are more likely to be considered good and useful than others, within a certain community and assuming a certain context • Good practice is not a matter of popularity. It’s a matter of skill and context. Derived from “No Best Practices”, James Bach, www.satisfice.com

  32. Actually it’s 11 (most were not software-related)

  33. The delusion of process models(e.g. CMM) • Google search • “CMM” – 22,300,000 • “CMM Training” – 48,200 • “CMM improves quality” – 74 (BUT really 11 – most of these have NOTHING to do with software) • A Gerrard Consulting client… • CMM level 3 and proud of it (chaotic, hero culture) • Hired us to assess their overall s/w process and make recommendations (quality, time to deliver is slipping) • 40+ recommendations, only 7 adopted – they couldn’t change • How on earth did they get through the CMM 3 audit?

  34. “Test Process Improvement is a Waste of Time” • Using process change to fix cultural or organisational problems is never going to work • Improving test in isolation is never going to work either • Need to look at changing context rather than values…

  35. Why you are where you are Context (your context) + Values (your values) + Thinking (your thinking) = Approach (your approach)

  36. Where maturity models come from Context (someone else’s) + Values (someone else’s) + Thinking (someone else’s) = Approach (someone else’s)

  37. Making change happen Axioms (recognise) + Context (hard to change) + Values (could change?) + Thinking (just do some) = Approach (your approach)

  38. Using the axioms and questions • Axioms represent the critical things to think about • Associated questions act as checklists to: • Assess your current approach • Identify gaps, inconsistencies in current approach • QA your new approach in the future • Axioms represent the WHAT • Your approach specifies HOW

  39. Eight stage change process (after Kotter) • Mission • Coalition • Vision • Communication • Action • Wins • Consolidation • Anchoring (callouts on the slide: changes are identified here; if you must use one, this is where your ‘test model’ comes into play)

  40. A Skills framework for testers Axioms indicate WHAT to think about... ...so the Axioms point to SKILLS

  41. Test design is based on models (p68) Summary: Choose test models to derive tests that are meaningful to stakeholders. Recognise the models’ limitations and the assumptions that the models make. Consequence if ignored or violated: Test design will be meaningless and not credible to stakeholders. Questions: • Are design models available to use as test models? Are they mandatory? • What test models could be used to derive tests from the Test Basis? • Which test models will be used? • Are test models to be documented or are they purely mental models? • What are the benefits of using these models? • What simplifying assumptions do these models make? • How will these models contribute to the delivery of evidence useful to the acceptance decision makers? • How will these models combine to provide sufficient evidence without excessive duplication? • How will the number of tests derived from models be bounded?

  42. Test design and modelling skills • A tester needs to understand: • Test models and how to use them • How to select test models from fallible sources of knowledge • How to design test models from fallible sources of knowledge • Significance, authority and precedence of test models • How to use models to communicate • The limitations of test models • Familiarity with common models. Is this all that current certification provides?

  43. Testing as a commodity; testers must specialise • Functional testers are endangered: • Certification covers process and clerical skills • Functional testing is becoming a commodity and is easy to outsource • To survive, testers need to specialise: • Management • Test automation • Test strategy, design, goal- and risk-based • Stakeholder management • Non-Functional testing • Business domain specialists...

  44. Training and certification must change • Intellectual skills and capabilities are more important than the clerical skills • Need to re-focus on: • Testing thought processes (Axioms) • Real-world examples, not theory • Testing as information provision • Goal and risk-based testing • Testing as a service (to stakeholders) • Practical, hands-on, real-world training, exercises and coaching.

  45. Quantum Theory of Testing If evidence arrives in discrete quanta... ...can we assign a value to it?

  46. How testing builds confidence • Tests are usually run one by one • Every individual test has some significance • Some tests expose failures but ultimately we want all tests to PASS • When all tests pass – the stakeholders are happy, aren’t they? • Can we measure confidence? • But...

  47. Testing never goes as planned (p78) • Testers cannot usually: • Prepare all tests they COULD do • Run ALL tests in the plan • Re-test ALL fixes • Regression-test as much or as often as required • How do we judge the significance of tests? • To include them in scope for planning (or not) • To execute them in the right order? • To ensure the most significant tests are run?

  48. Test ‘progress’ is measured using test cases and incidents • What stakeholders ultimately want is every test to pass • The ideal situation is: • We have run all our tests • All our tests pass • Acceptance is a formality • Not all tests pass, though • We track incidents, severity and priority – great • But how do we track the significance or value of tests that pass?
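One hedged answer to the closing question (my sketch, not from the talk): give each test a significance weight when it is designed, then report the weighted value of passing tests alongside the raw count. The test names and weights below are invented for illustration.

```python
# Each test carries a significance weight assigned at design time
# (names, weights and statuses are invented for this sketch).
tests = [
    ("login_happy_path",   10, "pass"),
    ("payment_rounding",    8, "pass"),
    ("help_page_typo",      1, "fail"),
    ("audit_trail_export",  9, "not run"),
]

total  = sum(w for _, w, _ in tests)
passed = sum(w for _, w, s in tests if s == "pass")

# Raw pass rate says 2/4; the weighted view shows most of the *value*
# at stake has been demonstrated, while a high-weight test is still unrun.
print(f"Raw pass rate:       {sum(s == 'pass' for *_, s in tests)}/{len(tests)}")
print(f"Weighted pass value: {passed}/{total} = {passed/total:.0%}")
```

The design choice here is that significance is assigned when the test is designed, not inferred afterwards from failures, which is what lets the metric speak about tests that pass.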
