
Definitions and Concepts of Testing and Quality

Definitions and Concepts of Testing and Quality: What is Quality? What is Testing? How are they related? Does finding a lot of problems through testing improve Quality? Introduction to Testing Software.


Presentation Transcript


  1. Definitions and Concepts of Testing and Quality What is Quality? What is Testing? How Related? Does finding a lot of problems through testing improve Quality?

  2. Introduction to Testing Software • Testing is a relatively new discipline, although programmers have always “debugged” their programs. • Testing was conducted to show that software “works.” • In the 1970’s Glenford Myers wrote a book, The Art of Software Testing (1979). • He believed that the main purpose of testing is to find “faults.” • Are these (“works” vs. “faults”) opposing views? • Why do we, and why should we, test? Your thoughts?

  3. Why Test? • Simply because we can’t assume that the software “works” ---- • How do we answer the question: “Does this software work?” • Is it functional? • Complete? • Consistent? • Is it reliable? (e.g., the system functions continuously for > 720 hrs) • Is it available? (can you access the functionality when you want?) • Is it responsive? (e.g., response time < 1 second) • In general, “what is the QUALITY of our software?” • How do we answer this? How would you answer this about the program that you wrote? How would you answer this about the program your friend wrote?
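To make one of these questions concrete, here is a minimal Python sketch that turns the “responsive” question into an executable check against the 1-second target mentioned above. The handle_request() function is a hypothetical stand-in for whatever operation the requirement covers; attributes such as reliability over 720 hours generally need long-running monitoring rather than a single test like this.

    import time

    def handle_request():
        # Hypothetical stand-in for the real operation under test.
        time.sleep(0.05)
        return "ok"

    def test_response_time_under_one_second():
        start = time.perf_counter()
        result = handle_request()
        elapsed = time.perf_counter() - start
        assert result == "ok"   # functional: did it produce the expected answer?
        assert elapsed < 1.0    # responsive: did it meet the < 1 second target?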

  4. Quality of Software? • Depends on what we include as software • Just the executable code? ----- How about? • the pre-populated data in the database • the help text and error messages • the source (logic) code • the design document • the test scenarios and test results • the reference manuals • the requirements document • When we talk about quality, how much of the above (do we / should we) include? • How would you “test” these different artifacts?

  5. 2 Main Reasons for Testing 1. Assess the Quality/Acceptability of the (software) Artifact --- how much works? 2. Discover Problems in the (software) Artifact. If we are already “satisfied” with the software, should we test? How do we know if we are satisfied? ---- based on what?

  6. 3 major “Testing” approaches 1. Traditionally, testing includes executing the code with test cases (assuming code is the main software artifact). 2. What do we do with the non-executable software artifacts? Reviews and Inspections - Expensive in terms of human resources - A lot to maintain and keep updated 3. Can we “prove” that the software works or is defect free? - Theoretically, given an arbitrary program we cannot show that it has no bugs. - We can use Formal Proofs to show the behavior of code
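A minimal illustration of the first approach, executing code with test cases; the buggy absolute_value() function below is invented for the example. The chosen test cases all pass, so the code appears to “work,” yet a fault hides in an input the cases never exercise, which is Myers’ point and why testing cannot prove the absence of bugs.

    def absolute_value(x):
        if x < 0:
            return -x
        elif x > 0:
            return x
        # Fault: no return for x == 0, so absolute_value(0) yields None.

    # Test cases executed against the code: both pass.
    assert absolute_value(3) == 3
    assert absolute_value(-3) == 3
    # The suite "passes" and the software seems to work, but the untested
    # input 0 still triggers the fault.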

  7. An Informal Survey 15 years ago (late 90’s) • Professionals taking a course in testing: • 60% were new to testing • 20% had 1 to 5 years of experience in testing • 20% were expert testers • Metrics used in testing: • a) Most regularly used: counting bugs found and ranking them by severity • b) A small number used: bug find rate or bug fix rate • Formal Methods used: • Almost none were formally trained in inspection or analysis • Test tools: • 76% had been exposed to some automated test tool (SilkTest – Borland; QuickTest – HP, etc.) • Test definitions: • Most of the practicing testers could not supply good definitions of testing terms; they just did the work! What do you think?

  8. Historical Progression of “Attitudes” on Software Quality • 1980’s & Before: Large host & centralized systems - single vendor (hdw-sw-serv) - long-term development and long-term investment (10 yrs) - single platform - systems run by computer professionals **Product Reliability & Quality was required and expected** • 1990’s: PC and desktop computing became ubiquitous - multiple vendors - quicker product development & shorter-term investment - systems run by non-computer individuals **New product was fashionable & “reboot” became acceptable** even though we didn’t have very good quality • Late 1990’s - 2015’s: Web services available to many - business conducted on the Web - software and systems are not hobbies but a “business” again **Product Reliability & Quality is once again important** -- especially SECURITY --

  9. Current State (2000-2015) on Testing • Software is written by many with a heightened “entrepreneurial” spirit: • Speed to market • New & innovative is treasured • Small organizations that can’t afford much more than “coders” • Embracing the “Agile” process and mistaking it for “fast production regardless of what”: • Not much documented (requirements/design) • Hard to test without documented material • Lack of Trained/Good/Experienced Testers • Testers are not quite rewarded “as equals” to designers, but are definitely gaining on and sometimes surpassing programmers (it takes “good” cops to catch the thieves --- e.g. the SECURITY area) • Improvement in tools and standards is making development easier and less error prone. Why? How? * The “entrepreneurs” such as Yahoo/Google/Amazon/Facebook are all maturing into large companies ---- with “SERVERS” --- reliability matters, again --- so must test more ----

  10. Is Quality still a major Issue? • What is Quality? • Some common comments: • “I know it when I see it” • Reliable • Secure • Meets requirements • Fit for use • Easy to use • Responsive • Full function / full feature • Classy and luxurious

  11. Some U.S. “pioneers” on Quality • Pioneers: • Joseph M. Juran • Quality => Fitness for use • W. Edwards Deming • Quality => Non-faulty system • More recently: • Philip Crosby • Quality => conformance to requirements • Achieve quality via “prevention” of error • Target of Quality is “zero defects” • Measure success via “cost of quality”

  12. ISO/IEC 9126 (1994) Quality “model” • Quality is composed of several characteristics: • Functional Completeness/Consistency • Reliability • Usability • Efficiency • Maintainability • Portability What do these mean? How do you know if it is achieved & how would you “test” for these? Do these have to be specified in the requirements? ---- if they are not, do we ask for these during review/inspection? Your Thoughts?

  13. Quality • Quality is a characteristic or attribute • Needs to be clearly defined and agreed to • May have sub-attributes (e.g. previous page) • Needs specific metrics for each of the sub-attributes • Needs to be measured with the defined metric(s) • Needs to be tracked and analyzed • Needs to be projected • Needs to be controlled • Testing and Measurement are two key activities that would help us manage quality. Imagine what it is like without these. Explain ----- the relationship?
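As a sketch of what “defined metric, measured, tracked” can look like in practice, the snippet below tracks one sub-attribute metric, defect density (defects per KLOC), across builds; the metric choice and all figures are invented for illustration.

    # Defect density = defects found per thousand lines of code (KLOC).
    def defect_density(defects_found, lines_of_code):
        return defects_found / (lines_of_code / 1000.0)

    builds = [                      # invented measurements, one per build
        {"build": "1.0", "defects": 42, "loc": 20000},
        {"build": "1.1", "defects": 30, "loc": 22000},
        {"build": "1.2", "defects": 18, "loc": 23500},
    ]

    for b in builds:
        density = defect_density(b["defects"], b["loc"])
        print(f'build {b["build"]}: {density:.2f} defects/KLOC')  # track and analyze the trend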

  14. Some Precepts Concerning Quality & QA • Quality requirements do not always dictate schedule! • Market conditions often dictate schedule (especially for small companies) BUT • For large and multiple-release software, quality is still a factor and may affect schedule ---- albeit schedule is seldom changed for quality • Software development process today incorporates both the need for speed and quality (incorporating the notions of a) service cost and b) rewrite for a replacement new product). • Quality does not require “zero defect” reliability • Commercial (non-life-critical or mission-critical) products are not developed with a “zero defect” goal in mind. They are much more market driven --- the market prefers but does not “demand” zero defects. • Focus on proper support • Focus on main functions and heavily used areas (not all defects are the same) • Focus on customer productivity (e.g. easy to learn and use) • Zero Defect is a very expensive proposition (time & resources)

  15. Some Precepts Concerning Quality & QA (cont.) • Users Do Not Always Know What They Want • Users may not know all the requirements (especially for large, complex systems which require professional or legal knowledge.) • Users may not have the time or interest to “really focus” on the requirements at the time when asked (timing problem). Users have their own full-time jobs. • Users may not know how to prioritize needs from wishes • Users may not know how to articulate clearly all the requirements. (They are non-software development people.) • Developers may not listen well or properly interpret the users’ statements. (They are not industry specialists.) Requirements are a key factor in software development ---- why? How do they affect software quality? ----- think about definitions of Quality --- “meets requirements”

  16. Some Precepts Concerning Quality & QA (cont.) • Requirements are not always Clear, Stable, and Doable • Not all requirements are technically feasible; sometimes the “desired” new technology needs to be prototyped first. • Sometimes the requirements are changed, causing re-design or re-code without proper assessment of schedule impact. • Requirements are not always reviewed and signed off, but sometimes given in verbal form --- especially small changes. • People mistake iterative development to mean continuous change of requirements. What’s the danger here? – cost, schedule, quality

  17. Some Precepts Concerning Quality & QA (cont.) • Customers often go for a “New and Exciting” Product • “If the product has all the ‘needed’ features it would sell” --- is not necessarily true; people often WANT new & extra features. • Reliability is not always enough; sometimes customers will sacrifice quality for new and exciting features. • The story of IBM’s OS/2 operating system and Microsoft’s DOS operating system (even though both were commissioned by IBM). • IBM went for the Reliability of the old Host Machines for desktop PC’s • Microsoft went for exciting individual user interfaces. Over-emphasis on “exciting features” is one reason why we have regressed a little in software quality in the last ten years! **Still, consider the Apple iPhone’s success in spite of its activation & other problems

  18. Some Precepts Concerning Quality & QA (cont.) • Price and Availability are sometimes more important to customers (especially for “commodity level” software) than Product Maturity or Quality. • At the commodity level of software, the customers are individuals who want the product NOW at a competitive price (much like shopping for a home appliance such as a coffee maker, TV, or an iPhone). • Sophisticated and full-featured software needs to be balanced and sometimes traded off for price and speed. • Customers don’t always “need” all the functions and product maturity they think they require ---- if the price is right!

  19. Summarizing: major “Competitors” and “Adversaries” to Quality when developing software: • Schedule (first to market) • Requirements (“bad” or “missing”) • New and Exciting (demands of “WANT” not “need”) • Price and Availability (retail customers)

  20. Some “Goals & Objectives” for Quality • Customer Oriented “Goals” (Example): • Show that your product “works” ( ---- perhaps x%) • Test all main paths and show that there is no problem • Show that your intent is customer satisfaction: • Test and find as many problems as possible and fix them before release, focusing on attributes such as: • “usability” • “reliability” • “functional completeness” • “price” and “innovation” • Developer Oriented “Goals” (Example): • Focus on both product and process • Process includes ample “testing” activities ---- within cost/schedule • Product is maintainable, easy to understand, reliable, complete, etc. Need numerical goals

  21. “Goal” Construction and Usage Steps • Define the sub-attributes of “Quality” interest • Define a metric or use an existing metric for that sub-attribute • Set a “goal” for that Quality interest ---- a quantitative one • Measure, collect/record and analyze the collected data as we progress through the project • Relate to (or prognosticate) and Assess the general quality of the product
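A small worked example of these steps, with invented numbers: the sub-attribute is reliability, the metric is the percentage of executed test cases that pass, and the goal is set quantitatively at 98%.

    GOAL_PASS_RATE = 0.98            # step 3: a quantitative goal

    executed, passed = 1250, 1234    # step 4: data collected during the project

    pass_rate = passed / executed
    print(f"measured pass rate: {pass_rate:.1%}")                        # 98.7%
    print("goal met" if pass_rate >= GOAL_PASS_RATE else "goal missed")  # step 5: assess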

  22. Start of “Quality Assurance” • As software size and complexity increased in the 1960’s and 1970’s, many software projects started to fail. • Many software products did not perform the required functions • Others had performance (speed) problems • Some had a large number of defects that prevented users from completing their work; some just flat out wouldn’t even install! • The Software “Quality” Crisis was recognized and Quality Assurance was born, along with the term Software Engineering (1968 NATO conference).

  23. Software Quality Assurance (QA) • Software QA focused on 2 main areas: • Software product • Software process • The focus on the process areas “borrowed” many techniques from the traditional i) manufacturing area and ii) systems engineering area • Concept of reliability (number of defects, mean time to failure, probability of failure, etc. metrics) --- mostly handled by “testing” • Concept of process control in terms of looking at the “repeatability” of the process --- a repeatable process produces a “similar” product (or controllable results) --- lots of emphasis: e.g. CMM & CMMI
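One of the borrowed reliability metrics, mean time to failure (MTTF), reduces to simple arithmetic over observed failure data; the hours below are invented for illustration.

    # Hours the system ran before each observed failure (invented test-lab data).
    hours_to_failure = [310, 540, 125, 890, 410]

    mttf = sum(hours_to_failure) / len(hours_to_failure)
    print(f"MTTF: {mttf:.0f} hours")   # 455 hours of operation per failure, on average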

  24. QA, Process Control, and Documentation • A period of i) heavy emphasis on software development process and ii) excessive documentation dominated QA --- initially this improved the “software crisis.” • Process included multiple steps of reviews • Process included multiple steps of test preparation, test execution, and test result analysis • Process was controlled by many documents and document flow, which also improved project communications • But ----- the price paid was • a) speed and • b) some innovation. (**** Lots of energy spent on process, LESS on product ****) Very Small Enterprises ( ≤ 25 people) could not afford process & documentation!

  25. Software Development is NOT Manufacturing (or is it?) • Software Development is extremely labor intensive • BUT --- People are not uniform like the machines used in manufacturing. • Software Development often requires some innovation • Every software product seems to be one of a kind, although more and more are becoming “standardized by domain” • The same set of people do not get to repeatedly develop the exact same software multiple times.

  26. Some Areas of Improvement for QA Process and Control • Automate Record Keeping • Improve Documentation Techniques • Use trained “testers” and improve on their tools and methodologies

  27. Automate Record Keeping & Documentation • Many records are kept and should be automated with on-line forms: • Test plan • Test schedule • Test scripts • Test results • Etc. • The information should often be available to a “set” or “group” of people: • Repository or Configuration Management • Collaborative web-sites
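A minimal sketch of the kind of record such an on-line form might capture and store in a shared repository; the field names and values below are invented.

    from dataclasses import dataclass, field
    from datetime import date

    @dataclass
    class TestResult:
        test_id: str
        description: str
        executed_on: date
        tester: str
        passed: bool
        defects_filed: list = field(default_factory=list)  # linked defect IDs

    record = TestResult("TC-042", "login with expired password",
                        date(2015, 3, 2), "j.doe", False, ["DEF-1187"])
    print(record)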

  28. Improve Testers’ Tools and Methodology • Test Methodology Improvements • Test coverage analysis • Test case generation • Test-Fix-Integrate Process • Test results analysis & quality projection • Test metrics definition and measurements process • Etc. • Test tools improvements • Test coverage computation • Test trace • Test script generator • Test result records keeping and automated analysis • Build and integration (daily builds) • Etc.
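As a toy sketch of what “test coverage computation” means at the statement level: compare the set of executable lines in the code with the set of lines a test run actually executed. Both sets below are invented; real tools gather them by instrumenting the program.

    executable_lines = set(range(1, 101))   # 100 executable statements in the program
    executed_lines = set(range(1, 83))      # statements the test run actually touched

    coverage = len(executed_lines & executable_lines) / len(executable_lines)
    uncovered = sorted(executable_lines - executed_lines)
    print(f"statement coverage: {coverage:.0%}")      # 82%
    print(f"first uncovered lines: {uncovered[:5]}")  # candidates for new test cases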

  29. Some Wrap-up Questions • What is Quality? ---- an attribute ----- • What are some of the Goals of Quality? • What areas does Quality Assurance focus on? • What is Testing and what activities are included? Read the article “Clearing a Career Path for Software Testers” by E. Weyuker, T. Ostrand, J. Brophy, & R. Prasad in IEEE Software (March/April 2000) to get further understanding of this profession.
