Software Quality Assurance & Testing
Presentation Transcript

  1. Software Quality Assurance & Testing "A mistake in coding is called an error. An error found by a tester is called a defect. A defect accepted by the development team is called a bug. When the product does not meet the requirements, it is a failure."

  2. Importance of Testing • There are hundreds of stories about failures of computer systems that have been attributed to errors in software. • There are many reasons why systems fail, but the issue that stands out the most is the lack of adequate testing. • Example: Pepsi's "$42 Billion Error" (Philippines, 1992). • Due to a software error, 800,000 bottle caps were produced with the winning number 349 instead of one. • This was equivalent to $42 billion in prize money instead of the intended $40,000.

  3. Testing Levels

  4. Testing Goals Based on Test Process Maturity: Beizer's Testing Levels • Level 0: There is no difference between testing and debugging. • Level 1: The purpose of testing is to show correctness (the software works). • Level 2: The purpose of testing is to show that the software doesn't work. • Level 3: The purpose of testing is not to prove anything specific, but to reduce the risk of using the software. • Level 4: Testing is a mental discipline that helps all IT professionals develop higher-quality software.

  5. Beizer's Testing Levels • Level 0 is the view that testing is the same as debugging. • In Level 1 testing, the purpose is to show correctness. • In Level 2 testing, the purpose is to show failures. • Level 3 testing shows the presence of failures, not their absence. If we use software, we expose ourselves to some risk. The risk may be small and unimportant, or it may be great and the consequences catastrophic, but risk is always there. In Level 3 testing, both testers and developers work together to reduce risk.

  6. Beizer's Testing Levels • Once the testers and developers are on the same "team," an organization can progress to real Level 4 testing. Level 4 testing means that the purpose of testing is to improve the ability of the developers to produce high-quality software.

  7. Debugging and Testing • Debugging is done in the development phase by the developer. • In the debugging phase, identified bugs are fixed. • Testing is done in the testing phase by the tester. • Testing includes locating and identifying bugs.

  8. Software Bug • A software bug is an error, flaw, mistake, failure, or fault in a computer program or system. • "Bug" is the tester's terminology. • A software bug produces an incorrect or unexpected result, or causes the system to behave in unintended ways.

  9. Software Fault • A static (physical) defect*, imperfection, or flaw in the software. Examples: • a short between wires • a break in a transistor • an infinite program loop • A fault means that there is a problem within the code of a program which causes it to behave incorrectly. *defect: a mismatch with the requirements

  10. Software Error • An error is an incorrect internal state that is the manifestation of some fault. • In other words, an error is a deviation from correctness or accuracy. Example: • Suppose a line is physically shorted to 0 (there is a fault). • As long as the value on the line is supposed to be 0, there is no error. • Errors are usually associated with incorrect values in the system state.

  11. Software Failure • A failure means that the program has not performed as expected, and the actual performance has not met the minimum standards for success. • It is external, incorrect behavior with respect to the requirements or another description of the expected behavior. Example: • Suppose a circuit controls a lamp (0 = turn off, 1 = turn on) and the output is physically shorted to 0 (there is a fault). • As long as the user wants the lamp off, there is no failure.
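The lamp example can be sketched in code. This is a hypothetical Java model (the name faultyLine and the stuck-at-0 behavior are illustrative, not from the slides): the fault is always present, but a failure is observed only when the requested output differs from the shorted value.

```java
public class Lamp {
    // Hypothetical model of the shorted output line: whatever the
    // command, the fault forces the output to 0 (stuck-at-0).
    static int faultyLine(int command) {
        return 0; // fault: the line is physically shorted to 0
    }

    public static void main(String[] args) {
        // User wants the lamp OFF (command 0): output matches, no failure.
        System.out.println("off requested -> failure? " + (faultyLine(0) != 0));
        // User wants the lamp ON (command 1): output is still 0 -> failure.
        System.out.println("on requested  -> failure? " + (faultyLine(1) != 1));
    }
}
```

The fault exists in both runs; only the second run turns it into a visible failure.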

  12. eBay Crash • In 1999, eBay was a very large Internet auction house: a top-10 Internet business with a market value of $22 billion, 3.8 million users as of March 1999, and access allowed 24 hours a day, 7 days a week. • On June 6, 1999, the eBay system was unavailable for 22 hours, with problems ongoing for several days. • The stock dropped by 6.5%, with billions in lost revenue. The problems were blamed on Sun server software.

  13. Ariane 5 Rocket Crash (June 4, 1996) • The Ariane 5 rocket exploded 37 seconds after lift-off. • The error was due to a software bug. • Conversion of a 64-bit floating-point number to a 16-bit integer resulted in an overflow. • Due to the overflow, the computer cleared its memory. • Ariane 5 interpreted the memory dump as an instruction to its rocket nozzles. • Testing of the full system under actual conditions was not done due to budget limits. • Estimated cost: $60 million
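The Ariane 5 flight software was written in Ada, but the same class of defect can be sketched in Java; the variable name horizontalBias and the value 70000.0 are assumptions for illustration. Narrowing a 64-bit double to a 16-bit short silently discards the high bits once the value exceeds the short range (-32768 to 32767):

```java
public class OverflowDemo {
    public static void main(String[] args) {
        double horizontalBias = 70000.0; // hypothetical 64-bit sensor value
        // Narrowing conversion: double -> int -> short keeps only the
        // low 16 bits, so the result bears no resemblance to the input.
        short narrowed = (short) horizontalBias;
        System.out.println(narrowed); // prints 4464, not 70000
    }
}
```

In Java the cast compiles without warning; languages that instead raise a runtime exception on overflow (as Ada does) fail loudly, which is exactly what happened on Ariane 5.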

  14. Nine Causes of Software Errors 1. Faulty requirements definition 2. Client-developer communication failures 3. Deliberate deviations from software requirements 4. Logical design errors 5. Coding errors 6. Noncompliance with documentation and coding instructions 7. Shortcomings of the testing process 8. User interface and procedure errors 9. Documentation errors

  15. Example: Failure, Fault & Error Consider a medical doctor making a diagnosis for a patient. • The patient enters the doctor's office with a list of failures (that is, symptoms). • The doctor then must discover the fault, or root cause, of the symptoms. To aid in the diagnosis, a doctor may order tests that look for anomalous internal conditions. • In our terminology, these anomalous internal conditions correspond to errors.

  16. Cause-and-Effect Relationship • Faults can result in errors. • Errors can lead to system failures. • Errors are the effect of faults. • Failures are the effect of errors. • A bug in a program is a fault. • Possible incorrect values caused by this bug are an error. • A possible crash of the operating system is a failure.
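The fault -> error -> failure chain can be traced in a small Java sketch; the method sum and its off-by-one bound are hypothetical, chosen so that each stage of the chain is visible:

```java
public class FaultChain {
    // Hypothetical example of the chain: the fault is the <= bound,
    // the error is the out-of-range index entering the program state,
    // and the failure is the visible exception.
    static int sum(int[] a) {
        int s = 0;
        for (int i = 0; i <= a.length; i++) // fault: should be i < a.length
            s += a[i]; // error state when i == a.length; reading a[i] then fails
        return s;
    }

    public static void main(String[] args) {
        try {
            sum(new int[]{1, 2, 3});
        } catch (ArrayIndexOutOfBoundsException e) {
            // The failure: externally visible incorrect behavior.
            System.out.println("failure observed: " + e);
        }
    }
}
```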

  17. Cause & Effect Relationship

  18. Origins of Faults • Specification mistakes – incorrect algorithms, incorrectly specified requirements (timing, power, environmental) • Implementation mistakes – poor design, software coding mistakes • Component defects – manufacturing imperfections, random device defects, component wear-out • External factors – operator mistakes, radiation, lightning

  19. Program Example

    class numZero {
        public static int numZero(int[] x) {
            // if x == null throw NullPointerException
            // else return the number of occurrences of 0 in x
            int count = 0;
            for (int i = 1; i < x.length; i++)
                if (x[i] == 0)
                    count++;
            return count;
        }
    }

  20. The Fault in the Example • The fault in this program is that it starts looking for zeroes at index 1 instead of index 0, as is necessary for arrays. • First input: numZero([2, 7, 0]) correctly evaluates to 1. • Second input: numZero([0, 7, 2]) incorrectly evaluates to 0. • In both of these cases the fault is executed. • Both of these cases result in an error. • Only the second case results in failure.
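The two cases can be checked directly. This sketch reproduces the faulty method from the program example and runs both inputs; the first happens to return the right answer, while the second exposes the failure:

```java
public class NumZeroDemo {
    // The faulty method from the slides, reproduced as-is:
    // the loop starts at index 1 instead of 0 (the fault is intentional).
    static int numZero(int[] x) {
        int count = 0;
        for (int i = 1; i < x.length; i++)
            if (x[i] == 0)
                count++;
        return count;
    }

    public static void main(String[] args) {
        System.out.println(numZero(new int[]{2, 7, 0})); // 1 - correct by luck
        System.out.println(numZero(new int[]{0, 7, 2})); // 0 - failure: should be 1
    }
}
```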

  21. Error States in the Example To understand the error states, we need to identify the state of the program. • The state for numZero consists of values for the variables x, count, i, and the program counter (PC). • The state at the if statement on the first iteration of the loop is (x = [2, 7, 0], count = 0, i = 1, PC = if). • This state is in error, because the value of i should be 0 on the first iteration.

  22. Error States in the Example (for the first input) However: • The value of count is correct. • The error state does not affect the output. • The software does not fail. Finally: • A state is in error if it is not the expected state, even if all of the values in the state are acceptable.

  23. Error States in the Example (for the second input) For the second input: • The state for numZero consists of values for the variables x, count, i, and the program counter (PC). • The state is (x = [0, 7, 2], count = 0, i = 1, PC = if). • The error affects the variable count. • The error is present in the return value of the method. • The failure results.
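To watch the error state propagate into the return value, the faulty loop can be instrumented (hypothetically) to print the state (x, count, i) at each visit to the if statement:

```java
import java.util.Arrays;

public class NumZeroTrace {
    // Same faulty loop, with a print statement added to expose the
    // program state each time the if statement is reached.
    static int numZero(int[] x) {
        int count = 0;
        for (int i = 1; i < x.length; i++) {  // fault: should start at 0
            System.out.println("state: x=" + Arrays.toString(x)
                    + " count=" + count + " i=" + i);
            if (x[i] == 0)
                count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println("returns " + numZero(new int[]{0, 7, 2}));
    }
}
```

The trace shows the loop never visits index 0, so the zero at x[0] is never counted and the erroneous count = 0 reaches the caller.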

  24. Distinguishing Testing from Debugging • The definitions of fault and failure allow us to distinguish testing from debugging. • The big difference is that debugging is conducted by a programmer, and the programmer fixes the errors during the debugging phase. • Testers never fix the errors; rather, they find them and return them to the programmer.

  25. Testing versus Debugging • Testing is an activity carried out by a team of testers in order to find defects in the software. • Test engineers run their tests on the piece of software, and if they encounter any defect (i.e., actual results don't match expected results), they report it to the development team. • Testers also have to report at what point the defect occurred and what happened due to the occurrence of that defect. • All this information will be used by the development team to debug the defect.
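A minimal sketch of that reporting step: the tester compares actual output to expected output and records a defect with the failing input. The unit under test, add, and its bug are hypothetical:

```java
public class TestReport {
    // Hypothetical unit under test, containing a deliberate bug
    // (subtracts instead of adding).
    static int add(int a, int b) {
        return a - b;
    }

    public static void main(String[] args) {
        int expected = 5;
        int actual = add(2, 3);
        // The tester's check: mismatch -> report the defect with the
        // input, the expected result, and the actual result.
        if (actual != expected)
            System.out.println("DEFECT: add(2, 3) expected " + expected
                    + " but got " + actual);
        else
            System.out.println("PASS");
    }
}
```

The report carries exactly the information the developer needs to start debugging: where the defect occurred and what happened.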

  26. Testing versus Debugging • Debugging is the activity carried out by the developer. • After getting the test report from the testing team about the defect(s), the developer tries to find the cause of each defect. • He has to go through the lines of code and find which part of the code is causing the defect. • After finding the bug, he modifies that portion of code and then rechecks whether the defect has finally been removed. • After fixing the bug, developers send the software back to the testers.

  27. What is Software Quality? According to the IEEE, software quality is: • The degree to which a system, component, or process meets specified requirements. • The degree to which a system, component, or process meets customer or user needs or expectations.

  28. Importance of Software Quality • Software is a major component of computer systems (about 80% of the cost) – used for – communication (e.g., phone systems, email systems) – health monitoring – transportation (e.g., automobiles, aeronautics) – economic exchanges (e.g., e-commerce) – entertainment – etc. • Software defects are extremely costly in terms of – money – reputation – loss of life

  29. Software Quality Factors

  30. Software Quality Factors • Correctness – accuracy and completeness of required output; up-to-dateness and availability of the information • Reliability – maximum failure rate • Efficiency – resources needed to perform software functions • Integrity – software system security, access rights • Usability – ability to learn and perform required tasks

  31. Software Quality Factors • Maintainability – effort to identify and fix software failures (modularity, documentation, etc.) • Flexibility – degree of adaptability (to new customers, tasks, etc.) • Testability – support for testing (e.g., log files, automatic diagnostics, etc.)

  32. Software Quality Factors • Portability – adaptation to other environments (hardware, software) • Reusability – use of software components for other projects • Interoperability – ability to interface with other components/systems

  33. What is Software Quality Assurance? According to the IEEE, software quality assurance is: • A planned and systematic pattern of all actions necessary to provide adequate confidence that an item or product conforms to established technical requirements. • A set of activities designed to evaluate the process by which the products are developed or manufactured. Contrast with: quality control.

  34. Software Quality Assurance • Verification – are we building the product right? – performed at the end of a phase to ensure that the requirements established during the previous phase have been met • Validation – are we building the right product? – performed at the end of the development process to ensure compliance with product requirements

  35. Three General Principles of QualityAssurance • Know what you are doing • Know what you should be doing • Know how to measure the difference

  36. Verification & Validation • Verification: The process of determining whether the products of a given phase of the software development process fulfill the requirements established during the previous phase. • Validation: The process of evaluating software at the end of software development to ensure compliance with intended usage.

  37. The Difference Between Verification & Validation • Verification is a preventive mechanism to detect possible failures before testing begins. • It involves reviews, meetings, and evaluation of documents, plans, code, inspections, specifications, etc. • Validation occurs after verification; it is the actual testing to find defects against the functionality or the specifications.

  38. The Difference Between Verification & Validation • Verification is usually a more technical activity that uses knowledge about the individual software artifacts, requirements, and specifications. • Validation usually depends on domain knowledge, that is, knowledge of the application for which the software is written. • For example, validation of software for an airplane requires knowledge from aerospace engineers and pilots.