
ELN5622 Embedded Systems Class 10 Spring, 2003


Presentation Transcript


  1. ELN5622 Embedded Systems, Class 10, Spring 2003 Aaron Itskovich, itskova@algonquincollege.com

  2. Outline • Testing, Verification & Reliability • Test Plan Creation & Execution • Reliability & Correctness • Design for Test & Debugging • Testing (Yield, Field Return, Golden System, Coverage), JTAG

  3. Why test? • To reduce risk to both user and company • To reduce development and maintenance cost • To improve performance • To find bugs in software and hardware

  4. To Find the Bugs • The Halting Theorem proves it is impossible to prove that an arbitrary program is correct • Given the right test, you can prove that a program is incorrect • Testing is not about proving the “correctness” of a program but about finding bugs • The only way to “know” how many bugs are left is to test with a carefully designed test plan • A known bug is already “half a bug”

  5. To reduce the risk and costs • Minimize risk to yourself, your company and your customers • The earlier you detect a problem, the cheaper the fix

  6. Costs of bugs • In 1990 HP sampled the cost of errors in software development during the year. The answer, $400 million, shocked HP into a completely new effort to eliminate mistakes in writing software. That $400 million of waste, half of it spent in the labs on rework and half in the field fixing the mistakes that escaped from the lab, amounted to one third of the company’s total R&D budget

  7. How to make bug fixing cheaper? • If we can’t ensure the correctness of the released system, how do we make bug fixes cheaper? • Design the system to be field upgradeable (a boot-loader sketch follows this slide) • Re-configurable hardware (FPGA) • Separate the application software from the boot code • Use Flash or EEPROM as application storage
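A common way to realize the field-upgrade points above is to keep a small boot loader in protected flash and place the application image at a fixed address that the upgrade tool can rewrite. Below is a minimal sketch, assuming a Cortex-M-style part where the application’s vector table (initial stack pointer, then reset handler) sits at the start of its flash region; APP_BASE and the validation step are illustrative assumptions, not from the slides.

  /* Minimal boot-loader sketch: jump to a separately flashed
   * application image.  Assumes a Cortex-M-style vector table
   * (initial SP, then reset handler) at APP_BASE, a hypothetical
   * address chosen for illustration. */
  #include <stdint.h>

  #define APP_BASE 0x08004000UL          /* assumed application flash base */

  typedef void (*app_entry_t)(void);

  void boot_jump_to_app(void)
  {
      const uint32_t *vectors = (const uint32_t *)APP_BASE;
      uint32_t    app_sp  = vectors[0];               /* initial stack pointer */
      app_entry_t app_rst = (app_entry_t)vectors[1];  /* reset handler */

      /* A real loader would first validate the image (CRC/checksum)
       * so that a failed field upgrade falls back to the boot code. */
      __asm volatile ("msr msp, %0" :: "r" (app_sp));
      app_rst();                                      /* never returns */
  }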

  8. When to test? • As early as possible • Statistically, about 70% of the bugs found during the integration phase of a project are generated by code that had never been exercised before

  9. What tests? • Every time the program is modified, it should be retested to ensure that the changes didn’t break some unrelated behavior – REGRESSION TESTING • Individual developers test at the module level by writing stub code to substitute for the rest of the system hardware and software – UNIT TESTING (a stub sketch follows this slide)
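As a concrete illustration of unit testing with stubs, the sketch below exercises a module on a host PC by substituting a canned value for the hardware read it normally performs; read_adc(), temperature_ok() and the threshold are hypothetical names invented for this example.

  /* Unit-test sketch: a stub replaces the real ADC driver so the
   * module under test can run on a host PC without hardware. */
  #include <assert.h>
  #include <stdint.h>

  /* --- stub substituting for the ADC driver --- */
  static uint16_t fake_adc_value;
  uint16_t read_adc(void) { return fake_adc_value; }

  /* --- module under test (normally in its own file) --- */
  int temperature_ok(void)
  {
      return read_adc() < 800;   /* raw over-temperature threshold */
  }

  int main(void)
  {
      fake_adc_value = 100;  assert(temperature_ok());    /* nominal   */
      fake_adc_value = 900;  assert(!temperature_ok());   /* over-temp */
      return 0;
  }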

  10. Test case design • Functional testing (black box) • Can and should be written in parallel with the requirements document • Coverage testing (white box) • A coverage test implies that your code is stable • Both kinds of testing are necessary to test your embedded design rigorously

  11. A bit of history • The first known computer bug came about in 1947, when a primitive computer used by the Navy to calculate the trajectories of artillery shells shut down because a moth got stuck in one of its computing elements, a mechanical relay. Hence the name “bug” for a computer error.

  12. When to stop testing • When the boss says so • When a new iteration of the test cycle finds fewer than X new bugs • When a certain coverage threshold has been met without uncovering any new bugs • If your system is mission critical, look into the DO-178B specification

  13. Choosing Test Cases • Functional tests • Stress tests: tests that intentionally overload input channels, memory buffers… • Boundary value tests: inputs that represent “boundaries” within a particular range, and input values that should cause the output to transition across a similar boundary in the output range (see the sketch after this slide) • Exception tests: tests that should trigger a failure mode or exception • Error guessing: tests based on prior experience with testing similar products • Random tests: usually the least productive form of testing • Performance tests: tests that the performance expectations from the requirements are met
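To make the boundary-value idea concrete, here is a minimal sketch for a hypothetical clamp_to_range() helper: the cases sit exactly at, just inside, and just outside each boundary, because off-by-one mistakes (< versus <=) live precisely there. The function and its limits are invented for illustration.

  /* Boundary-value test sketch for a hypothetical range clamp. */
  #include <assert.h>

  #define MIN_VAL 0
  #define MAX_VAL 100

  int clamp_to_range(int v)
  {
      if (v < MIN_VAL) return MIN_VAL;
      if (v > MAX_VAL) return MAX_VAL;
      return v;
  }

  int main(void)
  {
      /* at, just outside, and just inside each boundary */
      assert(clamp_to_range(MIN_VAL)     == MIN_VAL);
      assert(clamp_to_range(MIN_VAL - 1) == MIN_VAL);
      assert(clamp_to_range(MIN_VAL + 1) == MIN_VAL + 1);
      assert(clamp_to_range(MAX_VAL)     == MAX_VAL);
      assert(clamp_to_range(MAX_VAL + 1) == MAX_VAL);
      assert(clamp_to_range(MAX_VAL - 1) == MAX_VAL - 1);
      return 0;
  }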

  14. Choosing Test Cases • Coverage tests • Statement coverage: test cases selected because they execute every statement in the program at least once • Decision or branch coverage: test cases chosen because they cause every branch (both the true and the false path) to be executed at least once • Condition coverage: test cases chosen to force each condition (term) in a decision to take on all possible logic values (see the sketch after this slide)
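The sketch below contrasts decision (branch) coverage with condition coverage on one compound decision; alarm_needed() is a hypothetical function invented for this example. Two tests already execute both branches, but only the extra mixed-term cases force each condition through both values, which is what catches an && mistyped as ||.

  /* Decision coverage vs. condition coverage on one decision. */
  #include <assert.h>

  int alarm_needed(int temp_high, int fan_failed)
  {
      if (temp_high && fan_failed)    /* one decision, two conditions */
          return 1;
      return 0;
  }

  int main(void)
  {
      /* Decision (branch) coverage: both branch outcomes taken. */
      assert(alarm_needed(1, 1) == 1);
      assert(alarm_needed(0, 0) == 0);

      /* Condition coverage: each term also takes both values,
       * distinguishing && from a mistyped ||. */
      assert(alarm_needed(1, 0) == 0);
      assert(alarm_needed(0, 1) == 0);
      return 0;
  }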

  15. Practical alternatives • Gray box testing: what is it? • White box tests are expensive to maintain; they need to be re-engineered every time the code is changed • Gray box tests exploit knowledge of the implementation without being intimately tied to the coding details

  16. Some distinguishers of embedded systems • An embedded system must run reliably without crashing for long periods of time • Embedded software must often compensate for problems with the embedded hardware • Real-world events are usually asynchronous and non-deterministic, making simulation tests difficult and unreliable • Did you read the software “license agreement”?

  17. Measuring test coverage • Code instrumentation methods, a.k.a. software logging • Printf: intrusive, slows the system down • Low-intrusion printf (see the sketch after this slide) • Use of a logic analyzer to measure coverage • Decision coverage (DC): measures the results of decision points in the code • Modified decision coverage (MDC): one step farther than DC, evaluates the terms that make up each decision point • Hardware instrumentation methods (logic analyzer, trace, …)
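A minimal sketch of the low-intrusion logging idea combined with the logic-analyzer approach: instead of formatting strings at run time, the code performs a single store of an event code to a fixed location that the analyzer probes, so the timing disturbance is one bus write. TRACE_PORT, its address and the event codes are illustrative assumptions.

  /* Low-intrusion instrumentation: one store per logged event. */
  #include <stdint.h>

  #define TRACE_PORT (*(volatile uint8_t *)0x40001000UL)  /* assumed address */

  enum { EV_ISR_ENTER = 0x01, EV_ISR_EXIT = 0x02, EV_BUF_FULL = 0x03 };

  static inline void trace(uint8_t event)
  {
      TRACE_PORT = event;   /* decoded offline from the analyzer capture */
  }

  void uart_isr(void)
  {
      trace(EV_ISR_ENTER);
      /* ... handle the interrupt ... */
      trace(EV_ISR_EXIT);
  }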

  18. How to test performance
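One common way to check that the performance expectations from the requirements are met is to bracket the code under test with reads of a free-running cycle counter and convert the elapsed count to time. In the sketch below, CYCLE_CNT and CPU_HZ are hypothetical, part-specific values assumed for illustration.

  /* Performance-test sketch: time a function with a hardware
   * cycle counter; unsigned subtraction is wrap-safe. */
  #include <stdint.h>

  #define CYCLE_CNT (*(volatile uint32_t *)0xE0001004UL)  /* assumed counter */
  #define CPU_HZ    48000000UL                            /* assumed clock */

  static void function_under_test(void)
  {
      /* ... code whose execution time is being checked ... */
  }

  uint32_t measure_us(void)
  {
      uint32_t start  = CYCLE_CNT;
      function_under_test();
      uint32_t cycles = CYCLE_CNT - start;
      return cycles / (CPU_HZ / 1000000UL);   /* microseconds elapsed */
  }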

  19. Manufacturing tests • Built-in self-test (BIST) (a sketch follows this slide) • Test bed • Golden system concept • JTAG boundary scan • Yield • Field return • Fault correlation and root-cause analysis
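As one example of built-in self-test, the sketch below checksums the firmware’s own ROM image at power-up and reports failure on a mismatch with a reference stored at build time; the addresses, the size and the simple additive checksum are illustrative assumptions, not from the slides.

  /* Power-up ROM BIST: recompute and compare an image checksum. */
  #include <stdint.h>

  #define ROM_START ((const uint8_t *)0x08000000UL)  /* assumed image base */
  #define ROM_SIZE  (64u * 1024u)                    /* assumed image size */
  #define ROM_REFERENCE_SUM 0x00ABCDEFUL             /* patched in at build time */

  int rom_bist_pass(void)
  {
      uint32_t sum = 0;
      for (uint32_t i = 0; i < ROM_SIZE; i++)
          sum += ROM_START[i];
      return sum == ROM_REFERENCE_SUM;   /* nonzero means the image is intact */
  }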
