
Stockholm, May 19, 2003 Testing Strategies for NoC


Presentation Transcript


  1. Stockholm, May 19, 2003. Testing Strategies for NoC. Raimund Ubar, Tallinn Technical University, Estonia. raiub@pld.ttu.ee, www.ttu.ee/~raiub/

  2. OUTLINE • Introduction: how much to test? • Defect modeling • Hierarchical approaches to test generation • Built-in self-test • Stimuli generation in BIST • Response compaction and signature analyzers • BIST architectures • Hybrid BIST • P1500 Standard for SoC and NoC testing • Testing the communication infrastructure • Conclusions

  3. Introduction • The reliability of electronic systems is no longer a topic limited to critical applications like military, aerospace and nuclear industries, where failures may have catastrophic consequences • Electronic systems are becoming ubiquitous, and their reliability issues are present in all types of consumer applications • Adequate testing of electronic products is a must

  4. Introduction • The complexity of systems, new failure models and modern technologies cause the necessity for developing more efficient test methods • In the mid-1990s the core-based SoC concept evolved, bringing new strategies and standards dedicated to SoC test • Today the design methodology is moving towards the NoC approach, and the presence of the regular communication structure requires new dedicated methods to test it

  5. Introduction - Dependability • "There is no security on this earth; there is only opportunity." Douglas MacArthur (General) • (Diagram: dependability covers reliability, security and safety; design for testability supports test, fault diagnosis, BIST and fault tolerance)

  6. Introduction – Test Tools • (Diagram: the test tools comprise test generation, fault simulation and fault diagnosis; from the system model they produce the test and the fault table, and from the test experiment on the system they produce the test result: go/no go, or a located defect)

  7. Introduction – Test Tasks • Fault diagnosis and test generation as direct and reverse mathematical tasks: dy = F(x1, ..., xn) ⊕ F(x1 ⊕ dx1, ..., xn ⊕ dxn) = F(X, dX) • Direct task – test generation: dX and dy = 1 given, X = ? • Reverse task – fault diagnosis: X and dy given, dX = ?; fault simulation: X and dy = 1 given, dxk = ? • Fault simulation is a special case of fault diagnosis

  8. Introduction – Fault Diagnosis • Fault localization by fault tables:

         F1 F2 F3 F4 F5 F6 F7
    T1    0  1  1  0  0  0  0
    T2    1  0  0  1  0  0  0
    T3    1  1  0  1  0  1  0
    T4    0  1  0  0  1  0  0
    T5    0  0  1  0  1  1  0
    T6    0  0  1  0  0  1  1

  • If the observed test result matches exactly one column, the fault is located (e.g. fault F5) • Faults F1 and F4 are not distinguishable (their columns are identical) • If there is no match, diagnosis is not possible
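To make the lookup procedure of slide 8 concrete, here is a minimal Python sketch of fault localization with the fault table above; the column values and the example response vectors come from the reconstructed table, and all names are purely illustrative.

```python
# Minimal sketch of fault localization by a fault table (illustrative names).
# For each fault, its column over tests T1..T6: 1 = the test detects the fault,
# i.e. the test would fail if that fault were present.
FAULT_TABLE = {
    "F1": (0, 1, 1, 0, 0, 0),
    "F2": (1, 0, 1, 1, 0, 0),
    "F3": (1, 0, 0, 0, 1, 1),
    "F4": (0, 1, 1, 0, 0, 0),   # identical to F1: F1 and F4 are not distinguishable
    "F5": (0, 0, 0, 1, 1, 0),
    "F6": (0, 0, 1, 0, 1, 1),
    "F7": (0, 0, 0, 0, 0, 1),
}

def diagnose(observed):
    """Return the faults whose column matches the observed fail vector."""
    matches = [f for f, column in FAULT_TABLE.items() if column == tuple(observed)]
    return matches if matches else "no match, diagnosis not possible"

print(diagnose([0, 0, 0, 1, 1, 0]))   # -> ['F5']        (fault located)
print(diagnose([0, 1, 1, 0, 0, 0]))   # -> ['F1', 'F4']  (not distinguishable)
```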

  9. Test as the Quality Problem • (Diagram: the quality policy connects yield, defect level, testing and design for testability) • Y (yield) - probability of producing a good product • P - probability of a defect • n - number of defects
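The formulas behind this slide did not survive the transcript. A commonly used reconstruction (an assumption, not taken from the slide itself) relates the yield Y to the defect probability P and the number of possible defects n, and the defect level DL to the yield and the fault coverage T of the applied test:

```latex
% Assumed reconstruction of the yield / defect-level relations (not in the transcript):
Y = (1 - P)^{n}, \qquad DL = 1 - Y^{\,1 - T}
```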

  10. How Much to Test? • How to succeed? Try hard! How to fail? Try too hard! (From American wisdom) • (Diagram: as the quality target grows from 0% to 100%, the cost of testing rises and the cost of the faults falls; their sum, the cost of quality, has an optimum that defines the best test/quality trade-off) • Conclusion: "The problem of testing can only be contained, not solved" (T. Williams)

  11. How Much to Test? • Time can be your best friend or your worst enemy (Ray Charles) • Paradox: 2^64 input patterns (!) for a 32-bit accumulator will not be enough; a short changes the circuit into a sequential one, and because of that you will need 2^65 input patterns • Paradox: mathematicians calculated that exhaustive testing of the Intel 8080 would take 37 (!) years; the manufacturer did it in 10 seconds • The majority of functions will never be activated during the lifetime of the system • (Diagram: a bridging fault introduces a state variable q, turning the combinational function Y = F(x1, x2, x3) into the sequential function Y = F(x1, x2, x3, q))

  12. How to Generate a Good Test? • The best place to start is with a good title. Then build a song around it. (Wisdom of country music) • Paradox: to generate a test for a block in a system, the computer needed 2 days and 2 nights; an engineer did it by hand in 15 minutes. So, why computers? • (Diagram: the block is a 16-bit counter buried in a sea of gates; testing it requires a sequence of 2^16 patterns)

  13. Complexity vs. Quality • Problems: • Traditional low-level test generation and fault simulation methods and tools for digital systems have lost their importance because of complexity reasons • The traditional Stuck-at Fault (SAF) model does not guarantee the quality for deep-submicron technologies • How to improve test quality at the increasing complexity of today's systems? • Two main trends: • Defect-oriented test and • High-level modelling • Both trends are caused by the increasing complexity of systems based on deep-submicron technologies

  14. Towards Solutions • The complexity problems in testing digital systems are handled by raising the abstraction level from gate level to register-transfer level (RTL), instruction set architecture (ISA) or behavioral levels • But this moves us even further away from the real life of defects (!) • To handle defects in circuits implemented in deep-submicron technologies, new defect-oriented fault models and defect-oriented test methods should be used • But this increases the complexity even more (!) • A promising compromise and solution is to combine the hierarchical approach with defect orientation

  15. OUTLINE • Introduction: how much to test? • Defect modeling • Hierarchical approaches to test generation • Built-in self-test • Stimuli generation in BIST • Response compaction and signature analyzers • BIST architectures • Hybrid BIST • P1500 Standard for SoC and NoC testing • Testing the communication infrastructure

  16. Fault and Defect Modeling • Defects, errors and faults • An instance of an incorrect operation of the system being tested is referred to as an error • The causes of the observed errors may be design errors or physical faults - defects • Physical faults do not allow a direct mathematical treatment of testing and diagnosis • The solution is to deal with fault models • (Diagram: a defect in a component of the system causes a fault, which manifests itself as an error)

  17. Transistor Level Defects • Typical transistor-level defects: stuck-at-0, stuck-at-1, broken (change of the function), bridging, short (change of the function), stuck-open (new state), stuck-on (change of the function), stuck-off (change of the function) • The SAF model is not able to cover all the transistor-level defects • How to model transistor defects?

  18. Mapping Transistor Defects to Logic Level • A transistor fault causes a change in a logic function that is not representable by the SAF model • (Diagram: a gate-level circuit y = f(x1, ..., x5) with a short defect between two internal lines; the fault-free function y and the faulty function are shown) • Defect variable d: d = 0 - the defect is missing, d = 1 - the defect is present • The generic function with defect combines the fault-free and the faulty function • The physical defect is mapped onto the logic level by solving an equation over d

  19. Mapping Transistor Faults to Logic Level • (Diagram: the same circuit with the short; the fault-free function, the faulty function and the generic function with defect are compared) • The test is calculated by the Boolean derivative of the generic function with respect to the defect variable d
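The equations on slides 18 and 19 were lost in the transcript. A hedged reconstruction in the usual functional fault model notation (y is the fault-free function, y^d the faulty function, d the defect variable) would be:

```latex
% Assumed reconstruction of the missing formulas (slides 18-19):
y^{*} = \bar{d}\,y \;\vee\; d\,y^{d}
  \qquad \text{(generic function with defect)}
\\[4pt]
\frac{\partial y^{*}}{\partial d} = y \oplus y^{d} = 1
  \qquad \text{(condition for detecting the defect } d \text{ at } y\text{)}
```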

  20. Why Boolean Derivatives? • (Slide formulas lost in the transcript: the distinguishing function of the fault-free and the faulty circuit is given, and the test condition is expressed through a Boolean derivative) • Using the properties of Boolean derivatives, the procedure of solving the equation becomes easier
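As an illustration of the Boolean derivative used here, a minimal Python sketch (the function y and all names are illustrative, not taken from the slide) computes dy/dx_i by XOR-ing the two cofactors of y with respect to x_i:

```python
# Minimal sketch of a Boolean derivative: dy/dx_i = y(x_i = 0) XOR y(x_i = 1).
# The derivative is 1 exactly for the input combinations where a change of x_i
# changes the output y, i.e. where a fault on x_i is observable at y.
def boolean_derivative(f, i, n):
    """Return the input vectors of length n for which df/dx_i = 1."""
    result = []
    for bits in range(2 ** n):
        x = [(bits >> k) & 1 for k in range(n)]
        x0, x1 = list(x), list(x)
        x0[i], x1[i] = 0, 1
        if f(x0) ^ f(x1):
            result.append(tuple(x))
    return result

# Example with y = x1 & x2 | x3 (0-based indices): the derivative with respect
# to x1 equals 1 exactly for the patterns with x2 = 1 and x3 = 0.
y = lambda x: (x[0] & x[1]) | x[2]
print(boolean_derivative(y, 0, 3))
```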

  21. Functional Fault vs. Stuck-at Fault • A full 100% stuck-at fault test is not able to detect the short: none of its patterns satisfies the functional fault condition • The full SAF test does not cover any of the patterns able to detect the given transistor defect

  22. Defect coverage for 100% Stuck-at Test Results: • the difference between stuck-at fault and physical defect coverages reduces when the complexity of the circuit increases (C2 is more complex than C1) • the difference between stuck-at fault and physical defect coverages is higher when the defect probabilities are taken into account compared to the traditional method where all faults are assumed to have the same probability

  23. Generalization: Functional Fault Model • A component F(x1, x2, ..., xn) with output y and a defect d (d = 1 if the defect is present) • Constraints calculation: the logical constraint Wd under which the defect changes the behaviour at y is derived by comparing the fault-free and the faulty function • Fault model: (dy, Wd), or (dy, {Wkd}) - a fault dy at the output y together with its logical constraints

  24. Functional Fault Model for Stuck-ON • NOR gate with a stuck-on transistor (input x1) • For the input pattern "10" there is a conducting path from VDD to VSS through the resistances RP and RN • Condition for potentially detecting the fault: the output level for "10" depends on the ratio of RN and RP

  25. Functional Fault Model for Stuck-Open • NOR gate with a stuck-open (stuck-off) transistor • For the input pattern "10" there is no conducting path from VDD to VSS, so the output keeps its previous value • A test sequence is needed: "00" followed by "10" (t1: x1 x2 = 00, y = 1; t2: x1 x2 = 10, y = 1 instead of 0)
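A minimal Python sketch of the memory effect behind this two-pattern test; it assumes, purely for illustration, that the nMOS transistor driven by x1 is the open one:

```python
# Minimal sketch of a NOR gate with a stuck-open nMOS transistor on x1:
# when neither the pull-up nor the pull-down network conducts, the output
# floats and keeps its previous value (memory effect).
def nor_stuck_open_x1(x1, x2, prev_y):
    pull_up   = (x1 == 0) and (x2 == 0)   # both pMOS transistors conduct
    pull_down = (x2 == 1)                 # nMOS of x1 is open (the defect)
    if pull_up:
        return 1
    if pull_down:
        return 0
    return prev_y                         # high impedance: previous value retained

y = nor_stuck_open_x1(0, 0, prev_y=0)     # pattern "00" initializes y to 1
y = nor_stuck_open_x1(1, 0, prev_y=y)     # pattern "10": good NOR -> 0, faulty -> 1
print(y)                                  # -> 1, the defect is detected
```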

  26. Functional Fault Model for Shorts • Example: bridging fault between leads xk and xl • Wired-AND model: xk* = f(xk, xl, d), where the value of the bridged lead xk becomes xk AND xl when the defect d is present • The condition means that in order to detect the short between leads xk and xl on the lead xk, we have to assign to xk the value 1 and to xl the value 0
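A minimal Python sketch of the wired-AND model described above (the helper name is illustrative): the bridged lead carries xk AND xl when the defect is present, so the fault-free and faulty values differ exactly for xk = 1, xl = 0:

```python
# Minimal sketch of the wired-AND bridging fault model:
# xk* = f(xk, xl, d) = xk if d = 0, and xk AND xl if d = 1.
def bridged_xk(xk, xl, d):
    return (xk & xl) if d else xk

# The detection condition on lead xk: fault-free and faulty values differ
# only for xk = 1, xl = 0.
for xk in (0, 1):
    for xl in (0, 1):
        differs = bridged_xk(xk, xl, 0) ^ bridged_xk(xk, xl, 1)
        print(f"xk={xk} xl={xl} detectable={bool(differs)}")
```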

  27. Functional Fault Model for Sequential Shorts • Example: a bridging fault causing a feedback loop; a short between leads xk and xl changes the combinational circuit into a sequential one • (Diagram: the equivalent faulty circuit of y = f(x1, x2, x3) contains a feedback path through an additional AND gate) • Sequential constraints: a two-pattern test sequence is required to detect the fault

  28. First Step to Quality • How to improve the test quality at the increasing complexity of systems? • First step to the solution: the functional fault model was introduced as a means for mapping physical defects from the transistor or layout level to the logic level • (Diagram: a low-level defect, e.g. a bridging fault, with its constraints WSk inside a component k is mapped to the high-level constraints WFk of the component in its system environment)

  29. Faults and Test Generation Hierarchy • (Diagram: the hierarchy of a system as a network of modules, a module as a network of gates, and a gate as a circuit; at each level the constraints Wk link the functional, higher-level approach with the structural, lower-level approach) • Interpretation of WFk: as a test on the lower level, and as a functional fault on the higher level

  30. OUTLINE • Introduction: how much to test? • Defect modeling • Hierarchical approaches to test generation • Built-in self-test • Stimuli generation in BIST • Response compaction and signature analyzers • BIST architectures • Hybrid BIST • P1500 Standard for SoC and NoC testing • Testing the communication infrastructure

  31. Hierarchical Test Generation • In high-level symbolic test generation the test properties of components are often described in the form of fault-propagation modes • These modes usually contain: • a list of control signals such that the data on the input lines is reproduced without logic transformation at the output lines - an I-path, or • a list of control signals that provide a one-to-one mapping between data inputs and data outputs - an F-path • The I-paths and F-paths constitute connections that can be used to propagate test vectors from input ports (or any controllable points) to the inputs of the Module Under Test (MUT) and to propagate the test response to an output port (or any observable points) • In the hierarchical approach, top-down and bottom-up strategies can be distinguished

  32. Hierarchical Test Generation Approaches • Bottom-up approach: • Pre-calculated tests for components generated at the low level are assembled at a higher level • It fits well the uniform hierarchical approach to test, which covers both component testing and communication network testing • However, the bottom-up algorithms ignore the incompleteness problem: the constraints imposed by other modules and/or the network structure may prevent the local test solutions from being assembled into a global test • The approach would work well only if the corresponding testability demands were fulfilled • (Diagram: a module in a system with the local test A = a.x, B = f'(D), C = c.x, where a, c, D are fixed and x is free)

  33. Hierarchical Test Generation Approaches • Top-down approach: solve the test generation problem by deriving environmental constraints for low-level solutions • This method is more flexible, since it does not narrow the search for the global test solution to pregenerated patterns for the system modules • The method is of little use when the system is still under development in a top-down fashion, or when "canned" local tests for modules or cores have to be applied • (Diagram: the same module in a system with the symbolic global test A = a'.x, D' = d'.x, C = c'.x, where a', c', D' are fixed and x is free)

  34. Basics of Theory for Test and Diagnostics • Two basic tasks: • 1. Which test patterns are needed to detect a fault (or all faults) • 2. Which faults are detected by a given test (or by all tests) • Boolean differential algebra: only for the logic level • Decision diagrams: for the logic and higher levels • (Diagram: a system containing an ALU, a multiplier and gate-level logic, illustrating the different abstraction levels)

  35. Hierarchical Diagnostic Modeling • Two trends: • high-level modeling (high-level DDs) - to cope with complexity • low-level modeling (Boolean differential algebra, BDDs) - to cope with physical defects, to reach higher accuracy

  36. Binary Decision Diagrams • (Diagram: a functional BDD of y over the variables x1, ..., x7 with terminal nodes 0 and 1) • Simulation: a path is traced through the diagram according to the values of the variables • Boolean derivatives and test generation are carried out on the same diagram
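A minimal Python sketch of BDD-based simulation as single-path traversal; the small diagram below encodes y = x1 AND (x2 OR x3) and is only illustrative, not the BDD drawn on the slide:

```python
# Minimal sketch of BDD simulation: each non-terminal node is (variable, low, high),
# terminals are the constants 0 and 1. Simulating a pattern is a single path
# traversal from the root to a terminal node.
BDD = {
    "n1": ("x1", 0, "n2"),    # x1 = 0 -> constant 0, x1 = 1 -> node n2
    "n2": ("x2", "n3", 1),
    "n3": ("x3", 0, 1),
}

def simulate(bdd, root, values):
    node = root
    while node not in (0, 1):
        var, low, high = bdd[node]
        node = high if values[var] else low
    return node

print(simulate(BDD, "n1", {"x1": 1, "x2": 0, "x3": 1}))   # -> 1
```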

  37. Low-Level Test Generation on SSBDDs • Test generation for a bridging fault between leads 73 and 6, modeled as a functional fault (dx7, Wd) on the macro (SSBDD) level • 1. Solve the constraint Wd: x6 = 0, x7 = 1 • 2. Activate a path in the SSBDD: path to node 71: x1 = 1, x2 = 1; path from node 71: x5 = 0 • The resulting test pattern combines the path activation values with the constraint Wd • (Diagram: the gate-level macro, its SSBDD and the location of the defect in the network)

  38. Test Generation on High-Level DDs • High-level test generation with DDs: conformity test • Multiple paths are activated in a single DD • The control function y3 is tested • Control part of the test program: for D = 0, 1, 2, 3: y1 y2 y3 y4 = 0 0 D 2 • Data part of the test program: a solution where the values of R1 + R2, IN, R1 and R1 * R2 are all different • (Diagram: the data path with registers R1, R2, R3, multiplexers and functional units, and its decision diagram)

  39. Hierarchical Test Generation on DDs • Hierarchical test generation with DDs: scanning test • A single path is activated in a single DD • The data function R1 * R2 is tested • Control part of the test program: y1 y2 y3 y4 = x 0 3 2 • Data part of the test program: all specified pairs of (R1, R2), taken as low-level test data • (Diagram: the same data path and its decision diagram)

  40. Test Generation for RTL Cores • High-level path activation on DDs • Transparency functions on decision diagrams: • Y = C when y3 = 2, R3' = 0: C is to be tested • R1 = B when y1 = 2, R3' = 0: R1 is to be justified • (Diagram: the RTL core with inputs A, B, C, registers R1, R2, R3 and the corresponding decision diagrams)

  41. Test Generation for RTL Cores • High-level test generation example: a symbolic test sequence spread over the time frames t-3, t-2, t-1, t • The sequence combines fault manifestation, fault propagation towards the observable output, and justification of the constraints in the earlier time frames • (Diagram: the decision diagram of the core annotated with the symbolic values assigned in each time frame)

  42. I1: MVI A,D A  IN I2: MOV R,A R  A I3: MOV M,R OUT  R I4: MOV M,A OUT  A I5: MOV R,M R  IN I6: MOV A,M A  IN I7: ADD R A  A + R I8: ORA R A  A  R I9: ANA R A  A  R I10: CMA A,D A  A Test Generation for Processor Cores High-Level DDs for a microprocessor (example): DD-model of the microprocessor: Instruction set: 1,6 A I IN 3 2,3,4,5 I R OUT A 4 7 A + R A 8 2 A  R I A R 9 A  R 5 IN 10  A 1,3,4,6-10 R

  43. Test Generation for Processor Cores • High-level DD-based structure of the microprocessor (example) • (Diagram: the DD model of the microprocessor connecting the variables IN, A, R and OUT through the decision diagrams labeled by the instructions I1-I10)

  44. Test Generation for Processor Cores • Scanning test program for the adder: the instruction sequence T = I5 (load R), I1 (load A), I7, I4, repeated for all needed pairs of (A, R) • Time frames: t-3 load R, t-2 load A, t-1 test (ADD R), t observation (OUT ← A) • (Diagram: the DD model of the microprocessor with the paths activated by the scanning test)
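A minimal Python sketch of the scanning test program above, using the instruction semantics from slide 42; the data pairs and the word width are illustrative assumptions:

```python
# Minimal sketch of the scanning test T = I5, I1, I7, I4 for the adder,
# applied for every needed pair (A, R). Word width and data are assumptions.
def run_adder_scan_test(pairs, width=8):
    mask = (1 << width) - 1
    responses = []
    for a, r in pairs:
        state = {"A": 0, "R": 0, "OUT": 0}
        state["R"] = r                                   # I5: MOV R,M  (R <- IN)
        state["A"] = a                                   # I1: MVI A,D  (A <- IN)
        state["A"] = (state["A"] + state["R"]) & mask    # I7: ADD R    (A <- A + R)
        state["OUT"] = state["A"]                        # I4: MOV M,A  (OUT <- A)
        responses.append(state["OUT"])                   # observation at time t
    return responses

print(run_adder_scan_test([(0x55, 0xAA), (0xFF, 0x01)]))   # -> [255, 0]
```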

  45. Test Generation for Processor Cores • Conformity test program for the decoder: the instruction sequence T = I5, I1, D, I4 for all D ∈ {I1, ..., I10}, at given data A, R, IN • Data generation: the data IN, A, R are generated so that the values of all the instruction functions are different • (Diagram: the DD model of the microprocessor with the node of the tested instruction decoder highlighted)

  46. DECIDER: Hierarchical ATPG • Tool flow: the RTL model (VHDL) is synthesized with logic synthesis scripts by Design Compiler (Synopsys Inc.) into gate-level descriptions; from these, SSBDD models of the functional units (FUs) are synthesized • In parallel, the RTL DD model is synthesized from the RTL model using the FU libraries (VHDL and DDs) • Modules or subcircuits are represented as word-level DD structures • The hierarchical ATPG works on the RTL DD model together with the SSBDD models of the FUs and produces the test patterns

  47. ATPG: Experimental Results • Reference ATPGs: • HITEC - T. M. Niermann, J. H. Patel, EDAC, 1991 • GATEST - E. M. Rudnick et al., DAC, 1994 • TTU tools: • DET/RAND - hierarchical deterministic-random ATPG • GENETIC - gate-level ATPG based on genetic algorithms

  48. OUTLINE • Introduction: how much to test? • Defect modeling • Hierarchical approaches to test generation • Built-in self-test • Stimuli generation in BIST • Response compaction and signature analyzers • BIST architectures • Hybrid BIST • P1500 Standard for SoC and NoC testing • Testing the communication infrastructure

  49. Built-In Self-Test • Motivations for BIST: • Need for a cost-efficient testing • Doubts about the stuck-at fault model • Increasing difficulties with TPG (Test Pattern Generation) • Growing volume of test pattern data • Cost of ATE (Automatic Test Equipment) • Test application time • Gap between tester and UUT (Unit Under Test) speeds • Drawbacks of BIST: • Additional pins and silicon area needed • Decreased reliability due to increased silicon area • Performance impact due to additional circuitry • Additional design time and cost

  50. BIST Techniques • BIST techniques are classified as: • on-line BIST - includes concurrent and nonconcurrent techniques • off-line BIST - includes functional and structural approaches • On-line BIST - testing occurs during normal functional operation • Concurrent on-line BIST - testing occurs simultaneously with the normal operation mode; usually coding techniques or duplication and comparison are used • Nonconcurrent on-line BIST - testing is carried out while a system is in an idle state, often by executing diagnostic software or firmware routines • Off-line BIST - the system is not in its normal working mode; usually on-chip test generators and output response analyzers or microdiagnostic routines are used • Functional off-line BIST is based on a functional description of the Component Under Test (CUT) and uses functional high-level fault models • Structural off-line BIST is based on the structure of the CUT and uses structural fault models (e.g. SAF)
