
EE616

EE616. Computer Aided Analysis of Electronic Circuits. Dr. Janusz Starzyk. Innovations in numerical techniques had a profound impact on CAD: sparse matrix methods; multi-step methods for the solution of differential equations.





Presentation Transcript


  1. EE616 Computer Aided Analysis of Electronic Circuits Dr. Janusz Starzyk

  2. Computer Aided Analysis of Electronic Circuits • Innovations in numerical techniques had a profound impact on CAD: • Sparse matrix methods. • Multi-step methods for the solution of differential equations. • Adjoint techniques for sensitivity analysis. • Sequential quadratic programming in optimization.

  3. Fundamental Concepts • NETWORK ELEMENTS: • One-port elements: Resistor (voltage-controlled or current-controlled), Capacitor, Inductor • Independent voltage source • Independent current source

  4. Fundamental Concepts • Two-port elements: Voltage to voltage transducer (VVT); Voltage to current transducer (VCT); Current to voltage transducer (CVT); Current to current transducer (CCT); Ideal transformer (IT); Ideal gyrator (IG)

  5. Fundamental Concepts • Positive impedance converter (PIC) • Negative impedance converter (NIC) • Ideal operational amplifier (OPAMP) • The OPAMP is equivalent to a nullor, constructed from two singular one-ports: the nullator and the norator.

  6. Network Scaling A typical design deals with network elements having resistance from ohms to megohms, capacitance from fF to mF, and inductance from mH to H, over a wide frequency range. Consider an EXAMPLE: computing a derivative to 6-digit accuracy with such unscaled values can fail; because of roundoff errors the result can be off by 16%.
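The slide's exact numbers were lost in transcription, so the following is a hedged sketch of the same roundoff effect with an illustrative function, point, and step size of my own choosing: differentiating near a large unscaled value loses most significant digits.

```python
# Hedged illustration (my own example, not the lecture's): forward
# difference of f(t) = t^2 at a large, unscaled point x = 1e6.
def forward_diff(f, x, h):
    """One-sided finite-difference derivative (f(x+h) - f(x)) / h."""
    return (f(x + h) - f(x)) / h

f = lambda t: t * t
x = 1.0e6                          # large value, as in an unscaled network
exact = 2.0 * x                    # true derivative of t^2 is 2t

# f(x) ~ 1e12 consumes most of a double's ~16 significant digits, so
# with a tiny step the difference f(x+h) - f(x) keeps only a few digits
# and the relative error becomes large.
bad = forward_diff(f, x, 1.0e-10)
good = forward_diff(f, x, 1.0)
rel_error_bad = abs(bad - exact) / exact
rel_error_good = abs(good - exact) / exact
```

Scaling the network so that values sit near unity avoids exactly this loss of digits.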

  7. Scaling is used to bring network impedances close to unity. Impedance scaling: design values have subscript d and scaled values subscript s; for a scaling factor K the impedances are divided by K. Frequency scaling affects only the reactive elements.
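The scaling formulas above can be sketched as follows. This is a hedged sketch: the convention (divide impedances by K, compress the frequency axis by omega_0) is one common choice consistent with the slides, and the function and variable names are my own.

```python
# Hedged sketch of combined impedance and frequency scaling.
def scale_elements(R_d, L_d, C_d, K, omega_0):
    """Map designed values (subscript d) to scaled values (subscript s)
    so that Z_s(w_s) = Z_d(omega_0 * w_s) / K."""
    R_s = R_d / K                  # resistor: impedance scaling only
    L_s = omega_0 * L_d / K        # inductor: Z = jwL, both scalings act
    C_s = omega_0 * K * C_d        # capacitor: Z = 1/(jwC)
    return R_s, L_s, C_s

# Bringing a 1 kohm / 1 mH / 1 nF network close to unity:
R_s, L_s, C_s = scale_elements(1e3, 1e-3, 1e-9, K=1e3, omega_0=1e6)
```

With K = 1000 and omega_0 = 1e6 all three example elements land at 1.0, which is exactly the situation the roundoff example argues for.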

  8. For combined impedance and frequency scaling we have: VVT, CCT, IT, PIC, NIC, and OPAMP remain unchanged; for a VCT the transconductance g is multiplied by K; for a CVT or IG the transresistance r is divided by K.

  9. NODAL EQUATIONS For an (n+1)-terminal network with node voltages V1, ..., Vn+1 and injected node currents j1, ..., jn+1, the nodal equations are Y V = J.
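Assembling Y can be sketched by element "stamps". This is a hedged sketch assuming a purely resistive example of my own; the three-node values are illustrative.

```python
# Hedged sketch: building the (n+1)-node indefinite admittance matrix Y
# by element stamps (example network values are my own).
def stamp_conductance(Y, i, j, g):
    """A conductance g between nodes i and j adds g on the diagonal and
    -g off-diagonal, keeping every row and column sum at zero."""
    Y[i][i] += g; Y[j][j] += g
    Y[i][j] -= g; Y[j][i] -= g

n = 3                                  # three terminals, no ground chosen
Y = [[0.0] * n for _ in range(n)]
stamp_conductance(Y, 0, 1, 2.0)        # 0.5 ohm between nodes 0 and 1
stamp_conductance(Y, 1, 2, 1.0)        # 1 ohm between nodes 1 and 2
row_sums = [sum(r) for r in Y]         # all zero for an indefinite Y
```

The zero row and column sums are the defining property of the indefinite admittance matrix: grounding one node deletes its row and column and makes Y definite.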

  10. Y is called the indefinite admittance matrix. For a network with R, L, C elements and VCTs we can obtain Y directly from the network. For a VCT, the controlled current g·V1 flows from node k to node m, while the controlling voltage V1 is taken from node i to node j.
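The VCT stamp just described can be sketched as follows; the node indices in the demonstration are my own example.

```python
# Hedged sketch of the VCT stamp: a current g*(V_i - V_j) leaves node k
# and enters node m.
def stamp_vct(Y, i, j, k, m, g):
    """Transconductance g enters row k (current out of k) under the
    controlling columns i and j, with opposite signs in row m."""
    Y[k][i] += g; Y[k][j] -= g
    Y[m][i] -= g; Y[m][j] += g

Y = [[0.0] * 4 for _ in range(4)]
stamp_vct(Y, 0, 1, 2, 3, 0.1)      # controlled by V0 - V1, flows 2 -> 3

# With k = i and m = j the stamp collapses to the one-port conductance
# stamp, i.e. g plays the role of Y for a one-port.
Y1 = [[0.0] * 2 for _ in range(2)]
stamp_vct(Y1, 0, 1, 0, 1, 2.0)
```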

  11. When k = i and m = j the VCT degenerates to a one-port with g = Y, i.e. i = Y·v. Linear Equations and Gaussian Elimination: For a linear network the nodal equations are linear. Nonlinear networks can be solved by linearization about an operating point, so the solution of linear equations is basic to many problems. Consider the system of linear equations A x = b.

  12. The solution can be obtained by inverting the matrix, but this approach is not practical. Gaussian elimination: rewrite the equations in explicit form and denote bi by ai,n+1 to simplify notation.

  13. How do we start Gaussian elimination? Divide the first equation by a11, obtaining coefficients a1j(1) = a1j/a11. Multiply this equation by a21 and subtract it from the second; the coefficients of the new second equation are a2j(1) = a2j - a21·a1j(1), and with this transformation a21(1) becomes zero. Similarly for the other equations, setting aij(1) = aij - ai1·a1j(1):

  14. makes all coefficients of the first column zero with the exception of a11. We repeat this process, selecting the diagonal elements as dividers, and obtain general formulas in which a superscript shows how many transformation steps were made. The resulting system is upper triangular.

  15. Back substitution is used to obtain the solution: the last equation gives xn directly, which is then used to obtain xn-1, and so on. Gaussian elimination requires about n^3/3 operations. EXAMPLE: see Example 2.5.b (p. 70).
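The elimination and back-substitution steps above can be sketched as one routine. This is a minimal sketch without pivoting (pivoting is covered later in the deck); the function name and the 2x2 demonstration system are my own.

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with back substitution.
    A and b are modified in place; no pivoting is performed. Roughly
    n^3/3 operations for elimination, n^2/2 for back substitution."""
    n = len(A)
    # Forward elimination: zero out column k below the diagonal.
    for k in range(n):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # Back substitution: x_n first, then x_{n-1}, and so on.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i] - sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = s / A[i][i]
    return x

# Example system: 2x + y = 5, x + 3y = 10, with solution x = 1, y = 3.
x = gauss_solve([[2.0, 1.0], [1.0, 3.0]], [5.0, 10.0])
```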

  16. Back substitution requires about n^2/2 operations. Triangular decomposition: Triangular decomposition has an advantage over Gaussian elimination, as it gives a simple solution for systems with different right-hand-side vectors and for the transpose systems required in sensitivity computations. Assume we can factor the matrix as A = L U.

  17. L stands for lower triangular and U for upper triangular. Replacing A by LU, the system of equations takes the form L U X = b. Define an auxiliary vector Z by U X = Z; then L Z = b, and Z can be found easily: Z1 = b1/l11, with each subsequent Zi obtained from the previously computed ones.

  18. This is called forward elimination. Solution of U X = Z is called backward substitution: since U has a unit diagonal here, Xn = Zn, and the remaining unknowns follow in reverse order. To find the LU decomposition, take the product of L and U:

  19. From the first column we have li1 = ai1; from the first row we find u1j = a1j/l11; from the second column we have li2 = ai2 - li1 u12; and so on. In a machine implementation L and U overwrite A, with L occupying the lower and U the upper triangle of A. In general, the algorithm of LU decomposition can be written as follows (Crout algorithm): 1. Set k = 1. 2. Compute column k of L using lik = aik - sum_{j=1}^{k-1} lij ujk, for i = k, ..., n.

  20. If k = n, stop. 3. Compute row k of U using ukj = (akj - sum_{i=1}^{k-1} lki uij) / lkk, for j = k+1, ..., n. 4. Set k = k+1 and go to step 2. This technique is implemented in the text by the CROUT subroutine; a modification dealing with rows only is realized by LUROW; and a modification of Gaussian elimination which gives the LU decomposition is realized by the LUG subroutine. Features of LU decomposition: 1. Simple calculation of the determinant (the product of the diagonal entries of L). 2. If only the right-hand-side vector b is changed, there is no need to recalculate the decomposition; only the forward and backward substitutions are performed, which takes n^2 operations. 3. The transpose system A^T X = C, required for sensitivity calculation, can be solved easily since A^T = U^T L^T.
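The Crout algorithm above can be sketched compactly; this is a minimal sketch assuming no pivoting, with the function name and the 2x2 example matrix my own.

```python
def crout_lu(A):
    """Crout decomposition in place: L (with diagonal) and U (with unit
    diagonal) overwrite A, L in the lower triangle, U in the strict
    upper triangle."""
    n = len(A)
    for k in range(n):
        # Column k of L: l_ik = a_ik - sum_{j<k} l_ij * u_jk
        for i in range(k, n):
            A[i][k] -= sum(A[i][j] * A[j][k] for j in range(k))
        # Row k of U: u_kj = (a_kj - sum_{i<k} l_ki * u_ij) / l_kk
        for j in range(k + 1, n):
            A[k][j] = (A[k][j]
                       - sum(A[k][i] * A[i][j] for i in range(k))) / A[k][k]
    return A

# [[4, 2], [2, 3]] factors as L = [[4, 0], [2, 2]], U = [[1, 0.5], [0, 1]].
A = crout_lu([[4.0, 2.0], [2.0, 3.0]])
```

Overwriting A with both factors is exactly the storage trick the slide describes: no extra matrix is allocated.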

  21. Example 2.5.1. 4. The number of operations required for the LU decomposition is about n^3/3 (equivalent to Gaussian elimination).
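The forward elimination and backward substitution of slides 17-18, which let one factorization serve many right-hand sides at only ~n^2 cost each, can be sketched as follows; the example factors L and U are my own.

```python
# Hedged sketch of the two substitution sweeps for a computed L U = A.
def forward_elimination(L, b):
    """Solve L Z = b: Z[0] = b[0]/l11, each later Z[i] from earlier ones."""
    n = len(L)
    Z = [0.0] * n
    for i in range(n):
        Z[i] = (b[i] - sum(L[i][j] * Z[j] for j in range(i))) / L[i][i]
    return Z

def backward_substitution(U, Z):
    """Solve U X = Z for unit-diagonal U: X[n-1] = Z[n-1], then upward."""
    n = len(U)
    X = [0.0] * n
    for i in range(n - 1, -1, -1):
        X[i] = Z[i] - sum(U[i][j] * X[j] for j in range(i + 1, n))
    return X

L = [[2.0, 0.0], [1.0, 1.0]]       # lower triangular factor
U = [[1.0, 0.5], [0.0, 1.0]]       # unit-diagonal upper factor
X = backward_substitution(U, forward_elimination(L, [4.0, 5.0]))
# A second right-hand side reuses L and U without refactoring:
X2 = backward_substitution(U, forward_elimination(L, [2.0, 1.0]))
```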

  22. 2.6 PIVOTING: the element by which we divide in Gaussian elimination (it must not be zero) is called the pivot. To improve accuracy the pivot element should have a large absolute value. Partial pivoting: search for the largest element in the column. Full pivoting: search for the largest element in the matrix. Example 2.6.1
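The partial-pivoting column search can be sketched as a helper run before each elimination step; the helper name and in-place row swap are my own choices.

```python
# Hedged sketch of partial pivoting at elimination step k.
def partial_pivot(A, b, k):
    """Swap row k with the row (at or below k) whose column-k entry has
    the largest absolute value, so the pivot is as large as possible."""
    p = max(range(k, len(A)), key=lambda i: abs(A[i][k]))
    if p != k:
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]

A = [[1e-12, 1.0], [2.0, 1.0]]     # tiny leading entry: a poor pivot
b = [1.0, 3.0]
partial_pivot(A, b, 0)             # brings the 2.0 up as the pivot
```

Full pivoting would search the entire remaining submatrix instead of one column, at the cost of also permuting the unknowns.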

  23. SPARSE MATRIX PRINCIPLES To reduce the number of operations when many coefficients of the matrix A are zero, we use sparse matrix techniques. This not only reduces the time required to solve the system of equations but also reduces memory requirements, since zero coefficients are not stored at all. (Read Section 2.7.) Pivot selection strategies are motivated mostly by the possibility of reducing the number of operations.
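The storage idea can be sketched with a simple dict-of-dicts scheme; this is a hedged sketch of the principle only (real circuit simulators use linked-list or compressed-row schemes as in Section 2.7), and the example matrix is my own.

```python
# Hedged sketch: store only the nonzeros of a matrix, so zeros cost no
# memory and operations can skip them entirely.
def to_sparse(A):
    """Keep only nonzero entries as {row: {col: value}}."""
    return {i: {j: v for j, v in enumerate(row) if v != 0.0}
            for i, row in enumerate(A)}

def sparse_matvec(S, x):
    """y = S x, touching only the stored nonzeros."""
    y = [0.0] * len(x)
    for i, row in S.items():
        for j, v in row.items():
            y[i] += v * x[j]
    return y

A = [[4.0, 0.0, 0.0],
     [0.0, 3.0, 1.0],
     [0.0, 0.0, 2.0]]
S = to_sparse(A)
nonzeros = sum(len(r) for r in S.values())   # 4 entries stored, not 9
```

For the nodal matrices of large circuits the fraction of nonzeros is tiny, which is why both the operation count and the memory drop so sharply.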
