
Chapter 3


  1. Chapter 3 Fundamentals of Lyapunov Theory

  2. 3.1 Nonlinear Systems and Equilibrium Points • Nonlinear Systems A nonlinear dynamic system can usually be represented by a set of nonlinear differential equations in the form ẋ = f(x, t) (3.1) where f is an n×1 nonlinear vector function, and x is the n×1 state vector. The number of states n is called the order of the system.

  3. A solution x(t) of the equations (3.1) usually corresponds to a curve in state space as t varies from zero to infinity, as already seen in phase plane analysis for the case n = 2. This curve is generally referred to as a state trajectory or a system trajectory. If the plant dynamics is ẋ = f(x, u, t) and some control law has been selected u = g(x, t)

  4. then the closed-loop dynamics is ẋ = f[x, g(x, t), t] A special class of nonlinear systems is linear systems. The dynamics of linear systems are of the form ẋ = A(t)x where A(t) is an n×n matrix.
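As a numerical aside (not part of the original slides), a state trajectory of such a closed-loop system can be traced by simple forward-Euler integration. The plant f and control law g below are hypothetical choices made only for this sketch:

```python
import math

def f(x, u, t):
    # hypothetical plant dynamics xdot = f(x, u, t)
    return -x**3 + u + 0.1 * math.sin(t)

def g(x, t):
    # hypothetical control law u = g(x, t)
    return -2.0 * x

def trajectory(x0, dt=1e-3, t_end=10.0):
    """Forward-Euler trace of the closed-loop dynamics xdot = f(x, g(x, t), t)."""
    x, t, xs = x0, 0.0, [x0]
    for _ in range(int(t_end / dt)):
        x += dt * f(x, g(x, t), t)
        t += dt
        xs.append(x)
    return xs

xs = trajectory(2.0)
print(abs(xs[-1]) < 0.1)  # the state has settled near the origin
```

The returned list of samples is a discrete version of the "curve in state space" the slide describes; for n = 2 one would plot the two components against each other to obtain a phase-plane trajectory.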

  5. Autonomous and Non-autonomous systems Linear systems are classified as either time-varying or time-invariant, depending on whether the system matrix A varies with time or not.

  6. I. Autonomous Definition 3.1: The nonlinear system (3.1) is said to be autonomous if f does not depend explicitly on time, i.e., if the system’s state equation can be written ẋ = f(x) (3.2) Otherwise, the system is called non-autonomous.

  7. II. Non-autonomous Obviously, linear time-invariant (LTI) systems are autonomous and linear time-varying (LTV) systems are non-autonomous. The second-order systems studied in chapter 2 are all autonomous.

  8. Equilibrium Points 1. Definition 3.2: A state x* is an equilibrium state (or equilibrium point) of the system if once x(t) is equal to x*, it remains equal to x* for all future time. 2. Example 3.1: The Pendulum Consider the pendulum of Figure 3.1, whose dynamics is given by the following nonlinear autonomous equation MR² θ̈ + b θ̇ + MgR sin θ = 0 (3.5)

  9. where R is the pendulum’s length, M its mass, b the friction coefficient at the hinge, and g the gravity constant. Letting x1 = θ, x2 = θ̇, the corresponding state-space equations are ẋ1 = x2, ẋ2 = −(b/MR²) x2 − (g/R) sin x1

  10. Therefore, the equilibrium points are given by x2 = 0, sin x1 = 0 which leads to the points (0 [2π], 0) and (π [2π], 0). Physically, these points correspond to the pendulum resting exactly at the vertical up and down positions.
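The equilibrium condition can be checked numerically: at each candidate point the state derivative must vanish. Assuming the pendulum state-space form ẋ1 = x2, ẋ2 = −(b/MR²)x2 − (g/R) sin x1, with illustrative (not given) parameter values:

```python
import math

# illustrative parameter values (R: length, M: mass, b: friction, g: gravity)
R, M, b, g = 1.0, 1.0, 0.1, 9.81

def f(x1, x2):
    """Pendulum state equations: xdot1 = x2, xdot2 = -(b/(M*R**2))*x2 - (g/R)*sin(x1)."""
    return x2, -(b / (M * R**2)) * x2 - (g / R) * math.sin(x1)

# at the equilibria (0 [2*pi], 0) and (pi [2*pi], 0) the derivative vanishes
for x1 in (0.0, math.pi, 2 * math.pi, 3 * math.pi):
    dx1, dx2 = f(x1, 0.0)
    print(abs(dx1) < 1e-9 and abs(dx2) < 1e-9)
```

Each line prints True: once placed exactly at a vertical up or down position with zero velocity, the model generates no motion, which is precisely Definition 3.2.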

  11. Nominal Motion Let x*(t) be the solution of equation (3.2), i.e., the nominal motion trajectory, corresponding to initial condition x*(0) = xo. Let us now perturb the initial condition to be x(0) = xo + δxo and study the associated variation of the motion error e(t) = x(t) − x*(t) as illustrated in Figure 3.2.

  12. Since both x*(t) and x(t) are solutions of (3.2), we have Figure 3.2

  13. ẋ* = f(x*), x*(0) = xo and ẋ = f(x), x(0) = xo + δxo. Then e(t) satisfies the following non-autonomous differential equation ė = f(x* + e) − f(x*) = g(e, t) (3.8) with initial condition e(0) = δxo. Since g(0, t) = 0, the new dynamic system, with e as state and g in place of f, has an equilibrium point at the origin of the state space.

  14. Therefore, instead of studying the deviation of x(t) from x*(t) for the original system, we may simply study the stability of the perturbation dynamics (3.8) with respect to the equilibrium point 0. Note, however, that the perturbation dynamics is non-autonomous, due to the presence of the nominal trajectory x*(t) on the right-hand side.
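The reduction above can be watched numerically: integrate a nominal and a perturbed trajectory of the same autonomous system and track the motion error e(t) = x(t) − x*(t). The scalar dynamics used here is an assumption chosen only so the sketch is self-contained:

```python
def f(x):
    # illustrative autonomous dynamics (an assumption, not from the text)
    return -x**3

def trajectory(x0, dt=1e-3, t_end=5.0):
    """Forward-Euler solution samples of xdot = f(x) starting from x0."""
    xs, x = [x0], x0
    for _ in range(int(t_end / dt)):
        x += dt * f(x)
        xs.append(x)
    return xs

nominal = trajectory(1.0)        # x*(t), the nominal motion
perturbed = trajectory(1.1)      # x(t), perturbed initial condition (delta = 0.1)
errors = [xp - xn for xp, xn in zip(perturbed, nominal)]

# e(t) starts at delta_xo and shrinks: the perturbation dynamics has a
# stable equilibrium at e = 0
print(abs(errors[-1]) < abs(errors[0]))
```

Note that the error samples depend on where along the nominal trajectory they are taken, which is exactly the time-dependence that makes the perturbation dynamics (3.8) non-autonomous.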

  15. Each particular nominal motion of an autonomous system corresponds to an equivalent non-autonomous system, whose study requires the non-autonomous system analysis techniques to be presented in chapter 4.

  16. Stability and Instability Definition 3.3: The equilibrium state x = 0 is said to be stable if, for any R > 0, there exists r > 0, such that if ||x(0)|| < r, then ||x(t)|| < R for all t ≥ 0. Otherwise, the equilibrium point is unstable.

  17. Figure 3.3:Concepts of stability

  18. Example 3.3: Instability of the Van der Pol Oscillator The Van der Pol oscillator of Example 2.6 is described by ẋ1 = x2, ẋ2 = −x1 + μ(1 − x1²) x2 One easily shows that the system has an equilibrium point at the origin.

  19. System trajectories starting from any non-zero initial states all asymptotically approach a limit cycle. This implies that, if we choose R in Definition 3.3 to be small enough for the circle of radius R to fall completely within the closed curve of the limit cycle, then system trajectories starting near the origin will eventually get out of this circle (Figure 3.4). This implies instability of the origin.

  20. Figure 3.4: Unstable origin of the Van der Pol Oscillator
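This escape from any small circle around the origin is easy to reproduce numerically. Taking the Van der Pol state equations in the form ẋ1 = x2, ẋ2 = −x1 + μ(1 − x1²)x2, with an illustrative μ (an assumption for this sketch):

```python
import math

MU = 0.2  # damping parameter, illustrative value

def vdp_step(x1, x2, dt):
    """One forward-Euler step of xdot1 = x2, xdot2 = -x1 + MU*(1 - x1**2)*x2."""
    return x1 + dt * x2, x2 + dt * (-x1 + MU * (1 - x1**2) * x2)

# start arbitrarily close to the equilibrium at the origin ...
x1, x2 = 1e-3, 0.0
for _ in range(200_000):          # integrate for 200 time units
    x1, x2 = vdp_step(x1, x2, 1e-3)

# ... yet the state has left any small circle around it, heading for the
# limit cycle: the origin is unstable in the sense of Definition 3.3
print(math.hypot(x1, x2) > 0.5)
```

No matter how small the initial radius is chosen, the trajectory eventually crosses any fixed circle inside the limit cycle, which is the instability argument of the slide.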

  21. Definition 3.4: An equilibrium point 0 is asymptotically stable if it is stable, and if in addition there exists some r > 0 such that ||x(0)|| < r implies that x(t) → 0 as t → ∞. Asymptotic stability means that the equilibrium is stable, and that in addition, states started close to 0 actually converge to 0 as time t goes to infinity. Figure 3.3 shows that system trajectories starting from within the ball Br converge to the origin.

  22. The ball Br is called a domain of attraction of the equilibrium point (while the domain of attraction of the equilibrium point refers to the largest such region, i.e., to the set of all points such that trajectories initiated at these points eventually converge to the origin). An equilibrium point which is Lyapunov stable but not asymptotically stable is called marginally stable. One may question the need for the explicit stability requirement in the definition above, in view of the second condition of state convergence to the origin.

  23. However, it is easy to build counter-examples that show that state convergence does not necessarily imply stability. For instance, a simple system studied by Vinograd has trajectories of the form shown in Figure 3.5. All the trajectories starting from non-zero initial points within the unit disk first reach the curve C before converging to the origin. Thus, the origin is unstable in the sense of Lyapunov, despite the state convergence. Calling such a system unstable is quite reasonable, since a curve such as C may be outside the region where the model is valid; for instance,

  24. the subsonic and supersonic dynamics of a high-performance aircraft are radically different, and, with the problem under study using a subsonic dynamic model, C could be in the supersonic range.

  25. Figure 3.5: State convergence does not imply stability

  26. In many engineering applications, it is still not sufficient to know that a system will converge to the equilibrium point after infinite time. There is a need to estimate how fast the system trajectory approaches 0. The concept of exponential stability can be used for this purpose.

  27. Definition 3.5: An equilibrium point 0 is exponentially stable if there exist two strictly positive numbers α and λ such that ∀t > 0, ||x(t)|| ≤ α ||x(0)|| e^(−λt) (3.9) in some ball Br around the origin.

  28. In words, (3.9) means that the state vector of an exponentially stable system converges to the origin faster than an exponential function. The positive number λ is often called the rate of exponential convergence. For instance, the system ẋ = −(1 + sin²x) x is exponentially convergent to x = 0 with a rate λ = 1. Indeed, its solution is x(t) = x(0) e^(−∫₀ᵗ [1 + sin²x(τ)] dτ)

  29. and therefore |x(t)| ≤ |x(0)| e^(−t). Note that exponential stability implies asymptotic stability. But asymptotic stability does not guarantee exponential stability, as can be seen from the system ẋ = −x², x(0) = 1 (3.10) whose solution is x = 1/(1 + t), a function slower than any exponential function e^(−λt) (with λ > 0).
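The gap between the two notions can be checked directly: x(t) = 1/(1 + t) does converge to 0, yet for any decay rate it eventually sits above the corresponding exponential envelope. A minimal numerical check:

```python
import math

# x(t) = 1/(1+t) solves xdot = -x**2 with x(0) = 1: asymptotically but not
# exponentially stable.  For any rate lam > 0, exp(-lam*t) eventually drops
# below 1/(1+t), so no exponential bound of the form (3.9) can hold globally.
def slower_than_exponential(lam, t):
    return 1.0 / (1.0 + t) > math.exp(-lam * t)

# at a large enough time, the algebraic decay dominates every tested rate
print(all(slower_than_exponential(lam, 100.0) for lam in (1.0, 0.1, 0.05)))
```

Smaller rates merely postpone the crossover time; they never remove it, which is why the convergence is asymptotic but not exponential.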

  30. Definition 3.6:If asymptotic (or exponential) stability holds for any initial states, the equilibrium point is said to be asymptotically (or exponentially) stable in the large. It is also called globally asymptotically (or exponentially) stable. For instance, in Example 1.2 the linearized system is globally asymptotically stable, but the original system is not. The simple system in (3.10) is also globally asymptotically stable, as can be seen from its solutions.

  31. Linear time-invariant systems are either asymptotically stable, or marginally stable, or unstable, as can be seen from the modal decomposition of linear system solutions; linear asymptotic stability is always global and exponential, and linear instability always implies exponential blow-up. This explains why the refined notions of stability introduced here were not previously encountered in the study of linear systems. They are explicitly needed only for nonlinear systems.

  32. 3.3 Linearization and Local Stability

  33. Consider the autonomous system (3.2), and assume that f(x) is continuously differentiable. Then the system dynamics can be written as ẋ = (∂f/∂x)|x=0 x + f_h.o.t.(x) where f_h.o.t. stands for higher-order terms in x. Note that the above Taylor expansion starts directly with the first-order term, due to the fact that f(0) = 0, since 0 is an equilibrium point.

  34. Let us use the constant matrix A to denote the Jacobian matrix of f with respect to x at x = 0 (an n×n matrix of elements ∂fi/∂xj). Then, the system ẋ = A x

  35. is called the linearization (or linear approximation) of the original nonlinear system at the equilibrium point 0. Note that, similarly, starting with a non-autonomous nonlinear system with a control input u such that ẋ = f(x, u), we can write ẋ = (∂f/∂x)|(0,0) x + (∂f/∂u)|(0,0) u + f_h.o.t.(x, u)

  36. where f_h.o.t.(x, u) stands for higher-order terms in x and u. Letting A denote the Jacobian matrix of f with respect to x at (x = 0, u = 0), and B denote the Jacobian matrix of f with respect to u at the same point (an n×m matrix of elements ∂fi/∂uj, where m is the number of inputs), the system ẋ = A x + B u is the linearization (or linear approximation) of the original nonlinear system at (x = 0, u = 0).
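In practice the Jacobians A = ∂f/∂x and B = ∂f/∂u can be approximated by finite differences when an analytic derivative is inconvenient. The controlled dynamics below is a hypothetical pendulum-like example made up for this sketch:

```python
import math

def jacobian(h, z0, eps=1e-6):
    """Forward-difference Jacobian of a vector function h at the point z0."""
    h0 = h(z0)
    J = []
    for i in range(len(h0)):
        row = []
        for j in range(len(z0)):
            zp = list(z0)
            zp[j] += eps
            row.append((h(zp)[i] - h0[i]) / eps)
        J.append(row)
    return J

# hypothetical controlled dynamics with f(0, 0) = 0
def f(x, u):
    return [x[1], -math.sin(x[0]) - 0.5 * x[1] + u[0]]

A = jacobian(lambda x: f(x, [0.0]), [0.0, 0.0])   # df/dx at (x=0, u=0), 2x2
B = jacobian(lambda u: f([0.0, 0.0], u), [0.0])   # df/du at (x=0, u=0), 2x1
print(A, B)  # A is approximately [[0, 1], [-1, -0.5]], B approximately [[0], [1]]
```

The step size eps trades truncation error against round-off; for smooth f a value near the square root of machine epsilon is the usual compromise.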

  37. Furthermore, the choice of a control law of the form u = u(x) (with u(0) = 0) transforms the original non-autonomous system into an autonomous closed-loop system, having x = 0 as an equilibrium point. Linearly approximating the control law as u ≈ (∂u/∂x)|x=0 x = G x, the closed-loop dynamics can be linearly approximated as ẋ = (A + B G) x

  38. Of course, the same linear approximation can be obtained by directly considering the autonomous closed-loop system ẋ = f(x, u(x)) = f1(x) and linearizing the function f1 with respect to x at its equilibrium point x = 0.

  39. Example 3.4: Consider the system Its linearized approximation about x = 0 is The linearized system can thus be written

  40. A similar procedure can be applied for a controlled system. Consider the system The system can be linearly approximated about x = 0 as i.e., the linearized system can be written

  41. Assume that the control law for the original nonlinear system has been selected to be then the linearized closed-loop dynamics is

  42. Theorem 3.1 (Lyapunov’s linearization method) • If the linearized system is strictly stable (i.e., if all eigenvalues of A are strictly in the left-half complex plane), then the equilibrium point is asymptotically stable (for the actual nonlinear system). • If the linearized system is unstable (i.e., if at least one eigenvalue of A is strictly in the right-half complex plane), then the equilibrium point is unstable (for the nonlinear system).

  43. If the linearized system is marginally stable (i.e., all eigenvalues of A are in the left-half complex plane, but at least one of them is on the jω axis), then one cannot conclude anything from the linear approximation (the equilibrium point may be stable, asymptotically stable, or unstable for the nonlinear system).
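The three cases of Theorem 3.1 reduce to an eigenvalue test on A. A minimal sketch of that test (the two example Jacobians are hypothetical, chosen only to exercise each branch):

```python
import numpy as np

def linearization_verdict(A, tol=1e-9):
    """Classify an equilibrium from the eigenvalues of its Jacobian A
    per Lyapunov's linearization method (Theorem 3.1)."""
    re = np.linalg.eigvals(A).real
    if np.all(re < -tol):
        return "asymptotically stable"   # all eigenvalues strictly in the left-half plane
    if np.any(re > tol):
        return "unstable"                # some eigenvalue strictly in the right-half plane
    return "inconclusive"                # marginal: eigenvalue(s) on the j-omega axis

# hypothetical Jacobians for the sketch
print(linearization_verdict(np.array([[0.0, 1.0], [-1.0, -0.5]])))  # asymptotically stable
print(linearization_verdict(np.array([[0.0, 1.0], [-1.0, 0.0]])))   # inconclusive
```

The tolerance matters: a numerically computed eigenvalue sitting essentially on the jω axis must be treated as the marginal case, where, as the slide says, the linearization cannot decide.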

  44. Example 3.6: Consider the first order system ẋ = a x + b x⁵ The origin 0 is one of the two equilibrium points of this system. The linearization of this system around the origin is ẋ = a x

  45. The application of Lyapunov’s linearization method indicates the following stability properties of the nonlinear system • a < 0: asymptotically stable; • a > 0: unstable; • a = 0: cannot tell from linearization. In the third case, the nonlinear system is ẋ = b x⁵. The linearization method fails while, as we shall see, the direct method to be described can easily solve this problem.

  46. 3.4 Lyapunov’s Direct Method

  47. The basic philosophy of Lyapunov’s direct method is the mathematical extension of a fundamental physical observation: if the total energy of a mechanical (or electrical) system is continuously dissipated, then the system, whether linear or nonlinear, must eventually settle down to an equilibrium point.

  48. Specifically, let us consider the nonlinear mass-damper-spring system in Figure 3.6, whose dynamic equation is m ẍ + b ẋ|ẋ| + k0 x + k1 x³ = 0 with b ẋ|ẋ| representing nonlinear dissipation or damping, and (k0 x + k1 x³) representing a nonlinear spring term. Assume that the mass is pulled away from the natural length of the spring by a large distance, and then released.
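The energy argument can be watched numerically. Taking the dissipation term as b ẋ|ẋ| and the spring force as k0 x + k1 x³ (a common form for this example; the parameter values below are assumptions), the total mechanical energy of the simulated motion decreases:

```python
# mass-damper-spring sketch: m*xdd + b*xd*abs(xd) + k0*x + k1*x**3 = 0
m, b, k0, k1 = 1.0, 0.5, 1.0, 0.2    # illustrative parameter values
x, xd, dt = 2.0, 0.0, 1e-4           # pulled out to x = 2, then released

def energy(x, xd):
    # kinetic energy plus potential energy stored in the nonlinear spring
    return 0.5 * m * xd**2 + 0.5 * k0 * x**2 + 0.25 * k1 * x**4

V0 = energy(x, xd)
for _ in range(300_000):             # 30 seconds of forward-Euler integration
    xdd = -(b * xd * abs(xd) + k0 * x + k1 * x**3) / m
    x, xd = x + dt * xd, xd + dt * xdd

print(energy(x, xd) < V0)            # the total energy has been dissipated
```

This monotonically decreasing energy function is exactly the kind of scalar function that Lyapunov's direct method generalizes: one need not solve the differential equation to conclude that the motion settles toward the equilibrium.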

  49. The linearization method cannot be used either because the motion starts outside the linear range (and in any case the system’s linear approximation is only marginally stable).
