
Neuro-Fuzzy Methods for Modeling and Identification


Presentation Transcript


  1. Neuro-Fuzzy Methods for Modeling and Identification Presented by: Ali Maleki

  2. Presentation Agenda • Introduction • Fuzzy Systems • Artificial Neural Networks • Neuro-Fuzzy Modeling • Simulation Examples

  3. Introduction • Control systems • Competition • Environmental requirements • Energy and material costs • Demand for robust, fault-tolerant systems • Increased need for effective process-modeling techniques • Why not conventional modeling? • Lack of precise, formal knowledge about the system • Strongly nonlinear behavior • High degree of uncertainty • Time-varying characteristics

  4. Introduction • Solution: Neuro-fuzzy modeling • A powerful tool that facilitates the effective development of models by combining information from different sources: • Empirical models • Heuristics • Data • Neuro-fuzzy models: • Describe systems by means of fuzzy if-then rules • Are represented in a network structure • Apply algorithms from the area of neural networks

  5. Introduction • Neuro-fuzzy modeling • = neural networks + fuzzy systems • Both are motivated by imitating the human reasoning process • Relationships: • In neural networks: implicit, coded in the network and its parameters • In fuzzy systems: explicit, in the form of if-then rules • Neuro-fuzzy systems combine the semantic transparency of rule-based fuzzy systems with the learning capability of neural networks

  6. Nonlinear system identification • NARX (nonlinear autoregressive with exogenous input) model • Regressor vector: • Dynamic order of the system: • Represented by the number of lags nu and ny • Task of nonlinear system identification: • Infer unknown function f from available data sequences
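As a minimal sketch (the signal values and the helper name `narx_regressor` are illustrative, not from the slides), the NARX regressor vector x(k) = [y(k−1), …, y(k−ny), u(k−1), …, u(k−nu)] can be assembled from recorded input and output sequences:

```python
import numpy as np

def narx_regressor(u, y, k, nu, ny):
    """Build the regressor x(k) = [y(k-1),...,y(k-ny), u(k-1),...,u(k-nu)]."""
    past_outputs = [y[k - i] for i in range(1, ny + 1)]
    past_inputs = [u[k - j] for j in range(1, nu + 1)]
    return np.array(past_outputs + past_inputs)

# short recorded sequences (illustrative values)
u = np.array([0.0, 1.0, 0.5, -0.2, 0.3])
y = np.array([0.0, 0.1, 0.4, 0.35, 0.2])

x = narx_regressor(u, y, k=3, nu=2, ny=2)
# x -> [y(2), y(1), u(2), u(1)] = [0.4, 0.1, 0.5, 1.0]
```

The identification task is then to fit the unknown map f so that y(k) ≈ f(x(k)) over all available samples.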

  7. Nonlinear system identification • Multivariable systems: • Nonlinear state-space description • Task of nonlinear system identification: • Infer the unknown functions g and h from available data sequences • Candidate model classes: • Neural networks • Neuro-fuzzy systems • Splines • Interpolated look-up tables • Neural networks: accurate predictor • Neuro-fuzzy systems: accurate predictor + a model that can be used to learn something about the system + analysis of system properties

  8. Fuzzy Models

  9. Fuzzy Models • Definition: a mathematical model which in some way uses fuzzy sets • In system identification: rule-based fuzzy models • Example: • IF heating is high THEN temperature increase is fast • "high" and "fast" are linguistic terms • To make such a model operational, the linguistic terms must be defined more precisely using fuzzy sets • Fuzzy sets are defined through their membership functions

  10. Fuzzy Models • Types of fuzzy models (depending on the structure of the if-then rules): • Mamdani model • IF D1 is low and D2 is high THEN D is medium • Takagi-Sugeno model • IF D1 is low and D2 is high THEN D = k (zero-order) • IF D1 is low and D2 is high THEN D = 0.7 D1 + 0.2 D2 + 0.1 (first-order)

  11. Mamdani Model • Linguistic terms and number of rules • The linguistic fuzzy model is useful for representing qualitative knowledge

  12. Example – Mamdani Model • Gas burner: oxygen flow rate → heating power • Linguistic terms are defined by membership functions

  13. Example – Mamdani Model • Gas burner: oxygen flow rate → heating power • Membership functions can be defined by the model developer based on prior knowledge, or constructed and adjusted (automatically) by using data

  14. Takagi-Sugeno Model • The Mamdani model is typically used in knowledge-based (expert) systems • In data-driven identification, the Takagi-Sugeno model has become popular • Consequent parameter vector • Scalar offset • Number of rules

  15. Takagi-Sugeno Model • The output y is computed by taking the weighted average of the individual rules' contributions: • Degree of fulfilment of the i-th rule • In a special case:
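The weighted-average output can be sketched as follows (a minimal single-input example assuming Gaussian membership functions; the centers, spreads, and consequent parameters are illustrative):

```python
import numpy as np

def gauss(x, c, s):
    # Gaussian membership function with center c and spread s
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def ts_output(x, centers, spreads, a, b):
    # beta_i: degree of fulfilment of rule i
    beta = np.array([gauss(x, c, s) for c, s in zip(centers, spreads)])
    # local linear models y_i = a_i * x + b_i
    y_local = np.array(a) * x + np.array(b)
    # weighted average of the individual rules' contributions
    return float(np.sum(beta * y_local) / np.sum(beta))

# if all local models are identical, the interpolation reproduces that model
y = ts_output(0.5, centers=[-1.0, 1.0], spreads=[1.0, 1.0],
              a=[2.0, 2.0], b=[1.0, 1.0])
# y -> 2.0  (= 2 * 0.5 + 1)
```

With differing consequents, the same formula blends the local models smoothly according to the degrees of fulfilment.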

  16. Takagi-Sugeno Model (zero-order) • Rules: • Input-output equation: • This model is a special case of the Mamdani system, in which • Consequent fuzzy sets degenerate to singletons (real numbers)

  17. Takagi-Sugeno Model • Usually, the antecedent fuzzy sets are defined to describe distinct, partly overlapping regions in the input space • Then the consequent parameters ai are approximate local linear models of the considered nonlinear system • TS model: • Piece-wise linear approximation of a nonlinear function

  18. Example: Takagi-Sugeno Model • Static characteristic of an actuator with a • dead zone and a • non-symmetrical response for positive and negative inputs

  19. Fuzzy Logic Operators • In fuzzy systems with multiple inputs, the antecedent proposition is usually represented as a combination of terms, by using the logic operators 'and' (conjunction), 'or' (disjunction) and 'not' (complement) • In fuzzy set theory, several families of operators have been introduced for these logical connectives

  20. Example: Fuzzy Logic Operators • Conjunctive form of the antecedent • Degree of fulfillment: • minimum conjunction operator • product conjunction operator
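With two antecedent terms, the two conjunction operators give different degrees of fulfillment (the membership values 0.8 and 0.6 are illustrative):

```python
# membership degrees of the two antecedent terms (illustrative values)
mu1, mu2 = 0.8, 0.6

beta_min = min(mu1, mu2)   # minimum conjunction operator  -> 0.6
beta_prod = mu1 * mu2      # product conjunction operator  -> 0.48
```

The product operator is smooth in its arguments, which is convenient for gradient-based training; the minimum operator keeps the degree of fulfillment equal to the weakest antecedent term.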

  21. Dynamic Fuzzy Models • TS NARX model • Regressor vector

  22. Dynamic Fuzzy Models • TS state-space model • Advantages of the state-space modeling approach: • structure of the model can easily be related to the physical structure of the real system (model parameters are physically relevant) • This is not necessarily the case with input-output models. • Dimension of the regression problem in state-space modeling is often smaller than with input–output models

  23. Artificial Neural Networks • ANNs: • Inspired by the functionality of biological neural networks • Can generalize from a limited amount of training data • Black-box models of nonlinear, multivariable static and dynamic systems • Can be trained by using input-output data • ANNs consist of: • Neurons • Interconnections among them • Weights assigned to these interconnections

  24. Multi-Layer Neural Network • One input layer • One output layer • A number of hidden layers • Activation functions: • Linear • Hyperbolic tangent • Threshold • Sigmoidal

  25. Multi-Layer Neural Network - Training • Training definition: • adaptation of weights in a multi-layer network such that the error between the desired output and the network output is minimized • Training steps: • Feedforward computation • Weight adaptation • Gradient-descent optimization • Error backpropagation

  26. Multi-Layer Neural Network - Structure • A network with one hidden layer is sufficient for most approximation tasks • More layers: • Can give a better fit • But the training takes longer • number of neurons in the hidden layer: • Too few neurons give a poor fit • Too many neurons result in overtraining of the net (poor generalization to unseen data)

  27. Dynamic Neural Networks • Static feedforward network combined with an external feedback connection • First-order NARX model

  28. Dynamic Neural Networks • Recurrent Networks • Feedback: • Internally in the neurons (Elman network) • Internally to other neurons in the same layer • Internally to neurons in preceding layer (Hopfield Network) • Hopfield Network • Elman Network:

  29. Error Backpropagation • Input: x • Desired output: d • Error: e = d − y • Cost function: J = ½ eᵀe • Adjusting the weights: minimization of the cost function

  30. Error Backpropagation • The network's output y is nonlinear in the weights • Therefore, • The training of an MNN is thus a nonlinear optimization problem • Methods: • Error backpropagation (first-order gradient) • Newton and Levenberg-Marquardt methods (second-order gradient) • Genetic algorithms and many other techniques

  31. Error Backpropagation • First-order gradient • Update rule: w(n+1) = w(n) − α(n) ∇J(w(n)) • w(n): weight vector in iteration n • α(n): learning rate • ∇J: Jacobian of the network • The nonlinear optimization problem is thus solved by using the first term of its Taylor series expansion

  32. Error Backpropagation • Second-order gradient: • Second-order gradient methods make use of the second term of the Taylor series expansion • Hessian H(w) • Update rule: w(n+1) = w(n) − H⁻¹(w(n)) ∇J(w(n))
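A common second-order variant is the Levenberg-Marquardt method, where JᵀJ (J the Jacobian of the residuals) approximates the Hessian and a damping term λI keeps the step well-conditioned. A minimal sketch on a toy curve-fitting problem (the model y = a·exp(b·x), the data, and the damping value are illustrative assumptions, not from the slides):

```python
import numpy as np

# toy problem: fit y = a * exp(b * x) to exact data generated with a=2, b=-1.5
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * np.exp(-1.5 * x)

w = np.array([1.0, 0.0])   # initial guess for [a, b]
lam = 1e-2                 # Levenberg-Marquardt damping term
for _ in range(50):
    a, b = w
    r = a * np.exp(b * x) - y                        # residuals
    J = np.column_stack([np.exp(b * x),              # dr/da
                         a * x * np.exp(b * x)])     # dr/db
    # solve (J^T J + lam * I) * step = -J^T r  -- damped Gauss-Newton step
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    w = w + step
# w converges toward the true parameters [2.0, -1.5]
```

Compared with the first-order rule, both the size and the direction of each step change, because the curvature information in JᵀJ rescales the gradient.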

  33. Error Backpropagation • Difference between first-order and second-order gradient methods: size and direction of the gradient-descent step • Second-order methods are usually more effective than first-order ones

  34. Error Backpropagation • Output layer: update law for the output weights • Hidden layer: update law for the hidden-layer weights, using the backpropagated error
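The feedforward computation and both update laws can be sketched for a single-hidden-layer network with tanh hidden units and a linear output neuron (network size, learning rate, and the target function x² are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2):
    z = np.tanh(W1 @ x + b1)   # hidden-layer activations
    y = W2 @ z + b2            # linear output neuron
    return z, y

def backprop_step(x, d, W1, b1, W2, b2, lr):
    z, y = forward(x, W1, b1, W2, b2)
    e = y - d                             # output error (cost J = 0.5 * e^2)
    delta = (W2.T @ e) * (1.0 - z ** 2)   # error backpropagated through tanh
    # gradient-descent update laws for output and hidden layers
    return (W1 - lr * np.outer(delta, x), b1 - lr * delta,
            W2 - lr * np.outer(e, z),     b2 - lr * e)

# train to approximate d = x^2 on [-1, 1]
W1 = rng.normal(scale=0.5, size=(5, 1)); b1 = np.zeros(5)
W2 = rng.normal(scale=0.5, size=(1, 5)); b2 = np.zeros(1)
X = np.linspace(-1.0, 1.0, 20)

def cost(W1, b1, W2, b2):
    return sum(0.5 * (forward(np.array([x]), W1, b1, W2, b2)[1][0] - x ** 2) ** 2
               for x in X)

J0 = cost(W1, b1, W2, b2)
for _ in range(200):
    for x in X:
        W1, b1, W2, b2 = backprop_step(np.array([x]), x ** 2, W1, b1, W2, b2, lr=0.05)
J1 = cost(W1, b1, W2, b2)
# J1 < J0: the cost decreases during training
```

The factor (1 − z²) is the derivative of tanh, which is how the output error is "backpropagated" to the hidden layer.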

  35. Radial Basis Function Network • The RBF network is a two-layer network • The usual choice for the basis function is the Gaussian function

  36. Radial Basis Function Network • Adjustable weights are only present in the output layer • The free parameters of RBF nets are: • Output weights • Parameters of the basis functions (centers and radii) • The output is linear in the weights, so these weights can be estimated by least-squares methods • Adaptation of the RBF parameters (centers and radii) is a nonlinear optimization problem that can be solved by gradient-descent techniques
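Because the output is linear in the weights, the one-shot least-squares estimate reduces to solving a linear system over the basis-function outputs. A minimal sketch with fixed Gaussian centers (the centers, radius, and target function sin(2πx) are illustrative assumptions):

```python
import numpy as np

def rbf_design(x, centers, radius):
    # Gaussian basis functions phi_j(x) = exp(-(x - c_j)^2 / (2 * radius^2))
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * radius ** 2))

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2.0 * np.pi * x)          # target to approximate

centers = np.linspace(0.0, 1.0, 9)   # fixed basis-function centers
Phi = rbf_design(x, centers, radius=0.15)

# output weights by linear least squares (no iterative training needed)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
y_hat = Phi @ w
```

If the centers and radii must also be adapted, the problem becomes nonlinear in those parameters and gradient-based optimization is used for them, typically alternated with this LS step for the weights.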

  37. Neuro-Fuzzy Modeling • Fuzzy system as a layered structure (network), similar to ANNs of the RBF type • Gradient-descent training algorithms for parameter optimization • This approach is usually referred to as neuro-fuzzy modeling • Zero-order TS fuzzy model • First-order TS fuzzy model

  38. Neuro-Fuzzy Modeling • Zero-order TS fuzzy model • Typical membership function • Input-output equation

  39. Neuro-Fuzzy Modeling • First-order TS fuzzy model • Input-output equation

  40. Neuro-Fuzzy Networks – Constructing • Prior knowledge (can be of a rather approximate nature) • Process data • Integration of knowledge and data: • Expert knowledge as a collection of if-then rules (initial model creation, fine-tuned using process data) • Fuzzy rules constructed from scratch by using numerical data (the expert can confront the information stored in the rule base with his own knowledge, modify the rules, or supply additional rules to extend the validity of the model) • Compared with truly black-box structures, the second method offers the possibility to interpret the obtained results

  41. Neuro-Fuzzy Networks – Structure and parameters • System identification steps: • Structure identification • Parameter estimation • The choice of the model's structure determines the flexibility of the model in the approximation of (unknown) systems • A model with a rich structure can approximate more complicated functions, but will have worse generalization properties • Good generalization means that a model fitted to one data set will also perform well on another data set from the same process.

  42. Neuro-Fuzzy Networks – Structure and parameters • Structure selection process involves: • Selection of input variables • Number and type of membership functions, number of rules (these two structural parameters are mutually related)

  43. Neuro-Fuzzy Networks – Structure and parameters • Selection of input variables • Physical inputs • Dynamic regressors (defined by the input and output lags) • Typical sources of information: • Prior knowledge • Insight into the process behavior • Purpose of the modeling exercise • Automatic data-driven selection can then be used to compare different structures in terms of some specified performance criteria.

  44. Neuro-Fuzzy Networks – Structure and parameters • Number and type of membership functions, number of rules • Determine the level of detail (granularity) of the model • Typical criteria: • Purpose of modeling • Amount of available information (knowledge and data) • Automated methods can be used to add or remove membership functions and rules.

  45. Neuro-Fuzzy Networks – Gradient-based learning • Zero-order ANFIS model • Consequent parameters • Jacobian • Update law • Centers and spreads of the Gaussian membership functions

  46. Neuro-Fuzzy Networks – Hybrid Learning • Output-layer parameters in RBF networks can be estimated by linear least-squares (LS) techniques • LS methods are more effective than the gradient-based update rule • Hybrid methods: • One-shot least-squares estimation of the consequent parameters • Iterative gradient-based optimization of the membership functions • The choice of LS estimation method is not crucial in terms of error minimization, but if the consequent parameters are to be interpreted as local models, great care must be taken • PROBLEM: over-parameterization • numerical problems - over-fitting - meaningless parameter estimates
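The one-shot LS step can be sketched for the singleton consequents of a zero-order TS model: with the membership functions fixed, the model output is linear in the consequents, so they drop out of a single linear solve (the centers, spread, and target function x² are illustrative assumptions):

```python
import numpy as np

def fulfilment(x, centers, spread):
    # Gaussian degrees of fulfilment, normalized so each row sums to 1
    mu = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / spread) ** 2)
    return mu / mu.sum(axis=1, keepdims=True)

x = np.linspace(-2.0, 2.0, 40)
y = x ** 2                                  # target to approximate

centers = np.linspace(-2.0, 2.0, 9)         # fixed antecedent membership functions
Gamma = fulfilment(x, centers, spread=0.4)

# one-shot (global) least-squares estimate of the singleton consequents b_i
b, *_ = np.linalg.lstsq(Gamma, y, rcond=None)
y_hat = Gamma @ b
```

In a full hybrid scheme this LS solve would alternate with gradient updates of the membership centers and spreads.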

  47. Example – Hybrid Learning • Approximation of a second-order polynomial by a first-order ANFIS model • Membership function for t1 < u < t2 • Model: • Output of the TS model • The model has four free parameters, while three are sufficient to fit the polynomial

  48. Neuro-Fuzzy Networks – Hybrid Learning • To avoid over-parameterization, the basic least-squares criterion can be combined with additional criteria for local fit, or with constraints on the parameter values • Local Least-Squares Estimation • Constrained Estimation • Multi-Objective Optimization

  49. Hybrid Learning - Global LS Estimation

  50. Hybrid Learning - Local LS Estimation • While the global solution gives the minimal prediction error, it may bias the estimates of the consequents as parameters of local models. • If locally relevant model parameters are required, a weighted LS approach applied per rule should be used. • The consequent parameters of the individual rules are estimated independently (result is not influenced by the interactions of the rules) • Larger prediction error is obtained than with global least squares
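The per-rule weighted LS idea can be sketched as follows (the rule centers, spread, and the target y = x² are illustrative assumptions). Each first-order consequent a_i·x + b_i is estimated independently, with the rule's degree of fulfilment as the data weights; the rule centered at 0 then recovers the locally flat trend of x² rather than a globally biased slope:

```python
import numpy as np

x = np.linspace(-2.0, 2.0, 41)
y = x ** 2
centers = np.array([-1.0, 0.0, 1.0])
# unnormalized Gaussian degrees of fulfilment of each rule
mu = np.exp(-0.5 * ((x[:, None] - centers[None, :]) / 0.5) ** 2)

Xe = np.column_stack([x, np.ones_like(x)])   # regressors for a_i * x + b_i
params = []
for i in range(len(centers)):
    W = np.diag(mu[:, i])                    # weight data by the i-th rule's fulfilment
    # weighted least squares, solved per rule: (Xe^T W Xe) p = Xe^T W y
    params.append(np.linalg.solve(Xe.T @ W @ Xe, Xe.T @ W @ y))
params = np.array(params)                    # rows: [a_i, b_i]
# the rule at 0 gets slope ~ 0; the rules at -1 and +1 get the local trends of x^2
```

Global LS would instead trade local fidelity of the individual a_i, b_i for a smaller overall prediction error.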
