
State based methods



  1. State based methods

  2. State based methods • State of the system: the information that describes the system at any given instant of time. For reliability models, each state represents a distinct combination of failed and working modules. • State transitions govern the changes of state that occur within a system. For reliability models, as time passes the system moves from state to state as modules fail and are repaired. The state transitions are characterized by probabilities, such as the probability of failure and the probability of repair (→ random processes).

  3. Random process • A random process is a collection of random variables indexed by time. • Example of a random process {Xt}, with time T = {1, 2, 3, …}: let Xi be the result of tossing a die; {Xt} represents the sequence of results of tossing a die. Then P[X1 = 4] = 1/6 and P[X4 = 4 | X2 = 2] = P[X4 = 4] = 1/6 (independent random variables).
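As a quick illustration (not part of the slides), the following Python sketch simulates the die-tossing process with numpy and checks empirically that P[X1 = 4] ≈ 1/6 and that conditioning on X2 = 2 leaves the distribution of X4 unchanged; the sample size and seed are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100_000 realisations of the process {X_t}: column t-1 holds X_t.
n_runs, n_steps = 100_000, 4
X = rng.integers(1, 7, size=(n_runs, n_steps))

# P[X1 = 4] should be close to 1/6.
p_x1_eq_4 = np.mean(X[:, 0] == 4)

# P[X4 = 4 | X2 = 2] should also be close to 1/6 (independence).
given_x2_eq_2 = X[X[:, 1] == 2]
p_x4_given_x2 = np.mean(given_x2_eq_2[:, 3] == 4)

print(f"P[X1=4]        ~ {p_x1_eq_4:.3f}  (exact: {1/6:.3f})")
print(f"P[X4=4 | X2=2] ~ {p_x4_given_x2:.3f}  (exact: {1/6:.3f})")
```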

  4. Random process • Discrete-time random process: the set of time points defined for the random process is finite or countable (e.g., the integers). • Continuous-time random process: the set of time points defined for the random process is uncountable (e.g., the real numbers). • Example: let Xt be the number of fault arrivals in a system up to time t. If t ranges over the real numbers, Xt is a continuous-time random process.

  5. Random process state space • State space S of a random process {Xt}: the set of all possible values the process can take, S = {y : Xt = y, for some t}. • If X is a random process that models a system, then the state space S of X represents the set of all possible configurations of the system.

  6. Random process state space • Discrete-state random process X: the state space of X is finite or countable (e.g., S = {1, 2, 3, …}). N denotes the number of states (if finite). • Continuous-state random process X: the state space of X is infinite and uncountable (e.g., S = the set of real numbers). • Example: let X be a random process that represents the number of bad packets received over a network; X is a discrete-state random process. Let X be a random process that represents the voltage on a telephone line; X is a continuous-state random process.

  7. Markov process • Let {Xt, t >= 0} be a random process. A special type of random process is the Markov process. • Basic assumption underlying a Markov process: the probability of a state transition depends only on the current state. For each t, for any pair of states i and j, and for any sequence of states k0, …, kt-1: P[Xt+1 = j | Xt = i, Xt-1 = kt-1, …, X0 = k0] = P[Xt+1 = j | Xt = i], i.e., the future behaviour is independent of past values (memoryless property).

  8. Markov process: steady-state transition probabilities • Let {Xt, t >= 0} be a Markov process. The Markov process X has steady-state transition probabilities if, for any pair of states i, j, the probability of a transition from state i to state j does not depend on the time: P[Xt+1 = j | Xt = i] is the same for every t. This probability is denoted pij.

  9. Markov chain • A Markov chain is a Markov process X with a discrete state space S. • A Markov chain is homogeneous if X has steady-state transition probabilities; non-homogeneous otherwise. • A Markov chain is a finite-state Markov chain if the number of states N is finite. We consider discrete-time homogeneous Markov chains (DTMC).

  10. Transition probability matrix • If a Markov chain is finite-state, we can define the transition probability matrix P (N x N), where pij = probability of moving from state i to state j in one step. Row i of matrix P: the probabilities of making a transition starting from state i. Column j of matrix P: the probabilities of making a transition from any state to state j.
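A minimal numpy sketch of these ideas, using a made-up 3-state matrix (the values are illustrative, not from the slides): each row of P is a probability distribution over the next state, and one step of the chain samples the next state from the row of the current state.

```python
import numpy as np

# Hypothetical 3-state transition matrix: entry P[i, j] = p_ij.
P = np.array([
    [0.90, 0.08, 0.02],   # row 0: transitions out of state 0
    [0.30, 0.65, 0.05],   # row 1
    [0.00, 0.50, 0.50],   # row 2
])

# Every row of P is a probability distribution over the next state.
assert np.allclose(P.sum(axis=1), 1.0)

rng = np.random.default_rng(1)

def step(state: int) -> int:
    """One DTMC transition: sample the next state from row `state` of P."""
    return rng.choice(len(P), p=P[state])

# Simulate a short trajectory starting from state 0.
trajectory = [0]
for _ in range(10):
    trajectory.append(step(trajectory[-1]))
print(trajectory)
```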

  11. Transition probability after n time steps • THEOREM (generalization of the steady-state transition probabilities): for any i, j in S and for any n > 0, the probability P[Xt+n = j | Xt = i] does not depend on t. • Definition: steady-state transition probability after n time steps, pij(n) = P[Xt+n = j | Xt = i]. • Definition: transition matrix after n time steps, P(n) = [pij(n)].

  12. Transition probability after n time steps • Definition: P(n) = P^n, the n-th power of the one-step matrix P. • Properties: 0 <= pij(n) <= 1 and Σj=1,…,N pij(n) = 1 for every state i (each row sums to one).
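By the Chapman-Kolmogorov equations the n-step matrix coincides with the n-th matrix power of P, so it can be computed directly; a sketch reusing the same made-up matrix as above:

```python
import numpy as np

P = np.array([
    [0.90, 0.08, 0.02],
    [0.30, 0.65, 0.05],
    [0.00, 0.50, 0.50],
])

# n-step transition matrix: P(n) = P^n.
n = 5
P_n = np.linalg.matrix_power(P, n)

# Each row of P(n) is still a probability distribution over the states.
assert np.allclose(P_n.sum(axis=1), 1.0)

print(P_n)   # P_n[i, j] = probability of being in state j after n steps from state i
```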

  13. An example • [State-transition diagram: states 1, 2, 3 with transition probabilities Parr, Pidle, Pbusy, Pcom, Pr, Pfi, Pfb, Pff.] A computer is idle, busy (working) or failed. When the computer is idle, jobs arrive with a given probability; when it is idle or busy it may fail with probability Pfi or Pfb, respectively. Xt: state of the computer at time t. S = {1, 2, 3}: 1 = computer idle, 2 = computer working, 3 = computer failed.
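A sketch of this idle/working/failed chain as a transition matrix; the numeric values of Parr, Pcom, Pfi, Pfb and Pr below are placeholders, since the slide does not give them, and the self-loop probabilities (Pidle, Pbusy, Pff) are obtained as the complement of each row.

```python
import numpy as np

# Placeholder values (the slide does not give numbers).
P_arr, P_com, P_fi, P_fb, P_r = 0.30, 0.20, 0.01, 0.05, 0.10

# States: 0 = idle, 1 = working, 2 = failed (numbered 1, 2, 3 in the slide).
P = np.array([
    [1 - P_arr - P_fi, P_arr,            P_fi    ],  # idle:    stay idle (Pidle), job arrival, failure
    [P_com,            1 - P_com - P_fb, P_fb    ],  # working: completion, stay busy (Pbusy), failure
    [P_r,              0.0,              1 - P_r ],  # failed:  repair, -, stay failed (Pff)
])
assert np.allclose(P.sum(axis=1), 1.0)   # each row is a probability distribution
```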

  14. An example • Multiprocessor system with 2 processors and 3 shared memories. The system is operational if at least one processor and one memory are operational. λm: failure rate of a memory; λp: failure rate of a processor. X: random process that represents the number of working memories and the number of working processors at time t. Given a state (i, j): i is the number of operational memories, j is the number of operational processors. S = {(3,2), (3,1), (3,0), (2,2), (2,1), (2,0), (1,2), (1,1), (1,0), (0,2), (0,1)}

  15. Markov chain • λm: failure rate of a memory; λp: failure rate of a processor. Example transition: (3,2) -> (2,2), failure of one memory. The states (3,0), (2,0), (1,0), (0,2), (0,1) are absorbing states (the system has failed).
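For reliability (failures only, no repair) the chain is analysed transiently: the system is reliable at time t if it has not yet reached an absorbing state. The sketch below shows the standard computation p(t) = p(0)·exp(Qt) on a deliberately simplified sub-model with only the two processors (states = number of working processors, each failing at an assumed rate λp), not the slide's full 11-state chain.

```python
import numpy as np
from scipy.linalg import expm

lam_p = 1e-3          # assumed processor failure rate (per hour)

# States: 2, 1, 0 working processors; state 0 is absorbing.
# Generator Q: Q[i, j] = rate of going from state i to state j; rows sum to 0.
Q = np.array([
    [-2 * lam_p, 2 * lam_p, 0.0   ],   # 2 working: either processor fails
    [0.0,        -lam_p,    lam_p ],   # 1 working
    [0.0,        0.0,       0.0   ],   # 0 working: absorbing
])

def reliability(t: float) -> float:
    """P(system has not entered the absorbing state by time t), starting with 2 working CPUs."""
    p0 = np.array([1.0, 0.0, 0.0])   # initial distribution
    pt = p0 @ expm(Q * t)            # transient state probabilities at time t
    return pt[0] + pt[1]             # any state with at least one working processor

print(reliability(1000.0))
```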

  16. Availability modeling • Assume that faulty components are replaced, and evaluate the probability that the system is operational at time t. • Constant repair rate μ (expected number of repairs in a unit of time). • Repair strategy: only one processor or one memory at a time can be substituted. • The behaviour of the components (with respect to being operational or failed) is not independent: it depends on whether or not other components are in a failed state. If failures and repairs are exponentially distributed, we can use a Markov chain model.

  17. An example • Markov chain modelling the 2-processor, 3-shared-memory system with repair. λm: failure rate of a memory; λp: failure rate of a processor; μm: repair rate of a memory; μp: repair rate of a processor.

  18. Example (cont.) • Repair strategy: only one component can be substituted at a time, with processors having higher priority. • Consequently, the arcs labelled μm (memory repair) are excluded in the states where a processor has failed.
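With repair the chain is irreducible and the long-run availability is a steady-state quantity, obtained by solving πQ = 0 with Σπ = 1. A sketch of that computation, again on a simplified processors-only model (single repair facility, assumed rates λp and μp) rather than the full processor/memory chain:

```python
import numpy as np

lam_p, mu_p = 1e-3, 1e-1     # assumed failure and repair rates

# States: 2, 1, 0 working processors; one processor repaired at a time.
Q = np.array([
    [-2 * lam_p, 2 * lam_p,       0.0   ],
    [mu_p,       -(lam_p + mu_p), lam_p ],
    [0.0,        mu_p,            -mu_p ],
])

# Steady-state distribution pi solves pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(len(Q))])
b = np.concatenate([np.zeros(len(Q)), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

availability = pi[0] + pi[1]   # operational if at least one processor works
print(f"steady-state availability = {availability:.6f}")
```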

  19. Stochastic Petri nets

  20. Stochastic Petri nets • A Markov chain grows very quickly with the dimension of the system. • Petri nets: a high-level specification formalism. • Markovian stochastic Petri nets: temporal and probabilistic information is added to the model; the approach aims at an equivalence between SPN and Markov chains, based on the idea of associating an exponentially distributed random delay with the PN transitions (1980). • Non-Markovian stochastic Petri nets: non-exponentially distributed random delays. • Automated tools supporting SPN are available for modelling and evaluation.

  21. Petri nets • Place • Transition • Token • Arcs, with weights W • Initial marking M0 • For a transition t: the preset •t is the set of its input places, the postset t• the set of its output places. t is enabled in a marking M if M(p) >= W(p, t) for every place p in •t; in that case we write M[t>.

  22. Transition firing • Firing rule: when an enabled transition t fires in marking M0, the new marking M1 is M1(p) = M0(p) - W(p, t) + W(t, p) for every place p. We write M0[t>M1. Example: M0 = (2,3,0,0), M1 = (1,0,2,1).
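A sketch of the enabling test and the firing rule in Python; the data layout is my own, and the arc weights are chosen so that the slide's markings M0 = (2,3,0,0) and M1 = (1,0,2,1) are reproduced.

```python
import numpy as np

# A net with 4 places and one transition t, represented by two weight vectors:
# W_in[p]  = W(p, t)  tokens consumed from place p when t fires
# W_out[p] = W(t, p)  tokens produced into place p when t fires
W_in  = np.array([1, 3, 0, 0])
W_out = np.array([0, 0, 2, 1])

def enabled(M: np.ndarray) -> bool:
    """t is enabled in M if every input place holds at least W(p, t) tokens."""
    return bool(np.all(M >= W_in))

def fire(M: np.ndarray) -> np.ndarray:
    """Firing rule: M'(p) = M(p) - W(p, t) + W(t, p)."""
    assert enabled(M)
    return M - W_in + W_out

M0 = np.array([2, 3, 0, 0])
M1 = fire(M0)
print(M1)   # [1 0 2 1], the marking reached in the slide
```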

  23. Reachable markings • RN(M): the set of markings reachable from M. From RN(M0) we can build the reachability graph. [Reachability graph: m0 = (2,3,0,0) --t--> m1 = (1,0,2,1), …]

  24. Transition sequence • Let σ = t1 t2 … tn. If M[t1>M1, M1[t2>M2, …, Mn-1[tn>Mn, then σ is a transition sequence and we write M[σ>Mn. • Analysis: which markings are reachable, which transitions are never enabled, which conditions hold on the reachable markings, …

  25. Producer/Consumer example • [Petri net with places p1, p2 (Producer), free, busy (Buffer, 2 slots), c1, c2 (Consumer) and transitions produce, deposit, take, consume.]
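A sketch of this net in Python, together with a breadth-first construction of its reachable markings; the place and transition names follow the slide's figure, but the exact arc structure is my reconstruction of the standard producer/consumer net.

```python
from collections import deque

# Places: p1, p2 (producer), free, busy (2-slot buffer), c1, c2 (consumer).
PLACES = ("p1", "p2", "free", "busy", "c1", "c2")

# Each transition: (tokens consumed, tokens produced), as {place: weight} dicts.
TRANSITIONS = {
    "produce": ({"p1": 1},            {"p2": 1}),
    "deposit": ({"p2": 1, "free": 1}, {"p1": 1, "busy": 1}),
    "take":    ({"busy": 1, "c1": 1}, {"free": 1, "c2": 1}),
    "consume": ({"c2": 1},            {"c1": 1}),
}

M0 = {"p1": 1, "p2": 0, "free": 2, "busy": 0, "c1": 1, "c2": 0}

def enabled(M, t):
    pre, _ = TRANSITIONS[t]
    return all(M[p] >= w for p, w in pre.items())

def fire(M, t):
    pre, post = TRANSITIONS[t]
    M = dict(M)
    for p, w in pre.items():
        M[p] -= w
    for p, w in post.items():
        M[p] += w
    return M

def reachability_graph(M0):
    """Breadth-first exploration of all markings reachable from M0."""
    key = lambda M: tuple(M[p] for p in PLACES)
    seen, queue, edges = {key(M0)}, deque([M0]), []
    while queue:
        M = queue.popleft()
        for t in TRANSITIONS:
            if enabled(M, t):
                M2 = fire(M, t)
                edges.append((key(M), t, key(M2)))
                if key(M2) not in seen:
                    seen.add(key(M2))
                    queue.append(M2)
    return seen, edges

markings, edges = reachability_graph(M0)
print(len(markings), "reachable markings")
```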

  26. System description with Petri nets • Place: • a system component (one for every component) • a class of system components (CPU , Memory, ..) • components in a given state (CPU, FaultyCPU, ..) • ……. • Token: • number of components (number of CPUs) • Occurrence of an event (fault, ..) • ….. • Transitions: • Occurrence of an event (Repair, CPUFaulty, …) • Execution of a computation step • …

  27. Timed transitions • Timed transition: an activity that needs some time to be executed. • When a transition is enabled, a local timer is set to a delay d; the timer is decreased as time passes; when the timer elapses, the transition fires and removes tokens from its input places; if the transition is disabled before firing, the timer stops. • Handling of the timer (two alternatives): Continue: the timer keeps its value and resumes when the transition is enabled again; Restart: the timer is reset.

  28. Sequence of timed transitions • (tk1, Tk1) … (tkn, Tkn), where tk1 <= tk2 <= … <= tkn. • [tki, tki+1) is the period of time between the firing of two consecutive transitions, i.e., the period of time the net stays in a marking. • STOCHASTIC PETRI NET: when the delay d of a timed transition is a random variable.
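A tiny discrete-event sketch of timed transitions with exponentially distributed delays; the two-state failure/repair net and its rates are illustrative, not from the slides (and, because exponential delays are memoryless, the continue and restart policies coincide here).

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative SPN: one token alternating between "up" and "down".
# Timed transitions: fail (up -> down, rate lam) and repair (down -> up, rate mu).
lam, mu = 0.01, 0.5

def simulate(t_end: float):
    t, state, trace = 0.0, "up", []
    while t < t_end:
        rate = lam if state == "up" else mu
        t += rng.exponential(1.0 / rate)   # random firing delay of the enabled transition
        trace.append((t, state))           # state held until this firing time
        state = "down" if state == "up" else "up"
    return trace

trace = simulate(100_000.0)
prev_t = uptime = 0.0
for t, state in trace:
    if state == "up":
        uptime += t - prev_t
    prev_t = t
print(f"fraction of time up ~ {uptime / prev_t:.3f}   (exact mu/(lam+mu) = {mu / (lam + mu):.3f})")
```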

  29. Stochastic Petri nets (SPN) – reachability graph A timed transition T enabled at time t, with d the random value for the transition delay, fires at time t+d if it remains enabled in the interval [t, t+d)

  30. Markov chain • Random process {M(t), t >= 0}, with M(0) = M0 and M(t) the marking at time t. {M(t), t >= 0} is a CTMC (by the memoryless property of exponential distributions).

  31. A redundant system with repair • Two identical CPUs • Failure of a CPU: exponentially distributed with parameter λ • Fault detection: exponentially distributed with parameter δ • CPU repair: exponentially distributed with parameter μ • [Figures: the SPN, with places healthy, faulty, repair and timed transitions Tf, Td, Tr, and its reachability graph.]

  32. Markov chain properties • Steady-state probability that both processors behave correctly • Steady-state probability of one undetected faulty processor • Steady-state probability that both processors must be repaired • …
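A sketch of the CTMC underlying this SPN and of the three steady-state quantities above. The state counts how many CPUs are healthy, faulty-but-undetected, or under repair; the rates h·λ, f·δ, r·μ assume that each CPU fails, is detected and is repaired independently of the others (infinite-server semantics), which may differ in detail from the slide's net, and the numeric rates are placeholders.

```python
import itertools
import numpy as np

lam, delta, mu = 1e-3, 1.0, 0.1    # assumed failure, detection, repair rates

# A state is (healthy, faulty-undetected, under-repair) with h + f + r = 2 CPUs.
states = [s for s in itertools.product(range(3), repeat=3) if sum(s) == 2]
index = {s: i for i, s in enumerate(states)}
n = len(states)

Q = np.zeros((n, n))
for (h, f, r) in states:
    i = index[(h, f, r)]
    if h > 0:   # a healthy CPU fails
        Q[i, index[(h - 1, f + 1, r)]] += h * lam
    if f > 0:   # an undetected fault is detected
        Q[i, index[(h, f - 1, r + 1)]] += f * delta
    if r > 0:   # a CPU under repair returns to healthy
        Q[i, index[(h + 1, f, r - 1)]] += r * mu
np.fill_diagonal(Q, -Q.sum(axis=1))

# Steady state: pi Q = 0 with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.concatenate([np.zeros(n), [1.0]])
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print("both CPUs healthy      :", pi[index[(2, 0, 0)]])
print("one undetected fault   :", pi[index[(1, 1, 0)]] + pi[index[(0, 1, 1)]])
print("both CPUs under repair :", pi[index[(0, 0, 2)]])
```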
