
Part1 Markov Models for Pattern Recognition – Introduction

Part1 Markov Models for Pattern Recognition – Introduction. CSE717, SPRING 2008, CUBS, Univ at Buffalo. Textbook: Markov Models for Pattern Recognition: From Theory to Applications by Gernot A. Fink, 1st Edition, Springer, Nov 2007.


Presentation Transcript


  1. Part1 Markov Models for Pattern Recognition – Introduction CSE717, SPRING 2008 CUBS, Univ at Buffalo

  2. Textbook • Markov Models for Pattern Recognition: From Theory to Applications by Gernot A. Fink, 1st Edition, Springer, Nov 2007

  3. Textbook • Foundations of Mathematical Statistics • Vector Quantization and Mixture Density Models • Markov Models • Hidden Markov Models (HMMs) • Model formulation • Classic algorithms for HMMs • Application domains of HMMs • n-Gram Models • Systems • Character and handwriting recognition • Speech recognition • Analysis of biological sequences

  4. Preliminary Requirements • Familiarity with Probability Theory and Statistics • Basic concepts in Stochastic Processes

  5. Part 2 Foundations of Probability Theory, Statistics & Stochastic Processes CSE717, SPRING 2008, CUBS, Univ at Buffalo

  6. Coin Toss Problem • Coin toss result X: random variable • head, tail: states • SX = {head, tail}: set of states • Probabilities: Pr(X = head) + Pr(X = tail) = 1; for a fair coin, Pr(X = head) = Pr(X = tail) = 1/2
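The coin toss above can be simulated as a minimal sketch in Python; the function names, the fixed seed, and the sample size are illustrative choices, not part of the slides.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def toss(p_head=0.5):
    """One realization of the random variable X with states {head, tail}."""
    return "head" if random.random() < p_head else "tail"

def empirical_probs(n=100_000, p_head=0.5):
    """Estimate Pr(X=head) and Pr(X=tail) from n independent tosses."""
    heads = sum(toss(p_head) == "head" for _ in range(n))
    return {"head": heads / n, "tail": 1 - heads / n}

probs = empirical_probs()
```

For a fair coin the two empirical probabilities hover near 0.5 and, by construction, add up to 1 — matching the requirement on the next slide that probabilities over all states sum to 1.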

  7. Discrete Random Variable • A discrete random variable’s states are discrete: natural numbers, integers, etc. • Described by the probabilities of its states Pr(X = s1), Pr(X = s2), …, where s1, s2, … are the discrete states (possible values of X) • The probabilities over all states add up to 1: Σi Pr(X = si) = 1

  8. Continuous Random Variable • A continuous random variable’s states are continuous: real numbers, etc. • Described by its probability density function (p.d.f.) pX(s) • The probability of a < X < b is obtained by the integral Pr(a < X < b) = ∫ab pX(s) ds • The integral of pX(s) from –∞ to ∞ equals 1
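The integral Pr(a < X < b) = ∫ab pX(s) ds can be checked numerically. The sketch below uses the standard normal density as an example p.d.f. (the density choice, step count, and helper names are assumptions for illustration); `math.erf` gives a closed form to compare against.

```python
import math

def normal_pdf(s, mu=0.0, sigma=1.0):
    """p.d.f. of N(mu, sigma^2) evaluated at s."""
    return math.exp(-((s - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def prob_between(a, b, pdf=normal_pdf, steps=100_000):
    """Midpoint Riemann-sum approximation of the integral of pdf from a to b."""
    h = (b - a) / steps
    return sum(pdf(a + (i + 0.5) * h) for i in range(steps)) * h

# Closed form for Pr(-1 < X < 1) under N(0, 1), via the error function:
exact = 0.5 * (math.erf(1 / math.sqrt(2)) - math.erf(-1 / math.sqrt(2)))
approx = prob_between(-1.0, 1.0)
```

The numerical integral agrees with the error-function value, and integrating over a wide interval such as (−10, 10) returns essentially 1, illustrating the normalization property of a p.d.f.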

  9. Joint Probability and Joint p.d.f. • Joint probability of discrete random variables: Pr(X = s, Y = t) • Joint p.d.f. of continuous random variables: pX,Y(s, t) • Independence condition: Pr(X = s, Y = t) = Pr(X = s)·Pr(Y = t) in the discrete case, pX,Y(s, t) = pX(s)·pY(t) in the continuous case

  10. Conditional Probability and p.d.f. • Conditional probability of discrete random variables: Pr(X = s | Y = t) = Pr(X = s, Y = t) / Pr(Y = t) • Conditional p.d.f. for continuous random variables: pX|Y(s | t) = pX,Y(s, t) / pY(t)

  11. Statistics: Expected Value and Variance • For a discrete random variable: E[X] = Σs s·Pr(X = s), Var[X] = E[(X – E[X])2] = Σs (s – E[X])2·Pr(X = s) • For a continuous random variable: E[X] = ∫ s·pX(s) ds, Var[X] = ∫ (s – E[X])2·pX(s) ds
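The discrete formulas for expected value and variance can be computed directly from a probability table. The fair six-sided die below is an illustrative example, not from the slides; exact fractions avoid rounding noise.

```python
from fractions import Fraction

def expected_value(pmf):
    """E[X] = sum over states s of s * Pr(X = s)."""
    return sum(s * p for s, p in pmf.items())

def variance(pmf):
    """Var[X] = sum over states s of (s - E[X])^2 * Pr(X = s)."""
    mu = expected_value(pmf)
    return sum((s - mu) ** 2 * p for s, p in pmf.items())

die = {s: Fraction(1, 6) for s in range(1, 7)}  # fair six-sided die
mean = expected_value(die)   # 7/2
var = variance(die)          # 35/12
```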

  12. Normal Distribution of a Single Random Variable • Notation: X ~ N(μ, σ2) • p.d.f.: pX(s) = (1 / √(2πσ2)) · exp(–(s – μ)2 / (2σ2)) • Expected value: E[X] = μ • Variance: Var[X] = σ2
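That E[X] = μ and Var[X] = σ² for X ~ N(μ, σ²) can be checked by sampling; the particular μ, σ, seed, and sample size below are arbitrary illustrative choices.

```python
import random
import statistics

random.seed(0)  # reproducible run
mu, sigma = 2.0, 3.0  # X ~ N(2, 9)

# Draw i.i.d. samples and compare sample statistics to the stated parameters.
samples = [random.gauss(mu, sigma) for _ in range(200_000)]
sample_mean = statistics.fmean(samples)       # should be close to mu = 2
sample_var = statistics.variance(samples)     # should be close to sigma^2 = 9
```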

  13. Stochastic Process • A stochastic process is a time series of random variables {Xt} • Xt: random variable • t: time stamp • Examples: stock market prices, audio signals

  14. Causal Process • A stochastic process is causal if it has a finite history, i.e., each Xt depends only on the states that precede it • A causal process can be represented by the conditional probabilities Pr(xt | x1, …, xt–1)

  15. Stationary Process • A stochastic process is stationary if the joint probability at fixed times is the same under any time shift, i.e., for any n and any shift τ, Pr(xt1, …, xtn) = Pr(xt1+τ, …, xtn+τ) • A process that is stationary in this sense is sometimes referred to as strictly stationary, in contrast with weak or wide-sense stationarity

  16. Gaussian White Noise • White noise: the Xt obey an independent identical distribution (i.i.d.) • Gaussian white noise: the Xt are i.i.d. with Xt ~ N(μ, σ2)
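Gaussian white noise is easy to generate and test for its defining properties; the sketch below (parameters, seed, and sample size are illustrative assumptions) draws i.i.d. N(0, 1) samples and checks that consecutive samples are uncorrelated.

```python
import random

random.seed(1)  # reproducible run
n = 100_000

# i.i.d. samples from N(0, 1): Gaussian white noise.
noise = [random.gauss(0.0, 1.0) for _ in range(n)]

mean = sum(noise) / n
# Lag-1 sample autocovariance; independence implies it should be near 0.
autocov1 = sum((noise[i] - mean) * (noise[i + 1] - mean) for i in range(n - 1)) / (n - 1)
```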

  17. Gaussian White Noise is a Stationary Process • Proof sketch: for any n and any time shift τ, independence factorizes the joint probability into identical N(μ, σ2) marginals, so Pr(xt1, …, xtn) = Pr(xt1+τ, …, xtn+τ)

  18. Temperature • Q1: Is the temperature within a day a stationary process?

  19. Markov Chains • A causal process is a Markov chain of order k if for any x1, …, xt: Pr(xt | x1, …, xt–1) = Pr(xt | xt–k, …, xt–1) • k is the order of the Markov chain • First-order Markov chain: Pr(xt | xt–1) • Second-order Markov chain: Pr(xt | xt–2, xt–1)

  20. Homogeneous Markov Chains • A k-th order Markov chain is homogeneous if the state transition probability is the same over time, i.e., Pr(xt | xt–k, …, xt–1) does not depend on t • Q2: Does a homogeneous Markov chain imply a stationary process?

  21. State Transition in Homogeneous Markov Chains • Suppose {Xt} is a k-th order homogeneous Markov chain and S is the set of all possible states (values) of xt; then for any k+1 states x0, x1, …, xk in S, the state transition probability Pr(Xt = xk | Xt–k = x0, …, Xt–1 = xk–1) can be abbreviated to Pr(xk | x0, …, xk–1)

  22. Example of Markov Chain • Two states: ‘Rain’ and ‘Dry’ • Transition probabilities: Pr(‘Rain’|‘Rain’)=0.4, Pr(‘Dry’|‘Rain’)=0.6, Pr(‘Rain’|‘Dry’)=0.2, Pr(‘Dry’|‘Dry’)=0.8

  23. Short-Term Forecast • Initial (say, Wednesday) probabilities: PrWed(‘Rain’)=0.3, PrWed(‘Dry’)=0.7 • What’s the probability of rain on Thursday? PrThur(‘Rain’) = PrWed(‘Rain’)·Pr(‘Rain’|‘Rain’) + PrWed(‘Dry’)·Pr(‘Rain’|‘Dry’) = 0.3·0.4 + 0.7·0.2 = 0.26
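The one-step forecast above can be coded directly from the transition probabilities; the dictionary representation and function names are implementation choices, not from the slides.

```python
# P[current][next] = Pr(next | current), the Rain/Dry chain from the example.
P = {
    "Rain": {"Rain": 0.4, "Dry": 0.6},
    "Dry":  {"Rain": 0.2, "Dry": 0.8},
}

def step(dist, P):
    """Propagate a state distribution one day forward through the chain."""
    return {s: sum(dist[r] * P[r][s] for r in dist) for s in P}

wednesday = {"Rain": 0.3, "Dry": 0.7}
thursday = step(wednesday, P)  # thursday["Rain"] ≈ 0.3*0.4 + 0.7*0.2 = 0.26
```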

  24. Condition of Stationarity • Prt(‘Rain’) = Prt–1(‘Rain’)·Pr(‘Rain’|‘Rain’) + Prt–1(‘Dry’)·Pr(‘Rain’|‘Dry’) = Prt–1(‘Rain’)·0.4 + (1 – Prt–1(‘Rain’))·0.2 = 0.2 + 0.2·Prt–1(‘Rain’) • Setting Prt(‘Rain’) = Prt–1(‘Rain’) gives Prt–1(‘Rain’) = 0.25 and Prt–1(‘Dry’) = 1 – 0.25 = 0.75: the steady-state distribution

  25. Steady-State Analysis • Prt(‘Rain’) = 0.2 + 0.2·Prt–1(‘Rain’) • Prt(‘Rain’) – 0.25 = 0.2·(Prt–1(‘Rain’) – 0.25) • Prt(‘Rain’) = 0.2t–1·(Pr1(‘Rain’) – 0.25) + 0.25 • As t → ∞, Prt(‘Rain’) → 0.25 (converges to the steady-state distribution)
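The convergence to the steady state can be seen by iterating the recursion Prt(‘Rain’) = 0.2 + 0.2·Prt–1(‘Rain’) from different starting points; the function name and iteration count below are illustrative choices.

```python
def iterate_rain(p0, days):
    """Apply Pr_t(Rain) = 0.2 + 0.2 * Pr_{t-1}(Rain) for the given number of days."""
    p = p0
    for _ in range(days):
        p = 0.2 + 0.2 * p
    return p

final_from_wet = iterate_rain(1.0, 30)  # start certain of rain
final_from_dry = iterate_rain(0.0, 30)  # start certain of dry
```

Because the geometric factor 0.2 is less than 1, both starting points are pulled to the same steady-state value 0.25, just as the closed-form solution on the slide predicts.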

  26. Periodic Markov Chain • Two states ‘Rain’ and ‘Dry’ with deterministic transitions: Pr(‘Dry’|‘Rain’)=1, Pr(‘Rain’|‘Dry’)=1 • A periodic Markov chain never converges to a steady state
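The deterministic Rain ↔ Dry chain from this slide has period 2, so its state distribution oscillates instead of converging; the short sketch below (variable names are illustrative) makes that visible.

```python
# Deterministic chain: Pr(Dry|Rain) = 1, Pr(Rain|Dry) = 1.
P = {"Rain": {"Rain": 0.0, "Dry": 1.0}, "Dry": {"Rain": 1.0, "Dry": 0.0}}

def step(dist):
    """One transition of the state distribution."""
    return {s: sum(dist[r] * P[r][s] for r in dist) for s in P}

dist = {"Rain": 1.0, "Dry": 0.0}  # start certain of rain
history = []
for _ in range(4):
    history.append(dist["Rain"])
    dist = step(dist)
# history flips between 1.0 and 0.0 every step: no steady state is reached.
```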
