
Improving the Recognition of Interleaved Activities


Presentation Transcript


  1. Improving the Recognition of Interleaved Activities (Joseph Modayil, Tongxin Bai, Henry Kautz)

  2. Background • With cheaper and more ubiquitous sensors, mobile devices can observe people's activities continuously • This supports many emerging applications, making activity recognition a meaningful research problem

  3. Goal • People often multitask as they perform activities of daily living, switching between many different activities • Learn a model that recognizes which activities are being performed, given a sequence of observations

  4. Tool: Markov Model

  5. Markov Model • The probabilistic description is truncated to the current state and its predecessor: P(q_t | q_{t-1}, ..., q_1) = P(q_t | q_{t-1})

  6. A Sample Markov Model • Consider a simple 3-state Markov model of the weather

  7. A Sample Markov Model • Given that the weather on day 1 is sunny (state 3), what is the probability that the weather for the next 7 days will be "sun-sun-rain-rain-sun-cloudy-sun"?
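The question on this slide can be answered by multiplying transition probabilities along the sequence. A minimal sketch follows; the transition matrix values are illustrative assumptions, since the slide does not reproduce the actual numbers.

```python
# Probability of a weather sequence under a 3-state Markov chain.
# The transition matrix A is an assumed, illustrative example.
import numpy as np

STATES = {"rain": 0, "cloudy": 1, "sun": 2}

# A[i][j] = P(next state = j | current state = i)  (assumed values)
A = np.array([
    [0.4, 0.3, 0.3],   # from rain
    [0.2, 0.6, 0.2],   # from cloudy
    [0.1, 0.1, 0.8],   # from sun
])

def sequence_probability(seq):
    """P(s_2, ..., s_T | s_1) for a first-order Markov chain:
    the product of one-step transition probabilities."""
    prob = 1.0
    for prev, cur in zip(seq, seq[1:]):
        prob *= A[STATES[prev], STATES[cur]]
    return prob

# Day 1 is sunny; the next 7 days are sun-sun-rain-rain-sun-cloudy-sun.
days = ["sun", "sun", "sun", "rain", "rain", "sun", "cloudy", "sun"]
p = sequence_probability(days)  # about 1.5e-4 with the assumed matrix
```

The Markov property is what makes this a simple product: each factor conditions only on the previous day's weather.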

  8. Hidden Markov Models • Markov model: each state corresponds to an observable event • Hidden Markov model: a doubly embedded stochastic process with an underlying stochastic process that is not directly observable (hidden states)

  9. Hidden Markov Models • Characterized by the following: the state transition probability distribution A = {a_ij}; the observation symbol probability distribution B = {b_j(k)} in state j; and the initial state distribution π = {π_i}

  10. Interleaved HMM • Activities: a_t ∈ A • Observations: object readings o_t ∈ O • Transition probabilities: P(a_t | a_{t-1}) • Emission probabilities: P(o_t | a_t)

  11. Interleaved HMM • The probability of the most likely state sequence: max over a_1, ..., a_T of P(a_1, ..., a_T, o_1, ..., o_T)

  12. Interleaved HMM • Hypothesis space: H • For a normal HMM, H = A, with transition probabilities P(a_t | a_{t-1}) and emission probabilities P(o_t | a_t) • Why is this insufficient? A single current activity cannot capture interleaving: when a person resumes a suspended activity, the model has no memory of where that activity left off

  13. Interleaved HMM • Each state consists of the current activity and a record of the last object observed while performing each activity • The state space is S = A × L, where L is a Cartesian product of |A| copies of O • The hypothesis at time t is the state (a_t, l_t)

  14. Interleaved HMM • We denote an element by (a, l), where l_i indicates the last object observed in activity i • The emission probability is then defined using the indicator id(x, y) = 1 if x = y, and 0 otherwise • The transition probability updates the record l with the newly observed object for the new activity
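The interleaved state and its emission model can be sketched in a few lines. The mixture form of the emission (repeat the recorded last object with some probability, otherwise draw from the activity's object distribution) and the parameter `p_repeat` are assumptions for illustration, not the paper's exact parameterization.

```python
# Sketch of the interleaved-HMM state: the current activity plus the last
# object seen in *each* activity. Probability tables are illustrative.
from collections import namedtuple

# last_objects is a tuple with one entry per activity
State = namedtuple("State", ["activity", "last_objects"])

def id_fn(x, y):
    """The indicator id(x, y) from the slide: 1 if x == y, else 0."""
    return 1.0 if x == y else 0.0

def emission_prob(obs, state, p_obj, p_repeat=0.5):
    """P(obs | state): a mixture of repeating the recorded last object of
    the current activity and drawing from that activity's object
    distribution p_obj[activity][object]. (Mixture form is an assumption.)"""
    a = state.activity
    last = state.last_objects[a]
    return p_repeat * id_fn(obs, last) + (1 - p_repeat) * p_obj[a][obs]

def advance(state, new_activity, obs):
    """Successor state: switch to new_activity and record obs as its last
    object; the records for all other activities are preserved."""
    lo = list(state.last_objects)
    lo[new_activity] = obs
    return State(new_activity, tuple(lo))
```

Preserving the per-activity records is exactly what lets the model resume an interrupted activity where it left off.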

  15. Interleaved HMM • The number of parameters is relatively small, but the size of the state space is |A| · |O|^|A|, which grows exponentially in the number of activities • So the state space can NOT be explored completely at each time step • How can this problem be solved?

  16. Interleaved HMM • Use a beam search: define a likelihood update equation over a beam (the most probable states) in the search space • This effectively approximates the full state space at little added complexity
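The beam update described above can be sketched generically. The interfaces `successors`, `trans_prob`, and `emit_prob` are assumed placeholders for the model-specific functions; only the pruning idea is the point here.

```python
# Sketch of a beam-search likelihood update: keep only the K most likely
# states each step instead of the full (exponential) state space.
import heapq

def beam_update(beam, obs, successors, trans_prob, emit_prob, K=100):
    """beam: dict mapping state -> probability mass.
    Returns the updated beam, pruned to the K highest-scoring states."""
    scores = {}
    for state, p in beam.items():
        for nxt in successors(state, obs):
            score = p * trans_prob(state, nxt) * emit_prob(obs, nxt)
            # Sum mass when several beam states reach the same successor.
            scores[nxt] = scores.get(nxt, 0.0) + score
    top = heapq.nlargest(K, scores.items(), key=lambda kv: kv[1])
    return dict(top)
```

Because low-probability states are dropped before the next step, each update costs time proportional to the beam width rather than to |A| · |O|^|A|.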

  17. Defining the best state sequence • The most likely activity sequence is derived by the Viterbi algorithm • We can also estimate the current state of the most likely path from the evidence seen up to time t; this option is desirable in real-time environments
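For reference, here is a minimal Viterbi decoder for a standard HMM; the interleaved model applies the same recursion over its larger, beam-pruned state space. The matrices follow the A, B, π notation from the earlier HMM slide.

```python
# Minimal Viterbi decoder for a discrete HMM with parameters (pi, A, B).
import numpy as np

def viterbi(obs, pi, A, B):
    """obs: list of observation symbol indices.
    pi: initial state distribution, A: transition matrix a_ij,
    B: emission matrix b_j(k). Returns the most likely state sequence."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))        # best path probability ending in state j
    psi = np.zeros((T, N), dtype=int)  # backpointers
    delta[0] = pi * B[:, obs[0]]
    for t in range(1, T):
        for j in range(N):
            scores = delta[t - 1] * A[:, j]
            psi[t, j] = int(np.argmax(scores))
            delta[t, j] = scores[psi[t, j]] * B[j, obs[t]]
    # Backtrack from the most likely final state.
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]
```

The "current state" option on this slide corresponds to reading off argmax of delta[t] at each step instead of backtracking over the whole sequence.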

  18. Results • Lab data: HMM 66%, IHMM 100% • Also evaluated on real-world data collected by Patterson

  19. Question • How does this method compare with a CRF (conditional random field)? Which one performs better?

  20. Conclusion • The IHMM provides a simple but effective way to improve multi-task activity recognition • The model performs well using only a small amount of training data

  21. Thank you!
