
An HDP-HMM for Systems with State Persistence






Presentation Transcript


  1. An HDP-HMM for Systems with State Persistence Emily B. Fox, Erik B. Sudderth, Michael I. Jordan and Alan S. Willsky 25th International Conference on Machine Learning Presented by Lu Ren ECE Dept., Duke University July 11, 2008

  2. Outline • 1. Limitations of the HDP-HMM formulation • temporal persistence of states versus unrealistically fast dynamics • the Bayesian bias towards simpler models is insufficient • sometimes models of varying complexity are averaged effectively (e.g., prediction with the posterior integrated out) • however, speaker diarization (inferring the number of speakers as well as the transitions among speakers) is problematic for the HDP-HMM

  3. Outline • 2. Contribution: • formulate a general solution to state persistence in the HDP-HMM with nonparametric Bayesian inference • allow more flexible, nonparametric emission distributions • develop a blocked Gibbs sampler to jointly resample the state and emission assignments

  4. Background 1. Dirichlet process (DP) 2. hierarchical Dirichlet process (HDP) DP Mixture HDP Mixture
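The DP weights underlying these mixtures can be generated by the stick-breaking (GEM) construction. A minimal NumPy sketch with a fixed truncation level (the function and variable names are illustrative, not from the paper):

```python
import numpy as np

def stick_breaking(gamma, L, rng):
    """Truncated GEM(gamma) stick-breaking weights for a DP."""
    v = rng.beta(1.0, gamma, size=L)                  # stick proportions
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    beta = v * remaining                              # beta_k = v_k * prod_{l<k}(1 - v_l)
    beta[-1] = 1.0 - beta[:-1].sum()                  # fold leftover mass into the last atom
    return beta

rng = np.random.default_rng(0)
beta = stick_breaking(gamma=1.0, L=20, rng=rng)
```

Each draw is a valid probability vector; smaller gamma concentrates mass on fewer atoms.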

  5. Background 3. Chinese restaurant franchise (CRF) An alternative representation via indicator variables Table assignment: Dish assignment: Observation generation: 4. An alternative weak limit approximation: as the truncation level L → ∞, the finite hierarchical mixture model converges in distribution to the HDP
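The weak limit approximation replaces each DP with a finite symmetric Dirichlet of dimension L. A sketch of how the HDP's global weights and per-group distributions could be drawn under this truncation (names and the small numerical floor are my own choices):

```python
import numpy as np

def weak_limit_hdp(gamma, alpha, L, rng):
    """Finite (weak limit) HDP approximation:
    beta ~ Dir(gamma/L, ..., gamma/L), then pi_j ~ Dir(alpha * beta)."""
    beta = rng.dirichlet(np.full(L, gamma / L))
    # a tiny floor guards against numerically-zero concentration entries
    pi = np.array([rng.dirichlet(np.maximum(alpha * beta, 1e-12))
                   for _ in range(L)])
    return beta, pi

rng = np.random.default_rng(0)
beta, pi = weak_limit_hdp(gamma=3.0, alpha=5.0, L=6, rng=rng)
```

As L grows, these finite draws converge in distribution to the HDP's global and group-level measures.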

  6. Sticky HDP-HMM 1. Problem of the HDP-HMM • sampling every transition distribution from the same DP prior yields similar transition distributions for all states • allows state sequences with unrealistically fast dynamics Example: dividing an observation block into two small-variance states with slightly different means

  7. Sticky HDP-HMM 2. Potential issues: • redundant states make it harder to explain the observations • slow mixing rates (an alternating pattern reinforced by the properties of the CRF) • poor predictive performance with redundant states for high-dimensional observations 3. Proposed solution: • a positive self-transition parameter κ increases the prior probability of self-transitions • when κ = 0, the original HDP-HMM is recovered.
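In the sticky construction, each transition row receives extra prior mass κ on its own self-transition, and setting κ = 0 recovers the original HDP-HMM. A hedged sketch of sampling such rows, assuming a fixed global weight vector (function name and numerical floor are illustrative):

```python
import numpy as np

def sticky_rows(beta, alpha, kappa, rng):
    """pi_j ~ Dir(alpha * beta + kappa * delta_j); kappa > 0 boosts the
    self-transition prior mass, kappa = 0 recovers the plain HDP-HMM."""
    L = len(beta)
    conc = alpha * beta[None, :] + kappa * np.eye(L)
    return np.array([rng.dirichlet(np.maximum(c, 1e-12)) for c in conc])

rng = np.random.default_rng(1)
beta = np.full(4, 0.25)            # uniform global weights for illustration
pi_sticky = sticky_rows(beta, alpha=4.0, kappa=40.0, rng=rng)
pi_plain = sticky_rows(beta, alpha=4.0, kappa=0.0, rng=rng)
```

Comparing the diagonals of the two matrices shows how κ biases the prior towards state persistence.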

  8. Sticky HDP-HMM 4. Graphical Model representation: CRF with loyal customers

  9. Model Sampling 5. Sampling methods: √ A CRF with Loyal Customers Each restaurant has a specialty dish Children are more likely to eat in the same restaurant as their parent, and also to eat the specialty dish; this keeps many generations eating in the same restaurant One indicator represents the considered dish, an override variable determines whether the specialty dish is served instead, and another indicator represents the served dish

  10. Model Sampling √ Sampling via Direct Assignments Sampling the state assignments and global weights requires sampling m_jk (the number of tables in restaurant j serving dish k) and the override variables
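The table counts can be simulated sequentially from the CRF's seating rule: customer i opens a new table with probability c/(c + i), where c stands in for the relevant concentration parameter. A small illustrative sketch (not the paper's exact sampler):

```python
import numpy as np

def sample_num_tables(n, concentration, rng):
    """Number of tables after n customers in one CRF restaurant:
    customer i (0-indexed) opens a new table w.p. c / (c + i)."""
    if n == 0:
        return 0
    i = np.arange(1, n)
    new_table = rng.random(n - 1) < concentration / (concentration + i)
    return int(1 + new_table.sum())  # the first customer always opens a table

rng = np.random.default_rng(0)
m = sample_num_tables(10, 1.0, rng)
```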

  11. Model Sampling The table counts are simulated from a DP with the appropriate concentration parameter Sampling Hyper-parameters Place gamma priors on γ and (α + κ), and a Beta prior on the self-transition proportion ρ = κ/(α + κ) √ Blocked Sampling of State Sequences The direct assignment sampler exhibits slow mixing rates, since global state-sequence changes are forced to occur coordinate by coordinate; this can cause two temporally separated segments of a given state to be grouped into two distinct states

  12. Model Sampling A variant of the HMM forward-backward procedure Requires the weak limit approximation of the DP The posterior distributions of the transition and emission parameters are used to: • block sample the state sequence: • a. compute backward messages • b. sample the states forward in time
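Steps a and b above might be sketched as follows for a truncated set of Gaussian states, with the transition matrix and emission parameters assumed known for illustration (this is a minimal sketch, not the paper's full blocked Gibbs sampler):

```python
import numpy as np

def gauss_pdf(y, mu, var):
    return np.exp(-0.5 * (y - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def block_sample_states(y, pi, mu, var, rng):
    """(a) compute backward messages, (b) sample the states forward in one block."""
    T, L = len(y), pi.shape[0]
    lik = gauss_pdf(y[:, None], mu[None, :], var[None, :])  # (T, L) likelihoods
    msg = np.ones((T, L))  # msg[t, k]: backward message into time t for state k
    for t in range(T - 2, -1, -1):
        m = pi @ (lik[t + 1] * msg[t + 1])
        msg[t] = m / m.sum()  # normalize for numerical stability
    z = np.empty(T, dtype=int)
    p = lik[0] * msg[0]  # assuming a uniform initial state distribution
    z[0] = rng.choice(L, p=p / p.sum())
    for t in range(1, T):
        p = pi[z[t - 1]] * lik[t] * msg[t]
        z[t] = rng.choice(L, p=p / p.sum())
    return z

# toy usage: two well-separated states
rng = np.random.default_rng(0)
pi = np.array([[0.9, 0.1], [0.1, 0.9]])
mu = np.array([-50.0, 50.0])
var = np.array([1.0, 1.0])
y = np.array([-50.0] * 5 + [50.0] * 5)
z = block_sample_states(y, pi, mu, var, rng)
```

Because the whole sequence is drawn jointly given the messages, temporally separated segments of the same state are not forced apart coordinate by coordinate.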

  13. Multimodal Emission Distributions • approximate each emission using an infinite DP mixture of Gaussians • the bias towards self-transitions allows us to distinguish between the underlying HDP-HMM states (identifiability) An indicator variable indexes the component of the emission density; each state has a unique set of mixture weights over its components
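Generating an observation under such a state-specific mixture could look like the following truncated sketch (psi, mu, and var are placeholders for the per-state mixture weights and Gaussian parameters, not the paper's notation):

```python
import numpy as np

def sample_emission(state, psi, mu, var, rng):
    """Pick mixture component s ~ psi[state], then y ~ N(mu[state, s], var)."""
    s = rng.choice(psi.shape[1], p=psi[state])
    return rng.normal(mu[state, s], np.sqrt(var)), s

rng = np.random.default_rng(0)
psi = np.array([[0.5, 0.5], [0.5, 0.5]])   # per-state component weights
mu = np.array([[0.0, 10.0], [-7.0, 7.0]])  # component means from slide 17
y, s = sample_emission(0, psi, mu, var=10.0, rng=rng)
```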

  14. Multimodal Emission Distributions • blocked resampling of the state and component assignments Uses weak limit approximations to both the HDP-HMM and the DP emissions The backward message from time t+1 to time t is solely a function of the HDP-HMM state at time t

  15. Experiment Results 1. Synthetic Data • Generated from a three-state Gaussian-emission HMM: • 0.97 self-transition probability; means 50, 0, -50; variances 50, 10, 50 • For the blocked sampler, a truncation level is set Hamming distance between the true and estimated state sequences over 100 iterations, with 200 initializations for the median, 10th, and 90th quantiles. Note: the direct assignment sampler's slower convergence can be attributed to it splitting temporally separated segments of a true state into multiple redundant states.
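Synthetic data with the parameters described above (0.97 self-transitions; means 50, 0, -50; variances 50, 10, 50) can be reproduced with a simple simulator; the function name and sequence length are my own choices:

```python
import numpy as np

def simulate_hmm(T, pi, mu, var, rng):
    """Simulate a state sequence z and Gaussian observations y from an HMM."""
    L = len(mu)
    z = np.empty(T, dtype=int)
    z[0] = rng.integers(L)
    for t in range(1, T):
        z[t] = rng.choice(L, p=pi[z[t - 1]])
    y = rng.normal(mu[z], np.sqrt(var[z]))
    return z, y

rng = np.random.default_rng(0)
pi = np.full((3, 3), 0.015) + 0.955 * np.eye(3)  # 0.97 self-transitions
mu = np.array([50.0, 0.0, -50.0])
var = np.array([50.0, 10.0, 50.0])
z, y = simulate_hmm(1000, pi, mu, var, rng)
```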

  16. Experiment Results Sticky HDP-HMM blocked sampler and direct assignment sampler Original HDP-HMM blocked sampler and direct assignment sampler

  17. Experiment Results • Data generated from a two-state HMM with multimodal emissions • Each state has two Gaussian components with equal weights • Means: (0, 10) and (-7, 7); variance: 10 • Self-transition probability: 0.98 Figure panels: observation sequence; state sequence estimated by the sticky HDP-HMM with a single emission component per state; estimate with infinite Gaussian mixture components; true state sequence

  18. Experiment Results Hamming distance error between true state sequence and the estimated state sequence (blue: median, red: 10th and 90th quantiles). (e): infinite Gaussian emission mixture with sticky HDP-HMM (f): infinite Gaussian emission mixture with HDP-HMM (g): single Gaussian emission component with sticky HDP-HMM (h): single Gaussian emission component with HDP-HMM
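Since the estimated state labels are only identifiable up to a relabeling, the Hamming distance is typically minimized over label permutations before being reported. A small sketch of that computation (names are illustrative):

```python
import itertools
import numpy as np

def min_hamming(true_z, est_z, L):
    """Normalized Hamming distance, minimized over the L! label permutations
    (estimated state labels are arbitrary up to relabeling)."""
    true_z = np.asarray(true_z)
    est_z = np.asarray(est_z)
    best = 1.0
    for perm in itertools.permutations(range(L)):
        mapped = np.array(perm)[est_z]
        best = min(best, float(np.mean(mapped != true_z)))
    return best
```

Brute-force search over permutations is fine for the small state counts here; larger L would call for the Hungarian algorithm instead.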

  19. Experiment Results 2. Speaker Diarization Data Segment an audio recording into speaker-homogeneous regions Averaged 19 MFCCs computed over a 250 ms window every 10 ms A minimum speaker duration of 500 ms is enforced For one meeting, with the sticky HDP-HMM: true state sequence; state sequence estimate

  20. Experiment Results For another meeting (red: incorrect labels): true state sequence; state sequence estimate

  21. Experiment Results DER: Diarization error rate

  22. Experiment Results As a further comparison, the best performance of other methods: Overall DER: 18.37%, best and worst DER: 4.39% and 32.33% Results of sticky HDP-HMM: Overall DER: 19.04%, best and worst DER: 1.26% and 31.42%

  23. Conclusions • Extend HDP-HMM with a separate parameter capturing state persistence • A fully nonparametric treatment of multimodal emissions • Present efficient sampling methods • Results on both synthetic data and a real data set

  24. Thanks!
