
Bayesian Learning for Models of Human Speech Perception


Presentation Transcript


  1. Bayesian Learning for Models of Human Speech Perception Mark Hasegawa-Johnson, University of Illinois at Urbana-Champaign

  2. 1. Review of Psychological Results

  3. 1.A. Independence of Distinctive Feature Errors • Put here: explanation of the Miller & Nicely experiment. Figure schematizing the experimental setup (graphic).
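
A minimal sketch of the independence claim, under assumptions of my own (hypothetical binary features and error rates, none taken from the slides): if each distinctive feature passes through its own noisy channel, the predicted consonant confusion probability is a product of per-feature transmission probabilities, which is the factorization Miller & Nicely examined.

```python
# Sketch only: hypothetical features and flip probabilities, not data from the slides.
import numpy as np

# Hypothetical per-feature error (flip) probabilities at some fixed SNR.
flip = {"voicing": 0.15, "nasality": 0.05}

# Binary feature values for a small toy consonant set.
feats = {"p": {"voicing": 0, "nasality": 0},
         "b": {"voicing": 1, "nasality": 0},
         "m": {"voicing": 1, "nasality": 1}}

def p_confuse(spoken, heard):
    """P(heard | spoken) if each feature is transmitted independently."""
    p = 1.0
    for f, eps in flip.items():
        same = feats[spoken][f] == feats[heard][f]
        p *= (1.0 - eps) if same else eps
    return p

# Predicted confusion matrix: rows = spoken, columns = heard.
# Rows need not sum to 1 here because one feature combination
# (a voiceless nasal) has no consonant in this toy inventory.
consonants = list(feats)
confusion = np.array([[p_confuse(s, h) for h in consonants] for s in consonants])
print(consonants)
print(confusion.round(3))
```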

  4. Put here: Miller & Nicely confusion matrices at −6 dB and −12 dB SNR. Tables.

  5. 1.B. Dependence of Distinctive Feature Acoustics • Put here: explanation of the Volaitis & Miller experiment. Spectrograms showing typical VOT of /p/, /b/, /k/, /g/. Spectrograms (4).
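
As a hedged illustration of why the acoustics of one feature depend on the others: the voicing boundary along the VOT axis is not a single number but shifts with place of articulation (and, in Volaitis & Miller's paradigm, with speaking rate). The boundary values below are rough illustrative figures, not values from the slides.

```python
# Sketch only: hypothetical voicing boundaries (VOT in ms) by place of articulation.
BOUNDARY_MS = {"labial": 25.0, "velar": 40.0}

def classify_voicing(vot_ms, place):
    """Return '+voice' (b, g) or '-voice' (p, k) from VOT, given the place context."""
    return "+voice" if vot_ms < BOUNDARY_MS[place] else "-voice"

# The same 30 ms VOT maps to different voicing decisions in different contexts:
print(classify_voicing(30.0, "labial"))   # '-voice', i.e. /p/
print(classify_voicing(30.0, "velar"))    # '+voice', i.e. /g/
```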

  6. 1.C. Perceptual Magnet Effect
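
One common Bayesian reading of the perceptual magnet effect, sketched here with made-up prototype and variance values (not from the slides): if a listener infers the intended target from a noisy observation under a Gaussian category prior, percepts are pulled toward the category prototype, compressing perceptual distances within the category.

```python
# Sketch only: prototype, category variance, and noise variance are made up.
def perceived(x, proto=1000.0, cat_var=400.0, noise_var=900.0):
    """Posterior mean of the intended target given observation x (e.g., F2 in Hz),
    with a Gaussian category prior N(proto, cat_var) and Gaussian observation noise."""
    w = cat_var / (cat_var + noise_var)     # weight on the observation
    return w * x + (1.0 - w) * proto        # shrinkage toward the prototype

# Two stimuli 100 Hz apart acoustically are perceived less than 100 Hz apart.
print(perceived(1050.0) - perceived(950.0))
```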

  7. 1.D. Redundant Acoustic Correlates • Spectrograms demonstrating redundant acoustic correlates in the production of stops. Spectrogram (1).

  8. 1.E. The Vowel Sequence Illusion • Spectrograms showing vowels and syllable rate, division of vowels into L/H, and the resulting syllable rate. Spectrograms (2).

  9. 2. A Mathematical Model

  10. 2.A. Syllable Detection followed by Explanation • Spectrograms showing syllable detection followed by syllable explanation. Figure includes: spectrogram (from 1.D.), “detected” landmark alignment times (PowerPoint overlay), apparent syllables (PowerPoint overlay).
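
A toy sketch of the "detect, then explain" idea under simple assumptions of my own (energy-based peak picking; all parameter values arbitrary): candidate syllable nuclei are detected as peaks of a smoothed energy envelope, and later stages would then explain each detected landmark in terms of distinctive features.

```python
# Sketch only: a generic energy-based nucleus detector, not the slides' front end.
import numpy as np

def detect_nuclei(signal, sr, frame_ms=10.0, smooth_frames=5):
    """Return frame indices of local maxima of a smoothed log-energy envelope."""
    frame = int(sr * frame_ms / 1000)
    n = len(signal) // frame
    energy = np.array([np.sum(signal[i*frame:(i+1)*frame] ** 2) for i in range(n)])
    log_e = np.log(energy + 1e-10)
    kernel = np.ones(smooth_frames) / smooth_frames
    env = np.convolve(log_e, kernel, mode="same")
    # Local maxima above the median envelope count as candidate nuclei.
    return [i for i in range(1, n - 1)
            if env[i] > env[i-1] and env[i] >= env[i+1] and env[i] > np.median(env)]

# Toy usage: a synthetic 'vowel-like' burst embedded in low-level noise.
sr = 16000
t = np.arange(sr) / sr
sig = 0.01 * np.random.randn(sr)
sig[6000:9000] += 0.5 * np.sin(2 * np.pi * 150 * t[6000:9000])
print(detect_nuclei(sig, sr))
```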

  11. 2.B. Perceptual Space Encodes Distinctive Features • Acoustic-to-perceptual map.
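
A minimal sketch of what such an acoustic-to-perceptual map could look like, with placeholder (random) weights rather than anything learned from the slides' model: a linear transform followed by sigmoids converts an acoustic vector measured near a landmark into posterior probabilities for binary distinctive features.

```python
# Sketch only: random placeholder weights; in the model these would be learned (3.A-3.B).
import numpy as np

rng = np.random.default_rng(0)
N_ACOUSTIC = 12                     # e.g., spectral measurements near a landmark
FEATURES = ["voice", "nasal", "continuant", "labial"]

W = rng.normal(size=(len(FEATURES), N_ACOUSTIC))   # placeholder weights
b = np.zeros(len(FEATURES))

def perceptual_map(x):
    """Map an acoustic vector x to P(feature = +1) for each distinctive feature."""
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

x = rng.normal(size=N_ACOUSTIC)     # a fake acoustic observation
print(dict(zip(FEATURES, perceptual_map(x).round(2))))
```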

  12. 3. A Machine Learning Model

  13. 3.A. ML Learning of an Explicit Perceptual Space • Equations
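
Since the slide's own equations are not reproduced in this transcript, the following is only a generic sketch of maximum-likelihood learning of an explicit perceptual space: each value of a distinctive feature is modeled as a Gaussian in the perceptual space, with mean and covariance estimated in closed form from labeled (here, synthetic) tokens.

```python
# Sketch only: synthetic data and a textbook ML Gaussian fit, not the slide's equations.
import numpy as np

rng = np.random.default_rng(1)
# Synthetic 2-D 'perceptual' points for [-voice] and [+voice] tokens.
minus = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(200, 2))
plus = rng.normal(loc=[2.0, 1.0], scale=1.0, size=(200, 2))

def ml_gaussian(X):
    """Closed-form maximum-likelihood estimates of a Gaussian's mean and covariance."""
    mu = X.mean(axis=0)
    sigma = (X - mu).T @ (X - mu) / len(X)   # ML (not unbiased) covariance
    return mu, sigma

mu_m, cov_m = ml_gaussian(minus)
mu_p, cov_p = ml_gaussian(plus)
print(mu_m.round(2), mu_p.round(2))
```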

  14. 3.B. Discriminative Learning of an Implicit Perceptual Space • Equations
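
Again in place of the slide's own equations, a generic sketch of the discriminative alternative: rather than fitting class-conditional densities, train a classifier for P(feature | acoustics) directly, so the perceptual space is only implicit in the learned decision function. Plain logistic regression on synthetic data stands in for whatever discriminative model the slide intends.

```python
# Sketch only: logistic regression by gradient ascent on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
X = np.vstack([rng.normal([0, 0], 1.0, (200, 2)), rng.normal([2, 1], 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)        # e.g., [-voice] vs [+voice] tokens

w, b, lr = np.zeros(2), 0.0, 0.1
for _ in range(500):                       # gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w += lr * (X.T @ (y - p)) / len(y)
    b += lr * np.mean(y - p)

acc = np.mean((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y)
print("training accuracy:", acc)
```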

  15. 3.C. Bayesian Learning of Syllable Sequences: A Baum-Welch Algorithm for Irregular Sampling • Equations
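
The slide's irregular-sampling variant is not reproduced here; the sketch below shows only the textbook forward-backward recursion (the Baum-Welch E-step) that the title builds on, with toy transition, emission, and observation values.

```python
# Sketch only: standard scaled forward-backward for a discrete HMM, toy parameters.
import numpy as np

A = np.array([[0.7, 0.3],        # state-transition probabilities
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],        # P(observation symbol | state)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])
obs = [0, 1, 1, 0, 1]            # toy observation sequence

def forward_backward(obs, A, B, pi):
    T, N = len(obs), len(pi)
    alpha, beta, c = np.zeros((T, N)), np.zeros((T, N)), np.zeros(T)
    alpha[0] = pi * B[:, obs[0]]
    c[0] = alpha[0].sum(); alpha[0] /= c[0]
    for t in range(1, T):                     # scaled forward pass
        alpha[t] = (alpha[t-1] @ A) * B[:, obs[t]]
        c[t] = alpha[t].sum(); alpha[t] /= c[t]
    beta[-1] = 1.0
    for t in range(T - 2, -1, -1):            # scaled backward pass
        beta[t] = (A @ (B[:, obs[t+1]] * beta[t+1])) / c[t+1]
    gamma = alpha * beta                      # state posteriors P(state_t | obs)
    return gamma / gamma.sum(axis=1, keepdims=True)

print(forward_backward(obs, A, B, pi).round(3))
```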

  16. 3.D. SVM Learning of Syllable Sequences (in which Baum-Welch is a component) • Equations.
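
A minimal sketch of the SVM piece, using scikit-learn on synthetic data: a support vector machine classifies a distinctive feature from acoustic measurements at a landmark. How its outputs are combined with the Baum-Welch component named on the slide is not shown here.

```python
# Sketch only: a generic SVM feature classifier on synthetic data (requires scikit-learn).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
X = np.vstack([rng.normal([0, 0], 1.0, (200, 2)), rng.normal([2, 1], 1.0, (200, 2))])
y = np.array([0] * 200 + [1] * 200)        # e.g., [-voice] vs [+voice]

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))
```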

  17. Conclusions • Proposed: a machine learning model consistent with five psychophysical results. • Characteristics of the model: a front-end processor detects syllable onsets, nuclei, and codas (“acoustic landmarks”); an acoustic-to-perceptual mapping (explicit or implicit) classifies the distinctive-feature content of the onset, nucleus, and coda of each syllable. • Experimental tests are in progress.
