
Speech Recognition Introduction II




Presentation Transcript


  1. Speech Recognition Introduction II. E.M. Bakker, LML Speech Recognition 2008

  2. Speech Recognition • Some Projects and Applications • Speech Recognition Architecture (Recap) • Speech Production • Speech Perception • Signal/Speech (Pre-)processing

  3. Previous Projects • English Accent Recognition Tool (NN) • The Digital Reception Desk • Noise Reduction (Attempt) • Tune Recognition • Say What Robust Speech Recognition • Voice Authentication • ASR on PDA using a Server • Programming by Voice, VoiceXML • Chemical Equations TTS • Emotion Recognition • Speech Recognition using Neural Networks

  4. Tune Identification • FFT • Pitch Information • Parsons code (D. Parsons, The Directory of Tunes and Musical Themes, 1975) • String Matching • Tune Recognition
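
As a concrete illustration of this pipeline, here is a minimal Python sketch of Parsons-code matching. The function names and the Levenshtein matcher are illustrative assumptions, not the project's actual code (the slides show no implementation):

```python
# Illustrative sketch only: encode a pitch contour as Parsons code and
# compare tunes by edit distance (the project's real matcher is not shown).

def parsons_code(pitches):
    """Encode a note sequence as Parsons code: U(p), D(own), R(epeat)."""
    code = "*"  # '*' is the conventional start symbol
    for prev, cur in zip(pitches, pitches[1:]):
        code += "U" if cur > prev else "D" if cur < prev else "R"
    return code

def edit_distance(a, b):
    """Levenshtein distance for approximate matching of Parsons codes."""
    d = [[max(i, j) if i == 0 or j == 0 else 0 for j in range(len(b) + 1)]
         for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            d[i][j] = min(d[i - 1][j] + 1,                           # deletion
                          d[i][j - 1] + 1,                           # insertion
                          d[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return d[len(a)][len(b)]

# "Happy Birthday" opens G G A G C B (MIDI 67 67 69 67 72 71):
print(parsons_code([67, 67, 69, 67, 72, 71]))  # *RUDUD
```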

  5. Speaker Identification • Learning Phase • Special text read by subjects • Sphinx3 cepstral coefficients (vector_vqgen) • FFT-energy features stored as template/codebook • Recognition by dynamic time warping (DTW) • Match against the stored templates (Euclidean distance with threshold) • Vector Quantization (VQ) using the codebook • DTW and VQ combined for recognition
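
A minimal sketch of the DTW template-matching step described above. The feature extraction and the VQ codebook stage are omitted, and the `identify` helper and its threshold are illustrative assumptions:

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping distance between two sequences of feature
    vectors (rows = frames), with Euclidean frame-to-frame distances."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def identify(utterance, templates, threshold):
    """Pick the closest stored template; reject if above the threshold."""
    speaker, dist = min(((s, dtw_distance(utterance, t))
                         for s, t in templates.items()), key=lambda p: p[1])
    return speaker if dist <= threshold else None
```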

  6. Audio Indexing • Several Audio Classes • Car, Explosion, Voices, Wind, Sea, Crowd, etc. • Classical Music, Pop Music, R&B, Trance, Romantic, etc. • Determine features capturing pitch, rhythm, loudness, etc. • Short-time energy • Zero-crossing rates • Level-crossing rates • Spectral energy, formant analysis, etc. • Use Vector Quantization for learning and recognizing the different classes
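
A sketch of the two simplest features on this list, short-time energy and zero-crossing rate. The frame sizes are illustrative assumptions (25 ms frames, 10 ms hop at 16 kHz); the remaining features and the VQ classifier are not shown:

```python
import numpy as np

def frame_features(signal, frame_len=400, hop=160):
    """Per-frame short-time energy and zero-crossing rate
    (400/160 samples = 25 ms frames with a 10 ms hop at 16 kHz)."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len].astype(np.float64)
        energy = np.sum(frame ** 2)                          # short-time energy
        zcr = np.mean(np.abs(np.diff(np.sign(frame))) > 0)   # fraction of sign changes
        feats.append((energy, zcr))
    return np.array(feats)
```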

  7. Bimodal Emotion Recognition • Nicu Sebe (University of Amsterdam, The Netherlands), Erwin M. Bakker (Leiden University, The Netherlands), Ira Cohen (HP Labs, USA), Theo Gevers (University of Amsterdam, The Netherlands), Thomas S. Huang (University of Illinois at Urbana-Champaign, USA) (Sept. 2005)

  8. Emotion from Auditory Cues: Prosody • Prosody is the melody or musical nature of the spoken voice • We are able to differentiate many emotions from prosody alone, e.g. anger, sadness, happiness • A universal and early-acquired skill • Are the neural bases for this ability the same as for differentiating emotion from visual cues?

  9. Bimodal Emotion Recognition: Experiments • Video features • “Connected Vibration” video tracking • Eyebrow position, cheek lifting, mouth opening, etc. • Audio features • “Prosodic features”: ‘Prosody’ ~ the melody of the voice. • logarithm of energy • syllable rate • pitch

  10. Face Tracking • 2D image motions are measured using template matching between frames at different resolutions • 3D motion can be estimated from the 2D motions of many points of the mesh • The recovered motions are represented in terms of magnitudes of facial features • Each feature motion corresponds to a simple deformation of the face
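
The slides do not include the tracker's code; the following is only a generic sketch of the template-matching step (normalized cross-correlation between a patch from the previous frame and the current frame), not the “Connected Vibration” tracker itself:

```python
import numpy as np

def best_match(frame, template):
    """Find where a small template (e.g. an eyebrow patch from the previous
    frame) best matches the current frame, by normalized cross-correlation."""
    th, tw = template.shape
    t = (template - template.mean()) / (template.std() + 1e-8)
    best_score, best_pos = -np.inf, (0, 0)
    for y in range(frame.shape[0] - th + 1):
        for x in range(frame.shape[1] - tw + 1):
            patch = frame[y:y + th, x:x + tw]
            p = (patch - patch.mean()) / (patch.std() + 1e-8)
            score = float(np.mean(t * p))   # correlation coefficient in [-1, 1]
            if score > best_score:
                best_score, best_pos = score, (y, x)
    return best_pos, best_score
```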

  11. [Figure]

  12. Bimodal Database

  13. Applications: Audio Indexing of Broadcast News • Broadcast news offers some unique challenges: • Lexicon: important information in infrequently occurring words • Acoustic Modeling: variations in channel, particularly within the same segment (“in the studio” vs. “on location”) • Language Model: must adapt (“Bush,” “Clinton,” “Bush,” “McCain,” “???”) • Language: multilingual systems? language-independent acoustic modeling?

  14. Content Based Indexing • Language identification • Speech Recognition • Speaker Recognition • Emotion Recognition • Environment Recognition: indoor, outdoor, etc. • Object Recognition: car, plane, gun, footsteps, etc. • …

  15. Meta Data Extraction • Relative location of the speaker? • Who is speaking? • What emotions are expressed? • Which language is spoken? • What is spoken? • What are the keywords? (Indexing) • What is the meaning of the spoken text? • Etc.

  16. Open Source Projects • Sphinx (www.speech.cs.cmu.edu) • ISIP (www.ece.msstate.edu/research/isip/projects/speech) • HTK (htk.eng.cam.ac.uk) • LVCSR Julius (julius.sourceforge.jp) • VoxForge (www.voxforge.org)

  17. Speech Recognition • Some Projects and Applications • Speech Recognition Architecture (Recap) • Speech Production • Speech Perception • Signal/Speech (Pre-)processing

  18. Speech Recognition • [Diagram: Speech Signal → Speech Recognition → Words (“How are you?”)] • Goal: automatically extract the string of words spoken from the speech signal

  19. Recognition Architectures • [Diagram: Input Speech → Acoustic Front-end → Search (driven by Acoustic Models P(A|W) and a Language Model P(W)) → Recognized Utterance] • The signal is converted to a sequence of feature vectors based on spectral and temporal measurements. • Acoustic models represent sub-word units, such as phonemes, as finite-state machines in which states model spectral structure and transitions model temporal structure. • The language model predicts the next set of words and controls which models are hypothesized. • Search is crucial to the system, since many combinations of words must be investigated to find the most probable word sequence; see the decision rule below.
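
The slide's P(A|W) and P(W) combine in the standard maximum a posteriori (Bayes) decision rule, which is what the search component optimizes:

```latex
\hat{W} = \arg\max_{W} P(W \mid A)
        = \arg\max_{W} \frac{P(A \mid W)\,P(W)}{P(A)}
        = \arg\max_{W} P(A \mid W)\,P(W)
```

The denominator P(A) can be dropped because it does not depend on the candidate word sequence W.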

  20. Speech Recognition • [Diagram: Speech Signal → Speech Recognition → Words (“How are you?”)] • Goal: automatically extract the string of words spoken from the speech signal • How is SPEECH produced? • Characteristics of the acoustic signal

  21. Speech Recognition • [Diagram: Speech Signal → Speech Recognition → Words (“How are you?”)] • Goal: automatically extract the string of words spoken from the speech signal • How is SPEECH perceived? => Important features

  22. Speech Signals • The Production of Speech • Models for Speech Production • The Perception of Speech • Frequency, Noise, and Temporal Masking • Phonetics and Phonology • Syntax and Semantics

  23. Human Speech Production • Physiology • Schematic and X-ray Sagittal View • Vocal Cords at Work • Transduction • Spectrogram • Acoustics • Acoustic Theory • Wave Propagation

  24. Sagittal Plane View of the Human Vocal Apparatus

  25. Sagittal Plane View of the Human Vocal Apparatus

  26. Characterization of English Phonemes

  27. Vocal Cords • The Source of Sound

  28. Models for Speech Production

  29. Models for Speech Production

  30. English Phonemes • [Examples: bet, debt, get; pin vs. spin illustrating the allophones of /p/]

  31. The Vowel Space • We can characterize a vowel sound by the locations of the first and second spectral resonances, known as formant frequencies (the F1/F2 chart is not reproduced here; a sketch of formant estimation follows below). • Some voiced sounds, such as diphthongs, are transitional sounds that move from one vowel location to another.
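
As a hedged sketch of how F1 and F2 can be estimated in practice, the classical approach reads formant frequencies off the angles of the LPC poles. The function names and the analysis order are illustrative assumptions, not material from the slides:

```python
import numpy as np

def lpc(frame, order=12):
    """LPC coefficients via the autocorrelation (Yule-Walker) method."""
    n = len(frame)
    r = np.correlate(frame, frame, mode="full")[n - 1:n + order]  # lags 0..order
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])
    return np.concatenate(([1.0], -a))          # A(z) = 1 - sum_k a_k z^-k

def formants(frame, fs=8000, order=12):
    """Estimate formant frequencies from the angles of the LPC poles."""
    roots = np.roots(lpc(frame, order))
    roots = roots[np.imag(roots) > 0]            # keep one of each conjugate pair
    freqs = np.angle(roots) * fs / (2 * np.pi)   # pole angle -> Hz
    return np.sort(freqs[freqs > 90])            # drop near-DC poles
```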

  32. Phonetics: Formant Frequency Ranges

  33. Speech Recognition • [Diagram: Speech Signal → Speech Recognition → Words (“How are you?”)] • Goal: automatically extract the string of words spoken from the speech signal • How is SPEECH perceived?

  34. The Perception of Speech: Sound Pressure • The ear is an extremely sensitive organ: vibrations on the order of angstroms are enough to transduce sound, and it has the largest dynamic range (~140 dB) of any organ in the human body. • The lower portion of the curve is an audiogram (hearing sensitivity); it can vary up to 20 dB across listeners. • 120 dB and above corresponds to a loud pop concert (or standing under a Boeing 747 at takeoff). • Typical ambient office noise is about 55 dB. • Intensity level in dB: L = 10 log10(I/I0), where I0 is the intensity of a just-audible 1 kHz tone.
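
A quick numeric check of the decibel formula above, using the figures from this slide (the helper name `db` is just illustrative):

```python
import math

def db(intensity_ratio):
    """Intensity level in dB relative to the just-audible 1 kHz reference I0."""
    return 10 * math.log10(intensity_ratio)

print(db(2))         # ~3.0 dB: doubling the intensity adds about 3 dB
print(db(10 ** 14))  # 140.0 dB: the ear's ~140 dB range spans a 10^14 intensity ratio
```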

  35. [Figure]

  36. The Perception of Speech: The Ear • Three main sections: outer, middle, and inner ear. • The outer and middle ears reproduce the analog signal (impedance matching); the inner ear transduces the pressure wave into an electrical signal. • The outer ear consists of the externally visible part and the auditory canal; the canal is about 2.5 cm long. • The middle ear consists of the eardrum and three bones (malleus, incus, and stapes). It converts the sound pressure wave into displacement of the oval window (the entrance to the inner ear).

  37. The Perception of Speech: The Ear • The inner ear primarily consists of a fluid-filled tube (the cochlea) which contains the basilar membrane. Fluid movement along the basilar membrane displaces hair cells, which generate electrical signals. • There is a discrete number of hair cells (about 30,000), each tuned to a different frequency. • Place vs. temporal theory: firings of hair cells are processed by two types of neurons (onset chopper units for temporal features and transient chopper units for spectral features).

  38. Perception: Psychoacoustics • Psychoacoustics: the branch of science dealing with hearing and the sensations produced by sounds. • A basic distinction must be made between the perceptual attributes of a sound vs. measurable physical quantities: • Many physical quantities are perceived on a logarithmic scale (e.g. loudness); our perception is often a nonlinear function of the absolute value of the physical quantity being measured (e.g. equal loudness). • Timbre can be used to describe why musical instruments sound different. • What factors contribute to speaker identity?

  39. Perception: Equal Loudness • Just Noticeable Difference (JND): the acoustic value at which 75% of responses judge stimuli to be different (limen). • The perceptual loudness of a sound is specified via its relative intensity above the threshold. A sound's loudness is often defined in terms of how intense a reference 1 kHz tone must be to sound equally loud.

  40. Perception: Non-Linear Frequency Warping (Bark and Mel Scales) • Critical bandwidths: correspond to approximately 1.5 mm spacings along the basilar membrane, suggesting a set of 24 bandpass filters. • Critical band: can be related to a bandpass filter whose frequency response corresponds to the tuning curves of auditory neurons; a frequency range over which two sounds will sound like they are fusing into one. • Bark scale (one common form): b(f) = 13 arctan(0.00076 f) + 3.5 arctan((f/7500)^2) • Mel scale: m(f) = 2595 log10(1 + f/700)
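
In Python, using the common closed forms given above (the slide's original formula images may use slightly different variants):

```python
import numpy as np

def hz_to_bark(f):
    """Zwicker-style Bark mapping."""
    return 13.0 * np.arctan(0.00076 * f) + 3.5 * np.arctan((f / 7500.0) ** 2)

def hz_to_mel(f):
    """Mel mapping as used in MFCC front-ends."""
    return 2595.0 * np.log10(1.0 + f / 700.0)

print(hz_to_bark(1000.0))  # ~8.5 Bark
print(hz_to_mel(1000.0))   # ~1000 mel (the scale is anchored at 1 kHz)
```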

  41. Perception: Bark and Mel Scale • The Bark scale implies a nonlinear frequency mapping

  42. Perception: Bark and Mel Scale • Filter banks used in ASR • The Bark scale implies a nonlinear frequency mapping

  43. Comparison of Bark and Mel Space Scales

  44. Perception: Tone-Masking Noise • Frequency masking: one sound cannot be perceived if another sound close in frequency has a high enough level; the first sound masks the second. • Tone-masking noise: noise with energy E_N (dB) at Bark frequency g masks a tone at Bark frequency b if the tone's energy is below the threshold: T_T(b) = E_N - 6.025 - 0.275 g + S_m(b - g)   (dB SPL), where the spread-of-masking function S_m(b) is given by: S_m(b) = 15.81 + 7.5 (b + 0.474) - 17.5 sqrt(1 + (b + 0.474)^2)   (dB) • Temporal masking: onsets of sounds are masked in the time domain through a similar masking process. • Thresholds are frequency and energy dependent. • Thresholds depend on the nature of the sound as well.

  45. Perception: Noise-Masking Tone • Noise-masking tone: a tone at Bark frequency g with energy E_T (dB) masks noise at Bark frequency b if the noise energy is below the threshold: T_N(b) = E_T - 2.025 - 0.17 g + S_m(b - g)   (dB SPL) • Masking thresholds are commonly referred to as Bark-scale functions of just noticeable differences (JND). • Thresholds are not symmetric. • Thresholds depend on the nature of the noise and the sound.
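
The two thresholds and the spread-of-masking function from these two slides translate directly into code; the function names are illustrative:

```python
import math

def spread_of_masking(delta_bark):
    """S_m(b - g): spread-of-masking function (dB)."""
    d = delta_bark + 0.474
    return 15.81 + 7.5 * d - 17.5 * math.sqrt(1.0 + d * d)

def tone_masking_threshold(E_N, g, b):
    """T_T(b): a tone at Bark b is masked by noise of E_N dB at Bark g
    if the tone's energy falls below this threshold (dB SPL)."""
    return E_N - 6.025 - 0.275 * g + spread_of_masking(b - g)

def noise_masking_threshold(E_T, g, b):
    """T_N(b): noise at Bark b is masked by a tone of E_T dB at Bark g
    if the noise energy falls below this threshold (dB SPL)."""
    return E_T - 2.025 - 0.17 * g + spread_of_masking(b - g)
```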

  46. Masking

  47. Perceptual Noise Weighting • Noise weighting: shaping the spectrum to hide noise introduced by imperfect analysis and modeling techniques (essential in speech coding). • Humans are sensitive to noise introduced in low-energy areas of the spectrum. • Humans tolerate more additive noise when it falls under high-energy areas of the spectrum; the amount of noise tolerated is greater if it is spectrally shaped to match perception. • We can simulate this phenomenon using "bandwidth broadening" (next slide).

  48. Perceptual Noise Weighting • Simple Z-transform interpretation: noise weighting can be implemented by evaluating the Z-transform around a contour closer to the origin in the z-plane: H_nw(z) = H(az). • Used in many speech compression systems (Code-Excited Linear Prediction). • Analysis is performed on bandwidth-broadened speech; synthesis is performed using normal speech. This effectively shapes the noise to fall under the formants; a sketch follows below.
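
Substituting a scaled argument into an all-pole filter amounts to scaling the k-th denominator coefficient by the k-th power of the broadening factor, which pulls every pole toward the origin and widens the formant bandwidths. A minimal sketch, with a toy polynomial and an assumed broadening factor of 0.9 (not values from the slides):

```python
import numpy as np

def broaden(lpc_coeffs, gamma=0.9):
    """Coefficients of A(z/gamma): scale a_k by gamma**k, which moves each
    pole p of 1/A(z) to gamma*p (closer to the origin)."""
    return lpc_coeffs * gamma ** np.arange(len(lpc_coeffs))

a = np.array([1.0, -1.6, 0.81])       # toy A(z) with a pole pair at radius 0.9
print(np.abs(np.roots(a)))            # [0.9, 0.9]
print(np.abs(np.roots(broaden(a))))   # [0.81, 0.81]: bandwidth broadened
```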

  49. Perception: Echo and Delay • Humans are used to hearing their own voice while they speak: real-time feedback (side tone). • When we place headphones over our ears, which dampens this feedback, we tend to speak louder. • Lombard effect: humans speak louder in the presence of ambient noise. • When the side tone is delayed, it interrupts our cognitive processes and degrades our speech. • This effect begins at delays of approximately 250 ms. • Modern telephony systems have been designed to keep delays below this value (e.g. long-distance phone calls routed over satellites). • Digital speech processing systems can introduce large amounts of delay due to non-real-time processing.

  50. Perception: Adaptation • Adaptation refers to changing sensitivity in response to a continued stimulus, and is likely a feature of the mechano-electrical transformation in the cochlea. • Neurons tuned to a frequency where energy is present do not change their firing rate drastically for the next sound. • Additive broadband noise does not significantly change the firing rate for a neuron in the region of a formant. • Visual adaptation: the McGurk effect is an audiovisual illusion which results from combining a face pronouncing a certain syllable with the sound of a different syllable. The illusion is stronger for some combinations than for others: for example, an auditory ‘ba’ combined with a visual ‘ga’ is perceived by some percentage of people as ‘da’, and a larger proportion will perceive an auditory ‘ma’ with a visual ‘ka’ as ‘na’. Some researchers have measured evoked electrical signals matching the “perceived” sound.
