Introduction to Speech Recognition



  1. Introduction to Speech Recognition Preliminary Topics • Overview of Audio Signals • Overview of the interdisciplinary nature of the problem • Review of Digital Signal Processing • Physiology of human sound production and perception

  2. Science of Language • Morphology: Structure of words • Acoustics: Study of sound • Phonology: Classification of linguistic sounds • Semantics: Study of meaning • Pragmatics: How language is used • Phonetics: Speech production and perception. Natural Language Processing draws on all of these fields to engineer practical systems.

  3. Language Components • Phoneme: Smallest discrete unit of sound that distinguishes words (the minimal-pair principle) • Syllable: Acoustic component perceived as a single unit • Morpheme: Smallest linguistic unit with meaning • Word: Speaker-identifiable unit of meaning • Phrase: Sub-message of one or more words • Sentence: Self-contained message built from a sequence of phrases and words

  4. Natural Language Characteristics • Phones are the set of all possible sounds that humans can articulate. Each phone has unique audio signal characteristics. • Each language selects a set of phonemes from the larger set of phones (English ≈ 40). Our hearing is tuned to respond to this smaller set. • Speech is a highly redundant sequence of sounds (phonemes), pitch (prosody), gestures, and expressions that vary with time.

  5. Audio Signal Redundancy • Continuous signal (virtually infinite information) • Sampled • Mac: 44,100 2-byte samples per second (705.6 kbps) • PC: 16,000 2-byte samples per second (256 kbps) • Telephone: 8,000 1-byte samples per second (64 kbps) • Code Excited Linear Prediction (CELP) compression: 8 kbps • Research: 4 kbps, 2.4 kbps • Military applications: 600 bps • Human brain: 50 bps
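The bit rates above follow directly from sample rate × bytes per sample × 8 bits; a quick check (the class and method names are illustrative, not from the slides):

```java
public class BitRates {
    // Bits per second for uncompressed linear PCM:
    // samples per second × bytes per sample × 8 bits per byte.
    static int pcmBps(int samplesPerSec, int bytesPerSample) {
        return samplesPerSec * bytesPerSample * 8;
    }

    public static void main(String[] args) {
        System.out.println(pcmBps(44100, 2)); // prints 705600 (the 705.6 kbps rate above)
        System.out.println(pcmBps(16000, 2)); // prints 256000
    }
}
```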

  6. Sample Sound Waves (Sound Editor) Download and install from the ACORNS website. Top: “this is a demo”; Bottom: “A goat … A coat” (time domain)

  7. Complex Wave Patterns • Sound waves occupying the same space combine to form a new wave of a different shape • Harmonically related waves add together and can create any complex wave pattern • Harmonically related waves have frequencies that are multiples of a basic frequency • Speech consists of sinusoids combined together mostly by linear addition

  8. Nyquist Theorem Sampling frequency (fs) = samples per time period; Nyquist frequency (fN) = fs / 2 = the highest detectable frequency; maximum signal frequency (fmax). Theorem: to capture a signal without aliasing, fs ≥ 2 · fmax (equivalently, fN ≥ fmax). What is the optimal sample rate for speech? Most speech information is below 4 kHz, and human perception extends to roughly 20 kHz. Telephone speech is sampled at 8 kHz; computer algorithms typically sample at up to 44.1 kHz.
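A small illustration of the theorem (not from the slides; names are illustrative): a tone above the Nyquist frequency is not rejected, it folds back into the band below fs / 2 as a different frequency.

```java
public class Alias {
    // Apparent (aliased) frequency of a pure tone of frequency f
    // sampled at rate fs: fold f into [0, fs/2] by taking the
    // remainder mod fs and reflecting around the Nyquist frequency.
    static double aliasedFreq(double f, double fs) {
        double r = f % fs;
        return (r <= fs / 2) ? r : fs - r;
    }

    public static void main(String[] args) {
        // A 5 kHz tone sampled at 8 kHz (fN = 4 kHz) appears at 3 kHz.
        System.out.println(aliasedFreq(5000, 8000)); // prints 3000.0
        // A 3 kHz tone is below fN and is captured faithfully.
        System.out.println(aliasedFreq(3000, 8000)); // prints 3000.0
    }
}
```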

  9. Audio File Formats • Amplitude measurements (samples per second) stored in an array • WAV file format - Pulse Code Modulation (PCM) • Usually 2 bytes per sample (can be 3 or 4 bytes per sample) • Big- or little-endian • Mono or stereo channels • µ-law and A-law • Take advantage of human perception, which is logarithmic • One byte per sample containing logarithmic values • Compression algorithms code speech differently, but we convert to PCM for processing • Examples: spx, ogg, mp3 • Algorithms: Run-length compression, Code Excited Linear Prediction (CELP) • Java Sound and Tritonus support various formats/conversions
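Not from the slides, but a sketch of the standard G.711 µ-law expansion mentioned above: one logarithmically coded byte per sample back to 16-bit linear PCM (class and method names are illustrative):

```java
public class Ulaw {
    // Expand a G.711 µ-law byte to a 16-bit linear PCM sample.
    static short decode(byte ulaw) {
        int u = ~ulaw & 0xFF;              // µ-law bytes are stored complemented
        int sign = u & 0x80;               // top bit is the sign
        int exponent = (u >> 4) & 0x07;    // 3-bit segment (logarithmic part)
        int mantissa = u & 0x0F;           // 4-bit step within the segment
        int sample = (((mantissa << 3) + 0x84) << exponent) - 0x84; // 0x84 = bias
        return (short) (sign != 0 ? -sample : sample);
    }

    public static void main(String[] args) {
        System.out.println(decode((byte) 0xFF)); // prints 0 (positive zero code)
        System.out.println(decode((byte) 0x00)); // prints -32124 (largest negative)
    }
}
```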

  10. Time vs. Frequency Domain Time domain: the signal is a composite wave of different frequencies. Frequency domain: the time-domain signal split into its individual frequencies. Fourier: we can compute the phase and amplitude of each component sinusoid. FFT: an efficient algorithm to perform the decomposition.
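A minimal sketch of that decomposition using the naive O(n²) DFT (an FFT computes the same result in O(n log n); the class name is illustrative):

```java
public class Dft {
    // Magnitude spectrum of a real signal via the direct DFT formula.
    static double[] magnitudes(double[] x) {
        int n = x.length;
        double[] mag = new double[n];
        for (int k = 0; k < n; k++) {          // one output bin per frequency
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {      // correlate with cos/sin at bin k
                double angle = 2 * Math.PI * k * t / n;
                re += x[t] * Math.cos(angle);
                im -= x[t] * Math.sin(angle);
            }
            mag[k] = Math.sqrt(re * re + im * im);
        }
        return mag;
    }

    public static void main(String[] args) {
        int n = 16;
        double[] x = new double[n];
        for (int t = 0; t < n; t++) x[t] = Math.cos(2 * Math.PI * 3 * t / n);
        double[] m = magnitudes(x);
        // A pure cosine at bin 3 concentrates energy in bins 3 and n-3 (n/2 each).
        System.out.println(m[3]);
    }
}
```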

  11. Formant “a” from “this is a demo” • Formant: A spectral peak of the sound spectrum; a resonance of the vocal tract • Harmonic: A wave whose frequency is an integral multiple of that of a reference wave • F0 (fundamental frequency, audio pitch): The frequency at which the vocal folds vibrate. Male F0 = 80 to 180 Hz, female F0 = 160 to 260 Hz • Octave: A doubling (or halving) of frequency between two waves. Note: The vocal fold vibration is somewhat noisy (a combination of frequencies).

  12. Frequency Domain Audio: “This is a Demo” Narrow band: shows harmonics as horizontal lines. Wide band: shows pitch; pitch periods appear as vertical striations. Horizontal axis = time, vertical axis = frequency, darkness = frequency amplitude.

  13. Signal Filters Purposes (general): separate signals; eliminate distortions; remove unwanted data; compress and decompress; extract important features; enhance desired components. Examples: eliminate frequencies carrying no speech information; enhance poor-quality recordings; reduce background noise; adjust frequencies to mimic human perception. How: execute a convolution algorithm.

  14. Filter Characteristics Note: The ideal filter would require infinite computation

  15. Filter Terminology • Rise time: time for the step response to go from 10% to 90% • Linear phase: rising edges match falling edges • Overshoot: the amount by which the amplitude exceeds the desired value • Ripple: pass-band oscillations • Ringing: stop-band oscillations • Pass band: the allowed frequencies • Stop band: the blocked frequencies • Transition band: the frequencies between the pass and stop bands • Cutoff frequency: the point between the pass and transition bands • Roll-off: sharpness of the transition between the pass and stop bands • Stop-band attenuation: the amplitude reduction in the stop band

  16. Filter Performance

  17. Time Domain Filters • Finite Impulse Response (FIR) • The filter uses only the input samples, so each output depends on a fixed number of data points • y[n] = b0·s[n] + b1·s[n−1] + … + b(M−1)·s[n−M+1] = ∑k=0..M−1 bk·s[n−k] • Infinite Impulse Response (IIR, also called recursive) • The filter uses the input samples and previously filtered outputs, so the effect of one sample can persist indefinitely • t[n] = ∑k=0..M−1 bk·s[n−k] + ∑k=1..N−1 ak·t[n−k] • Both filters are linear: if the input is a sum of sinusoids, so is the output. Why? Samples are multiplied by constants and summed; they are never multiplied together or raised to a power.

  18. Convolution The algorithm used for creating time domain filters (b = feed-forward coefficients; a = feedback coefficients, null for a pure FIR filter):

public static double[] convolution(double[] signal, double[] b, double[] a) {
    double[] y = new double[signal.length + b.length - 1];
    for (int i = 0; i < y.length; i++) {
        // FIR part: weighted sum of input samples
        for (int j = 0; j < b.length; j++) {
            if (i - j >= 0 && i - j < signal.length) {
                y[i] += b[j] * signal[i - j];
            }
        }
        // IIR (recursive) part: weighted sum of previous outputs
        if (a != null) {
            for (int j = 1; j < a.length; j++) {
                if (i - j >= 0) y[i] -= a[j] * y[i - j];
            }
        }
    }
    return y;
}

  19. Convolution Theorem • Multiplication in the time domain is equivalent to convolution in the frequency domain • Multiplication in the frequency domain is equivalent to convolution in the time domain • Application: we can design a filter by creating its desired frequency response and then performing an inverse FFT to derive the filter kernel • Theoretically, we could create an ideal (“perfect”) low-pass filter with this approach
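A common practical approximation of that ideal low-pass design (not from the slides) is the windowed-sinc kernel: the inverse transform of an ideal low-pass response is a sinc, which is truncated and smoothed with a window. A sketch under those assumptions; names are illustrative:

```java
public class WindowedSinc {
    // Low-pass FIR kernel with cutoff fc (fraction of the sample rate,
    // 0 < fc < 0.5) and odd length: truncated sinc, Hamming-windowed,
    // normalized so the DC gain is exactly 1.
    static double[] lowPassKernel(double fc, int length) {
        double[] h = new double[length];
        int mid = length / 2;
        double sum = 0;
        for (int i = 0; i < length; i++) {
            int k = i - mid;
            // sinc sample; the k == 0 case is the limit of sin(x)/x
            h[i] = (k == 0) ? 2 * Math.PI * fc
                            : Math.sin(2 * Math.PI * fc * k) / k;
            // Hamming window tapers the truncation to reduce ripple
            h[i] *= 0.54 - 0.46 * Math.cos(2 * Math.PI * i / (length - 1));
            sum += h[i];
        }
        for (int i = 0; i < length; i++) h[i] /= sum; // unity DC gain
        return h;
    }
}
```

The resulting kernel can be passed as the b coefficients of the convolution routine on the previous slide, with a = null.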

  20. Amplify • Top figure: the original signal • Bottom figure: the signal’s amplitude multiplied by 1.6 • Attenuation occurs by picking a magnitude less than one • Filter kernel: y[n] = k·δ[n] (a scaled impulse)
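A sketch of that one-tap filter on 16-bit PCM samples (not from the slides; names are illustrative). One practical detail worth showing: amplified peaks must be clipped to the 16-bit range or they overflow.

```java
public class Amplify {
    // Scale 16-bit samples by k, clipping to the signed 16-bit range.
    static short[] amplify(short[] x, double k) {
        short[] y = new short[x.length];
        for (int i = 0; i < x.length; i++) {
            long v = Math.round(x[i] * k);   // scaled sample before clipping
            y[i] = (short) Math.max(Short.MIN_VALUE, Math.min(Short.MAX_VALUE, v));
        }
        return y;
    }

    public static void main(String[] args) {
        System.out.println(amplify(new short[]{1000}, 1.6)[0]);  // prints 1600
        System.out.println(amplify(new short[]{30000}, 1.6)[0]); // prints 32767 (clipped)
    }
}
```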

  21. Moving Average FIR Filter Convolution using a simple filter kernel (a 101-point centered window; edge samples are left unfiltered):

static int[] average(int[] x) {
    int[] y = new int[x.length];
    for (int i = 50; i < x.length - 50; i++) {
        for (int j = -50; j <= 50; j++) {
            y[i] += x[i + j];
        }
        y[i] /= 101;
    }
    return y;
}

Formula (centered, window length M): y[i] = (1/M) ∑ j=−(M−1)/2..(M−1)/2 x[i+j]; here M = 101.

  22. IIR (Recursive) Moving Average One addition and one subtraction per point, no matter the length of the filter • Example (M = 7): y[50] = (x[47]+x[48]+x[49]+x[50]+x[51]+x[52]+x[53])/7; y[51] = (x[48]+x[49]+x[50]+x[51]+x[52]+x[53]+x[54])/7 = y[50] + (x[54] − x[47])/7 • The general case: y[i] = y[i−1] + (x[i+M/2] − x[i−(M+1)/2])/M • Note: integer accumulators work best with this approach to avoid round-off drift
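The recursive update above can be sketched as follows (names are illustrative; an integer running sum is kept per the slide's note on round-off drift, and edges are left unfiltered as in the FIR version):

```java
public class MovingAverage {
    // Centered recursive moving average with odd window length m.
    // Only indices with a full window are filled; edge outputs stay 0.
    static double[] recursiveAverage(int[] x, int m) {
        int half = m / 2;
        double[] y = new double[x.length];
        long sum = 0;                              // integer accumulator: no drift
        for (int j = -half; j <= half; j++) sum += x[half + j]; // first full window
        y[half] = (double) sum / m;
        for (int i = half + 1; i < x.length - half; i++) {
            sum += x[i + half] - x[i - half - 1];  // one add, one subtract per point
            y[i] = (double) sum / m;
        }
        return y;
    }

    public static void main(String[] args) {
        double[] y = recursiveAverage(new int[]{1, 2, 3, 4, 5, 6, 7}, 3);
        System.out.println(y[1]); // prints 2.0, the average of 1, 2, 3
    }
}
```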

  23. Characteristics of Moving Average Filters • A longer kernel filters more noise • Long filters lose edge sharpness • Distorts the frequency domain • Very fast • Frequency response • A sinc function (sin(x)/x) • A decaying sine wave • Speech • Great for smoothing a pitch contour • Horrible for identifying formants

  24. Speech as a Noisy Channel • Encode → send over a noisy channel → receive → decode • Communication tends to be effective and efficient • Speech is as easy on the mouth as possible while still being understood • Speakers adjust their enunciation according to the implied knowledge they share with their listeners • Synthesis is the production side; recognition is the receiving side

  25. Overview of the Noisy Channel • The computational-linguistics view • Replace the ear with a microphone • Replace the brain with a computer algorithm

  26. Vocal Tract (for Speech Production) Note: Velum (soft palate) position controls nasal sounds, epiglottis closes when swallowing

  27. Another look at the vocal tract

  28. Vocal Source • The speaker alters the tension of the vocal folds • If the folds are open, speech is unvoiced, resembling background noise • If the folds are stretched closed, speech is voiced: air pressure builds until the folds blow open, releasing the pressure, and elasticity causes the folds to fall back • Average fundamental frequency (F0): 60 Hz to 300 Hz • Speakers control vocal tension to alter F0 and the perceived pitch (Figure: open and closed phases of one glottal period)

  29. Different Voices • Falsetto – The vocal cords are stretched and become thin causing high frequency • Creaky – Only the front vocal folds vibrate, giving a low frequency • Breathy – Vocal cords vibrate, but air is escaping through the glottis • Each person tends to consistently use particular phonation patterns. This makes the voice uniquely theirs

  30. Place of Articulation Articulation: shaping the speech sounds • Bilabial – The two lips (p, b, and m) • Labio-dental – Lower lip and the upper teeth (v) • Dental – Upper teeth and tongue tip or blade (thing) • Alveolar – Alveolar ridge and tongue tip or blade (d, n, s) • Post-alveolar – Area just behind the alveolar ridge and tongue tip or blade (jug ʤ, ship ʃ, chip ʧ, vision ʒ) • Retroflex – Tongue tip curled up and back (rolled r) • Palatal – Tongue body touches the hard palate (j) • Velar – Tongue body touches the soft palate (k, g, ŋ as in sing) • Glottal – The larynx (uh-uh, voiced h)

  31. Manner of Articulation • Voiced: The vocal cords vibrate; Unvoiced: the vocal cords do not vibrate • Obstruent: Frequency domain is similar to noise • Fricative: Air flow is not completely shut off • Affricate: A stop followed by a fricative • Sibilant: A consonant characterized by a hissing sound (like s or sh) • Trill: A rapid vibration of one speech organ against another (Spanish r) • Aspiration: A burst of air following a stop • Stop: Air flow is cut off • Ejective: The airstream and the glottis are closed and suddenly released • Plosive: A voiced stop followed by a sudden release • Flap: A single, quick touch of the tongue (t in water) • Nasality: Lowering the soft palate allows air to flow through the nose • Glides: Vowel-like; their syllable position makes them short and unstressed (w, y). An on-glide precedes a vowel; an off-glide follows a vowel • Approximant (semi-vowel): The active articulator approaches the passive articulator but does not totally shut off the air flow (l and r) • Lateral: The air flow proceeds around the sides of the tongue

  32. Vowels No restriction of the vocal tract; the articulators alter the formants • Diphthong: A syllabic showing a marked glide from one vowel to another, usually a steady vowel plus a glide • Nasalized: Some air flows through the nasal cavity • Rounding: The shape of the lips • Tense: More extreme sounds (further from the schwa), with the tongue body higher • Lax: Sounds closer to the schwa (tonally neutral) • Tongue position: front to back, high to low • Schwa: the unstressed central vowel (“uh”)

  33. Consonants • Significant obstruction in the nasal or oral cavities • Occur in pairs or triplets and can be voiced or unvoiced • Sonorant: continuous voicing • Unvoiced: less energy • Plosive: a period of silence followed by a sudden energy burst • Laterals, semi-vowels, retroflexes: partial air flow block • Fricatives, affricates: turbulence in the wave form

  34. English Consonants

  35. Consonant Place and Manner

  36. Example word

  37. Speech Production Analysis Devices used to measure speech production • Plate attached to the roof of the mouth measuring contact • Collar around the neck measuring glottis vibrations • Air flow meters at the mouth and nose • Three-dimensional images using MRI Note: The International Phonetic Alphabet (IPA) was designed before these technologies existed. It was devised by linguists looking down someone’s mouth or feeling how sounds are made.

  38. ARPABET: English-based phonetic system Phone–example pairs. Vowels and diphthongs: [iy] beat, [ih] bit, [eh] bet, [ah] but, [x] bat, [ao] bought, [ow] boat, [uh] book, [ey] bait, [er] bert, [ay] buy, [oy] boy, [aw] down, [ax] about, [ix] roses, [aa] cot. Consonants and others: [b] bet, [ch] chet, [d] debt, [f] fat, [g] get, [hh] hat, [hy] high, [jh] jet, [k] kick, [l] let, [m] met, [em] bottom, [n] net, [en] button, [ng] sing, [eng] washing, [p] pet, [r] rat, [s] set, [sh] shoe, [t] ten, [th] thick, [dh] that, [dx] butter, [v] vet, [w] wet, [wh] which, [arr] dinner, [y] yet, [z] zoo, [zh] measure, [-] silence

  39. The International Phonetic Alphabet A standard that attempts to create a notation for all possible human sounds

  40. IPA Vowels Caution: American English tongue positions don’t exactly match the chart. For example, ‘father’ in English does not have the tongue position as far back as the IPA vowel chart shows.

  41. IPA Diacritics

  42. IPA: Tones and Word Accents

  43. IPA: Supra-segmental Symbols

  44. Phoneme Tree Categorization from Rabiner and Juang

  45. Characteristics: Vowels & Diphthongs • Vowels • /aa/, /uw/, /eh/, etc. • Voiced speech • Average duration: 70 msec • Spectral slope: higher frequencies have lower energy (usually) • Resonant frequencies (formants) at well-defined locations • Formant frequencies determine the type of vowel • Diphthongs • /ay/, /oy/, etc. • Combination of two vowels • Average duration: about 140 msec • Slow change in resonant frequencies from beginning to end

  46. Perception • Some perceptual components are understood, but knowledge concerning the entire human perception model is rudimentary • Understood Components • The inner ear works as a bank of filters • Sounds are perceived logarithmically, not linearly • Some sounds will mask others

  47. The Inner Ear Two sensory organs are located in the inner ear. • The vestibule is the organ of equilibrium • The cochlea is the organ of hearing

  48. Hearing Sensitivity Human hearing is sensitive to about 25 ranges of frequencies • The cochlea transforms pressure variations into neural impulses • Approximately 30,000 hair cells lie along the basilar membrane • Each hair cell has hairs that bend with basilar vibrations • High-frequency detection occurs near the oval window • Low-frequency detection occurs at the far end of the basilar membrane • Auditory nerve fibers are “tuned” to center frequencies

  49. Basilar Membrane (shown unrolled) • Thin elastic fibers stretched across the cochlea • Short, narrow, stiff, and closely packed near the oval window • Longer, wider, more flexible, and sparser near the far end of the cochlea • The membrane connects to a ligament at its end • Separates two liquid-filled tubes that run along the cochlea • The fluids are chemically very different and carry the pressure waves • Leakage between the two tubes causes a breakdown in hearing • Provides a base for the sensory hair cells • The hair cells above the resonating region fire most profusely • The fibers vibrate like the strings of a musical instrument

  50. Place Theory Decomposing the sound spectrum • Georg von Bekesy’s Nobel Prize-winning discovery • High frequencies excite the narrow, stiff part near the oval window • Low frequencies excite the wide, flexible part near the apex • Auditory nerve input • Hair cells on the basilar membrane fire near the vibrations • The auditory nerve receives frequency-coded neural signals • A large frequency range is possible because the basilar membrane’s stiffness varies exponentially along its length Demo at: http://www.blackwellpublishing.com/matthews/ear.html
