
Speech Recognition



  1. Speech Recognition

  2. Definition • Speech recognition is the process of converting an acoustic signal, captured by a microphone or a telephone, into a set of words. • The recognised words can be an end in themselves, as in applications such as command and control, data entry, and document preparation. • They can also serve as the input to further linguistic processing in order to achieve speech understanding.

  3. Speech Processing • Signal processing: • Convert the audio wave into a sequence of feature vectors • Speech recognition: • Decode the sequence of feature vectors into a sequence of words • Semantic interpretation: • Determine the meaning of the recognized words • Dialog management: • Correct errors and help get the task done • Response generation: • What words to use to maximize user understanding • Speech synthesis (text to speech): • Generate synthetic speech from a ‘marked-up’ word string
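These stages chain together into a single processing pipeline. Below is a minimal sketch of that flow; every function is a hypothetical stub standing in for a real component (feature extractor, decoder, and so on), not an actual ASR library API.

```python
# Minimal sketch of the speech-processing pipeline above.
# Every function is a hypothetical stub, not a real ASR library call.

def extract_features(audio):           # signal processing: wave -> feature vectors
    return [[0.0] * 13 for _ in range(len(audio) // 160)]

def decode(features):                  # speech recognition: features -> words
    return ["show", "me", "flights", "to", "boston"]

def interpret(words):                  # semantic interpretation: words -> meaning
    return {"intent": "find_flights", "destination": "boston"}

def manage_dialog(meaning):            # dialog management: decide the next action
    return {"action": "solicit", "slot": "departure_date"}

def generate_response(action):         # response generation: choose the wording
    return "What day would you like to travel?"

def synthesize(text):                  # speech synthesis: text -> audio samples
    return b"\x00\x00" * 8000

def run_pipeline(audio):
    return synthesize(generate_response(manage_dialog(interpret(decode(extract_features(audio))))))

print(len(run_pipeline([0.0] * 16000)), "bytes of synthesized reply audio")
```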

  4. Dialog Management • Goal: determine what to accomplish in response to user utterances, e.g.: • Answer user question • Solicit further information • Confirm/Clarify user utterance • Notify invalid query • Notify invalid query and suggest alternative • Interface between user/language processing components and system knowledge base
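As a rough illustration of these decisions, the sketch below maps a parsed user query to one of the dialog acts listed above. The query fields (valid, complete, confidence, missing_slot, suggestion) are invented for the example and do not come from any particular system.

```python
# Hypothetical sketch of the dialog-management choices listed above.

def choose_dialog_act(query):
    if not query["valid"]:
        # notify invalid query, optionally suggesting an alternative
        return ("notify_invalid", query.get("suggestion"))
    if query["confidence"] < 0.5:
        return ("confirm", query["utterance"])     # confirm/clarify the utterance
    if not query["complete"]:
        return ("solicit", query["missing_slot"])  # ask for further information
    return ("answer", query["utterance"])          # answer the user's question

print(choose_dialog_act({"valid": True, "complete": False, "confidence": 0.9,
                         "utterance": "flights to boston", "missing_slot": "date"}))
```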

  5. What you can do with Speech Recognition • Transcription • dictation, information retrieval • Command and control • data entry, device control, navigation, call routing • Information access • airline schedules, stock quotes, directory assistance • Problem solving • travel planning, logistics

  6. Transcription and Dictation • Transcription is transforming a stream of human speech into computer-readable form • Medical reports, court proceedings, notes • Indexing (e.g., broadcasts) • Dictation is the interactive composition of text • Report, correspondence, etc.

  7. Speech recognition and understanding • Sphinx system • speaker-independent • continuous speech • large vocabulary • ATIS system • air travel information retrieval • context management

  8. Speech Recognition and Call Centres • Automate services, lower payroll • Shorten time on hold • Shorten agent and client call time • Reduce fraud • Improve customer service

  9. Applications related to Speech Recognition • Speech Recognition • Figure out what a person is saying. • Speaker Verification • Authenticate that a person is who she/he claims to be. • Limited speech patterns • Speaker Identification • Assigns an identity to the voice of an unknown person. • Arbitrary speech patterns

  10. Many kinds of Speech Recognition Systems • Speech recognition systems can be characterised by many parameters. • An isolated-word (discrete) speech recognition system requires that the speaker pause briefly between words, whereas a continuous speech recognition system does not.

  11. Spontaneous vs. Scripted • Spontaneous speech contains disfluencies, periods of pause and restart, and is much more difficult to recognise than speech read from a script.

  12. Enrolment • Some systems require speaker enrolment: a user must provide samples of his or her speech before using the system. Other systems are said to be speaker-independent, in that no enrolment is necessary.

  13. Large vs. Small Vocabularies • Some of the other parameters depend on the specific task. Recognition is generally more difficult when vocabularies are large with many similar-sounding words. • When speech is produced in a sequence of words, language models or artificial grammars are used to restrict the combination of words. • The simplest language model can be specified as a finite-state network, where the permissible words following each word are given explicitly.
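A minimal sketch of such a finite-state network follows, written as a Python dictionary that lists the permissible successors of each word; `<s>` and `</s>` mark the start and end of an utterance. The tiny flight-query grammar is invented purely for illustration.

```python
# Finite-state language model: after each word, only the listed words may follow.
GRAMMAR = {
    "<s>":     ["show", "list"],
    "show":    ["me"],
    "list":    ["flights"],
    "me":      ["flights"],
    "flights": ["to", "</s>"],
    "to":      ["boston", "denver"],
    "boston":  ["</s>"],
    "denver":  ["</s>"],
}

def is_permissible(words):
    """Check a word sequence against the finite-state network."""
    state = "<s>"
    for word in words + ["</s>"]:
        if word not in GRAMMAR.get(state, []):
            return False
        state = word
    return True

print(is_permissible(["show", "me", "flights", "to", "boston"]))  # True
print(is_permissible(["show", "flights", "to", "boston"]))        # False
```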

  14. Perplexity • One popular measure of the difficulty of the task, combining the vocabulary size and the language model, is perplexity. • It is loosely defined as the geometric mean of the number of words that can follow a word after the language model has been applied (Zue, Cole, and Ward, 1995).
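In the standard formulation used in the ASR literature, the perplexity of a language model P on a test word sequence w_1 … w_N is:

```latex
% Perplexity of a language model P on a test word sequence W = w_1 ... w_N
PP(W) = P(w_1, w_2, \ldots, w_N)^{-\frac{1}{N}}
      = 2^{-\frac{1}{N}\,\log_2 P(w_1, \ldots, w_N)}
```

For a model in which each word w_i has k_i equally likely successors, this works out to the geometric mean of the k_i, which is the informal reading given above.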

  15. Finally, some external parameters can affect speech recognition system performance. These include the characteristics of the environmental noise and the type and the placement of the microphone.

  16. Properties of Recognizers: Summary • Speaker Independent vs. Speaker Dependent • Large Vocabulary (2K-200K words) vs. Limited Vocabulary (2-200) • Continuous vs. Discrete • Speech Recognition vs. Speech Verification • Real Time vs. multiples of real time

  17. Continued • Spontaneous Speech vs. Read Speech • Noisy Environment vs. Quiet Environment • High Resolution Microphone vs. Telephone vs. Cellphone • Push-and-hold vs. push-to-talk vs. always-listening • Adapt to speaker vs. non-adaptive • Low vs. High Latency • With online incremental results vs. final results • Dialog Management
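These properties amount to a configuration space for a recogniser. The sketch below collects the parameters from the two slides above into a single configuration object; the class name, field names and defaults are invented for illustration and do not correspond to any real toolkit.

```python
# Hypothetical configuration object for the recogniser properties listed above.
from dataclasses import dataclass

@dataclass
class RecognizerConfig:
    speaker_dependent: bool = False      # speaker dependent vs. independent
    vocabulary_size: int = 20_000        # limited (2-200) vs. large (2K-200K) vocabulary
    continuous: bool = True              # continuous vs. discrete (isolated-word)
    spontaneous_speech: bool = False     # spontaneous vs. read speech
    noisy_environment: bool = False      # noisy vs. quiet environment
    microphone: str = "headset"          # high-resolution mic vs. telephone vs. cellphone
    push_to_talk: bool = False           # push-and-hold / push-to-talk vs. always-listening
    speaker_adaptive: bool = True        # adapt to speaker vs. non-adaptive
    low_latency: bool = True             # low vs. high latency
    incremental_results: bool = True     # online incremental vs. final results only

print(RecognizerConfig(vocabulary_size=200, continuous=False))
```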

  18. Features That Distinguish Products & Applications • Words, phrases, and grammar • Models of the speakers • Speech flow • Vocabulary: How many words • How you add new words • Grammar branching factor (perplexity) • Available languages

  19. Systems are also defined by Users • Different kinds of users: • One-time vs. frequent users • Homogeneity • Technical sophistication • Different kinds of users call for different speaker models

  20. Speaker Models • Speaker Dependent • Speaker Independent • Speaker Adaptive

  21. Sample Market: Call Centers • Automate services, lower payroll • Shorten time on hold • Shorten agent and client call time • Reduce fraud • Improve customer service

  22. A TIMELINE OF SPEECH RECOGNITION • 1876 Alexander Graham Bell invents the telephone while trying to develop a speech recognition system for deaf people. • 1936 AT&T's Bell Labs produced the first electronic speech synthesizer, called the Voder (Dudley, Riesz and Watkins). • This machine was demonstrated at the 1939 World's Fair by experts who used a keyboard and foot pedals to play the machine and emit speech. • 1969 John Pierce of Bell Labs said automatic speech recognition would not be a reality for several decades because it requires artificial intelligence.

  23. Early 70s • Early 1970s: The Hidden Markov Modeling (HMM) approach to speech recognition was invented by Lenny Baum of Princeton University and shared with several ARPA (Advanced Research Projects Agency) contractors, including IBM. • HMM is a complex mathematical pattern-matching strategy that eventually was adopted by all the leading speech recognition companies, including Dragon Systems, IBM, Philips, AT&T and others.
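As a concrete (if toy-sized) illustration of the HMM pattern-matching idea, the sketch below runs Viterbi decoding to find the most likely hidden state (e.g. phone) sequence for a sequence of observed symbols. The two-state model and all of its probabilities are invented purely for illustration.

```python
# Toy HMM and Viterbi decoding: which hidden state sequence best explains the observations?
STATES = ["phone_A", "phone_B"]
START  = {"phone_A": 0.6, "phone_B": 0.4}
TRANS  = {"phone_A": {"phone_A": 0.7, "phone_B": 0.3},
          "phone_B": {"phone_A": 0.4, "phone_B": 0.6}}
EMIT   = {"phone_A": {"x": 0.5, "y": 0.5},
          "phone_B": {"x": 0.1, "y": 0.9}}

def viterbi(observations):
    # best[t][s] = probability of the best path ending in state s at time t
    best = [{s: START[s] * EMIT[s][observations[0]] for s in STATES}]
    back = [{}]
    for obs in observations[1:]:
        col, ptr = {}, {}
        for s in STATES:
            prev = max(STATES, key=lambda p: best[-1][p] * TRANS[p][s])
            col[s] = best[-1][prev] * TRANS[prev][s] * EMIT[s][obs]
            ptr[s] = prev
        best.append(col)
        back.append(ptr)
    # trace back the most likely state sequence
    state = max(STATES, key=lambda s: best[-1][s])
    path = [state]
    for t in range(len(observations) - 1, 0, -1):
        state = back[t][state]
        path.append(state)
    return list(reversed(path))

print(viterbi(["x", "y", "y"]))
```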

  24. 70+ • 1971 DARPA (Defense Advanced Research Projects Agency) established the Speech Understanding Research (SUR) program to develop a computer system that could understand continuous speech. • Lawrence Roberts, who initiated the program, spent $3 million per year of government funds for 5 years. Major SUR project groups were established at CMU, SRI, MIT's Lincoln Laboratory, Systems Development Corporation (SDC), and Bolt, Beranek, and Newman (BBN). It was the largest speech recognition project ever. • 1978 The popular toy "Speak and Spell" by Texas Instruments was introduced. Speak and Spell used a speech chip which led to huge strides in the development of more human-like digital synthesis sound.

  25. 80+ • 1982 Covox founded. The company brought digital sound (via The Voice Master, Sound Master and The Speech Thing) to the Commodore 64, Atari 400/800, and finally to the IBM PC in the mid ‘80s. • 1982 Dragon Systems was founded by speech industry pioneers Drs. Jim and Janet Baker. Dragon Systems is well known for its long history of speech and language technology innovations and its large patent portfolio. • 1984 SpeechWorks, the leading provider of over-the-telephone automated speech recognition (ASR) solutions, was founded.

  26. 90s • 1993 Covox sells its products to Creative Labs, Inc. • 1995 Dragon released discrete-word, dictation-level speech recognition software. It was the first time dictation speech recognition technology was available to consumers. IBM and Kurzweil followed a few months later. • 1996 Charles Schwab is the first company to devote resources towards developing a speech recognition IVR system with Nuance. The program, Voice Broker, allows for up to 360 simultaneous customers to call in and get quotes on stock and options... it handles up to 50,000 requests each day. The system was found to be 95% accurate and set the stage for other companies, such as Sears, Roebuck and Co., United Parcel Service of America Inc., and E*Trade Securities, to follow in their footsteps. • 1996 BellSouth launches the world's first voice portal, called Val and later Info By Voice.

  27. 95+ • 1997 Dragon introduced "Naturally Speaking", the first "continuous speech" dictation software available (meaning you no longer need to pause between words for the computer to understand what you're saying). • 1998 Lernout & Hauspie bought Kurzweil. Microsoft invested $45 million in Lernout & Hauspie to form a partnership that would eventually allow Microsoft to use their speech recognition technology in its systems. • 1999 Microsoft acquired Entropic, giving Microsoft access to what was known as the "most accurate speech recognition system" at the time.

  28. 2000 • 2000 Lernout & Hauspie acquired Dragon Systems for approximately $460 million. • 2000 TellMe introduces the first worldwide voice portal. • 2000 NetBytel launched the world's first voice enabler, which includes an on-line ordering application with real-time Internet integration for Office Depot.

  29. 2000s • 2001 ScanSoft closes acquisition of Lernout & Hauspie speech and language assets. • 2003 ScanSoft ships Dragon NaturallySpeaking 7 Medical, lowering healthcare costs through highly accurate speech recognition. • 2003 ScanSoft closes deal to distribute and support IBM ViaVoice Desktop products.

  30. Signal Variability • Speech recognition is a difficult problem, largely because of the many sources of variability associated with the signal. • The acoustic realisations of phonemes, the smallest sound units of which words are composed, are highly dependent on the context in which they appear. • These phonetic variations are exemplified by the acoustic differences of the phoneme /t/ in two, true, and butter in English. • At word boundaries, contextual variations can be quite dramatic: devo andare sounds like devandare in Italian.

  31. More • Acoustic variability can result from changes in the environment as well as in the position and characteristics of the transducer. • Within-speaker variability can result from changes in the speaker's physical and emotional state, speaking rate, or voice quality. • Differences in socio-linguistic background, dialect, and vocal tract size and shape can contribute to across-speaker variability.

  32. What is a speech recognition system? • Speech recognition is generally used as a human-computer interface for other software. When it functions in this role, three primary tasks need to be performed: • Pre-processing, the conversion of spoken input into a form the recogniser can process. • Recognition, the identification of what has been said. • Communication, sending the recognised input to the application that requested it.

  33. How is pre-processing performed? • To understand how the first of these functions is performed, we must examine: • Articulation, the production of the sound. • Acoustics, the stream of the speech itself. • Auditory perception, what characterises the ability to understand spoken input.

  34. Articulation • The science of articulation is concerned with how phonemes are produced. The focus of articulation is on the vocal apparatus of the throat, mouth and nose, where the sounds are produced. • The phonemes themselves need to be classified; the system most often used in speech recognition is the ARPABET (Rabiner and Juang, 1993). The ARPABET was created in the 1970s by and for contractors working on speech processing for the Advanced Research Projects Agency of the U.S. Department of Defense.

  35. ARPABET • Like most phoneme classifications, the ARPABET separates consonants from vowels. • Consonants are characterised by a total or partial blockage of the vocal tract. • Vowels are characterised by strong harmonic patterns and relatively free passage of air through the vocal tract. • Semi-Vowels, such as the ‘y’ in you, fall between consonants and vowels.

  36. Consonant Classification • Consonant classification uses: • Point of articulation. • Manner of articulation. • Presence or absence of voicing.
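As a small illustration of this three-way classification, the sketch below tabulates a handful of ARPABET consonant symbols by place, manner and voicing. The table is a hand-picked sample, not the full ARPABET.

```python
# A few ARPABET consonants classified by point of articulation, manner, and voicing.
CONSONANTS = {
    # symbol: (point of articulation, manner of articulation, voiced?)
    "P": ("bilabial", "stop",      False),
    "B": ("bilabial", "stop",      True),
    "T": ("alveolar", "stop",      False),
    "D": ("alveolar", "stop",      True),
    "S": ("alveolar", "fricative", False),
    "Z": ("alveolar", "fricative", True),
    "M": ("bilabial", "nasal",     True),
}

place, manner, voiced = CONSONANTS["T"]
print(f"/T/ is a {'voiced' if voiced else 'voiceless'} {place} {manner}")
```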

  37. Acoustics • Articulation provides valuable information about how speech sounds are produced, but a speech recognition system cannot analyse movements of the mouth. • Instead, the data source for speech recognition is the stream of speech itself. • This is an analogue signal: a sound stream, a continuous flow of sound waves and silence.

  38. Important Features (Acoustics) • Four important features of the acoustic analysis of speech are (Carter, 1984): • Frequency, the number of vibrations per second a sound produces. • Amplitude, the loudness of the sound. • Harmonic structure: added to the fundamental frequency of a sound are other frequencies that contribute to its quality, or timbre. • Resonance.
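The sketch below relates these features to a digitised signal: it measures the amplitude of a test tone and estimates its dominant frequency from the FFT spectrum. It uses numpy; the 440 Hz tone and 16 kHz sampling rate are arbitrary illustrative choices.

```python
# Measure amplitude and dominant frequency of a digitised tone.
import numpy as np

rate = 16000                                   # samples per second
t = np.arange(rate) / rate                     # one second of time stamps
signal = 0.8 * np.sin(2 * np.pi * 440 * t)     # a pure 440 Hz tone

amplitude = np.max(np.abs(signal))             # loudness of the sound
spectrum = np.abs(np.fft.rfft(signal))         # harmonic structure of the signal
freqs = np.fft.rfftfreq(len(signal), d=1/rate)
fundamental = freqs[np.argmax(spectrum)]       # strongest frequency component

print(f"amplitude ~ {amplitude:.2f}, dominant frequency ~ {fundamental:.0f} Hz")
```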

  39. Auditory perception, hearing speech. • "Phonemes tend to be abstractions that are implicitly defined by the pronunciation of the words in the language. In particular, the acoustic realisation of a phoneme may heavily depend on the acoustic context in which it occurs. This effect is usually called co-articulation" (Ney, 1994). • The way a phoneme is pronounced can be affected by its position in a word, neighbouring phonemes, and even the word's position in a sentence. This effect is called the co-articulation effect. • The variability in the speech signal caused by co-articulation and other sources makes speech analysis very difficult.

  40. Human Hearing • The human ear can detect frequencies from 20Hz to 20,000Hz, but it is most sensitive in the critical frequency range, 1000Hz to 6000Hz (Ghitza, 1994). • Recent research has uncovered the fact that humans do not process individual frequencies. • Instead, we hear groups of frequencies, such as formant patterns, as cohesive units, and we are capable of distinguishing them from surrounding sound patterns (Carrell and Opie, 1992). • This capability, called auditory object formation, or auditory image formation, helps explain how humans can discern the speech of individual people at cocktail parties and separate a voice from noise over a poor telephone channel (Markowitz, 1995).

  41. Pre-processing Speech • Like all sounds, speech is an analogue waveform. In order for a recognition system to perform actions on speech, it must be represented in a digital manner. • All noise patterns, silences and co-articulation effects must be captured. • This is accomplished by digital signal processing. The way the analogue speech is processed is one of the most complex elements of a speech recognition system.
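A minimal sketch of this digitisation step follows: an "analogue" waveform (modelled here as a Python function of time) is sampled at a fixed rate and each sample is quantised to a 16-bit integer. The rates and levels are illustrative, not prescriptive.

```python
# Sampling and quantisation: turning a continuous waveform into digital samples.
import math

def analogue_wave(t):                      # stand-in for the continuous signal
    return 0.5 * math.sin(2 * math.pi * 200 * t)

def digitise(wave, duration=0.01, rate=16000, bits=16):
    max_level = 2 ** (bits - 1) - 1
    samples = []
    for n in range(int(duration * rate)):
        value = wave(n / rate)                          # sample every 1/rate seconds
        samples.append(int(round(value * max_level)))   # quantise to integer levels
    return samples

print(digitise(analogue_wave)[:8])
```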

  42. Recognition Accuracy • To achieve high recognition accuracy, the speech representation process should (Markowitz, 1995): • Include all critical data. • Remove redundancies. • Remove noise and distortion. • Avoid introducing new distortions.

  43. Signal Representation • In statistically based automatic speech recognition, the speech waveform is sampled at a rate between 6.6 kHz and 20 kHz and processed to produce a new representation as a sequence of vectors containing values of what are generally called parameters. • The vectors typically comprise between 10 and 20 parameters, and are usually computed every 10 or 20 milliseconds.
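The sketch below illustrates this framing: the sampled waveform is cut into overlapping windows every 10 ms and each window is reduced to a short vector of parameters (here, crude log band energies). A real front end (e.g. mel-frequency cepstral coefficients) is more elaborate; the band count and window length are just plausible defaults.

```python
# Frame the waveform every 10 ms and compute a 13-value log-energy vector per frame.
import numpy as np

def feature_vectors(samples, rate=16000, frame_ms=25, step_ms=10, n_bands=13):
    frame_len, step = rate * frame_ms // 1000, rate * step_ms // 1000
    vectors = []
    for start in range(0, len(samples) - frame_len + 1, step):
        frame = samples[start:start + frame_len] * np.hamming(frame_len)
        power = np.abs(np.fft.rfft(frame)) ** 2
        bands = np.array_split(power, n_bands)           # crude linear frequency bands
        vectors.append(np.log([band.sum() + 1e-10 for band in bands]))
    return np.array(vectors)                             # one vector per 10 ms step

speech = np.random.randn(16000)                          # 1 second of fake "speech"
print(feature_vectors(speech).shape)                     # roughly (98, 13)
```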

  44. Parameter Values • These parameter values are then used in succeeding stages in the estimation of the probability that the portion of waveform just analysed corresponds to a particular phonetic event that occurs in the phone-sized or whole-word reference unit being hypothesised. • In practice, the representation and the probability estimation interact strongly: what one person sees as part of the representation another may see as part of the probability estimation process.

  45. Emotional State • Representations aim to preserve the information needed to determine the phonetic identity of a portion of speech while being as impervious as possible to factors such as speaker differences, effects introduced by communications channels, and paralinguistic factors such as the emotional state of the speaker. • They also aim to be as compact as possible.

  46. Representations used in current speech recognisers concentrate primarily on properties of the speech signal attributable to the shape of the vocal tract rather than to the excitation, whether generated by a vocal-tract constriction or by the larynx. • Representations are sensitive to whether the vocal folds are vibrating or not (the voiced/unvoiced distinction), but try to ignore effects due to variations in their frequency of vibration.

  47. Future Improvements in Speech Representation • The vast majority of major commercial and experimental systems use representations akin to those described here. • However, in striving to develop better representations, wavelet transforms (Daubechies, 1990) are being explored, and neural network methods are being used to provide non-linear operations on log spectral representations.

  48. Work continues on representations more closely reflecting auditory properties (Greenberg, 1988) and on representations reconstructing articulatory gestures from the speech signal (Schroeter & Sondhi, 1994). • The articulatory approach is attractive because it holds out the promise of a small set of smoothly varying parameters that could deal in a simple and principled way with the interactions that occur between neighbouring phonemes and with the effects of differences in speaking rate and of carefulness of enunciation.

  49. The ultimate challenge is to match the performance of human listeners, which remains superior to that of automatic recognisers. • This superiority is especially marked when there is little material to allow adaptation to the voice of the current speaker, and when the acoustic conditions are difficult. • The fact that it persists even when nonsense words are used shows that it exists at least partly at the acoustic/phonetic level and cannot be explained purely by superior language modelling in the brain. • It confirms that there is still much to be done in developing better representations of the speech signal (Rabiner and Schafer, 1978; Hunt, 1993).

  50. Signal Recognition Technologies • Signal recognition methodologies fall into four categories; most systems will apply one or more in the conversion process.
