
Spoken Language Understanding


Presentation Transcript


  1. Spoken Language Understanding (Intelligent Robot Lecture Note)

  2. Error Correction and Adaptation

  3. [Architecture diagram: the ASR engine (acoustic model + language model) produces ASR outputs; a post-processor with an error-corrective LM (ECLM) and a semantic LM turns them into transcripts for the dialog system, which feeds adaptation or re-ranking back to the recognizer.]

  4. Error Correction and Adaptation • 'Error correction' automatically recovers ASR errors by post-processing [Ringger and Allen, 1996]. • 'Adaptation' is the process of fitting an ASR system to a new domain, speaker, or speech style [Bellegarda, 2004]. • [Diagram from [Bellegarda, 2004]: corpus A (adaptation) goes through information extraction to yield task-specific knowledge; corpus B (background) gives an initial SLM P_B(w1…wn); SLM adaptation combines them into an adapted P(w1…wn).]

  5. Error Correction and Adaptation • Error correction • Rule-based approach: pattern mining / transformation-rule learning • Statistical model: noisy-channel framework, as in statistical machine translation (channel/translation model plus source language model) • Adaptation • Re-ranking with richer linguistic information (e.g. POS tags, phrase chunks, parse trees, topics, WordNet) [Bellegarda, 2004; Rosenfeld, 1996] • Interpolated LM (sketched below) • The Error-Corrective Language Model (ECLM) marries the two methods: domain-specific / user-oriented adaptation • ECLM is a statistical re-ranking method to interactively adapt the speech recognizer's language models [Jeong et al., 2005].
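
A minimal sketch of the interpolated LM mentioned in the bullets above; the interpolation weight λ and the n-gram notation are assumptions, not taken from the slide.

```latex
% Interpolated LM: background model P_B mixed with the adaptation-corpus
% model P_A via an assumed interpolation weight \lambda.
P_{\mathrm{adapted}}(w_i \mid w_{i-n+1}^{\,i-1})
  = \lambda\, P_A(w_i \mid w_{i-n+1}^{\,i-1})
  + (1-\lambda)\, P_B(w_i \mid w_{i-n+1}^{\,i-1}),
\qquad 0 \le \lambda \le 1 .
```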

  6. Error Corrective Adaptation • Two-stage re-ranking • CORRECTOR: generates corrected candidates with the translation model • RERANKER: re-ranks them with semantic information • [Figure: an example of error-corrective re-ranking]

  7. Error Correction • Correcting the ASR hypotheses as a MAP process (see the sketch below) • Noisy-channel correction model • Models word-to-word translation pairs, e.g. p(TAKE A | TICKET), p(TRAIN | TRAIN), p(FROM | FROM), p(CHICAGO | CHICAGO), p(TO | TO), p(TOLEDO | TO LEAVE) • Fertility model (IBM Model 4 [Brown et al., 1993]) handles one-to-many word mappings
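
A sketch of the MAP formulation behind the noisy-channel correction model above, with o the ASR hypothesis and w the corrected word sequence; the word-by-word factorization is a simplification (IBM Model 4 additionally models fertility, so one reference word may map to several hypothesis words, as in p(TAKE A | TICKET)).

```latex
% MAP correction under the noisy-channel model: channel (translation)
% model times source language model, as in statistical MT.
\hat{w} = \arg\max_{w} P(w \mid o)
        = \arg\max_{w}\; \underbrace{P(o \mid w)}_{\text{channel model}}\;
                          \underbrace{P(w)}_{\text{language model}},
\qquad
P(o \mid w) \approx \prod_{i} p(o_i \mid w_i).
```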

  8. Estimating the Channel Model • Maximum likelihood estimates: count the pairs (o_i, w_i) in the training data (corpus A) • Discounting methods to reduce over-fitting • Apply well-known language-model smoothing techniques: absolute (abs), Good-Turing (GT), and modified Kneser-Ney (KN) [Chen and Goodman, 1998] (a sketch follows below)
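
A minimal Python sketch of the estimation step above: count aligned (observed, reference) word pairs and apply absolute discounting. The alignment itself, the uniform back-off scheme, and the pair list are simplifying assumptions.

```python
from collections import Counter, defaultdict

def estimate_channel_model(aligned_pairs, discount=0.5):
    """Estimate the channel model p(o | w) from aligned (observed, reference)
    word pairs using absolute discounting. `aligned_pairs` is assumed to come
    from aligning ASR hypotheses with reference transcripts (corpus A)."""
    pair_counts = Counter(aligned_pairs)
    ref_counts = Counter(w for _, w in aligned_pairs)
    obs_vocab = {o for o, _ in aligned_pairs}
    seen_obs = defaultdict(set)   # reference word -> observed words seen with it
    for o, w in aligned_pairs:
        seen_obs[w].add(o)

    def prob(o, w):
        c_ow, c_w = pair_counts[(o, w)], ref_counts[w]
        if c_w == 0:
            return 0.0
        if c_ow > 0:
            return (c_ow - discount) / c_w        # discounted ML estimate
        # Spread the freed mass uniformly over observed words never seen with w
        # (a crude back-off; real systems back off to a lower-order model).
        freed = discount * len(seen_obs[w]) / c_w
        unseen = len(obs_vocab) - len(seen_obs[w])
        return freed / unseen if unseen else 0.0

    return prob

# Example: p = estimate_channel_model([("to", "to"), ("to leave", "toledo")])
#          p("to leave", "toledo")  ->  0.5
```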

  9. Re-ranking with Language Understanding • Re-ranking the n-best hypotheses or word lattice • P_LM can be any language model trained on background and domain-specific text data • Two baselines: trigram and class-based n-gram (CLM) • Exploiting semantic information • Semantic class-based language model (SCLM) • s = {s1, …, sn}: semantic classes, roughly (domain-specific) named-entity categories • Estimated in the same way as a class-based n-gram [Brown et al., 1992] (see the sketch below)
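
A sketch of the SCLM factorization implied by "estimated in the same way as a class-based n-gram"; the bigram class model shown here follows [Brown et al., 1992] and is an assumption about the exact form used.

```latex
% Semantic class-based LM: s_i is the semantic class (named-entity
% category) of word w_i.
P_{\mathrm{SCLM}}(w_1 \dots w_n) \approx
  \prod_{i=1}^{n} P(w_i \mid s_i)\, P(s_i \mid s_{i-1}).
```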

  10. Re-ranking with Language Understanding • Exploiting semantic information • Another way to estimate the SCLM: a MaxEnt exponential model • x_i = {x_i1, x_i2, x_i3, …}: lexical, syntactic, and semantic features • [Table: defined semantic classes for the two domains]

  11. Reading List • J. R. Bellegarda. 2004. Statistical language model adaptation: review and perspectives. Speech Communication, 42(1):93-108. • S. F. Chen and J. Goodman. 1998. An empirical study of smoothing techniques for language modeling. Technical Report TR-10-98, Harvard University. • M. Jeong, J. Eun, S. Jung, and G. G. Lee. 2005. An error-corrective language-model adaptation for automatic speech recognition. Interspeech 2005-Eurospeech, Lisbon. • M. Jeong and G. G. Lee. 2006. Experiments on error-corrective language model adaptation. WESPAC-9, Seoul. • E. K. Ringger and J. F. Allen. 1996. Error correction via a post-processor for continuous speech recognition. In ICASSP. • R. Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modeling. Computer, Speech and Language, 10:187-228.

  12. Spoken Language Understanding

  13. Spoken Language Understanding • Spoken language understanding maps natural-language speech to a semantic frame structure encoding its meaning. • What is the difference between NLU and SLU? • Robustness: noisy and ungrammatical spoken language • Domain dependence: deeper domain-level semantics (e.g. Person vs. Cast) • Dialog: dialog-history dependent, with utterance-by-utterance analysis • Traditional approach: natural language to SQL conversion • [Figure: a typical ATIS system, speech → ASR → text → SLU → semantic frame → SQL generation → database → response (from [Wang et al., 2005])]

  14. Semantic Representation • Semantic frame (frame and slot/value structure) [Gildea and Jurafsky, 2002] • An intermediate semantic representation that serves as the interface between the user and the dialog system • Each frame contains several typed components called slots; the type of a slot specifies what kind of filler it expects. • Example for "Show me flights from Seattle to Boston" on the ATIS task, in XML format: <frame name='ShowFlight' type='void'> <slot type='Subject'>FLIGHT</slot> <slot type='Flight'> <slot type='DCity'>SEA</slot> <slot type='ACity'>BOS</slot> </slot> </frame> • The same frame drawn hierarchically: ShowFlight → Subject (FLIGHT) and Flight → Departure_City (SEA), Arrival_City (BOS) [Wang et al., 2005]

  15. Semantic Representation • Two common components in the semantic frame • Dialog act (DA): the meaning of an utterance at the discourse level; in practice it is approximately the equivalent of an intent or subject slot. • Named entity (NE): the identifier of an entity such as a person, location, organization, or time. In SLU it represents the domain-specific meaning of a word (or word group). • Examples (ATIS and EPG domains, simplified representation)

  16. Semantic Frame Extraction • Semantic frame extraction as an information-extraction approach (see the pipeline sketch below) • 1) Dialog act / main action identification ~ classification • 2) Frame-slot object extraction ~ named entity recognition • 3) Object-attribute attachment ~ relation extraction • 1) + 2) + 3) ~ unification • [Figures: examples of semantic frame structure; overall architecture of the semantic analyzer (information source → feature extraction/selection → dialog act identification, frame-slot extraction, relation extraction → unification)]
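
A minimal Python sketch of the extraction pipeline above; the three model objects (da_classifier, ne_recognizer, rel_extractor) and the SemanticFrame container are hypothetical stand-ins for the lecture's trained components.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class SemanticFrame:
    dialog_act: str
    slots: List[Tuple[str, str]] = field(default_factory=list)           # (slot type, value)
    relations: List[Tuple[int, int, str]] = field(default_factory=list)  # (slot i, slot j, relation)

def extract_frame(utterance, da_classifier, ne_recognizer, rel_extractor):
    """Sketch of the slide's pipeline: dialog-act identification + frame-slot
    (named-entity) extraction + relation extraction, unified into one frame."""
    features = utterance.split()                 # toy feature extraction
    dialog_act = da_classifier(features)         # 1) classification
    slots = ne_recognizer(features)              # 2) named entity recognition
    relations = rel_extractor(features, slots)   # 3) relation extraction
    return SemanticFrame(dialog_act, slots, relations)   # 4) unification
```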

  17. Knowledge-based Systems • Knowledge-based systems: • Developers write a syntactic/semantic grammar • A robust parser analyzes the input text with the grammar • No large amount of training data is needed • Previous work • MIT: TINA (natural language understanding) [Seneff, 1992] • CMU: PHOENIX [Pellom et al., 1999] • SRI: GEMINI [Dowding et al., 1993] • Disadvantages • Grammar development is an error-prone process • It takes multiple rounds to fine-tune a grammar • Combined linguistic and engineering expertise is required to construct a grammar with good coverage and optimized performance • Such a grammar is difficult and expensive to maintain

  18. Statistical Systems • Statistical SLU approaches: • The system can automatically learn from example sentences paired with their semantics • The annotations are much easier to create and do not require specialized knowledge • Previous work • Microsoft: HMM/CFG composite model [Wang et al., 2005] • AT&T: CHRONUS (finite-state transducers) [Levin and Pieraccini, 1995] • Cambridge Univ.: Hidden Vector State model [He and Young, 2005] • Postech: semantic frame extraction using statistical classifiers [Eun et al., 2004; Eun et al., 2005; Jeong and Lee, 2006] • Disadvantages • Data sparseness: the system requires a large training corpus • Lack of domain knowledge

  19. Machine Learning for SLU • Relational Learning (RL) or Structured Prediction (SP) [Dietterich, 2002; Sutton and McCallum, 2004] • Structured or relational patterns are important because they can be exploited to improve the prediction accuracy of a classifier • Argmax search (e.g. sum-max, belief propagation, Viterbi) • For language processing, RL typically uses a left-to-right structure (a.k.a. linear-chain or sequence structure) • Algorithms: CRFs, Max-Margin Markov Networks (M3N), SVM for Interdependent and Structured Outputs (SVM-ISO), structured perceptron, etc.

  20. Machine Learning for SLU • Background: Maximum Entropy (a.k.a. logistic regression) • Conditional and discriminative • Unstructured (no dependencies among the outputs) • Used here for the dialog act classification problem • Conditional Random Fields [Lafferty et al., 2001] • A structured version of MaxEnt (argmax search at inference time) • Undirected graphical models, popular in language and text processing • Linear-chain structure for practical implementation • Used here for the named entity recognition problem • [Figure: graphical models with node features h_k for the dialog act z, and transition/observation features f_k, g_k over the label chain y_{t-1}, y_t, y_{t+1} and observations x_{t-1}, x_t, x_{t+1}; both models are sketched below]
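
A sketch of the two models contrasted above, using the h_k, f_k, g_k feature names from the slide's figure; the exact parameterization is an assumption.

```latex
% Maximum Entropy classifier (unstructured) for the dialog act z:
p(z \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})}
  \exp\!\Big( \sum_{k} \nu_k\, h_k(z, \mathbf{x}) \Big)

% Linear-chain CRF (structured) for the label sequence y:
p(\mathbf{y} \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})}
  \prod_{t=1}^{T} \exp\!\Big( \sum_{k} \lambda_k\, f_k(y_{t-1}, y_t, \mathbf{x}, t)
                            + \sum_{k} \mu_k\, g_k(y_t, \mathbf{x}, t) \Big)
```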

  21. Semantic NER as Sequence Labeling • Relational learning for language processing • Left-to-right n-th order Markov model (linear chain or sequence) • E.g. part-of-speech tagging, noun-phrase chunking, information extraction, speech recognition • Very large feature spaces (state-of-the-art NP chunking uses > 1M features) • An open problem is how to reduce the training cost (even for a 1st-order Markov model) • Transformation to the BIO representation [Ramshaw and Marcus, 1995]: Beginning of entity, Inside of entity, and Outside (see the sketch below)
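
A small Python sketch of the BIO transformation mentioned above; the span format (start, end, label) is an assumed input convention.

```python
def spans_to_bio(tokens, spans):
    """Convert labeled entity spans to BIO tags [Ramshaw and Marcus, 1995].

    spans: list of (start, end, label) with `end` exclusive, e.g.
           [(3, 5, "CITY")] for 'new york' in 'show flights from new york'.
    """
    tags = ["O"] * len(tokens)
    for start, end, label in spans:
        tags[start] = "B-" + label            # first token of the entity
        for i in range(start + 1, end):
            tags[i] = "I-" + label            # remaining tokens of the entity
    return tags

# spans_to_bio("show flights from new york".split(), [(3, 5, "CITY")])
# -> ['O', 'O', 'O', 'B-CITY', 'I-CITY']
```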

  22. Theoretical Background for SLU

  23. Maximum Entropy • (Conditional) Maximum Entropy = log-linear model, exponential model, multinomial logistic regression • The criterion is based on information theory: we want the distribution that is as uniform as possible (highest entropy) subject to the constraints • Given the parametric form, fitting a Maximum Entropy model to a collection of training data entails finding values for the parameter vector Λ that maximize the (conditional) likelihood of the data, or equivalently that match the empirical feature expectations (see the formulation below)
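
A sketch of the criterion described above, written for the conditional case; the notation (empirical distribution p̃, features f_k) is assumed rather than copied from the slide.

```latex
% Choose the most uniform (highest-entropy) conditional distribution that
% matches the empirical feature expectations.
\max_{p}\; H(p) = -\sum_{x,z} \tilde{p}(x)\, p(z \mid x)\, \log p(z \mid x)
\quad \text{s.t.} \quad
\mathbb{E}_{p}[f_k] = \mathbb{E}_{\tilde{p}}[f_k]\;\;\forall k,
\qquad \sum_{z} p(z \mid x) = 1\;\;\forall x.
```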

  24. MaxEnt - Derivation • Introduce the Lagrangian • Differentiate the Lagrangian and obtain the parametric form • Dual form (sketched below)
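
A hedged reconstruction of the derivation outlined above (the slide's own equations are not in the transcript): the Lagrangian of the constrained problem, the exponential parametric form it yields, and the dual, which is the conditional log-likelihood. The normalization constraint is absorbed into Z_Λ(x).

```latex
\mathcal{L}(p, \Lambda) = H(p)
  + \sum_k \lambda_k \big( \mathbb{E}_{p}[f_k] - \mathbb{E}_{\tilde{p}}[f_k] \big)
\;\;\Longrightarrow\;\;
p_\Lambda(z \mid x) = \frac{1}{Z_\Lambda(x)}
  \exp\!\Big( \sum_k \lambda_k f_k(x, z) \Big),
\qquad
\Lambda^{*} = \arg\max_{\Lambda} \sum_{i} \log p_\Lambda\big(z^{(i)} \mid x^{(i)}\big).
```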

  25. MaxEnt - Parametric Form • Parametric form (exponential family) • The log-likelihood function is concave, so it has a global maximum • The constrained optimization becomes an unconstrained optimization • Regularization (= MAP estimation with a Gaussian prior) • Why a Maximum Entropy classifier? • Efficient computation: an inner product of the parameter vector and a feature vector (see the sketch below) • e.g. SVM learning (about a week) vs. MaxEnt (about an hour) • Handles overlapping features • Supports automatic feature induction (for exponential models)
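
A tiny Python sketch of the "inner product" point above: classification reduces to a sparse dot product per label followed by a softmax. The weights dictionary and the feature_fn interface are hypothetical.

```python
import math

def maxent_predict(weights, feature_fn, x, labels):
    """Score each label z by the inner product of the parameter vector with
    the sparse feature vector, then normalize with a stable softmax.
    `weights` maps (feature_name, label) -> weight; `feature_fn(x)` returns
    the active feature names for input x."""
    active = feature_fn(x)
    scores = {z: sum(weights.get((f, z), 0.0) for f in active) for z in labels}
    m = max(scores.values())
    exps = {z: math.exp(s - m) for z, s in scores.items()}
    total = sum(exps.values())
    return {z: e / total for z, e in exps.items()}

# Example:
# w = {("word=flights", "ShowFlight"): 1.2, ("word=weather", "AskWeather"): 0.9}
# maxent_predict(w, lambda x: ["word=" + t for t in x.split()],
#                "show me flights", ["ShowFlight", "AskWeather"])
```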

  26. MaxEnt - Parameter Estimation • Generalized Iterative Scaling (GIS) • One popular method for iteratively refining the parameters • From Darroch and Ratcliff (1972) and Deming and Stephan (1940) • Requires only the expected feature values, but converges very slowly • Conjugate gradient (Fletcher-Reeves and Polak-Ribiere-Positive) • The expected values required by GIS are essentially the gradient, so we can use gradient information directly • Limited-memory quasi-Newton (L-BFGS) • For large-scale problems, evaluating the Hessian matrix is computationally impractical, so a quasi-Newton method is used • Furthermore, the limited-memory version of the BFGS algorithm saves storage on large-scale problems (see the sketch below)
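
A compact Python sketch of L-BFGS training for a MaxEnt classifier using scipy.optimize.minimize; the dense feature matrix and the L2 strength are simplifying assumptions (real SLU front-ends use sparse features with millions of parameters).

```python
import numpy as np
from scipy.optimize import minimize

def fit_maxent_lbfgs(X, y, num_labels, l2=1.0):
    """Fit a conditional MaxEnt (multinomial logistic regression) model with
    L-BFGS. X: (N, D) feature matrix, y: (N,) integer labels."""
    N, D = X.shape
    Y = np.zeros((N, num_labels))
    Y[np.arange(N), y] = 1.0

    def objective(w_flat):
        W = w_flat.reshape(num_labels, D)
        scores = X @ W.T
        scores -= scores.max(axis=1, keepdims=True)   # numerical stability
        probs = np.exp(scores)
        probs /= probs.sum(axis=1, keepdims=True)
        # Penalized negative log-likelihood (Gaussian prior on the weights).
        nll = -np.log(probs[np.arange(N), y]).sum() + 0.5 * l2 * (W ** 2).sum()
        # Gradient: model expectations minus empirical counts, plus prior term.
        grad = (probs - Y).T @ X + l2 * W
        return nll, grad.ravel()

    res = minimize(objective, np.zeros(num_labels * D), jac=True, method="L-BFGS-B")
    return res.x.reshape(num_labels, D)

# Example: W = fit_maxent_lbfgs(np.eye(3), np.array([0, 1, 2]), num_labels=3)
```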

  27. Conditional Random Fields • An undirected graphical model [Lafferty et al., 2001] • Linear-chain structure over labels y_{t-1}, y_t, y_{t+1} and observations x_{t-1}, x_t, x_{t+1} • CRFs avoid the label bias problem of locally normalized models, where states whose following-state distributions have low entropy are preferred

  28. CRFs - Parameter Estimation • A 2-norm penalized (conditional) log-likelihood is maximized • Its gradient is the difference between the empirical feature counts and the model expectations (computed from the marginal distributions), minus the prior term (see the formulas below) • Optimization techniques: • GIS or IIS, conjugate gradient, or L-BFGS • Voted perceptron, stochastic meta-descent
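
A hedged reconstruction of the penalized objective and gradient named above (empirical counts, model expectations from the marginals, and the Gaussian-prior term); the slide's own equations are not in the transcript.

```latex
\mathcal{L}(\Lambda) = \sum_{i=1}^{N} \log p_\Lambda\big(\mathbf{y}^{(i)} \mid \mathbf{x}^{(i)}\big)
  - \sum_k \frac{\lambda_k^{2}}{2\sigma^{2}}
\qquad
\frac{\partial \mathcal{L}}{\partial \lambda_k}
  = \sum_{i,t} f_k\big(y_{t-1}^{(i)}, y_t^{(i)}, \mathbf{x}^{(i)}, t\big)
  - \sum_{i,t}\sum_{y,y'} p_\Lambda\big(y_{t-1}{=}y,\, y_t{=}y' \mid \mathbf{x}^{(i)}\big)\,
      f_k\big(y, y', \mathbf{x}^{(i)}, t\big)
  - \frac{\lambda_k}{\sigma^{2}}
```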

  29. CRFs - Inference • In training: compute the marginal distributions p(y_{t-1}, y_t | x^(i)) with the forward-backward recursion • In testing: the Viterbi recursion finds the best label sequence (see the sketch below)
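
A minimal Python sketch of the test-time Viterbi recursion; the log-potential matrices are assumed inputs, and the training-time forward-backward recursion replaces the max with a log-sum-exp.

```python
import numpy as np

def viterbi(node_scores, edge_scores):
    """Viterbi decoding for a linear-chain CRF.

    node_scores: (T, K) log-potentials per position/label,
    edge_scores:  (K, K) log-potentials for label transitions.
    Returns the highest-scoring label sequence as a list of label indices."""
    T, K = node_scores.shape
    delta = np.full((T, K), -np.inf)
    backptr = np.zeros((T, K), dtype=int)
    delta[0] = node_scores[0]
    for t in range(1, T):
        # scores[i, j] = best score ending in label i at t-1, then label j at t
        scores = delta[t - 1][:, None] + edge_scores + node_scores[t][None, :]
        backptr[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0)
    path = [int(delta[T - 1].argmax())]
    for t in range(T - 1, 0, -1):          # trace back the best path
        path.append(int(backptr[t][path[-1]]))
    return path[::-1]
```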

  30. Exploiting Non-Local Information for SLU

  31. Long-distance Dependency Problem • Example: in "fly from denver to chicago on dec. 10th 1999" the token "dec." is labeled DEPART.MONTH, but in "return from denver to chicago on dec. 10th 1999" the same token is labeled RETURN.MONTH • Most practical NLP models employ only local features • Local context features (sliding window), e.g. for "dec.": current = dec., prev-2 = on, prev-1 = chicago, next+1 = 10th, next+2 = 1999, POS tag = NN, chunk = PP • However, the two "dec." states have exactly the same feature set even though their labels differ • Non-local features or higher-order dependencies should therefore be considered

  32. Using Non-local Information • Syntactic parser-based approach • From the fields of semantic role labeling and relation extraction • Parse-tree path, governing category, or head word • Advantage: captures the global structure of language • Disadvantage: informal language (speech, email) • Data-driven approach • From the fields of information retrieval and language modeling • Identical words, lexical co-occurrence, or triggering • Advantage: easy to extract, fits ungrammatical input • Disadvantage: depends on the data set

  33. Hidden Vector State Model • A CFG-like model [He and Young, 2005] • Extends the 'flat-concept' HMM model • Represents hierarchical (right-branching) structure using hidden state vectors • Each state is expanded to encode the stack of a push-down automaton

  34. Previous Slot Context • CRF-based model [Wang et al., 2005] • Conditional model for an ATIS-domain SLU system • Efficient heuristic functions (such as the previous slot context) encode non-local information • But it requires a domain-specific lexicon and knowledge • More general alternatives: the skip-chain CRF [Sutton and McCallum, 2006] and a two-step approach [Jeong and Lee, 2007]

  35. Tree Path and Head Word • Using a syntactic parse tree [Gildea and Jurafsky, 2002] • Motivated by semantic role labeling • Provides full structural information about the language • Disadvantages • Reliable only for text, not for speech • Parsing a sentence is computationally heavy

  36. Trigger Feature Selection • Using co-occurrence information [Jeong and Lee, 2006] • Finds non-local, long-distance dependencies • Based on a feature selection method for exponential models • Definition 1 (Trigger Pair). Let a ∈ A and b ∈ B, where A is the set of history features and B is the set of target features in the training examples. If a feature element a is significantly correlated with another element b, then a → b is considered a trigger, with a being the trigger element and b the triggered element.

  37. Trigger Features • Trigger word pair [Rosenfeld, 1992] • The basic element for extracting information from the long-distance document history • (w_i → w_j): words w_i and w_j are significantly correlated • When w_i occurs in the document, it triggers w_j, causing its probability estimate to change • Selecting the triggers (a selection sketch follows below) • E.g. dec. → return : 0.001179; dec. → fly : < 0.0001 (rejected) • Is it enough to extract trigger features only from the training data? No!
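
A small Python sketch of the selection step, scoring candidate pairs by their co-occurrence mutual-information term; the threshold, the utterance-level co-occurrence window, and the simplified single-term MI are assumptions, and the slide's example scores come from the lecture's own corpus.

```python
import math
from collections import Counter
from itertools import combinations

def select_triggers(utterances, threshold=1e-3):
    """Select trigger word pairs whose co-occurrence MI term exceeds a
    threshold (the full average MI also includes non-co-occurrence terms)."""
    n = len(utterances)
    word_df = Counter()   # in how many utterances each word occurs
    pair_df = Counter()   # in how many utterances each word pair co-occurs
    for utt in utterances:
        words = set(utt.split())
        word_df.update(words)
        pair_df.update(combinations(sorted(words), 2))

    triggers = []
    for (wi, wj), c in pair_df.items():
        p_ij = c / n
        p_i, p_j = word_df[wi] / n, word_df[wj] / n
        mi = p_ij * math.log(p_ij / (p_i * p_j))   # co-occurrence contribution
        if mi > threshold:
            triggers.append((wi, wj, mi))
    return sorted(triggers, key=lambda t: -t[2])
```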

  38. Inducing (Trigger) Features • Basic idea: incrementally add only the bundle of (trigger) features that most increases the log-likelihood of the exponential model [Della Pietra et al., 1997] • Feature gain: a measure for evaluating candidate (trigger) features using the Kullback-Leibler divergence [Della Pietra et al., 1997] • Current model (CRFs) + a new feature [McCallum, 2003] • Mean-field approximation (the gain is sketched below)
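
A hedged sketch of the gain in the form used by [McCallum, 2003]: the improvement in (penalized) log-likelihood obtainable by adding candidate feature g with a single weight μ while holding the current parameters Λ fixed. This stands in for the slide's own KL-divergence formula, which is not in the transcript.

```latex
% Gain of candidate feature g at the current model p_\Lambda.
G_{\Lambda}(g) = \max_{\mu}\;\Big[\,
  \mathcal{L}\big(\Lambda \cup \{\mu\, g\}\big) - \mathcal{L}(\Lambda) \,\Big]
```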

  39. Inducing (Trigger) Features • Feature gain (final form) • There is no closed-form solution, so Newton's method is used to find the optimal weight • MI vs. FI? (mutual-information-based selection vs. feature-induction gain)

  40. Efficient Feature Induction Algorithm • How can the time cost of feature induction (FI) be reduced? • Local feature set as the base model • Problem: the original algorithm iteratively adds subsets of all features, starting from an empty set • Solution: we only consider inducing the non-local trigger features • Maximum entropy inducer • Problem: CRFs are expensive to train • Solution: extract the non-local trigger features with a lightweight MaxEnt model, and train the CRF only once at the end (an outline follows below) • [Figure: outline of the feature induction algorithm]
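
A rough Python outline of the two-stage procedure above; train_maxent, feature_gain, and train_crf are hypothetical helpers standing in for the actual implementations.

```python
def induce_trigger_features(data, local_features, candidate_triggers,
                            train_maxent, feature_gain, train_crf,
                            rounds=5, per_round=100):
    """Outline of the efficient induction on the slide: start from the local
    feature set, repeatedly add the highest-gain non-local trigger features
    using a lightweight MaxEnt model, then train the CRF once at the end."""
    features = list(local_features)
    for _ in range(rounds):
        maxent = train_maxent(data, features)               # cheap base model
        gains = [(feature_gain(maxent, data, t), t) for t in candidate_triggers]
        gains.sort(reverse=True)
        selected = [t for _, t in gains[:per_round]]        # best trigger bundle
        features.extend(selected)
        candidate_triggers = [t for t in candidate_triggers if t not in selected]
    return train_crf(data, features)                        # final CRF training
```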

  41. Effect of Trigger Features

  42. Joint Prediction for SLU

  43. Motivation: Independent System • The DA and NE tasks are treated as totally independent • The system produces the DA (z) and NE (y) outputs given the input words (x) and passes them to the dialog manager • MaxEnt for DA classification, CRFs for NE recognition • Preliminary results appear in the previous section

  44. Motivation: Cascaded System • The current state-of-the-art system design [Gupta et al., 2006, the AT&T system] • Train the NE module and use its predictions as features for the DA module (or vice versa) • Significant drawback: the NE module cannot take advantage of information from the DA task (or vice versa) • A cascaded system can therefore improve only one task's performance rather than both

  45. Motivation: Joint System • Joint prediction of DA and NE [Jeong and Lee, 2006] • DA and NE are mutually dependent • An integration of the DA and NE models encodes this inter-dependency • The GOAL is to improve the performance of both the DA and NE tasks

  46. Triangular-chain CRFs • Modeling the inter-dependency (y ↔ z) • The potential is factorized into edge-transition (y,y dependency), NE-observation, DA-observation, and y,z dependency terms (sketched below) • In general, f_k can be a function of triangulated cliques; however, we assume that the NE state transition is independent of the DA, i.e., the DA acts as an observation feature for identifying NE labels. • [Figure: the triangular-chain graph over y_{t-1}, y_t, y_{t+1}, the DA variable z, and observations x_{t-1}, x_t, x_{t+1}, with feature functions f^1_k, f^2_k, g_k, h_k]
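
A hedged sketch of the factorization described above, using the f^1_k, f^2_k, g_k, h_k names from the slide's figure; the exact clique structure and parameter names are assumptions.

```latex
p(\mathbf{y}, z \mid \mathbf{x}) = \frac{1}{Z(\mathbf{x})}
  \exp\!\Big( \sum_k \nu_k\, h_k(z, \mathbf{x}) \Big)
  \prod_{t=1}^{T} \exp\!\Big(
      \sum_k \lambda^{1}_k\, f^{1}_k(y_{t-1}, y_t, \mathbf{x}, t)
    + \sum_k \lambda^{2}_k\, f^{2}_k(y_t, z, \mathbf{x}, t)
    + \sum_k \mu_k\, g_k(y_t, \mathbf{x}, t) \Big)
```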

  47. CRFs Family • Graphical illustrations of the linear-chain CRF (left), the factorial CRF [Sutton et al., 2004] (middle), and the triangular-chain CRF (right)

  48. Joint Inference of Tri-CRFs • Performed by multiple exact inferences (of linear-chain CRFs): imagine |Z| planes of linear-chain CRFs • The forward-backward recursion and the final alpha yield the mass of all state sequences • Beam search (over the z variable): truncate the z planes whose posterior mass p falls below ε (= 0.001)

  49. Parameter Estimation of Tri-CRFs • The log-likelihood (for joint task optimization) given D = {x^(i), y^(i), z^(i)}, i = 1, …, N • Derivatives with respect to each parameter, as for standard CRFs • Numerical optimization: L-BFGS • Gaussian regularization (σ² = 20)

  50. Reducing the Human Effort
