
Training Paradigms for Correcting Errors in Grammar and Usage



  1. Training Paradigms for Correcting Errors in Grammar and Usage. Alla Rozovskaya and Dan Roth, University of Illinois at Urbana-Champaign. NAACL-HLT 2010, Los Angeles, CA.

  2. Error correction tasks • Context-sensitive spelling mistakes • I would like a peace*/piece of cake. • English as a Second Language (ESL) mistakes • Mistakes involving prepositions • To*/in my mind, this is a serious problem. • Mistakes involving articles • Nearly 30000 species of plants are under the*/a serious threat of disappearing. • Laziness is the engine of the*/<NONE> progress.

  3. The standard training paradigm for error correction • Example: Correcting article mistakes [Izumi et al., '03; Han et al., '06; De Felice and Pulman, '08; Gamon et al., '08] • Cast the problem as a classification task • Provide a set of candidates: {a, the, NONE} • Task: select the appropriate candidate in context • Define features based on the surrounding context and train a classifier on correct (native) data • Example: Laziness is the engine of [the] progress. Features: w1B=of, w1A=progress, w2Bw1B=engine-of, …
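As a concrete illustration of this setup, here is a minimal sketch of how training instances could be extracted from correct (native) text: every article occurrence becomes an example whose label is the article the writer used and whose features describe the surrounding words. The function and feature names are illustrative, not from the paper, and NONE slots (omitted articles) would additionally require noun-phrase detection.

```python
# Sketch: build selection-paradigm training instances from correct (native) text.
# Each article occurrence yields (features, label); the label is the article the
# writer used.  NONE slots (omitted articles before noun phrases) would need
# NP detection and are not handled in this toy version.
ARTICLES = {"a", "an", "the"}

def extract_instances(tokens):
    instances = []
    for i, tok in enumerate(tokens):
        if tok.lower() not in ARTICLES:
            continue
        w1B = tokens[i - 1].lower() if i > 0 else "<S>"                 # word before
        w1A = tokens[i + 1].lower() if i + 1 < len(tokens) else "</S>"  # word after
        w2B = tokens[i - 2].lower() if i > 1 else "<S>"
        feats = {f"w1B={w1B}", f"w1A={w1A}", f"w2Bw1B={w2B}-{w1B}"}
        instances.append((feats, tok.lower()))
    return instances

print(extract_instances("Laziness is the engine of the progress .".split()))
```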

  4. The standard training paradigm for error correction • Correcting article mistakes [Izumi et al., ’03; Han et al., ’06; De Felice and Pulman, ’08; Gamon et al., ’08] • Correcting preposition mistakes [Eeg-Olofsson and Knutsson, ’03; Gamon et al., ’08; Tetreault and Chodorow, ’08, others] • Context-sensitive spelling correction [Golding and Roth, ’96,’99; Carlson et al., ’01, others]

  5. But this is a paradigm for a selection task! • Selection task (e.g. WSD): • We have a set of candidates • Task: select the correct candidate from the set of candidates • The selection paradigm is appropriate for WSD, because there is no proposed candidate in context

  6. The typical error correction training paradigm is the paradigm of a selection task! Why? • Easy to obtain training data – can use correct text • No need for annotation

  7. Outline • The error correction task: Problem statement • The typical training paradigm – does selection rather than correction • Selection versus correction • What is the appropriate training paradigm for the correction task? • The ESL corpus • Training paradigms for the error correction task • Key idea • Methods of error generation • Experiments • Conclusions

  8. Selection tasks versus error correction tasks • Article selection task: Nearly 30000 species of plants are under ___ serious threat of disappearing. Set of candidates: {a, the, NONE} • Article correction task: Nearly 30000 species of plants are under the serious threat of disappearing. (source article: the) Set of candidates: {a, the, NONE}

  9. Correction versus selection • Article selection classifier: accuracy on native English data is 87-90%; the baseline for the article selection task (always use the most common article) is 60-70% • On non-native data, the writer's own choice is already correct more than 90% of the time (error rate = 10%), so keeping the writer's selection already gives very good results • Conclusion: we need to use the proposed candidate, or we will make more mistakes than there are in the data; with a selection model, the proposed candidate can at best be used as a threshold • Can we do better if we use the proposed candidate in training?

  10. The proposed article is a useful resource • We want to use the proposed article in training • 90% of articles are used correctly • Article mistakes are not random • Selection paradigm: can we use the proposed candidate in training? No: in native data, the proposed article always corresponds to the label

  11. How can we use the proposed article in training? • Using annotated data for training: Laziness is the engine of <the, NONE> progress. (source = the, label = NONE) • Annotating data for training is expensive • We need a method to generate training data for the error correction task without expensive annotation

  12. Contributions of this work • We propose a method to generate training data for the error correction task • Avoid expensive data annotation • We use the generated data to train classifiers in the paradigm of correction • With the proposed candidate in training • We show that error correction training paradigms are superior to the selection paradigm of training

  13. Outline • The error correction task: Problem statement • The typical training paradigm – does selection rather than correction • Selection versus correction • What is the appropriate training paradigm for correction? • The ESL corpus • Training paradigms for the error correction task • Key idea • Methods of error generation • Experiments • Conclusions

  14. The annotated ESL corpus • Annotated a corpus of ESL sentences (60K words) • Extracted from two corpora of ESL essays: • ICLE [Granger et al.,’02] • CLEC [Gui and Yang,’03] • Sentences written by ESL students of 9 first languages • Each sentence is fully corrected and error tagged • Annotated by native English speakers • Experiments: Chinese, Czech, Russian

  15. The annotated ESL corpus • Annotating ESL sentences with an annotation tool (the slide shows a screenshot of a sentence loaded for annotation)

  16. The annotated ESL corpus • Each sentence is fully corrected and error-tagged. For details about the annotation, please see [Rozovskaya and Roth, '10, NAACL-BEA5] • Before annotation: “This time asks for looking at things with our eyes opened.” • With annotation comments: “This time @period, age, time@ asks $us$ for <to> looking *look* at things with our eyes opened.” • After annotation: “This period asks us to look at things with our eyes opened.”

  17. Outline • The error correction task: Problem statement • The typical training paradigm – does selection rather than correction • Selection versus correction • What is the appropriate training paradigm for correction? • The ESL data used in the evaluation • Training paradigms for the error correction task • Key idea • Methods of error generation • Experiments • Conclusions

  18. Training paradigms for the error correction task • Key idea: we want to be able to use the proposed candidate in training • Generate artificial article errors in native training data • The source article can then be used in training as a feature • Constraint: we want training data to be similar to non-native text • Other works that use artificial errors do not take into account error patterns in non-native data [Sjöbergh and Knutsson, '05; Brockett et al., '06; Foster and Andersen, '09]

  19. Training paradigms for the error correction task • We examine article errors in the annotated data • Add errors selectively • Mimic the article distribution, the error rate, and the error patterns of the non-native text

  20. Error rates in article usage • Article mistakes are very common among non-native speakers of English • TOEFL essays by Russian, Chinese, and Japanese speakers: 13% of noun phrases have article mistakes [Han et al., '06] • Essays by advanced Chinese, Czech, and Russian learners of ESL: 10% of noun phrases have article mistakes

  21. Distribution of articles in the annotated ESL data • (the slide shows a table of article distributions in the annotated data) • This error rate sets the baseline for the task at around 90%

  22. Distribution of article errors in the annotated ESL text • Errors are dependent on the first language of the writer • Not all confusions are equally likely

  23. Characteristics of the non-native data: Summary • Article distribution • Error rates • Error patterns of the non-native text We use this knowledge to generate errors for error correction training paradigms

  24. Error correction training paradigm 1: General • General: add errors uniformly at random with error rate conf, where conf ∈ {5%, 10%, 12%, 14%, 16%, 18%} • Example, with error rate = 10%, over the candidates {a, the, NONE}: replace(the, a, 0.05), replace(the, NONE, 0.05), replace(a, the, 0.05), replace(a, NONE, 0.05), replace(NONE, a, 0.05), replace(NONE, the, 0.05)
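A minimal sketch of the General scheme, assuming the error rate is split evenly over the two possible incorrect candidates (as in the example above); the function and variable names are illustrative.

```python
import random

CANDIDATES = ["a", "the", "NONE"]   # NONE marks an omitted article

def add_uniform_errors(source_articles, error_rate=0.10, seed=0):
    """Corrupt each article with probability error_rate, choosing the wrong
    candidate uniformly, i.e. replace(the, a, 0.05) and replace(the, NONE, 0.05)
    when error_rate = 10%."""
    rng = random.Random(seed)
    corrupted = []
    for art in source_articles:
        if rng.random() < error_rate:
            corrupted.append(rng.choice([c for c in CANDIDATES if c != art]))
        else:
            corrupted.append(art)
    return corrupted

print(add_uniform_errors(["the", "a", "NONE", "the", "NONE"], error_rate=0.10))
```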

  25. Error correction training paradigm 2: ArticleDistr • ArticleDistr: mimic the distribution of the ESL source articles in training • Example, for the article the: replace(the, a, p1), replace(the, NONE, p2) • A linear program is set up to find p1 and p2 • Constraints: (1) ProbTrain(the) = ProbCzech(the); (2) p1, p2 ≥ minConf, where minConf ∈ {0.02, 0.03, 0.04, 0.05}
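The linear program can be set up as a small feasibility problem over the six replacement probabilities. Below is a sketch using scipy, with made-up article distributions standing in for the native and ESL (e.g. Czech) statistics; the objective and the exact constraint set are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import linprog

# Made-up article distributions (placeholders, not the paper's statistics).
q_native = {"the": 0.30, "a": 0.10, "NONE": 0.60}   # native training text
q_esl    = {"the": 0.26, "a": 0.12, "NONE": 0.62}   # ESL source articles (e.g. Czech)

arts = ["the", "a", "NONE"]
pairs = [(s, t) for s in arts for t in arts if s != t]   # p(s -> t), 6 variables

# Equality constraints: after replacement, ProbTrain(t) = ProbESL(t) for every t,
# where ProbTrain(t) = q_native[t] - outflow(t) + inflow(t).
A_eq, b_eq = [], []
for t in arts:
    row = []
    for (s, u) in pairs:
        if s == t:
            row.append(-q_native[t])     # mass replaced away from t
        elif u == t:
            row.append(q_native[s])      # mass replaced into t
        else:
            row.append(0.0)
    A_eq.append(row)
    b_eq.append(q_esl[t] - q_native[t])

min_conf = 0.02   # minConf: every replacement probability must be at least this
res = linprog(c=np.ones(len(pairs)),     # among feasible solutions, prefer fewer errors
              A_eq=A_eq, b_eq=b_eq,
              bounds=[(min_conf, 1.0)] * len(pairs),
              method="highs")
print(dict(zip(pairs, np.round(res.x, 4))))
```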

  26. Error correction training paradigm 3: ErrorDistr • ErrorDistr: add article mistakes to mimic the error rate and confusion patterns observed in the ESL data • Example, Chinese: error rate 9.2% (the slide shows a table of article confusions by error type)
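A sketch of the ErrorDistr idea: each correct article is corrupted according to a language-specific confusion distribution estimated from the annotated ESL data. The probabilities below are invented placeholders, not the statistics reported for Chinese.

```python
import random

# Hypothetical confusion distributions P(written article | correct article),
# standing in for the statistics estimated from annotated (e.g. Chinese) ESL data.
CONFUSIONS = {
    "the":  {"the": 0.92, "a": 0.02, "NONE": 0.06},
    "a":    {"a": 0.91, "the": 0.04, "NONE": 0.05},
    "NONE": {"NONE": 0.95, "the": 0.03, "a": 0.02},
}

def corrupt(article, rng):
    """Sample a (possibly wrong) surface article for a correct article."""
    r, cum = rng.random(), 0.0
    for candidate, p in CONFUSIONS[article].items():
        cum += p
        if r < cum:
            return candidate
    return article

rng = random.Random(0)
print([corrupt(a, rng) for a in ["the", "a", "NONE", "the", "NONE"]])
```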

  27. Error correction training paradigms: Summary • Key idea: generate artificial errors in native training data • We can use the source article in training as a feature • Important constraints: • Errors mimic the error patterns of the ESL text • Error rate • Distribution of different article confusions

  28. Error correction training paradigms: Costs • 3 error generation methods • Use different knowledge (and have different costs) • Paradigm 1 (error rate in the data) • Paradigm 2 (distribution of articles in the ESL data) – no annotation required • Paradigm 3 (error rate and article confusions) – requires annotated data (the most costly method)

  29. Outline • The error correction task: Problem statement • The typical training paradigm – does selection rather than correction • Selection versus correction • What is the appropriate training paradigm for correction? • The ESL data used in the evaluation • Training paradigms for the error correction task • Key idea • Methods of error generation • Experiments • Conclusions

  30. Experimental setup • Train a TrainClean classifier using the selection paradigm • 3 classifiers are Trained With artificial Errors (the TWE classifiers) • All classifiers are trained in the online learning paradigm with the Averaged Perceptron algorithm (see the sketch below)
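The slides only name the learning algorithm, so the following is a generic minimal multi-class averaged perceptron over sparse binary features (using the standard lazy-averaging trick), not the authors' actual implementation.

```python
from collections import defaultdict

class AveragedPerceptron:
    """Minimal multi-class averaged perceptron over sparse binary features."""

    def __init__(self, labels):
        self.labels = list(labels)
        self.w = defaultdict(float)        # (label, feature) -> weight
        self.totals = defaultdict(float)   # time-weighted update sums, for averaging
        self.t = 0                         # number of examples processed

    def score(self, feats, label):
        return sum(self.w[(label, f)] for f in feats)

    def predict(self, feats):
        return max(self.labels, key=lambda y: self.score(feats, y))

    def update(self, feats, gold):
        self.t += 1
        pred = self.predict(feats)
        if pred != gold:
            for f in feats:
                self._bump((gold, f), +1.0)
                self._bump((pred, f), -1.0)

    def _bump(self, key, delta):
        self.totals[key] += self.t * delta
        self.w[key] += delta

    def average(self):
        # Standard lazy-averaging trick: w_avg ~ w - totals / T.
        for key in list(self.w):
            self.w[key] -= self.totals[key] / max(self.t, 1)

# Usage sketch: one online pass over (features, label) training instances,
# then average the weights before evaluation.
model = AveragedPerceptron(labels=["a", "the", "NONE"])
for feats, label in [({"w1B=of", "w1A=progress"}, "the"),
                     ({"w1B=of", "w1A=cake"}, "a")]:
    model.update(feats, label)
model.average()
```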

  31. Features • Features are based on the 3-word window around the target • Source feature: the writer's original article (TWE systems only) • Example: If we take [a] brief look back, tagged as if-IN we-PRP take-VBP [a] brief-JJ look-NN back-RB • Word features: headWord=look, w3B=if, w2B=we, w1B=take, w1A=brief, etc. • Tag features: p3B=IN, p2B=PRP, etc. • Composite features: w2Bw1B=we-take, w1Bw1A=take-brief, etc.
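A sketch of these feature templates over a POS-tagged window; the input is assumed to be pre-tagged (word, POS) pairs, the headWord feature is omitted since it needs NP chunking, and all names are illustrative.

```python
def article_features(tagged, i, source=None):
    """Features for the article slot at position i in a list of (word, POS) pairs.

    `source` is the writer's original article; it is included as a feature only
    for systems trained with errors (TWE), as described on the slide above.
    """
    def word(j):
        return tagged[j][0].lower() if 0 <= j < len(tagged) else "<PAD>"

    def tag(j):
        return tagged[j][1] if 0 <= j < len(tagged) else "<PAD>"

    feats = {
        # word features in the 3-word window
        f"w3B={word(i - 3)}", f"w2B={word(i - 2)}", f"w1B={word(i - 1)}",
        f"w1A={word(i + 1)}", f"w2A={word(i + 2)}", f"w3A={word(i + 3)}",
        # POS-tag features
        f"p3B={tag(i - 3)}", f"p2B={tag(i - 2)}", f"p1B={tag(i - 1)}",
        # composite (conjunction) features
        f"w2Bw1B={word(i - 2)}-{word(i - 1)}",
        f"w1Bw1A={word(i - 1)}-{word(i + 1)}",
    }
    if source is not None:                     # TWE systems only
        feats.add(f"source={source}")
    return feats

tagged = [("If", "IN"), ("we", "PRP"), ("take", "VBP"), ("a", "DT"),
          ("brief", "JJ"), ("look", "NN"), ("back", "RB")]
print(sorted(article_features(tagged, 3, source="a")))
```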

  32. Performance on the data by Russian speakers • All TWE classifiers outperform the selection-paradigm classifier TrainClean, for all languages • On average, TWE (ErrorDistr) provides the best improvement

  33. Improvement due to training with errors

  34. Conclusions • We argued that the error correction task should be studied in the error correction paradigm rather than the current selection paradigm • The baseline for the error correction task is high • Mistakes are not random • We have proposed a method to generate training data for error correction tasks using artificial errors • The artificial errors mimic error rates and error patterns in the non-native text • The method allows us to train with the proposed candidate, in the paradigm of error correction • The error correction training paradigms are superior to the typical selection training paradigm

  35. Thank you! Questions?
