The Impact of Grammar Enhancement on Semantic Resources Induction

Presentation Transcript


  1. The Impact of Grammar Enhancement on Semantic Resources Induction Luca Dini (dini@celi.it) Giampaolo Mazzini (mazzini@celi.it)

  2. Objectives
  • Bridging from dependency parsing to knowledge representation;
  • Need for an intermediate level: Semantic Role Labelling
    • easily configurable;
    • rule-based;
    • moderately learning-based (MLN).
  • Production of a reasonably large repository of lexical units with assigned frames and mappings to syntax.
  • Objective of this presentation: to measure the impact of grammar enhancement on the derivation of semantic resources.

  3. Plan of This Talk
  • Architecture and Methodology
  • First Evaluation
  • The Effect of Grammar Improvement

  4. Architecture and Methodology

  5. Architecture
  [diagram: a source example annotated with <LU, FRAME> is machine-translated into a target example; source and target are parsed with comparable grammars; the target LU is identified in the translation, frame elements are aligned across the two parses, and dependency extraction yields a <tLU, FRAME, VALENCE> triple]
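
  A minimal runnable sketch of this flow, purely illustrative: every helper below is an invented stub, standing in for the real components named on the next slides (Systran MT, the XIP parsers, the bilingual lexicon), not the actual implementation.

```python
# Minimal sketch of the induction pipeline; every helper here is an
# invented stub, NOT the actual implementation described in the talk.

def machine_translate(text):
    return text  # stub for the Systran MT step

def parse(text, lang):
    return {"lang": lang, "tokens": text.split()}  # stub for the XIP parsers

def identify_target_lu(parsed_it, lu_en, lexicon):
    return lexicon.get(lu_en)  # stub for the bilingual lexicon look-up

def induce(example_en, lu_en, frame, lexicon):
    """English <LU, FRAME> example -> Italian <tLU, FRAME, VALENCE> triple."""
    example_it = machine_translate(example_en)
    parsed_en = parse(example_en, "en")  # both sides are parsed
    parsed_it = parse(example_it, "it")
    t_lu = identify_target_lu(parsed_it, lu_en, lexicon)
    valence = []  # FE alignment + dependency extraction would populate this
    return (t_lu, frame, valence)

print(induce("...foreign policy dispute", "dispute.n", "Quarreling",
             {"dispute.n": "disputa.n"}))
```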

  6. Example
  …foreign policy dispute → …disputa di politica straniera
  <dispute.n, Quarreling> ⇒ <disputa.n, Quarreling, <Issue, Prep[di]>>
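
  The same pair rendered as plain Python data, as a reading aid; the tuple encoding is ours, the contents come straight from the slide.

```python
# English side, straight from FrameNet: <LU, FRAME>.
source_entry = ("dispute.n", "Quarreling")

# Induced Italian side, <tLU, FRAME, VALENCE>: the valence records that the
# Issue frame element is realised as a prepositional phrase headed by "di".
induced_entry = ("disputa.n", "Quarreling", [("Issue", "Prep[di]")])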

  7. Ingredients
  • Bilingual MT system (Systran)
  • Comparable parsers for Italian and English (XIP, Xerox Incremental Parser)
  • Lexicon look-up module (350,000 it <-> en pairs)
  • Word sense disambiguation and clustering
  • Semantic vectors for source and target

  8. Challenges
  • Ambiguity of translation, e.g. write.v -> {scrivere, fare lo scrittore, scolpire, vergare, documentare, comporre, scrivere una lettera, cantare, trascrivere} (see the sketch after this list);
  • Lack of translation;
  • Identification of the semantic head of the Frame Element;
  • Grammatical transformations;
  • Grammar errors.
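
  For the translation-ambiguity challenge, the ingredient list mentions semantic vectors for source and target; one plausible use, assumed here rather than taken from the talk, is to rank the candidate translations by vector similarity.

```python
import math

def cosine(u, v):
    # Cosine similarity between two semantic vectors.
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def best_translation(source_vector, candidates):
    """candidates: {Italian lemma: semantic vector}; picks the lemma whose
    vector lies closest to the English word's vector."""
    return max(candidates, key=lambda w: cosine(source_vector, candidates[w]))

# Toy vectors for three of the nine candidate translations of write.v.
vectors = {"scrivere": [0.9, 0.1], "trascrivere": [0.6, 0.4], "comporre": [0.3, 0.7]}
print(best_translation([0.8, 0.2], vectors))  # -> scrivere
```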

  9. Results (1)

  10. Results (2)

  11. Results (3)

  12. Evaluation

  13. Evaluation (1): SRL (1)
  • Manual annotation of the TUT corpus (Lesmo et al., 2002):
    • 1,000 sentences;
    • corpus annotated only with frame-bearing induced LUs;
    • selection of the correct frame (if any);
    • FE annotation of all dependents;
    • export in CoNLL format (an illustrative excerpt follows).
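
  An illustrative CoNLL-style excerpt for the example of slide 6; the talk does not specify the exact column layout of the export, so the columns here (ID, FORM, LEMMA, POS, HEAD, DEPREL, FRAME, ROLE) are an assumption in the spirit of the CoNLL shared-task formats.

```
1  disputa    disputa    NOUN  0  root  Quarreling  _
2  di         di         PREP  1  mod   _           _
3  politica   politica   NOUN  2  pobj  _           Issue
4  straniera  straniero  ADJ   3  mod   _           _
```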

  14. Evaluation (1): SRL (2)
  • Second step: “parse” the corpus for SRL:
    • no real parser;
    • a very simple algorithm for assignment;
    • random choice in case of ambiguity.
  • Results, using the F-measure metrics of Toutanova et al. (2008):
    • a precision of 0.53, a recall of 0.33 and a consequent F-measure of 0.41 (checked below).
  • Compares poorly with state-of-the-art SRL.
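
  A quick check that 0.41 is indeed the harmonic mean (F-measure) of the precision and recall above:

```python
# F-measure as the harmonic mean of precision and recall.
p, r = 0.53, 0.33
print(round(2 * p * r / (p + r), 2))  # -> 0.41
```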

  15. Evaluation (2)
  • “Standard” corpus annotation:
    • 20 sentences × 20 lexical units (no ambiguity);
    • creation of a DB of <LUnit, Frame, Valence> triples.
  • Comparison with the induced resources, based on standard precision and recall metrics:
    • a hit counts as positive only if part of speech, grammatical function and frame element all match (a sketch follows this slide);
    • a “boost” was assigned on the basis of the importance of the valence population (based on both the number and variety of realizations);
    • global precision and recall are the arithmetic mean of all weights:
      • precision: 0.65
      • recall: 0.41
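
  A hedged sketch of this matching scheme. Only the all-three-must-match hit criterion comes from the slide; the talk does not give the exact boost formula, so weight() below (number of realisations plus their variety) is our assumption.

```python
# Weighted precision/recall over <LUnit, Frame, Valence> triples; the
# weight ("boost") is an ASSUMED stand-in for the talk's valence-population
# importance, not the actual formula used.

def weight(valence):
    realisations = valence["realisations"]
    return len(realisations) + len(set(realisations))  # number + variety

def weighted_pr(gold, induced):
    hit = lambda v, pool: any(v["triple"] == g["triple"] for g in pool)
    precision = (sum(weight(v) for v in induced if hit(v, gold))
                 / sum(weight(v) for v in induced))
    recall = (sum(weight(g) for g in gold if hit(g, induced))
              / sum(weight(g) for g in gold))
    return precision, recall

# A triple is (POS, grammatical function, frame element).
gold = [{"triple": ("n", "subj", "Issue"), "realisations": ["NP", "NP", "PP"]}]
print(weighted_pr(gold, gold))  # perfect induction -> (1.0, 1.0)
```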

  16. The Effects of Grammar Improvement

  17. Errors
  • No translation for a lexical unit (7,815);
  • Absence of examples in the source FrameNet (4,922);
  • No translated example contains the candidate translation(s) of the lexical unit (1,736);
  • No head could be identified for the English frame element realization (parse error or difficult structure, e.g. coordination) (6,191);
  • The translation of the semantic head of the frame element, or of the frame-bearing head, could not be matched in the Italian example (99,808);
  • The semantic heads of both the lexical unit and the frame element are found in the Italian example, but the parser could not find any dependency between them (94,004).

  18. The Enhancement Phase
  • Improvements concerned only one side of the parsing mechanism, i.e. the Italian dependency grammar.
  • Development:
    • using the XIP IDE (Mokhtar et al., 2001);
    • the development period lasted about six months (Testa et al., 2009);
    • it was based on iterative verification on different corpora (TUT/ISST).
  • Improvement in LAS: 40% -> 70% (see the reference implementation below).
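
  For reference, LAS (labelled attachment score) counts a token as correct only when both its predicted head and its dependency label match the gold annotation; a minimal, standard implementation:

```python
# Labelled attachment score: a token is correct only when both its head
# index and its dependency label agree with the gold annotation.

def las(gold, predicted):
    """gold, predicted: lists of (head_index, deprel), one pair per token."""
    correct = sum(1 for g, p in zip(gold, predicted) if g == p)
    return correct / len(gold)

gold = [(2, "det"), (3, "subj"), (0, "root")]
pred = [(2, "det"), (3, "obj"), (0, "root")]
print(f"{las(gold, pred):.2f}")  # -> 0.67 (one label error out of three)
```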

  19. Consequences
  • The architecture was kept exactly the same and the source code “frozen” during the six-month period.
  • Results

  20. Comments
  • Both evaluation types show an increase in precision of about 6%.
  • Strangely, recall stays almost constant in evaluation 1, while it increases considerably in evaluation 2.
  • Possible explanations:
    • unmapped phenomena;
    • a “random” effect due to the small evaluation set.

  21. Issues & Conclusions
  • Was it worth six months’ labour?
    • Probably not, if grammar enhancement is aimed only at the acquisition of the resources.
    • Probably yes, if it is independently motivated.
  • In general, evaluating the impact of lower-level modules on high-level applications is crucial for strategic choices, and a rather “neglected” aspect.
  • We need to understand the correct trade-off.
  • Convergence: the IFRAME project (http://sag.art.uniroma2.it/iframe/doku.php)

  22. Thank You!
