
Generating Impact-Based Summaries for Scientific Literature


Presentation Transcript


  1. Generating Impact-Based Summaries for Scientific Literature Qiaozhu Mei, ChengXiang Zhai University of Illinois at Urbana-Champaign

  2. Motivation • Fast growth of publications: >100k papers in DBLP; >10 references per paper • How to summarize a scientific paper? • Author’s view: abstracts, introductions • May not match what readers actually received • May change over time • Reader’s view: the impact of the paper • Impact Factor: a number only • A summary of the content? • (Slide contrast: author’s view — “proof of xxx; new definition of xxx; apply xxx technique” — vs. the reader’s view 20 years later — “state-of-the-art algorithm; evaluation metric”)

  3. What should an impact summary look like?

  4. Citation Contexts ⇒ Impact, but… • Describe how other authors view/comment on the paper • Imply the impact • Similar to anchor text on the web graph, but: • Usually more than one sentence (informative) • Usually mixed with discussion/comparison of other papers (noisy) … They have been also successfully used in part of speech tagging [7], machine translation [3, 5], information retrieval [4, 20], transliteration [13] and text summarization [14]. ... For example, Ponte and Croft [20] adopt a language modeling approach to information retrieval. …

  5. Our Definition of Impact Summary • Target: extractive summary (pick sentences) of the impact of a paper • Paper content: Abstract, Introduction, Content, References • Citation contexts, e.g., “… Ponte and Croft [20] adopt a language modeling approach to information retrieval. …”, “… probabilistic models, as well as to the use of other recent models [19, 21], the statistical properties …” • Author-picked sentences: good for a summary, but don’t reflect the impact • Reader-composed sentences: a good signal of impact, but too noisy to serve as a summary • Solution: citation contexts ⇒ infer impact; original content ⇒ summary

  6. Rest of this Talk • A feasibility study: • A language-modeling-based approach • Sentence retrieval • Estimation of impact language models • Experiments • Conclusion

  7. Language Modeling in Information Retrieval [Diagram: documents d1 … dN each get a document LM, smoothed using the collection LM; the query q gets a query LM; documents are ranked by negative KL divergence between the two models.]

  8. Impact-based Summarization as Sentence Retrieval [Diagram: sentences s1 … sN of document D each get a sentence LM; an impact LM θI is estimated from D and its citation contexts c1 … cM; sentences are ranked by negative KL divergence, and the top-ranked sentences form the summary.] • Key problem: estimating θI
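The ranking step above can be sketched in code. This is a minimal illustration, not the authors' implementation: sentences are scored by the cross-entropy term of the negative KL divergence against a given impact LM, with a Dirichlet-smoothed sentence LM; all names and parameter values are assumptions.

```python
import math
from collections import Counter

def dirichlet_lm(tokens, vocab, ref, mu=100.0):
    # Dirichlet-smoothed unigram LM over a fixed vocabulary:
    # p(w|s) = (c(w, s) + mu * p(w|REF)) / (|s| + mu)
    counts = Counter(tokens)
    n = len(tokens)
    return {w: (counts[w] + mu * ref[w]) / (n + mu) for w in vocab}

def rank_sentences(sentences, impact_lm, ref, mu=100.0):
    # Score each sentence s by sum_w p(w|theta_I) * log p(w|theta_s).
    # Ranking by this cross-entropy term is equivalent to ranking by
    # negative KL divergence -D(theta_I || theta_s), since the entropy
    # of theta_I is constant across sentences.
    vocab = set(ref)
    scored = []
    for s in sentences:
        s_lm = dirichlet_lm(s.lower().split(), vocab, ref, mu)
        score = sum(p * math.log(s_lm[w]) for w, p in impact_lm.items() if p > 0)
        scored.append((score, s))
    scored.sort(key=lambda t: t[0], reverse=True)
    return [s for _, s in scored]
```

Top-ranked sentences would then be concatenated (up to a length budget) to form the extractive summary.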

  9. Estimating Impact Language Models • Interpolation of the document language model and the citation-context language models (document D with contexts c1 … cM) • Constant-coefficient interpolation: fixed λ • Dirichlet smoothing: λ adapts to the amount of citation text • Set per-context weights λj with features of cj: f1(cj) = |cj|, and…
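The slide's interpolation (whose formula images are lost in this transcript) can be sketched as follows; this is an assumed reconstruction, not the paper's exact estimator. The Dirichlet-style coefficient lam = |C| / (|C| + mu) grows with the total amount of citation text, and the per-context weights are the hook for length, authority, and proximity features.

```python
from collections import Counter

def mle(tokens):
    # maximum-likelihood unigram distribution
    n = len(tokens)
    return {w: k / n for w, k in Counter(tokens).items()}

def impact_lm(doc_tokens, contexts, mu=1000.0, weights=None):
    # Interpolate the document LM with a weighted mixture of
    # citation-context LMs:
    #   p(w|theta_I) = (1 - lam) p(w|D) + lam * sum_j w_j p(w|c_j)
    # with lam = |C| / (|C| + mu) (Dirichlet-style coefficient).
    # weights (authority, proximity, |c_j|, ...) default to uniform.
    if weights is None:
        weights = [1.0] * len(contexts)
    doc = mle(doc_tokens)
    total_w = sum(weights)
    ctx_len = sum(len(c) for c in contexts)
    lam = ctx_len / (ctx_len + mu)
    mix = Counter()
    for ctx, w in zip(contexts, weights):
        for word, p in mle(ctx).items():
            mix[word] += (w / total_w) * p
    vocab = set(doc) | set(mix)
    return {w: (1 - lam) * doc.get(w, 0.0) + lam * mix.get(w, 0.0)
            for w in vocab}
```

Note that words appearing only in citation contexts still get probability mass, which is what pulls impact-bearing sentences up in the ranking.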

  10. Specific Feature – Citation-based Authority • Assumption: a high-authority paper makes more trustworthy comments (citation contexts) • Weight such contexts more in the impact language model • Authority ⇒ PageRank on the citation graph
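A minimal power-iteration PageRank over a citation graph, as a sketch of how the authority weights could be computed (function name and parameters are illustrative):

```python
def pagerank(outlinks, d=0.85, iters=50):
    # Power-iteration PageRank on a directed citation graph.
    # outlinks: dict mapping each paper to the list of papers it cites.
    nodes = set(outlinks)
    for cited in outlinks.values():
        nodes.update(cited)
    n = len(nodes)
    pr = {u: 1.0 / n for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / n for u in nodes}
        for u in nodes:
            out = outlinks.get(u, [])
            if out:
                share = d * pr[u] / len(out)
                for v in out:
                    new[v] += share
            else:
                # dangling paper (no outgoing citations):
                # spread its mass uniformly
                for v in nodes:
                    new[v] += d * pr[u] / n
        pr = new
    return pr
```

The resulting score of a citing paper would then scale the weight of its citation context in the impact LM.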

  11. Specific Feature – Citation Context Proximity • Weight citation sentences according to their proximity to the citation label • k = distance to the citation label … There has been a lot of effort in applying the notion of language modeling and its variations to other problems. For example, Ponte and Croft [20] adopt a language modeling approach to information retrieval. They argue that much of the difficulty for IR lies in the lack of an adequate indexing model. Instead of making prior parametric assumptions about the similarity of documents, they propose a non-parametric approach to retrieval based on probabilistic language modeling. Empirically, their approach significantly outperforms traditional tf*idf weighting on two different collections and query sets. …
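One simple way to realize the proximity weighting is an exponential decay in the sentence distance k; the decay form and the alpha value here are illustrative assumptions, not taken from the paper:

```python
def proximity_weights(num_sentences, cite_index, alpha=0.5):
    # Weight each sentence in a citation context by its distance k to
    # the sentence holding the citation label: weight = alpha ** k.
    # (alpha and the exponential form are hypothetical choices.)
    return [alpha ** abs(i - cite_index) for i in range(num_sentences)]
```

In the example context above, the sentence containing "[20]" would get weight 1, and the surrounding discussion sentences progressively less.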

  12. Experiments • Gold standard: • Human-generated summaries • 14 most cited papers in SIGIR • Baselines: • Random; LEAD (likely to cover abstract/introduction); • MEAD – single document; • MEAD – document + citations (multi-document) • Evaluation metrics: • ROUGE-1, ROUGE-L (unigram co-occurrence; longest common subsequence)
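For concreteness, ROUGE-1 recall reduces to clipped unigram overlap with the reference summary; a minimal sketch (illustrative, not the official ROUGE toolkit):

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    # ROUGE-1 recall: clipped unigram overlap divided by the number of
    # unigrams in the reference summary.
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum(min(cand[w], ref[w]) for w in ref)
    return overlap / sum(ref.values())

# e.g. rouge1_recall("the cat sat", "the cat sat on the mat") -> 0.5
```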

  13. Basic Results

  14. Component Study • Impact language model: • Document LM << Citation Context LM << Interpolation (Doc LM, Cite LM) • Dirichlet interpolation > constant coefficient

  15. Component Study (Cont.) • Authority and proximity • Both PageRank and proximity improve performance • PageRank + proximity together improve only marginally • Q: how best to combine PageRank and proximity?

  16. Non-impact-based Summary Paper = “A study of smoothing methods for language models applied to ad hoc information retrieval” 1. Language modeling approaches to information retrieval are attractive and promising because they connect the problem of retrieval with that of language model estimation, which has been studied extensively in other application areas such as speech recognition. 2. The basic idea of these approaches is to estimate a language model for each document, and then rank documents by the likelihood of the query according to the estimated language model. 3. On the one hand, theoretical studies of an underlying model have been developed; this direction is, for example, represented by the various kinds of logic models and probabilistic models (e.g., [14, 3, 15, 22]). A good big picture of the field (LMIR), but not about the contribution of the paper (smoothing in LMIR)

  17. Impact-based Summary Paper = “A study of smoothing methods for language models applied to ad hoc information retrieval” 1. Figure 5: Interpolation versus backoff for Jelinek-Mercer (top), Dirichlet smoothing (middle), and absolute discounting (bottom). 2. Second, one can de-couple the two different roles of smoothing by adopting a two stage smoothing strategy in which Dirichlet smoothing is first applied to implement the estimation role and Jelinek-Mercer smoothing is then applied to implement the role of query modeling 3. We find that the backoff performance is more sensitive to the smoothing parameter than that of interpolation, especially in Jelinek-Mercer and Dirichlet prior. Specific to smoothing LM in IR; especially for the concrete smoothing techniques (Dirichlet and JM)

  18. Related Work • Text summarization (extractive) • E.g., Luhn ’58; McKeown and Radev ’95; Goldstein et al. ’99; Kraaij et al. ’01 (using language modeling) • Technical paper summarization • Paice and Jones ’93; Saggion and Lapalme ’02; Teufel and Moens ’02 • Citation context • Ritchie et al. ’06; Schwartz et al. ’07 • Anchor text and hyperlink structure • Language Modeling for information retrieval • Ponte and Croft ’98; Zhai and Lafferty ’01; Lafferty and Zhai ’01

  19. Conclusion • Novel problem: impact-based summarization • Language modeling approach • Citation contexts ⇒ impact language model • Accommodates authority and proximity features • A feasibility study rather than an optimized system • Future work • Optimize features/methods • Large-scale evaluation

  20. Thanks!

  21. Feature Study • What we have explored: • Unigram language models – document; citation context • Length features • Authority features • Proximity features • Position-based re-ranking • What we haven’t done: • Redundancy removal (diversity) • Deeper NLP features; n-gram features • Learning to weight features

  22. Scientific Literature with Citations [Diagram: papers linked by citation edges; each citing paper contributes a citation context around the citation label, e.g.:] • “… While the statistical properties of text corpora are fundamental to the use of probabilistic models, as well as to the use of other recent models [19, 21], the statistical properties …” • “… They have been also successfully used in part of speech tagging [7], machine translation [3, 5], information retrieval [4, 20], transliteration [13] and text summarization [14]. ... For example, Ponte and Croft [20] adopt a language modeling approach to information retrieval. …”

  23. Language Modeling in Information Retrieval • Estimate document language models • Unigram multinomial distribution over words • θd: {p(w|d)} • Rank documents by query likelihood • R(d, q) ∝ p(q|θd), a special case of negative KL divergence: R(d, q) ∝ −D(θq || θd) • Smooth the document language model • Interpolation-based: p(w|d) = (1 − λ) pML(w|d) + λ p(w|REF) • Dirichlet smoothing performs well empirically
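The query-likelihood ranking with the interpolation smoothing above can be sketched directly (Jelinek-Mercer form with a fixed λ; names and values are illustrative):

```python
import math
from collections import Counter

def query_log_likelihood(query, doc_tokens, collection_tokens, lam=0.5):
    # Jelinek-Mercer smoothed query likelihood:
    #   p(w|d) = (1 - lam) * pML(w|d) + lam * p(w|REF)
    # Documents are ranked by sum_w log p(w|d) over query words.
    doc = Counter(doc_tokens)
    ref = Counter(collection_tokens)
    nd, nr = len(doc_tokens), len(collection_tokens)
    score = 0.0
    for w in query:
        p = (1 - lam) * doc[w] / nd + lam * ref[w] / nr
        score += math.log(p) if p > 0 else float("-inf")
    return score
```

Replacing the fixed λ with λ = μ / (|d| + μ) and p(w|REF) weighted by μ gives the Dirichlet variant mentioned on the slide.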
