
ICASSP Paper Survey


Presentation Transcript


  1. ICASSP Paper Survey
  Presenter: Chen Yi-Ting

  2. • Improved Spoken Document Retrieval With Dynamic Key Term Lexicon and Probabilistic Latent Semantic Analysis (PLSA)
  • Improved Spoken Document Summarization Using Probabilistic Latent Semantic Analysis (PLSA)
  • Topic and Stylistic Adaptation for Speech Summarisation
  • Automatic Sentence Segmentation of Speech for Automatic Summarization

  3. Improved Spoken Document Retrieval With Dynamic Key Term Lexicon and Probabilistic Latent Semantic Analysis (PLSA)

  4. In this paper, a "dynamic key term lexicon", automatically extracted from the ever-changing document archive, is used as an extra feature set in the retrieval task.

  5. • An important part of the proposed approach is the automatic key term extraction from the archives.
  • The second important part of the proposed approach is key term recognition from the user's spoken query.
  • Special approaches were developed to correctly recognize the key terms in the user query, including emphasizing the possible key term candidates during the search through the phone lattice, and key term matching using a phone similarity matrix with two different distance measures (a sketch of this matching follows below).
  • Two different versions of the lexicon can be used: a general lexicon including all terms except the deleted stop terms, and the other based on the much smaller but semantically rich key term lexicon.
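A minimal sketch of the dynamic-programming key term matching just described, assuming the phone similarity matrix is exposed as a pairwise scoring function; all names here are illustrative, not from the paper:

# Dynamic-programming (Needleman-Wunsch-style) alignment of a decoded
# phone sequence against a key term's phone sequence. `sim` stands in
# for the paper's phone similarity matrix; `gap_cost` penalizes
# insertions and deletions.
def match_score(decoded, key_phones, sim, gap_cost=-1.0):
    n, m = len(decoded), len(key_phones)
    # dp[i][j] = best alignment score of decoded[:i] vs key_phones[:j]
    dp = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        dp[i][0] = dp[i - 1][0] + gap_cost
    for j in range(1, m + 1):
        dp[0][j] = dp[0][j - 1] + gap_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            dp[i][j] = max(
                dp[i - 1][j - 1] + sim(decoded[i - 1], key_phones[j - 1]),
                dp[i - 1][j] + gap_cost,   # phone of the query skipped
                dp[i][j - 1] + gap_cost,   # phone of the key term skipped
            )
    return dp[n][m]

# A key term is accepted when the length-normalized score clears a
# threshold, mirroring the thresholded matching mentioned on slide 7.
def is_key_term(decoded, key_phones, sim, threshold=0.5):
    return match_score(decoded, key_phones, sim) / max(len(key_phones), 1) >= threshold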

  6. Named Entity Recognition
  • The first approach is to recognize the NEs from a text document (or the transcription of a spoken document) using global information.
  • The second special approach, used for spoken documents, is to recover the OOV NEs using external knowledge.
  • Key term extraction is then performed by term entropy based on PLSA (see the sketch below).
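A short sketch of the entropy-based key term ranking in the last bullet, assuming PLSA has already produced a topic posterior P(Tk|tj) for each term; the function names and the low-entropy-first ranking direction are assumptions:

import math

# Entropy of the topic distribution P(Tk|tj); a peaked distribution
# (low entropy) suggests a topic-specific term, i.e. a key term candidate.
def term_entropy(topic_posteriors):
    return -sum(p * math.log(p) for p in topic_posteriors if p > 0.0)

# Keep the n best-ranked terms; slide 7 reports picking the top 2000
# terms ranked by term entropy.
def top_key_terms(posterior_by_term, n=2000):
    return sorted(posterior_by_term, key=lambda t: term_entropy(posterior_by_term[t]))[:n]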

  7. Key term recognition from the user query
  • The user's spoken query is transcribed not only into a word graph, as in the usual recognition process, but into a phone lattice as well.
  • The phone lattice is then matched against the phone sequences of the key terms in the dynamic lexicon using dynamic programming (with a threshold).
  • The price paid here is, of course, that the overall word error rate may be increased.
  • Experimental conditions:
  • Word error rate 27%, character error rate 14.29%, syllable error rate 8.91%
  • 32 topics were used in PLSA modeling
  • 1,000 news stories (test set) and 50 queries
  • The length of the queries is roughly 8-11 words
  • A lexicon of 61,521 words was used
  • A total of 1,708 NEs were obtained (from 9,836 news stories)
  • The top 2,000 terms ranked by term entropy were picked

  8. Experimental Results

  9. Improved Spoken Document Summarization Using Probabilistic Latent Semantic Analysis (PLSA)

  10. • Two useful measures, referred to as topic significance and term entropy in this paper, are proposed based on the PLSA modeling to determine the terms, and thus the sentences, important for the document, which can then be used to construct the summary.
  • Each sentence S is scored by a weighted combination of term-level and sentence-level measures:
  Score(S) = Σ_{tj ∈ S} [λ1 s(tj) + λ2 l(tj) + λ3 c(tj) + λ4 g(tj)] + λ5 b(S)
  where s(tj) is some statistical measure (such as TF/IDF or the like), l(tj) is a linguistic measure (e.g., named entities), c(tj) is calculated from the confidence score, g(tj) is the N-gram score for the term tj, b(S) is calculated from the grammatical structure of the sentence S, and λ1, λ2, λ3, λ4 and λ5 are weighting parameters.
  • The statistical measure s(tj), which has been proved extremely useful, is called the "significance score".
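A minimal sketch of this weighted sentence scoring, assuming the five measures are available as callables; the names are illustrative, not the paper's code:

# Score a sentence as the weighted sum of its term-level measures plus
# the weighted sentence-level structure score, matching the combination
# given above.
def sentence_score(terms, s, l, c, g, b_S, lam=(1.0, 1.0, 1.0, 1.0, 1.0)):
    lam1, lam2, lam3, lam4, lam5 = lam
    term_part = sum(lam1 * s(t) + lam2 * l(t) + lam3 * c(t) + lam4 * g(t) for t in terms)
    return term_part + lam5 * b_S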

  11. Topic significance
  • The topic significance score of a term tj with respect to a topic Tk is used as the statistical measure.
  • Term entropy, with a scaling factor, is used as the second measure.
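A plausible form of the term entropy measure under PLSA, assuming the scaling factor mentioned above is the usual 1/log K normalization (a hedged reconstruction, not the paper's verbatim formula):

E(tj) = -(1 / log K) Σ_{k=1..K} P(Tk|tj) log P(Tk|tj)

Here K is the number of PLSA topics, the 1/log K factor scales the entropy into [0, 1], and a lower E(tj) indicates a more topic-specific, and hence more significant, term.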

  12. Experiments configuration
  • The test corpus included 200 broadcast news stories.
  • Word accuracy 66.46%, character accuracy 74.95%, syllable accuracy 81.70%
  • Sentence recall/precision is the evaluation metric for automatic summarization of documents (a sketch follows below).
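A sketch of the sentence recall/precision evaluation, assuming the automatic and manual summaries are compared as sets of selected sentence IDs; an illustrative helper, not the paper's code:

# Recall: fraction of reference summary sentences that the system
# selected. Precision: fraction of selected sentences that appear in
# the reference summary.
def sentence_recall_precision(selected, reference):
    selected, reference = set(selected), set(reference)
    hits = len(selected & reference)
    recall = hits / len(reference) if reference else 0.0
    precision = hits / len(selected) if selected else 0.0
    return recall, precision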

  13. Experiments configuration

  14. Topic and Stylistic Adaptation for Speech Summarisation

  15. • In this paper they investigate LiM topic and stylistic adaptation using combinations of LiMs, each trained on different adaptation data.
  • The focus is on adapting the linguistic component, which is unrelated to the language model used during the recognition process, to make it more suited to the summarisation task.
  • Experiments were performed both on spontaneous speech, using 9 talks from the Translanguage English Database (TED) corpus, and on speech read from text, using 5 talks from CNN broadcast news from 1995.
  • The measure of summary quality used in this paper is summarisation accuracy (SumACCY).

  16. Automatic speech summarisation system

  17. Summarisation Method
  • Important sentences are first extracted according to the following score for each sentence, obtained from the automatic speech recognition (ASR) output.
  • Starting with a baseline LiM (LiMB), LiM adaptation is performed by linearly interpolating the baseline model with other component models trained on different data:
  P(w|h) = λB P_LiMB(w|h) + Σ_a λa P_LiMa(w|h), where the interpolation weights sum to one.
  • Different types of component LiMs are built, coming from different sources of data, and using either unigram, bigram or trigram information (see the sketch below).
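A minimal sketch of the linear interpolation step, assuming each component LiM is exposed as a callable returning P(w|h); the interface is an assumption:

# Adapted LiM probability: a convex combination of the baseline model
# and the component models, with weights summing to one.
def interpolated_prob(word, history, component_models, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "interpolation weights must sum to 1"
    return sum(w * p(word, history) for p, w in zip(component_models, weights))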

  18. Experimental Setup
  • Due to lack of data, the talks had to be used both for development and evaluation with a rotating form of cross-validation: all talks but one are used for development, the remaining talk being used for testing (sketched below).
  • Summaries of the development talks are generated automatically by the system using different sets of parameters.
  • For the TED data, two types of component linguistic models are used:
  • The first type is built on the small corpus of hand-made summaries, made for the same summarisation ratio.
  • The second type is built from the paper in the conference proceedings for the talk to be summarised.
  • For the CNN data, one type of component linguistic model is used: the small corpus of hand-made summaries.
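The rotating cross-validation in the first bullet amounts to leave-one-out over talks; a short sketch with illustrative names:

# Yield (development talks, test talk) pairs: each talk in turn is held
# out for testing while the rest are used to tune parameters.
def rotating_splits(talks):
    for i in range(len(talks)):
        yield talks[:i] + talks[i + 1:], talks[i]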

  19. Experimental Setup
  • Reference results: random summarisation, the human summaries, and the baseline, for both TED and CNN.

  20. Automatic Sentence Segmentation of Speech for Automatic Summarization

  21. • This paper presents an automatic sentence segmentation method for an automatic speech summarization system.
  • The segmentation method is based on combining word- and class-based statistical language models to predict sentence and non-sentence boundaries.
  • Both the effect of the segmentation on the sentence segmentation system itself and its effect on the summarization accuracy are studied.
  • To judge the quality of the sentence segmentation, the F-measure metric is used (a sketch follows below).
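A sketch of the F-measure computation over sentence boundaries, assuming hypothesized and reference boundary positions are compared as sets; illustrative only, with beta = 1 giving the usual F1:

# F-measure over boundary positions: combination of boundary precision
# and recall, weighted by beta.
def boundary_f_measure(hyp_boundaries, ref_boundaries, beta=1.0):
    hyp, ref = set(hyp_boundaries), set(ref_boundaries)
    hits = len(hyp & ref)
    precision = hits / len(hyp) if hyp else 0.0
    recall = hits / len(ref) if ref else 0.0
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1 + beta ** 2) * precision * recall / (beta ** 2 * precision + recall)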

  22. Automatic sentence segmentation
  • This probability was combined with the matching recursive path probability.
  • Three LMs were used in sentence segmentation: two word-based LMs and a class-based LM.
  • The LMs were combined by linear interpolation: P(w|h) = Σ_{m=1..3} λm Pm(w|h).
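A sketch of how the interpolated model could be used to hypothesize boundaries, assuming each LM is a callable giving P(token|history) and that a boundary token "<s>" is scored at every word gap; the decision rule and token are assumptions, not the paper's exact procedure:

# Hypothesize a sentence boundary after position i whenever the
# interpolated probability of the boundary token exceeds a threshold.
def hypothesize_boundaries(words, lms, weights, boundary="<s>", threshold=0.5):
    boundaries = []
    for i in range(1, len(words)):
        history = tuple(words[max(0, i - 2):i])  # trigram-style history
        p = sum(w * lm(boundary, history) for lm, w in zip(lms, weights))
        if p > threshold:
            boundaries.append(i)
    return boundaries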

  23. Experimental results
