Retrieval and Feedback Models for Blog Feed Search (SIGIR 2008)
Advisor: Dr. Koh Jia-Ling
Speaker: Chou-Bin Fan
Date: 2009.10.05
Outline
• Introduction
• Approach
• Retrieval Models: Large Document Model, Small Document Models
• Query Expansion Models: Pseudo-Relevance Feedback, Wikipedia PRF Query Expansion, Wikipedia Link-based Query Expansion
• Conclusion
Introduction
• What is a feed? A blog is published as HTML pages containing posts; its feed is an XML document containing the corresponding entries:
<xml>
<feed>
  <entry>
    <author>Peter …</>
    <title>Good, Evil…</>
    <content>I’ve said…</>
  </entry>
  <entry>
    <author>Peter …</>
    <title>Agreeing…</>
    <content>Some peo…</>
  </entry>
  …
Introduction
• What is the task "feed search"? Ranking feeds (collections of entries) in response to a user's query Q. A relevant feed should have a principal, recurring interest in Q.
Introduction
• Challenges in Feed Search:
1. How does relevance at the entry level correspond to relevance at the feed level?
2. Can we favor entries close to the central topic of the feed?
3. Feeds are noisy (spam blogs, spam and off-topic comments).
• Retrieval models: models for ranking collections that account for the topical diversity within each collection (address Challenges 1 and 2).
• Feedback models: overcome noise in the blog collection and address multifaceted information needs (address Challenge 3).
Retrieval models
• Challenge: ranking topically diverse collections.
• Representation: a feed F is a collection of entries E.
• Model the topical relationship between entries.
Retrieval models – Large Document (Feed) Model
[Figure: the query Q is run against a collection of feed documents, where each feed's entries are concatenated into one large XML document, producing a ranked list of feeds.]
Retrieval models – Large Document (Feed) Model
• Each feed is scored by query likelihood under the feed's language model:
P(Q|F) = ∏_{q∈Q} P(q|θF)
where θF is the (smoothed) language model of the feed's concatenated entries.
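To make this concrete, here is a minimal Python sketch of query-likelihood scoring, assuming Dirichlet smoothing (a standard choice; these slides do not say which smoothing the authors used, only that its parameters were tuned by cross-validation). The function name, the mu default, and the unseen-term floor are illustrative.

```python
import math
from collections import Counter

def query_likelihood(query_terms, doc_terms, collection_tf, collection_len, mu=2500):
    """Return log P(Q|D) under a Dirichlet-smoothed document language model:

        P(q | thetaD) = (tf(q, D) + mu * P(q | C)) / (|D| + mu)

    For the large document (feed) model, doc_terms is the concatenation
    of all entries in the feed.
    """
    tf = Counter(doc_terms)
    doc_len = len(doc_terms)
    log_score = 0.0
    for q in query_terms:
        p_c = collection_tf.get(q, 0) / collection_len  # background model P(q|C)
        p_c = p_c or 1e-10                              # illustrative floor for unseen terms
        log_score += math.log((tf[q] + mu * p_c) / (doc_len + mu))
    return log_score
```

Feeds are then ranked by this score computed over their concatenated entries.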
Retrieval models – Large Document (Feed) Model
• Advantages: a straightforward application of existing retrieval techniques.
• Potential pitfalls: large entries dominate a feed's language model; ignores the relationship among entries.
Retrieval models – Small Document (Entry) Model
[Figure: the query Q is run against the collection of individual entries; entries are ranked by P(Q|E), and a rank aggregation function combines entry scores into a ranked list of feeds.]
Retrieval models – Small Document (Entry) Model
• ReDDE, a federated resource ranking algorithm, scores a document collection Cj by the estimated number of relevant documents in that collection, where |Cj| is an estimate of the total number of documents in Cj.
• Adapted to feed search, this yields the small document model (Equation 2):
P(F|Q) ∝ P(F) · Σ_{E∈F} P(Q|E) · P(E|F)
Retrieval models – Small Document (Entry) Model
• Query Likelihood: each entry is scored by P(Q|E) = ∏_{q∈Q} P(q|θE), where θE is the smoothed language model of the entry.
Retrieval models – Small Document (Entry) Model
• Entry Centrality P(E|F) serves two purposes:
1. To balance scoring across feeds with differing numbers of entries. Without this balancing, the summation in the small document model (Equation 2) would favor longer feeds.
2. To favor some entries over others based on their similarity to the feed's language model. In general, any measure of similarity could be used here.
• Geometric Mean: φGM(E) = ∏_w P(w|θF)^P(w|θE), normalized over the feed's entries so that P(E|F) = φ(E) / Σ_{E'∈F} φ(E').
• Uniform: PUNIF(E|F) = 1/NF, where NF is the number of entries in feed F.
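A minimal sketch of the geometric-mean centrality, assuming the φGM form above; computed in log space it is the exponentiated negative cross-entropy of the feed model under the entry model. The language models are assumed to be pre-smoothed term-to-probability dicts, and the 1e-10 floor is an illustrative guard.

```python
import math

def gm_centrality(entry_lm, feed_lm):
    """phi_GM(E) = prod_w P(w|thetaF) ** P(w|thetaE), computed as
    exp( sum_w P(w|thetaE) * log P(w|thetaF) ).  Both arguments are
    smoothed unigram models mapping term -> probability."""
    return math.exp(sum(p_e * math.log(feed_lm.get(w, 1e-10))
                        for w, p_e in entry_lm.items()))

def entry_centrality(entry_lms, feed_lm):
    """Normalize phi over the feed's entries to obtain P(E|F)."""
    phis = [gm_centrality(lm, feed_lm) for lm in entry_lms]
    z = sum(phis) or 1.0
    return [phi / z for phi in phis]
```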
Retrieval models – Small Document (Entry) Model
• Advantages: controls for differing entry length; models the topical relationship among entries.
• Disadvantages: the centrality computation is slower.
• Feed Prior:
PLOG(F) ∝ log(NF) — grows logarithmically with the feed size.
PUNIF(F) ∝ 1.0 — does not influence the document ranking at all.
This not only improves speed but also performance.
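Putting the pieces together, a hedged sketch of the small document score from Equation 2. It takes per-entry log P(Q|E) values (e.g. from the query_likelihood sketch earlier) and uses the uniform centrality for brevity; substituting the entry_centrality() weights above gives the φGM variant.

```python
import math

def score_feed(entry_log_likelihoods, prior="unif"):
    """P(F|Q) proportional to P(F) * sum_{E in F} P(Q|E) * P(E|F),
    with uniform centrality P(E|F) = 1/N_F.

    entry_log_likelihoods: log P(Q|E) for every entry E in the feed.
    prior: "unif" for P_UNIF(F) = 1.0, "log" for P_LOG(F) = log(N_F).
    """
    n = len(entry_log_likelihoods)
    # Exponentiate per-entry log-likelihoods; fine for short queries,
    # production code would stay in log space to avoid underflow.
    total = sum(math.exp(ll) for ll in entry_log_likelihoods) / n
    p_f = math.log(n) if prior == "log" else 1.0
    return p_f * total
```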
Retrieval models – Results
• 45 queries from the TREC 2007 Blog Distillation Task
• BLOG06 test collection, XML feeds only
• 5-fold cross-validation for all retrieval model smoothing parameters
• Statistical significance at the 0.05 level is indicated by † for improvement over φGM, + for improvement over PLOG, and ∗ for improvement over the best LD model.
Feedback Models
• Challenge: a noisy collection with general, ongoing information needs.
• Use a cleaner external collection (Wikipedia) for query expansion, with an expansion technique designed to identify multiple query facets.
• Pseudo-Relevance Feedback (PRF) [Lavrenko & Croft, 2001]
• Wikipedia PRF Query Expansion [Diaz & Metzler, 2006]
• Wikipedia Link-based Query Expansion
Feedback Models – Pseudo-Relevance Feedback (PRF)
[Figure: the query Q is run against the BLOG06 collection; related terms are extracted from the top K documents; the expanded query Q + terms is then re-run against the collection.]
Feedback Models – Pseudo-Relevance Feedback (PRF)
• PRF assumes the top N retrieved documents, DN, are relevant to the base query and extracts highly discriminative terms or phrases from those documents as query expansion terms. A sketch of this step follows the list below.
• PRF.FEED: PRF where DN are the top N feeds.
• PRF.ENTRY: PRF where DN are the top N feed entries.
• PRF.WIKI: PRF where DN are the top N Wikipedia articles retrieved by the base query on Wikipedia.
• PRF.WIKI.P: PRF where DN are the top N Wikipedia passages (sequences of at most 220 words, the average entry length in the BLOG06 corpus) retrieved by the base query on Wikipedia.
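As an illustration of the PRF step, here is a relevance-model-style term scorer in the spirit of Lavrenko & Croft (2001). The slides do not give the exact term-selection formula, so the weighting below (document term frequency weighted by a normalized query-likelihood posterior over the top N documents) is an assumption.

```python
import math
from collections import Counter, defaultdict

def prf_expansion_terms(top_docs, doc_query_logliks, n_terms=10):
    """Score candidate expansion terms as  sum_{d in D_N} P(w|d) * P(d|Q).

    top_docs:          token lists for the top N retrieved documents D_N
    doc_query_logliks: log P(Q|d) for each document, aligned with top_docs
    Returns the n_terms highest-weighted expansion terms.
    """
    # Turn log-likelihoods into a normalized posterior over the top N docs.
    m = max(doc_query_logliks)
    weights = [math.exp(s - m) for s in doc_query_logliks]
    z = sum(weights)
    term_scores = defaultdict(float)
    for doc, w in zip(top_docs, weights):
        tf = Counter(doc)
        for term, count in tf.items():
            term_scores[term] += (count / len(doc)) * (w / z)
    return sorted(term_scores, key=term_scores.get, reverse=True)[:n_terms]
```

The PRF.FEED, PRF.ENTRY, PRF.WIKI, and PRF.WIKI.P variants differ only in what counts as a "document" here.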
Feedback Models – Pseudo-Relevance Feedback (PRF)
• RF.TTR: relevance feedback (RF, using known relevance judgments) where expansion terms originate from the top 10 relevant feeds (TTR = "top 10 relevant").
• RF.RTT: RF where expansion terms originate from the relevant feeds within the top 10 (RTT = "relevant in top 10").
• NO_EXP (baseline): no query expansion.
Feedback Models – Pseudo-Relevance Feedback (PRF)
• Example: given the query "Photography":
Ideal: digital photography, depth of field, photographic film, photojournalism, cinematography
PRF: photography, nude, erotic, art, girl, free, teen, fashion, women
Feedback Models – Wikipedia PRF Query Expansion
[Figure: the query Q is run against Wikipedia; related terms are extracted from the top K documents; the expanded query Q + terms is then run against the BLOG06 collection.]
Feedback Models – Wikipedia PRF Query Expansion
• Example: given the query "Photography":
Ideal: digital photography, depth of field, photographic film, photojournalism, cinematography
PRF: photography, nude, erotic, art, girl, free, teen, fashion, women
Wikipedia PRF: photography, director, special, film, art, camera, music, cinematographer, photographic
Feedback Models – Wikipedia Link-based Query Expansion
[Figure: the query Q is run against Wikipedia; related terms are extracted from Wikipedia's link structure; the expanded query Q + terms is then run against the BLOG06 collection.]
Feedback Models – Wikipedia Link-based Query Expansion
• Example: given the query "Photography":
Ideal: digital photography, depth of field, photographic film, photojournalism, cinematography
PRF: photography, nude, erotic, art, girl, free, teen, fashion, women
Wikipedia Link-Based: photography, photographer, digital photography, photographic, depth of field, feature photography, film, photographic film, photojournalism
Feedback Models – Wikipedia Link-based Query Expansion
• After issuing the query to Wikipedia, two sets are obtained: the relevant set SR and the working set SW, containing the articles ranking within the top R and top W retrieved results, respectively.
• Each anchor phrase ai occurring in an article in SW and linking to an article in SR is then scored, with links to higher-ranked articles in SR contributing more; the top-scoring anchor phrases become expansion terms.
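Because the scoring equation did not survive in these slides, the sketch below is a plausible reconstruction rather than the paper's exact formula: each anchor phrase accumulates credit for every link from a working-set article to a relevant-set article, weighted by the target's retrieval rank.

```python
from collections import defaultdict

def score_anchors(links, relevant_ranks, working_set):
    """Hedged reconstruction of link-based anchor scoring.

    links:          iterable of (source_article, anchor_phrase, target_article)
    relevant_ranks: dict target_article -> 1-based retrieval rank within S_R
    working_set:    set of article ids in S_W
    """
    R = len(relevant_ranks)
    scores = defaultdict(float)
    for source, anchor, target in links:
        if source in working_set and target in relevant_ranks:
            # Higher-ranked relevant targets contribute more; the exact
            # weighting in the paper may differ.
            scores[anchor] += R - relevant_ranks[target] + 1
    return sorted(scores.items(), key=lambda kv: -kv[1])
```

The top-scoring anchor phrases are then added to the query as expansion terms.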
Feedback Models – Results
• On the BD07 (TREC 2007 Blog Distillation) tests, significance at the 0.05 level over the NO_EXP and PRF.WIKI models is indicated by ∗ and †, respectively.
Conclusion
• Feed Search Challenges:
Feeds are topically diverse, noisy collections.
They are ranked against ongoing, general information needs.
• Novel Retrieval Models: ranking collections, sensitive to the topical relationships among entries.
• Novel Feedback Models: discover multiple query facets and are robust to collection noise.