
Focused Crawling: A New Approach to Topic-Specific Web Resource Discovery


Presentation Transcript


  1. Focused Crawling: A New Approach to Topic-Specific Web Resource Discovery Soumen Chakrabarti (IIT Bombay), David Gibson (Berkeley), Kevin McCurley (IBM Almaden), Martin van den Berg (Xerox), Byron Dom (IBM Almaden)

  2. Quote 1 Portals and search pages are changing rapidly, in part because their biggest strength — massive size and reach — can also be a drawback. The most interesting trend is the growing sense of natural limits, a recognition that covering a single galaxy can be more practical — and useful — than trying to cover the entire universe. Dan Gillmor, San Jose Mercury News

  3. Scenario • Disk drive research group wants to track magnetic surface technologies • Compiler research group wants to trawl the web for graduate student résumés • ____ wants to enhance his/her collection of bookmarks about ____ with prominent and relevant links • Virtual libraries like Yahoo!, the Open Directory Project and the Mining Co.

  4. Structured web queries • How many links were found from an environment protection agency site to a site about oil and natural gas in the last year? • Apart from cycling, what is the most common topic cited by pages on cycling? (Answer: “first-aid”) • Find Web research pages which are widely cited by Hawaiian vacation pages

  5. Quote 2 As people become more savvy users of the Net, they want things which are better focused on meeting their specific needs. We're going to see a whole lot more of this, and it's going to potentially erode the user base of some of the big portals. • Jim Hake, Founder, Global Information Infrastructure, http://www.gii-awards.com/

  6. Goals • Spontaneous, decentralized formation of topical communities • Automatic construction of a “focused portal” containing resources that are • Relevant to the user’s focus of interest • Of high influence and quality • Collectively comprehensive • Discovery that combines structure and content

  7. Model • Taxonomy with some ‘chosen’ topics • Each page has a relevance score w.r.t. chosen topics • Mendelzon and Milo’s web access cost model • Goal is to ‘expand’ the start set to maximize average relevance [Figure: example taxonomy: All → Science (Physics, Zoology) and Sports (Cycling, Hiking)]

  8. Properties to be exploited • A page with high relevance tends to link to at least some other relevant pages (radius-one rule) • Given that a page u links to relevant page(s), chances are increased that u points to other relevant pages (radius-two rule)

  9. Syntactic “query-by-example” • If part of the answer is known, trivial search techniques may do quite well • E.g., “European airlines” • +swissair +iberia +klm • E.g., “Car makers” • Which pages link to www.honda.com and www.toyota.com?

  10. The backlink architecture • Question: who points to http://S2/P2? • When a client C follows a link from http://S1/P1, it sends GET /P2 HTTP/1.0 with the header Referer: http://S1/P1 • Server S2 records the referring URL in its local backlink database, which another client C’ can later query • User study: www.cs.berkeley.edu/~soumen/doc/www99back/userstudy [Figure: client C fetching http://S2/P2 via a link on http://S1/P1; S2 maintains a local backlink database]

  11. Backlink rationale • Limited additional storage per server • Turn hyperlinks into undirected edges • A series of forward and backward ‘clicks’ can quickly build a topical community
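The Referer bookkeeping behind slides 10 and 11 is simple enough to sketch. Below is a minimal Python illustration of a per-server backlink table; the names (BACKLINKS, record_referer, who_points_to) are hypothetical stand-ins, not the interface of the actual Berkeley prototype.

```python
# Sketch: a per-server backlink table built from HTTP Referer headers.
# BACKLINKS, record_referer, and who_points_to are hypothetical names;
# the real prototype behind the URL on slide 10 is not reproduced here.
from collections import defaultdict

# target path on this server -> set of referring URLs
BACKLINKS = defaultdict(set)

def record_referer(request_path, referer_url):
    """Record one forward link (referer_url -> request_path) seen in a request."""
    if referer_url:
        BACKLINKS[request_path].add(referer_url)

def who_points_to(request_path):
    """Answer 'who points to this page?' from the local table."""
    return sorted(BACKLINKS[request_path])

# Example: the GET /P2 request on slide 10 carries Referer: http://S1/P1
record_referer("/P2", "http://S1/P1")
print(who_points_to("/P2"))  # ['http://S1/P1']
```

Because each server stores only the referring URLs of requests it actually sees, the extra storage per server stays small, which is the rationale on slide 11.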

  12. Backlink example 1

  13. Backlink example 2

  14. Backlink example 3

  15. Backlink example 4

  16. Estimating popularity • Extensive research on social network theory • Wasserman and Faust • Hyperlink based • Large in-degree indicates popularity/authority • Not all votes are worth the same • Several similar ideas and refinements • Google (Page and Brin) and HITS (Kleinberg) • Resource compilation (Chakrabarti et al) • Topic distillation (Bharat and Henzinger)

  17. Topic distillation overview • Given web graph and query • Search engine selects sub-graph • Expansion, pruning and edge weights • Nodes iteratively transfer authority to cited neighbors [Figure: a query goes to a search engine, which selects a subgraph of the Web for distillation]
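The iterative authority transfer on slide 17 is in the HITS family. Here is a minimal sketch of the plain Kleinberg iteration on a toy subgraph; it assumes uniform edge weights and omits the expansion, pruning, and edge weighting that topic distillation adds on top.

```python
# Sketch: HITS-style hub/authority iteration on a selected subgraph.
# Uniform edge weights for brevity; the Bharat-Henzinger refinements
# (edge weighting, pruning) are omitted.
import math

def hits(edges, iterations=50):
    nodes = {u for e in edges for u in e}
    auth = {n: 1.0 for n in nodes}
    hub = {n: 1.0 for n in nodes}
    for _ in range(iterations):
        # authority score: sum of hub scores of pages citing you
        auth = {n: sum(hub[u] for (u, v) in edges if v == n) for n in nodes}
        # hub score: sum of authority scores of pages you cite
        hub = {n: sum(auth[v] for (u, v) in edges if u == n) for n in nodes}
        # normalize so the scores stay bounded
        a_norm = math.sqrt(sum(x * x for x in auth.values()))
        h_norm = math.sqrt(sum(x * x for x in hub.values()))
        auth = {n: x / a_norm for n, x in auth.items()}
        hub = {n: x / h_norm for n, x in hub.items()}
    return auth, hub

auth, hub = hits([("h1", "a1"), ("h1", "a2"), ("h2", "a1")])
print(max(auth, key=auth.get))  # 'a1' is the strongest authority
```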

  18. Preliminary distillation-based approach • Design a keyword query to represent topics of focus • Using a large web crawl, run topic distillation on the query • Refine the query by inspecting results, by trial and error

  19. Problems with preliminary approach • Unreliability of keyword match • Engines differ significantly on a given query due to small overlap [Bharat and Broder] • Narrow, arbitrary view of relevant subgraph • Topic model does not improve over time • Dependence on large web crawl and index (lack of “output sensitivity”) • Difficulty of query construction

  20. Output sensitivity • Say the goal is to find a comprehensive collection of recreational and competitive bicycling sites and pages • Ideally effort should scale with size of the result • Time spent crawling and indexing sites unrelated to the topic is wasted • Likewise, time that does not improve comprehensiveness is wasted

  21. Query construction • Topic node: /Companies/Electronics/Power_Supply • Hand-crafted query: +“power suppl*” “switch* mode” smps -multiprocessor* “uninterrupt* power suppl*” ups -parcel*

  22. Query complexity • Complex queries needed for distillation • Typical Alta Vista queries are much simpler (Silverstein, Henzinger, Marais and Moricz) • Forcing a hub or authority helps 86% of the time

  23. Proposed solution • Resource discovery system that can be customized to crawl for any topic by giving examples • Hypertext mining algorithms learn to recognize pages and sites about the given topic, and a measure of their centrality • Crawler has guidance hooks controlled by these two scores
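A minimal sketch of how such guidance hooks can steer acquisition: a best-first frontier ordered by the classifier's relevance score. Here fetch, extract_links, and relevance are hypothetical stand-ins for the system's actual components, and the second hook (the centrality score from distillation) is omitted for brevity.

```python
# Sketch: a best-first focused crawler. One guidance hook (relevance from
# the hypertext classifier) is shown; the centrality hook from the topic
# distiller is omitted. fetch/extract_links/relevance are assumed inputs.
import heapq

def focused_crawl(seeds, fetch, extract_links, relevance, budget=1000):
    frontier = [(-1.0, url) for url in seeds]   # max-heap via negated scores
    heapq.heapify(frontier)
    seen, harvested = set(seeds), []
    while frontier and len(harvested) < budget:
        neg_score, url = heapq.heappop(frontier)
        page = fetch(url)
        score = relevance(page)                 # e.g. Pr[topic | page text]
        harvested.append((url, score))
        # radius-one rule: relevant pages tend to link to relevant pages,
        # so out-links inherit the parent's relevance as their priority
        for link in extract_links(page):
            if link not in seen:
                seen.add(link)
                heapq.heappush(frontier, (-score, link))
    return harvested
```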

  24. Administration scenario [Figure: taxonomy editor screenshot; the user drags current examples onto taxonomy nodes, and the system suggests additional examples]

  25. Relevance [Figure: taxonomy All → Arts, Bus&Econ, Recreation, …; Bus&Econ → Companies; Recreation → Cycling → Bike Shops, Clubs, Mt.Biking; nodes are marked as path nodes, good nodes, or subsumed nodes]

  26. Classification • How relevant is a document w.r.t. a class? • Supervised learning, filtering, classification, categorization • Many types of classifiers • Bayesian, nearest neighbor, rule-based • Hypertext • Both text and links are class-dependent clues • How to model link-based features?

  27. The “bag-of-words” document model • Topic c is picked with prior probability π(c), where Σc π(c) = 1 • Each c has parameters θ(c,t) for terms t • Coin with face probabilities θ(c,t), where Σt θ(c,t) = 1 • Fix document length and keep tossing coin • Given c, the probability of document d with term counts n(d,t) is Pr[d|c] ∝ ∏t θ(c,t)^n(d,t)
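A minimal sketch of scoring under this multinomial model, with smoothed toy parameters; a real system estimates π and θ from the training documents attached to taxonomy nodes.

```python
# Sketch: multinomial naive Bayes scoring as implied by slide 27.
# pi and theta are toy parameters; in practice they are estimated from
# the taxonomy's training documents (with proper smoothing).
import math
from collections import Counter

def log_posterior(doc_terms, pi, theta):
    """Return log pi(c) + sum_t n(d,t) * log theta(c,t) for each class c."""
    counts = Counter(doc_terms)
    return {
        c: math.log(pi[c])
           + sum(n * math.log(theta[c].get(t, 1e-9)) for t, n in counts.items())
        for c in pi
    }

pi = {"cycling": 0.5, "physics": 0.5}
theta = {
    "cycling": {"bike": 0.6, "race": 0.3, "quark": 0.1},
    "physics": {"bike": 0.1, "race": 0.2, "quark": 0.7},
}
scores = log_posterior(["bike", "bike", "race"], pi, theta)
print(max(scores, key=scores.get))  # 'cycling'
```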

  28. Exploiting link features • c = class, t = text, N = neighbors • Text-only model: Pr[t|c] • Using neighbors’ text to judge my topic: Pr[t, t(N) | c] • Better model: Pr[t, c(N) | c] • Non-linear relaxation
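A minimal sketch of one relaxation scheme consistent with slide 28: start from text-only class probabilities and repeatedly blend in the neighbors' current estimates. The convex blending rule and alpha are illustrative choices, not the paper's exact update.

```python
# Sketch: non-linear relaxation over the link graph. Each page's class
# distribution is iteratively blended with its neighbors' estimates.
# alpha and the blending rule are illustrative assumptions.
def relax(text_probs, neighbors, alpha=0.5, iterations=10):
    probs = {u: dict(p) for u, p in text_probs.items()}
    classes = next(iter(text_probs.values())).keys()
    for _ in range(iterations):
        new = {}
        for u in probs:
            nbrs = neighbors.get(u, [])
            new[u] = {}
            for c in classes:
                nbr_avg = (sum(probs[v][c] for v in nbrs) / len(nbrs)) if nbrs else text_probs[u][c]
                new[u][c] = (1 - alpha) * text_probs[u][c] + alpha * nbr_avg
        probs = new
    return probs

text_probs = {"u": {"cycling": 0.9, "physics": 0.1},
              "v": {"cycling": 0.4, "physics": 0.6}}
neighbors = {"u": ["v"], "v": ["u"]}
print(relax(text_probs, neighbors)["v"])  # v is pulled toward 'cycling' by u
```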

  29. Improvement using link features • 9600 patents from 12 classes marked by USPTO • Patents have text and cite other patents • Expand test patent to include neighborhood • ‘Forget’ a fraction of neighbors’ classes

  30. Putting it together [Figure: system architecture; the taxonomy editor and example browser feed the taxonomy database, the hypertext classifier (learn and apply) maintains topic models, the scheduler and crawl workers populate the crawl database, and the topic distiller provides feedback]

  31. Monitoring the crawler [Figure: relevance of each fetched URL and its moving average, plotted over time]

  32. Measures of success • Harvest rate • What fraction of crawled pages are relevant • Robustness across seed sets • Separate crawls with random disjoint samples • Measure overlap in URLs and servers crawled • Measure agreement in best-rated resources • Evidence of non-trivial work • #Links from start set to the best resources
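The first two measures are straightforward to compute from crawl logs. A minimal sketch follows; it treats a page as relevant when its score clears a threshold, and uses Jaccard overlap as one reasonable definition of URL or server overlap (both are assumptions, not the paper's exact definitions).

```python
# Sketch: the first two success measures from slide 32, computed from
# crawl logs. Threshold-based relevance and Jaccard overlap are
# illustrative choices.
def harvest_rate(scores, threshold=0.5):
    """Fraction of crawled pages whose relevance score clears a threshold."""
    return sum(s >= threshold for s in scores) / len(scores)

def overlap(crawl_a, crawl_b):
    """Overlap between two crawls started from disjoint seed sets."""
    a, b = set(crawl_a), set(crawl_b)
    return len(a & b) / len(a | b)

print(harvest_rate([0.9, 0.8, 0.2, 0.7]))         # 0.75
print(overlap(["u1", "u2", "u3"], ["u2", "u3"]))  # ~0.67
```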

  33. Harvest rate [Figure: harvest rate over time for a focused crawl vs. an unfocused crawl]

  34. Crawl robustness [Figure: URL overlap and server overlap between crawl 1 and crawl 2]

  35. Robustness of resource discovery • Sample disjoint sets of starting URLs • Two separate crawls • Find best authorities • Order by rank • Find overlap in the top-rated resources

  36. Distance to best resources [Figure: two panels, Cycling (cooperative) and Mutual funds (competitive)]

  37. Observations • Random walk on the Web “rapidly mixes” topics • Yet, there are large coherent paths and clusters • Focused crawling gives topic distillation richer data to work on • Combining content with link structure eliminates the need to tune link-based heuristics

  38. Related work • WebWatcher, HotList and ColdList • Filtering as post-processing, not acquisition • ReferralWeb • Social network on the Web • Ahoy!, Cora • Hand-crafted to find home pages and papers • WebCrawler, Fish, Shark, Fetuccino, agents • Crawler guided by query keyword matches

  39. Comparison with agents • Agents usually look for keywords and hand-crafted patterns; we use a taxonomy with statistical topic models • Agents cannot learn new vocabulary dynamically; our models can evolve as the crawl proceeds • Agents do not use distance-2 centrality information; we combine relevance and centrality • Agents are client-side assistants; we have broader scope: inter-community linkage analysis and querying

  40. Conclusion • New architecture for example-driven topic-specific web resource discovery • No dependence on full web crawl and index • Modest desktop hardware adequate • Variable radius goal-directed crawling • High harvest rate • High quality resources found far from keyword query response nodes
