
Anatomy of Google (circa 1999)




  1. Project part B due a month from now (10/26) Anatomy of Google (circa 1999) Slides from http://www.cs.huji.ac.il/~sdbi/2000/google/index.htm

  2. Some points… • Fancy hits? • Why two types of barrels? • How is indexing parallelized? • How does Google show that it doesn’t quite care about recall? • How does Google avoid crawling the same URL multiple times? • What are some of the memory-saving things they do? • Do they use TF/IDF? • Do they normalize? (why not?) • Can they support proximity queries? • How are “page synopses” made?

  3. Challenges in Web Search Engines • Spam: text spam, link spam, cloaking • Content quality: anchor-text quality, quality evaluation, indirect feedback • Web conventions: articulate and develop validation • Duplicate hosts: mirror detection • Vaguely structured data: page layout, and the advantage of making the rendering and content language be the same

  4. The “google” paper • Discusses Google’s architecture circa ’99 • Search engine size over time: number of indexed pages, self-reported • Google: 50% of the web? • Information from searchenginewatch.com

  5. Google Search Engine Architecture • URL Server - provides URLs to be fetched • Crawler - distributed • Store Server - compresses and stores pages for indexing • Repository - holds pages for indexing (full HTML of every page) • Indexer - parses documents, records words, positions, font size, and capitalization • Lexicon - list of unique words found • HitList - efficient record of word locations + attributes • Barrels - hold (docID, (wordID, hitList*)*)* sorted; each barrel has a range of words • Anchors - keep information about links found in web pages • URL Resolver - converts relative URLs to absolute • Sorter - generates Doc Index • Doc Index - inverted index of all words in all documents (except stop words) • Links - stores info about links to each page (used for PageRank) • PageRank - computes a rank for each page retrieved • Searcher - answers queries (SOURCE: BRIN & PAGE)
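
  The barrel nesting above reads more easily as types. A minimal sketch in Python; the class names and fields are illustrative, not taken from the paper, and the real barrels are packed binary structures:

      from dataclasses import dataclass
      from typing import List

      @dataclass
      class Hit:
          position: int        # word position within the document
          font_size: int       # relative font size
          capitalized: bool

      @dataclass
      class WordOccurrence:
          word_id: int         # id assigned by the Lexicon
          hits: List[Hit]      # hit list for this word in this document

      @dataclass
      class ForwardBarrelRecord:
          doc_id: int
          words: List[WordOccurrence]   # only wordIDs in this barrel's range

      # A forward barrel is a sorted sequence of such records, and each of the
      # 64 barrels covers a disjoint range of wordIDs.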

  6. Major Data Structures • Big Files • virtual files spanning multiple file systems • addressable by 64-bit integers • handle allocation & deallocation of file descriptors, since the OS’s are not enough • support rudimentary compression

  7. Major Data Structures (2) • Repository • tradeoff between speed & compression ratio • chose zlib (3 to 1) over bzip (4 to 1) • requires no other data structure to access it

  8. Major Data Structures (3) • Document Index • keeps information about each document • fixed-width ISAM (indexed sequential access method) index • includes various statistics • pointer to repository, if crawled, pointer to info lists • compact data structure • we can fetch a record in 1 disk seek during search
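
  The fixed-width layout is what makes the single-seek fetch possible: the record for docID d starts at byte d × record_size. A minimal sketch with a hypothetical field layout (the slide does not spell one out):

      import struct

      # Hypothetical layout: docID, status, repository offset, doc length, URL-info pointer.
      RECORD = struct.Struct("<IIQIQ")          # 28 bytes per document, illustrative only

      def fetch_doc_record(index_file, doc_id):
          # One seek plus one fixed-size read locates any document's entry.
          index_file.seek(doc_id * RECORD.size)
          return RECORD.unpack(index_file.read(RECORD.size))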

  9. Major Data Structures (4) • Lexicon • can fit in memory for reasonable price • currently 256 MB • contains 14 million words • 2 parts • a list of words • a hash table

  10. Major Data Structures (4) • Hit Lists • include position, font & capitalization • account for most of the space used in the indexes • 3 alternatives: simple, Huffman, hand-optimized • hand encoding uses 2 bytes for every hit
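
  A sketch of a 2-byte hand-optimized hit, following the plain/fancy split described in the paper; the exact field widths below are one plausible packing and should be treated as an assumption:

      # Plain hit : 1 bit capitalization | 3 bits font size | 12 bits position
      # Fancy hit : 1 bit capitalization | font size = 7    | 4 bits type | 8 bits position
      FANCY_FONT = 0b111

      def encode_plain_hit(cap, font, pos):
          assert font < FANCY_FONT, "font size 7 is reserved to mark fancy hits"
          return (int(cap) << 15) | (font << 12) | min(pos, 0xFFF)

      def encode_fancy_hit(cap, hit_type, pos):
          # fancy hits: occurrences in titles, anchor text, URLs, and the like
          return (int(cap) << 15) | (FANCY_FONT << 12) | ((hit_type & 0xF) << 8) | min(pos, 0xFF)

      def decode_hit(h):
          cap, font = bool(h >> 15), (h >> 12) & 0b111
          if font == FANCY_FONT:
              return ("fancy", cap, (h >> 8) & 0xF, h & 0xFF)
          return ("plain", cap, font, h & 0xFFF)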

  11. Major Data Structures (4) • Hit Lists (2)

  12. Major Data Structures (5) • Forward Index • partially ordered • uses 64 Barrels • each Barrel holds a range of wordIDs • requires slightly more storage • each wordID is stored as a relative difference from the minimum wordID of the Barrel • saves considerable time in the sorting
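
  A minimal sketch of the relative-wordID trick; the barrel boundary list is a hypothetical stand-in for however the ranges were actually chosen:

      import bisect

      # bounds[i] is the smallest wordID of barrel i; barrel i holds [bounds[i], bounds[i+1]).
      def barrel_for(word_id, bounds):
          return bisect.bisect_right(bounds, word_id) - 1

      def encode_relative(word_id, bounds):
          b = barrel_for(word_id, bounds)
          return b, word_id - bounds[b]     # a small offset instead of a full wordID

      def decode_relative(b, offset, bounds):
          return bounds[b] + offset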

  13. Major Data Structures (6) • Inverted Index • 64 Barrels (same as the Forward Index) • for each wordID the Lexicon contains a pointer to the Barrel that wordID falls into • the pointer points to a doclist of docIDs together with their hit lists • the order of the docIDs is important • by docID or by doc word-ranking • two inverted barrels: the short barrel / full barrel

  14. Major Data Structures (7) • Crawling the Web • fast distributed crawling system • URLserver & Crawlers are implemented in Python • each Crawler keeps about 300 connections open • at peak time the rate is about 100 pages, 600 KB per second • uses an internal cached DNS lookup • synchronized IO to handle events • a number of queues • robust & carefully tested
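
  A toy sketch of the "many fetches in flight" idea above. It is a stand-in, not Google's crawler: the real system used event-driven non-blocking I/O and its own DNS cache, which a thread pool only approximates:

      import urllib.request
      from concurrent.futures import ThreadPoolExecutor

      def fetch(url):
          # a plain blocking fetch keeps this sketch short
          with urllib.request.urlopen(url, timeout=10) as resp:
              return url, resp.read()

      def crawl(urls, max_connections=300):
          # keep roughly 300 fetches in flight at once, as each 1999 crawler did
          with ThreadPoolExecutor(max_workers=max_connections) as pool:
              yield from pool.map(fetch, urls)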

  15. Major Data Structures (8) • Indexing the Web • Parsing • must know how to handle errors • HTML typos • KBs of zeros in the middle of a tag • non-ASCII characters • HTML tags nested hundreds deep • developed their own parser • involved a fair amount of work • did not cause a bottleneck

  16. Major Data Structures (9) • Indexing Documents into Barrels • turning words into wordIDs • in-memory hash table - the Lexicon • new additions are logged to a file • parallelization • shared base lexicon of 14 million words • a log of all the extra words
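
  A minimal sketch of that lexicon behaviour: look up the shared in-memory table, and append any word not in the base lexicon to a log for a later merge. The class and file names are illustrative:

      class Lexicon:
          def __init__(self, base_words, log_path):
              self.ids = {w: i for i, w in enumerate(base_words)}
              self.log = open(log_path, "a", encoding="utf-8")

          def word_id(self, word):
              if word not in self.ids:
                  new_id = len(self.ids)
                  self.ids[word] = new_id
                  self.log.write(f"{new_id}\t{word}\n")   # log of the extra words
              return self.ids[word]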

  17. Major Data Structures (10) • Indexing the Web • Sorting • creating the inverted index • produces two types of barrels • for titles and anchors (short barrels) • for full text (full barrels) • sorts every barrel separately • runs the sorters in parallel • the sorting is done in main memory • ranking looks at short barrels first and then at full barrels

  18. Searching Algorithm 1. Parse the query 2. Convert words into wordIDs 3. Seek to the start of the doclist in the short barrel for every word 4. Scan through the doclists until there is a document that matches all of the search terms 5. Compute the rank of that document 6. If we’re at the end of the short barrels, start at the doclists of the full barrel, unless we have enough 7. If we’re not at the end of any doclist, go to step 4 8. Sort the documents by rank and return the top K (may jump here after 40k matching pages)
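
  A runnable toy version of steps 3-8, under the assumption that each barrel maps a wordID to a docID-sorted doclist; score, short_barrels and full_barrels are hypothetical stand-ins for Google's structures:

      def intersect(doclists):
          # Walk sorted docID lists in lockstep, yielding docIDs present in all of them.
          if not doclists:
              return
          pos = [0] * len(doclists)
          while all(p < len(dl) for p, dl in zip(pos, doclists)):
              current = [dl[p] for p, dl in zip(pos, doclists)]
              hi = max(current)
              if min(current) == hi:          # every query term matched this document
                  yield hi
                  pos = [p + 1 for p in pos]
              else:                           # advance the lists that are behind
                  pos = [p + (dl[p] < hi) for p, dl in zip(pos, doclists)]

      def search(word_ids, short_barrels, full_barrels, score, k=10, limit=40000):
          # Scan short (title/anchor) barrels first, then full barrels (step 6),
          # stopping early once `limit` matching documents have been ranked.
          ranked = []
          for barrels in (short_barrels, full_barrels):
              doclists = [barrels[w] for w in word_ids]
              for doc in intersect(doclists):
                  ranked.append((score(doc, word_ids), doc))
                  if len(ranked) >= limit:
                      break
              if len(ranked) >= limit:
                  break
          return [doc for _, doc in sorted(ranked, reverse=True)[:k]]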

  19. The Ranking System • The information • position, font size, capitalization • anchor text • PageRank • Hit types • title, anchor, URL, etc. • small font, large font, etc.

  20. The Ranking System (2) • Each hit type has its own weight • count weights increase linearly with counts at first but quickly taper off; this is the IR score of the doc • (IDF weighting??) • the IR score is combined with PageRank to give the final rank • For a multi-word query • a proximity score for every set of hits, with a proximity-type weight • 10 grades of proximity
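
  A sketch of that general shape of scoring function. The type weights, the taper curve, and the way the IR score and PageRank are combined are all invented for illustration; the paper does not publish them:

      import math

      TYPE_WEIGHT = {"title": 8.0, "anchor": 6.0, "url": 4.0, "large_font": 2.0, "plain": 1.0}

      def count_weight(count, cap=8):
          # grows linearly for small counts, then tapers off
          return min(count, cap) + math.log1p(max(0, count - cap))

      def ir_score(hit_counts):
          # hit_counts: {hit_type: number of hits of that type for the query terms}
          return sum(TYPE_WEIGHT[t] * count_weight(c) for t, c in hit_counts.items())

      def final_rank(hit_counts, pagerank):
          # one simple way to fold PageRank into the IR score
          return ir_score(hit_counts) * pagerank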

  21. Feedback • A trusted user may optionally evaluate the results • The feedback is saved • When modifying the ranking function we can see the impact of this change on all previous searches that were ranked

  22. Results • Produces better results than major commercial search engines for most searches • Example: query “bill clinton” • returns results from whitehouse.gov • email addresses of the president • all the results are high-quality pages • no broken links • no bill without clinton & no clinton without bill

  23. Storage Requirements • Using Compression on the repository • about 55 GB for all the data used by the SE • most of the queries can be answered by just the short inverted index • with better compression, a high quality SE can fit onto a 7GB drive of a new PC

  24. Web Page Statistics Storage Statistics

  25. System Performance • It took 9 days to download 26 million pages • 48.5 pages per second • The Indexer & Crawler ran simultaneously • The Indexer runs at 54 pages per second • The sorters run in parallel using 4 machines; the whole process took 24 hours

  26. Computing Page Rank

  27. Practicality • Challenges • M no longer sparse (don’t represent explicitly!) • Data too big for memory (be sneaky about disk usage) • Stanford version of Google: • 24 million documents in crawl • 147 GB documents • 259 million links • Computing PageRank took a “few hours” on a single 1997 workstation • But how? • Next discussion from the Haveliwala paper…

  28. Efficient Computation: Preprocess • Remove ‘dangling’ nodes • Pages w/ no children • Then repeat process • Since now more danglers • Stanford WebBase • 25 M pages • 81 M URLs in the link graph • After two prune iterations: 19 M nodes

  29. Representing the ‘Links’ Table • Stored on disk in binary format • Size for Stanford WebBase: 1.01 GB • Assumed to exceed main memory

      Source node (32-bit int) | Outdegree (16-bit int) | Destination nodes (32-bit int)
      0                        | 4                      | 12, 26, 58, 94
      1                        | 3                      | 5, 56, 69
      2                        | 5                      | 1, 9, 10, 36, 78
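
  A sketch of reading and writing one such record with that field layout; the little-endian byte order and the use of Python's struct module are assumptions, the field widths come from the slide:

      import struct

      def write_record(f, source, dests):
          # 32-bit source node, 16-bit outdegree, then that many 32-bit destinations
          f.write(struct.pack("<IH", source, len(dests)))
          f.write(struct.pack(f"<{len(dests)}I", *dests))

      def read_record(f):
          header = f.read(6)
          if len(header) < 6:
              return None                             # end of file
          source, n = struct.unpack("<IH", header)
          dests = struct.unpack(f"<{n}I", f.read(4 * n))
          return source, n, list(dests)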

  30. Algorithm 1   [diagram: the sparse Links matrix propagates rank from the Source vector to the Dest vector]
      ∀s: Source[s] = 1/N
      while residual > ε {
          ∀d: Dest[d] = 0
          while not Links.eof() {
              Links.read(source, n, dest1, …, destn)
              for j = 1…n: Dest[destj] = Dest[destj] + Source[source]/n
          }
          ∀d: Dest[d] = c*Dest[d] + (1-c)/N    /* dampening */
          residual = ||Source − Dest||         /* recompute every few iterations */
          Source = Dest
      }
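
  The same algorithm as a runnable Python sketch, assuming the links table fits in memory as a list of (source, destinations) pairs and using the L1 norm for the residual:

      def pagerank_basic(links, n_pages, c=0.85, eps=1e-6, max_iter=100):
          source = [1.0 / n_pages] * n_pages
          for _ in range(max_iter):
              dest = [0.0] * n_pages
              for src, dests in links:                 # one sequential pass over Links
                  if dests:
                      share = source[src] / len(dests)
                      for d in dests:
                          dest[d] += share
              dest = [c * x + (1 - c) / n_pages for x in dest]      # dampening
              residual = sum(abs(s - d) for s, d in zip(source, dest))
              source = dest
              if residual < eps:
                  break
          return source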

  31. Analysis of Algorithm 1 • If memory is big enough to hold Source & Dest • IO cost per iteration is |Links| • Fine for a crawl of 24 M pages • But web ~ 800 M pages in 2/99 [NEC study] • Increase from 320 M pages in 1997 [same authors] • If memory is big enough to hold just Dest • Sort Links on source field • Read Source sequentially during rank propagation step • Write Dest to disk to serve as Source for next iteration • IO cost per iteration is |Source| + |Dest| + |Links| • If memory can’t hold Dest • Random access pattern will make working set = |Dest| • Thrash!!!

  32. Block-Based Algorithm • Partition Dest into B blocks of D pages each • If memory = P physical pages • D < P−2, since we need input buffers for Source & Links • Partition Links into B files • Linksi only has some of the dest nodes for each source • Linksi only has dest nodes such that DD*i <= dest < DD*(i+1), where DD = number of 32-bit integers that fit in D pages

  33. Partitioned Link File

      Source node (32-bit int) | Outdegree (16-bit) | Num out (16-bit) | Destination nodes (32-bit int)
      Bucket 0-31:
      0 | 4 | 2 | 12, 26
      1 | 3 | 1 | 5
      2 | 5 | 3 | 1, 9, 10
      Bucket 32-63:
      0 | 4 | 1 | 58
      1 | 3 | 1 | 56
      2 | 5 | 1 | 36
      Bucket 64-95:
      0 | 4 | 1 | 94
      1 | 3 | 1 | 69
      2 | 5 | 1 | 78

  34. Block-based Page Rank algorithm
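
  The original slide shows this algorithm as an image. A rough Python sketch of the idea, assuming the per-bucket files from slide 33 are available as lists of (source, outdegree, destinations-in-this-bucket) records, so that only one block of Dest needs to fit in memory alongside Source:

      def pagerank_blocked(partitioned_links, n_pages, n_blocks, c=0.85, n_iter=20):
          block_size = (n_pages + n_blocks - 1) // n_blocks
          source = [1.0 / n_pages] * n_pages
          for _ in range(n_iter):
              new_rank = [0.0] * n_pages               # stands in for Dest written to disk
              for b in range(n_blocks):
                  lo, hi = b * block_size, min((b + 1) * block_size, n_pages)
                  dest_block = [0.0] * (hi - lo)        # only this block must fit in memory
                  for src, outdegree, dests in partitioned_links[b]:
                      share = source[src] / outdegree   # full outdegree, not just this bucket's count
                      for d in dests:                   # every d falls inside [lo, hi)
                          dest_block[d - lo] += share
                  for i in range(hi - lo):
                      new_rank[lo + i] = c * dest_block[i] + (1 - c) / n_pages
              source = new_rank
          return source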

  35. Analysis of Block Algorithm • IO cost per iteration = B*|Source| + |Dest| + |Links|*(1+e) • e is the factor by which Links increased in size • typically 0.1-0.3 • depends on the number of blocks • the algorithm is roughly a nested-loops join

  36. Comparing the Algorithms

  37. PageRank Convergence…

  38. PageRank Convergence…

  39. Summary of Key Points • PageRank Iterative Algorithm • Rank Sinks • Efficiency of computation – Memory! • Single precision Numbers. • Don’t represent M* explicitly. • Break arrays into Blocks. • Minimize IO Cost. • Number of iterations of PageRank. • Weighting of PageRank vs. doc similarity.

  40. 2/24 Shopping at job fairs Push my resume [But] jobs aren't what I seek I will be your walking student advertisement Can't live on my research stipend Everybody wants a Google shirt HP, Amazon Pixar, Cray, and Ford I just can't decide Help me score the most free pens and free umbrellas or a coffee mug from Bell Labs Everybody wants a Google.. [Un]til I find a steady funder I'll make do with cheap-a## plunder Everybody wants a Google.. Wait! You will never never never need it It's free; I couldn't leave it Everybody wants a Google shirt Shameless corp'rate carrion crows Turn your backs and show your logos Everybody wants a Google shirt ("Everybody Wants a Google Shirt" is based on "Everybody Wants to Rule the World" by Tears for Fears. Alternate lyrics by Andy Collins, Kate Deibel, Neil Spring, Steve Wolfman, and Ken Yasuhara.)

  41. Discussion • What parts of Google did you find to be in line with what you learned until now? • What parts of Google were different?

  42. Beyond Google (and Pagerank) • Are backlinks a reliable metric of importance? • It is a “one-size-fits-all” measure of importance… • not user specific • not topic specific • there may be a discrepancy between backlinks and actual popularity (as measured in hits) • the “sense” of the link is ignored (this is okay if you think that all publicity is good publicity) • Mark Twain on classics • “A classic is something everyone wishes they had already read and no one actually has…” (paraphrase) • Google may be its own undoing… (why would I need backlinks when I know I can get to it through Google?) • Customization, customization, customization… • Yahoo says about their magic bullet (NYT 2/22/04): “If you type in flowers, do you want to buy flowers, plant flowers or see pictures of flowers?”

  43. The rest of the slides on Google as well as crawling were not specifically discussed one at a time, but have been discussed in essence (read “you are still responsible for them”)

  44. SPIDER CASE STUDY

  45. Robot (4) • How to extract URLs from a web page? Need to identify all possible tags and attributes that hold URLs. • Anchor tag: <a href="URL" …> … </a> • Option tag: <option value="URL" …> … </option> • Map: <area href="URL" …> • Frame: <frame src="URL" …> • Link to an image: <img src="URL" …> • Relative path vs. absolute path: <base href=…>
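
  A small sketch of exactly this extraction using Python's standard html.parser, covering the tags listed above and resolving relative paths against a <base href> when one is seen; the class name and attribute map are illustrative:

      from html.parser import HTMLParser
      from urllib.parse import urljoin

      URL_ATTRS = {"a": "href", "option": "value", "area": "href",
                   "frame": "src", "img": "src"}

      class URLExtractor(HTMLParser):
          def __init__(self, page_url):
              super().__init__()
              self.base = page_url          # default base: the page's own URL
              self.urls = []

          def handle_starttag(self, tag, attrs):
              attrs = dict(attrs)
              if tag == "base" and attrs.get("href"):
                  self.base = attrs["href"]        # later relative URLs resolve against this
              elif tag in URL_ATTRS and attrs.get(URL_ATTRS[tag]):
                  self.urls.append(urljoin(self.base, attrs[URL_ATTRS[tag]]))

      # usage: p = URLExtractor("http://example.com/a/"); p.feed(html_text); print(p.urls)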
