
Web Data Collection Methods for Big Data Analysis

Learn how to collect online data using web crawling, APIs, and download methods for big data analysis. Explore web crawling techniques, data acquisition from websites, API integration, and data collection limitations.


Presentation Transcript


  1. Big Data Analysis Lecture 3: Data Collection

  2. Outline of Today’s Lecture • Online data have become increasingly prevalent and are useful for many applications • Example applications: • Measurement: • User sentiment about a brand name, an organization, etc. • Event Detection and Monitoring: • Flu/disease outbreaks, earthquakes/tsunamis/wildfires, sporting events, etc. • Prediction: • Election outcomes, stock market movements, etc. • This lecture focuses on how to acquire data from the Web

  3. General Approaches • Download raw data files • Raw ASCII or binary files made available to the public for download • Crawling a website • Using automated programs to scour the data on a website • Follows the links on web pages to move to other pages • Search engines use this mechanism to index websites • Application Programming Interface (API) • Websites increasingly provide an API to gather their data

  4. Download Raw Data Files • Some websites provide easy access to their raw data

  5. Download Raw Data Files • Easy to automate the data downloading process

  6. Download Raw Data Files • Some websites require users to fill out a form to select the range of data to download • Harder to automate the data downloading process

  7. Download Raw Data Files • Browser automation tools are available to perform repetitious web clicks, autofill web forms, etc. • How does it work? • Records the actions you make when browsing a web site (including filling out forms, etc.) • Produces a script file that can be edited by the user • Allows the user to replay the script over and over again • Example: https://www.youtube.com/watch?v=2ncKQxD3xVM
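
  The video linked above demonstrates a record-and-replay tool. As a rough sketch of the same idea expressed directly in code, a script written with the Selenium browser-automation library might look like the following (the URL and form field names are placeholders, not taken from the lecture):

    # Sketch of browser automation with Selenium (assumed library; the page
    # and field names below are hypothetical placeholders).
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()                        # start a browser session
    driver.get("https://example.com/data-download")    # open the download form page

    # Fill out the form and submit it, as a recorded macro would replay
    driver.find_element(By.NAME, "start_date").send_keys("2016-01-01")
    driver.find_element(By.NAME, "end_date").send_keys("2016-12-31")
    driver.find_element(By.NAME, "submit").click()

    driver.quit()                                      # close the browser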

  8. Website Crawling • A crawler (also known as a Web robot or spider) is a computer program that automatically traverses the hyperlink structure of the World Wide Web to gather Web pages (e.g., for indexing by search engines) • Snowball sampling: start from one or more seed URLs and recursively extract hyperlinks to other URLs

  9. Anatomy of a Web Crawler • Initialize: append seed URLs to the queue • Repeat until the termination condition is met: • Dequeue: remove a URL from the queue • Fetch: retrieve the web page associated with the URL • Parse: extract URLs from the retrieved web page • Enqueue: append the extracted URLs to the queue
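
  A minimal sketch of this loop in Python, assuming the requests and beautifulsoup4 packages (the seed URL and page limit are illustrative, not from the lecture):

    # Queue-based crawler sketch following the steps above.
    from collections import deque
    from urllib.parse import urljoin

    import requests
    from bs4 import BeautifulSoup

    queue = deque(["https://example.com/"])    # Initialize: append seed URLs to queue
    visited = set()

    while queue and len(visited) < 100:        # Terminate? (here: stop after 100 pages)
        url = queue.popleft()                  # Dequeue: remove a URL from the queue
        if url in visited:
            continue
        visited.add(url)

        page = requests.get(url, timeout=10)   # Fetch: retrieve the web page
        soup = BeautifulSoup(page.text, "html.parser")

        for a in soup.find_all("a", href=True):        # Parse: extract URLs
            queue.append(urljoin(url, a["href"]))      # Enqueue: append extracted URLs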

  10. Robot Exclusion Protocol • Web crawlers can overwhelm a server with too many requests • The Robot Exclusion Protocol is a set of guidelines for robot behavior at a given Web site • Expressed in a special file located at the root directory of the web server (called robots.txt) that specifies the restrictions at a site • Allow: list of pages that can be accessed • Disallow: list of pages that should not be indexed • A “well-behaved” robot should follow the protocol • A robot can choose to ignore the file but will have to face the consequences – e.g., being blacklisted by the web site administrator
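
  A well-behaved crawler can check the file before fetching a page. Python's standard urllib.robotparser module performs this check; a small illustration (the URLs and user-agent name are placeholders):

    # Check robots.txt before fetching a URL.
    from urllib.robotparser import RobotFileParser

    rp = RobotFileParser("https://example.com/robots.txt")
    rp.read()                                   # download and parse robots.txt
    if rp.can_fetch("MyCrawler", "https://example.com/private/page.html"):
        print("Allowed to fetch")
    else:
        print("Disallowed by robots.txt")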

  11. Example: Robots.txt
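
  The file shown on the slide is not reproduced in the transcript; a typical robots.txt looks something like this (illustrative content only):

    User-agent: *
    Allow: /public/
    Disallow: /private/
    Disallow: /tmp/

    User-agent: BadBot
    Disallow: /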

  12. Meta Tags • META tags on a webpage also tell a crawler what not to do • Meta tags are placed between <head> … </head> tags in HTML • <META NAME="ROBOTS" CONTENT="NOFOLLOW"> • To not follow links on this page • <META NAME="GOOGLEBOT" CONTENT="NOINDEX"> • To not appear in Google’s index • <META NAME="GOOGLEBOT" CONTENT="NOARCHIVE"> • To not archive a copy in search results • Source: http://googleblog.blogspot.com/2007/02/robots-exclusion-protocol.html

  13. Python Web Crawlers • There are many Python libraries available • HTMLParser, lxml, BeautifulSoup, etc • Example: display all the links embedded in a web page
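
  The code on the slide is not reproduced in the transcript; a minimal sketch using the standard-library HTMLParser, one of the options listed above (the URL is illustrative):

    # Print every hyperlink found on a web page.
    from html.parser import HTMLParser
    from urllib.request import urlopen

    class LinkParser(HTMLParser):
        def handle_starttag(self, tag, attrs):
            if tag == "a":                     # anchor tags carry hyperlinks
                for name, value in attrs:
                    if name == "href":
                        print(value)

    html = urlopen("https://example.com/").read().decode("utf-8", errors="ignore")
    LinkParser().feed(html)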

  14. Application Programming Interface (API) • Wikipedia: an application programming interface (API) is a set of routines, protocols, and tools for building software and applications • An API defines a standard way for a program/application to request services from another program/application • Many websites provide APIs to access their data • Twitter: https://dev.twitter.com/ • Facebook: http://developers.facebook.com/ • Reddit: https://github.com/reddit/reddit/wiki/API

  15. Acceptable Use Policy • Each website has its own policy • Read through the whole policy before development • Important details to note: • Rate limit • Authentication key • When in doubt, ask. • Most APIs have message boards where you can ask the company or other developers.

  16. Rate Limiting • Limitation imposed by the API on how many requests can be made per day or per hour • If the rate is exceeded, the API returns an error • If the rate is constantly exceeded, the API blocks the IP address from making further requests • Examples: • Google Geocoding: 25,000 requests per day • Twitter REST API: 180 queries per 15-minute window • Reddit: 30 requests per minute
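
  A simple way to stay within a rate limit (not shown on the slides) is to pause and retry when the API reports that the limit was exceeded; a sketch with the requests library, assuming the API signals rate limiting with HTTP status 429:

    # Retry with exponentially increasing waits when rate limited.
    import time
    import requests

    def fetch_with_backoff(url, max_retries=5):
        delay = 1
        for _ in range(max_retries):
            response = requests.get(url)
            if response.status_code == 429:    # 429 = "too many requests"
                time.sleep(delay)
                delay *= 2
                continue
            return response
        return None                            # give up after max_retries attempts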

  17. Google Maps Geocoding API • Provides a service for geocoding and reverse geocoding of addresses. https://developers.google.com/maps/documentation/geocoding/start • Geocoding: the process of converting addresses into geographic coordinates (e.g., latitude and longitude) • Reverse geocoding: the process of converting geographic coordinates into an address • Example • You can use the requests or geocoder Python libraries

  18. Python Example 1

  19. Python Example 1
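
  The code from these two slides is not reproduced in the transcript; a sketch of the same request with the requests library (the address is illustrative and YOUR_API_KEY is a placeholder for your own Google API key):

    # Geocode a street address into latitude/longitude coordinates.
    import requests

    url = "https://maps.googleapis.com/maps/api/geocode/json"
    params = {"address": "1600 Amphitheatre Parkway, Mountain View, CA",
              "key": "YOUR_API_KEY"}

    result = requests.get(url, params=params).json()
    location = result["results"][0]["geometry"]["location"]
    print(location["lat"], location["lng"])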

  20. Python Example 2 • A simpler way is to use the geocoder library • > pip install geocoder • Other options: g.content, g.city, g.state, g.country, etc.
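
  The code on the slide is not reproduced in the transcript; a sketch of the same lookup with the geocoder library (the address is illustrative, and newer versions of the Google provider also require an API key):

    # Geocode with the geocoder library.
    import geocoder

    g = geocoder.google("1600 Amphitheatre Parkway, Mountain View, CA")
    print(g.latlng)                     # [latitude, longitude]
    print(g.city, g.state, g.country)   # other fields mentioned on the slide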

  21. Twitter API (version 1.1) • Streaming API • Twitter’s firehose delivers all tweets containing a given keyword or from specific users as they are posted on Twitter • Search (REST) API • Submit a query to Twitter • Returns last 15 tweets that satisfy the query

  22. Example: How to Use Twitter API • Step 1: Create an account on Twitter • Step 2: Register an application to obtain authentication keys (your app needs key and access tokens) • Step 3: Download the libraries (native to the programming language you want to use) • Step 4: Write your code using the functions provided by the libraries (see examples on how to call the functions in the libraries) • Step 5: Deploy the program

  23. Create a Twitter Account • You need a Twitter account to use the API • Go to apps.twitter.com and sign in (or create a new account if you don’t have one yet)

  24. Registering Your Twitter Application • After signing in, click on “Create a new application”

  25. Registering Your Twitter Application • Fill in the application details

  26. Registering Your Twitter Application

  27. Authentication Tokens from Twitter • Click on the “Keys and Access Tokens” tab • Click on the buttons to generate • Consumer key and secret • Access token and secret

  28. Authentication Tokens from Twitter • Click the Test OAuth button and note the consumer key, consumer secret, access token, and access token secret • These fields will be filled in with values specific to your application

  29. Python for Twitter API • You can install the tweepy library to query the Twitter API • pip install tweepy • For the Twitter Search (REST) API: • Import OAuthHandler and API from tweepy • Create an OAuthHandler object • Set the consumer keys and access tokens • Create an API object • Call api.search(query) to retrieve the tweets • For more information, go to http://docs.tweepy.org/en/v3.5.0/

  30. Python Twitter Search API Example
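
  The code on the slide is not reproduced in the transcript; a minimal sketch following the steps above with tweepy 3.x (the credential strings and the search keyword are placeholders):

    # Query the Twitter Search (REST) API with tweepy.
    from tweepy import OAuthHandler, API

    auth = OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")        # your app's keys
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")  # your access tokens
    api = API(auth)

    for tweet in api.search(q="big data", count=15):              # retrieve matching tweets
        print(tweet.user.screen_name, tweet.text)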

  31. JSON key fields • To obtain the individual keys (see the sketch after the next slide):

  32. JSON key fields • To obtain user information:
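
  The code from these two slides is not reproduced in the transcript; a sketch of how the individual keys and the nested user information can be read from a returned tweet, assuming the api object created in the search example above (field names are from Twitter's v1.1 tweet object):

    for tweet in api.search(q="big data", count=1):
        data = tweet._json                          # raw JSON dictionary behind the Status object
        print(list(data.keys()))                    # e.g. 'created_at', 'id', 'text', 'user', ...
        print(data["created_at"], data["text"])     # individual tweet fields
        user = data["user"]                         # nested dictionary with the author's profile
        print(user["screen_name"], user["followers_count"], user["location"])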

  33. Python Twitter Streaming API • For the Twitter Streaming API: • Create a class that inherits from the StreamListener class • Create a Stream object • Start the Stream • When using the Twitter Streaming API, you should: • Set a timer for the data collection (stop if it exceeds the time limit) • Save the output to a file (especially if many tweets are collected)

  34. Python Twitter Streaming API Example

  35. Python Twitter Streaming API Example
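
  The code from these two slides is not reproduced in the transcript; a sketch that follows the steps above with tweepy 3.x, saving each tweet to a file and stopping after a fixed number of tweets (a simple stand-in for the timer suggested earlier; the credentials and keyword are placeholders):

    # Collect tweets from the Streaming API and write them to a file.
    from tweepy import OAuthHandler, Stream
    from tweepy.streaming import StreamListener

    class SaveListener(StreamListener):
        def __init__(self):
            super().__init__()
            self.count = 0

        def on_data(self, data):
            with open("tweets.json", "a") as f:    # save the output to a file
                f.write(data)
            self.count += 1
            return self.count < 1000               # returning False stops the stream

        def on_error(self, status):
            print("Error:", status)
            return False                           # stop on errors such as rate limiting

    auth = OAuthHandler("CONSUMER_KEY", "CONSUMER_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_TOKEN_SECRET")

    stream = Stream(auth, SaveListener())          # create the Stream object
    stream.filter(track=["big data"])              # start streaming tweets matching a keyword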

  36. Summary • This lecture presents an overview of methods for downloading online data • Many websites provide APIs for users to download their data • Some require authentication; the user must register their app and use the OAuth protocol to authenticate access • Next lecture: • Storing and querying data with SQL
