
Introduction to Data Mining


Presentation Transcript


  1. Introduction to Data Mining

  2. Why Mine Data? Commercial Viewpoint • Lots of data is being collected and warehoused • Web data, e-commerce • purchases at department/grocery stores • Bank/Credit Card transactions • Twice as much information was created in 2002 as in 1999 (~30% growth rate) • Other growth rate estimates even higher

  3. Largest databases in 2003 • Commercial databases (Winter Corp. 2003 survey): • France Telecom has the largest decision-support DB, ~30 TB; • AT&T ~26 TB • Web • Alexa internet archive: 7 years of data, 500 TB • Google searches 4+ billion pages, many hundreds of TB • IBM WebFountain, 160 TB (2003) • Internet Archive (www.archive.org), ~300 TB

  4. Why Mine Data? Scientific Viewpoint • Data is collected and stored at enormous speeds (GB/hour). E.g. • remote sensors on a satellite • telescopes scanning the skies • scientific simulations generating terabytes of data • Very little data will ever be looked at by a human • Knowledge Discovery is NEEDED to make sense and use of data.

  5. Data Mining • Data mining is the process of automatically discovering useful information in large data repositories. • Human analysts may take weeks to discover useful information. • Much of the data is never analyzed at all. • [Slide figure: "The Data Gap" - total new disk (TB) since 1995 vs. number of analysts.] • From: R. Grossman, C. Kamath, V. Kumar, “Data Mining for Scientific and Engineering Applications”

  6. What is (not) Data Mining? • What is not Data Mining? • Look up phone number in phone directory • Query a Web search engine for information about “Amazon” • What is Data Mining? • Certain names are more prevalent in certain locations (O’Brien, O’Rourke, O’Reilly… in Boston area) • Group together similar documents returned by search engines according to their context

  7. Origins of Data Mining • Draws ideas from: • machine learning/AI, statistics, and database systems • Traditional techniques may be unsuitable due to • Enormity of data • High dimensionality of data • Heterogeneous, distributed nature of data • [Slide figure: Venn diagram placing data mining at the intersection of statistics, machine learning, and database systems.]

  8. Data Mining Tasks • Data mining tasks are generally divided into two major categories: • Predictive tasks [Use some attributes to predict unknown or future values of other attributes.] • Classification • Regression • Deviation Detection • Descriptive tasks [Find human-interpretable patterns that describe the data.] • Association Rule Discovery • Sequential Pattern Discovery • Clustering

  9. Predictive Data Mining or Supervised learning • Given a set of example input/output pairs, find a rule that does a good job of predicting the output associated with a new input. • Let's say you are given the weights and lengths of a bunch of individual salmon fish, and the weights and lengths of a bunch of individual tuna fish. • The job of a supervised learning system would be to find a predictive rule that, given the weight and length of a fish, would predict whether it was a salmon or a tuna.
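A minimal sketch of such a predictive rule, using a nearest-neighbor lookup in Python; the fish weights and lengths below are made-up values for illustration, not data from the slides:

```python
import math

# Hypothetical labeled examples: (weight_kg, length_cm) -> species.
training_data = [
    ((4.5, 75.0), "salmon"),
    ((5.0, 80.0), "salmon"),
    ((250.0, 200.0), "tuna"),
    ((300.0, 220.0), "tuna"),
]

def predict_species(weight, length):
    """Predict the species of a new fish as the label of its closest training example."""
    def distance(features):
        w, l = features
        return math.hypot(w - weight, l - length)
    _, nearest_label = min(training_data, key=lambda pair: distance(pair[0]))
    return nearest_label

print(predict_species(6.0, 78.0))     # -> "salmon"
print(predict_species(280.0, 210.0))  # -> "tuna"
```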

  10. Learning • We can think of at least three different problems involved in learning: • memory, • averaging, and • generalization.

  11. Example problem • Imagine that I'm trying to predict whether my neighbor is going to drive into work tomorrow, so I can ask for a ride. • Whether she drives into work seems to depend on the following attributes of the day: • temperature, • expected precipitation, • day of the week, • whether she needs to shop on the way home, • what she's wearing.

  12. Memory • Okay. Let's say we observe our neighbor on three days:

      Temp   Precip   Day   Shop   Clothes   Outcome
       25    None     Sat   No     Casual    Walk
       -5    Snow     Mon   Yes    Casual    Drive
       15    Snow     Mon   Yes    Casual    Walk
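As a sketch of pure memorization, the three observed days from the table can simply be stored and looked up verbatim (Python; the outcomes are taken from the table above, the unseen day is invented):

```python
# Memorization: store each observed day exactly as seen and look new days up verbatim.
observations = {
    (25, "None", "Sat", "No", "Casual"): "Walk",
    (-5, "Snow", "Mon", "Yes", "Casual"): "Drive",
    (15, "Snow", "Mon", "Yes", "Casual"): "Walk",
}

def recall(temp, precip, day, shop, clothes):
    """Return the remembered outcome for an identical day, or None if never seen before."""
    return observations.get((temp, precip, day, shop, clothes))

print(recall(-5, "Snow", "Mon", "Yes", "Casual"))  # -> "Drive" (matches a past day exactly)
print(recall(20, "Rain", "Tue", "No", "Formal"))   # -> None (no memory of this day)
```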

  13. Memory • Now, we find ourselves on a snowy -5 degree Monday, when the neighbor is wearing casual clothes and going shopping. • Do you think she's going to drive?

  14. Memory • The standard answer in this case is "yes". • This day is just like one of the ones we've seen before, and so it seems like a good bet to predict "yes." • This is about the most rudimentary form of learning, which is just to memorize the things you've seen before.

  15. Noisy Data • Things aren’t always as easy as they were in the previous case. • What if you get this set of noisy data? • Now, we are asked to predict what's going to happen. • We have certainly seen this case before. • But the problem is that it has had different answers. Our neighbor is not entirely reliable.

  16. Averaging • One strategy would be to predict the majority outcome. • The neighbor walked more times than she drove in this situation, so we might predict "walk".
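A minimal majority-vote sketch in Python; the list of outcomes is hypothetical, standing in for the noisy observations on the slide:

```python
from collections import Counter

# Hypothetical noisy observations of the very same situation: the neighbor
# sometimes walked and sometimes drove on apparently identical days.
outcomes_for_situation = ["Walk", "Drive", "Walk", "Walk", "Drive"]

def majority_outcome(outcomes):
    """Predict the outcome seen most often for this situation."""
    return Counter(outcomes).most_common(1)[0][0]

print(majority_outcome(outcomes_for_situation))  # -> "Walk" (3 walks vs. 2 drives)
```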

  17. Generalization • Dealing with previously unseen cases • Will she walk or drive? • We might plausibly make any of the following arguments: • She's going to walk because it's raining today and the only other time it rained, she walked. • She's going to drive because she has always driven on Mondays…

  18. Classification: Definition • Given a collection of records (training set) • Each record contains a set of attributes, one of the attributes is the class. • Find a model for class attribute as a function of the values of other attributes. • Goal: previously unseen records should be assigned a class as accurately as possible. • A test set is used to determine the accuracy of the model. • Usually, the given data set is divided into training and test sets, with training set used to build the model and test set used to validate it.
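A minimal sketch of this train/test workflow, assuming scikit-learn is available; the attribute values and class labels are synthetic and only illustrate the mechanics:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic records: two numeric attributes plus a class attribute.
X = [[25, 0], [-5, 1], [15, 1], [30, 0], [-10, 1], [5, 0], [20, 0], [-2, 1]]
y = ["Walk", "Drive", "Walk", "Walk", "Drive", "Walk", "Walk", "Drive"]

# Divide the given data into a training set (build the model) and a test set (validate it).
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = DecisionTreeClassifier().fit(X_train, y_train)   # learn a model for the class attribute
print(accuracy_score(y_test, model.predict(X_test)))     # accuracy on previously unseen records
```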

  19. Classification: Another Example • [Slide diagram: a training set whose records have two categorical attributes, one continuous attribute, and a class label is fed to a learning algorithm; the learned classifier (model) is then applied to a test set.]

  20. Example of a Decision Tree • Model: Decision Tree • Training Data: records with two categorical attributes (Refund, MarSt), one continuous attribute (TaxInc), and the class attribute (Cheat). • Splitting attributes: Refund = Yes → NO; Refund = No → MarSt: Married → NO; Single, Divorced → TaxInc: < 80K → NO, > 80K → YES.
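The tree on this slide can be read directly as nested if/else tests; here is a sketch of that structure in Python (the attribute names and the 80K threshold come from the slide, the function name is mine):

```python
def classify_cheat(refund, marital_status, taxable_income):
    """Decision tree from the slide: split on Refund, then MarSt, then TaxInc."""
    if refund == "Yes":
        return "No"
    # Refund == "No": split on marital status.
    if marital_status == "Married":
        return "No"
    # Single or Divorced: split on taxable income.
    if taxable_income < 80_000:
        return "No"
    return "Yes"
```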

  21.–26. Apply Model to Test Data • [Slide sequence: starting from the root of the tree, the test record is routed through the splits (Refund, then MarSt, then TaxInc as needed) until it reaches a leaf; on the final slide the model assigns Cheat = “No” to the test record.]
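Using the classify_cheat sketch from slide 20 above, applying the model is just a call with the test record's attribute values; the record below is hypothetical, since the actual test record appears only in the slide image:

```python
# Hypothetical test record; the Married branch decides the class, so taxable income is not reached.
prediction = classify_cheat(refund="No", marital_status="Married", taxable_income=90_000)
print(prediction)  # -> "No", matching the assignment Cheat = "No" on the final slide
```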

  27. Classification: Application 1 • Direct Marketing • Goal: Reduce cost of mailing by targeting a set of consumers likely to buy a new cell-phone product. • Approach: • Use the data for a similar product introduced before. • We know which customers decided to buy and which decided otherwise. This {buy, don’t buy} decision forms the class attribute. • Collect various demographic, lifestyle, and other related information about all such customers. E.g. • Type of business, • where they stay, • how much they earn, etc. • Use this information as input attributes to learn a classifier model.

  28. Classification: Application 2 • Fraud Detection • Goal: Predict fraudulent cases in credit card transactions. • Approach: • Use credit card transactions and the information associated with them as attributes: • when does a customer buy, • what does he buy, • where does he buy, etc. • Label some past transactions as fraud or fair transactions. This forms the class attribute. • Learn a model for the class of the transactions. • Use this model to detect fraud by observing credit card transactions on an account.

  29. Classification: Application 3 • Customer Attrition/Churn: • Situation: Attrition rate for mobile phone customers is around 25-30% a year! • Goal: To predict whether a customer is likely to be lost to a competitor. • Approach: • Use detailed records of transactions with each of the past and present customers to find attributes. E.g. • How often the customer calls, • where he calls, • what time of the day he calls most, • his financial status, • marital status, etc. • Label the customers as loyal or disloyal. • Find a model for loyalty. • Success story: • Verizon Wireless built a customer data warehouse • Identified potential attriters • Developed multiple, regional models • Targeted customers with high propensity to accept the offer • Reduced attrition rate from over 2%/month to under 1.5%/month (huge impact, with >30 M subscribers) • (Reported in 2003)

  30. Classification: Application 4 • Sky Survey Cataloging • Goal: To predict class (star or galaxy) of sky objects, especially visually faint ones, based on the telescopic survey images (from Palomar Observatory). • 3000 images with 23,040 x 23,040 pixels per image. • Approach: • Segment the image. • Measure image attributes (features) - 40 of them per object. • Model the class based on these features. • Success Story: Could find 16 new high red-shift quasars, some of the farthest objects that are difficult to find!

  31. Assessing Credit Risk • Situation: Person applies for a loan • Task: Should a bank approve the loan? • Notes: • People who have the best credit don’t need the loans • People with the worst credit are not likely to repay. • Bank’s best customers are in the middle • Banks develop credit models using a variety of data mining methods. • Mortgage and credit card proliferation are the results of being able to successfully predict if a person is likely to default on a loan. • Widely deployed in many countries.

  32. Association Rule Discovery: Definition • Given a set of records, each of which contains some number of items from a given collection; • Produce dependency rules that will predict the occurrence of an item based on occurrences of other items. • Rules discovered: {Milk} --> {Coke}; {Diaper, Milk} --> {Beer}
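A minimal sketch of how support and confidence can be computed for such rules in Python; the transactions below are hypothetical, standing in for the market-basket table shown on the slide:

```python
# Hypothetical market-basket transactions (the actual table appears only in the slide image).
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support(itemset):
    """Fraction of transactions that contain every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """How often the consequent occurs in transactions that contain the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

print(confidence({"Diaper", "Milk"}, {"Beer"}))  # ~0.67 for these example transactions
print(confidence({"Milk"}, {"Coke"}))            # 0.5 for these example transactions
```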

  33. Association Rule Discovery: Application 1 • Marketing and Sales Promotion: • Let the rule discovered be • {Bagels, … } --> {Potato Chips} • Potato Chips as consequent => • Can be used to determine what should be done to boost its sales. • Bagels in the antecedent => • Can be used to see which products would be affected if the store discontinues selling bagels.

  34. Association Rule Discovery: Application 2 • Inventory Management: • Goal: A consumer appliance repair company wants to anticipate the nature of repairs on its consumer products and keep the service vehicles equipped with the right parts to reduce the number of visits to consumer households. • Approach: Process the data on tools and parts required in previous repairs at different consumer locations and discover the co-occurrence patterns.

  35. Clustering • Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that • Data points in one cluster are more similar to one another. • Data points in separate clusters are less similar to one another. • Similarity Measures: • Euclidean Distance if attributes are continuous. • Other Problem-specific Measures. • E.g. [Slide figure: Euclidean distance based clustering in 3-D space; intracluster distances are minimized, intercluster distances are maximized.]
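A bare-bones k-means sketch in Python that clusters 3-D points by Euclidean distance; the points and the choice of k-means (one of several possible clustering algorithms) are illustrative assumptions, not something specified on the slide:

```python
import math
import random

def euclidean(p, q):
    """Euclidean distance between two points with continuous attributes."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def k_means(points, k, iterations=20, seed=0):
    """Assign each point to its nearest centroid, then recompute centroids, and repeat."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iterations):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: euclidean(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(dim) / len(c) for dim in zip(*c)) if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return clusters

# Two well-separated groups of 3-D points (made up for illustration).
points = [(1, 1, 1), (1, 2, 1), (2, 1, 2), (8, 8, 9), (9, 8, 8), (8, 9, 9)]
print(k_means(points, k=2))  # intracluster distances small, intercluster distances large
```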

  36. Clustering: Application 1 • Market Segmentation: • Goal: subdivide a market into distinct subsets of customers where any subset may conceivably be selected as a market target to be reached with a distinct marketing mix. • Approach: • Collect different attributes of customers based on their geographical and lifestyle related information. • Find clusters of similar customers.

  37. Clustering: Application 2 • Document Clustering: • Goal: To find groups of documents that are similar to each other based on the important words appearing in them. • Approach: • Identify frequently occurring words in each document. • Form a similarity measure based on the frequencies of different terms. Use it to cluster. • Gain: Information Retrieval can utilize the clusters to relate a new document to clustered documents. • [Slide example: each article is represented as a set of word-frequency pairs (w, c); the data set has two natural clusters: the first four articles are news about the economy, the last four are news about health care.]
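A minimal sketch of the word-frequency representation and a term-based similarity measure in Python; the article snippets are invented stand-ins for the economy and health-care articles described on the slide:

```python
import math
from collections import Counter

# Invented article snippets standing in for the slide's economy and health-care articles.
docs = [
    "economy markets stocks trade growth",
    "stocks trade economy inflation markets",
    "health care hospital patients doctors",
    "patients doctors hospital health insurance",
]

def word_counts(text):
    """Represent a document as (word, frequency) pairs."""
    return Counter(text.lower().split())

def cosine_similarity(a, b):
    """Similarity based on the frequencies of terms the two documents share."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

vectors = [word_counts(d) for d in docs]
print(cosine_similarity(vectors[0], vectors[1]))  # high: both are economy articles
print(cosine_similarity(vectors[0], vectors[2]))  # 0.0 here: no shared terms
```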
