
Fast Algorithms for Mining Association Rules


Presentation Transcript


  1. Fast Algorithms for Mining Association Rules Rakesh Agrawal Ramakrishnan Srikant

  2. Data Mining Seminar 2003 Outline • Introduction • Formal statement • Apriori Algorithm • AprioriTid Algorithm • Comparison • AprioriHybrid Algorithm • Conclusions

  3. Data Mining Seminar 2003 Introduction • Bar-code technology • Mining association rules over basket data (1993) • Tires ∧ accessories ⇒ automotive service • Cross-marketing, attached mailings • Very large databases

  4. Data Mining Seminar 2003 Notation • Items – I = {i1,i2,…,im} • Transaction – set of items • Items are sorted lexicographically • TID – unique identifier for each transaction

  5. Data Mining Seminar 2003 Notation • Association Rule – X ⇒ Y, where X ⊂ I, Y ⊂ I, and X ∩ Y = ∅

  6. Data Mining Seminar 2003 Confidence and Support • Association rule X ⇒ Y has confidence c if c% of the transactions in D that contain X also contain Y. • Association rule X ⇒ Y has support s if s% of the transactions in D contain X ∪ Y.

  7. Data Mining Seminar 2003 Notice • X ⇒ A doesn't mean X ∪ Y ⇒ A • It may not have minimum support • X ⇒ A and A ⇒ Z don't imply X ⇒ Z • It may not have minimum confidence

  8. Data Mining Seminar 2003 Define the Problem Given a set of transactions D, generate all association rules that have support and confidence greater than the user-specified minimum support and minimum confidence.

  9. Data Mining Seminar 2003 Previous Algorithms • AIS • SETM • Knowledge Discovery • Induction of Classification Rules • Discovery of causal rules • Fitting of function to data • KID3 – machine learning

  10. Data Mining Seminar 2003 Discovering all Association Rules • Find all Large itemsets • itemsets with support above minimum support. • Use Large itemsets to generate the rules.

  11. Data Mining Seminar 2003 General idea • Say ABCD and AB are large itemsets • Compute conf = support(ABCD) / support(AB) • If conf >= minconf, AB ⇒ CD holds.
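A tiny worked example of this check in Python, with made-up support fractions (the dictionary values and the 0.6 threshold are purely illustrative, not from the paper):

```python
# Minimal illustration of the confidence test above; the support values
# are invented for the example, not taken from the paper.
support = {
    frozenset("AB"): 0.40,    # fraction of transactions containing {A, B}
    frozenset("ABCD"): 0.25,  # fraction of transactions containing {A, B, C, D}
}
min_conf = 0.6

conf = support[frozenset("ABCD")] / support[frozenset("AB")]
if conf >= min_conf:
    print("AB => CD holds with confidence", conf)  # 0.625 >= 0.6
```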

  12. Data Mining Seminar 2003 Discovering Large Itemsets • Multiple passes over the data • First pass – count the support of individual items. • Subsequent passes • Generate candidates using the previous pass's large itemsets. • Go over the data and check the actual support of the candidates. • Stop when no new large itemsets are found.

  13. Data Mining Seminar 2003 The Trick Any subset of a large itemset is large. Therefore, to find the large k-itemsets: • Create candidates by combining large (k-1)-itemsets. • Delete those that contain any subset that is not large.

  14. Data Mining Seminar 2003 Algorithm Apriori • Count item occurrences • Generate new k-itemset candidates • Find the support of all the candidates • Take only those with support over minsup
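A minimal Python sketch of this loop, assuming transactions are given as sets of items and support is an absolute count; `apriori_gen` is the candidate generator sketched after the next slide, and all names here are my own, not the paper's notation:

```python
from collections import defaultdict

def apriori(transactions, min_support):
    """Sketch of the Apriori main loop. `transactions` is a list of sets of
    items; `min_support` is an absolute count. Relies on apriori_gen(),
    sketched after the next slide."""
    # First pass: count item occurrences and keep the large 1-itemsets.
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[frozenset([item])] += 1
    large_k = {c for c, n in counts.items() if n >= min_support}
    support = {c: counts[c] for c in large_k}   # all large itemsets found so far

    k = 2
    while large_k:
        # Generate new k-itemset candidates from the large (k-1)-itemsets.
        candidates = apriori_gen(large_k, k)
        # Find the support of all the candidates with one pass over the data.
        counts = defaultdict(int)
        for t in transactions:
            for c in candidates:
                if c <= t:                      # candidate contained in transaction
                    counts[c] += 1
        # Take only those with support over min_support.
        large_k = {c for c, n in counts.items() if n >= min_support}
        support.update({c: counts[c] for c in large_k})
        k += 1
    return support                              # itemset -> support count
```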

  15. Data Mining Seminar 2003 Candidate generation • Join step – p and q are two large (k-1)-itemsets identical in their first k-2 items; join them by adding the last item of q to p. • Prune step – check all the (k-1)-subsets of each candidate and remove any candidate with a "small" subset.
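A sketch of candidate generation along these lines (the frozenset representation and the function name are my choices for illustration):

```python
from itertools import combinations

def apriori_gen(large_prev, k):
    """Candidate generation: join large (k-1)-itemsets that agree on their
    first k-2 items, then prune candidates that have a (k-1)-subset which
    is not large. `large_prev` is a set of frozensets of size k-1."""
    prev = sorted(tuple(sorted(s)) for s in large_prev)

    # Join step: p and q share their first k-2 items; add q's last item to p.
    joined = set()
    for i, p in enumerate(prev):
        for q in prev[i + 1:]:
            if p[:-1] == q[:-1]:
                joined.add(frozenset(p + (q[-1],)))

    # Prune step: remove any candidate with a "small" (k-1)-subset.
    return {c for c in joined
            if all(frozenset(sub) in large_prev for sub in combinations(c, k - 1))}
```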

  16. Data Mining Seminar 2003 Example L3 = { {1 2 3}, {1 2 4}, {1 3 4}, {1 3 5}, {2 3 4} } • After joining: { {1 2 3 4}, {1 3 4 5} } • After pruning: { {1 2 3 4} } – {1 3 4 5} is removed because {1 4 5} and {3 4 5} are not in L3.
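Feeding this L3 into the `apriori_gen` sketch above reproduces the slide's result:

```python
L3 = {frozenset(s) for s in ({1, 2, 3}, {1, 2, 4}, {1, 3, 4}, {1, 3, 5}, {2, 3, 4})}
print(apriori_gen(L3, 4))
# -> {frozenset({1, 2, 3, 4})}
# {1, 3, 4, 5} survives the join but is pruned, because its subsets
# {1, 4, 5} and {3, 4, 5} are not in L3.
```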

  17. Data Mining Seminar 2003 Correctness Show that: • Any subset of a large itemset must also be large. • The join is equivalent to extending Lk-1 with all items and then removing those candidates whose (k-1)-subsets are not in Lk-1. • Joining only itemsets that share their first k-2 items prevents duplications.

  18. Data Mining Seminar 2003 Subset Function • Candidate itemsets Ck are stored in a hash-tree • Finds in O(k) time whether a candidate itemset of size k is contained in transaction t. • Total time O(max(k, size(t)))
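A simplified sketch of such a hash tree (the bucket count, the leaf-split threshold and the method names are arbitrary illustrative choices; the paper's structure is more carefully tuned):

```python
class HashTreeNode:
    """Simplified hash tree for candidate k-itemsets: interior nodes hash on
    one item per level, leaves store candidate itemsets (as sorted tuples)."""

    def __init__(self, depth=0, max_leaf=4, buckets=7):
        self.depth, self.max_leaf, self.buckets = depth, max_leaf, buckets
        self.children = {}   # bucket -> HashTreeNode; non-empty for interior nodes
        self.itemsets = []   # candidates stored at a leaf

    def insert(self, itemset):
        if self.children:                                     # interior node
            b = hash(itemset[self.depth]) % self.buckets
            self.children.setdefault(
                b, HashTreeNode(self.depth + 1, self.max_leaf, self.buckets)
            ).insert(itemset)
        else:                                                 # leaf node
            self.itemsets.append(itemset)
            if len(self.itemsets) > self.max_leaf and self.depth < len(itemset):
                stored, self.itemsets = self.itemsets, []     # split the leaf
                for s in stored:
                    b = hash(s[self.depth]) % self.buckets
                    self.children.setdefault(
                        b, HashTreeNode(self.depth + 1, self.max_leaf, self.buckets)
                    ).insert(s)

    def collect(self, suffix, transaction, out):
        """Add to `out` every stored candidate contained in `transaction`;
        `suffix` is the part of the sorted transaction still usable for branching."""
        if not self.children:                                 # leaf: explicit check
            out.update(s for s in self.itemsets if set(s) <= transaction)
            return
        for i, item in enumerate(suffix):                     # hash on each item
            b = hash(item) % self.buckets
            if b in self.children:
                self.children[b].collect(suffix[i + 1:], transaction, out)

def candidates_in(tree, transaction):
    """Return the candidates stored in `tree` contained in `transaction`."""
    t = tuple(sorted(transaction))
    out = set()
    tree.collect(t, set(t), out)
    return out
```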

  19. Data Mining Seminar 2003 Problem? • Every pass goes over the whole data.

  20. Data Mining Seminar 2003 Algorithm AprioriTid • Uses the database only once. • Builds a storage set C^k • Members have the form < TID, {Xk} > • Xk are potentially large k-itemsets in transaction TID. • For k=1, C^1 is the database, with each item replaced by the itemset of size 1. • Uses C^k in pass k+1.

  21. Data Mining Seminar 2003 Advantage • C^k could be smaller than the database. • If a transaction does not contain any k-itemset candidates, then it will be excluded from C^k. • For large k, each entry may be smaller than the transaction • The transaction might contain only a few candidates.

  22. Data Mining Seminar 2003 Disadvantage • For small k, each entry may be larger than the corresponding transaction. • An entry includes all k-itemsets contained in the transaction.

  23. Data Mining Seminar 2003 Algorithm AprioriTid • Count item occurrences • The storage set is initialized with the database • Generate new k-itemset candidates • Build a new storage set • Determine the candidate itemsets contained in transaction TID • Find the support of all the candidates • Remove empty entries • Take only those with support over minsup
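A rough Python sketch of one AprioriTid pass following these steps, assuming the storage set C^k is represented as a list of (TID, set-of-itemsets) pairs and Ck comes from the `apriori_gen` sketch of slide 15; the names are mine, not the paper's:

```python
from collections import defaultdict

def apriori_tid_pass(storage_prev, candidates, min_support, k):
    """One AprioriTid pass. `storage_prev` is C^(k-1): a list of
    (TID, set of frozenset (k-1)-itemsets) pairs; `candidates` is Ck.
    Returns the large k-itemsets with their counts and the new storage set C^k."""
    counts = defaultdict(int)
    storage_k = []
    for tid, prev_itemsets in storage_prev:
        # Determine the candidates contained in transaction TID: a candidate c
        # is there iff both (k-1)-subsets obtained by dropping its last or
        # second-to-last item were found in the previous pass.
        ct = set()
        for c in candidates:
            items = sorted(c)
            if (frozenset(items[:-1]) in prev_itemsets and
                    frozenset(items[:-2] + items[-1:]) in prev_itemsets):
                ct.add(c)
                counts[c] += 1
        if ct:                                  # remove empty entries
            storage_k.append((tid, ct))
    large_k = {c: n for c, n in counts.items() if n >= min_support}
    return large_k, storage_k

# For k = 1 the storage set is initialized from the database itself:
# storage_1 = [(tid, {frozenset([i]) for i in t}) for tid, t in enumerate(transactions)]
```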

  24. Data Mining Seminar 2003 [Worked example (figure): the database and the sets C^1, L1, C2, C^2, L2, C3, C^3, L3 computed from it]

  25. Data Mining Seminar 2003 Correctness • Show that the set Ct generated in the kth pass is the same as the set of candidate k-itemsets in Ck contained in the transaction with identifier t.TID

  26. Data Mining Seminar 2003 Correctness • C^k is correct if, for every entry t of C^k, t.set-of-itemsets doesn't include any k-itemset not contained in the transaction with t.TID. • C^k is complete if, for every entry t of C^k, t.set-of-itemsets includes all large k-itemsets contained in the transaction with t.TID. • Lk is correct if it is the same as the set of all large k-itemsets. • Lemma 1: for k > 1, if C^k-1 is correct and complete and Lk-1 is correct, then the set Ct generated at the kth pass is the same as the set of candidate k-itemsets in Ck contained in the transaction with t.TID.

  27. Data Mining Seminar 2003 Proof (completeness) Suppose a candidate itemset c = c[1]c[2]…c[k] is in the transaction with t.TID. Then c1 = (c - c[k]) and c2 = (c - c[k-1]) are also in that transaction. Since Ck was built using apriori-gen(Lk-1), all (k-1)-subsets of c must be large, so c1 and c2 are large. Because C^k-1 is complete, c1 and c2 are members of t.set-of-itemsets, and therefore c will be a member of Ct.

  28. Data Mining Seminar 2003 Proof (correctness) Suppose c of Ck is not in the transaction with t.TID. Then c1 (or c2) is not in that transaction, and since C^k-1 is correct, it is not in t.set-of-itemsets, so c will not be a member of Ct.

  29. Data Mining Seminar 2003 Correctness Lemma 2: for k > 1, if Lk-1 is correct and the set Ct generated in the kth pass is the same as the set of candidate k-itemsets in Ck contained in the transaction with t.TID, then the set C^k is correct and complete.

  30. Data Mining Seminar 2003 Proof • Apriori-gen guarantees Ck ⊇ Lk, so Ct includes all large k-itemsets contained in the transaction with t.TID; these are added to C^k, so C^k is complete. • Ct includes only itemsets contained in the transaction with t.TID, and only itemsets in Ct are added to C^k, so C^k is correct.

  31. Data Mining Seminar 2003 Correctness Theorem 1: for k > 1, the set Ct generated in the kth pass is the same as the set of candidate k-itemsets in Ck contained in the transaction with t.TID. Show: C^k is correct and complete and Lk is correct for all k >= 1.

  32. Data Mining Seminar 2003 Proof (by induction on k) • k=1: C^1 is the database, so it is correct and complete, and L1 is correct. • Assume the claim holds for k = n. • By Lemma 1, the Ct generated in pass n+1 consists of exactly those itemsets in Cn+1 contained in the transaction with t.TID. • Since apriori-gen guarantees Cn+1 ⊇ Ln+1 and Ct is correct, Ln+1 is correct, and by Lemma 2 C^n+1 is correct and complete. • Hence C^k is correct and complete and Lk is correct for all k >= 1, and the theorem holds.

  33. Data Mining Seminar 2003 General idea (reminder) • Say ABCD and AB are large itemsets • Compute conf = support(ABCD) / support(AB) • If conf >= minconf • AB ⇒ CD holds.

  34. Data Mining Seminar 2003 Discovering Rules • For every large itemset l • Find all non-empty subsets of l. • For every subset a • Produce rule a ⇒ (l - a) • Accept if support(l) / support(a) >= minconf
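A brute-force sketch of this procedure, assuming `large_itemsets` maps every large itemset (a frozenset) to its support count, e.g. the output of the Apriori sketch above:

```python
from itertools import chain, combinations

def gen_rules_simple(large_itemsets, min_conf):
    """For every large itemset l and every non-empty proper subset a,
    output a => (l - a) when support(l) / support(a) >= min_conf."""
    rules = []
    for l, sup_l in large_itemsets.items():
        if len(l) < 2:
            continue
        # All non-empty proper subsets of l.
        subsets = chain.from_iterable(combinations(l, r) for r in range(1, len(l)))
        for a in map(frozenset, subsets):
            conf = sup_l / large_itemsets[a]   # every subset of l is also large
            if conf >= min_conf:
                rules.append((a, l - a, conf))
    return rules
```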

  35. Data Mining Seminar 2003 Checking the subsets • For efficiency, generate subsets using a recursive DFS. If a subset 'a' doesn't produce a rule, we don't need to check subsets of 'a'. Example Given itemset: ABCD If ABC ⇒ D doesn't have enough confidence then surely AB ⇒ CD won't hold

  36. Data Mining Seminar 2003 Why? For any subset â of a: support(â) >= support(a), so confidence(â ⇒ (l - â)) = support(l) / support(â) <= support(l) / support(a) = confidence(a ⇒ (l - a))

  37. Data Mining Seminar 2003 Simple Algorithm • Check all the large itemsets • Check all the subsets • Check the confidence of each new rule • Output the rule • Continue the DFS over the subsets • If the rule does not have minimum confidence, the DFS branch is cut here
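A sketch of the same procedure with the DFS cut described here; the `seen` set, which avoids revisiting an antecedent reachable along several branches, is a detail of this sketch, not something the slide prescribes:

```python
def gen_rules_dfs(large_itemsets, min_conf):
    """Like gen_rules_simple, but a branch of the DFS over subsets is cut
    as soon as an antecedent fails the confidence test."""
    rules = []

    def expand(l, sup_l, antecedent, seen):
        # Try every antecedent obtained by dropping one more item.
        for item in antecedent:
            a = antecedent - {item}
            if not a or a in seen:
                continue
            seen.add(a)
            conf = sup_l / large_itemsets[a]
            if conf >= min_conf:
                rules.append((a, l - a, conf))
                expand(l, sup_l, a, seen)      # recurse only below holding rules

    for l, sup_l in large_itemsets.items():
        if len(l) >= 2:
            expand(l, sup_l, l, set())
    return rules
```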

  38. Data Mining Seminar 2003 Faster Algorithm Idea: if (l - c) ⇒ c holds, then all the rules (l - ĉ) ⇒ ĉ, where ĉ is a non-empty subset of c, must also hold. Example: if AB ⇒ CD holds, then so do ABC ⇒ D and ABD ⇒ C

  39. Data Mining Seminar 2003 Faster Algorithm • From a large itemset l, • Generate all rules with one item in their consequent. • Use those consequents and apriori-gen to generate all possible two-item consequents. • Etc. • The candidate set of the faster algorithm is a subset of the candidate set of the simple algorithm.

  40. Data Mining Seminar 2003 Faster algorithm • Find all one-item consequents (using one pass of the simple algorithm) • Generate new (m+1)-item consequents with apriori-gen • Check the confidence of each new rule • Continue with bigger consequents • If a consequent doesn't hold, don't look for bigger ones.
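A sketch of this procedure, reusing the `apriori_gen` sketch from slide 15 to grow the consequents; the recursion structure is my rendering of the slide, not the paper's exact pseudocode:

```python
def gen_rules_fast(large_itemsets, min_conf):
    """Generate rules by growing consequents: start from one-item consequents
    and extend only the ones that held, using apriori_gen (slide 15)."""
    rules = []

    def grow(l, sup_l, consequents, m):
        if len(l) <= m + 1 or not consequents:
            return
        held = set()
        for h in apriori_gen(consequents, m + 1):   # candidate (m+1)-consequents
            conf = sup_l / large_itemsets[l - h]
            if conf >= min_conf:
                rules.append((l - h, h, conf))
                held.add(h)
        grow(l, sup_l, held, m + 1)                 # don't extend failed consequents

    for l, sup_l in large_itemsets.items():
        if len(l) < 2:
            continue
        # One-item consequents, checked directly (one pass of the simple algorithm).
        h1 = set()
        for item in l:
            h = frozenset([item])
            conf = sup_l / large_itemsets[l - h]
            if conf >= min_conf:
                rules.append((l - h, h, conf))
                h1.add(h)
        grow(l, sup_l, h1, 1)
    return rules
```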

  41. Data Mining Seminar 2003 Advantage Example • Large itemset: ABCDE • One-item consequent rules that hold: ACDE ⇒ B, ABCE ⇒ D • The simple algorithm will check: ABC ⇒ DE, ABE ⇒ CD, BCE ⇒ AD and ACE ⇒ BD. • The faster algorithm will check only ACE ⇒ BD, which is also the only rule that holds.

  42. Data Mining Seminar 2003 Example • Simple algorithm – from the large itemset ABCDE, the rules with minimum confidence ACDE ⇒ B and ABCE ⇒ D lead to checking CDE ⇒ AB, ADE ⇒ BC, ACD ⇒ BE, BCE ⇒ AD, ABE ⇒ CD, ABC ⇒ DE, and ACE ⇒ BD (reached twice). • Fast algorithm – checks ACDE ⇒ B, ABCE ⇒ D, and then only ACE ⇒ BD.

  43. Data Mining Seminar 2003 Results • Compare the performance of Apriori and AprioriTid to each other and to the previously known algorithms AIS and SETM. • The algorithms differ in the method of generating all large itemsets: AIS and SETM both generate candidates "on-the-fly" while scanning the data, and SETM was designed for use over SQL.

  44. Data Mining Seminar 2003 Method • Check the algorithms on the same databases • Synthetic data • Real data

  45. Data Mining Seminar 2003 Synthetic Data • Choose the parameters to be compared. • Transaction sizes, and large itemsets sizes are each clustered around a mean. • Parameters for data generation • D – Number of transactions • T – Average size of the transaction • I – Average size of the maximal potentially large itemsets • L – Number of maximal potentially large itemsets • N – Number of Items.

  46. Data Mining Seminar 2003 Synthetic Data • Experiment values: N = 1000, L = 2000 • Datasets: T5.I2.D100k, T10.I2.D100k, T10.I4.D100k, T20.I2.D100k, T20.I4.D100k, T20.I6.D100k • The name encodes the parameters, e.g. T5.I2.D100k means T = 5, I = 2, D = 100,000.

  47. Data Mining Seminar 2003 [Execution-time graphs] • SETM values are too big to fit in the graphs. • Apriori always beats AIS • Apriori is better than AprioriTid on large problems

  48. Data Mining Seminar 2003 Explaining the Results • AprioriTid uses C^k instead of the database. If C^k fits in memory, AprioriTid is faster than Apriori. • When C^k is too big to fit in memory, the computation time is much longer, so Apriori is faster than AprioriTid.

  49. Data Mining Seminar 2003 Reality Check • Retail sales • 63 departments • 46873 transactions (avg. size 2.47) • Small database, C^k fits in memory.

  50. Data Mining Seminar 2003 Reality Check • Mail customer: 15836 items, 213972 transactions (avg. size 31) • Mail order: 15836 items, 2.9 million transactions (avg. size 2.62)
