
732A02 Data Mining - Clustering and Association Analysis








  1. 732A02 Data Mining - Clustering and Association Analysis • Association rules • Apriori algorithm • FP-growth algorithm • Jose M. Peña, jospe@ida.liu.se

  2. Association rules • Mining some data for frequent patterns. • In our case, patterns will be rules of the form antecedent → consequent, with only conjunctions of bought items in the antecedent and consequent, e.g. milk ∧ eggs → bread ∧ butter. The items that appear in such a rule together form a frequent itemset. • Applications: e.g., market basket analysis (to support business decisions): • Rules with “Coke” in the consequent may help to decide how to boost sales of “Coke”. • Rules with “bagels” in the antecedent may help to determine what happens if “bagels” are sold out.

  3. Association rules • [Figure: Venn diagram of the customers who buy beer, the customers who buy diapers, and the customers who buy both.] • Goal: find all the rules X → Y with minimum support and confidence • support = p(X, Y) = probability that a transaction contains X ∪ Y • confidence = p(Y | X) = conditional probability that a transaction having X also contains Y = p(X, Y) / p(X). • Let sup_min = 50% and conf_min = 50%. Association rules: • A → D (60%, 100%) • D → A (60%, 75%)
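
A minimal Python sketch of how support and confidence can be computed for a rule X → Y; the four-transaction list is made up for illustration and is not the database behind the A → D example above.

```python
# A made-up list of transactions, only to illustrate the two measures.
transactions = [
    {"beer", "diaper", "milk"},
    {"beer", "diaper"},
    {"diaper", "bread"},
    {"beer", "cola"},
]

def support(itemset, transactions):
    """Fraction of transactions that contain every item of `itemset`."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent, transactions):
    """p(consequent | antecedent) = support(antecedent ∪ consequent) / support(antecedent)."""
    return support(antecedent | consequent, transactions) / support(antecedent, transactions)

print(support({"beer", "diaper"}, transactions))       # 0.5
print(confidence({"diaper"}, {"beer"}, transactions))  # 0.666...
```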

  4. Association rules • Goal: find all the rules X → Y with minimum support and confidence. • Solution: • Find all sets of items (itemsets) with minimum support, i.e. the frequent itemsets (Apriori and FP-growth algorithms). • Generate all the rules with minimum confidence from the frequent itemsets. • Note (the downward closure or apriori property): any subset of a frequent itemset is frequent. Equivalently, any superset of an infrequent itemset is infrequent.

  5. Association rules • Frequent itemsets can be represented as a tree (the children of a node are a subset of its siblings). • Different algorithms traverse the tree differently, e.g. • Apriori algorithm = breadth first. • FP-growth algorithm = depth first. • Breadth-first algorithms typically cannot store the projections in memory and, thus, have to scan the database more times. The opposite is typically true for depth-first algorithms. • Breadth first (resp. depth first) is typically less (resp. more) efficient but more (resp. less) scalable.

  6. Apriori algorithm • Scan the database once to get the frequent 1-itemsets. • Generate candidate (k+1)-itemsets from the frequent k-itemsets. • Test the candidates against the database. • Terminate when no frequent or candidate itemsets can be generated; otherwise, repeat from the candidate generation step.

  7. Apriori algorithm • [Worked example with sup_min = 2: the 1st scan of the database produces the candidate 1-itemsets C1 and the frequent 1-itemsets L1, the 2nd scan produces C2 and L2, and the 3rd scan produces C3 and L3; the apriori property prunes candidates between scans.]

  8. Apriori algorithm • How to generate candidates? • Step 1: self-joining Lk • Step 2: pruning • Example of candidate generation. • L3={abc, abd, acd, ace, bcd} • Self-joining: L3*L3 • abcd from abc and abd. • acde from acd and ace. • Pruning: • acde is removed because ade is not in L3. • C4={abcd}

  9. Apriori algorithm • Suppose the items in Lk-1 are listed in an order. • Self-joining Lk-1:
  insert into Ck
  select p.item1, p.item2, …, p.itemk-1, q.itemk-1
  from Lk-1 p, Lk-1 q
  where p.item1 = q.item1, …, p.itemk-2 = q.itemk-2, p.itemk-1 < q.itemk-1
• Pruning (the apriori property):
  forall itemsets c in Ck do
    forall (k-1)-subsets s of c do
      if (s is not in Lk-1) then delete c from Ck
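
The pseudocode above can be turned into a short Python sketch; itemsets are kept as sorted tuples, and the example reproduces C4 = {abcd} from slide 8. The name `generate_candidates` is chosen here, not taken from the lecture.

```python
from itertools import combinations

def generate_candidates(L_prev, k):
    """Generate the candidate k-itemsets Ck from the frequent (k-1)-itemsets Lk-1.
    Itemsets are kept as sorted tuples."""
    L_prev = set(L_prev)
    # self-join: combine itemsets that agree on their first k-2 items
    joined = {p + (q[-1],)
              for p in L_prev for q in L_prev
              if p[:-1] == q[:-1] and p[-1] < q[-1]}
    # prune: every (k-1)-subset of a candidate must itself be frequent (apriori property)
    return {c for c in joined
            if all(s in L_prev for s in combinations(c, k - 1))}

L3 = {('a', 'b', 'c'), ('a', 'b', 'd'), ('a', 'c', 'd'), ('a', 'c', 'e'), ('b', 'c', 'd')}
print(generate_candidates(L3, 4))   # {('a', 'b', 'c', 'd')}: acde is pruned because ade is not in L3
```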

  10. Apriori algorithm • Ck : candidate itemsets of size k • Lk : frequent itemsets of size k
  L1 = {frequent items}
  for (k = 1; Lk != ∅; k++) do begin
    Ck+1 = candidates generated from Lk
    for each transaction t in database d
      increment the count of all candidates in Ck+1 that are contained in t
    Lk+1 = candidates in Ck+1 with minimum support
  end
  return ∪k Lk
• Exercise: prove that all the frequent (k+1)-itemsets are in Ck+1.
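
A minimal runnable sketch of the loop above, with the candidate generation and pruning inlined so the snippet is self-contained; transactions are sets, itemsets are sorted tuples, and the four-transaction database is a made-up illustration, not the one from the slides.

```python
from itertools import combinations

def apriori(transactions, min_count):
    """Return all frequent itemsets (as sorted tuples) with absolute support >= min_count."""
    # L1: frequent 1-itemsets
    counts = {}
    for t in transactions:
        for item in t:
            counts[(item,)] = counts.get((item,), 0) + 1
    L = {s for s, c in counts.items() if c >= min_count}
    frequent = set(L)
    k = 1
    while L:
        # candidate generation: self-join Lk with itself, then prune by the apriori property
        C = set()
        for p in L:
            for q in L:
                if p[:-1] == q[:-1] and p[-1] < q[-1]:
                    cand = p + (q[-1],)
                    if all(s in L for s in combinations(cand, k)):
                        C.add(cand)
        # scan: count each candidate against the database and keep those with minimum support
        L = {c for c in C if sum(set(c) <= t for t in transactions) >= min_count}
        frequent |= L
        k += 1
    return frequent

# A made-up four-transaction database.
db = [{'A', 'C', 'D'}, {'B', 'C', 'E'}, {'A', 'B', 'C', 'E'}, {'B', 'E'}]
print(sorted(apriori(db, min_count=2)))
```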

  11. Association rules • R. Agrawal, R. Srikant: "Fast Algorithms for Mining Association Rules", IBM Research Report RJ9839. • Generate all the rules of the form a → l − a with minimum confidence from a large (= frequent) itemset l. • If a subset a of l does not generate a rule, then neither does any subset of a (≈ apriori property).

  12. Association rules • R. Agrawal, R. Srikant: "Fast Algorithms for Mining Association Rules", IBM Research Report RJ9839. • Generate all the rules of the form l − h → h with minimum confidence from a large (= frequent) itemset l. • For a subset h of a large itemset l to generate a rule, so must all the subsets of h (≈ apriori property). • Generate the rules with one-item consequents first; larger consequents are then built as in Apriori candidate generation.
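
A brute-force sketch of rule generation from a single frequent itemset, without the apriori-style pruning of consequents described on these two slides; the support values are chosen to be consistent with the A → D / D → A example on slide 3.

```python
from itertools import combinations

def rules_from_itemset(l, support, min_conf):
    """Generate all rules (l - h) -> h with confidence >= min_conf from a frequent itemset l.
    `support` maps frozensets to their support and must contain l and all of its
    non-empty proper subsets (which are frequent by the apriori property)."""
    l = frozenset(l)
    rules = []
    for r in range(1, len(l)):                      # size of the consequent h
        for h in map(frozenset, combinations(l, r)):
            antecedent = l - h
            conf = support[l] / support[antecedent]
            if conf >= min_conf:
                rules.append((antecedent, h, conf))
    return rules

# Support values consistent with the A -> D / D -> A example on slide 3.
support = {frozenset('A'): 0.6, frozenset('D'): 0.8, frozenset('AD'): 0.6}
for antecedent, consequent, conf in rules_from_itemset('AD', support, min_conf=0.5):
    print(set(antecedent), "->", set(consequent), conf)   # A -> D (1.0) and D -> A (0.75)
```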

  13. FP-growth algorithm • Apriori = candidate generate-and-test. • Problems: • Too many candidates to generate, e.g. if there are 10^4 frequent 1-itemsets, then more than 10^7 candidate 2-itemsets. • Each candidate implies expensive operations, e.g. pattern matching and subset checking. • Can candidate generation be avoided? Yes, with the frequent pattern growth (FP-growth) algorithm.

  14. FP-growth algorithm • min_support = 3 • Scan the database once and find the frequent items. Record them as the frequent 1-itemsets. • Sort the frequent items in descending frequency order: f-list = f-c-a-b-m-p. • Scan the database again and construct the FP-tree.
  TID  Items bought                Items bought (f-list ordered)
  100  {f, a, c, d, g, i, m, p}    {f, c, a, m, p}
  200  {a, b, c, f, l, m, o}       {f, c, a, b, m}
  300  {b, f, h, j, o, w}          {f, b}
  400  {b, c, k, s, p}             {c, b, p}
  500  {a, f, c, e, l, p, m, n}    {f, c, a, m, p}
• Header table (item : frequency, each entry heading a linked list of the tree nodes labelled with that item): f:4, c:4, a:3, b:3, m:3, p:3.
• [Figure: the resulting FP-tree, rooted at {}: f:4 → c:3 → a:3 → m:2 → p:2, with side branches b:1 → m:1 under a:3, b:1 under f:4, and c:1 → b:1 → p:1 directly under the root.]
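
A compact sketch of the two scans and the FP-tree construction for the database on this slide, using a simple node class with parent pointers and a header table of node links. Note that f and c are tied at count 4; this sketch breaks the tie alphabetically, so its f-list comes out as c-f-a-b-m-p rather than the slide's f-c-a-b-m-p (both orders are valid).

```python
from collections import defaultdict

class FPNode:
    def __init__(self, item, parent):
        self.item, self.parent, self.count = item, parent, 1
        self.children = {}

def build_fp_tree(transactions, min_support):
    # 1st scan: count the items and keep the frequent ones
    counts = defaultdict(int)
    for t in transactions:
        for item in t:
            counts[item] += 1
    freq = {i: c for i, c in counts.items() if c >= min_support}
    # f-list: frequent items in descending frequency order (ties broken alphabetically here)
    order = sorted(freq, key=lambda i: (-freq[i], i))
    root = FPNode(None, None)
    header = defaultdict(list)            # item -> list of tree nodes labelled with that item
    # 2nd scan: insert every transaction, reordered according to the f-list
    for t in transactions:
        node = root
        for item in sorted((i for i in t if i in freq), key=order.index):
            if item in node.children:
                node.children[item].count += 1
            else:
                node.children[item] = FPNode(item, node)
                header[item].append(node.children[item])
            node = node.children[item]
    return root, header, order

# The five transactions from the slide; each string stands for a set of one-letter items.
db = [set('facdgimp'), set('abcflmo'), set('bfhjow'), set('bcksp'), set('afcelpmn')]
root, header, order = build_fp_tree(db, min_support=3)
print(order)   # ['c', 'f', 'a', 'b', 'm', 'p'] -- the f/c tie is broken alphabetically here
print([(i, sum(n.count for n in header[i])) for i in order])   # counts match the slide's header table
```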

  15. FP-growth algorithm • [Figure: the FP-tree and header table from the previous slide.] • For each frequent item in the header table: • Traverse the tree by following the item’s node links. • Record all of the prefix paths leading to the item. These paths form the item’s conditional pattern base. • Frequent 1-itemsets found: f:4, c:4, a:3, b:3, m:3, p:3.
• Conditional pattern bases:
  item  conditional pattern base
  c     f:3
  a     fc:3
  b     fca:1, f:1, c:1
  m     fca:2, fcab:1
  p     fcam:2, cb:1
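
Continuing the construction sketch after slide 14 (it assumes `FPNode`, `build_fp_tree`, and `db` from that sketch are in scope), conditional pattern bases can be read off by following each item's node links and walking parent pointers up to the root.

```python
def conditional_pattern_base(item, header):
    """For every tree node labelled `item`, collect the prefix path from the root
    (excluding `item` itself) together with that node's count."""
    base = []
    for node in header[item]:
        path, parent = [], node.parent
        while parent is not None and parent.item is not None:
            path.append(parent.item)
            parent = parent.parent
        if path:
            base.append((list(reversed(path)), node.count))
    return base

root, header, order = build_fp_tree(db, min_support=3)   # from the sketch after slide 14
print(conditional_pattern_base('m', header))   # [(['c','f','a'], 2), (['c','f','a','b'], 1)]
print(conditional_pattern_base('p', header))   # [(['c','f','a','m'], 2), (['c','b'], 1)]
# These are the slide's fca:2, fcab:1 and fcam:2, cb:1, with f and c swapped by the tie-break above.
```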

  16. FP-growth algorithm • For each conditional pattern base, start the process again (recursion). • m-conditional pattern base: fca:2, fcab:1 → m-conditional FP-tree: {} → f:3 → c:3 → a:3 → frequent itemsets found: fm:3, cm:3, am:3. • am-conditional pattern base: fc:3 → am-conditional FP-tree: {} → f:3 → c:3 → frequent itemsets found: fam:3, cam:3. • cam-conditional pattern base: f:3 → cam-conditional FP-tree: {} → f:3 → frequent itemset found: fcam:3. • Backtracking!
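
A sketch of the recursion that works directly on conditional pattern bases (lists of prefix paths with counts) instead of materialising conditional FP-trees; it finds the same frequent itemsets as the tree-based version on the slide. The database below is the slide's example with infrequent items removed and the remaining items in f-list order.

```python
from collections import defaultdict

def mine(db, min_support, suffix=()):
    """Recursively mine frequent itemsets from a (conditional) database.
    `db` is a list of (itemset, count) pairs; every itemset is a tuple whose
    items follow one fixed global order. `suffix` is the itemset conditioned
    on so far. Instead of building conditional FP-trees, this sketch recurses
    directly on conditional pattern bases."""
    counts = defaultdict(int)
    for itemset, count in db:
        for item in itemset:
            counts[item] += count
    frequent = {}
    for item, count in counts.items():
        if count < min_support:
            continue
        new_suffix = (item,) + suffix
        frequent[frozenset(new_suffix)] = count
        # conditional pattern base of `item`: the prefix before `item` in every entry containing it
        conditional = [(itemset[:itemset.index(item)], c) for itemset, c in db if item in itemset]
        frequent.update(mine(conditional, min_support, new_suffix))
    return frequent

# The slide's database with infrequent items removed and the rest in f-list order (f-c-a-b-m-p).
db = [(('f','c','a','m','p'), 1), (('f','c','a','b','m'), 1), (('f','b'), 1),
      (('c','b','p'), 1), (('f','c','a','m','p'), 1)]
result = mine(db, min_support=3)
print(result[frozenset(('f','c','a','m'))])   # 3 -- the fcam:3 itemset found on this slide
```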

  17. FP-growth algorithm

  18. FP-growth algorithm • Exercise: run the FP-growth algorithm on the following database.
  TID  Items bought
  100  {1, 2, 5}
  200  {2, 4}
  300  {2, 3}
  400  {1, 2, 4}
  500  {1, 3}
  600  {2, 3}
  700  {1, 3}
  800  {1, 2, 3, 5}
  900  {1, 2, 3}
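
One way to check the exercise is with an off-the-shelf implementation; the sketch below assumes the third-party mlxtend library is installed, and since the exercise does not state a support threshold, min_support = 0.2 (roughly an absolute count of 2 out of 9 transactions) is assumed here.

```python
import pandas as pd
from mlxtend.preprocessing import TransactionEncoder
from mlxtend.frequent_patterns import fpgrowth

# The exercise database, one list of items per transaction.
transactions = [[1, 2, 5], [2, 4], [2, 3], [1, 2, 4], [1, 3],
                [2, 3], [1, 3], [1, 2, 3, 5], [1, 2, 3]]

te = TransactionEncoder()
onehot = pd.DataFrame(te.fit(transactions).transform(transactions), columns=te.columns_)
# min_support = 0.2 is an assumed threshold; the exercise does not state one.
print(fpgrowth(onehot, min_support=0.2, use_colnames=True))
```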

  19. Association rules • Frequent itemsets can be represented as a tree (the children of a node are a subset of its siblings). • Different algorithms traverse the tree differently, e.g. • Apriori algorithm = breadth first. • FP-growth algorithm = depth first. • Breadth-first algorithms typically cannot store the projections in memory and, thus, have to scan the database more times. • The opposite is typically true for depth-first algorithms. • Breadth first (resp. depth first) is typically less (resp. more) efficient but more (resp. less) scalable.
