
Fast and Scalable Nearest Neighbor Based Classification






Presentation Transcript


  1. Fast and Scalable Nearest Neighbor Based Classification. Taufik Abidin and William Perrizo, Department of Computer Science, North Dakota State University

  2. Classification. Given a (large) TRAINING SET, R(A1,…,An, C), with C = CLASSES and {A1,…,An} = FEATURES, classification is labeling unclassified objects based on the training set. kNN classification goes as follows: for an unclassified object, search the training set for its k-nearest neighbors, then vote the class (a minimal sketch follows).
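
A minimal sketch of that plain kNN flow (one full scan of horizontally structured data), not the P-tree method developed in the later slides; the toy training set and the majority-vote helper are illustrative assumptions.

```python
from collections import Counter
import math

def knn_classify(training_set, labels, unclassified, k=3):
    """Label an unclassified object by majority vote of its k nearest neighbors."""
    # One full scan: Euclidean distance from every training tuple to the object.
    order = sorted(range(len(training_set)),
                   key=lambda i: math.dist(training_set[i], unclassified))
    votes = Counter(labels[i] for i in order[:k])
    return votes.most_common(1)[0][0]

# Toy usage: two classes in a 2-D feature space.
R = [(1.0, 1.0), (1.2, 0.9), (5.0, 5.1), (5.2, 4.8)]
C = ["a", "a", "b", "b"]
print(knn_classify(R, C, (5.0, 5.0), k=3))   # -> "b"
```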

  3. Problems with kNN • Finding the k-Nearest Neighbor Set from horizontally structured data (record-oriented data) can be expensive for a large training set (containing millions or trillions of tuples) • The cost is linear in the size of the training set (1 scan) • Closed kNN is much more accurate but requires 2 scans (see the sketch below). Vertically structuring the data can help.
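
For comparison, a minimal sketch of the closed kNN neighborhood as we read it on this slide: every training point no farther than the k-th nearest distance is kept, so ties at that distance are included and the set may contain more than k points. The helper name and toy data are illustrative assumptions.

```python
import math

def closed_knn_set(training_set, unclassified, k):
    """Closed kNN neighborhood: all points within the k-th smallest distance,
    ties included (so the result can be larger than k)."""
    dists = sorted(math.dist(x, unclassified) for x in training_set)
    kth = dists[k - 1]
    return [x for x in training_set if math.dist(x, unclassified) <= kth]

# With k=2 the tie at distance 1.0 brings in a third point.
R = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (5.0, 5.0)]
print(closed_knn_set(R, (0.0, 0.0), k=2))   # three points, not two
```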

  4. A data table, R(A1..An), containing horizontal structures (records), is processed vertically (vertical scans) under Predicate-tree (P-tree) structuring: vertically partition the table, compress each vertical bit slice into a basic P-tree, and process the P-trees using multi-operand logical ANDs. For example, with 3-bit attributes:

  R(A1 A2 A3 A4):
    010 111 110 001
    011 111 110 000
    010 110 101 001
    010 111 101 111
    101 010 001 100
    010 010 001 101
    111 000 001 100
    111 000 001 100

  Scanned vertically, this gives 12 bit slices R11, R12, R13, …, R41, R42, R43; e.g., R11 = 0 0 0 0 1 0 1 1. The basic (1-D) P-tree P11 for R11 is built by recording the truth of the predicate "pure 1" recursively on halves, until purity is reached: 1. the whole file is not pure1 → 0; 2. the 1st half is not pure1 → 0, but it is pure (pure0), so that branch ends; 3. the 2nd half is not pure1 → 0; 4. the 1st half of the 2nd half is not pure1 → 0; 5. the 2nd half of the 2nd half is pure1 → 1; 6. the 1st half of the 1st half of the 2nd half is pure1 → 1; 7. the 2nd half of the 1st half of the 2nd half is not pure1 → 0. E.g., to count occurrences of the tuple 111 000 001 100, use the "pure 111000001100" predicate, i.e., the root count of P11^P12^P13^P'21^P'22^P'23^P'31^P'32^P33^P41^P'42^P'43, which here is 2 (a runnable sketch follows).
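
A simplified sketch of the basic P-tree construction and the multi-operand AND count just described; it keeps the tree uncompressed and represents each bit slice as a plain Python list, so it illustrates the "pure 1 on recursive halves" predicate and the root-count logic rather than the authors' compressed implementation.

```python
def build_ptree(bits):
    """Return (is_pure1, children): record the truth of 'pure 1' on this half,
    recursing on halves until purity (all 1s or all 0s) is reached."""
    if all(b == 1 for b in bits) or all(b == 0 for b in bits):
        return (bits[0] == 1, [])                       # pure: this branch ends
    half = len(bits) // 2
    return (False, [build_ptree(bits[:half]), build_ptree(bits[half:])])

def root_count(bit_columns, pattern):
    """Count tuples matching a value pattern by ANDing the basic bit slices
    (complemented where the pattern bit is 0) -- the multi-operand AND."""
    mask = [1] * len(bit_columns[0])
    for col, want in zip(bit_columns, pattern):
        mask = [m & (b if want else 1 - b) for m, b in zip(mask, col)]
    return sum(mask)

# The example table R(A1 A2 A3 A4) from this slide, one 12-bit string per tuple.
R = ["010111110001", "011111110000", "010110101001", "010111101111",
     "101010001100", "010010001101", "111000001100", "111000001100"]
columns = [[int(row[c]) for row in R] for c in range(12)]     # R11 .. R43
print(build_ptree(columns[0]))                                # basic P-tree of R11
print(root_count(columns, [int(b) for b in "111000001100"]))  # -> 2
```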

  5. Total Variation. The Total Variation of a set X about a point a, TV(a), is the sum of the squared separations of objects in X from a, defined as follows: TV(a) = Σ_{x∈X} (x−a)∘(x−a). We will use the concept of functional contours (in particular, the TV contours) in this presentation to identify a well-pruned, small superset of the Nearest Neighbor Set of an unclassified sample (which can then be efficiently scanned). First we will discuss functional contours in general, then consider the specific TV contours (a direct evaluation of the definition is sketched below).
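
A direct, scan-based evaluation of this definition, included only to make the formula concrete before the bit-slice version on the next slides; the toy data set is an illustrative assumption.

```python
def tv(X, a):
    """TV(a) = sum over x in X of (x - a) o (x - a), with o the dot product."""
    return sum(sum((xd - ad) ** 2 for xd, ad in zip(x, a)) for x in X)

X = [(1.0, 2.0), (3.0, 1.0), (2.0, 2.0)]
print(tv(X, (2.0, 2.0)))   # 1 + 2 + 0 = 3.0
```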

  6. Functional contours. Given f: R(A1..An) → Y and S ⊆ Y, define contour(f,S) ≡ f⁻¹(S), where graph(f) = { (a1,...,an, f(a1,...,an)) | (a1,...,an) ∈ R }. There is a DUALITY between functions, f: R(A1..An) → Y, and derived attributes, Af, of R, given by x.Af ≡ f(x) where Dom(Af) = Y. Writing R* for R extended with the derived attribute Af, the derived-attribute point of view gives Contour(f,S) = SELECT A1..An FROM R* WHERE R*.Af ∈ S. If S = {a}, then f⁻¹({a}) is Isobar(f, a) (a small sketch of this duality follows).
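
A small sketch of that duality: extend R with the derived attribute Af = f(x) and select contour(f, S) = f⁻¹(S) with a WHERE-style filter. The relation R, the functional f, and the set S are illustrative assumptions.

```python
def contour(R, f, S):
    """contour(f, S) = f^{-1}(S): SELECT A1..An FROM R* WHERE R*.Af in S."""
    R_star = [(x, f(x)) for x in R]       # R* = R extended with derived attribute Af
    return [x for x, af in R_star if af in S]

R = [(0, 1), (1, 1), (2, 3), (3, 5)]
f = lambda x: x[0] + x[1]                 # some functional on R(A1, A2)
print(contour(R, f, {1, 5}))              # the union of the isobars at values 1 and 5
```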

  7. 2xRd=1..nad(k2kxdk) + |R||a|2 = xRd=1..n(k2kxdk)2 - 2xRd=1..nad(k2kxdk) + |R||a|2 = xd(i2ixdi)(j2jxdj) - |R||a|2 = xdi,j 2i+jxdixdj- 2 x,d,k2k adxdk + |R|dadad |R||a|2 = x,d,i,j 2i+j xdixdj- = x,d,i,j 2i+j xdixdj- 2|R| dadd + 2 dadx,k2kxdk + TV(a) = i,j,d 2i+j |Pdi^dj| - k2k+1 dad |Pdk| + |R||a|2 dadad ) = x,d,i,j 2i+j xdixdj+ |R|( -2dadd + TV(a) =xR(x-a)o(x-a) If we use d for a index variable over the dimensions, = xRd=1..n(xd2 - 2adxd + ad2) i,j,k bit slices indexes The first term does not depend upon a. Thus, the derived attribute coming from f(a)=TV-TV() (which does not have that 1st term at all) has identical contours as TV (just a lowered graph). We also find it useful to post-compose a log function to reduce the number of bit slices. The resulting functional is called the High-Dimension-ready Total Variation or HDTV(a).

  8. From slide 7, f(a) = TV(a) − TV(μ) = |R| ( −2 Σ_d (a_d μ_d − μ_d μ_d) + Σ_d (a_d a_d − μ_d μ_d) ) = |R| ( Σ_d a_d² − 2 Σ_d μ_d a_d + Σ_d μ_d² ) = |R| |a−μ|², so f(μ) = 0 and g(a) ≡ HDTV(a) = ln( f(a) ) = ln|R| + ln|a−μ|². Isobars are hyper-circles centered at μ, and graph(g) is a log-shaped hyper-funnel. For an ε-contour ring (radius ε about a), going inward and outward along a−μ by ε we arrive at the inner point b = μ + (1 − ε/|a−μ|)(a−μ) and the outer point c = μ + (1 + ε/|a−μ|)(a−μ). g(b) and g(c) are the lower and upper endpoints of a vertical interval, S, defining the ε-contour. An easy P-tree calculation on that interval provides a P-tree mask for the ε-contour (no scan required; the endpoint calculation is sketched below).
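
A sketch of the endpoint and interval calculation just described, assuming μ is the training-set mean, ε the contour radius, and |R| the training-set size; the concrete numbers are illustrative.

```python
import math

def hdtv(x, mu, n_rows):
    """HDTV(x) = ln|R| + ln|x - mu|^2."""
    return math.log(n_rows) + math.log(sum((xd - md) ** 2 for xd, md in zip(x, mu)))

def contour_endpoints(a, mu, eps):
    """Inner point b and outer point c of the epsilon-ring about a along a - mu."""
    diff = [ad - md for ad, md in zip(a, mu)]
    norm = math.sqrt(sum(d * d for d in diff))
    b = [md + (1 - eps / norm) * d for md, d in zip(mu, diff)]
    c = [md + (1 + eps / norm) * d for md, d in zip(mu, diff)]
    return b, c

mu, a = (3.0, 3.0), (5.0, 4.0)
b, c = contour_endpoints(a, mu, eps=0.5)
lo, hi = hdtv(b, mu, 1000), hdtv(c, mu, 1000)   # the interval [HDTV(b), HDTV(c)]
print(b, c, lo, hi)
```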

  9. a -contour (radius  about a) If more pruning is needed (i.e., HDTV(a) contour is still to big to scan) use a dimension projection contour (Dim-i projection P-trees are already computed = basic P-trees of R.Ai. Form that contour_mask_P-tree; AND it with the HDTV contour P-tree. The result is a mask for the intersection). If more pruning is needed (i.e., HDTV(a) contour is still to big to scan) As pre-processing, calculate basic P-trees for the HDTV derived attribute. To classify a, 1. Calculate b and c (which depend upon a and ) 2. Form the mask P-tree for training points with HDTV-values in [HDTV(b),HDTV(c)] (Note: the paper was submitted we were still doing this step by sorting TV(a) values. Now we use the contour approach which speeds up this step considerably. The performance evaluation graphs in this paper are still based on the old method. And w/o Gaussian vote weighting). 3. User that mask P-tree to prune down the candidate NNS. 4. If the root count of the candidate set is small enough, proceed to scan and assign class votes using, e.g., a Gaussian vote function, else prune further using a dimension projection). HDTV(x) HDTV(c) HDTV(b) x1 contour of dimension projection f(a)=a1 b c x2

  10. Graphs of TV, TV−TV(μ), and HDTV (figure: the three functionals plotted over a small 2-D example, with the minimum at the mean, TV(μ) = TV(x33), and TV(x15) marked on each graph).

  11. Experiments: Dataset • KDDCUP-99 Dataset (Network Intrusion Dataset) • 4.8 million records, 32 numerical attributes • 6 classes, each containing >10,000 records • Testing set: 120 records, 20 per class • 4 synthetic datasets (randomly generated): • 10,000 records (SS-I) • 100,000 records (SS-II) • 1,000,000 records (SS-III) • 2,000,000 records (SS-IV)

  12. Speed and Scalability (k=5). Note: SMART-TV here was done by sorting the derived attribute; we now use the much faster P-tree interval mask. Machine used: Intel Pentium 4 (2.6 GHz), 3.8 GB RAM, running Red Hat Linux.

  13. Dataset (Cont.) • OPTICS dataset • ~8,000 points, 8 classes (CL-1, CL-2,…,CL-8) • 2 numerical attributes • Training set: 7,920 points • Testing set: 80 points, 10 per class

  14. Dataset (Cont.) • IRIS dataset • 150 samples • 3 classes (iris-setosa, iris-versicolor, and iris-virginica) • 4 numerical attributes • Training set: 120 samples • Testing set: 30 samples, 10 per class

  15. Classification Accuracy Comparison: Overall Accuracy and Overall F-score. (Note: SMART-TV class voting was done with equal votes for each training neighbor; we now use Gaussian vote weighting and get better accuracy than the other two.)

  16. Summary • A nearest-neighbor-based classification algorithm that starts its classification steps by approximating the Nearest Neighbor Set. • A total variation functional is used to prune down the NNS candidate set. • It finishes classification in the traditional way. • The algorithm is fast and scales well to very large datasets. Its classification accuracy is very comparable to that of Closed kNN (which is better than that of kNN).
