VizTree

Presentation Transcript


  1. Here are two bit strings. Which one was generated by a human, and which one by a computer? 10001000101001000101010100001010100010101110111101011010010111010010101001110101010100101001010101110101010010101010110101010010110010111011110100011100001010000100111010100011100001010101100101110101 01011001011110011010010000100010100110110101110000101010111011111000110110110111111010011001001000110100011110011011010001011110001011010011011001101000000100110001001110000011101001100101100001010010

  2. VizTree Let's put the two sequences above into a depth-limited suffix tree, such that the frequencies of all triplets are encoded in the thickness of the branches… "humans usually try to fake randomness by alternating patterns"
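A minimal sketch of the mechanism behind those branch thicknesses (the function name and toy input below are illustrative, not VizTree's actual code): the counts of all length-3 substrings are what get drawn.

```python
from collections import Counter

def triplet_frequencies(bits, depth=3):
    """Count every substring of length `depth`; in a depth-limited suffix
    tree these counts become the thickness of the corresponding branch."""
    return Counter(bits[i:i + depth] for i in range(len(bits) - depth + 1))

# Toy input: a human "faking" randomness by alternating patterns
# over-produces triplets like '010' and '101'.
print(triplet_frequencies("0101101001011010").most_common(3))
```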

  3. Motivating example You go to the doctor because of chest pains. Your ECG looks strange… Your doctor wants to search a database to find similar ECGs, in the hope that they will offer clues about your condition... Two questions: • How do we define similar? • How do we search quickly?

  4. Indexing Time Series We have seen techniques for assessing the similarity of two time series. However, we have not addressed the problem of finding the best match to a query Q in a large database (other than the lower bounding trick). The obvious solution, to retrieve and examine every item (sequential scanning), simply does not scale to large datasets. We need some way to index the data...

  5. We can project a time series of length n into n-dimensional space. The first value in C becomes the X-axis coordinate, the second value the Y-axis coordinate, and so on. One advantage of doing this is that we have abstracted away the details of "time series"; now all query processing can be imagined as finding points in space...

  6. …these nested MBRs are organized as a tree (called a spatial access tree or a multidimensional tree). Examples include the R-tree, Hybrid-Tree, etc. [Figure: MBRs R10, R11, R12 at the top level; R1 through R9 nested below; data nodes containing points at the leaves]

  7. We can use MINDIST(point, MBR) to do fast search... For the query point shown: MINDIST(point, R10) = 0, MINDIST(point, R11) = 10, MINDIST(point, R12) = 17. [Figure: the same MBR tree, with the query point falling inside R10]
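A minimal sketch of the standard MINDIST(point, MBR) computation (the function name and example rectangles are mine, for illustration):

```python
import numpy as np

def mindist(point, mbr_lo, mbr_hi):
    """Minimum Euclidean distance from a point to a hyper-rectangle (MBR).
    Per dimension, the closest rectangle coordinate is the point itself
    if it falls inside the extent, otherwise the nearest edge."""
    p = np.asarray(point, dtype=float)
    nearest = np.clip(p, mbr_lo, mbr_hi)
    return float(np.linalg.norm(p - nearest))

print(mindist([5, 5], [0, 0], [10, 10]))    # 0.0  (point inside the MBR)
print(mindist([15, 13], [0, 0], [10, 10]))  # 5.83 (distance to the corner)
```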

  8. If we project a query into n-dimensional space, how many additional (nonempty) MBRs must we examine before we are guaranteed to find the best match? For the one-dimensional case, the answer is clearly 2... For the two-dimensional case, the answer is 8... For the three-dimensional case, the answer is 26... More generally, in n-dimensional space we must examine 3^n - 1 MBRs. This is known as the curse of dimensionality. n = 21 gives 10,460,353,202 MBRs.
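The count is just the number of cells adjacent to a given cell in an n-dimensional 3x3x...x3 grid, which a two-line check confirms:

```python
def neighboring_mbrs(n):
    """Cells adjacent to a given cell in an n-dimensional grid: 3^n - 1."""
    return 3 ** n - 1

for n in (1, 2, 3, 21):
    print(n, neighboring_mbrs(n))   # 2, 8, 26, 10460353202
```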

  9. [Teaser slide: a column of raw data values (0.4995, 0.5264, 0.5523, …) next to two "Magic LB data" values, 1.5698 and 1.0485, previewing the dimensionality reduction to come]

  10. Motivating example revisited… You go to the doctor because of chest pains. Your ECG looks strange… Your doctor wants to search a database to find similar ECGs, in the hope that they will offer clues about your condition... Two questions: • How do we define similar? • How do we search quickly?

  11. The Generic Data Mining Algorithm 1. Create an approximation of the data which will fit in main memory, yet retains the essential features of interest. 2. Approximately solve the problem at hand in main memory. 3. Make (hopefully very few) accesses to the original data on disk to confirm the solution obtained in Step 2, or to modify the solution so it agrees with the solution we would have obtained on the original data. But which approximation should we use?

  12. Time Series Representations [Figure: a taxonomy chart, rendered here as a list]
• Data Adaptive: Sorted Coefficients; Piecewise Polynomial (Piecewise Linear Approximation via Interpolation or Regression; Adaptive Piecewise Constant Approximation); Singular Value Approximation; Symbolic (Natural Language; Strings, including Symbolic Aggregate Approximation and Non Lower Bounding variants); Trees
• Non Data Adaptive: Random Mappings; Spectral (Discrete Fourier Transform; Discrete Cosine Transform; Chebyshev Polynomials); Wavelets (Orthonormal: Haar, Daubechies dbn with n > 1, Coiflets, Symlets; Bi-Orthonormal); Piecewise Aggregate Approximation
• Data Dictated: Clipped Data
• Model Based: Hidden Markov Models; Statistical Models
(The chart also labels Value Based, Slope Based, and Grid variants.)

  13. The Generic Data Mining Algorithm (revisited) 1. Create an approximation of the data which will fit in main memory, yet retains the essential features of interest. 2. Approximately solve the problem at hand in main memory. 3. Make (hopefully very few) accesses to the original data on disk to confirm the solution obtained in Step 2, or to modify the solution so it agrees with the solution we would have obtained on the original data. This only works if the approximation allows lower bounding.

  14. What is Lower Bounding? Recall that we have seen lower bounding for distance measures (DTW and uniform scaling). Lower bounding for representations is a similar idea. Let Q and S be raw data, and Q' and S' their approximations (or "representations"). Lower bounding means that for all Q and S, we have: DLB(Q', S') ≤ D(Q, S)

  15. In a seminal* paper in SIGMOD 93, Christos Faloutsos showed that lower bounding of a representation is a necessary and sufficient condition to allow time series indexing, with the guarantee of no false dismissals. Christos' work was originally on indexing time series with the Fourier representation. Since then, there have been hundreds of follow-up papers on other data types, tasks and representations.

  16. An Example of a Dimensionality Reduction Technique I The graphic shows a time series C with n = 128 points. The raw data used to produce the graphic is also reproduced as a column of numbers (just the first 30 or so points are shown): 0.4995 0.5264 0.5523 0.5761 0.5973 0.6153 0.6301 0.6420 0.6515 0.6596 0.6672 0.6751 0.6843 0.6954 0.7086 0.7240 0.7412 0.7595 0.7780 0.7956 0.8115 0.8247 0.8345 0.8407 0.8431 0.8423 0.8387 …

  17. An Example of a Dimensionality Reduction Technique II We can decompose the data into 64 pure sine waves using the Discrete Fourier Transform (just the first few sine waves are shown in the graphic). The Fourier coefficients are reproduced as a column of numbers next to the raw data (just the first 30 or so coefficients are shown): 1.5698 1.0485 0.7160 0.8406 0.3709 0.4670 0.2667 0.1928 0.1635 0.1602 0.0992 0.1282 0.1438 0.1416 0.1400 0.1412 0.1530 0.0795 0.1013 0.1150 0.1801 0.1082 0.0812 0.0347 0.0052 0.0017 0.0002 … Note that at this stage we have not done dimensionality reduction; we have merely changed the representation...

  18. An Example of a Dimensionality Reduction Technique III … however, note that the first few sine waves tend to be the largest (equivalently, the magnitude of the Fourier coefficients tends to decrease as you move down the column). We can therefore truncate most of the small coefficients with little effect. Keeping only the first N = 8 of the n = 128 numbers gives the truncated Fourier coefficients C': 1.5698 1.0485 0.7160 0.8406 0.3709 0.4670 0.2667 0.1928, a compression ratio Cratio = 1/16. We have discarded 15/16 of the data.

  19. [Recap slide: the raw data column (0.4995, 0.5264, 0.5523, …) next to the truncated Fourier coefficients (1.5698, 1.0485, …), revealing the "magic LB data" of slide 9]

  20. An Example of a Dimensionality Reduction Technique IIII Truncated Fourier Coefficients 1: 1.5698 1.0485 0.7160 0.8406 0.3709 0.4670 0.2667 0.1928. Truncated Fourier Coefficients 2: 1.1198 1.4322 1.0100 0.4326 0.5609 0.8770 0.1557 0.4528. The Euclidean distance between the two truncated Fourier coefficient vectors is always less than or equal to the Euclidean distance between the two raw data vectors*. So DFT allows lower bounding! *Parseval's Theorem
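A small numpy sketch of this claim (the random-walk data is made up; the orthonormal FFT scaling is what makes Parseval's theorem preserve Euclidean norms exactly):

```python
import numpy as np

def truncated_dft(x, n_coeffs=8):
    # norm="ortho" makes the DFT unitary, so Euclidean norms are preserved
    return np.fft.fft(x, norm="ortho")[:n_coeffs]

rng = np.random.default_rng(0)
q = np.cumsum(rng.standard_normal(128))   # toy "time series" Q
s = np.cumsum(rng.standard_normal(128))   # toy "time series" S

true_dist = np.linalg.norm(q - s)
lb_dist = np.linalg.norm(truncated_dft(q) - truncated_dft(s))

# Dropping coefficients can only shrink the distance, never grow it
assert lb_dist <= true_dist
print(f"D(Q,S) = {true_dist:.3f}  >=  D_LB(Q',S') = {lb_dist:.3f}")
```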

  21. Mini Review for the Generic Data Mining Algorithm We cannot fit all that raw data in main memory. We can fit the dimensionally reduced data in main memory. So we will solve the problem at hand on the dimensionally reduced data, making a few accesses to the raw data where necessary, and, if we are careful, the lower bounding property will ensure that we get the right answer! [Figure: Raw Data 1 … Raw Data n reside on disk; Truncated Fourier Coefficients 1 … n reside in main memory]

  22. [Figure: eight representations of the same time series, each paired with the papers that introduced it for indexing]
• DFT: Agrawal, Faloutsos & Swami, FODO 1993; Faloutsos, Ranganathan & Manolopoulos, SIGMOD 1994
• DWT: Chan & Fu, ICDE 1999
• SVD: Korn, Jagadish & Faloutsos, SIGMOD 1997
• APCA: Keogh, Chakrabarti, Pazzani & Mehrotra, SIGMOD 2001
• PAA: Keogh, Chakrabarti, Pazzani & Mehrotra, KAIS 2000; Yi & Faloutsos, VLDB 2000
• PLA: Morinaka, Yoshikawa, Amagasa & Uemura, PAKDD 2001
• SAX ("aabbbccb"): Lin, Keogh, Lonardi & Chiu, DMKD 2003

  23. Discrete Fourier Transform I Basic Idea: Represent the time series as a linear combination of sines and cosines, but keep only the first n/2 coefficients. Why n/2 coefficients? Because each sine wave requires 2 numbers, for the phase (w) and amplitude (A, B). [Figure: X and its approximation X', with the first ten basis waves, labeled 0 through 9] Jean Fourier, 1768-1830. Excellent free Fourier primer: Hagit Shatkay, "The Fourier Transform - a Primer", Technical Report CS-95-37, Department of Computer Science, Brown University, 1995. http://www.ncbi.nlm.nih.gov/CBBresearch/Postdocs/Shatkay/

  24. Discrete Fourier Transform II Pros and Cons of DFT as a time series representation. Pros: • Good ability to compress most natural signals. • Fast, off-the-shelf DFT algorithms exist: O(n log(n)). • (Weakly) able to support time-warped queries. Cons: • Difficult to deal with sequences of different lengths. • Cannot support weighted distance measures. Note: The related transform DCT uses only cosine basis functions. It does not seem to offer any particular advantages over DFT.

  25. Discrete Wavelet Transform I Basic Idea: Represent the time series as a linear combination of wavelet basis functions, but keep only the first N coefficients. Although there are many different types of wavelets, researchers in time series mining/indexing generally use Haar wavelets. Haar wavelets seem to be as powerful as the other wavelets for most problems and are very easy to code. [Figure: X, X', and the basis functions Haar 0 through Haar 7] Alfred Haar, 1885-1933. Excellent free wavelets primer: Stollnitz, E., DeRose, T., & Salesin, D. (1995). Wavelets for computer graphics: A primer. IEEE Computer Graphics and Applications.
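Haar really is easy to code. Here is a sketch of the full orthonormal decomposition (pair averages and differences, repeated until one value remains); the 1/sqrt(2) scaling preserves Euclidean norms, so truncation lower-bounds just as with DFT:

```python
def haar_dwt(x):
    """Orthonormal Haar transform of a length-2^k series.
    Returns [overall average, coarsest detail, ..., finest details]."""
    x = [float(v) for v in x]
    assert len(x) & (len(x) - 1) == 0, "length must be a power of two"
    details = []
    while len(x) > 1:
        pairs = list(zip(x[0::2], x[1::2]))
        details = [(a - b) / 2 ** 0.5 for a, b in pairs] + details  # detail
        x = [(a + b) / 2 ** 0.5 for a, b in pairs]                  # smooth
    return x + details

print(haar_dwt([8, 6, 7, 3]))  # [12.0, 2.0, 1.414..., 2.828...]
```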

  26. Discrete Wavelet Transform II We have only considered one type of wavelet; there are many others. Are the other wavelets better for indexing? YES: I. Popivanov, R. Miller. Similarity Search Over Time Series Data Using Wavelets. ICDE 2002. NO: K. Chan and A. Fu. Efficient Time Series Matching by Wavelets. ICDE 1999. Later in this tutorial I will answer this question. Ingrid Daubechies, 1954-

  27. Discrete Wavelet Transform III Pros and Cons of wavelets as a time series representation. Pros: • Good ability to compress stationary signals. • Fast linear time algorithms for DWT exist. • Able to support some interesting non-Euclidean similarity measures. Cons: • Signals must have a length n = 2^some_integer. • Works best if N = 2^some_integer; otherwise wavelets approximate the left side of the signal at the expense of the right side. • Cannot support weighted distance measures.

  28. Singular Value Decomposition I Basic Idea: Represent the time series as a linear combination of eigenwaves, but keep only the first N coefficients. SVD is similar to the Fourier and wavelet approaches in that we represent the data in terms of a linear combination of shapes (in this case eigenwaves). SVD differs in that the eigenwaves are data dependent. SVD has been successfully used in the text processing community (where it is known as Latent Semantic Indexing) for many years. Good free SVD primer: Singular Value Decomposition - A Primer. Sonia Leach. [Figure: X, X', and eigenwaves 0 through 7; portraits of James Joseph Sylvester 1814-1897, Camille Jordan 1838-1921, and Eugenio Beltrami 1835-1899]

  29. Singular Value Decomposition II How do we create the eigenwaves? We have previously seen that we can regard time series as points in high dimensional space. We can rotate the axes such that axis 1 is aligned with the direction of maximum variance, axis 2 is aligned with the direction of maximum variance orthogonal to axis 1, etc. Since the first few eigenwaves contain most of the variance of the signal, the rest can be truncated with little loss. This process can be achieved by factoring an M by n matrix of time series into 3 other matrices, and truncating the new matrices at size N.
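A numpy sketch of this factor-and-truncate step (the toy random-walk database is made up; a real pipeline would typically mean-center the rows first, which this sketch skips):

```python
import numpy as np

rng = np.random.default_rng(1)
M, n, N = 200, 128, 8                     # 200 series of length 128, keep 8
A = np.cumsum(rng.standard_normal((M, n)), axis=1)   # toy database

# Factor A = U @ diag(S) @ Vt; the rows of Vt are the "eigenwaves"
U, S, Vt = np.linalg.svd(A, full_matrices=False)

A_reduced = A @ Vt[:N].T                  # each series as N coefficients
A_approx = A_reduced @ Vt[:N]             # reconstruction from N eigenwaves

err = np.linalg.norm(A - A_approx, axis=1).mean()
print(f"mean reconstruction error with {N} eigenwaves: {err:.2f}")
```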

  30. Singular Value Decomposition III Pros and Cons of SVD as a time series representation. Pros: • Optimal linear dimensionality reduction technique. • The eigenvalues tell us something about the underlying structure of the data. Cons: • Computationally very expensive: Time O(Mn^2), Space O(Mn). • An insertion into the database requires recomputing the SVD. • Cannot support weighted distance measures or non-Euclidean measures. Note: There has been some promising research into mitigating SVD's time and space complexity.

  31. Chebyshev Polynomials Basic Idea: Represent the time series as a linear combination of Chebyshev polynomials Ti(x): 1, x, 2x^2-1, 4x^3-3x, 8x^4-8x^2+1, 16x^5-20x^3+5x, 32x^6-48x^4+18x^2-1, 64x^7-112x^5+56x^3-7x, 128x^8-256x^6+160x^4-32x^2+1. Pafnuty Chebyshev, 1821-1894. Pros and Cons of Chebyshev polynomials as a time series representation. Pros: • Time series can be of arbitrary length. • Only O(n) time complexity. • Is able to support multi-dimensional time series*. Cons: • The time axis must be rescaled to lie between -1 and 1 before fitting.
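A minimal sketch using numpy's built-in Chebyshev fitting (the toy series is made up); note the time axis being mapped onto [-1, 1], per the con above:

```python
import numpy as np

x = np.cumsum(np.random.default_rng(2).standard_normal(128))  # toy series
t = np.linspace(-1.0, 1.0, len(x))        # rescale the time axis to [-1, 1]

# Fit a degree-7 Chebyshev series: 8 coefficients as the representation
cheb = np.polynomial.chebyshev.Chebyshev.fit(t, x, deg=7)
coeffs = cheb.coef                        # the reduced representation
x_approx = cheb(t)                        # reconstruction for inspection
print(coeffs.round(3))
```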

  32. Chebyshev Polynomials In 2006, Dr. Ng published a "note of caution" on his webpage, noting that the results in the paper are not reliable: "…Thus, it is clear that the C++ version contained a bug. We apologize for any inconvenience caused…" Both Dr. Keogh and Dr. Michail Vlachos independently found that Chebyshev polynomials are slightly worse than other methods on more than 80 different datasets.

  33. Piecewise Linear Approximation I Basic Idea: Represent the time series as a sequence of straight lines. The lines could be connected, in which case we are allowed N/2 lines; if the lines are disconnected, we are allowed only N/3 lines. • Connected: each line segment has a length and a left_height (the right_height can be inferred by looking at the next segment). • Disconnected: each line segment has a length, a left_height, and a right_height. Personal experience on dozens of datasets suggests disconnected is better. Also, only disconnected allows a lower bounding Euclidean approximation. [Portrait: Karl Friedrich Gauss, 1777-1855]

  34. Piecewise Linear Approximation II How do we obtain the Piecewise Linear Approximation? The optimal solution is O(n^2 N), which is too slow for data mining. A vast body of work on faster heuristic solutions to the problem can be classified into the following classes: • Top-Down • Bottom-Up • Sliding Window • Other (genetic algorithms, randomized algorithms, B-spline wavelets, MDL, etc.) Extensive empirical evaluation* of all approaches suggests that Bottom-Up is the best approach overall; a sketch of it follows.
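A sketch of the bottom-up heuristic (naive in that it recomputes merge costs every pass, where a real implementation would cache them; the function names are mine):

```python
import numpy as np

def fit_error(x, lo, hi):
    """Squared error of a least-squares line through x[lo:hi]."""
    t = np.arange(lo, hi)
    coef = np.polyfit(t, x[lo:hi], 1)
    return float(np.sum((np.polyval(coef, t) - x[lo:hi]) ** 2))

def bottom_up_pla(x, n_segments):
    """Start from the finest segmentation, then repeatedly merge the
    pair of adjacent segments whose merge costs the least."""
    x = np.asarray(x, dtype=float)
    bounds = list(range(0, len(x), 2)) + [len(x)]   # pairs of points
    while len(bounds) - 1 > n_segments:
        costs = [fit_error(x, bounds[i], bounds[i + 2])
                 for i in range(len(bounds) - 2)]
        bounds.pop(int(np.argmin(costs)) + 1)       # cheapest merge
    return bounds                                    # segment boundaries
```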

  35. Piecewise Linear Approximation III Pros and Cons of PLA as a time series representation. Pros: • Good ability to compress natural signals. • Fast linear time algorithms for PLA exist. • Able to support some interesting non-Euclidean similarity measures, including weighted measures, relevance feedback, fuzzy queries… • Already widely accepted in some communities (e.g., biomedical). Cons: • Not (currently) indexable by any data structure (but does allow fast sequential scanning).

  36. Piecewise Aggregate Approximation I Basic Idea: Represent the time series as a sequence of box basis functions x1 … x8. Note that each box is the same length. Given the reduced dimensionality representation, we can calculate the approximate Euclidean distance as: DLB(Q', S') = sqrt(n/N) * sqrt(sum over i = 1..N of (q'_i - s'_i)^2). This measure is provably lower bounding (see the sketch after this slide). Independently introduced by two sets of authors: • Keogh, Chakrabarti, Pazzani & Mehrotra, KAIS (2000) / Keogh & Pazzani, PAKDD April 2000 • Byoung-Kee Yi & Christos Faloutsos, VLDB September 2000
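Both the representation and its lower-bounding distance fit in a few lines of numpy (a sketch; assumes N divides n, and the toy data is made up):

```python
import numpy as np

def paa(x, N):
    """PAA: the mean of each of N equal-length frames (N must divide n)."""
    return np.asarray(x, dtype=float).reshape(N, -1).mean(axis=1)

def paa_lb_dist(q_bar, s_bar, n):
    """D_LB = sqrt(n/N) * Euclidean distance between the PAA vectors."""
    return np.sqrt(n / len(q_bar)) * np.linalg.norm(q_bar - s_bar)

rng = np.random.default_rng(3)
q, s = rng.standard_normal(128), rng.standard_normal(128)
# The PAA distance never exceeds the true Euclidean distance
assert paa_lb_dist(paa(q, 8), paa(s, 8), 128) <= np.linalg.norm(q - s)
```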

  37. Piecewise Aggregate Approximation II Pros and Cons of PAA as a time series representation. Pros: • Extremely fast to calculate. • As efficient as other approaches (empirically). • Supports queries of arbitrary lengths. • Can support any Minkowski metric. • Supports non-Euclidean measures. • Supports weighted Euclidean distance. • Can be used to allow indexing of DTW and uniform scaling*. • Simple! Intuitive! Cons: • If visualized directly, looks aesthetically unpleasing.

  38. Adaptive Piecewise Constant Approximation I Basic Idea: Generalize PAA to allow the piecewise constant segments to have arbitrary lengths. Note that we now need 2 coefficients to represent each segment: its value and its length (segments are stored as pairs <cv1,cr1>, <cv2,cr2>, <cv3,cr3>, <cv4,cr4>). The intuition is this: many signals have little detail in some places and high detail in other places. APCA can adaptively fit itself to the data, achieving a better approximation. [Figure: a raw electrocardiogram of 250 points approximated by APCA (reconstruction error 2.61), Haar wavelet or PAA (3.27), and DFT (3.11)]

  39. Adaptive Piecewise Constant Approximation II The high quality of APCA has been noted by many researchers. However, it was believed that the representation could not be indexed, because some coefficients represent values and some represent lengths. However, an indexing method was discovered! (SIGMOD 2001 best paper award) Unfortunately, it is non-trivial to understand and implement, and thus has only been reimplemented once or twice (in contrast, more than 50 people have reimplemented PAA).

  40. Adaptive Piecewise Constant Approximation III Pros and Cons of APCA as a time series representation. Pros: • Fast to calculate: O(n). • More efficient than other approaches (on some datasets). • Supports queries of arbitrary lengths. • Supports non-Euclidean measures. • Supports weighted Euclidean distance. • Supports fast exact queries, and even faster approximate queries, on the same data structure. Cons: • Somewhat complex implementation. • If visualized directly, looks aesthetically unpleasing.

  41. Clipped Data Basic Idea: Find the mean of a time series, convert values above the line to "1" and values below the line to "0". Runs of "1"s and "0"s can be further compressed with run length encoding if desired. This representation does allow lower bounding! Pros and Cons of clipped data as a time series representation: an ultra-compact representation which may be particularly useful for specialized hardware. Example: 00000000000000000000000000000000000000000000111111111111111111111110000110000001111111111111111111111111111111111111111111111111 run-length encodes as 44 zeros, 23 ones, 4 zeros, 2 ones, 6 zeros, 49 ones: 44|23|4|2|6|49. [Portrait: Tony Bagnall]
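A sketch of clipping plus run-length encoding (the function names are mine, for illustration):

```python
import numpy as np

def clip(x):
    """Clipped representation: 1 where the series is above its mean, else 0."""
    x = np.asarray(x, dtype=float)
    return (x > x.mean()).astype(int)

def run_lengths(bits):
    """(first symbol, run lengths), e.g. (0, [44, 23, 4, 2, 6, 49])."""
    runs, count = [], 1
    for prev, cur in zip(bits, bits[1:]):
        if cur == prev:
            count += 1
        else:
            runs.append(count)
            count = 1
    runs.append(count)
    return int(bits[0]), runs

print(run_lengths(clip(np.sin(np.linspace(0, 6.28, 128)))))
```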

  42. Natural Language X: "rise, plateau, followed by a rounded peak". Pros and Cons of natural language as a time series representation. Pros: • The most intuitive representation! • Potentially a good representation for low-bandwidth devices like text messengers. Cons: • Difficult to evaluate. To the best of my knowledge, only one group is working seriously on this representation: the University of Aberdeen SUMTIME group, headed by Prof. Jim Hunter.

  43. Symbolic Approximation I Basic Idea: Convert the time series into an alphabet of discrete symbols (e.g., aabbbccb). Use string indexing techniques to manage the data. Potentially an interesting idea, but all work thus far is very ad hoc. Pros and Cons of symbolic approximation as a time series representation. Pros: • Potentially, we could take advantage of a wealth of techniques from the very mature fields of string processing and bioinformatics. Cons: • It is not clear how we should discretize the time series (discretize the values, the slope, shapes? How big an alphabet? etc.). • There are more than 210 different variants of this, at least 35 in data mining conferences.

  44. SAX: Symbolic Aggregate approXimation SAX allows (for the first time) a symbolic representation that allows: • Lower bounding of Euclidean distance • Dimensionality reduction • Numerosity reduction [Figure: a time series X of 128 points discretized against breakpoints a through d, yielding the word baabccbc; a finer-grained discretization of the same series reads aaaaaabbbbbccccccbbccccdddddddd] Jessica Lin, 1976-
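A compact sketch of SAX: z-normalize, reduce with PAA, then map each segment to a symbol via breakpoints that cut the standard normal into equal-probability regions (the breakpoint table below covers only small alphabets, and the values are rounded):

```python
import numpy as np

# Breakpoints giving equal-probability regions under N(0,1), rounded
BREAKPOINTS = {3: [-0.43, 0.43], 4: [-0.67, 0.0, 0.67]}

def sax(x, n_segments, alphabet_size=4):
    x = np.asarray(x, dtype=float)
    x = (x - x.mean()) / x.std()                  # z-normalize
    paa = x.reshape(n_segments, -1).mean(axis=1)  # PAA (segments must divide n)
    symbols = np.searchsorted(BREAKPOINTS[alphabet_size], paa)
    return "".join(chr(ord("a") + int(s)) for s in symbols)

rng = np.random.default_rng(4)
print(sax(np.cumsum(rng.standard_normal(128)), n_segments=8))  # a word like 'baabccbc'
```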

  45. Comparison of all Dimensionality Reduction Techniques • We have already compared features (does representation X allow weighted queries, queries of arbitrary lengths, is it simple to implement…). • We can also compare indexing efficiency: how long does it take to find the best answer to our query? • It turns out that the fairest way to measure this is to count the number of times we have to retrieve an item from disk.

  46. Data Bias Definition: Data bias is the conscious or unconscious use of a particular set of testing data to confirm a desired finding. Example: Suppose you are comparing wavelets to Fourier methods; the following datasets will produce drastically different results… [Figure: two time series of length 600, one good for wavelets and bad for Fourier, the other good for Fourier and bad for wavelets]

  47. Example of Data Bias: Whom to Believe? For the task of indexing time series for similarity search, which representation is best, the Discrete Fourier Transform (DFT) or the Discrete Wavelet Transform (Haar)? • "Several wavelets outperform the DFT." • "DFT-based and DWT-based techniques yield comparable results." • "Haar wavelets perform slightly better than DFT." • "DFT filtering performance is superior to DWT."

  48. Example of Data Bias: Whom to Believe II? To find out whom to believe (if anyone), we performed an extraordinarily careful and comprehensive set of experiments. For example… • We used a quantum mechanical device to generate random numbers. • We averaged results over 100,000 experiments! • For fairness, we used the same (randomly chosen) subsequences for both approaches.

  49. "I tested on the Powerplant, Infrasound and Attas datasets, and I know DFT outperforms the Haar wavelet." [Figure: pruning power comparison]
