Data Mining Techniques for Malware Detection R. K. Agrawal School of Computer and Systems Sciences Jawaharlal Nehru University New Delhi-110067
Outline • Data Mining • Classification • Clustering • Association Rules • Experimental Results • Conclusion and Future Work
Motivation: “Necessity is the Mother of Invention” • Data explosion problem • Automated data collection tools lead to tremendous amounts of data stored in databases and other information repositories • We are drowning in data, but starving for knowledge! • Solution: data mining • Extraction of interesting knowledge (rules, regularities, patterns, constraints) from data in large databases
Commercial Viewpoint • Lots of data is being collected and warehoused • Web data, e-commerce • purchases at department/grocery stores • Bank/Credit Card transactions • Computers have become cheaper and more powerful • Competitive Pressure is Strong • Provide better, customized services for an edge (e.g. in Customer Relationship Management)
Scientific Viewpoint • Data collected and stored at enormous speeds (GB/hour) • remote sensors on a satellite • Network related Log files • microarrays generating gene expression data • scientific simulations generating terabytes of data • Traditional techniques infeasible for raw data • Data mining may help scientists • in classifying and segmenting data • in Hypothesis Formation
What Is Data Mining? • Data mining (knowledge discovery in databases): • Extraction of interesting (non-trivial, implicit, previously unknown and potentially useful) information or patterns from data in large databases • Alternative names: • Knowledge discovery (mining) in databases (KDD), knowledge extraction, data/pattern analysis, data archeology, business intelligence, etc.
Data Mining Tasks • Prediction Tasks • Use some variables to predict unknown or future values of other variables • Description Tasks • Find human-interpretable patterns that describe the data. Common data mining tasks • Classification [Predictive] • Clustering [Descriptive] • Association Rule Discovery [Descriptive] • Sequential Pattern Discovery [Descriptive] • Regression [Predictive] • Deviation Detection [Predictive]
Classification: Definition • Given a collection of records (training set) • Each record contains a set of attributes; one of the attributes is the class label. • Find a model for the class attribute as a function of the values of the other attributes. • Goal: previously unseen records should be assigned a class as accurately as possible.
Classification—A Two-Step Process • Model construction: describing a set of predetermined classes • Each tuple/sample is assumed to belong to a predefined class, as determined by the class label attribute • The set of tuples used for model construction is the training set • The model is represented as classification rules, decision trees, or mathematical formulae • Model usage: for classifying future or unknown objects • Estimate the accuracy of the model • The known label of each test sample is compared with the classified result from the model • The accuracy rate is the percentage of test set samples that are correctly classified by the model • If the accuracy is acceptable, use the model to classify data tuples whose class labels are not known
Process (1): Model Construction • [Figure: training data is fed to a classification algorithm, which produces a classifier (model)] • Example of a learned rule: IF rank = ‘professor’ OR years > 6 THEN tenured = ‘yes’
Process (2): Using the Model in Prediction • [Figure: the classifier is applied first to testing data and then to unseen data] • Example: for the unseen record (Jolly, Professor, 5), the model answers Tenured?
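A minimal sketch of the two steps above using scikit-learn; the synthetic dataset, the 70/30 split, and the choice of a decision tree classifier are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Synthetic stand-in for a labelled training set (illustrative only)
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Step 1: model construction on the training portion
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
model = DecisionTreeClassifier(random_state=0)   # any classifier could be substituted here
model.fit(X_train, y_train)

# Step 2: model usage -- estimate accuracy on held-out data, then label unseen records
y_pred = model.predict(X_test)
print("test accuracy:", accuracy_score(y_test, y_pred))
```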
Classification: Application • Malware Detection • Goal: Predict whether a given binary is malware or not. • Approach: • Use both kinds of binaries (normal and malware) • Learn a model for the class of the binaries. • Use this model to detect malware by observing a new binary.
Clustering Definition • Given a set of data points, each having a set of attributes, and a similarity measure among them, find clusters such that • Data points in one cluster are more similar to one another. • Data points in separate clusters are less similar to one another. • Similarity Measures: • Euclidean Distance if attributes are continuous. • Other Problem-specific Measures.
Illustrating Clustering • [Figure: Euclidean distance based clustering in 3-D space — intracluster distances are minimized, intercluster distances are maximized]
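A minimal clustering sketch with k-means, which uses Euclidean distance; the number of clusters and the synthetic 3-D points are illustrative assumptions:

```python
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

# Synthetic 3-D points with three natural groups (illustrative only)
X, _ = make_blobs(n_samples=300, n_features=3, centers=3, random_state=0)

# k-means minimizes within-cluster (intracluster) squared Euclidean distances
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(kmeans.cluster_centers_)   # one centroid per cluster
print(kmeans.labels_[:10])       # cluster assignment of the first few points
```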
Clustering: Application • Binaries Segmentation: • Goal: subdivide a given set of binaries into distinct subsets of binaries
Association Rule Discovery: Definition • Given a set of records, each of which contains some number of items from a given collection, produce dependency rules that predict the occurrence of an item based on occurrences of other items. • Rules discovered: {Bread} --> {Milk}, {Diaper} --> {Beer}
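A small sketch of how the support and confidence of such rules can be computed from market-basket transactions; the toy transactions are an illustrative assumption:

```python
# Toy market-basket transactions (illustrative only)
transactions = [
    {"Bread", "Milk"},
    {"Bread", "Diaper", "Beer", "Eggs"},
    {"Milk", "Diaper", "Beer", "Coke"},
    {"Bread", "Milk", "Diaper", "Beer"},
    {"Bread", "Milk", "Diaper", "Coke"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(lhs, rhs):
    """Support of (lhs union rhs) divided by support of lhs."""
    return support(lhs | rhs) / support(lhs)

print("support({Diaper, Beer}) =", support({"Diaper", "Beer"}))
print("confidence({Diaper} --> {Beer}) =", confidence({"Diaper"}, {"Beer"}))
```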
The Sad Truth About Diapers and Beer • So, don’t be surprised if you find six-packs stacked next to diapers!
Association Rule Discovery: Application • Malware Rules • Goal: To identify activities that happen together in a given malware.
Sequential Pattern Discovery: Definition Given a set of objects, each associated with its own timeline of events, find rules that predict strong sequential dependencies among different events: • In telecommunications alarm logs, • (Inverter_Problem Excessive_Line_Current) (Rectifier_Alarm) --> (Fire_Alarm) • In point-of-sale transaction sequences, • Computer Bookstore: (Intro_To_Visual_C) (C++_Primer) --> (Perl_for_dummies) • Athletic Apparel Store: (Shoes) (Racket, Racketball) --> (Sports_Jacket)
Classification Example • [Figure: training examples plotted by height and weight, separated by a linear classifier]
Classification Techniques • Decision Trees • Naïve Bayes • Support Vector Machines • Neural Networks • Parzen Window • K-nearest neighbor
Issues: Data Preparation • Data cleaning • Preprocess data in order to reduce noise and handle missing values • Relevance analysis (feature selection) • Remove the irrelevant or redundant attributes • Data transformation • Generalize and/or normalize data
Issues: Evaluating Classification Methods • Accuracy • classifier accuracy: predicting class label • predictor accuracy: guessing value of predicted attributes • Speed • time to construct the model (training time) • time to use the model (classification/prediction time) • Robustness: handling noise and missing values • Scalability: efficiency in disk-resident databases • Interpretability • understanding and insight provided by the model • Other measures, e.g., goodness of rules, such as decision tree size or compactness of classification rules
Decision Tree Induction: Training Dataset • This follows an example from Quinlan’s ID3 (the buys_computer training tuples)
A Decision Tree for “buys_computer” • Root split on age? • age <= 30: split on student? (no --> no, yes --> yes) • age 31..40: yes • age > 40: split on credit_rating? (excellent --> no, fair --> yes)
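The same tree written as nested conditionals, a sketch of how the learned model classifies a tuple; the function name and the reading of the leaf labels follow the figure above:

```python
def buys_computer(age, student, credit_rating):
    """Classify a tuple with the decision tree shown above."""
    if age <= 30:
        return "yes" if student == "yes" else "no"
    elif age <= 40:            # the 31..40 branch is a pure 'yes' leaf
        return "yes"
    else:                      # age > 40
        return "yes" if credit_rating == "fair" else "no"

print(buys_computer(age=28, student="yes", credit_rating="fair"))      # -> yes
print(buys_computer(age=45, student="no", credit_rating="excellent"))  # -> no
```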
Algorithm for Decision Tree Induction • Basic algorithm (a greedy algorithm) • Tree is constructed in a top-down recursive divide-and-conquer manner • At start, all the training examples are at the root • Attributes are categorical (if continuous-valued, they are discretized in advance) • Examples are partitioned recursively based on selected attributes • Test attributes are selected on the basis of a heuristic or statistical measure (e.g., information gain) • Conditions for stopping partitioning • All samples for a given node belong to the same class • There are no remaining attributes for further partitioning – majority voting is employed for classifying the leaf • There are no samples left
Attribute Selection Measure: Information Gain (ID3/C4.5) • Select the attribute with the highest information gain • Let pi be the probability that an arbitrary tuple in D belongs to class Ci, estimated by |Ci,D|/|D| • Expected information (entropy) needed to classify a tuple in D: Info(D) = -Σi pi log2(pi) • Information needed (after using A to split D into v partitions) to classify D: InfoA(D) = Σj (|Dj|/|D|) × Info(Dj) • Information gained by branching on attribute A: Gain(A) = Info(D) - InfoA(D)
Attribute Selection: Information Gain • Class P: buys_computer = “yes” (9 tuples); Class N: buys_computer = “no” (5 tuples) • Info(D) = I(9,5) = -(9/14)log2(9/14) - (5/14)log2(5/14) = 0.940 • Infoage(D) = (5/14)I(2,3) + (4/14)I(4,0) + (5/14)I(3,2) = 0.694, where (5/14)I(2,3) means “age <= 30” has 5 out of 14 samples, with 2 yes’es and 3 no’s • Hence Gain(age) = Info(D) - Infoage(D) = 0.246 • Similarly, Gain(income) = 0.029, Gain(student) = 0.151 and Gain(credit_rating) = 0.048, so age is chosen as the splitting attribute
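A short sketch that reproduces these numbers from the class counts; the per-partition counts for age are taken from the slide above, and the rest is a direct implementation of the formulas:

```python
from math import log2

def info(counts):
    """Entropy I(c1, c2, ...) of a vector of class counts."""
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

# buys_computer example: 9 "yes" and 5 "no" tuples overall
info_D = info([9, 5])                                        # ~0.940

# Class counts in the three age partitions: (<=30), (31..40), (>40)
partitions = [[2, 3], [4, 0], [3, 2]]
info_age = sum(sum(p) / 14 * info(p) for p in partitions)    # ~0.694

print("Gain(age) =", round(info_D - info_age, 3))            # ~0.246
```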
Computing Information-Gain for Continuous-Value Attributes • Let attribute A be a continuous-valued attribute • Must determine the best split point for A • Sort the values of A in increasing order • Typically, the midpoint between each pair of adjacent values is considered as a possible split point • (a_i + a_{i+1})/2 is the midpoint between the values a_i and a_{i+1} • The point with the minimum expected information requirement for A is selected as the split point for A • Split: • D1 is the set of tuples in D satisfying A ≤ split-point, and D2 is the set of tuples in D satisfying A > split-point
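A sketch of this split-point search for a continuous attribute; the example values and class labels are illustrative assumptions:

```python
from math import log2

def info(counts):
    total = sum(counts)
    return -sum(c / total * log2(c / total) for c in counts if c > 0)

def best_split_point(values, labels):
    """Return the midpoint split minimizing the expected information for A."""
    pairs = sorted(zip(values, labels))
    classes = set(labels)
    best = None
    for i in range(len(pairs) - 1):
        split = (pairs[i][0] + pairs[i + 1][0]) / 2          # midpoint of adjacent values
        left = [l for v, l in pairs if v <= split]
        right = [l for v, l in pairs if v > split]
        counts = lambda part: [part.count(c) for c in classes]
        expected = (len(left) * info(counts(left)) +
                    len(right) * info(counts(right))) / len(pairs)
        if best is None or expected < best[1]:
            best = (split, expected)
    return best

# Illustrative continuous attribute (e.g., age) with binary class labels
print(best_split_point([22, 25, 30, 35, 40, 45], ["no", "no", "yes", "yes", "yes", "no"]))
```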
Linear Classifiers • f(x, w, b) = sign(w·x + b) • [Figure: two classes of points, one denoted +1 and the other -1, with several candidate separating lines] • Any of these would be fine... but which is best?
Support Vector Machine • [Figure: “Predict Class = +1” zone above the plane w·x + b = +1, “Predict Class = -1” zone below w·x + b = -1, decision boundary w·x + b = 0] • What we know: w·x+ + b = +1 and w·x- + b = -1, hence w·(x+ - x-) = 2 • Margin width M = 2 / ||w|| • Support vectors are those data points that the margin pushes up against
Linear SVM Mathematically • Goal 1) Correctly classify all training data: w·xi + b ≥ +1 if yi = +1, and w·xi + b ≤ -1 if yi = -1, i.e., yi(w·xi + b) ≥ 1 for all i • Goal 2) Maximize the margin M = 2/||w||, which is the same as minimizing (1/2)||w||^2 • We can formulate a quadratic optimization problem and solve for w and b: • Minimize (1/2) w·w • subject to yi(w·xi + b) ≥ 1 for all i
Linear SVM. Cont. • Requiring the derivatives of the Lagrangian with respect to w, b to vanish yields: w = Σi αi yi xi and Σi αi yi = 0 • KKT conditions yield: αi [yi(w·xi + b) - 1] = 0 for all i • Where the multipliers αi ≥ 0 solve the dual problem: maximize Σi αi - (1/2) Σi Σj αi αj yi yj (xi·xj) subject to αi ≥ 0 and Σi αi yi = 0
Linear SVM. Cont. • The resulting separating function is: f(x) = sign(Σi αi yi (xi·x) + b) • Notes: • The points with αi = 0 do not affect the solution. • The points with αi ≠ 0 are called support vectors. • The equality conditions yi(w·xi + b) = 1 hold true only for the support vectors.
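A brief sketch, using scikit-learn's SVC with a linear kernel, of fitting such a classifier and inspecting which training points end up as support vectors; the synthetic data and the value of C are illustrative assumptions:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC

# Two well-separated synthetic classes (illustrative only)
X, y = make_blobs(n_samples=100, centers=2, random_state=0)

clf = SVC(kernel="linear", C=1.0).fit(X, y)

# Only the points with non-zero alpha (the support vectors) define the solution
print("support vectors per class:", clf.n_support_)
print("support vector indices:", clf.support_)
print("w =", clf.coef_, " b =", clf.intercept_)
```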
Non-separable case • Slack variables ξi ≥ 0 allow some points to violate the margin • The modifications yield the following problem: Minimize (1/2) w·w + C Σi ξi subject to yi(w·xi + b) ≥ 1 - ξi and ξi ≥ 0 for all i
Non-Linear SVM • Note that the training data appears in the solution only through inner products. • If we pre-map the data into a higher-dimensional (and sparser) space, we can get more separability and a stronger family of separating functions. • The pre-mapping might make the problem infeasible. • We want to avoid pre-mapping and still have the same separation ability. • If we have a simple function that operates on two training points and implements the inner product of their pre-mappings, then we achieve better separation with no added cost.
Non-linear SVMs: Feature spaces • General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable: Φ: x→φ(x)
The “Kernel Trick” • The linear classifier relies on the inner product between vectors: K(xi, xj) = xi·xj • If every data point is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes: K(xi, xj) = φ(xi)·φ(xj) • A kernel function is a function that is equivalent to an inner product in some feature space. • Example: for 2-dimensional vectors x = [x1, x2], let K(xi, xj) = (1 + xi·xj)^2. We need to show that K(xi, xj) = φ(xi)·φ(xj): (1 + xi·xj)^2 = 1 + xi1^2 xj1^2 + 2 xi1 xj1 xi2 xj2 + xi2^2 xj2^2 + 2 xi1 xj1 + 2 xi2 xj2 = φ(xi)·φ(xj), where φ(x) = [1, x1^2, √2 x1x2, x2^2, √2 x1, √2 x2] • Thus, a kernel function implicitly maps data to a high-dimensional space (without the need to compute each φ(x) explicitly).
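A quick numeric check of this identity; the example vectors are arbitrary, and phi follows the expansion above:

```python
import numpy as np

def phi(x):
    """Explicit feature map for the kernel (1 + x.y)^2 in 2-D."""
    x1, x2 = x
    return np.array([1, x1**2, np.sqrt(2)*x1*x2, x2**2, np.sqrt(2)*x1, np.sqrt(2)*x2])

xi = np.array([1.0, 2.0])
xj = np.array([3.0, -1.0])

kernel_value = (1 + xi @ xj) ** 2        # K(xi, xj) computed directly
mapped_value = phi(xi) @ phi(xj)         # same value via the explicit mapping

print(kernel_value, mapped_value)        # both print 4.0
```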
Mercer Kernels • A Mercer kernel is a function k(x, y) for which there exists a function φ(x) such that: k(x, y) = φ(x)·φ(y) • A function k(·,·) is a Mercer kernel if, for any function g(·) such that ∫ g(x)^2 dx is finite, the following holds true: ∫∫ k(x, y) g(x) g(y) dx dy ≥ 0
Commonly used Mercer Kernels • Homogeneous polynomial kernels: k(x, y) = (x·y)^d • Non-homogeneous polynomial kernels: k(x, y) = (1 + x·y)^d • Radial basis function (RBF) kernels: k(x, y) = exp(-||x - y||^2 / (2σ^2))
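These kernels are short to write down directly; a sketch in which the degree d and width sigma are illustrative parameters:

```python
import numpy as np

def poly_homogeneous(x, y, d=2):
    return (x @ y) ** d

def poly_nonhomogeneous(x, y, d=2):
    return (1 + x @ y) ** d

def rbf(x, y, sigma=1.0):
    return np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2))

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
print(poly_homogeneous(x, y), poly_nonhomogeneous(x, y), rbf(x, y))
```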
Solution of non-linear SVM • The problem: maximize Σi αi - (1/2) Σi Σj αi αj yi yj K(xi, xj) subject to 0 ≤ αi ≤ C and Σi αi yi = 0 • The separating function: f(x) = sign(Σi αi yi K(xi, x) + b)
Multi-Class SVM • Approaches: • One against one: K(K-1)/2 binary classifiers are required; the outputs of the classifiers are aggregated to make the final decision. • One against all: K binary classifiers are required; each separates one class from the other (K-1) classes. Given a data point X, the binary classifier with the largest output determines the class of X.
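Both strategies are available in scikit-learn; a sketch on a synthetic 3-class problem, where the data and parameters are illustrative assumptions:

```python
from sklearn.datasets import make_blobs
from sklearn.svm import SVC, LinearSVC
from sklearn.multiclass import OneVsRestClassifier

X, y = make_blobs(n_samples=150, centers=3, random_state=0)   # K = 3 classes

# One-against-one: SVC trains K(K-1)/2 = 3 binary classifiers and aggregates their votes
ovo = SVC(kernel="linear", decision_function_shape="ovo").fit(X, y)

# One-against-all: K = 3 binary classifiers; the largest output decides the class
ova = OneVsRestClassifier(LinearSVC()).fit(X, y)

print(ovo.predict(X[:5]), ova.predict(X[:5]))
```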
Why Is SVM Effective on High Dimensional Data? • The complexity of trained classifier is characterized by the # of support vectors rather than the dimensionality of the data • The support vectors are the essential or critical training examples —they lie closest to the decision boundary (MMH) • If all other training examples are removed and the training is repeated, the same separating hyperplane would be found • The number of support vectors found can be used to compute an (upper) bound on the expected error rate of the SVM classifier, which is independent of the data dimensionality • Thus, an SVM with a small number of support vectors can have good generalization, even when the dimensionality of the data is high
Experiments • Source of data: preprocessed data in terms of API calls, taken from data collected from C-DAC Mohali • Description of data: [table of dataset details not reproduced]
Classifier Accuracy Measures • Performance measures: • sensitivity = t-pos / pos /* true positive recognition rate */ • specificity = t-neg / neg /* true negative recognition rate */ • accuracy = sensitivity × pos/(pos + neg) + specificity × neg/(pos + neg)
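A sketch of these measures computed from raw counts; the counts used in the example are illustrative placeholders, not results from the experiments:

```python
def accuracy_measures(t_pos, pos, t_neg, neg):
    """Sensitivity, specificity and accuracy from positive/negative counts."""
    sensitivity = t_pos / pos                 # true positive recognition rate
    specificity = t_neg / neg                 # true negative recognition rate
    accuracy = (sensitivity * pos + specificity * neg) / (pos + neg)
    return sensitivity, specificity, accuracy

# Illustrative counts only
print(accuracy_measures(t_pos=90, pos=100, t_neg=80, neg=100))
```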
Observations • The performance of the SVM classifier is significantly better than that of C4.5. • The performance depends on the size of the feature set. • SVM requires fewer training samples than C4.5; hence, SVM is a better choice, as collecting malicious samples is difficult.
Conclusion & Future Work • SVM is a better classification technique, which can be used for detection of malware. • Attention is needed to construct better feature representations for better generalization. • How to extend this to the multi-class malware problem.