
Module 2: DECISION TREE LEARNING


Presentation Transcript


  1. Module 2: DECISION TREE LEARNING Pavan D.M., Dept. of CSE, BLDEACET

  2. Introduction
• Decision tree learning is one of the most widely used and practical methods for inductive inference.
• It is a method for approximating discrete-valued target functions, in which the learned function is represented by a decision tree; it is robust to noisy data and capable of learning disjunctive expressions.
• Learned trees can also be re-represented as sets of if-then rules to improve human readability.
• These learning methods are among the most popular inductive inference algorithms and have been successfully applied to a broad range of tasks, from learning to diagnose medical cases to learning to assess the credit risk of loan applicants.

  3. DECISION TREE REPRESENTATION
• Decision trees classify instances by sorting them down the tree from the root to some leaf node, which provides the classification of the instance.
• Each node in the tree specifies a test of some attribute of the instance, and each branch descending from that node corresponds to one of the possible values for this attribute.
• An instance is classified by starting at the root node of the tree, testing the attribute specified by this node, then moving down the tree branch corresponding to the value of the attribute in the given example. This process is then repeated for the subtree rooted at the new node, as the sketch below illustrates.
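As a concrete illustration of this sorting process, here is a minimal Python sketch. The nested-dict encoding, the attribute names, and the classify helper are illustrative assumptions, not code from the slides; the tree itself mirrors Figure 3.1:

```python
# The Figure 3.1 PlayTennis tree, encoded (illustratively) as nested dicts:
# internal nodes test an attribute, plain strings are leaf classifications.
tree = {
    "attribute": "Outlook",
    "branches": {
        "Sunny": {"attribute": "Humidity",
                  "branches": {"High": "No", "Normal": "Yes"}},
        "Overcast": "Yes",
        "Rain": {"attribute": "Wind",
                 "branches": {"Weak": "Yes", "Strong": "No"}},
    },
}

def classify(node, instance):
    # Sort the instance down the tree: test the node's attribute,
    # follow the branch matching the instance's value, repeat until a leaf.
    while isinstance(node, dict):
        node = node["branches"][instance[node["attribute"]]]
    return node

instance = {"Outlook": "Sunny", "Temperature": "Hot",
            "Humidity": "High", "Wind": "Strong"}
print(classify(tree, instance))  # -> "No": classified as a negative instance
```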

  4. Contd…
• Figure 3.1 illustrates a typical learned decision tree. This decision tree classifies Saturday mornings according to whether they are suitable for playing tennis.
• For example, the instance (Outlook = Sunny, Temperature = Hot, Humidity = High, Wind = Strong) would be sorted down the leftmost branch of this decision tree and would therefore be classified as a negative instance (i.e., the tree predicts PlayTennis = No).
• This tree and the example used in Table 3.2 to illustrate the ID3 learning algorithm are adapted from (Quinlan 1986).
• In general, decision trees represent a disjunction of conjunctions of constraints on the attribute values of instances.
• Each path from the tree root to a leaf corresponds to a conjunction of attribute tests, and the tree itself to a disjunction of these conjunctions.
• For example, the decision tree shown in Figure 3.1 corresponds to the expression
(Outlook = Sunny ∧ Humidity = Normal) ∨ (Outlook = Overcast) ∨ (Outlook = Rain ∧ Wind = Weak)

  5. APPROPRIATE PROBLEMS FOR DECISION TREE LEARNING
Decision tree learning is generally best suited to problems with the following characteristics:
• Instances are represented by attribute-value pairs. Instances are described by a fixed set of attributes (e.g., Temperature) and their values (e.g., Hot). The easiest situation for decision tree learning is when each attribute takes on a small number of disjoint possible values (e.g., Hot, Mild, Cold).
• The target function has discrete output values. The decision tree in Figure 3.1 assigns a boolean classification (e.g., yes or no) to each example. Decision tree methods easily extend to learning functions with more than two possible output values.
• Disjunctive descriptions may be required. As noted above, decision trees naturally represent disjunctive expressions.
• The training data may contain errors. Decision tree learning methods are robust to errors, both errors in the classifications of the training examples and errors in the attribute values that describe these examples.
• The training data may contain missing attribute values. Decision tree methods can be used even when some training examples have unknown values (e.g., when the Humidity of the day is known for only some of the training examples).

  6. Example

  7. Example
• Suppose a new example arrives, say D15: Rain, High, Weak (i.e., Outlook = Rain, Humidity = High, Wind = Weak), with PlayTennis = ? The learned tree can classify it, as shown below.
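Sorting D15 down the Figure 3.1 tree, Outlook = Rain selects the Rain branch and Wind = Weak reaches a Yes leaf, so the tree predicts PlayTennis = Yes. A usage sketch, reusing the illustrative tree and classify names from the earlier block:

```python
# Reuses the illustrative `tree` and `classify` sketch from slide 3.
d15 = {"Outlook": "Rain", "Humidity": "High", "Wind": "Weak"}
print(classify(tree, d15))  # -> "Yes": Rain branch, then Wind = Weak leaf
```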

  8. Contd…

  9. Contd…

  10. Contd…

  11. Contd…

  12. Contd…

  13. Decision Tree: Best Classifier

  14. ID3

  15. Steps of ID3 Algorithm
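A minimal Python sketch of these steps, following the ID3 procedure in Mitchell (1997, Table 3.1): at each node, select the attribute with the highest information gain, partition the examples on its values, and recurse. The dict-based tree encoding and the helper names (entropy, gain, id3) are illustrative assumptions, not code from the slides:

```python
import math
from collections import Counter

def entropy(examples, target):
    # Entropy of the example set with respect to the target attribute.
    counts = Counter(ex[target] for ex in examples)
    total = len(examples)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def gain(examples, attr, target):
    # Expected reduction in entropy from partitioning the examples on attr.
    total = len(examples)
    remainder = 0.0
    for value in {ex[attr] for ex in examples}:
        subset = [ex for ex in examples if ex[attr] == value]
        remainder += (len(subset) / total) * entropy(subset, target)
    return entropy(examples, target) - remainder

def id3(examples, attributes, target):
    labels = [ex[target] for ex in examples]
    if len(set(labels)) == 1:            # all examples share one class: leaf
        return labels[0]
    if not attributes:                   # no attributes left: majority label
        return Counter(labels).most_common(1)[0][0]
    best = max(attributes, key=lambda a: gain(examples, a, target))
    node = {"attribute": best, "branches": {}}
    for value in {ex[best] for ex in examples}:
        subset = [ex for ex in examples if ex[best] == value]
        node["branches"][value] = id3(
            subset, [a for a in attributes if a != best], target)
    return node

# e.g. id3(training_examples, ["Outlook", "Temperature", "Humidity", "Wind"],
#          "PlayTennis") would rebuild the tree of Figure 3.1.
```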

  16. Contd… For a collection S containing positive and negative examples, the entropy relative to this boolean classification is
Entropy(S) = -p₊ log₂ p₊ - p₋ log₂ p₋
where p₊ and p₋ are the proportions of positive and negative examples in S. Notice that the entropy is 0 if all members of S belong to the same class, and 1 when the collection contains an equal number of positive and negative examples. If the collection contains unequal numbers of positive and negative examples, the entropy is between 0 and 1. Figure 3.2 shows the form of the entropy function for a boolean classification as p₊ varies between 0 and 1.
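A quick numeric check of these properties; the computation is the standard boolean entropy, with the helper name our own:

```python
import math

def entropy(p_pos):
    # Entropy of a boolean-labeled collection given the proportion
    # of positive examples p_pos; by convention 0 * log2(0) = 0.
    if p_pos in (0.0, 1.0):
        return 0.0
    p_neg = 1.0 - p_pos
    return -p_pos * math.log2(p_pos) - p_neg * math.log2(p_neg)

print(entropy(9 / 14))  # ~0.940: the 14-example PlayTennis set [9+, 5-]
print(entropy(0.5))     # 1.0: equal numbers of positive and negative examples
print(entropy(1.0))     # 0.0: all members belong to the same class
```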

  17. Example

  18. Calculate Entropy and Gain
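Information gain measures the expected reduction in entropy from partitioning the examples on an attribute A:
Gain(S, A) = Entropy(S) - Σ over v ∈ Values(A) of (|S_v| / |S|) · Entropy(S_v)
A worked check using the standard PlayTennis counts from Mitchell (1997); the helper name H and the inline counts are our own shorthand:

```python
import math

def H(pos, neg):
    # Entropy of a collection with `pos` positive and `neg` negative examples.
    total = pos + neg
    return -sum((c / total) * math.log2(c / total) for c in (pos, neg) if c)

S_entropy = H(9, 5)                 # Entropy(S) ~ 0.940 for [9+, 5-]
weak      = H(6, 2)                 # Wind = Weak:   [6+, 2-] ~ 0.811
strong    = H(3, 3)                 # Wind = Strong: [3+, 3-] = 1.000
gain_wind = S_entropy - (8 / 14) * weak - (6 / 14) * strong
print(round(gain_wind, 3))          # ~0.048
```

Computing the same quantity for every attribute gives Gain(S, Outlook) ≈ 0.246, Gain(S, Humidity) ≈ 0.151, Gain(S, Wind) ≈ 0.048, and Gain(S, Temperature) ≈ 0.029, so Outlook is chosen as the root test.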

  19. Contd…

  20. Contd…

  21. Contd…

  22. Contd…

  23. Contd…

  24. Complete Decision Tree
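For reference, the tree this example converges to (Figure 3.1 in Mitchell 1997) places Outlook at the root: Overcast days are classified Yes immediately, Sunny days are further tested on Humidity (High → No, Normal → Yes), and Rain days on Wind (Strong → No, Weak → Yes).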
