
Logistic Regression



  1. Logistic Regression Ram Akella Lecture 3 February 2, 2011 UC Berkeley Silicon Valley Center/SC

  2. Overview • Motivating example • Why not ordinary linear regression? • The logistic formulation • Probability of “success” • Odds of “success” • Logit of “success” • The logistic regression model • Running the model • Interpreting the output • Evaluating goodness of fit

  3. The Aim of Classification Methods • Similar to ordinary regression models, except the response, Y, is categorical. • Y indicates the “group” membership of each observation (each category is a group): Y = C1, C2, … • Predictors X1, X2, … are continuous and/or categorical. • Aims: • Profiling (= explanatory): what are the differences (in terms of X1, X2, …) between the various groups (as indicated by Y)? • Classification (= prediction): predict Y (group membership) on the basis of X1, X2, …

  4. Example 1: Classifying Firm Status • Financial analysts are interested in predicting the future solvency of firms. In order to predict whether a firm will go bankrupt in the near future, it is useful to look at different ratio measures of financial health, such as: • Cash_Debt: cash flow/total debt • ROA: net income/total assets • Current: current assets/current liabilities • Assets_Sales: current assets/net sales • Status: bankrupt / solvent

  5. Example 2: Profiling Customers by Beer Preference • A beer-maker would like to know the characteristics that distinguish customers who prefer light beer from those who prefer regular beer. The beer-maker would like to use this information to predict any customer’s beer preference based on: • gender, • marital status, • income, and • age.

  6. Example: Beer Preference Consider the data on beer preferences (light or regular) of 100 customers along with their age, income, gender and marital status. Suppose we code the response variable as Y = 1 if the customer prefers light beer and Y = 0 if regular. Now we fit the multiple regression model
$$Y = \beta_0 + \beta_1\,\text{Gender} + \beta_2\,\text{Married} + \beta_3\,\text{Income} + \beta_4\,\text{Age} + \varepsilon$$

  7. Model assumptions: • Observations/residuals are independent • Residuals are normally distributed • Linear model is adequate • Variance of residuals is constant • Which assumptions are violated? • What about predictions from this model?

  8. Different Formulation • Let π = Prob(Y=1). • In the beer example, π is the probability that a customer prefers light beer. • It follows that Prob(Y=0) = 1 − π. • In order to get rid of the 0/1 values, we can look at a function of π and treat it as the response of the model.

  9. Logistic Regression • Logistic regression learns the conditional distribution P(y | x). • We will assume two classes, y = 0 and y = 1. • The parametric form for P(y = 1 | x, w) and P(y = 0 | x, w) is:
$$P(y=1 \mid x, w) = \frac{1}{1 + e^{-w \cdot x}}, \qquad P(y=0 \mid x, w) = \frac{e^{-w \cdot x}}{1 + e^{-w \cdot x}}$$
where w is the parameter vector w = [w1, w2, …, wk].

  10. Logistic Regression • We can represent the log of the probability ratio as a linear combination of features:
$$\ln \frac{P(y=1 \mid x, w)}{P(y=0 \mid x, w)} = w \cdot x$$
This is known as the log odds.

  11. Logistic Regression A linear function w · x, which ranges over (−∞, ∞), can be transformed to the range (0, 1) using the sigmoid function:
$$g(x, w) = \frac{1}{1 + e^{-w \cdot x}}$$
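The lecture contains no code; as a minimal Python sketch (the weights and features below are hypothetical), the sigmoid squashes any real-valued score into a probability:

```python
import numpy as np

def sigmoid(z):
    """Map a real-valued score z in (-inf, inf) to a probability in (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

# A linear score w.x becomes a valid probability:
w = np.array([0.5, -1.2])   # hypothetical parameter vector
x = np.array([2.0, 1.0])    # hypothetical feature vector
print(sigmoid(w @ x))       # P(y = 1 | x, w) under the logistic model
```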

  12. Logistic Regression Given P(y | x), we predict ŷ = 1 if the expected loss of predicting 0, weighted by L(0,1), is greater than the expected loss of predicting 1, weighted by L(1,0). (For now assume L(0,1) = L(1,0), in which case we predict ŷ = 1 whenever P(y = 1 | x) > 1/2.)

  13. Logistic Regression This assumed L(0,1) = L(1,0). A similar derivation can be done for arbitrary L(0,1) and L(1,0), if we decide that one class is more important to detect than the other; a sketch of the resulting decision rule follows.
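A minimal Python sketch of this loss-weighted decision rule (the loss values in the usage lines are hypothetical):

```python
def predict(p1, L01=1.0, L10=1.0):
    """Predict y-hat given p1 = P(y=1 | x).

    L01 = loss of predicting 0 when the truth is 1,
    L10 = loss of predicting 1 when the truth is 0.
    Predict 1 when the expected loss of predicting 0 (p1 * L01)
    exceeds the expected loss of predicting 1 ((1 - p1) * L10).
    """
    return 1 if p1 * L01 > (1 - p1) * L10 else 0

print(predict(0.6))                # equal losses: threshold 1/2, so predict 1
print(predict(0.6, L01=1, L10=3))  # costly false positives: now predict 0
```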

  14. Maximum Likelihood Learning • The likelihood function is the probability of the data (x, y) given the parameters w: p(x, y | w). • It is a function of the parameters. • Maximum likelihood learning finds the parameters that maximize this likelihood function. • A common trick is to work with the log-likelihood, i.e., take the logarithm of the likelihood function: log p(x, y | w).

  15. Computing the Likelihood • In our framework, we assume each training example (xi, yi) is drawn independently from the same (but unknown) distribution P(x, y) (the i.i.d. assumption); hence, we can write:
$$p(\text{data} \mid w) = \prod_i P(x_i, y_i \mid w)$$
• This is the function that we will maximize.

  16. Computing the Likelihood • Further, P(x | w) = P(x) because x does not depend on w, so:
$$\prod_i P(x_i, y_i \mid w) = \prod_i P(y_i \mid x_i, w)\, P(x_i)$$

  17. Computing the Likelihood • Taking logs and dropping the P(xi) terms, which do not depend on w, this can be written as:
$$\ell(w) = \sum_i \log P(y_i \mid x_i, w)$$
• Then the objective learning function is:
$$\ell(w) = \sum_i \Bigl[\, y_i \log g(x_i, w) + (1 - y_i) \log\bigl(1 - g(x_i, w)\bigr) \Bigr]$$
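A small Python sketch of this objective, assuming the sigmoid g from slide 11 (the data in the usage lines is hypothetical):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def log_likelihood(w, X, y):
    """Conditional log-likelihood: sum_i [y_i log g_i + (1 - y_i) log(1 - g_i)]."""
    g = sigmoid(X @ w)
    return np.sum(y * np.log(g) + (1 - y) * np.log(1 - g))

X = np.array([[1.0, 2.0], [1.0, -1.0]])  # one row per example
y = np.array([1, 0])
w = np.array([0.1, 0.5])
print(log_likelihood(w, X, y))
```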

  18. Fitting the Logistic Regression with Gradient Ascent

  19. Fitting the Logistic Regression with Gradient Ascent Differentiating the log-likelihood gives the gradient
$$\frac{\partial \ell(w)}{\partial w_j} = \sum_i \bigl(y_i - g(x_i, w)\bigr)\, x_{ij}$$
so each step moves w in the direction that increases ℓ(w).

  20. Gradient Ascent Algorithm Repeat until convergence:
$$w \leftarrow w + \eta\, \nabla_w \ell(w)$$
where η > 0 is the learning rate.
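A minimal runnable Python sketch of this update (the learning rate, iteration count, and toy data are arbitrary/hypothetical choices, not from the lecture):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, n_iters=5000):
    """Fit w by gradient ascent on the log-likelihood.

    The gradient of the log-likelihood is X^T (y - g(X, w)),
    so each step moves w in that direction.
    """
    w = np.zeros(X.shape[1])
    for _ in range(n_iters):
        g = sigmoid(X @ w)       # current P(y = 1 | x, w) for every example
        w += lr * X.T @ (y - g)  # ascend the gradient
    return w

# Toy usage (first column of ones acts as the intercept):
X = np.array([[1, 0.5], [1, 1.5], [1, 2.5], [1, 3.5]])
y = np.array([0, 0, 1, 1])
print(fit_logistic(X, y))
```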

  21. Multi-class Case • Choose class K to be the “reference class” and represent each of the other classes k as a logistic function of the log odds of class k versus class K:
$$\ln \frac{P(y=k \mid x)}{P(y=K \mid x)} = w_k \cdot x, \qquad k = 1, \ldots, K-1$$

  22. Multiclass Case • The conditional probability for class k ≠ K can be computed as:
$$P(y=k \mid x) = \frac{e^{w_k \cdot x}}{1 + \sum_{j=1}^{K-1} e^{w_j \cdot x}}$$
• For class K the conditional probability is:
$$P(y=K \mid x) = \frac{1}{1 + \sum_{j=1}^{K-1} e^{w_j \cdot x}}$$
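A short Python sketch of these two formulas (the weight matrix W and feature vector x below are hypothetical, with K = 3 classes):

```python
import numpy as np

def multiclass_probs(W, x):
    """Class probabilities with class K as the reference class.

    W holds one weight vector per non-reference class (shape (K-1, d)).
    Returns [P(y=1|x), ..., P(y=K-1|x), P(y=K|x)].
    """
    scores = np.exp(W @ x)            # exp(w_k . x) for k = 1..K-1
    denom = 1.0 + scores.sum()
    return np.append(scores / denom, 1.0 / denom)

W = np.array([[0.2, -0.5], [1.0, 0.3]])  # hypothetical weights, K = 3
x = np.array([1.0, 2.0])
print(multiclass_probs(W, x))            # the three probabilities sum to 1
```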

  23. Example A 1959 article presents data on the proportion of coal miners who exhibit symptoms of severe pneumoconiosis as a function of the number of years of exposure; y is the proportion of miners who have severe symptoms.

  24. Example • The fitted model is
$$\hat{y} = \frac{1}{1 + e^{\,4.7965 - 0.0935x}}$$
where x is the number of years of exposure.

  25. Example • The covariance matrix is:

  26. Example: Logistic Regression Table

  27. Interpretation of the Parameters • Suppose we have a single regressor x. • If we increment the value of the regressor by one unit, the linear predictor changes from β0 + β1x to β0 + β1(x + 1). • The difference between the two predicted log odds is therefore:
$$\text{logit}(x+1) - \text{logit}(x) = \beta_1$$

  28. The odds ratio • The odds ratio can be interpreted as the multiplicative increase in the odds of success associated with a one-unit change in the value of the predictor variable, and it is defined as:
$$\hat{O}_R = \frac{\text{odds}(x+1)}{\text{odds}(x)} = e^{\hat{\beta}_1}$$

  29. Example • Following the pneumoconiosis data, the fitted model gives the odds ratio
$$\hat{O}_R = e^{0.0935} \approx 1.10$$
• This implies that every year of additional exposure increases the odds of contracting a severe case of pneumoconiosis by about 10%.
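A one-line check of this computation in Python, using the slope coefficient from the fitted model on slide 24:

```python
import numpy as np

b1 = 0.0935                # slope for years of exposure (slide 24)
odds_ratio = np.exp(b1)
print(odds_ratio)          # ~1.10: odds grow by about 10% per extra year
```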

  30. Overall Usefulness of the Model For maximum likelihood estimation, the fit of a model is measured by its deviance D (similar to the sum of squared errors in the case of least-squares estimation). We compare the deviance of a model to the deviance of the naïve model (no explanatory variables: simply classify each observation as belonging to the majority class).

  31. Overall Usefulness of the Model • If the ratio D/(n − p), where p is the number of predictors and n the number of samples, is much greater than unity, then the current model is not adequate. • Note: this test is similar in intent to the F-test for overall usefulness of a linear regression model.

  32. Usefulness of Individual Predictors • Each estimated coefficient, ŵj, has a standard error, s_ŵj, associated with it. • To conduct the hypothesis test H0: wj = 0 vs. Ha: wj ≠ 0, use the test statistic ŵj / s_ŵj (called the Wald statistic). • The associated p-value indicates the statistical significance of the predictor xj, or the significance of the contribution of this predictor beyond the other predictors.

  33. Evaluating & Comparing Classifiers • Evaluation of a classifier is necessary for two different purposes: • To obtain the complete specification of a particular model, i.e., to obtain numerical values of the parameters of a particular method. • To compare two or more fully specified classifiers in order to select the “best” classifier. • Useful criteria: • Reasonableness • Accuracy • Cost measures

  34. Evaluation Criterion 1: Reasonableness • As in regression, time series, and other models, we expect the model to be reasonable: • Based on the analyst’s domain knowledge, is there a reasonable basis for a causal relationship between the predictor variables and Y (group membership)? • Are the predictor variables actually available for prediction in the future? • If the classifier implies a certain order of importance among the predictor variables (indicated by p-values for specific predictors), is this order reasonable?

  35. Evaluation Criterion 2: Accuracy Measures • The idea is to compare the predictions with the actual responses (like forecast errors in time series, or residuals in regression models). • In regression/time series etc. we displayed these as 3 columns (actual values, predicted/fitted values, errors) or plotted them on a graph. • In classification, the predictions and actual values are displayed in a compact format called a classification/confusion matrix. • This can be done for the training and/or validation set.

  36. Classification/Confusion Matrix Example with two groups, Y = C1 or C2:

                 Predicted C1   Predicted C2
    Actual C1         a              b
    Actual C2         c              d

where a = # of observations that were classified correctly as group C1.
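A minimal Python sketch that builds this matrix from labels (coding C1 as 1 and C2 as 0 is an assumption made for illustration):

```python
import numpy as np

def confusion_matrix(actual, predicted):
    """Rows = actual (C1, C2), columns = predicted (C1, C2).

    Returns [[a, b], [c, d]] with a = # of C1 members correctly
    classified as C1 (C1 is coded as 1, C2 as 0).
    """
    actual, predicted = np.asarray(actual), np.asarray(predicted)
    a = np.sum((actual == 1) & (predicted == 1))
    b = np.sum((actual == 1) & (predicted == 0))
    c = np.sum((actual == 0) & (predicted == 1))
    d = np.sum((actual == 0) & (predicted == 0))
    return np.array([[a, b], [c, d]])

actual    = [1, 1, 0, 0, 1, 0]   # hypothetical labels
predicted = [1, 0, 0, 1, 1, 0]
print(confusion_matrix(actual, predicted))
```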

  37. Example: Beer Preference • The following classification matrix results from using a certain classifier on the data.

  38. Classification Measures • Based on the classification matrix there are 5 popular measures. • The overall accuracy of a classifier is (a + d)/n, the proportion of all n observations classified correctly. • The overall error rate of a classifier is (b + c)/n.

  39. Accuracy Measures – cont. • The base accuracy of a dataset is the accuracy of the naïve rule: the proportion of the majority class. • The base error rate is 1 − base accuracy. • The lift of a classifier (a.k.a. its improvement) measures how much the classifier’s accuracy improves on the base accuracy; a sketch of these measures follows.
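A Python sketch of these measures on hypothetical counts, using the a/b/c/d layout from slide 36 (the lift formula shown, accuracy relative to base accuracy, is one common definition; the slide's own formula did not survive transcription):

```python
import numpy as np

cm = np.array([[50, 10],    # [[a, b],
               [ 5, 35]])   #  [c, d]] -- hypothetical counts
n = cm.sum()

accuracy = (cm[0, 0] + cm[1, 1]) / n       # (a + d) / n
error_rate = 1 - accuracy                  # (b + c) / n
base_accuracy = cm.sum(axis=1).max() / n   # proportion of the majority class
base_error = 1 - base_accuracy
lift = accuracy / base_accuracy            # improvement over the naive rule
print(accuracy, base_accuracy, lift)
```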

  40. Accuracy Measures – cont. • Suppose the two groups are asymmetric, in that it is more important to correctly predict membership in C1 than in C2. E.g., in a bankruptcy example, it may be more important to correctly predict a firm that is going bankrupt than to correctly predict a firm that is going to stay solvent. The classifier is essentially used as a system for detecting or signaling C1. • In such a case, the overall accuracy is not a good measure for evaluating the classifier.

  41. Accuracy Measures for Unequal “Importance” of Groups (C1 = “important” group) • Sensitivity of a classifier = its ability to correctly detect the important group members = % of C1 members correctly classified = a/(a + b). • Specificity of a classifier = its ability to correctly rule out C2 members = % of C2 members correctly classified = d/(c + d).

  42. Accuracy Measures for Unequal “Importance” of Groups (C1 = “important” group) • False positive rate of the classifier = % of C2 members incorrectly classified as C1 = c/(c + d). • False negative rate of the classifier = % of C1 members incorrectly classified as C2 = b/(a + b).
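Continuing the same hypothetical counts, a Python sketch of all four measures (the cell values are arbitrary):

```python
a, b, c, d = 50, 10, 5, 35   # confusion-matrix cells, C1 = "important" group

sensitivity = a / (a + b)    # fraction of actual C1 correctly detected
specificity = d / (c + d)    # fraction of actual C2 correctly ruled out
fpr = c / (c + d)            # actual C2 wrongly flagged as C1 (= 1 - specificity)
fnr = b / (a + b)            # actual C1 missed (= 1 - sensitivity)
print(sensitivity, specificity, fpr, fnr)
```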

  43. Cost Sensitive Learning • There are two types of errors: false positives (FP) and false negatives (FN). • Machine learning methods usually minimize FP + FN. • Direct marketing maximizes TP.

  44. Cost Sensitive Learning • In practice, false positive and false negative errors often incur different costs • Examples: • Medical diagnostic tests: does X have leukemia? • Loan decisions: approve mortgage for X? • Web mining: will X click on this link? • Promotional mailing: will X buy the product?

  45. Cost Sensitive Learning • Most learning schemes do not perform cost-sensitive learning • They generate the same classifier no matter what costs are assigned to the different classes • Example: standard decision tree learner • Simple methods for cost-sensitive learning (a weighting sketch follows below): • Re-sampling of instances according to costs • Weighting of instances according to costs • Some schemes are inherently cost-sensitive
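As referenced in the list above, a minimal sketch of the weighting approach (the costs are hypothetical, and how the weights are consumed depends on the learner):

```python
import numpy as np

def cost_weights(y, fp_cost=1.0, fn_cost=5.0):
    """Per-instance weights so that errors on the expensive class count more.

    With fn_cost > fp_cost, positives (whose misclassification would be a
    false negative) get proportionally larger weights.
    """
    return np.where(np.asarray(y) == 1, fn_cost, fp_cost)

y = np.array([1, 0, 0, 1, 0])
print(cost_weights(y))   # [5. 1. 1. 5. 1.] -- pass as sample weights to a learner
```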

  46. Lift Charts • Lift charts help us see the improvement of the classifier over random classification. • The larger the area between the lift curve and the random-classification line, the better the model.

  47. How to construct a lift chart • Sort the samples by predicted probability, in descending order. • Each point of the chart is the cumulative sum of the actual class values so far. • For the random-classification line, calculate the average of the class values (yes = 1, no = 0); each point of the line is that average multiplied by the number of samples so far. • A sketch of this recipe appears below.
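A minimal Python sketch of this recipe (the probabilities and labels are hypothetical):

```python
import numpy as np

def lift_chart_points(probs, actual):
    """Cumulative-gains points following the recipe above.

    probs  = predicted probabilities of class "yes"
    actual = actual classes (yes = 1, no = 0)
    Returns (model_curve, random_line), one value per sample.
    """
    order = np.argsort(probs)[::-1]                 # sort by probability, descending
    model_curve = np.cumsum(np.asarray(actual)[order])
    avg = np.mean(actual)                           # average of the class values
    random_line = avg * np.arange(1, len(actual) + 1)
    return model_curve, random_line

probs  = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
actual = [1,   1,   0,   1,   0,   0]
print(lift_chart_points(probs, actual))
```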

  48. How to construct a lift chart

  49. ROC Curves • Stands for “receiver operating characteristic” • Used in signal detection to show the trade-off between the hit rate and the false alarm rate over a noisy channel • The y axis shows the percentage of true positives in the sample • The x axis shows the percentage of false positives in the sample
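A minimal Python sketch that computes these two axes by sweeping the decision threshold down through the sorted scores (the scores and labels are hypothetical):

```python
import numpy as np

def roc_points(probs, actual):
    """True/false positive rates as the decision threshold sweeps down.

    Sort by score; at each cutoff, everything scored at or above it
    is called positive.
    """
    order = np.argsort(probs)[::-1]
    y = np.asarray(actual)[order]
    tpr = np.cumsum(y) / y.sum()               # hit rate (y axis)
    fpr = np.cumsum(1 - y) / (1 - y).sum()     # false alarm rate (x axis)
    return fpr, tpr

probs  = [0.9, 0.8, 0.7, 0.4, 0.3, 0.1]
actual = [1,   1,   0,   1,   0,   0]
print(roc_points(probs, actual))
```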

  50. ROC Curves
