
Chapter 2




Presentation Transcript


  1. Chapter 2 Simple Linear Regression 1

  2. Regression and Model Building Regression analysis is a statistical technique for investigating and modeling the relationship between variables. Equation of a straight line (classical): y = mx + b; we usually write this as y = β0 + β1x 2

  3. Regression and Model Building Not all observations will fall exactly on a straight line. y = β0 + β1x + ε, where ε represents error: it is a random variable that accounts for the failure of the model to fit the data exactly. ε is a random error. 3

  4. 2.1 Simple Linear Regression Model Single regressor, x; response, y. β0 – intercept: if x = 0 is in the range of the data, then β0 is the mean of the distribution of the response y when x = 0; if x = 0 is not in the range, then β0 has no practical interpretation. β1 – slope: the change in the mean of the distribution of the response produced by a unit change in x. We usually assume ε ~ N(0, σ²). Population regression model: y = β0 + β1x + ε 4

  5. 2.1 Simple Linear Regression Model The response, y, is a random variable. There is a probability distribution for y at each value of x. Mean: E(y|x) = β0 + β1x. Variance: Var(y|x) = σ² 5

  6. 2.2 Least-Squares Estimation of the Parameters β0 and β1 are unknown and must be estimated. Least-squares estimation seeks to minimize the sum of squares of the differences between the observed response, yi, and the straight line. Sample regression model: yi = β0 + β1xi + εi, i = 1, …, n 6

  7. 2.2 Least-Squares Estimation of the Parameters Let β̂0 and β̂1 represent the least-squares estimators of β0 and β1, respectively. These estimators must satisfy ∂S/∂β0 = 0 and ∂S/∂β1 = 0, where S(β0, β1) = Σ(yi − β0 − β1xi)² is the least-squares criterion. 7

  8. 2.2 Least-Squares Estimation of the Parameters Solving the normal equations yields the ordinary least-squares estimators: β̂1 = Sxy / Sxx and β̂0 = ȳ − β̂1x̄ 8

  9. 2.2 Least-Squares Estimation of the Parameters Simplifying yields the least-squares normal equations: n β̂0 + β̂1 Σxi = Σyi and β̂0 Σxi + β̂1 Σxi² = Σxiyi 9

  10. 2.2 Least-Squares Estimation of the Parameters The fitted simple linear regression model: ŷ = β̂0 + β̂1x. Sum-of-squares notation: Sxx = Σ(xi − x̄)² and Sxy = Σ(xi − x̄)(yi − ȳ) 10

  11. 2.2 Least-Squares Estimation of the Parameters Solve the least-squares normal equations; then we can get the estimates of the two coefficients. 11

  12. Now let's calculate the least-squares estimators in R using the heights data manually first. • Then we can get the two estimators from the lm() function in R directly. • Compare the two results (see the sketch below).
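
To make the comparison concrete, here is a minimal R sketch. The classroom heights data are not reproduced in this transcript, so the code builds a small hypothetical stand-in data frame called heights with columns husband (regressor) and wife (response); those names and values are placeholders, not the file used in class.

# Hypothetical stand-in for the heights data (placeholder values only)
set.seed(1)
heights <- data.frame(husband = round(rnorm(20, mean = 68, sd = 3), 1))
heights$wife <- round(40 + 0.4 * heights$husband + rnorm(20, sd = 2), 1)

x <- heights$husband
y <- heights$wife

# Manual least-squares estimates via Sxx and Sxy
Sxx   <- sum((x - mean(x))^2)
Sxy   <- sum((x - mean(x)) * (y - mean(y)))
beta1 <- Sxy / Sxx                  # slope estimate
beta0 <- mean(y) - beta1 * mean(x)  # intercept estimate
c(beta0 = beta0, beta1 = beta1)

# The same estimates from lm(); the two sets of coefficients should agree
fit <- lm(wife ~ husband, data = heights)
coef(fit)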

  13. Example 2.1 – The Heights Data 13


  15. Example 2.1 – Heights Data Let's find the coefficients of the linear model using the least-squares estimators. Let's calculate Sxx and Sxy in R first (see the R code in the file "Chaper2_simple_lin.txt"). 15

  16. Example 2.1 – Heights Data So the least-squares regression line is ŷ = β̂0 + β̂1x, with the estimates obtained above. 16

  17. 2.2 Least-Squares Estimation of the Parameters Just because we can fit a linear model doesn't mean that we should. We need to consider the following seriously: How well does this equation fit the data? Is the model likely to be useful as a predictor? Are any of the basic assumptions (such as constant variance and uncorrelated errors) violated, and if so, how serious is this? 17

  18. 2.2 Least-Squares Estimation of the Parameters Residuals: ei = yi − ŷi, i = 1, …, n. Residuals will be used to determine the adequacy of the model. 18

  19. 2.2 Least-Squares Estimation of the Parameters Computer Output (R) will be shown in class. 19

  20. You may explore some of the properties of OLS estimation using the heights data. For example: • the sum of all the fitted values • the sum of the observed values • the sum of the errors (residuals) • We will learn more in the next lecture. (A sketch follows below.)
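
A short R check of these properties, reusing the hypothetical heights data frame from the sketch after slide 12 (so the numbers are illustrative only):

fit <- lm(wife ~ husband, data = heights)  # heights: hypothetical data frame defined earlier

sum(fitted(fit))     # sum of all the fitted values ...
sum(heights$wife)    # ... equals the sum of the observed values
sum(residuals(fit))  # sum of the residuals is (numerically) zero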

  21. 2.2.2 Properties of the Least-Squares Estimators and the Fitted Regression Model The ordinary least-squares (OLS) estimator of the slope is a linear combination of the observations, yi: β̂1 = Σ ci yi, where ci = (xi − x̄)/Sxx. Useful in showing expected value and variance properties. 21

  22. 2.2.2 Properties of the Least-Squares Estimators and the Fitted Regression Model The least-squares estimators are unbiased estimators of their respective parameters: E(β̂1) = β1 and E(β̂0) = β0. The variances are Var(β̂1) = σ²/Sxx and Var(β̂0) = σ²(1/n + x̄²/Sxx). The OLS estimators are Best Linear Unbiased Estimators (BLUE). 22

  23. 2.2.2 Properties of the Least-Squares Estimators and the Fitted Regression Model Useful properties of the least-squares fit: 1. The sum of the residuals is always zero: Σei = 0. 2. The sum of the observed values equals the sum of the fitted values: Σyi = Σŷi. 3. The least-squares regression line always passes through the centroid (x̄, ȳ) of the data. 23

  24. 2.2.3 Estimation of σ² Residual (error) sum of squares: SSRes = Σei² = Σ(yi − ŷi)² Linear Regression Analysis 5E Montgomery, Peck & Vining 24

  25. 2.2.3 Estimation of σ² Unbiased estimator of σ²: σ̂² = SSRes/(n − 2) = MSRes. The quantity n – 2 is the number of degrees of freedom for the residual sum of squares. 25
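
A quick R illustration of this estimator, again with the hypothetical heights data from the earlier sketches:

fit <- lm(wife ~ husband, data = heights)  # hypothetical heights data defined earlier
n   <- nrow(heights)

SS_Res     <- sum(residuals(fit)^2)  # residual sum of squares
sigma2_hat <- SS_Res / (n - 2)       # MS_Res, the unbiased estimator of sigma^2
sigma2_hat
summary(fit)$sigma^2                 # residual standard error from lm(), squared; should agree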

  26. 2.2.3 Estimation of σ² The estimate σ̂² depends on the residual sum of squares. Hence: Any violation of the assumptions on the model errors could damage the usefulness of this estimate. A misspecification of the model can damage the usefulness of this estimate. This estimate is model dependent. 26

  27. 2.3 Hypothesis Testing on the Slope and Intercept Three assumptions are needed to apply procedures such as hypothesis testing and confidence intervals. The model errors, εi, are normally distributed, are independently distributed, and have constant variance, i.e. εi ~ NID(0, σ²). 27

  28. 2.3.1 Use of t-tests Slope H0: β1 = β10, H1: β1 ≠ β10. Standard error of the slope: se(β̂1) = √(MSRes/Sxx). Test statistic: t0 = (β̂1 − β10)/se(β̂1). Reject H0 if |t0| > t(α/2, n−2). Can also use the P-value approach. 28

  29. Let's test the significance of linear regression using the heights data. First we can calculate the test statistic and the critical value manually and compare. Alternatively we can use the p-value approach. Finally we can compare the values we calculated manually with the ones returned by the lm() function (see the sketch below).
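
One way to carry out that comparison in R, sketched with the hypothetical heights data from the earlier blocks (alpha, t0 and the other names below are local choices, not part of the slides):

fit <- lm(wife ~ husband, data = heights)  # hypothetical heights data defined earlier
x <- heights$husband
n <- nrow(heights)

Sxx    <- sum((x - mean(x))^2)
MS_Res <- sum(residuals(fit)^2) / (n - 2)
beta1  <- coef(fit)["husband"]

# Manual test of H0: beta1 = 0
se_beta1 <- sqrt(MS_Res / Sxx)
t0       <- beta1 / se_beta1
alpha    <- 0.05
t_crit   <- qt(1 - alpha / 2, df = n - 2)  # critical value
p_value  <- 2 * pt(-abs(t0), df = n - 2)   # p-value approach
c(t0 = unname(t0), critical = t_crit, p_value = unname(p_value))

# Compare with the "husband" row of the lm() summary: same t statistic and p-value
summary(fit)$coefficients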

  30. t0: test statistic. p-value for testing β1 = 0.

  31. 2.3.2 Testing Significance of Regression H0: β1 = 0, H1: β1 ≠ 0. This tests the significance of regression; that is, whether there is a linear relationship between the response and the regressor. Failing to reject H0: β1 = 0 implies that there is no linear relationship between y and x. 31

  32. 2.3.1 Use of t-tests Intercept H0: β0 = β00, H1: β0 ≠ β00. Standard error of the intercept: se(β̂0) = √[MSRes(1/n + x̄²/Sxx)]. Test statistic: t0 = (β̂0 − β00)/se(β̂0). Reject H0 if |t0| > t(α/2, n−2). Can also use the P-value approach. 32

  33. Testing significance of regression 33

  34. 2.3.3 The Analysis of Variance Approach Partitioning of total variability: Σ(yi − ȳ)² = Σ(ŷi − ȳ)² + Σ(yi − ŷi)², i.e. SST = SSR + SSRes. It can be shown that SSR = β̂1Sxy. 34

  35. 2.3.3 The Analysis of Variance Degrees of freedom: SST has n − 1, SSR has 1, and SSRes has n − 2 degrees of freedom. Mean squares: MSR = SSR/1 and MSRes = SSRes/(n − 2). 35

  36. 2.3.3 The Analysis of Variance ANOVA procedure for testing H0: β1 = 0: F0 = MSR/MSRes. A large value of F0 indicates that regression is significant; specifically, reject H0 if F0 > F(α, 1, n−2). Can also use the P-value approach. 36

  37. Analysis of Variance Table for the heights data So the proportion of the total variability in the response variable (WIFE) explained by the regression model is R² = 4497.9/7792.4 ≈ 0.577, i.e. about 58%.
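
In R the same table and the R² calculation look as follows (a sketch with the hypothetical heights data from the earlier blocks; the sums of squares 4497.9 and 7792.4 come from the classroom data and will differ here):

fit <- lm(wife ~ husband, data = heights)  # hypothetical heights data defined earlier

anova(fit)  # ANOVA table: regression and residual SS, mean squares, F0, p-value

SS_R <- sum((fitted(fit) - mean(heights$wife))^2)   # regression sum of squares
SS_T <- sum((heights$wife - mean(heights$wife))^2)  # total sum of squares
SS_R / SS_T              # R^2, proportion of total variability explained
summary(fit)$r.squared   # same value reported by lm()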

  38. 2.3.3 The Analysis of Variance Relationship between t0 and F0: for H0: β1 = 0, it can be shown that t0² = F0. So for testing significance of regression, the t-test and the ANOVA procedure are equivalent (only true in simple linear regression). Linear Regression Analysis 5E Montgomery, Peck & Vining 38
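
This equivalence is easy to verify numerically in R (same hypothetical heights fit as before):

fit <- lm(wife ~ husband, data = heights)  # hypothetical heights data defined earlier

t0 <- summary(fit)$coefficients["husband", "t value"]
F0 <- anova(fit)["husband", "F value"]
c(t0_squared = t0^2, F0 = F0)  # the two values coincide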

  39. Confidence Interval for a true parameter Confidence interval = sample statistic ± margin of error. Margin of error = critical value × standard deviation of the statistic (if you know the standard deviation of the statistic); otherwise, margin of error = critical value × standard error of the statistic.

  40. 2.4 Interval Estimation in Simple Linear Regression 100(1 − α)% confidence interval for the slope: β̂1 ± t(α/2, n−2) se(β̂1). 100(1 − α)% confidence interval for the intercept: β̂0 ± t(α/2, n−2) se(β̂0). Linear Regression Analysis 5E Montgomery, Peck & Vining 40
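
A sketch of both intervals in R, computed manually and then with confint(), using the hypothetical heights data from the earlier blocks:

fit <- lm(wife ~ husband, data = heights)  # hypothetical heights data defined earlier
x <- heights$husband
n <- nrow(heights)

Sxx    <- sum((x - mean(x))^2)
MS_Res <- sum(residuals(fit)^2) / (n - 2)
alpha  <- 0.05
t_crit <- qt(1 - alpha / 2, df = n - 2)

se_beta1 <- sqrt(MS_Res / Sxx)                        # standard error of the slope
se_beta0 <- sqrt(MS_Res * (1 / n + mean(x)^2 / Sxx))  # standard error of the intercept

coef(fit)["husband"]     + c(-1, 1) * t_crit * se_beta1  # CI for the slope
coef(fit)["(Intercept)"] + c(-1, 1) * t_crit * se_beta0  # CI for the intercept

confint(fit, level = 0.95)  # the same intervals from R directly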


  42. 2.4 Interval Estimation in Simple Linear Regression 100(1 − α)% confidence interval for σ²: (n − 2)MSRes/χ²(α/2, n−2) ≤ σ² ≤ (n − 2)MSRes/χ²(1−α/2, n−2). Refer to “chapter_sample_lin.txt” for how to calculate a 95% confidence interval for σ². Linear Regression Analysis 5E Montgomery, Peck & Vining 42
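
A sketch of the corresponding computation in R for a 95% interval (hypothetical heights data again; this is not the code from the .txt file mentioned above):

fit <- lm(wife ~ husband, data = heights)  # hypothetical heights data defined earlier
n   <- nrow(heights)

SS_Res <- sum(residuals(fit)^2)  # equals (n - 2) * MS_Res
alpha  <- 0.05

lower <- SS_Res / qchisq(1 - alpha / 2, df = n - 2)  # divide by the upper alpha/2 chi-square point
upper <- SS_Res / qchisq(alpha / 2, df = n - 2)      # divide by the lower alpha/2 chi-square point
c(lower = lower, upper = upper)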

  43. 2.4.2 Interval Estimation of the Mean Response Let x0 be the level of the regressor variable at which we want to estimate the mean response, i.e. E(y|x0) = β0 + β1x0. Point estimator of E(y|x0) once the model is fit: ŷ0 = β̂0 + β̂1x0. In order to construct a confidence interval on the mean response, we need the variance of the point estimator. 43

  44. 2.4.2 Interval Estimation of the Mean Response The variance of ŷ0 is Var(ŷ0) = σ²[1/n + (x0 − x̄)²/Sxx]. 44

  45. 2.4.2 Interval Estimation of the Mean Response 100(1 − α)% confidence interval for E(y|x0): ŷ0 ± t(α/2, n−2) √(MSRes[1/n + (x0 − x̄)²/Sxx]). 45
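
In R this interval can be obtained directly with predict() (hypothetical heights data; x0 = 68 is an arbitrary value chosen for illustration):

fit <- lm(wife ~ husband, data = heights)  # hypothetical heights data defined earlier
x0  <- 68                                  # arbitrary regressor value for illustration

predict(fit, newdata = data.frame(husband = x0),
        interval = "confidence", level = 0.95)  # CI for the mean response E(y|x0)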

  46. See pages 34-35, text 46

  47. 2.5 Prediction of New Observations Suppose we wish to construct a prediction interval on a future observation, y0, corresponding to a particular level of x, say x0. The point estimate would be ŷ0 = β̂0 + β̂1x0. The confidence interval on the mean response at this point is not appropriate for this situation. Why? Linear Regression Analysis 5E Montgomery, Peck & Vining 47

  48. 2.5 Prediction of New Observations Let the random variable ψ be ψ = y0 − ŷ0. ψ is normally distributed with E(ψ) = 0 and Var(ψ) = σ²[1 + 1/n + (x0 − x̄)²/Sxx] (since the future observation y0 is independent of ŷ0). 48

  49. 2.5 Prediction of New Observations 100(1 − α)% prediction interval on a future observation, y0, at x0: ŷ0 ± t(α/2, n−2) √(MSRes[1 + 1/n + (x0 − x̄)²/Sxx]). Linear Regression Analysis 5E Montgomery, Peck & Vining 49
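
The corresponding prediction interval in R differs only in the interval argument (same hypothetical setup as before):

fit <- lm(wife ~ husband, data = heights)  # hypothetical heights data defined earlier
x0  <- 68                                  # arbitrary regressor value for illustration

predict(fit, newdata = data.frame(husband = x0),
        interval = "prediction", level = 0.95)  # prediction interval for a new observation y0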

