
Statistics for Business and Economics






Presentation Transcript


  1. Statistics for Business and Economics Dr. TANG Yu Department of Mathematics Soochow University May 28, 2007

  2. Types of Correlation • Positive correlation: slope is positive • Negative correlation: slope is negative • No correlation: slope is zero

  3. Hypothesis Test • For the simple linear regression model y = β0 + β1x + ε • If x and y are linearly related, we must have β1 ≠ 0 • We will use the sample data to test the following hypotheses about the parameter β1: H0: β1 = 0 versus Ha: β1 ≠ 0
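
To make the model and its parameters concrete, here is a minimal Python sketch (not part of the lecture) that simulates data from y = β0 + β1x + ε and computes the least squares estimates; the parameter values, sample size, and seed are arbitrary assumptions.

    import numpy as np

    # Hypothetical parameters (not from the lecture): y = beta0 + beta1*x + eps
    rng = np.random.default_rng(0)
    beta0, beta1, sigma, n = 2.0, 3.0, 1.5, 30

    x = rng.uniform(0, 10, size=n)
    eps = rng.normal(0, sigma, size=n)   # i.i.d. N(0, sigma^2) errors
    y = beta0 + beta1 * x + eps

    # Least squares estimates b1 (slope) and b0 (intercept)
    b1, b0 = np.polyfit(x, y, 1)
    print(f"b0 = {b0:.3f}, b1 = {b1:.3f}")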

  4. Sampling Distribution • Just as the sampling distribution of the sample mean, X-bar, depends on the mean, standard deviation and shape of the X population, the sampling distributions of the β0-hat and β1-hat least squares estimators depend on the properties of the {Yj} sub-populations (j = 1,…, n). • Given xj, the properties of the {Yj} sub-population are determined by the εj error/random variable.

  5. Model Assumption • As regards the probability distributions of εj (j = 1,…, n), it is assumed that: • Each εj is normally distributed, so Yj is also normal; • Each εj has zero mean, so E(Yj) = β0 + β1xj; • Each εj has the same variance σε², so Var(Yj) = σε² is also constant; • The errors are independent of each other, so {Yi} and {Yj}, i ≠ j, are also independent; • The error does not depend on the independent variable(s), so the effects of X and ε on Y can be separated from each other.

  6. Graph • (Figure: the regression line with normal curves drawn at xi and xj) • Yi ~ N(β0 + β1xi; σ), Yj ~ N(β0 + β1xj; σ) • The y distributions have the same shape at each x value

  7. Sum of Squares • Sum of squares due to error: SSE = Σ(yi − ŷi)² • Sum of squares due to regression: SSR = Σ(ŷi − ȳ)² • Total sum of squares: SST = Σ(yi − ȳ)², with SST = SSR + SSE (ŷi is the predicted value and ȳ the sample mean of y)
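
A short Python sketch of this decomposition; the x and y arrays below are made-up values for illustration, not the data set used in the slides' example.

    import numpy as np

    # Hypothetical data (illustration only, not the lecture's example)
    x = np.array([2., 3., 5., 7., 9., 10., 12.])
    y = np.array([14., 13., 11., 10., 8., 7., 5.])

    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x

    sse = np.sum((y - y_hat) ** 2)          # sum of squares due to error
    ssr = np.sum((y_hat - y.mean()) ** 2)   # sum of squares due to regression
    sst = np.sum((y - y.mean()) ** 2)       # total sum of squares

    print(sse, ssr, sst, np.isclose(sst, ssr + sse))   # SST = SSR + SSE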

  8. ANOVA Table • Regression: sum of squares SSR, degrees of freedom 1, mean square MSR = SSR/1, F = MSR/MSE • Error: sum of squares SSE, degrees of freedom n − 2, mean square MSE = SSE/(n − 2) • Total: sum of squares SST, degrees of freedom n − 1

  9. Example (data table for the worked example, with a Total row)

  10. SSE

  11. SST and SSR

  12. ANOVA Table • Since F = 35.93 > 6.61, where 6.61 is the critical value of the F-distribution with 1 and 5 degrees of freedom (at the .05 significance level), we reject H0 and conclude that the relationship between x and y is significant
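
The same F test can be sketched in Python as below; the data are the same made-up seven observations as before (chosen only so that the degrees of freedom 1 and 5 match), and scipy supplies the critical value.

    import numpy as np
    from scipy import stats

    # Hypothetical data with n = 7, so df = (1, n - 2) = (1, 5)
    x = np.array([2., 3., 5., 7., 9., 10., 12.])
    y = np.array([14., 13., 11., 10., 8., 7., 5.])
    n = len(x)

    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x
    sse = np.sum((y - y_hat) ** 2)
    ssr = np.sum((y_hat - y.mean()) ** 2)

    msr = ssr / 1          # mean square due to regression
    mse = sse / (n - 2)    # mean square due to error
    F = msr / mse

    f_crit = stats.f.ppf(0.95, 1, n - 2)   # about 6.61 for df (1, 5)
    print(F, f_crit, F > f_crit)           # reject H0 if F > critical value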

  13. Hypothesis Test • For the simple linear regression model y = β0 + β1x + ε • If x and y are linearly related, we must have β1 ≠ 0 • We will use the sample data to test the following hypotheses about the parameter β1: H0: β1 = 0 versus Ha: β1 ≠ 0

  14. Standard Errors • Standard error of estimate: the sample standard deviation of ε, sε = √(SSE/(n − 2)) • Replacing σε with its estimate sε, the estimated standard error of β1-hat is s(β1-hat) = sε / √(Σ(xi − x̄)²)

  15. t-test • Hypothesis: H0: β1 = 0, Ha: β1 ≠ 0 • Test statistic: t = β1-hat / s(β1-hat), where t follows a t-distribution with n − 2 degrees of freedom
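
A sketch of the t statistic in Python, using the same made-up data; sε and s(β1-hat) are computed exactly as defined on the previous slide.

    import numpy as np
    from scipy import stats

    # Hypothetical data (illustration only)
    x = np.array([2., 3., 5., 7., 9., 10., 12.])
    y = np.array([14., 13., 11., 10., 8., 7., 5.])
    n = len(x)

    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x

    s_eps = np.sqrt(np.sum((y - y_hat) ** 2) / (n - 2))   # standard error of estimate
    s_b1 = s_eps / np.sqrt(np.sum((x - x.mean()) ** 2))   # estimated std error of b1

    t = b1 / s_b1                        # test statistic for H0: beta1 = 0
    t_crit = stats.t.ppf(0.975, n - 2)   # about 2.571 for df = 5
    print(t, t_crit, abs(t) > t_crit)    # two-tailed rejection check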

  16. Reject Rule • This is a two-tailed test • Hypothesis: H0: β1 = 0, Ha: β1 ≠ 0 • Reject H0 if t < −tα/2 or t > tα/2, where tα/2 is based on a t-distribution with n − 2 degrees of freedom

  17. Example (data table for the worked example, with a Total row)

  18. SSE

  19. Calculation • The computed |t| exceeds 2.571, where 2.571 is the critical value of the t-distribution with 5 degrees of freedom (.025 in each tail), so we reject H0 and conclude that the relationship between x and y is significant

  20. Confidence Interval • β1-hat is an estimator of β1, and (β1-hat − β1) / s(β1-hat) follows a t-distribution with n − 2 degrees of freedom • The estimated standard error of β1-hat is s(β1-hat) = sε / √(Σ(xi − x̄)²) • So the C% confidence interval estimator of β1 is β1-hat ± tα/2 s(β1-hat), where tα/2 is based on n − 2 degrees of freedom
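
A Python sketch of this interval on the same made-up data; the 95% level is an assumption for the example.

    import numpy as np
    from scipy import stats

    # Hypothetical data (illustration only)
    x = np.array([2., 3., 5., 7., 9., 10., 12.])
    y = np.array([14., 13., 11., 10., 8., 7., 5.])
    n = len(x)

    b1, b0 = np.polyfit(x, y, 1)
    y_hat = b0 + b1 * x
    s_eps = np.sqrt(np.sum((y - y_hat) ** 2) / (n - 2))
    s_b1 = s_eps / np.sqrt(np.sum((x - x.mean()) ** 2))

    t_crit = stats.t.ppf(0.975, n - 2)   # 95% confidence level
    lower, upper = b1 - t_crit * s_b1, b1 + t_crit * s_b1
    print(f"95% CI for beta1: ({lower:.3f}, {upper:.3f})")   # check whether it contains 0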

  21. Example • The 95% confidence interval estimate of β1 in the previous example runs from −12.87 to −5.15, which does not contain 0

  22. Regression Equation • It is believed that the longer one studies, the better one's grade is. The regression of the final mark (Y) on study time (X) is supposed to follow the equation y = β0 + β1x + ε • If the fit of the sample regression equation is satisfactory, it can be used to estimate the mean value of the dependent variable or to predict an individual value of the dependent variable.

  23. Estimate and Predict • Estimate: for the expected value of a Y sub-population. E.g., what is the mean final mark of all those students who spent 30 hours studying? I.e., given x = 30, how large is E(y)? • Predict: for a particular element of a Y sub-population. E.g., what is the final mark of Tom, who spent 30 hours studying? I.e., given x = 30, how large is y?

  24. What Is the Same? For a given X value, the point forecast (prediction) of Y and the point estimator of the mean of the {Y} sub-population are the same: Ex.1 Estimate the mean final mark of students who spent 30 hours studying. Ex.2 Predict the final mark of Tom, when his study time is 30 hours.

  25. What Is the Difference? The interval prediction of Y and the interval estimation of the mean of the {Y} sub-population are different: • The prediction interval: ŷ ± tα/2 sε √(1 + 1/n + (xg − x̄)² / Σ(xi − x̄)²) • The estimation (confidence) interval: ŷ ± tα/2 sε √(1/n + (xg − x̄)² / Σ(xi − x̄)²) • The prediction interval is wider than the confidence interval
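
The two intervals can be sketched in Python as below; the data and the chosen xg = 6 are arbitrary assumptions, used only to show that the prediction interval's extra "1 +" term makes it wider.

    import numpy as np
    from scipy import stats

    # Hypothetical data and xg (illustration only)
    x = np.array([2., 3., 5., 7., 9., 10., 12.])
    y = np.array([14., 13., 11., 10., 8., 7., 5.])
    n = len(x)
    xg = 6.0

    b1, b0 = np.polyfit(x, y, 1)
    y_hat_g = b0 + b1 * xg               # same point estimate for both intervals
    s_eps = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))
    t_crit = stats.t.ppf(0.975, n - 2)
    sxx = np.sum((x - x.mean()) ** 2)

    # Confidence interval for the mean E(y) at xg
    half_ci = t_crit * s_eps * np.sqrt(1 / n + (xg - x.mean()) ** 2 / sxx)
    # Prediction interval for an individual y at xg (the extra "1 +" widens it)
    half_pi = t_crit * s_eps * np.sqrt(1 + 1 / n + (xg - x.mean()) ** 2 / sxx)

    print(y_hat_g, half_ci, half_pi)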

  26. Example (data table for the worked example, with a Total row)

  27. SSE

  28. Estimation and Prediction • For a given x value, the point forecast (prediction) of Y and the point estimator of the mean of the {Y} sub-population are the same

  29. Estimation and Prediction • But for the same x value, the interval estimate and the prediction interval are different

  30. Data Needed • Both the prediction interval and the estimation interval require ŷ, sε, n, x̄, Σ(xi − x̄)², and the critical value tα/2 with n − 2 degrees of freedom

  31. Calculation (the estimation and prediction intervals computed for the example)

  32. Moving Rule • (Figure: the confidence interval shown at three values of xg) • As xg moves away from x̄, the interval becomes longer. That is, the shortest interval is found at xg = x̄.

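
The moving rule can be checked numerically; the following sketch (same made-up data) prints the half-width of the confidence interval for E(y) at a few xg values and shows it is smallest at xg = x̄.

    import numpy as np
    from scipy import stats

    # Hypothetical data (illustration only)
    x = np.array([2., 3., 5., 7., 9., 10., 12.])
    y = np.array([14., 13., 11., 10., 8., 7., 5.])
    n = len(x)

    b1, b0 = np.polyfit(x, y, 1)
    s_eps = np.sqrt(np.sum((y - (b0 + b1 * x)) ** 2) / (n - 2))
    t_crit = stats.t.ppf(0.975, n - 2)
    sxx = np.sum((x - x.mean()) ** 2)

    # Half-width of the confidence interval for E(y): smallest at xg = x-bar,
    # growing as xg moves away from x-bar.
    for xg in (x.mean(), x.mean() + 2, x.mean() + 4):
        half = t_crit * s_eps * np.sqrt(1 / n + (xg - x.mean()) ** 2 / sxx)
        print(f"xg = {xg:.2f}, half-width = {half:.3f}")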

  34. Interval Estimation and Prediction (the estimation and prediction intervals for the example, shown side by side)

  35. Residual Analysis • Regression residual: the difference between an observed y value and its corresponding predicted value, ei = yi − ŷi • Properties of regression residuals: • The mean of the residuals equals zero • The standard deviation of the residuals is equal to the standard deviation of the fitted regression model
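
A minimal sketch of computing the residuals and checking the zero-mean property, again on the made-up data used above.

    import numpy as np

    # Hypothetical data (illustration only)
    x = np.array([2., 3., 5., 7., 9., 10., 12.])
    y = np.array([14., 13., 11., 10., 8., 7., 5.])

    b1, b0 = np.polyfit(x, y, 1)
    residuals = y - (b0 + b1 * x)    # observed minus predicted

    print(residuals)
    print(np.isclose(residuals.mean(), 0.0))   # least squares residuals average to zero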

  36. Example

  37. Residual Plot Against x

  38. Residual Plot Against y-hat

  39. Three Situations • Good pattern • Non-constant variance • Model form not adequate

  40. Standardized Residual • Standard deviation of the ith residual: s(yi − ŷi) = sε √(1 − hi), where hi = 1/n + (xi − x̄)² / Σ(xj − x̄)² • Standardized residual for observation i: (yi − ŷi) / s(yi − ŷi)
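
A Python sketch of these formulas on the made-up data; hi is the leverage defined above.

    import numpy as np

    # Hypothetical data (illustration only)
    x = np.array([2., 3., 5., 7., 9., 10., 12.])
    y = np.array([14., 13., 11., 10., 8., 7., 5.])
    n = len(x)

    b1, b0 = np.polyfit(x, y, 1)
    residuals = y - (b0 + b1 * x)
    s_eps = np.sqrt(np.sum(residuals ** 2) / (n - 2))

    h = 1 / n + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)   # leverage h_i
    s_resid = s_eps * np.sqrt(1 - h)          # std deviation of the i-th residual
    standardized = residuals / s_resid

    print(standardized)    # roughly 95% should fall between -2 and +2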

  41. Standardized Residual Plot

  42. Standardized Residual • The standardized residual plot can provide insight about the assumption that the error term has a normal distribution • If the assumption is satisfied, the distribution of the standardized residuals should appear to come from a standard normal probability distribution • We expect to see approximately 95% of the standardized residuals between −2 and +2

  43. Detecting Outliers (residual plot with an outlier highlighted)

  44. Influential Observation (scatter plot with an outlier highlighted)

  45. Influential Observation (scatter plot with an influential observation highlighted)

  46. High Leverage Points • Leverage of observation i: hi = 1/n + (xi − x̄)² / Σ(xj − x̄)² • For example, an observation whose x value lies far from x̄ has high leverage
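
A small Python sketch of the leverage formula; the x values are made up, with the last one placed far from the others so that its leverage stands out, and the 2(k + 1)/n cutoff used below is one common rule of thumb, not necessarily the one on the slide.

    import numpy as np

    # Hypothetical x values; the last one is far from the rest, so it has high leverage
    x = np.array([2., 3., 5., 7., 9., 10., 25.])
    n = len(x)

    h = 1 / n + (x - x.mean()) ** 2 / np.sum((x - x.mean()) ** 2)
    print(h)

    # Flag high-leverage points with the rule of thumb h_i > 2(k + 1)/n, k = 1 predictor
    print(h > 2 * (1 + 1) / n)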

  47. Contact Information • Tang Yu (唐煜) • ytang@suda.edu.cn • http://math.suda.edu.cn/homepage/tangy
