4.2 One Sided Tests
-Before we construct a rule for rejecting H0, we need to pick an ALTERNATIVE HYPOTHESIS
-an example of a ONE SIDED ALTERNATIVE would be: H1: Bj > 0
-Which technically expands the null hypothesis to H0: Bj ≤ 0
-Which means we don’t care about negative values of Bj
-This can be due to introspection or economic theory
4.2 One Sided Tests
-If we pick an α (level of significance) of 5%, we are willing to reject H0 when it is true 5% of the time
-in order to reject H0, we need a “sufficiently large” positive t value
-a one sided test with α=0.05 would leave 5% in the right tail of a t distribution with n-k-1 degrees of freedom
-our rejection rule becomes: reject H0 if t > t*
-where t* is our CRITICAL VALUE
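The slides take t* from a t table (or a package such as Shazam); as a minimal illustrative sketch, not part of the original slides, the same one-sided critical value can be obtained in Python with scipy:

```python
# Illustrative only: one-sided critical value t* for significance level
# alpha, leaving alpha in the right tail of a t distribution with
# n - k - 1 degrees of freedom.
from scipy import stats

alpha = 0.05   # chosen level of significance
df = 40        # n - k - 1 (example value)

t_star = stats.t.ppf(1 - alpha, df)
print(f"t* = {t_star:.3f}")   # reject H0 (Bj <= 0) if the test t exceeds t*
```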
4.2 One Sided Example
-Take the following regression, where we are interested in testing whether Pepsi consumption has a positive effect on coolness:
-We therefore have the following hypotheses: H0: BPepsi = 0 against H1: BPepsi > 0
4.2 One Sided Example
-We then construct our test statistic: t = BPepsihat/se(BPepsihat)
-With degrees of freedom = 43-3 = 40 and a 1% significance level, from a t table we find that our critical t is t* = 2.423
-Since our test statistic does not exceed t*, we do not reject H0 at the 1% level of significance; we find no evidence that Pepsi has a positive effect on coolness at the 1% significance level in our study
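As a quick check of the table value quoted above (illustrative, not from the original slides), the same critical value comes out of scipy:

```python
# Reproduce the critical value for a 1% one-sided test with df = 40.
from scipy import stats

print(round(stats.t.ppf(1 - 0.01, 40), 3))   # ~2.423, matching the t table
```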
4.2 One Sided Tests
-From looking at a t table, we see that as the significance level falls, t* increases
-We therefore need a bigger test t statistic in order to reject H0 (the hypothesis that a variable is not significant)
-as degrees of freedom increase, the t distribution approximates the normal distribution
-after df=120, one can in practice use standard normal critical values
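To see this numerically, here is a small illustrative comparison (Python/scipy, not from the original slides) of one-sided 5% critical values against the standard normal value:

```python
# Illustrative only: t critical values approach the standard normal value
# as the degrees of freedom grow, which is why normal critical values are
# a reasonable stand-in once df is large (roughly df > 120).
from scipy import stats

alpha = 0.05
for df in (10, 40, 120, 1000):
    print(f"df={df:4d}: t* = {stats.t.ppf(1 - alpha, df):.3f}")
print(f"normal:    z* = {stats.norm.ppf(1 - alpha):.3f}")
```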
4.2 One Sided Tests
-The other one-sided test we can conduct is: H1: Bj < 0
-Which technically expands the null hypothesis to H0: Bj ≥ 0
-Here we don’t care about positive values of Bj
-We now reject H0 if: t < -t*
4.2 Two Sided Tests
-It is important to decide the nature of our one-sided test BEFORE running our regression
-It would be improper to base our alternative on whether Bjhat is positive or negative
-A way to avoid this, and a more general test, is a two-tailed (or two-sided) test
-Two-sided tests work well when a variable’s sign isn’t determined by theory or common sense
-Our alternative hypothesis now becomes: H1: Bj ≠ 0
4.2 Two Sided Tests
-For a two-sided test, we reject H0 if: |t| > t*
-In finding our t*, since we now have two rejection regions, α/2 falls in each tail
-For example, if α=0.05, we will have 2.5% in each tail
-When we reject H0, we say that “xj is statistically significant at the ()% level”
-When we do not reject H0, we say that “xj is statistically insignificant at the ()% level”
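A minimal sketch of the two-sided rule (Python/scipy, illustrative and not part of the original slides): put α/2 in each tail when finding t*, then compare |t| against it.

```python
# Illustrative only: two-sided test with alpha/2 in each tail.
from scipy import stats

alpha, df = 0.05, 40
t_star = stats.t.ppf(1 - alpha / 2, df)   # 2.5% in each tail; ~2.021 here
# with alpha = 0.01 the same call gives ~2.704, the value used in the
# Pepsi example on the following slides
t_stat = 1.9                              # hypothetical test statistic
print(f"t* = {t_star:.3f}, reject H0: {abs(t_stat) > t_star}")
```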
4.2 Two Sided Example
-Going back to our Pepsi example, we instead ask if Pepsi has ANY effect (positive or negative) on coolness:
-We therefore have the following hypotheses: H0: BPepsi = 0 against H1: BPepsi ≠ 0
4.2 Two Sided Example
-We then construct the same test statistic as before: t = BPepsihat/se(BPepsihat)
-With degrees of freedom = 43-3 = 40 and a 1% significance level, from a t table we find that our critical t is t* = 2.704 (bigger than before)
-Since |t| exceeds t*, we reject H0 at the 1% level of significance; Pepsi has a statistically significant effect on coolness at the 1% significance level in our study
4.2 Other Simple Tests
-We sometimes want to test whether Bj is equal to a certain number aj, such as: H0: Bj = aj
-Which makes the alternative hypothesis: H1: Bj ≠ aj
-Which changes our test t statistic to (t* is found the same way from tables): t = (Bjhat - aj)/se(Bjhat)
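A small worked sketch of this test in Python/scipy (illustrative; the coefficient, standard error, and hypothesized value below are made-up numbers, not taken from the slides):

```python
# Illustrative only: test H0: Bj = a against H1: Bj != a.
from scipy import stats

b_hat, se_b, a = 0.83, 0.11, 1.0   # hypothetical estimate, std. error, H0 value
df, alpha = 40, 0.01

t = (b_hat - a) / se_b                     # test statistic for H0: Bj = a
t_star = stats.t.ppf(1 - alpha / 2, df)    # two-sided critical value
print(f"t = {t:.2f}, t* = {t_star:.3f}, reject H0: {abs(t) > t_star}")
```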
4.2 Another Pepsi Example
-Foolishly, we forget that coolness is a log-log model (see GH 2009), making each slope parameter a partial elasticity:
-Wanting to see if Pepsi has a unit partial elasticity, we have the following hypotheses: H0: BPepsi = 1 against H1: BPepsi ≠ 1
4.2 Two Sided Example
-We then construct our new test statistic: t = (BPepsihat - 1)/se(BPepsihat)
-With degrees of freedom = 43-3 = 40 and a 1% significance level, from a t table we find that our critical t is t* = 2.704 (the same as the two-tailed test above)
-Since |t| does not exceed t*, we do not reject H0 at the 1% level of significance; Pepsi may have a unit partial elasticity at the 1% significance level
4.2 p-values
-So far we have taken a CLASSICAL approach to hypothesis tests
-choosing an α ahead of time can obscure our results
-if a variable is insignificant at 1% but significant at 5%, it may still be quite significant!
-we can instead ask: “given the observed value of the t statistic, what is the SMALLEST significance level at which the null hypothesis would be rejected?” This level is known as the P-VALUE
4.2 p-values
-P-VALUES relate to probabilities and are therefore always between zero and one
-regression packages (such as Shazam) usually report p-values for the null hypothesis Bj=0
-testing commands can give other p-values of the form: p = P(|T| > |t|)
-ie: p-values are the areas in the tails beyond the observed test statistic
4.2 p-values
-a small p-value argues for rejecting the null hypothesis
-a large p-value argues for not rejecting the null hypothesis
-once a level of significance (α) has been chosen, reject H0 if: p < α
-regression packages generally list the p-value for a two-tailed test
-for a one-tailed test, simply use p/2
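A minimal sketch of the p-value calculation (Python/scipy, illustrative; the t statistic and degrees of freedom are made-up numbers, not from the slides):

```python
# Illustrative only: the two-sided p-value is the area in both tails beyond
# |t|; halve it for a one-tailed test, and reject H0 when p < alpha.
from scipy import stats

t_stat, df, alpha = 2.1, 40, 0.05             # hypothetical numbers
p_two_sided = 2 * stats.t.sf(abs(t_stat), df)
p_one_sided = p_two_sided / 2
print(f"two-sided p = {p_two_sided:.3f}, reject H0: {p_two_sided < alpha}")
```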
4.2 Statistical Mumbo-Jumbo
-If we reject H0, we can state that “H0 is rejected at the ()% level of significance”
-If we do not reject H0, we CANNOT say that “H0 is accepted at the ()% level of significance”
-while a null hypothesis of H0: Bj=2 may not be rejected, a similar H0: Bj=2.2 may also not be rejected
-Bj cannot equal both 2 and 2.2
-we can conclude that a certain value ISN’T valid, but we cannot settle on ONE valid value
4.2 Economic and Statistical Significance
-STATISTICAL significance depends on the value of t
-ECONOMIC significance depends upon the size of Bj
-since we know that t depends on both the size and the standard error of Bj: t = Bjhat/se(Bjhat)
-a coefficient may test significant due to a very small se(Bj); a STATISTICALLY significant coefficient may be too small to be ECONOMICALLY significant
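A made-up numerical sketch of this point (Python/scipy, not from the original slides): a tiny coefficient with an even tinier standard error is statistically significant yet economically trivial.

```python
# Illustrative only: statistical significance without economic significance.
from scipy import stats

b_hat, se_b, df = 0.0004, 0.0001, 500   # hypothetical estimate and std. error
t = b_hat / se_b                         # t = 4.0: easily "significant"
p = 2 * stats.t.sf(abs(t), df)
print(f"t = {t:.1f}, p = {p:.5f}; yet the estimated effect is only {b_hat}")
```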
4.2 Insignificant Example
-Theoretically, World Peace (WP) can only be achieved if House (H) episodes resume and people eat more chicken (C): WP = B1 + B2H + B3C + e
-although both House and Chicken would test as significant variables (their standard errors are very small compared to their estimates), B3 is so small that chicken has a very small impact
-you’d have to eat so much chicken to cause world peace that it’s ECONOMICALLY insignificant
4.2 Significance and Large Samples
-As sample size increases, standard errors tend to decrease
-coefficients therefore tend to be more statistically significant in large samples
-some researchers argue for smaller significance levels in large samples and larger significance levels in small samples
-this can often be due to an agenda
-in large samples, it is important to examine the MAGNITUDE of any statistically significant variables
4.2 Multicollinearity Strikes Back
-Recall that large standard errors can also be caused by multicollinearity
-This can cause small t stats and insignificance
-This can be fought by:
• Collecting more data
• Dropping or combining (preferred) independent variables
4.2 3 Easy (honest) steps for tests
When testing, follow these 3 easy steps:
• If a variable is significant, examine its coefficient’s magnitude and explain its impact (this may be complicated if the model is not linear)
• If a variable is insignificant at usual levels, check its p-value to see if some case for significance can be made
• If a variable has the “wrong” sign, ask why – are there omitted variables or other issues?