
More about tests


Presentation Transcript


  1. Statistics: More about tests

  2. Statistically significant – When the P-value falls below the alpha level, we say that the test is “statistically significant” at the alpha level. Alpha level – The threshold P-value that determines when we reject a null hypothesis. If we observe a statistic whose P-value based on the null hypothesis is less than α, we reject that null hypothesis. Terms

  3. Significance level – The alpha level is also called the significance level, most often in a phrase such as a conclusion that a particular test is “significant at the 5% significance level.” Critical value – The value in the sampling distribution model of the statistic whose P-value is equal to the alpha level. Any statistic value farther from the null hypothesis value than the critical value will have a smaller P-value than α and will lead to rejecting the null hypothesis. The critical value is often denoted with an asterisk, as z*, for example. Terms

  4. Type I error – The error of rejecting a null hypothesis when in fact it is true (also called a “false positive”). The probability of a Type I error is α. Type II error – The error of failing to reject a null hypothesis when in fact it is false (also called a “false negative”). The probability of a Type II error is commonly denoted β and depends on the effect size. Terms

  5. Power – The probability that a hypothesis test will correctly reject a false null hypothesis is the power of the test. To find power, we must specify a particular alternative parameter value as the “true” value. For any specific value in the alternative, the power is 1 − β. Effect size – The difference between the null hypothesis value and the true value of a model parameter is called the effect size. Terms
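
Putting these terms together in symbols: α = P(Type I error) = P(reject H0 | H0 is true); β = P(Type II error) = P(fail to reject H0 | H0 is false); power = 1 − β, evaluated at a specified alternative value of the parameter.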

  6. Null hypotheses have special requirements. The null must be a statement about the value of a parameter for a model. This value is used to compute the probability that the observed sample statistic – or something even farther from the null value – would occur. Zero in on the null

  7. The null hypothesis arises directly from the context of the problem. It is not dictated by the data, but instead by the situation. One good way to identify both the null and alternative hypotheses is to think about the Why of the situation. Choosing an appropriate Null

  8. To write a null hypothesis, you can’t just choose any value you like. The null must relate to the question at hand. Even though the null usually means no difference or no change, you can’t automatically interpret “null” to mean zero. Choosing an appropriate Null

  9. Example – Fourth-graders in Elmwood School perform the same in math as fourth-graders in Lancaster School. Stating null and alternative hypotheses

  10. Example– Fourth-graders in Elmwood School perform better in math than fourth-graders in Lancaster School. Stating Null and Alternative Hypotheses
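
In symbols, writing μE and μL for the mean math performance of fourth-graders at Elmwood and Lancaster (letters chosen here just for illustration), the two examples above correspond to H0: μE = μL and Ha: μE > μL.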

  11. Suppose you want to test the theory that sunlight helps prevent depression. One hypothesis derived from this theory might be that hospital admission rates for depression in sunny regions of the country are lower than the national average. Suppose that you know the national annual admission rate for depression to be 17 per 10,000. You intend to take the mean of a sample of admission rates from hospitals in sunny parts of the country and compare it to the national average. Stating null and alternative hypotheses Example

  12. Your research hypothesis is: • The mean annual admission rate for depression from the hospitals in sunny areas is less than 17 per 10,000. • The null hypothesis is: • The mean annual admission rate for depression from the hospitals in sunny areas is equal to or greater than 17 per 10,000. Stating null and alternative hypotheses Example

  13. You know that the sample mean must be lower than 17 per 10,000 in order to reject the null hypothesis, but how much lower? You decide on a significance level of 5% (a confidence level of 95%). In other words, if the probability of obtaining a sample mean this low or lower from hospitals chosen at random from the national population is less than 5%, you will reject the null hypothesis and conclude that there is evidence to support the hypothesis that exposure to the sun reduces the incidence of depression. Stating null and alternative hypotheses example

  14. Next, look up the critical z-score– the z-score that corresponds to your chosen level of probability– in the standard normal table. It is important to remember what end of the scale you are looking at. Because a computed test statistic in the lower end of the distribution will allow you to reject your null hypothesis, you look up the z-score for the probability (or area) of .05 and find that it is -1.65. Example continued

  15. The z-score defines the boundary of the zones of rejection and acceptance. Example continued. [Figure: Normal curve with the region of rejection below z = -1.65 and the region of acceptance above it; the observed z of -1.20, corresponding to a rate of 13 per 10,000, falls in the region of acceptance, and the null value of 17 per 10,000 sits at z = 0.]

  16. Suppose the mean admission rate for the sample hospitals in sunny regions is 13 per 10,000, and suppose also that the corresponding z-score for that mean is -1.20. The test statistic falls in the region of acceptance, so you cannot reject the null hypothesis; you cannot conclude that the mean admission rate in sunny parts of the country is significantly lower than the national average. There is a greater than 5% chance of obtaining a mean admission rate of 13 per 10,000 or lower from a sample of hospitals chosen at random from the national population, so you cannot rule out the possibility that your sample mean came from that population. Example continued
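
A minimal sketch of this decision in Python, assuming the scipy library is available. The observed z-score of -1.20 is taken from the slide rather than recomputed, since the transcript does not give the standard error of the sample mean:

    from scipy.stats import norm

    alpha = 0.05     # significance level chosen earlier
    z_obs = -1.20    # z-score for the sample mean of 13 per 10,000 (from the slide)

    z_crit = norm.ppf(alpha)    # about -1.645, the boundary of the region of rejection
    p_value = norm.cdf(z_obs)   # lower-tail probability, about 0.115

    # Reject H0 only if the P-value falls below alpha (equivalently, if z_obs < z_crit)
    print(z_crit, p_value, p_value < alpha)   # -1.645..., 0.115..., False -> fail to reject H0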

  17. A P-value is a conditional probability. It tells us the probability of getting results at least as unusual as the observed statistic, given that the null hypothesis is true. The P-value is not the probability that the null hypothesis is true. The P-value is not the conditional probability that the null hypothesis is true given the data. How to think about P-values

  18. How do you know how much confidence to put in the outcome of a hypothesis test? The statistician’s criterion is the statistical significance of the test, or the likelihood of obtaining a given result by chance. This is called the alpha level. Common alpha levels are 0.10, 0.05, and 0.01. The smaller the alpha level, the more stringent the test and the greater the likelihood that the conclusion is correct. Statistically significant

  19. The following statements are all equivalent. The finding is significant at the .05 level. The confidence level is 95%. The Type I error rate is .05. The alpha level is .05. There is a 95% certainty that the result is not due to chance. Statistically significant

  20. There is a 1 in 20 chance of obtaining this result. The area of the region of rejection is .05. The P-value is .05 (P = .05). Statistically significant

  21. Traditional critical values from the Normal model. Statistically significant When the alternative is one-sided, the critical value puts all of alpha on one side. When the alternative is two-sided, the critical value splits alpha equally into two tails.
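
A short sketch of where those traditional critical values come from (Python, scipy assumed): a one-sided test puts all of alpha in one tail, while a two-sided test splits alpha between the two tails.

    from scipy.stats import norm

    for alpha in (0.10, 0.05, 0.01):
        one_sided = norm.ppf(1 - alpha)       # all of alpha in a single tail
        two_sided = norm.ppf(1 - alpha / 2)   # alpha/2 in each tail
        print(alpha, round(one_sided, 3), round(two_sided, 3))
    # prints 1.282 / 1.645 for alpha = .10, 1.645 / 1.960 for .05, and 2.326 / 2.576 for .01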

  22. Even with lots of evidence, we can still make the wrong decision. • When we perform a hypothesis test, we can make mistakes in two ways: • I. The null hypothesis is true, but we mistakenly reject it. • II. The null hypothesis is false, but we fail to reject it. Type I and Type II errors

  23. Types of Statistical Errors (Type I and Type II errors), crossing My Decision with The Truth:

      My Decision \ The Truth    H0 is true          H0 is false
      Reject H0                  Type I error        Correct decision
      Fail to reject H0          Correct decision    Type II error

  24. The probability of a Type I error is represented by the Greek letter alpha (α). In choosing the level of probability for a test, you are actually deciding how much you want to risk committing a Type I error – rejecting the null hypothesis when, in fact, it is true. This is why the threshold level, or area in the region of rejection, is called the alpha level. It represents the likelihood of committing a Type I error. Type I error

  25. Type II errors are represented by the Greek letter beta (β). Beta is harder to find because it requires estimating the sampling distribution under the alternative hypothesis, which is usually unknown. Type II error

  26. Power is the probability that a test will reject the null hypothesis when it is, in fact, false. In other words, the power of a test is the probability that it correctly rejects a false null hypothesis. When the power is high, we can be confident that we have looked hard enough. We know that beta is the probability that a test fails to reject a false null hypothesis. The power of the test is the complement, 1 − β. Power
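
A minimal sketch of a power calculation for the sunny-hospitals test (Python, scipy assumed). The standard error and the "true" admission rate below are illustrative values, not taken from the slides; power always depends on which alternative value you treat as true.

    from scipy.stats import norm

    mu0 = 17.0       # null value: national admission rate per 10,000
    se = 2.0         # hypothetical standard error of the sample mean (illustrative)
    mu_true = 13.0   # hypothetical "true" rate in sunny regions (illustrative)
    alpha = 0.05

    # Left-tailed test: reject H0 when the sample mean falls below this cutoff
    cutoff = mu0 + norm.ppf(alpha) * se           # about 13.71 per 10,000

    # Power = P(sample mean falls below the cutoff | the true mean is mu_true)
    power = norm.cdf((cutoff - mu_true) / se)     # about 0.64; beta = 1 - power
    print(round(power, 3))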

  27. An advertiser wants to know if the average age of people watching a particular TV show regularly is less than 24 years. • Is this a one- or two-tailed test? • One • State the alternative and null hypotheses. Try This
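
One way to write those hypotheses, with μ standing for the mean age of regular viewers of the show: H0: μ ≥ 24 (or, equivalently for the test, μ = 24) versus Ha: μ < 24.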

  28. An advertiser wants to know if the average age of people watching a particular TV show regularly is less than 24 years. • A random survey of 50 viewers determines that their mean age is 19 years, with a standard deviation of 1.7 years. Find the 90% confidence interval for the mean age of the viewers. • Name the variables Try This-- continued

  29. Here is the work to find the confidence interval. Try This-- continued We are 90% confident that the mean age of viewers is between 18.6 and 19.4.
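
The work itself does not appear in the transcript; a minimal sketch of the same calculation in Python (scipy assumed), using the Normal critical value as the slide does:

    from math import sqrt
    from scipy.stats import norm

    n, xbar, s = 50, 19.0, 1.7
    z_star = norm.ppf(0.95)               # 90% confidence leaves 5% in each tail: about 1.645
    margin = z_star * s / sqrt(n)         # about 0.40 years
    print(xbar - margin, xbar + margin)   # roughly 18.6 to 19.4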

  30. An advertiser wants to know if the average age of people watching a particular TV show regularly is less than 24 years. • What is the significance level for rejecting the null hypothesis? • .0016 Try this-- continued

  31. Statistical tests should always be performed on a null hypothesis. • True • If two variables are correlated, then they must be causally related. • False • A result with a high level of significance is always very important. • False True or false

  32. Rejecting the null hypothesis when it is actually true is: • No error • A Type I error • A Type II error • Neither a Type I nor a Type II error • Impossible. Try This Answer: A Type I error

  33. Don’t interpret the P-value as the probability that H0 is true. Don’t believe too strongly in arbitrary alpha values. Don’t confuse practical and statistical significance. Don’t forget that in spite of all your care, you might make a wrong decision. What can go wrong?
