Review

Presentation Transcript


  1. Review Tests of Significance

  2. Single Proportion • Null Hypothesis: Buzz will randomly pick a button. (He chooses the correct button 50% of the time, in the long run.) (π = 0.5) • Alternative Hypothesis: Buzz understands what Doris is communicating to him. (He chooses the correct button more than 50% of the time, in the long run.) (π > 0.5)

  3. Single Proportion • Buzz got it right 15 out of 16 times (p̂ = 15/16 ≈ 0.94). • A result this extreme is very unlikely to occur by chance (p-value = 0.0005).
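
To make the simulation behind a p-value like this concrete, here is a minimal Python sketch (not part of the original slides) that flips a fair coin 16 times, many times over, and counts how often 15 or more "correct" picks occur just by chance.

```python
import numpy as np

rng = np.random.default_rng(1)

n_reps = 100_000
# Under the null hypothesis, each of Buzz's 16 picks is correct with probability 0.5.
sim_correct = rng.binomial(n=16, p=0.5, size=n_reps)

# p-value: proportion of simulated results at least as extreme as 15 of 16.
p_value = np.mean(sim_correct >= 15)
print(p_value)  # a very small value, on the order of the 0.0005 quoted above
```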

  4. Single Proportion • The theory-based test works well when the numbers of successes and failures are both at least 10. • A normal distribution is used to predict what the null distribution looks like. (This normal distribution is centered on the proportion under the null hypothesis.)

  5. Comparing Two Proportions • Null hypothesis: Swimming with dolphins has no association with whether someone shows substantial improvement. (πdolphins = πcontrol, or πdolphins − πcontrol = 0) • Alternative hypothesis: Swimming with dolphins increases the probability of substantial improvement in depression symptoms. (πdolphins > πcontrol, or πdolphins − πcontrol > 0)

  6. Comparing Two Proportions • Our statistic is the observed difference in proportions: 0.67 – 0.20 = 0.47.

  7. Comparing Two Proportions • If the null hypothesis is true (dolphin therapy is not better), we would have 13 improvers and 17 non-improvers regardless of which group they were in. • Any differences we see between the groups arise solely from the randomness of the assignment to groups. • Randomly re-assign the improvers and non-improvers to the two groups and recalculate the statistic many times.

  8. Comparing Two Proportions • We did 1000 repetitions to develop a null distribution and found that just 13 out of 1000 results had a difference of 0.47 or higher (p-value = 0.013).
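
A sketch of this shuffle test in Python follows. The group sizes (15 dolphin, 15 control) are inferred from the proportions 10/15 ≈ 0.67 and 3/15 = 0.20 and the 13/17 split of improvers and non-improvers; treat them as an assumption rather than a quote from the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

# 1 = showed substantial improvement, 0 = did not (13 improvers, 17 non-improvers)
outcomes = np.array([1] * 13 + [0] * 17)
n_dolphin = 15  # assumed group sizes: 15 dolphin, 15 control

def diff_in_proportions(values):
    # The first 15 positions play the role of the dolphin group after shuffling.
    return values[:n_dolphin].mean() - values[n_dolphin:].mean()

observed = 10 / 15 - 3 / 15  # 0.67 - 0.20 = 0.47

sim_stats = []
for _ in range(1000):
    shuffled = rng.permutation(outcomes)   # random re-assignment to the groups
    sim_stats.append(diff_in_proportions(shuffled))

p_value = np.mean(np.array(sim_stats) >= observed)
print(p_value)  # roughly 0.01, in line with the p-value of 0.013 above
```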

  9. Comparing Two Proportions • Just like with a single proportion, the theory-based test works well when the numbers of successes and failures are at least 10 in each group. • Again, a normal distribution is used to predict the shape of the null distribution. (For a difference in proportions, the null distribution is centered at 0.)

  10. Comparing Two Means • Null hypothesis: There is no association between which bike is used and commute time. • Commute time is not affected by which bike is used. (µcarbon = µsteel OR µcarbon – µsteel = 0) • Alternative hypothesis: There is an association between which bike is used and commute time. • Commute time is affected by which bike is used. (µcarbon ≠ µsteel OR µcarbon – µsteel ≠ 0)

  11. Comparing Two Means • Our statistic is the observed difference in means: 108.34 – 107.81 = 0.53.

  12. Comparing Two Means • (The slide shows two panels: The Original Data and Shuffled Results.) • Shuffling assumes the null hypothesis is true: the bike has no effect on commute times. • Calculate the simulated statistic after each shuffle. • Repeating this many times develops a null distribution.

  13. Comparing Two Means Strength of Evidence • 705 of the 1000 repetitions are 0.53 or farther away from 0. • p-value = 0.705.
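
The raw commute times are not reproduced in the slides, so the sketch below is a generic shuffle test for a difference in two means; `carbon` and `steel` are placeholders for the recorded times.

```python
import numpy as np

rng = np.random.default_rng(1)

def shuffle_test_two_means(group1, group2, n_reps=1000):
    """Two-sided shuffle test for a difference in two means."""
    observed = np.mean(group1) - np.mean(group2)
    pooled = np.concatenate([group1, group2])
    n1 = len(group1)
    sim_stats = np.empty(n_reps)
    for i in range(n_reps):
        shuffled = rng.permutation(pooled)  # under H0 the bike label doesn't matter
        sim_stats[i] = shuffled[:n1].mean() - shuffled[n1:].mean()
    # Two-sided p-value: simulated differences at least as far from 0 as observed.
    p_value = np.mean(np.abs(sim_stats) >= abs(observed))
    return observed, p_value

# observed, p = shuffle_test_two_means(carbon, steel)  # placeholders for the real data
```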

  14. Comparing Two Means • A theory-based test works well here when the sample size is at least 20 in each group. • A t-distribution is used to predict the shape of the null distribution, and it is centered on 0.
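
The theory-based version can be run with SciPy's two-sample t-test. The arrays below are synthetic, generated only so the snippet runs; they are centered near the slide's means but are not the study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic commute times for illustration only (arbitrary sizes and spread).
carbon = rng.normal(loc=108.3, scale=6.0, size=30)
steel = rng.normal(loc=107.8, scale=6.0, size=30)

t_stat, p_value = stats.ttest_ind(carbon, steel)  # two-sided t-test
print(t_stat, p_value)
```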

  15. Matched Pairs • H0: µd = 0 • The mean of the differences between the running times (narrow – wide) is 0. • Ha: µd ≠ 0 • The mean of the differences in running times (narrow – wide) is not 0.

  16. Matched Pairs • In this type of test the data start off as two separate groups, but there is a natural pairing: in this case, the times for the same person running both paths. • So we need to look at the differences.

  17. Matched Pairs • The mean difference is x̄d = 0.075 seconds.

  18. Matched Pairs • The null basically says the running path doesn't matter. • So we can randomly decide which time goes with which path. (Notice we don't break up our pairs.) • Each time we do this, compute a simulated difference in means. • We repeat this process many times to develop a null distribution.

  19. Matched Pairs • Only 2 of the 1000 repetitions of random swapping gave a value at least as extreme as 0.075.
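
A sketch of the pair-swapping simulation: for each runner, the narrow and wide times keep or swap their labels at random, which is equivalent to randomly flipping the sign of each difference. The `diffs` argument is a placeholder for the observed differences (narrow – wide), which are not listed in the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

def matched_pairs_test(diffs, n_reps=1000):
    """Shuffle test for matched pairs: randomly swap which time goes with which path."""
    diffs = np.asarray(diffs)
    observed = diffs.mean()
    sim_stats = np.empty(n_reps)
    for i in range(n_reps):
        signs = rng.choice([-1, 1], size=len(diffs))  # swap (or not) within each pair
        sim_stats[i] = np.mean(signs * diffs)
    # Two-sided p-value: simulated mean differences at least as extreme as observed.
    p_value = np.mean(np.abs(sim_stats) >= abs(observed))
    return observed, p_value

# observed, p = matched_pairs_test(narrow_times - wide_times)  # placeholder data
```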

  20. Matched Pairs • A theory-based test works well when the sample size is at least 20. • Like comparing two means, a t-distribution is used to predict the null distribution. • The data used in this test are the differences, and this is the same test that is used for a single mean. (Except that when testing a single mean, we can compare to any hypothesized value, not just 0.)

  21. Comparing Multiple Proportions • Null hypothesis: There is no association between the arrival pattern of the vehicle and whether it comes to a complete stop. (πsingle = πlead = πfollow) • Alternative hypothesis: There is an association between the arrival pattern of the vehicle and whether it comes to a complete stop. (Not all of these long-run probabilities are the same; at least one is different.)

  22. Comparing Multiple Proportions • Our statistic is the MAD (mean absolute difference): the average of the absolute differences between all pairs of group proportions.

  23. Comparing Multiple Proportions • If there is no association between arrival pattern and whether or not a vehicle stops, it basically means the arrival pattern doesn't matter: some vehicles will stop no matter what the arrival pattern is, and some vehicles won't. • We can model this by shuffling either the explanatory or the response variable (the applet shuffles the response) and recomputing the MAD statistic many times.

  24. Comparing Multiple Proportions • Simulated values of the statistic for 1000 shuffles • P-value = 0.083
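
The MAD statistic and its shuffle test can be sketched as below. The stop/no-stop counts for each arrival pattern are not given in the slides, so `stopped` and `pattern` are placeholders for the recorded data.

```python
import numpy as np

rng = np.random.default_rng(1)

def mad_of_proportions(stopped, pattern):
    """Mean absolute difference of the stop proportions across arrival patterns."""
    groups = np.unique(pattern)
    props = [stopped[pattern == g].mean() for g in groups]
    return np.mean([abs(props[i] - props[j])
                    for i in range(len(props))
                    for j in range(i + 1, len(props))])

def shuffle_test_mad(stopped, pattern, n_reps=1000):
    observed = mad_of_proportions(stopped, pattern)
    sim = np.empty(n_reps)
    for i in range(n_reps):
        sim[i] = mad_of_proportions(rng.permutation(stopped), pattern)  # shuffle the response
    return observed, np.mean(sim >= observed)  # only large MAD values count as evidence

# stopped: 0/1 array (complete stop or not); pattern: array of "single"/"lead"/"follow"
```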

  25. Comparing Multiple Proportions • Theory-based tests work well for multiple proportions if the numbers of successes and failures are at least 10 in each group. (Just like with all proportions.) • The MAD statistic is not used in the theory-based test; the chi-squared statistic (and hence a chi-squared distribution) is. • This test is called a chi-squared test of association.

  26. Comparing Multiple Means • Null: There is no association between whether and when a picture was shown and comprehension of the passage (µno picture = µpicture before = µpicture after) • Alternative: There is an association between whether and when a picture was shown and comprehension of the passage (At least one of the mean comprehension scores will be different.)

  27. Comparing Multiple Means • Group means: 3.37, 3.21, 4.95 • MAD = (|3.21 − 4.95| + |3.21 − 3.37| + |4.95 − 3.37|)/3 = 1.16
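
A quick check of this arithmetic in Python, using the three group means as listed on the slide (the mapping of means to groups is not shown there):

```python
from itertools import combinations

means = [3.37, 3.21, 4.95]  # the three group means, in the order listed on the slide
mad = sum(abs(a - b) for a, b in combinations(means, 2)) / 3
print(round(mad, 2))  # 1.16
```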

  28. Comparing Multiple Means • Simulated values of the statistic for 5000 shuffles • P-value = 0.0008

  29. Comparing Multiple Means • Since we have a small p-value, we can conclude that at least one of the mean comprehension scores is different. • We can compute pairwise confidence intervals to find which means are significantly different from the others.

  30. Comparing Multiple Means • Theory-based tests work well when we have a sample size of at least 20 in each group. (Like all tests with means.) • The MAD statistic is not used; an F-statistic (and hence an F distribution) is. • Just like the MAD, the larger the F-statistic, the stronger the evidence (and hence the smaller the p-value). • This test is called Analysis of Variance, or ANOVA.
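
The theory-based F-test is available in SciPy. The score lists below are placeholders invented so the snippet runs; the individual comprehension scores are not reproduced in the slides.

```python
from scipy import stats

# Placeholder comprehension scores for the three groups (not the real study data).
no_picture = [3, 4, 3, 2, 4, 4]
picture_before = [3, 4, 3, 2, 3, 4]
picture_after = [5, 4, 6, 5, 4, 6]

f_stat, p_value = stats.f_oneway(no_picture, picture_before, picture_after)
print(f_stat, p_value)  # larger F means stronger evidence, hence a smaller p-value
```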

  31. Correlation/Regression • Null: There is no association between heart rate and body temperature. (ρ = 0 or β = 0) • Alternative: There is a positive linear association between heart rate and body temperature. (ρ > 0 or β > 0)

  32. Correlation/Regression r = 0.378

  33. Correlation/Regression • If there is no association, we can break apart the temperatures and their corresponding heart rates, just as we did in previous tests. We do this by scrambling one of the variables. • After each scramble, we compute the appropriate statistic: either the correlation or the slope of the regression equation. • Repeat this many times to develop a null distribution.

  34. Correlation/Regression • We found that 68/1000 times we had a simulated correlation greater than or equal to 0.378.
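
A sketch of the scrambling simulation using correlation as the statistic; `body_temp` and `heart_rate` are placeholders for the recorded measurements, which are not in the slides.

```python
import numpy as np

rng = np.random.default_rng(1)

def shuffle_test_correlation(x, y, n_reps=1000):
    """One-sided shuffle test for a positive correlation."""
    observed = np.corrcoef(x, y)[0, 1]
    sim = np.empty(n_reps)
    for i in range(n_reps):
        sim[i] = np.corrcoef(rng.permutation(x), y)[0, 1]  # scramble one variable
    return observed, np.mean(sim >= observed)

# r, p = shuffle_test_correlation(body_temp, heart_rate)  # r = 0.378, p near 68/1000
```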

  35. Correlation/Regression • Theory-based tests work well when the values of the response variable are normally distributed for each value of the explanatory variable and these normal distributions have similar variability. • We can use either the correlation or the slope of the regression line as the statistic. • A t-distribution, centered at 0, is used.

  36. Review Confidence Intervals

  37. Confidence Intervals • Tests of significance answer yes/no questions. • Is there strong evidence that Buzz is not just guessing? • Is there strong evidence that swimming with dolphins helps reduce depression symptoms? • Sometimes we just want an estimate of a population parameter, e.g. what proportion of voters will vote in the next election?

  38. Confidence Intervals • Confidence intervals are interval estimates of a population parameter. • A population parameter is some fixed measurement for a population such as a proportion (or long-term probability), a difference in two proportions, a mean, a difference in means, or a slope of a regression equation. • These intervals give plausible (believable, credible) values for the parameter.

  39. 2SD Confidence Intervals • The observed statistic we found is used as the center of the interval. • We use 2 standard deviations of an appropriate null distribution as our margin of error to give a 95% confidence interval: observed statistic ± 2SD. • Remember the observed statistic can be a single mean or proportion, the slope of a regression line, or a difference in two means or proportions.

  40. Supersize Drinks • A survey found 46% of 1093 randomly selected NYC voters supported the ban on large soft drinks. • What is our estimate of the population proportion that supports the ban? • 0.46 ± 2(0.015), or 0.46 ± 0.03 • 43% to 49%
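
The arithmetic behind this 2SD interval, as a one-line check:

```python
p_hat, sd = 0.46, 0.015        # observed proportion and SD of an appropriate null distribution
lower, upper = p_hat - 2 * sd, p_hat + 2 * sd
print(lower, upper)            # 0.43 to 0.49
```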

  41. Meaning of a confidence interval • What does 95% confidence mean? • If we resampled 1093 NYC voters over and over and each time produced 95% confidence intervals, 95% of the time we would capture the true proportion of all NYC voters that favor the ban. • The interval (like the observed proportion) is random. The population parameter is fixed.
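
A small simulation can illustrate this interpretation: repeatedly draw samples of 1093 voters from a population with a known proportion, build a 2SD interval from each sample, and check how often the interval captures the true value. The true proportion of 0.46 below is just an assumption made for the simulation.

```python
import numpy as np

rng = np.random.default_rng(1)

true_p, n = 0.46, 1093           # assumed population proportion; sample size from the survey
covered = 0
for _ in range(10_000):
    p_hat = rng.binomial(n, true_p) / n
    sd = np.sqrt(p_hat * (1 - p_hat) / n)
    if p_hat - 2 * sd <= true_p <= p_hat + 2 * sd:
        covered += 1
print(covered / 10_000)          # close to 0.95
```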

  42. Theory-based confidence intervals • Using theory-based techniques, confidence intervals can easily be found and the confidence levels can easily be adjusted. • The same validity conditions we use for tests of significance should also be used for confidence intervals.
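
As a sketch of how a theory-based interval for a proportion is built (not the applet's exact output), the normal-based formula below makes it clear that changing the confidence level only changes the z multiplier.

```python
import numpy as np
from scipy import stats

def proportion_ci(p_hat, n, confidence=0.95):
    """Theory-based (normal) confidence interval for a single proportion."""
    z = stats.norm.ppf(1 - (1 - confidence) / 2)   # 1.96 for 95%, about 2.58 for 99%
    se = np.sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - z * se, p_hat + z * se

print(proportion_ci(0.46, 1093))                   # roughly (0.43, 0.49)
print(proportion_ci(0.46, 1093, confidence=0.99))  # wider interval at higher confidence
```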

  43. What affects the width of a CI? • As the level of confidence increases, the width of the confidence interval increases. • The wider the interval, the more confident we are that we captured the parameter. (The wider the net, the more confident we are that we caught the fish.) • As the sample size increases, the width of the confidence interval decreases. • Larger sample sizes give us more information, so we can be more precise.

  44. Connecting confidence intervals and tests of significance • A small p-value means that the value under the null will not be contained in the confidence interval. (Example: null value 0.5, p-value = 0.02, CI (0.52, 0.59).) • A large p-value means that the value under the null will be contained in the confidence interval. (Example: null value 0, p-value = 0.42, CI (−0.28, 0.57).)

  45. Significance level and confidence level • Suppose H0: π = 0.5, and the corresponding two-sided p-value = 0.03. Will 0.5 be contained in a: • 90% confidence interval? • No • 95% confidence interval? • No • 99% confidence interval? • Yes • If the p-value is large (greater than α), the value under the null will be contained in a 100(1 − α)% confidence interval.

  46. Review Big Ideas

  47. Terminology • The population is the entire set of observational units we want to know something about. • The sample is the subgroup of the population on which we actually record data. • A statistic is a number calculated from the observed data. • A parameter is the same type of number as the statistic, but represents the underlying process or the population from which the sample was selected.

  48. Terminology • Standard deviation (SD) is the most common measure of variability. • We can think of the standard deviation as the average distance of the values from their mean. • A distribution is skewed to the right if the right side extends much farther than the left side.

  49. Hypotheses and Null Distribution • The null hypothesis (H0) is the chance explanation. (=) • The alternative hypothesis (Ha) is what you are trying to show is true. (<, >, or ≠) • A null distribution is the distribution of simulated statistics that represent the chance outcome.

  50. Significance and p-value • Results are statistically significant if they are unlikely to arise by random chance alone (that is, if the null hypothesis were true). • The p-value is the proportion of the simulated statistics in the null distribution that are at least as extreme as the value of the observed statistic. • The smaller the p-value, the stronger the evidence against the null.
