
Biostat 200 Lecture 6


Presentation Transcript


  1. Biostat 200 Lecture 6

  2. Recap • We calculate confidence intervals to give the most plausible values for the population mean or proportion. • We conduct hypothesis tests of a mean or a proportion to make a conclusion about how our sample mean or proportion compares with some hypothesized value for the population mean or proportion. • You can use 95% confidence intervals to reach the same conclusions as hypothesis tests. • If the null value of the mean lies outside the 95% confidence interval, that is equivalent to rejecting the null hypothesis of a two-sided test at significance level 0.05.

  3. Recap • We make these conclusions based on what we observed in our sample -- we will never know the true population mean or proportion • If the data are very different from the hypothesized mean or proportion, we reject the null • Example: Phase I vaccine trial – does the candidate vaccine meet minimum thresholds for safety and efficacy? • Statistical significance can be driven by n, and does not equal clinical or biological significance • On the other hand, a suggestive but not statistically significant result from a small sample may deserve a larger follow-up study

  4. Types of error • Type I error: α = significance level of the test = P(reject H0 | H0 is true) • Incorrectly reject the null • We take a sample from the population, calculate the statistics, and make inference about the true population. If we did this repeatedly, we would incorrectly reject the null 5% of the time that it is true if α is set to 0.05.
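The repeated-sampling idea can be checked directly by simulation. The sketch below is not from the lecture; it assumes the one-sided z test used in the walking-age example later in the lecture (μ0 = 11.4, σ = 2, n = 9) and the program name onedraw is made up here. It simply counts how often the null is (wrongly) rejected when H0 is true.

* sketch: Type I error rate by simulation under H0 (mu0 = 11.4, sigma = 2, n = 9)
clear all
set seed 200
program define onedraw, rclass
    drop _all
    set obs 9
    gen x = 11.4 + 2*rnormal()          // data generated under the null
    summarize x
    return scalar reject = ((r(mean) - 11.4)/(2/sqrt(9)) > invnormal(0.95))
end
simulate reject=r(reject), reps(1000) nodots: onedraw
summarize reject                        // mean rejection rate should be near 0.05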

  5. Types of error • Type II error: β = P(do not reject H0 | H0 is false) • Incorrectly fail to reject the null • This happens when the test statistic is not large enough, even if the underlying distribution is different

  6. Types of error • Remember, H0 is a statement about the population and is either true or false • We take a sample and use the information in the sample to try to determine the answer • Whether we make a Type I error or a Type II error depends on whether H0 is true or false • We set α, the chance of a Type I error, and we can design our study to minimize the chance of a Type II error

  7. Chance of a Type II error: β, the chance of failing to reject the null if the alternative is true

  8. If the alternative is very different from the null, the chance of a Type II error (β) is low

  9. If the alternative is not very different from the null, the chance of a Type II error (β) is high

  10. The chance of a Type II error is lower if the SD is smaller. This is relevant because the SD for the distribution of a sample mean is σ/√n, so increasing n decreases the SD of the mean.
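As a quick numeric illustration (not from the slides), with σ = 2 the standard deviation of the sample mean shrinks as n grows:

* SD of the sample mean, sigma/sqrt(n), for sigma = 2
di 2/sqrt(9)      // n = 9   -> 0.67
di 2/sqrt(36)     // n = 36  -> 0.33
di 2/sqrt(144)    // n = 144 -> 0.17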

  11. Finding β, P(Type II error) • Find the critical value for your test • At what value of the sample mean X̄ will zstat be greater than 1.96 (or 1.645 for a one-sided test)? • This depends on n, σ, and α • What is the probability of getting a sample mean less extreme than the critical value if the true mean is the alternate mean? This is β.

  12. Finding β, P(Type II error) • Example: Mean age of walking • H0: μ ≤ 11.4 months (μ0 = 11.4) • Alternative hypothesis: HA: μ > 11.4 months • Known SD = 2 • Significance level = 0.05 • Sample size = 9 • We will reject the null if zstat (assuming σ known) > 1.645 • So we will reject the null if X̄ > μ0 + 1.645·σ/√n • For our example, the null will be rejected if X̄ > 11.4 + 1.645*2/3 ≈ 12.5
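The cutoff of 12.5 can be reproduced in Stata (a quick check, not part of the original slides):

* critical value of the sample mean: mu0 + z_{0.95} * sigma/sqrt(n)
di 11.4 + invnormal(0.95)*2/sqrt(9)    // ~12.50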

  13. But if the true mean is really 16, what is the probability that the null will not be rejected? • The probability of a Type II error? • The null will be rejected if the sample mean is > 12.5, and not rejected if it is ≤ 12.5 • What is the probability of getting a sample mean of ≤ 12.5 if the true mean is 16? P(Z < (12.5-16)/(2/3))
. di normal((12.5-16)/(2/sqrt(9)))
7.605e-08
So if the true mean is 16 and the sample size is 9, the probability of failing to reject the null (a Type II error) is <0.001

  14. Note that this depended on: • The alternative population mean (e.g. 16) • The chance of failing to reject the null will increase if the true population mean is closer to the null value • What is the probability of failing to reject the null if the true population mean is 15? P(Z < (12.5-15)/.6667)
. di normal((12.5-15)/(2/sqrt(9)))
.00008842
• What is the probability of failing to reject the null if the true population mean is 14? P(Z < (12.5-14)/.6667)
. di normal((12.5-14)/(2/sqrt(9)))
.01222447
• What is the probability of failing to reject the null if the true population mean is 12? P(Z < (12.5-12)/.6667)
. di normal((12.5-12)/(2/sqrt(9)))
.77337265
Power = 1 - β = .22662735
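The same calculations can be collected in a short loop (a sketch, not from the slides), using the cutoff of 12.5 and σ/√n = 2/3 from the example:

* beta = P(sample mean <= 12.5 | true mean = mu_a), sigma = 2, n = 9
foreach mua in 16 15 14 12 {
    di "mu_a = `mua'   beta = " normal((12.5 - `mua')/(2/sqrt(9)))
}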

  15. How to calculate power in Stata for a test of one mean
sampsi nullmu altmu, sd() onesample onesided n()

. sampsi 11.4 12, sd(2) onesample onesided n(9)

Estimated power for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 11.4, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (one-sided)
 alternative m =       12
            sd =        2
 sample size n =        9

Estimated power:

         power =   0.2282
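The 0.2282 can also be obtained directly from the normal distribution (a check, not part of the slides): power = Φ((μa − μ0)/(σ/√n) − z0.95). It differs slightly from the .2266 on slide 14 only because the 12.5 cutoff there was rounded.

* one-sided power for mu0 = 11.4, mu_a = 12, sd = 2, n = 9
di normal((12 - 11.4)/(2/sqrt(9)) - invnormal(0.95))    // ~0.228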

  16. How to calculate sample size for a fixed power in Stata

. sampsi 11.4 12, sd(2) onesample onesided power(.8)

Estimated sample size for one-sample comparison of mean
  to hypothesized value

Test Ho: m = 11.4, where m is the mean in the population

Assumptions:

         alpha =   0.0500  (one-sided)
         power =   0.8000
 alternative m =       12
            sd =        2

Estimated required sample size:

             n =       69
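The required n can likewise be checked by hand (a sketch, not from the slides) using n = ((z1−α + z1−β)·σ/Δ)², where Δ = μa − μ0 = 0.6:

* required n for one-sided alpha 0.05, power 0.80, sd 2, difference 0.6
di ((invnormal(0.95) + invnormal(0.80))*2/0.6)^2    // ~68.7, round up to 69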

  17. How to calculate power in Stata for a test of one proportion
sampsi nullp altp, onesample n()

. sampsi .3 .2, onesample n(50)

Estimated power for one-sample comparison of proportion
  to hypothesized value

Test Ho: p = 0.3000, where p is the proportion in the population

Assumptions:

         alpha =   0.0500  (two-sided)
 alternative p =   0.2000
 sample size n =       50

Estimated power:

         power =   0.3165

  18. How to calculate sample size in Stata for a test of one proportion for fixed power

. sampsi .3 .2, onesample power(.8)

Estimated sample size for one-sample comparison of proportion
  to hypothesized value

Test Ho: p = 0.3000, where p is the proportion in the population

Assumptions:

         alpha =   0.0500  (two-sided)
         power =   0.8000
 alternative p =   0.2000

Estimated required sample size:

             n =      153

  19. Power • The power of a statistical test is lower for alternative values that are closer to the null value (the chance of a Type II error is higher) and higher for more extreme alternative values. • It is standard to fix α=0.05 and β=0.20 (for 80% power) and determine n for various alternative hypotheses
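Continuing the walking-age example (a sketch, not from the slides), the power of the one-sided test with n = 9 and σ = 2 grows as the alternative moves away from μ0 = 11.4:

* power = Phi( (mu_a - mu_0)/(sigma/sqrt(n)) - z_{0.95} )
foreach mua in 11.7 12 13 14 {
    di "mu_a = `mua'   power = " normal((`mua' - 11.4)/(2/sqrt(9)) - invnormal(0.95))
}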

  20. In practice, you often have n fixed by cost • Then you can calculate how big the alternative has to be to reject the null with 80% probability, assuming the alternative is true • The difference between this alternative and the null is called the minimum detectable difference • In epidemiology, when the goal is to estimate an odds ratio, it is called the minimum detectable odds ratio • So a large minimum detectable difference is a bad thing – you will only reach statistical significance if the alternative is very far from the null (very large effect sizes)
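For the walking-age example with n fixed at 9 (a sketch, not from the slides), the minimum detectable difference for 80% power is (z0.95 + z0.80)·σ/√n:

* minimum detectable difference with n = 9, sd = 2, one-sided alpha 0.05, power 0.80
di (invnormal(0.95) + invnormal(0.80))*2/sqrt(9)    // ~1.66, i.e. detectable mu_a >= ~13.1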

  21. Comparison of two means: the paired t-test • Paired samples, numerical variables • Two determinations on the same person – e.g. before and after an intervention • Matched samples – measurements on pairs of persons similar in some characteristics, e.g. identical twins (matching is on genetics) • Matching or pairing is performed to control for extraneous factors • Each person or pair has 2 data points, and we calculate the difference for each • Then we can use our one-sample methods to test hypotheses about the value of the difference

  22. Paired data – pre and post diet

  23. Comparison of two means: paired t-test • Step 1: The hypotheses (two sided) • Generically H0: μ1-μ2 = δ, HA: μ1-μ2 ≠ δ • Often δ = 0 (no difference), so H0: μ1-μ2 = 0, i.e. H0: μ1=μ2, and HA: μ1-μ2 ≠ 0, i.e. HA: μ1≠μ2

  24. Comparison of two means: paired t-test • Step 1: The hypotheses (one sided) • Generically H0: μ1-μ2 ≥ δ with HA: μ1-μ2 < δ, or H0: μ1-μ2 ≤ δ with HA: μ1-μ2 > δ • Often δ = 0 (no difference), so H0: μ1 ≥ μ2 with HA: μ1 < μ2, or H0: μ1 ≤ μ2 with HA: μ1 > μ2

  25. Comparison of two means: paired t-test • Step 2: Determine the compatibility with the null hypothesis • The test statistic is t = (d̄ - δ) / (sd/√n), where d̄ is the mean of the n within-pair differences and sd is their standard deviation; under H0 it follows a t distribution with n-1 degrees of freedom

  26. Comparison of two means: paired t-test • Step 3: Reject or fail to reject the null • Is the p-value (the probability of observing a difference as large or larger, under the null hypothesis) greater than or less than the significance level, α?

  27. Example • We think the diet works. We specify a one-sided hypothesis. The null hypothesis is that it doesn’t work. • H0: μ2-μ1 ≥ 0, i.e. μ2 ≥ μ1; HA: μ2-μ1 < 0, i.e. μ2 < μ1 (where μ1 = pre-diet mean weight and μ2 = post-diet mean weight) • Significance level = 0.05

  28.
. summ diff

    Variable |       Obs        Mean    Std. Dev.       Min        Max
-------------+--------------------------------------------------------
        diff |        12   -2.083333    3.028901         -7          3

*** calculate the t statistic
. di -2.08333/3.0289*sqrt(12)
-2.3826692

*** calculate the p-value
. di 1-ttail(11,-2.382669)
.01816464

→ So we reject the null

  29. Using the ttest command
. ttest diff==0

One-sample t test
------------------------------------------------------------------------------
Variable |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
    diff |      12   -2.083333    .8743685    3.028901   -4.007805   -.1588613
------------------------------------------------------------------------------
    mean = mean(diff)                                             t =  -2.3827
Ho: mean = 0                                     degrees of freedom =       11

    Ha: mean < 0                 Ha: mean != 0                 Ha: mean > 0
 Pr(T < t) = 0.0182         Pr(|T| > |t|) = 0.0363          Pr(T > t) = 0.9818

  30. Another way.. The command is ttest var1==var2
. ttest posttestkg==pretestkg

Paired t test
------------------------------------------------------------------------------
Variable |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
postte~g |      12    101.1667    7.167724    24.82972    85.39061    116.9427
pretes~g |      12      103.25     6.93599    24.02697    87.98399     118.516
---------+--------------------------------------------------------------------
    diff |      12   -2.083333    .8743685    3.028901   -4.007805   -.1588613
------------------------------------------------------------------------------
     mean(diff) = mean(posttestkg - pretestkg)                    t =  -2.3827
 Ho: mean(diff) = 0                              degrees of freedom =       11

 Ha: mean(diff) < 0           Ha: mean(diff) != 0           Ha: mean(diff) > 0
 Pr(T < t) = 0.0182         Pr(|T| > |t|) = 0.0363          Pr(T > t) = 0.9818

  31. Comparison of two means: t-test • The goal is to compare means from two independent samples • Two different populations • E.g. vaccine versus placebo group • E.g. women with adequate versus inadequate micronutrient levels

  32. Comparison of two means: t-test • Two sided hypothesis H0: μ1=μ2 HA: μ1≠μ2 • One sided hypothesis H0: μ1≥μ2 HA: μ1<μ2 • One sided hypothesis H0: μ1≤μ2 HA: μ1>μ2

  33. Comparison of two means: t-test • Even though the null and alternative hypotheses are the same as for the paired t-test, the test is different; it is wrong to use a paired t-test with independent samples, and vice versa

  34. Comparison of two means: t-test • By the CLT, if X̄1 and X̄2 are normally distributed, then X̄1 - X̄2 is normally distributed with mean μ1-μ2 and standard deviation √(σ1²/n1 + σ2²/n2) • In one version of the t-test, we assume that the population standard deviations are equal, so σ1 = σ2 = σ • Substituting, the standard deviation for the distribution of the difference of two sample means is σ√(1/n1 + 1/n2)

  35. Comparison of two means: t-test • So we can calculate a z-score for the difference in the means and compare it to the standard normal distribution. The test statistic is z = (X̄1 - X̄2 - (μ1 - μ2)) / (σ√(1/n1 + 1/n2)), with μ1 - μ2 set to its value under H0 (usually 0)

  36. Comparison of two means: t-test • If the σ’s are unknown (pretty much always), we substitute the sample standard deviations and compare the test statistic to the t-distribution • The t-test statistic is t = (X̄1 - X̄2) / (sp√(1/n1 + 1/n2)) • The pooled estimate sp is based on a weighted average of the individual sample variances: sp² = [(n1-1)s1² + (n2-1)s2²] / (n1+n2-2) • The degrees of freedom for the test are n1+n2-2

  37. Comparison of two means: t-test • As in our other hypothesis tests, compare the t statistic to the t-distribution to determine the probability of obtaining a mean difference as large as or larger than the observed difference • Reject the null if the probability, the p-value, is less than α, the significance level • Fail to reject the null if p ≥ α

  38. Comparison of two means, example • Study of non-pneumatic anti-shock garment (Miller et al) • Two groups – pre-intervention received usual treatment, intervention group received NASG • Comparison of hemorrhaging in the two groups • Null hypothesis: The hemorrhaging is the same in the two groups H0: μ1=μ2 HA: μ1≠μ2 • The data: • External blood loss: • Pre-intervention group (n=83) mean=340.4 SD=248.2 • Intervention group (n=83) mean=73.5 SD=93.9

  39. Calculating by hand • External blood loss: • Pre-intervention group (n=83) mean=340.4 SD=248.2 • Intervention group (n=83) mean=73.5 SD=93.9 • First calculate sp² = (82*248.2² + 82*93.9²)/(83+83-2) = 35210.2 • tstat = (340.4-73.5)/sqrt(35210.2*(2/83)) = 9.16 • df = 83+83-2 = 164
. di 2*ttail(164,9.16)
2.041e-16
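The whole hand calculation can be checked in Stata with display statements (a sketch, not from the slides):

* pooled variance, t statistic, and two-sided p-value for the blood-loss example
di (82*248.2^2 + 82*93.9^2)/(83+83-2)          // pooled variance, ~35210
di (340.4-73.5)/sqrt(35210.2*(1/83 + 1/83))    // t statistic, ~9.16
di 2*ttail(164, 9.163)                          // two-sided p-value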

  40. Comparison of two means, example
* ttesti n1 mean1 sd1 n2 mean2 sd2
. ttesti 83 340.4 248.2 83 73.5 93.9

Two-sample t test with equal variances
------------------------------------------------------------------------------
         |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
       x |      83       340.4    27.24349       248.2     286.204     394.596
       y |      83        73.5    10.30686        93.9    52.99636    94.00364
---------+--------------------------------------------------------------------
combined |     166      206.95    17.85377    230.0297    171.6987    242.2013
---------+--------------------------------------------------------------------
    diff |               266.9    29.12798                209.3858    324.4142
------------------------------------------------------------------------------
    diff = mean(x) - mean(y)                                      t =   9.1630
Ho: diff = 0                                     degrees of freedom =      164

    Ha: diff < 0                 Ha: diff != 0                 Ha: diff > 0
 Pr(T < t) = 1.0000         Pr(|T| > |t|) = 0.0000          Pr(T > t) = 0.0000

  41. You can calculate a 95% confidence interval for the difference in the means: (X̄1 - X̄2) ± t(n1+n2-2, 0.975) · sp√(1/n1 + 1/n2) • If the confidence interval for the difference does not include 0, then you can reject the null hypothesis of no difference • This is NOT equivalent to calculating separate confidence intervals for each mean and determining whether they overlap
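As a check (not from the slides), the interval reported by ttesti above can be reproduced from the difference and its standard error:

* 95% CI for the difference: 266.9 +/- t_{164, 0.975} * 29.128
di 266.9 - invttail(164, 0.025)*29.12798    // lower bound, ~209.4
di 266.9 + invttail(164, 0.025)*29.12798    // upper bound, ~324.4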

  42. Comparison of two means: t-test • This t-test assumes equal variances in the two underlying populations • If we do not assume equal variances, we use a slightly different test statistic • The variances are not assumed to be equal, so you do not use a pooled estimate • There is another formula for the degrees of freedom • Often the two t-tests yield the same answer, but you should not assume equivalence unless you have a good reason for it • If the sample sizes are equal, you will get the same test statistic; only the df changes

  43. The t statistic is t = (X̄1 - X̄2) / √(s1²/n1 + s2²/n2), and the approximate (Satterthwaite) degrees of freedom are ν = (s1²/n1 + s2²/n2)² / [ (s1²/n1)²/(n1-1) + (s2²/n2)²/(n2-1) ]. Round up to the nearest integer to get the degrees of freedom.
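For the blood-loss example (a check, not from the slides), plugging in s1 = 248.2, s2 = 93.9, n1 = n2 = 83 reproduces the degrees of freedom Stata reports on the next slide:

* Satterthwaite degrees of freedom for the blood-loss example
di (248.2^2/83 + 93.9^2/83)^2 / ((248.2^2/83)^2/82 + (93.9^2/83)^2/82)    // ~105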

  44. Comparison of two means, example
. ttesti 83 340.4 248.2 83 73.5 93.9, unequal

Two-sample t test with unequal variances
------------------------------------------------------------------------------
         |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
       x |      83       340.4    27.24349       248.2     286.204     394.596
       y |      83        73.5    10.30686        93.9    52.99636    94.00364
---------+--------------------------------------------------------------------
combined |     166      206.95    17.85377    230.0297    171.6987    242.2013
---------+--------------------------------------------------------------------
    diff |               266.9    29.12798                209.1446    324.6554
------------------------------------------------------------------------------
    diff = mean(x) - mean(y)                                      t =   9.1630
Ho: diff = 0                     Satterthwaite's degrees of freedom =  105.002

    Ha: diff < 0                 Ha: diff != 0                 Ha: diff > 0
 Pr(T < t) = 1.0000         Pr(|T| > |t|) = 0.0000          Pr(T > t) = 0.0000

• Note that the t-statistic stayed the same. This is because the sample sizes in each group are equal. When the sample sizes are not equal, this will not be the case • The degrees of freedom are decreased, so when the sample sizes are equal in the two groups this is a more conservative test

  45. Test of the means of independent samples • When you have the data in Stata with the two groups in different variables, use ttest var1==var2, unpaired or ttest var1==var2, unpaired unequal • More often, you will have the measurements in one variable and the grouping in another variable. Then use ttest var, by(groupvar) or ttest var, by(groupvar) unequal

  46. Testing whether BMI in our class data set differs by sex
Null hypothesis: BMI of females = BMI of males
. ttest bmi, by(sex)

Two-sample t test with equal variances
------------------------------------------------------------------------------
   Group |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
    Male |     290    22.87219    .2378528    4.050487    22.40405    23.34034
  Female |     241    24.72488    .1932413    2.999911    24.34421    25.10555
---------+--------------------------------------------------------------------
combined |     531    23.71305    .1616406    3.724754    23.39552    24.03059
---------+--------------------------------------------------------------------
    diff |           -1.852688    .3148317               -2.471161   -1.234214
------------------------------------------------------------------------------
    diff = mean(Male) - mean(Female)                              t =  -5.8847
Ho: diff = 0                                     degrees of freedom =      529

    Ha: diff < 0                 Ha: diff != 0                 Ha: diff > 0
 Pr(T < t) = 0.0000         Pr(|T| > |t|) = 0.0000          Pr(T > t) = 1.0000

  47.
. ttest bmi, by(sex) unequal

Two-sample t test with unequal variances
------------------------------------------------------------------------------
   Group |     Obs        Mean    Std. Err.   Std. Dev.   [95% Conf. Interval]
---------+--------------------------------------------------------------------
    Male |     290    22.87219    .2378528    4.050487    22.40405    23.34034
  Female |     241    24.72488    .1932413    2.999911    24.34421    25.10555
---------+--------------------------------------------------------------------
combined |     531    23.71305    .1616406    3.724754    23.39552    24.03059
---------+--------------------------------------------------------------------
    diff |           -1.852688    .3064574               -2.454728   -1.250647
------------------------------------------------------------------------------
    diff = mean(Male) - mean(Female)                              t =  -6.0455
Ho: diff = 0                     Satterthwaite's degrees of freedom =  522.373

    Ha: diff < 0                 Ha: diff != 0                 Ha: diff > 0
 Pr(T < t) = 0.0000         Pr(|T| > |t|) = 0.0000          Pr(T > t) = 1.0000

  48. Confidence interval for the difference of two means from independent samples, when unequal variances are assumed: (X̄1 - X̄2) ± t(ν, 0.975) · √(s1²/n1 + s2²/n2), where ν is the Satterthwaite degrees of freedom
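As a check against the unequal-variance BMI output above (not from the slides), the interval can be rebuilt from the difference and its standard error, with the Satterthwaite df rounded down to 522:

* 95% CI for the Male - Female BMI difference
di -1.852688 - invttail(522, 0.025)*.3064574    // lower bound, ~ -2.45
di -1.852688 + invttail(522, 0.025)*.3064574    // upper bound, ~ -1.25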

  49. Comparison of two proportions • Similar to comparing two means • Null hypothesis about two proportions, p1 and p2: H0: p1 = p2, HA: p1 ≠ p2 • If n1 and n2 are sufficiently large, the difference between the two sample proportions approximately follows a normal distribution.

  50. Comparison of two proportions • So we can use a z statistic to find the probability of observing a difference as large as we did, under the null hypothesis of no difference: z = (p̂1 - p̂2) / SE(p̂1 - p̂2), where the standard error is commonly estimated under H0 using the pooled proportion p̂ (all successes divided by n1+n2): SE = √( p̂(1-p̂)(1/n1 + 1/n2) )
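In Stata this test is available through prtest (or prtesti for summary data). The counts below are made up purely for illustration; they are not from the lecture:

* hypothetical example: 30% of 100 subjects in group 1 vs 20% of 100 in group 2
prtesti 100 .30 100 .20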
