
The t-test


Presentation Transcript


  1. The t-test Inferences about Population Means

  2. Questions • How are the distributions of z and t related? • Given a t distribution with a known df and a chosen alpha, construct a rejection region. Draw a picture to illustrate. • What is the standard error of the difference between means? What are the factors that influence its size?

  3. Questions (2) • What are the main uses of the t-test? • Give a concrete example of the use of the {one sample, independent samples, dependent samples} t-test. State why the particular test is the right one to choose. • What is the importance of variance accounted for?

  4. Confidence intervals in z • For large samples (N > 100) we can use z. • Suppose we have a sample mean M, a sample SD, and a hypothesized population mean. • Then the 95% interval is M ± 1.96 · SE, where SE = SD/√N. • If the interval does not contain the hypothesized mean, the result is significant; if it does, it is not.
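
As a rough illustration of the large-sample z interval, here is a minimal Python sketch; the data and all numbers are hypothetical, not taken from the slides.

```python
# Minimal sketch of a large-sample z confidence interval (hypothetical data).
import math, statistics
from scipy.stats import norm

x = [12, 9, 11, 13, 10, 12, 11, 14, 10, 12]   # made-up scores
n = len(x)
mean = statistics.mean(x)
se = statistics.stdev(x) / math.sqrt(n)       # standard error of the mean
z_crit = norm.ppf(0.975)                      # 1.96 for a 95% interval
print(f"95% z interval: {mean - z_crit*se:.2f} to {mean + z_crit*se:.2f}")
```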

  5. The t Distribution We use t when the population variance is unknown (the usual case) and sample size is small (N<100, the usual case). If you use a stat package for testing hypotheses about means, you will use t. The t distribution is a short, fat relative of the normal. The shape of t depends on its df. As N becomes infinitely large, t becomes normal.
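
A small sketch (not part of the original slides) of the relationship between t and z: the two-tailed .05 critical value of t shrinks toward 1.96 as df grows.

```python
# How t approaches z as df grows: the .05 two-tailed critical value of t
# converges to the z critical value of about 1.96.
from scipy.stats import norm, t

print(f"z critical (alpha = .05, two-tailed): {norm.ppf(0.975):.3f}")
for df in (2, 5, 10, 30, 100, 1000):
    print(f"t critical with df = {df:>4}: {t.ppf(0.975, df):.3f}")
```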

  6. Degrees of Freedom For the t distribution, degrees of freedom are always a simple function of the sample size, e.g., (N-1). One way of explaining df is that if we know the total or mean and all but one of the scores, the last score is not free to vary; it is fixed by the other scores, leaving N-1 of them free. For example, if 4+3+2+X = 10, then X = 1.

  7. Confidence Intervals in t With a small sample size, we compute the same numbers as we did for z, but we compare them to the t distribution instead of the z distribution. In the worked example the obtained t of 1 is less than the critical t of 2.064 (cf. z = 1.96), so the result is n.s. Interval = M ± tcrit · SE; here the interval is about 9 to 13 and contains the hypothesized value of 10, so n.s.
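
A minimal one-sample t sketch along the same lines; the data and the hypothesized mean of 10 are made up to mirror the "interval of about 9 to 13" idea, not taken from the slides.

```python
# One-sample t interval and test on hypothetical data.
import math, statistics
from scipy.stats import t, ttest_1samp

x = [9, 12, 13, 10, 11, 14, 9, 12, 11, 10]    # made-up scores
n = len(x)
mean = statistics.mean(x)
se = statistics.stdev(x) / math.sqrt(n)       # standard error of the mean
t_crit = t.ppf(0.975, df=n - 1)               # two-tailed .05 critical value, df = N - 1
print(f"95% t interval: {mean - t_crit*se:.2f} to {mean + t_crit*se:.2f}")

stat, p = ttest_1samp(x, popmean=10)          # test H0: mu = 10
print(f"t = {stat:.2f}, p = {p:.3f}")
```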

  8. Review • How are the distributions of z and t related? • Given a t distribution with a known df and a chosen alpha, construct a rejection region. Draw a picture to illustrate.

  9. Difference Between Means (1) • Most studies have at least 2 groups (e.g., M vs. F, Exp vs. Control) [1- vs. 2-sample]. • If we want to know the difference in population means, the best guess is the difference in sample means. • Unbiased: E(M1 - M2) = μ1 - μ2. • Variance of the Difference: σ²(M1 - M2) = σ1²/N1 + σ2²/N2. • Standard Error: SE(M1 - M2) = √(σ1²/N1 + σ2²/N2).

  10. Difference Between Means (2) • We can estimate the standard error of the difference between means from the sample SDs: SE(diff) = √(SD1²/N1 + SD2²/N2). • For large samples, we can use z: z = (M1 - M2) / SE(diff).
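
A sketch of the standard error of the difference and the large-sample z statistic, using hypothetical data (not from the slides).

```python
# Standard error of the difference between means and the large-sample z.
import math, statistics

group1 = [23, 25, 28, 22, 26, 27, 24, 25]     # made-up data
group2 = [20, 22, 21, 24, 19, 23, 22, 21]

m1, m2 = statistics.mean(group1), statistics.mean(group2)
v1, v2 = statistics.variance(group1), statistics.variance(group2)
n1, n2 = len(group1), len(group2)

se_diff = math.sqrt(v1 / n1 + v2 / n2)        # standard error of the difference
z = (m1 - m2) / se_diff                       # large-sample z for the mean difference
print(f"mean difference = {m1 - m2:.2f}, SE = {se_diff:.2f}, z = {z:.2f}")
```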

  11. Independent Samples t (1) • Looks just like z: t = (M1 - M2) / SE(diff), except the standard error is estimated from the samples. • df = (N1 - 1) + (N2 - 1) = N1 + N2 - 2. • If the SDs are assumed equal, the pooled variance estimate is the weighted average of the two sample variances: SD²pooled = [(N1 - 1)·SD1² + (N2 - 1)·SD2²] / (N1 + N2 - 2). • Pooled Standard Error of the Difference (computed): SE(diff) = √(SD²pooled/N1 + SD²pooled/N2).
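
A minimal pooled-variance independent-samples t sketch in Python with hypothetical data; scipy's equal_var=True option corresponds to the pooled estimate described above.

```python
# Pooled-variance independent-samples t-test (hypothetical data).
from scipy import stats

group1 = [23, 25, 28, 22, 26, 27]
group2 = [20, 22, 21, 24, 19, 23]

# equal_var=True requests the pooled (equal-variance) version, df = N1 + N2 - 2
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=True)
df = len(group1) + len(group2) - 2
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}")
```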

  12. Independent Samples t (2) The worked example compares the obtained t to tcrit = t(.05, 10) = 2.23, the two-tailed critical value with df = 10.

  13. Assumptions • The t-test is based on assumptions of normality and homogeneity of variance. • You can test for both of these (make sure you learn the SAS methods). • As long as the samples in each group are large and nearly equal in size, the t-test is robust, that is, still good, even though the assumptions are not met.
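
The slide points to SAS for these checks; purely as an illustration of the same idea, here is a Python sketch using scipy's Shapiro-Wilk and Levene tests on hypothetical data.

```python
# Checking the t-test assumptions: normality within groups and equal variances.
from scipy import stats

group1 = [23, 25, 28, 22, 26, 27, 24, 25]     # made-up data
group2 = [20, 22, 21, 24, 19, 23, 22, 21]

# Normality within each group (Shapiro-Wilk)
for name, g in (("group1", group1), ("group2", group2)):
    w, p = stats.shapiro(g)
    print(f"Shapiro-Wilk for {name}: W = {w:.3f}, p = {p:.3f}")

# Homogeneity of variance (Levene's test)
stat, p = stats.levene(group1, group2)
print(f"Levene: W = {stat:.3f}, p = {p:.3f}")
```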

  14. Review • What is the standard error of the difference between means? What are the factors that influence its size? • What are the assumptions of the t-test?

  15. Strength of Association (1) • The scientific purpose is to predict or explain variation. • Our variable Y has some variance that we would like to account for. There are statistical indexes of how well our IV accounts for variance in the DV. These are measures of how strongly or closely associated our IVs and DVs are. • Variance accounted for: one common index is sketched below.
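
The slide's own formula is not reproduced in the transcript. One commonly used index in the t-test setting (an assumption here, not necessarily the one the slides use) is eta-squared computed from t and its df:

```python
# Variance accounted for from a t-test: eta^2 = t^2 / (t^2 + df).
# The t value and df below are made up, purely for illustration.
t_stat, df = 2.5, 22
eta_squared = t_stat**2 / (t_stat**2 + df)
print(f"eta-squared = {eta_squared:.3f}")     # about 0.22
```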

  16. Strength of Association (2) • How much of the variance in Y is associated with the IV? Compare the first (left-most) curve with the curve in the middle, and then with the one on the right. In each case, how much of the variance in Y is associated with the IV, group membership? More in the second comparison. As the mean difference gets bigger, so does the variance accounted for.

  17. Association & Significance • Power increases with association (effect size) and sample size. • Effect size: d = (μ1 - μ2)/σ, the mean difference in standard-deviation units. • Significance = effect size × sample size. Increasing the sample size does not increase the effect size (strength of association); it decreases the standard error, so power is greater. This is widely misunderstood.
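
A small sketch of "significance = effect size × sample size": for two equal groups of size n, t is approximately d·√(n/2), so t grows with n even when the effect size d is fixed (the values below are made up).

```python
# t grows with sample size even when the effect size is held constant.
import math

d = 0.5                                       # fixed effect size (made-up value)
for n in (10, 25, 50, 100, 200):
    t_approx = d * math.sqrt(n / 2)           # approximate t for two groups of size n
    print(f"n per group = {n:>3}: approximate t = {t_approx:.2f}")
```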

  18. Estimating Power (1) • If the null is false, the statistic is no longer distributed as t, but rather as noncentral t. This makes power computation difficult. • Hays (p. 334) presents an alternative method based on strength of association, that is, on the proportion of variance accounted for.
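
For comparison with Hays's shortcut, here is a sketch of an exact power calculation using scipy's noncentral t distribution; the effect size, group size, and alpha are made-up values, and this is not Hays's method.

```python
# Power of a two-sample t-test via the noncentral t distribution.
import math
from scipy.stats import t, nct

d, n, alpha = 0.5, 50, 0.05                   # effect size, n per group, alpha (assumed)
df = 2 * n - 2
ncp = d * math.sqrt(n / 2)                    # noncentrality parameter
t_crit = t.ppf(1 - alpha / 2, df)             # two-tailed critical value

# Power: probability that the noncentral t falls in the rejection region
power = 1 - nct.cdf(t_crit, df, ncp) + nct.cdf(-t_crit, df, ncp)
print(f"power = {power:.3f}")
```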

  19. Estimating Power (2) • Based on Hays's method, we find: Suppose alpha is .01, the desired power is .90, and the variance accounted for is .25. What is n per group? It's 24 (23?) per group, or 48 altogether (Hays says to add one more person for luck: "it's wise"). Same problem, but with variance accounted for of .10: we need 68 per group. Same again, but with .15: we need 43 per group. What if alpha = .05?

  20. Dependent t (1) Observations come in pairs: brother and sister, or a repeated measure on the same person. The problem is solved by finding the differences between pairs, Di = yi1 - yi2, and testing whether the mean difference is zero with a one-sample t. df = N(pairs) - 1.
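
A minimal dependent (paired) t sketch with hypothetical before/after scores; scipy's ttest_rel is equivalent to a one-sample t on the pair differences.

```python
# Dependent (paired) samples t-test on hypothetical repeated measures.
from scipy import stats

before = [12, 15, 11, 14, 13, 16, 12, 15]
after  = [14, 16, 13, 15, 15, 18, 13, 16]

# Equivalent to a one-sample t-test on the pair differences, df = N(pairs) - 1
t_stat, p_value = stats.ttest_rel(before, after)
df = len(before) - 1
print(f"t({df}) = {t_stat:.2f}, p = {p_value:.3f}")
```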

  21. Dependent t (2) In the worked example, df = 2 (three pairs) and the result is n.s.

  22. Review • What are the main uses of the t-test? • Give a concrete example of the use of the {one sample, independent samples, dependent samples} t-test. State why the particular test is the right one to choose. • What is the importance of variance accounted for?
