
Review of the Basic Logic of NHST




Presentation Transcript


  1. Review of the Basic Logic of NHST • Significance tests are used to decide whether to reject the null hypothesis. • This is done by studying the sampling distribution for a statistic. • If the probability of observing your result is < .05 when the null is true, reject the null. • If the probability of observing your result is > .05, do not reject (i.e., fail to reject) the null. • There are many kinds of significance tests for different kinds of statistics. Today we're going to discuss t-tests.

  2. t-test • A common situation in psychology is when an experimenter randomly assigns people to an “experimental” group or a “control” group to study the effect of the manipulation on a continuous outcome. • In this situation, we are interested in the mean difference between the two conditions. • The significance test used in this kind of scenario is called a t-test. A t-test is used to determine whether the observed mean difference is within the range that would be expected if the null hypothesis were true.

  3. t-test example • We are interested in whether caffeine consumption improves people’s happiness. • We randomly assign 25 people to drink decaf and 25 people to drink regular coffee. • Subsequently we measure how happy people are. • Note: The independent variable is categorical (you’re in one group or the other), and there are only two groups. • The dependent variable is continuous—we measure how happy people are on a continuous metric.

  4. t-test example • Let's say we find that the control group has a mean score of 3 (SD = 1) and the experimental group has a mean score of 3.2 (SD = .9). • Thus, there is a .20 difference between the two groups. [3.2 – 3.0 = .2] • Two possibilities: • The .2 difference between groups is due to sampling error, not a real effect of caffeine. In other words, the two samples are drawn from populations with identical means and variances. • The .2 difference between groups is due to the effect of caffeine, not sampling error. In other words, the two samples are drawn from populations with different means (and maybe different variances).

  5. t-test example • We need to know how likely it is that we would observe a difference of .20 or larger if the null hypothesis is true. • How can we do this? • We can construct a sampling distribution of mean differences, assuming the null hypothesis is true. • We can use this distribution to determine how large a mean difference we would expect to observe, on average, when the population mean difference is zero.
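The null sampling distribution described above can be approximated by simulation. The sketch below uses only the Python standard library; the function name and its default parameters are illustrative, with the population values borrowed from the deck's caffeine example (N = 25 per group, SD = 1).

```python
import random
import statistics

random.seed(0)  # reproducible draws

def simulate_null_mean_diffs(n=25, mu=3.0, sigma=1.0, reps=10_000):
    """Draw two samples of size n from the SAME normal population
    (i.e., the null hypothesis is true) and record the mean
    difference each time."""
    diffs = []
    for _ in range(reps):
        group_a = [random.gauss(mu, sigma) for _ in range(n)]
        group_b = [random.gauss(mu, sigma) for _ in range(n)]
        diffs.append(statistics.mean(group_a) - statistics.mean(group_b))
    return diffs

diffs = simulate_null_mean_diffs()
# Centered near 0, with SD near sqrt(1/25 + 1/25) ≈ .283
print(round(statistics.mean(diffs), 3), round(statistics.stdev(diffs), 3))
```

The simulated differences cluster around zero even though both populations have a mean of 3, which is exactly the point the next slides make analytically.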

  6. t-test example • As before, then, we need to specify (a) the mean of the sampling distribution and (b) the SD of the sampling distribution (the standard error, SE). • [Recall] For a sampling distribution of means • the mean is equal to the mean of the population • the SD is SE = σ / √N
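The recalled formula is a one-liner in code. This is a minimal sketch; `standard_error` is an illustrative helper (not a library function), applied to the example's two groups.

```python
import math

def standard_error(sd, n):
    """SD of the sampling distribution of the mean: sd / sqrt(n)."""
    return sd / math.sqrt(n)

# Using the example's groups (N = 25 each):
print(round(standard_error(1.0, 25), 2))   # control group SE: 0.2
print(round(standard_error(0.9, 25), 2))   # experimental group SE: 0.18
```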

  7. t-test example • For a sampling distribution of mean differences • The mean is 0 • The SD or SE is SED = √(SE1² + SE2²) • Why is the mean 0? If the two groups are drawn from populations with identical means, then the difference between those two population means, on average (i.e., as we sample repeatedly from the two populations), is zero: μ1 – μ2 = 0. This is true regardless of what the actual means are! • Notice that this equation is pretty simple if we break it down. According to this equation, the SE of the sampling distribution of mean differences is a combination of the SEs of the two sampling distributions for each sample mean, assuming the null hypothesis is true.

  8. t-test example • Technical note: • These SEs require an estimate of the population variance. Typically, we would use the sample variance (computed with N – 1 in the denominator) to estimate this quantity. • Here, however, we have two estimates of the population variance (assuming the null is true): one derived from the control group and one from the experimental group. • Typically these two estimates of the population variance are pooled, or averaged, to obtain a single estimate of the population variance.
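Pooling weights each sample variance by its degrees of freedom (N – 1). A minimal sketch of that averaging, with `pooled_variance` as an illustrative helper name:

```python
def pooled_variance(s1, s2, n1, n2):
    """Average the two sample variances, weighting each by its
    degrees of freedom (n - 1)."""
    return ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)

# Example groups: SDs of 1 and .9, 25 people each
print(round(pooled_variance(1.0, 0.9, 25, 25), 3))  # 0.905
```

With equal group sizes, as here, this reduces to a simple average of the two variances: (1 + .81) / 2 = .905.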

  9. Let’s, then, find SED • Let’s assume that σ̂1 = 1 and σ̂2 = .9 (the sample SDs), with N = 25 per group. Then • SE1 = 1/√25 = .20 and SE2 = .9/√25 = .18 • and SED = √(SE1² + SE2²) = √(.04 + .0324) ≈ .27
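These numbers can be checked in a few lines of Python (the SDs and group size come from the deck's example; variable names are illustrative):

```python
import math

n = 25
se1 = 1.0 / math.sqrt(n)            # control group SE: .20
se2 = 0.9 / math.sqrt(n)            # experimental group SE: .18
se_d = math.sqrt(se1**2 + se2**2)   # SE of the mean difference
print(round(se_d, 2))  # 0.27
```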

  10. t-test example • What does this tell us? On average, if we are taking two samples of size 25 from populations with identical means and variances [note: we’re stating the “facts” about the population(s) and the sampling process], we expect to observe a mean difference of zero; but, recognizing that there is sampling error at work, we might routinely observe mean differences as large as about .27 (one SED).

  11. Now, armed with info about SED, we are in a position to evaluate the size of the mean difference we observed (M1 – M2) against the expected error that we might observe if the null is true (SED). • This ratio, t = (M1 – M2) / SED, is called a t statistic. When t is large, the mean difference we observed is large relative to the size of the difference we might expect to observe if the null hypothesis is true. If this ratio is small, then the mean difference we observed is roughly what we might expect to observe if the null is true.
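Putting the example's numbers through this ratio, a minimal sketch (the SED value is the one the deck's earlier slides imply for SDs of 1 and .9 with N = 25 per group):

```python
mean_diff = 3.2 - 3.0   # M1 - M2 from the example
se_d = 0.2691           # SE of the mean difference for these groups
t = mean_diff / se_d
print(round(t, 2))  # 0.74: small relative to the expected error
```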

  12. t-test example • What counts as “large” and “small”? • Importantly, the t-statistic has a p-value associated with it. • The p-value quantifies the probability of observing a t statistic as extreme as, or more extreme than, the one observed, given that the null hypothesis is true. • Like all significance tests, when p < .05, we reject the null hypothesis. When p > .05, we do not reject the null hypothesis.

  13. t-test example • How do you find the p-value associated with a t-statistic? • As a heuristic, if the t-value is more extreme than ±1.96 (i.e., |t| > 1.96), the corresponding p-value is less than 5%. • You can use computers or tables in books to find the exact p-value associated with a t-statistic.
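For large samples (as the next slide notes, the t-distribution approaches the normal), an approximate two-tailed p-value can be computed with just the standard library. `p_value_normal_approx` is an illustrative helper, not a library function; exact t-based p-values would come from statistical software or tables.

```python
import math

def p_value_normal_approx(t):
    """Two-tailed p-value treating the t-score as a z-score
    (a large-sample approximation)."""
    # erfc gives the normal tail area: erfc(|t|/sqrt(2)) = 2 * P(Z > |t|)
    return math.erfc(abs(t) / math.sqrt(2))

print(round(p_value_normal_approx(0.74), 3))  # well above .05
print(round(p_value_normal_approx(1.96), 3))  # ≈ .05, matching the heuristic
```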

  14. As sample size gets increasingly large, the t-distribution assumes the shape of a normal distribution. Hence, what we know about the normal distribution applies here (you can treat the t-score as a z-score). [Figure: normal curve with the familiar areas per SD band: 34%, 14%, 2%]

  15. Summary of the steps • Find the mean difference between the two samples. • Use the pooled estimate of the population variance to find the SE for each sample mean’s sampling distribution. • Use this info to find SED. • Dividing the mean difference by SED gives a t-statistic. • If t > 1.96 or t < -1.96, then p < .05.
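The steps above fit in one short function. This is a sketch under the deck's assumptions (pooled variance, the ±1.96 heuristic); the function name is illustrative.

```python
import math

def two_sample_t(m1, s1, n1, m2, s2, n2):
    """Steps from the summary: mean difference, pooled variance,
    SE of the difference, then the t ratio."""
    mean_diff = m1 - m2
    var_pooled = ((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2)
    se_d = math.sqrt(var_pooled / n1 + var_pooled / n2)
    return mean_diff / se_d

# The caffeine example: t is well inside ±1.96, so p > .05
t = two_sample_t(3.2, 0.9, 25, 3.0, 1.0, 25)
print(round(t, 2))  # 0.74
```

So, by the deck's own decision rule, the .2-point happiness difference is about what sampling error alone would produce, and the null is not rejected.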

  16. [Diagram: two populations and the two samples drawn from them] • Population for control group; population for experimental group. Under the null hypothesis, these two populations have identical means and variances. • The two samples drawn from them may or may not have identical means and variances because of sampling error; hence, one sample mean might be .2 points higher than the other.
