
The t-test (happy happy joy!)





Presentation Transcript


  1. The t-test (happy happy joy!)

  2. Today's programme • Summary from last week • Introducing the free experiment (week 10) • Parametric statistics • T-test • Exercises • The t-test in SPSS • Group problem solving on t-tests

  3. Updates • Changes to the course plan – new course book uploaded • Changes to provide more time for parametric statistics • Compendium is in press – should be available early next week from the book store at KUA • Emilie will be guest teacher for the next 2 weeks while I am on paternity leave • For week 12, regression and correlation, 100+ pages in compendium: No need to read all of it – read the introductions to each chapter and get a feel for the first simple examples – multiple regression and correlation are for future reference

  4. The Free Experiment • The purpose of this exercise is to give you the chance to design your own experiment, run it, analyze the results and write a report about your experience • You can experiment with anything! Pick something you care about, something you wonder about: • What is the best way to make popcorn? • Why do my plants die? • Is chat program X faster than program Y? • How long should rice boil for maximum tenderness? • Does my daughter have a favorite toy? • Which beer tastes the best? • Are iron or aluminium pans better for boiling water? • What is the ideal speed for walking with a coffee-cup? • Etc. etc.

  5. The Free Experiment • Example: Designing a 2×2×2 factorial experiment to determine the effect of three variables on the amount of popcorn produced • Variables: Brand of popcorn (Netto, Irma), size of batch (100 g, 200 g), popcorn-to-oil ratio (low, high) • Looking into e.g. whether more expensive popcorn is worth the price in terms of the amount produced • What combination of variables produces the best result in terms of volume?
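As a side note, the full set of experimental conditions in a factorial design like the popcorn example can be enumerated mechanically. A minimal sketch (the variable names are invented for illustration; the levels are the ones from the slide):

```python
# Sketch: enumerating the conditions of the popcorn design above.
# Three two-level variables give 2 x 2 x 2 = 8 conditions in total.
from itertools import product

brands = ["Netto", "Irma"]
batch_sizes_g = [100, 200]
oil_ratios = ["low", "high"]

conditions = list(product(brands, batch_sizes_g, oil_ratios))
for brand, batch, ratio in conditions:
    print(f"brand={brand}, batch={batch} g, oil ratio={ratio}")

print(len(conditions))  # 8 conditions
```

Running each condition (ideally more than once) gives the data needed to compare the variables' effects.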

  6. The Free Experiment • Having picked your topic, sit down and define your hypothesis • Consider: What are the IV and DV? How many levels of the IV? • What is the expected causal relationship? • Directional or non-directional hypothesis? What argument supports this? • Then design an experiment to test your hypothesis • Choose something that is easy to re-test – you may find that the design needs to be revised

  7. The Free Experiment • Design experiments in your groups - on the website is a list of 101 examples of student experiment projects as inspiration • You have 4 weeks to complete the experiment and write a report about what was done and what was learned, problems encountered, how to improve the experiment, etc. • Experiments can be the ”home type” or the ”laboratory type” • As you progress and encounter questions, raise them in the class where they can be discussed, or contact me/Emilie

  8. The Free Experiment • The only formal requirement is that you MUST use statistics in your analysis! • This should be a piece of cake – just identify the type of experiment you do, then use the appropriate statistical test ... • Similar to the Mousepad experiment – but with more statistics as appropriate • E.g.: 2 groups with 2 levels of IV? Use t-test • Etc.

  9. Timeline • Week 10: Prepare topic • Week 11+12: Run experiment • Week 13: Prepare report and presentation of experiment • Week 14: Present experiment and results (5 minutes) • There is no time set aside for this work in the exercise hours (there may be some time in exercises, but do not plan for it) • No page limit on the report – you are expected to use the standard report template and add content as appropriate. Use the textbook + Mousepad experiment reports as a guide.

  10. Parametric statistics

  11. From last week • 1. Sample means vary, and hence differences between sample means and the population mean vary. • 2. Small differences are likely to occur by chance; large differences are not (but occasionally do). • 3. Small difference -> retain null hypothesis (difference has occurred by chance). Large difference -> reject null hypothesis in favour of experimental hypothesis (difference has not occurred by chance). • 4. "Large" is a difference that is likely to occur by chance only 5% of the time or less (p < .05) – a compromise between Type 1 and Type 2 errors. • 5. Directional hypothesis versus non-directional hypothesis.

  12. Parametric statistics • Different types of statistics • Descriptive statistics: Describing a single sample and the population it came from • Inferential statistics: Answering research questions – making inferences about the world • Parametric statistics: a family of inferential statistical tests

  13. Parametric statistics • Parametric statistics work on the mean -> All data must be interval or ratio level data • Parametric tests also make assumptions about the variance between groups or conditions

  14. Parametric statistics • For independent-measures (between-groups) designs, we assume that the variance in one condition is the same as in the other: homogeneity of variance • The spread of scores in each sample should be roughly similar • Tested using Levene's test • For repeated-measures (within-subjects) designs, we operate with the sphericity assumption • Tested using Mauchly's test • Basically the same thing: homogeneity of variance
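Outside SPSS, Levene's test is also available in Python. A minimal sketch using `scipy.stats.levene` (the two score samples are made up for illustration):

```python
# Sketch: testing homogeneity of variance with Levene's test.
# The two groups' scores are invented example data.
from scipy import stats

group_1 = [12.1, 11.8, 12.5, 12.0, 11.9, 12.3, 12.2, 11.7]
group_2 = [11.5, 12.8, 12.2, 11.0, 13.1, 12.4, 11.9, 12.6]

w_stat, p_value = stats.levene(group_1, group_2)
# p > .05 -> the variances do not differ significantly, so the
# homogeneity-of-variance assumption is reasonable for these data
print(f"W = {w_stat:.3f}, p = {p_value:.3f}")
```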

  15. Parametric statistics • We also assume our data come from a population with a normal distribution • We can test how similar a distribution is to the normal distribution using the Kolmogorov-Smirnov test (the vodka test) and the Shapiro-Wilk test • The tests compare the set of scores in the sample to a normally distributed set of scores with the same mean and standard deviation • If the test is non-significant (p > 0.05), the distribution of the sample is NOT significantly different from a normal distribution (i.e. we can treat it as normal) • If p < 0.05, the distribution of the sample is significantly different from normal (e.g. positively or negatively skewed).
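The same normality check can be run in Python. A minimal sketch using `scipy.stats.shapiro` on a randomly generated sample (the data here are simulated, not from any real study):

```python
# Sketch: checking normality with the Shapiro-Wilk test.
# The sample is drawn from a normal distribution, so we expect
# the test NOT to flag a significant deviation from normality.
import random
from scipy import stats

random.seed(1)
sample = [random.gauss(100, 15) for _ in range(30)]

w_stat, p_value = stats.shapiro(sample)
# p > .05 -> sample not significantly different from normal;
# p < .05 -> distribution deviates significantly from normal
print(f"W = {w_stat:.3f}, p = {p_value:.3f}")
```

`scipy.stats.kstest` works similarly for the Kolmogorov-Smirnov test.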

  16. Parametric statistics • We can run Kolmogorov-Smirnov and Shapiro-Wilk tests in SPSS • The most important is the Kolmogorov-Smirnov test (K-S test) • SPSS produces an output that includes the test statistic itself (D), the degrees of freedom (df) (= the sample size) and the significance value of the test (sig.) • If the significance of the K-S test is less than .05, the distribution deviates significantly from the normal

  17. The t-test

  18. The t-test • We have run an experiment with two groups (e.g. control and experimental groups) • We have sample data, and we can use descriptive statistics to calculate the means, SDs, etc. • But how do we find out if the two samples are significantly different? I.e.: • Was our experiment a success? • Did our manipulation of the IV cause a variation in the DV larger than the random variance?

  19. The t-test • The simplest experimental design is to have two conditions: an "experimental" condition in which subjects receive some kind of treatment, and a "control" condition in which they do not. • We want to compare performance in the two conditions. • We use a t-test to help us decide whether the difference between the conditions is "real" or whether it is due merely to chance fluctuations • The t-test enables us to decide whether the mean of one condition is really different from the mean of another condition

  20. The t-test • We use the t-test in the simplest experimental condition: 2 groups to compare • Sample-sample (or sample-population) • The test statistic is called ”t” – it has its own frequency distribution which varies with sample size • There are two types of t-test • Independent t-test: 2 groups with different participants [independent measures design/between-groups] • Dependent t-test: 2 groups with same participants [repeated measures design/within-subject]

  21. The t-test • In both cases, we have one independent variable • (The thing we manipulate in our experiment), with two levels (the two different conditions of our experiment) • Small mouse pad or big mouse pad • We have one dependent variable • (The thing we actually measure) • Task completion time in seconds

  22. Examples • 1) Differences between extraverts and introverts in performance on a memory test. • The independent variable (I.V.) is "personality type", with two levels - introversion and extraversion - and the dependent variable (D.V.) is the memory test score • An independent t-test would be appropriate here • 2) The effects of alcohol on reaction-time performance. • The I.V. is "alcohol consumption", with two levels - drunk and sober - and the D.V. is reaction-time performance • A dependent t-test could be used here; each subject's reaction time could be measured twice, once while they were drunk and once while they were sober
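The two examples above map directly onto the two t-test variants available in Python's scipy library; a minimal sketch (all scores are invented for illustration):

```python
# Sketch: the two t-test variants, mirroring the examples above.
from scipy import stats

# 1) Independent t-test: different participants in each group
#    (e.g. introverts vs extraverts on a memory test)
introverts = [14, 12, 15, 11, 13, 12, 14]
extraverts = [10, 11, 9, 12, 10, 11, 10]
t_ind, p_ind = stats.ttest_ind(introverts, extraverts)

# 2) Dependent t-test: the same participants in both conditions
#    (e.g. reaction times in seconds, measured drunk and sober)
drunk = [0.52, 0.61, 0.49, 0.58, 0.55, 0.60]
sober = [0.41, 0.50, 0.44, 0.47, 0.43, 0.49]
t_dep, p_dep = stats.ttest_rel(drunk, sober)

print(f"independent: t = {t_ind:.2f}, p = {p_ind:.3f}")
print(f"dependent:   t = {t_dep:.2f}, p = {p_dep:.3f}")
```

Picking `ttest_ind` vs `ttest_rel` is exactly the between-groups vs within-subjects decision from the slide.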

  23. Rationale of the t-test • There are some considerations underlying the t-test which we need to be aware of to avoid using the test blindly • Understanding how statistical tests operate is important – we need to know how tests operate in order to use them correctly • Rationale of the t-test: • 1) We have two sample means – they differ to some extent • Given two sample means, we want to find out if they come from two populations with the same mean (the same population), or from two populations with different means • 2) Under the null hypothesis, the population means are identical; under the experimental hypothesis, the means are significantly dissimilar

  24. [Diagram] Interpretation under the null hypothesis: the samples come from the same population – a single population mean, with the mean of sample 1 and the mean of sample 2 scattered around it. Interpretation under the experimental hypothesis: the samples come from different populations – the mean of sample 1 sits near the mean of population 1, and the mean of sample 2 near the mean of population 2.

  25. Rationale of the t-test • 3) In the t-test, we compare the difference we have obtained with the difference we would expect (we assume no difference – the null hypothesis) • If we find a big difference between the means, either • 1) we have atypical samples [by random chance, we got two dissimilar samples], or • 2) the samples are from different populations because their means are different [our experiment had an effect] • The bigger the difference in sample means, the bigger the chance of the null hypothesis being rejected

  26. Rationale of the t-test • 4) Because samples can be different by random chance, we cannot just work with the difference of the means • We need some way of calculating the odds of two samples being dissimilar by random chance • We can then compare our observed difference in sample means with the chance of such a difference occurring

  27. Rationale of the t-test • I.e., we need to know the frequency distribution of sample mean differences • For example, say the difference in our two sample means is "243" – we need to know how likely a difference of this size is in our population • The frequency distribution of sample mean differences can tell us how likely it is that "243" is the difference between two sample means – e.g. "X%" • If the chance of the difference occurring is small, there is a good chance the difference in sample means is significant.

  28. Population = 10 M = 10 M = 9 M = 11 M = 10 M = 9 M = 8 M = 12 M = 11 M = 10 Once more ... • Recall: Sample means from a population will be normally distributed: • -> higher chance of sample means being similar than not • However sometimes samples do not have similar means: • -> large difference in sample means by chance alone • we need to account for this when figuring out if samples are different!

  29. [Diagram] Sampling distribution of differences between means: a new type of distribution. Samples are drawn from Population I (mean μ1) and Population II (mean μ2), and the distribution plots the frequency of D = X̄1 − X̄2 values. Note: we want to figure out if Pop. 1 and Pop. 2 are the same.

  30. And again ... • I.e. we take all possible sample means and subtract all possible sample means, and map the distribution • The distribution is of course normally distributed • The SD of this distribution = the SE of differences [SE because we are dealing with the distribution of sample means – we call it SD when we have just one sample]

  31. And again ... • -> small SE means most pairs of samples from a population will generally have similar means (difference between sample means is small) • -> large SE means that sample means can deviate a lot from population mean, and differences between pairs of samples can be large by chance alone

  32. And again ... • The SE of the sample-mean-difference frequency distribution gives us an estimate of the extent to which we would expect sample means to be different by chance alone • A measure of unsystematic variance in our experiment • The t-test is simply the difference between means as a function of the degree to which those means would differ by chance alone • Note: If large differences are COMMON in the means of samples from a population, because the normal distribution of sample means is flat, the difference between the samples needs to be correspondingly larger to be significant

  33. Rationale of the t-test • t = (observed difference between sample means − difference between means under the null hypothesis) / estimate of the standard error (SE) of the difference between the two sample means (the unsystematic variance)

  34. Rationale of the t-test • Recall: Two types of t-test • Independent t-test: 2 groups with different participants [independent measures design/between-groups] • Dependent t-test: 2 groups with same participants [repeated measures design/within-subject]

  35. Dependent t-test

  36. Repeated measures/dependent t-test • The dependent t-test is used when the same participants are used in both experimental conditions

  37. Repeated measures experiment, to examine the effect of variable A on variable B: • N subjects are selected from the population • Level 1 of independent variable A is administered; subjects are measured on dependent variable B (X̄1 and s1 are computed from these data) • The same subjects are then given Level 2 of independent variable A; subjects are measured on dependent variable B (X̄2 and s2 are computed from these data) • Statistics are computed and a hypothesis test is carried out to decide whether the difference between X̄1 and X̄2 is due to sampling variability or to the effect of A on B • Test your hypothesis: H0: μ1 – μ2 = 0; H1: μ1 – μ2 ≠ 0

  38. Example • Experiment on the effects of alcohol on task performance (time in seconds). • Measure the time taken to perform the task for subjects when drunk, and when (the same subjects are) sober. • Null hypothesis: Alcohol has no effect on time taken: variation between the drunk sample mean and the sober sample mean is due to sampling variation alone. • i.e. the drunk and sober performance times are samples from the same population.

  39. [Diagram] Quick reminder: sampling distribution of differences between means. Population for level 1 of A (with alcohol, mean μ1) and population for level 2 of A (without alcohol, mean μ2); the distribution plots the frequency of D = X̄1 − X̄2 values around μD.

  40. [Table] Times (in seconds) of participants to complete a motor coordination task (data table not reproduced in this transcript)

  41. The three components of the t formula: • the mean difference between scores in our two samples (should be close to zero if there is no difference between the two conditions) • the predicted average difference between scores in our two samples (usually zero, since we assume the two samples do not differ) • the estimated standard error of the mean difference (a measure of how much the mean difference might vary randomly from one occasion to the next).

  42. For an independent t-test (2 groups of different subjects), we just subtract sample mean 1 from sample mean 2

  43. • 1. Add up the differences: ΣD • 2. Find the mean difference: D̄ = ΣD / N

  44. 3a. Estimate of the population standard deviation of the differences: s = √( Σ(D − D̄)² / (N − 1) ) • We need this to calculate the standard error of the sample mean differences

  45. Breaking this calculation down in steps:

  46. 3b. Estimate of the population standard deviation

  47. 4. Estimate of the population standard error (the SE of the population of differences between sample means): SE = s / √N • Recall: the SE is the SD of the distribution of sample means (here it is the standard error of the differences between two sample means – our difference frequency distribution)

  48. 5. Hypothesised difference between the sample means Our null hypothesis is usually that there is no difference between the two sample means. (In statistical terms, that they have come from two identical populations): D(hypothesised) = 0 6. Work out t: 7. "Degrees of freedom" (df) are the number of subjects minus one: df = N - 1 = 10 - 1 = 9
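Steps 1-7 can be carried out in a few lines of code. The raw data table from the slides is not reproduced in this transcript, so the difference scores below (drunk time minus sober time, N = 10) are made up for illustration; the calculation itself follows the steps above exactly:

```python
# Sketch of steps 1-7 above. The difference scores are invented
# example data, not the values from the slides' table.
import math

diffs = [1.2, -0.4, 2.1, 0.8, 1.5, -0.2, 1.9, 0.7, 1.1, 0.3]
n = len(diffs)

# 1-2. Add up the differences and find the mean difference
mean_diff = sum(diffs) / n

# 3. Estimate the population standard deviation of the differences
sd = math.sqrt(sum((d - mean_diff) ** 2 for d in diffs) / (n - 1))

# 4. Standard error of the mean difference
se = sd / math.sqrt(n)

# 5-6. Hypothesised difference is 0 under the null hypothesis; work out t
t = (mean_diff - 0) / se

# 7. Degrees of freedom
df = n - 1

print(f"t({df}) = {t:.3f}")
```

The resulting t is then compared against the critical value for df = N − 1, as in step 8.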

  49. 8. Find t-critical value of t from a table (at the back of statistics books; also on the course website).

  50. 8. Find the critical value of t from the table: • "Two-tailed test": If we are predicting a difference between Level 1 and Level 2, find the critical value of t for a "two-tailed" test. With df = 9, critical value = 2.26. • TWO-tailed: t-observed (2.183) is smaller than t-critical (2.26) -> "There is no significant difference between the times taken to complete the task with or without alcohol", t(9) = 2.183, p > 0.05 • "One-tailed test": If we are predicting that Level 1 is bigger than Level 2 (or that 1 is smaller than 2), find the critical value of t for a "one-tailed" test. For df = 9, critical value = 1.83. • ONE-tailed: t-observed (2.183) is larger than t-critical (1.83) -> "The time taken to complete the task is significantly longer with alcohol than without", t(9) = 2.183, p < 0.05
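Instead of a printed table, the critical values can be looked up programmatically. A minimal sketch using `scipy.stats.t.ppf` for α = .05 and df = 9, reproducing the table values quoted above:

```python
# Sketch: critical t values from the t distribution (alpha = .05, df = 9),
# instead of reading them from the table at the back of a statistics book.
from scipy import stats

df = 9
t_two_tailed = stats.t.ppf(1 - 0.05 / 2, df)  # two-tailed critical value
t_one_tailed = stats.t.ppf(1 - 0.05, df)      # one-tailed critical value

print(f"two-tailed: {t_two_tailed:.2f}")  # 2.26, as in the table
print(f"one-tailed: {t_one_tailed:.2f}")  # 1.83, as in the table
```

An observed t of 2.183 with df = 9 thus falls below the two-tailed cutoff but above the one-tailed one, exactly as the slide concludes.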
