
Chapter 22 Using Inferential Statistics to Test Hypotheses



Presentation Transcript


  1. Chapter 22Using Inferential Statistics to Test Hypotheses

  2. Inferential Statistics • A means of drawing conclusions about a population (i.e., estimating population parameters), given data from a sample • Based on laws of probability

  3. Sampling Distribution of the Mean • A theoretical distribution of means for an infinite number of samples drawn from the same population • Is normally distributed (or approximately so, by the central limit theorem, when the sample size is reasonably large) • Has a mean that equals the population mean • Has a standard deviation (SD) called the standard error of the mean (SEM) • SEM is estimated from a sample SD and the sample size
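The SEM estimate on the last bullet can be sketched in a few lines of Python (the birth-weight values are hypothetical, purely for illustration):

```python
import math
import statistics

# Hypothetical sample of infant birth weights in grams (illustrative only)
sample = [3200, 2900, 3400, 3100, 3600, 2800, 3300, 3000]

sd = statistics.stdev(sample)        # sample SD (n - 1 in the denominator)
sem = sd / math.sqrt(len(sample))    # estimated standard error of the mean

print(round(sem, 1))
```

A larger sample shrinks the SEM, so means from larger samples cluster more tightly around the population mean.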

  4. Sampling Distribution

  5. Statistical Inference—Two Forms • Estimation of parameters • Hypothesis testing (more common)

  6. Estimation of Parameters • Used to estimate a single parameter (e.g., a population mean) • Two forms of estimation: • Point estimation • Interval estimation

  7. Point Estimation Calculating a single statistic to estimate the population parameter (e.g., the mean birth weight of infants born in the U.S.)

  8. Interval Estimation • Calculating a range of values within which the parameter has a specified probability of lying • A confidence interval (CI) is constructed around the point estimate • The upper and lower limits are confidence limits
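A minimal sketch of interval estimation on hypothetical data, using the normal-approximation multiplier 1.96 for a 95% CI (with small samples a t multiplier would be used instead):

```python
import math
import statistics

sample = [3200, 2900, 3400, 3100, 3600, 2800, 3300, 3000]  # hypothetical data

mean = statistics.mean(sample)                           # point estimate
sem = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error

z = 1.96                                        # multiplier for a 95% CI
lower, upper = mean - z * sem, mean + z * sem   # confidence limits
print(f"95% CI: ({lower:.1f}, {upper:.1f})")
```

The interpretation: across repeated samples, about 95% of intervals built this way would contain the true population mean.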

  9. Hypothesis Testing • Based on rules of negative inference: research hypotheses are supported if null hypotheses can be rejected • Involves statistical decision making to either: • retain (fail to reject) the null hypothesis, or • reject the null hypothesis

  10. Hypothesis Testing (cont’d) • Researchers compute a test statistic with their data, then determine whether the statistic falls within the critical region (i.e., beyond the critical value) in the relevant theoretical distribution • If the value of the test statistic indicates that the null hypothesis is “improbable,” the result is statistically significant • A nonsignificant result means that any observed difference or relationship could have resulted from chance fluctuations

  11. Statistical Decisions are Either Correct or Incorrect Two types of incorrect decisions: • Type I error: a null hypothesis is rejected when it should not be rejected • Risk of a Type I error is controlled by the level of significance (alpha), e.g., α = .05 or .01 • Type II error: failure to reject a null hypothesis when it should be rejected
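Alpha's meaning can be checked by simulation: when the null hypothesis is actually true, a two-tailed test at α = .05 should reject in roughly 5% of samples. A sketch with simulated data and a z approximation:

```python
import random
import statistics

random.seed(1)
trials, n = 2000, 30
rejections = 0

for _ in range(trials):
    # Draw from a population in which the null (mu = 0) is actually true
    sample = [random.gauss(0, 1) for _ in range(n)]
    sem = statistics.stdev(sample) / n ** 0.5
    z = statistics.mean(sample) / sem
    if abs(z) > 1.96:            # two-tailed rejection at alpha = .05
        rejections += 1

print(rejections / trials)       # hovers near .05: the Type I error rate
```

Every rejection here is, by construction, a Type I error, which is why alpha caps how often that mistake occurs.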

  12. Outcomes of Statistical Decision Making

  13. One-Tailed and Two-Tailed Tests Two-tailed tests Hypothesis testing in which both ends of the sampling distribution are used to define the region of improbable values One-tailed tests Critical region of improbable values is entirely in one tail of the distribution—the tail corresponding to the direction of the hypothesis
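The difference shows up in the critical values. A short sketch using the standard normal distribution at α = .05:

```python
from statistics import NormalDist

alpha = 0.05
z = NormalDist()   # standard normal sampling distribution

one_tailed = z.inv_cdf(1 - alpha)       # all of alpha in a single tail
two_tailed = z.inv_cdf(1 - alpha / 2)   # alpha split across both tails

print(round(one_tailed, 3))   # 1.645
print(round(two_tailed, 3))   # 1.96
```

A one-tailed test therefore reaches significance with a smaller test statistic, but only differences in the hypothesized direction can be declared significant.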

  14. Critical Region in the Sampling Distribution for a One-Tailed Test: IVF Attitudes Example

  15. Critical Regions in the Sampling Distribution for a Two-Tailed Test: IVF Attitudes Example

  16. Parametric Statistics • Involve the estimation of a parameter • Require measurements on at least an interval scale • Involve several assumptions (e.g., that variables are normally distributed in the population)

  17. Nonparametric Statistics (Distribution-Free Statistics) • Do not estimate parameters • Involve variables measured on a nominal or ordinal scale • Have less restrictive assumptions about the shape of the variables’ distribution than parametric tests

  18. Overview of Hypothesis-Testing Procedures • Select an appropriate test statistic • Establish the level of significance (e.g., α = .05) • Select a one-tailed or a two-tailed test • Compute test statistic with actual data • Calculate degrees of freedom (df) for the test statistic

  19. Overview of Hypothesis-Testing Procedures (cont’d) • Obtain a tabled value for the statistical test • Compare the test statistic to the tabled value • Make the decision to retain or reject the null hypothesis
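The whole procedure can be walked through with a t-test for independent groups (the pain scores are hypothetical; 2.145 is the tabled two-tailed critical t for df = 14 at α = .05):

```python
import math
import statistics

# Steps 1-3: independent-groups t-test, alpha = .05, two-tailed
group_a = [6, 7, 5, 8, 6, 7, 9, 5]   # hypothetical scores, group A
group_b = [4, 5, 6, 4, 3, 5, 4, 6]   # hypothetical scores, group B
na, nb = len(group_a), len(group_b)

# Step 4: compute the test statistic (pooled-variance t)
pooled = ((na - 1) * statistics.variance(group_a)
          + (nb - 1) * statistics.variance(group_b)) / (na + nb - 2)
t = (statistics.mean(group_a) - statistics.mean(group_b)) \
    / math.sqrt(pooled * (1 / na + 1 / nb))

# Step 5: degrees of freedom
df = na + nb - 2

# Steps 6-8: compare to the tabled critical value and decide
critical = 2.145   # two-tailed critical t, df = 14, alpha = .05
print(f"t = {t:.2f}, df = {df}, reject null: {abs(t) > critical}")
```

Because the computed t exceeds the tabled value, the null hypothesis of equal means is rejected.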

  20. Commonly Used Bivariate Statistical Tests • t-Test • Analysis of variance (ANOVA) • Pearson’s r • Chi-square test

  21. Quick Guide to Bivariate Statistical Tests

  22. t-Test Tests the difference between two means • t-Test for independent groups (between subjects) • t-Test for dependent groups (within subjects)
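For dependent (within-subjects) groups, the test statistic is the mean within-subject difference over its standard error. A sketch with hypothetical pre/post scores:

```python
import math
import statistics

# Hypothetical pre/post scores for the SAME six subjects
pre  = [10, 12, 9, 14, 11, 13]
post = [ 8, 11, 9, 12, 10, 11]

diffs = [a - b for a, b in zip(pre, post)]
n = len(diffs)

# Dependent-groups t: mean difference divided by its standard error
t = statistics.mean(diffs) / (statistics.stdev(diffs) / math.sqrt(n))
print(f"t = {t:.2f}, df = {n - 1}")
```

Pairing removes between-subject variability from the error term, which is why the dependent-groups design is often more powerful.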

  23. Analysis of Variance (ANOVA) • Tests differences among three or more means • One-way ANOVA • Multifactor (e.g., two-way) ANOVA • Repeated measures ANOVA (within subjects)

  24. Correlation • Pearson’s r, a parametric test • Tests that the relationship between two variables is not zero • Used when measures are on an interval or ratio scale
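Pearson's r is the covariation of the two variables scaled by their variability. A hand-computed sketch on hypothetical interval-level data:

```python
import math

# Hypothetical interval-level measures for six subjects
x = [2, 4, 5, 7, 8, 10]
y = [3, 5, 4, 8, 9, 11]
n = len(x)

mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
sx = math.sqrt(sum((a - mx) ** 2 for a in x))
sy = math.sqrt(sum((b - my) ** 2 for b in y))

r = cov / (sx * sy)    # Pearson's r: ranges from -1 to +1
print(round(r, 3))
```

The null hypothesis tested is that the population correlation is zero; an r this close to +1 indicates a strong positive relationship in the sample.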

  25. Chi-Square Test • Tests the difference in proportions in categories within a contingency table • A nonparametric test
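A sketch for a hypothetical 2 × 2 contingency table (improved vs. not improved in two groups); expected counts come from the row and column totals:

```python
# Hypothetical 2x2 contingency table: group (rows) x outcome (columns)
observed = [[30, 20],   # experimental: improved / not improved
            [18, 32]]   # control:      improved / not improved

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand = sum(row_totals)

# Sum of (observed - expected)^2 / expected over all cells
chi2 = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand
        chi2 += (obs - expected) ** 2 / expected

df = (len(observed) - 1) * (len(observed[0]) - 1)
print(f"chi-square = {chi2:.2f}, df = {df}")
```

Here chi-square exceeds the tabled critical value of 3.84 (df = 1, α = .05), so the proportions in the two groups differ significantly.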

  26. Power Analysis • A method of reducing the risk of Type II errors and estimating their occurrence • With power = .80, the risk of a Type II error (β) is 20% • Method is frequently used to estimate how large a sample is needed to reliably test hypotheses

  27. Power Analysis (cont’d) Four components in a power analysis: • Significance criterion (α) • Sample size (N) • Population effect size—the magnitude of the relationship between research variables (γ) • Power—the probability of obtaining a significant result (1-β)
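Because the four components are interlocked, fixing any three determines the fourth. A normal-approximation sketch that solves for the sample size per group when comparing two means (the effect size here is a hypothetical "medium" standardized difference):

```python
import math
from statistics import NormalDist

alpha, power = 0.05, 0.80
effect_size = 0.5          # hypothetical medium standardized difference

z = NormalDist()
z_alpha = z.inv_cdf(1 - alpha / 2)   # about 1.96 (two-tailed)
z_beta = z.inv_cdf(power)            # about 0.84 for power = .80

# n per group for a two-group comparison of means (normal approximation)
n_per_group = math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)
print(n_per_group)
```

An exact t-based calculation gives a slightly larger n; the approximation shows how power, alpha, and effect size jointly drive the required sample size.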
