
Analyzing Differences II

Analyzing Differences II. Introduction to Communication Research, School of Communication Studies, James Madison University, Dr. Michael Smilowitz. What to expect: a description of the normal curve; an explanation of hypothesis testing; null hypotheses; and the distinction between Type I and Type II errors.



Presentation Transcript


  1. Analyzing Differences II Introduction to Communication Research School of Communication Studies James Madison University Dr. Michael Smilowitz

  2. What to expect? • Describe the normal curve • Explain hypothesis testing • Null hypotheses • The distinction between Type I and Type II errors • Discuss the logic behind tests of differences • Variance • Between-group variance and within-group variance

  3. Galton’s Machine • Click the following URL, or copy and paste it into your browser’s address bar, to see a demonstration of Galton’s Machine. Note the “distribution” of where the balls land that arises from “chance” alone. http://www.ms.uky.edu/~mai/java/stat/GaltonMachine.html
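If the linked demonstration is unavailable, the same chance-only distribution can be simulated in a few lines. This is a minimal sketch (plain Python; the ball count, row count, and function name are choices made here, not part of the lecture): each ball bounces left or right at each of 12 rows of pegs, and its final bin is its number of rightward bounces.

```python
import random

def drop_balls(n_balls=10_000, rows=12, seed=1):
    """Simulate a Galton board: each ball takes `rows` random left/right
    bounces; return how many balls land in each bin."""
    random.seed(seed)
    bins = [0] * (rows + 1)
    for _ in range(n_balls):
        position = sum(random.random() < 0.5 for _ in range(rows))
        bins[position] += 1
    return bins

counts = drop_balls()
for i, c in enumerate(counts):
    # A crude text histogram: chance alone piles the balls up in the middle.
    print(f"bin {i:2d}: {'#' * (c // 100)}")
```

The bell-shaped pile-up appears even though every individual bounce is pure chance, which is exactly the point the slide makes about the normal curve.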

  4. The Normal curve.

  5. The Normal curve. 1 σ = 68.26% 1.96 σ = 95% 2.58 σ = 99%

  6. The Normal curve. 1 σ = 68.26%; 1.96 σ = 95%, leaving 2.5% in each tail; 2.58 σ = 99%, leaving .5% in each tail
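These percentages can be checked with Python’s standard-library NormalDist. This is a verification sketch, not part of the original lecture; `central_area` is a helper name invented here.

```python
from statistics import NormalDist  # standard library, Python 3.8+

z = NormalDist()  # standard normal distribution: mean 0, sigma 1

def central_area(k):
    """Proportion of scores within ±k standard deviations of the mean."""
    return z.cdf(k) - z.cdf(-k)

print(f"±1.00 σ: {central_area(1.00):.4f}")  # 0.6827
print(f"±1.96 σ: {central_area(1.96):.4f}")  # 0.9500
print(f"±2.58 σ: {central_area(2.58):.4f}")  # 0.9901
# The remainder splits evenly between the two tails:
print(f"each tail beyond 1.96 σ: {(1 - central_area(1.96)) / 2:.4f}")  # 0.0250
```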

  7. Hypothesis testing • A research hypothesis is an expectation about events based on generalizations of the assumed relationship between variables. • A null hypothesis states that there is no relationship between the variables. • The null hypothesis allows us to determine whether the differences in sample means are the result of chance alone.

  8. Hypothesis testing • If the null hypothesis is rejected, then the research hypothesis may be tenable and the researcher concludes that a relationship exists among variables. • If the null hypothesis is not rejected (logically you cannot “accept” a null hypothesis) then researchers can make no claim that a relationship exists among the variables.

  9. Hypothesis testing Do you see why it is inappropriate to say “the hypothesis was proven”? All that is tested is the extent to which we can confidently say the observed relationships do not result from chance alone. So we can say: The hypothesis is not rejected. The results are valid indicators of a relationship. The findings support the hypothesis.

  10. Definitions of Type I and Type II Errors: Smith (1988) A Type I error, sometimes called an alpha error, results from rejecting a null hypothesis when it is true (p. 115). A Type II error, sometimes called a beta error, results from failing to reject the null hypothesis when in fact it is false (p. 117).

  11. Definitions of Type I and Type II Errors: Rinard (1994) Type I Error: Incorrectly rejecting the null hypothesis (p. 338). Type II Error: Failing to detect a relationship that is present; incorrectly failing to reject the null hypothesis (p. 338).
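A Type I error rate can be made concrete with a quick simulation. The sketch below (plain Python; the sample size, seed, number of trials, and the use of a simple two-tailed z-test with known σ are all choices made here, not from the slides) draws both groups from the same population, so the null hypothesis is true by construction, and counts how often it is nonetheless rejected at alpha = .05.

```python
import random
from statistics import mean, NormalDist

random.seed(7)
n, trials, alpha = 50, 2000, 0.05
critical_z = NormalDist().inv_cdf(1 - alpha / 2)  # ~1.96 for a two-tailed test

rejections = 0
for _ in range(trials):
    # Both groups come from the SAME population: the null hypothesis is true.
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    # z-test on the difference of means, with sigma = 1 known.
    z = (mean(group_a) - mean(group_b)) / (2 / n) ** 0.5
    if abs(z) > critical_z:
        rejections += 1  # a Type I error: rejecting a true null hypothesis

print(f"Type I error rate: {rejections / trials:.3f}")  # typically close to alpha
```

Roughly 5% of these studies “find” a difference that is chance alone, which is what alpha means: the Type I error rate a researcher is willing to tolerate.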

  12. Definitions of Type I and Type II Errors: When the null hypothesis is true, rejecting it is a Type I (alpha) error and retaining it is a correct decision. When the null hypothesis is false, failing to reject it is a Type II (beta) error and rejecting it is a correct decision.

  13. Analyzing Differences The question: Does the difference in the measurements of two (or more) samples represent “real” differences in the populations from which the samples are drawn? Researchers ask “Is the difference significant?”

  14. Analyzing differences Tests of significant differences are based on the following ratio: (the difference between the group (sample) means) / (an estimate of the error (chance) differences)

  15. Components of Variance The total variance (s²t) associated with any set of sample scores is comprised of (partitioned into) two components: Systematic Variance: the variation among scores due to some influence that “pushes” scores in one direction. The observed difference between groups is typically called the “between-group variance” (s²b).

  16. Components of Variance The total variance (s²t) associated with any set of sample scores is comprised of (partitioned into) two components: Error Variance: fluctuations in group scores due to random or chance factors (errors such as fatigue, distractions, carelessness). Error variance is typically called the “within-group variance” (s²w).

  17. Components of Variance Therefore, the variance in a set of scores is: s²t = s²b + s²w
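The partition on this slide can be verified numerically. The sketch below uses invented scores for three groups and checks that the total sum of squared deviations splits exactly into the between-group and within-group pieces (sums of squares, the quantities the variances are built from).

```python
from statistics import mean

# Invented scores for three groups (illustrative only).
groups = [
    [4, 5, 6, 5],  # group 1, mean 5
    [7, 8, 9, 8],  # group 2, mean 8
    [2, 3, 4, 3],  # group 3, mean 3
]
all_scores = [x for g in groups for x in g]
grand_mean = mean(all_scores)

# Total: every score's deviation from the grand mean.
ss_total = sum((x - grand_mean) ** 2 for x in all_scores)
# Between: each group mean's deviation from the grand mean, weighted by size.
ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
# Within: each score's deviation from its own group mean.
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

print(ss_total, ss_between, ss_within)
assert abs(ss_total - (ss_between + ss_within)) < 1e-9  # total = between + within
```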

  18. Got the point? Testing for significance in the differences between samples is simply computing the ratio of “between-group variance” to “within-group variance.”

  19. Got the point? s²b / s²w The greater the magnitude of s²b relative to s²w, the more confidently we expect the samples’ differences to indicate “real” differences in the populations.
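This ratio is the F ratio of a one-way analysis of variance. The sketch below (invented scores; the function name and data are illustrative assumptions) computes it for two cases: groups with far-apart means give a large ratio, while groups with identical means give a ratio of zero.

```python
from statistics import mean

def f_ratio(groups):
    """Between-group variance over within-group variance (one-way ANOVA F)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    s2_between = ss_between / (k - 1)      # between-group variance (s²b)
    s2_within = ss_within / (n_total - k)  # within-group variance (s²w)
    return s2_between / s2_within

# Far-apart group means -> large ratio: differences stand out against chance.
f_big = f_ratio([[4, 5, 6, 5], [7, 8, 9, 8], [2, 3, 4, 3]])
# Identical group means -> ratio of 0: nothing but within-group fluctuation.
f_zero = f_ratio([[4, 5, 6, 5], [5, 4, 6, 5], [6, 5, 4, 5]])
print(f_big, f_zero)
```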
