
Correlation

MEASURING ASSOCIATION


Presentation Transcript


  1. Correlation • MEASURING ASSOCIATION • Establishing a degree of association between two or more variables gets at the central objective of the scientific enterprise. Scientists spend most of their time figuring out how one thing relates to another and structuring these relationships into explanatory theories. The question of association comes up in normal discourse as well, as in "like father, like son".

  2. I. Pearson's product-moment correlation coefficient • The statistical methods currently used for studying such relationships were invented by Sir Francis Galton (1822-1911). • As everyone knows, children resemble their parents. What Galton wanted to know was the strength of this resemblance--how much of a difference the parents made. Statisticians in Victorian England were fascinated by this question, and gathered huge amounts of data in pursuit of the answer.

  3. Scatterplots • A. Scatter diagram • A list of 1,078 pairs of heights would be impossible to grasp, so we need some method of converting the data into a more comprehensible form. One method is plotting the data for the two variables (father's height and son's height) in a graph called a scatter diagram.

  4. B. The Correlation Coefficient • This scatter plot looks like a cloud of points. Visually it gives a nice representation and a gut feeling for the strength of the relationship, and it is especially useful for spotting outliers or data anomalies, but statistics isn't too fond of simply providing a gut feeling. Statistics is interested in the summary and interpretation of masses of numerical data, so we need to summarize this relationship numerically. How do we do that? Yes, with a correlation coefficient. The correlation coefficient ranges from +1 to -1.

  5. r = 1.0

  6. r = .85

  7. r = .42

  8. r = .17

  9. r = -.94

  10. r = -.54

  11. r = -.33

  12. More scatter plots http://noppa5.pc.helsinki.fi/koe/corr/cor7.html

  13. THE SD LINE • The closer the coefficient is to 1, the more tightly clustered the points are around a line. What is this line? It is called the SD line, and it goes through all the points which are an equal number of SDs away from the average on both variables. For example, a son who was one SD above average in height, and whose father was also one SD above average, would be plotted on the SD line.

  14. SD LINE
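The SD line described above can be computed directly: it passes through the point of averages and rises one SD of y for every SD of x. A minimal sketch (the father/son heights here are made-up illustrative numbers, not Galton's data):

```python
import math

def sd_line(x, y):
    """Slope and intercept of the SD line.

    The line passes through (mean_x, mean_y) with slope SD_y / SD_x,
    taking the sign of the association.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sd_x = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    sd_y = math.sqrt(sum((v - my) ** 2 for v in y) / n)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    slope = (sd_y / sd_x) * (1 if cov >= 0 else -1)
    return slope, my - slope * mx

# Hypothetical heights in inches (illustrative only)
fathers = [64, 66, 68, 70, 72]
sons = [65, 66, 68, 70, 71]
slope, intercept = sd_line(fathers, sons)
```

A quick check of the slide's claim: a point one SD above average on both variables, (mean_x + SD_x, mean_y + SD_y), satisfies the line equation exactly.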

  15. Computing the Pearson's r correlation coefficient • Definitional formula: r = Σ(ZxZy) / N. Convert each variable to standard units (z-scores); the average of the products gives the correlation coefficient. But this formula requires you to calculate z-scores for each observation, which means you have to calculate the standard deviation of X and Y before you can get started. For example, look what you have to do for only 5 cases.

  16. Dividing the sum of ZxZy (2.50) by N (5) gets you the correlation coefficient: r = .50
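The z-score recipe on these slides can be written out as a short function. A sketch assuming population SDs (divide by N), matching the slide's definitional formula:

```python
import math

def pearson_r_z(x, y):
    """Definitional formula: r is the average product of z-scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sd_x = math.sqrt(sum((v - mx) ** 2 for v in x) / n)
    sd_y = math.sqrt(sum((v - my) ** 2 for v in y) / n)
    zx = [(v - mx) / sd_x for v in x]
    zy = [(v - my) / sd_y for v in y]
    return sum(a * b for a, b in zip(zx, zy)) / n
```

Just as on the slide, the function sums the ZxZy products and divides by N; a perfectly linear data set returns r = 1.0.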

  17. The above formula can also be translated into the following, which is a little easier to decipher but is still tedious to use.

  18. Or in other words …..

  19. Therefore, through some algebraic magic, we get the computational formula, which is a bit more manageable: r = (NΣXY - ΣXΣY) / √[(NΣX² - (ΣX)²)(NΣY² - (ΣY)²)]
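The computational formula needs only the raw sums (ΣX, ΣY, ΣXY, ΣX², ΣY²), so no z-scores or standard deviations have to be computed first. A sketch:

```python
import math

def pearson_r_comp(x, y):
    """Computational formula: one pass over raw sums, no z-scores needed."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxy = sum(a * b for a, b in zip(x, y))
    sxx = sum(v * v for v in x)
    syy = sum(v * v for v in y)
    return (n * sxy - sx * sy) / math.sqrt(
        (n * sxx - sx ** 2) * (n * syy - sy ** 2))
```

It agrees with the definitional (z-score) formula on any data set; only the arithmetic route differs.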

  20. Interpreting correlation coefficients • Strong association versus weak association: strong: knowing one variable helps a lot in predicting the other; weak: information about one variable does not help much in guessing the other. 0 = none; .25 = weak; .5 = moderate; .75 and above = strong • Index of association • R-squared: defined as the proportion of the variance of one variable accounted for by another variable, a.k.a. a PRE statistic (Proportionate Reduction of Error)
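The PRE reading of R-squared can be checked directly: predicting Y from the least-squares line instead of from mean(Y) reduces the squared error by exactly r². A sketch with made-up numbers:

```python
def variance(v):
    m = sum(v) / len(v)
    return sum((u - m) ** 2 for u in v) / len(v)

def pre(x, y):
    """Proportionate reduction of error: 1 - Var(residuals) / Var(y)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    slope = cov / variance(x)  # least-squares slope
    resid = [b - (my + slope * (a - mx)) for a, b in zip(x, y)]
    return 1 - variance(resid) / variance(y)
```

For a data set whose correlation is r = .8, the function returns .64: the regression line accounts for 64% of the variance in Y.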

  21. Significance of the correlation • Null hypothesis? • Formula: t = r√(N - 2) / √(1 - r²), with N - 2 degrees of freedom • Then look to Table C in Appendix B • Or just look at Table F in Appendix B
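The standard test of H0: ρ = 0 converts r into a t statistic with N - 2 degrees of freedom, which is then compared against the tabled critical value. A sketch:

```python
import math

def correlation_t(r, n):
    """t statistic for testing H0: rho = 0, with n - 2 degrees of freedom."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r ** 2)
```

For example, r = .50 from a sample of 27 gives t ≈ 2.89 on 25 degrees of freedom, which would then be looked up in a t table.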

  22. Limitations of Pearson's r • 1) At best, one must speak of "strong" and "weak," "some" and "none"--precisely the vagueness statistical work is meant to cure. • 2) Assumes interval-level data: variables measured at different levels require that different statistics be used to test for association.

  23. 3) Outliers and nonlinearity • The correlation coefficient does not always give a true indication of the clustering. There are two main exceptional cases: outliers and nonlinearity. (Figures: two scatter plots, r = .457 and r = .336.)
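The outlier problem is easy to demonstrate numerically: a single wild point can wreck an otherwise perfect correlation. A sketch with made-up data (not the data behind the slide's figures):

```python
import math

def corr(x, y):
    """Pearson's r via the computational formula."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    num = n * sum(a * b for a, b in zip(x, y)) - sx * sy
    den = math.sqrt((n * sum(v * v for v in x) - sx ** 2)
                    * (n * sum(v * v for v in y) - sy ** 2))
    return num / den

x = [1, 2, 3, 4, 5]
y = [1, 2, 3, 4, 5]
r_clean = corr(x, y)                     # perfectly linear: r = 1.0
r_outlier = corr(x + [6], y + [-5])      # one wild point: r drops to roughly -0.30
```

One added point out of six moves r from a perfect +1.0 to a negative value, which is why inspecting the scatter plot first matters.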

  24. 4) Assumes a linear relationship

  25. 5) Christopher Achen in 1977 argues (and shows empirically) that two correlations can differ because the variance in the samples differs, not because the underlying relationship has changed. Solution? Regression analysis.
