
Introduction to Categorical Data Analysis July 22, 2004


Presentation Transcript


  1. Introduction to Categorical Data Analysis (July 22, 2004)

  2. Categorical data • The t-test, ANOVA, and linear regression all assumed outcome variables that were continuous (normally distributed). • Even their non-parametric equivalents assumed at least many levels of the outcome (discrete quantitative or ordinal). • We haven’t discussed the case where the outcome variable is categorical.

  3. Types of Variables: a taxonomy
  • Categorical: binary (2 categories), nominal (+ more categories), ordinal (+ order matters)
  • Quantitative: discrete (+ numerical), continuous (+ uninterrupted)

  4. Overview of statistical tests
  Independent variable = predictor; dependent variable = outcome.
  e.g., BMD (continuous outcome) = pounds + age (continuous predictors) + amenorrheic (1/0) (binary predictor)

  5. Types of variables to be analyzed → statistical procedure or measure of association

  Predictor (independent) variable(s) | Outcome (dependent) variable | Statistical procedure or measure of association
  Categorical                         | Continuous                   | ANOVA
  Dichotomous                         | Continuous                   | T-test
  Continuous                          | Continuous                   | Simple linear regression
  Multivariate                        | Continuous                   | Multiple linear regression
  Categorical                         | Categorical                  | Chi-square test
  Dichotomous                         | Dichotomous                  | Odds ratio, Mantel-Haenszel OR, relative risk, difference in proportions
  Multivariate                        | Dichotomous                  | Logistic regression
  Categorical                         | Time-to-event                | Kaplan-Meier curve / log-rank test
  Multivariate                        | Time-to-event                | Cox proportional hazards model

  6. (The same table, annotated on the slide: the continuous-outcome procedures are marked "done," the categorical-outcome procedures "today and next week," and the time-to-event procedures "last part of course.")

  7. Difference in proportions Example: You poll 50 people from random districts in Florida as they exit the polls on election day 2004. You also poll 50 people from random districts in Massachusetts. 49% of pollees in Florida say that they voted for Kerry, and 53% of pollees in Massachusetts say they voted for Kerry. Is there enough evidence to reject the null hypothesis that the states voted for Kerry in equal proportions?

  8. Null distribution of a difference in proportions
  • Standard error of a proportion = sqrt(p(1-p)/n), which can be estimated by sqrt(p̂(1-p̂)/n) (still normally distributed).
  • Standard error of the difference of two proportions = sqrt(p1(1-p1)/n1 + p2(1-p2)/n2): the variance of a difference is the sum of the variances (as with a difference in means).
  • Under the null hypothesis the two proportions are equal, so a single pooled p̂ is used in both terms, analogous to the pooled variance in the t-test.

  9. Difference of proportions. For our example, the null distribution of the difference is normal with mean 0 and standard error sqrt(p̂(1-p̂)(1/50 + 1/50)), where the pooled p̂ = (.49 + .53)/2 = .51, giving a standard error of about .10.

  10. Answer to Example • We saw a difference of 4% between Florida and Massachusetts. • The null distribution predicts chance variation between the two states of about 10%. • P(our data | null distribution) = P(Z > .04/.10 = 0.4) ≈ .34 > .05. • Not enough evidence to reject the null.
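The pooled standard error and Z statistic from slides 8-10 can be reproduced in a few lines; a minimal Python sketch using only the poll numbers given on the slides:

```python
import math

# Exit-poll data from the slides: 50 pollees per state
p_fl, p_ma, n = 0.49, 0.53, 50

# Pooled proportion under the null hypothesis of equal proportions
p_pool = (p_fl * n + p_ma * n) / (2 * n)                  # 0.51

# Pooled standard error of the difference in proportions
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n + 1 / n))   # about 0.10

# Test statistic: observed difference over its null standard error
z = (p_ma - p_fl) / se                                    # about 0.4
print(round(p_pool, 2), round(se, 2), round(z, 1))
```

Since P(Z > 0.4) is roughly .34, well above .05, the sketch reaches the same conclusion as the slide: no evidence against the null.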

  11. Chi-square test for comparing proportions (of a categorical variable) between groups. I. Chi-Square Test of Independence: when both your predictor and outcome variables are categorical, they may be cross-classified in a contingency table and compared using a chi-square test of independence. A contingency table with R rows and C columns is an R x C contingency table.

  12. Example • Asch, S.E. (1955). Opinions and social pressure. Scientific American, 193, 31-35.

  13. The Experiment • A Subject volunteers to participate in a “visual perception study.” • Everyone else in the room is actually a conspirator in the study (unbeknownst to the Subject). • The “experimenter” reveals a pair of cards…

  14. The Task Cards: [figure: the standard line alongside comparison lines A, B, and C]

  15. The Experiment • Everyone goes around the room and says which comparison line (A, B, or C) is correct; the true Subject always answers last – after hearing all the others’ answers. • The first few times, the 7 “conspirators” give the correct answer. • Then, they start purposely giving the (obviously) wrong answer. • 75% of Subjects tested went along with the group’s consensus at least once.

  16. Further Results • In a further experiment, group size (number of conspirators) was varied from 2 to 10. • Does the group size alter the proportion of subjects who conform?

  17. The Chi-Square test

  Conformed?   Number of group members
               2    4    6    8    10
  Yes          20   50   75   60   30
  No           80   50   25   40   70

  Apparently, conformity is less likely with fewer or more group members…

  18. 20 + 50 + 75 + 60 + 30 = 235 conformed, out of 500 experiments. Overall likelihood of conforming = 235/500 = .47

  19. Expected frequencies if no association between group size and conformity:

  Conformed?   Number of group members
               2    4    6    8    10
  Yes          47   47   47   47   47
  No           53   53   53   53   53

  20. Do observed and expected differ more than expected due to chance?

  21. Chi-Square test. Degrees of freedom = (rows-1)*(columns-1) = (2-1)*(5-1) = 4. Rule of thumb: a chi-square statistic much greater than its degrees of freedom indicates statistical significance. Here the statistic (about 80) >> 4.

  22. The Chi-Square distribution is the distribution of a sum of squared standard normal deviates. The expected value and variance of a chi-square: E(X) = df, Var(X) = 2(df).

  23. Chi-Square test. Degrees of freedom = (rows-1)*(columns-1) = (2-1)*(5-1) = 4. Rule of thumb: a chi-square statistic much greater than its degrees of freedom indicates statistical significance. Here the statistic (about 80) >> 4.
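The chi-square statistic for the conformity table can be checked by hand; a minimal Python sketch summing (observed - expected)²/expected over all 10 cells, using the counts from the slides (100 subjects per group size, expected 47 "yes" and 53 "no" per column):

```python
# Observed conformity counts from the slide, one column per group size (2, 4, 6, 8, 10)
observed_yes = [20, 50, 75, 60, 30]
observed_no = [80, 50, 25, 40, 70]

# Chi-square statistic: sum over cells of (observed - expected)^2 / expected
chi2 = sum((o - 47) ** 2 / 47 for o in observed_yes) \
     + sum((o - 53) ** 2 / 53 for o in observed_no)

# Degrees of freedom for a 2 x 5 table
df = (2 - 1) * (5 - 1)
print(round(chi2, 1), df)
```

With these counts the statistic comes out near 80, which vastly exceeds its 4 degrees of freedom, so the association is highly significant either way.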

  24. Caveat **When the sample size is very small in any cell (expected count < 5), Fisher's exact test is used as an alternative to the chi-square test.

  25. Example of Fisher’s Exact Test

  26. Fisher’s “Tea-tasting experiment” Claim: Fisher’s colleague (call her “Cathy”) claimed that, when drinking tea, she could distinguish whether milk or tea was added to the cup first. To test her claim, Fisher designed an experiment in which she tasted 8 cups of tea (4 cups had milk poured first, 4 had tea poured first). Null hypothesis: Cathy’s guessing abilities are no better than chance. Alternative hypotheses: Right-tail: she guesses right more often than expected by chance. Left-tail: she guesses wrong more often than expected by chance.

  27. Fisher’s “Tea-tasting experiment” Experimental Results:

                        Poured first
  Guess poured first    Milk   Tea
  Milk                  3      1
  Tea                   1      3
                        4      4

  28. Fisher’s Exact Test. Step 1: Identify tables that are as extreme or more extreme than what actually happened. Here she identified 3 out of 4 of the milk-poured-first teas correctly. Is that good luck or real talent? The only way she could have done better is if she identified 4 of 4 correctly.

  Observed table:                        More extreme table:
                      Poured first                           Poured first
  Guess poured first  Milk   Tea         Guess poured first  Milk   Tea
  Milk                3      1           Milk                4      0
  Tea                 1      3           Tea                 0      4
                      4      4                               4      4

  29. Fisher’s Exact Test. Step 2: Calculate the probability of each of these tables (assuming fixed marginals): P(observed table) = C(4,3)C(4,1)/C(8,4) = 16/70 ≈ .229; P(4-of-4 table) = C(4,4)C(4,0)/C(8,4) = 1/70 ≈ .014.

  30. Step 3: To get the left-tail and right-tail p-values, consider the probability mass function of X, where X = the number of correct identifications of the cups with milk poured first: P(X=0) = 1/70 ≈ .014, P(X=1) = 16/70 ≈ .229, P(X=2) = 36/70 ≈ .514, P(X=3) = 16/70 ≈ .229, P(X=4) = 1/70 ≈ .014. “Right-hand tail probability”: P(X ≥ 3) = .243. “Left-hand tail probability” (testing the null hypothesis that she’s systematically wrong): P(X ≤ 3) = .986.
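These tail probabilities follow from the hypergeometric distribution with both margins fixed at 4; a minimal Python sketch using only the standard library:

```python
from math import comb

# Tea-tasting experiment: 8 cups, 4 with milk poured first.
# X = number of milk-first cups correctly identified.
def p_x(k):
    # Hypergeometric probability of exactly k correct, margins fixed at 4 and 4
    return comb(4, k) * comb(4, 4 - k) / comb(8, 4)

right_tail = p_x(3) + p_x(4)                 # P(X >= 3): she does this well or better
left_tail = sum(p_x(k) for k in range(4))    # P(X <= 3): she does this badly or worse
print(round(right_tail, 3), round(left_tail, 3))
```

The right tail comes out to 17/70 ≈ .243 and the left tail to 69/70 ≈ .986, matching the slide (and the SAS output on slide 33).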

  31. SAS code and output for generating Fisher’s Exact statistics for the 2x2 table:

                        Poured first
  Guess poured first    Milk   Tea
  Milk                  3      1
  Tea                   1      3
                        4      4

  32.
  data tea;
    input MilkFirst GuessedMilk Freq;
    datalines;
  1 1 3
  1 0 1
  0 1 1
  0 0 3
  run;

  data tea; *Fix quirky reversal of SAS 2x2 tables;
    set tea;
    MilkFirst = 1 - MilkFirst;
    GuessedMilk = 1 - GuessedMilk;
  run;

  proc freq data=tea;
    tables MilkFirst*GuessedMilk / exact;
    weight freq;
  run;

  33. SAS output

  Statistics for Table of MilkFirst by GuessedMilk

  Statistic                       DF    Value     Prob
  ----------------------------------------------------
  Chi-Square                       1    2.0000    0.1573
  Likelihood Ratio Chi-Square      1    2.0930    0.1480
  Continuity Adj. Chi-Square       1    0.5000    0.4795
  Mantel-Haenszel Chi-Square       1    1.7500    0.1859
  Phi Coefficient                        0.5000
  Contingency Coefficient                0.4472
  Cramer's V                             0.5000

  WARNING: 100% of the cells have expected counts less than 5. Chi-Square may not be a valid test.

  Fisher's Exact Test
  ----------------------------------
  Cell (1,1) Frequency (F)         3
  Left-sided Pr <= F          0.9857
  Right-sided Pr >= F         0.2429
  Table Probability (P)       0.2286
  Two-sided Pr <= P           0.4857

  Sample Size = 8

  34. Introduction to the 2x2 Table

  35. Introduction to the 2x2 Table

                   Exposure (E)   No Exposure (~E)
  Disease (D)      a              b                  a+b = P(D)
  No Disease (~D)  c              d                  c+d = P(~D)
                   a+c = P(E)     b+d = P(~E)

  Column margins give the marginal probability of exposure; row margins give the marginal probability of disease.

  36. Cohort Studies: [diagram: from the target population, a disease-free cohort is sampled and divided into Exposed and Not Exposed groups; over TIME, each group splits into Disease and Disease-free.]

  37. The Risk Ratio, or Relative Risk (RR)

                   Exposure (E)   No Exposure (~E)
  Disease (D)      a              b
  No Disease (~D)  c              d
                   a+c            b+d

  Risk to the exposed = a/(a+c); risk to the unexposed = b/(b+d); RR = [a/(a+c)] / [b/(b+d)].

  38. Hypothetical Data

                             High Systolic BP   Normal BP
  Congestive Heart Failure   400                400
  No CHF                     1100               2600
                             1500               3000

  RR = (400/1500) / (400/3000) = 2.0
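The relative risk for the hypothetical CHF table follows directly from the formula on the previous slide; a minimal Python sketch with the cell counts from the slide:

```python
# Hypothetical CHF data from the slide:
# columns are High Systolic BP (exposed) vs Normal BP (unexposed)
a, b = 400, 400      # Congestive Heart Failure
c, d = 1100, 2600    # No CHF

risk_exposed = a / (a + c)      # 400/1500, risk among high systolic BP
risk_unexposed = b / (b + d)    # 400/3000, risk among normal BP
rr = risk_exposed / risk_unexposed
print(rr)
```

The exposed group's risk (about 27%) is twice the unexposed group's (about 13%), so RR = 2.0: high systolic BP is associated with double the risk of CHF in these data.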

  39. Case-Control Studies Sample on disease status and ask retrospectively about exposures (for rare diseases) • Marginal probabilities of exposure for cases and controls are valid. • Doesn’t require knowledge of the absolute risks of disease • For rare diseases, can approximate relative risk

  40. Case-Control Studies: [diagram: from the target population, cases (Disease) and controls (No Disease) are sampled; each group is then classified as exposed in the past or not exposed.]

  41. The Odds Ratio (OR)

                   Exposure (E)   No Exposure (~E)
  Disease (D)      a = P(D&E)     b = P(D&~E)
  No Disease (~D)  c = P(~D&E)    d = P(~D&~E)

  OR = odds of exposure among the diseased / odds of exposure among the disease-free = (a/b)/(c/d) = ad/bc.

  42. The Odds Ratio. Via Bayes’ rule, the odds ratio of exposure given disease equals the odds ratio of disease given exposure. When the disease is rare, P(~D) ≈ 1 in both exposure groups, so the OR approximates the RR: “the rare disease assumption.”

  43. Properties of the OR (simulation)

  44. Properties of the lnOR. The lnOR is approximately normally distributed, with standard deviation = sqrt(1/a + 1/b + 1/c + 1/d).

  45. Hypothetical Data

                   Smoker   Non-smoker
  Lung Cancer      20       10           30
  No lung cancer   6        24           30

  Note that the size of the smallest 2x2 cell determines the magnitude of the variance.
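The OR and the standard deviation of the lnOR for this table can be computed directly from the formulas on slides 41 and 44; a minimal Python sketch with the counts from the slide:

```python
import math

# Hypothetical lung-cancer data from the slide
a, b = 20, 10   # Lung cancer: smokers, non-smokers
c, d = 6, 24    # No lung cancer: smokers, non-smokers

odds_ratio = (a * d) / (b * c)                 # ad/bc
# Standard deviation of lnOR; the smallest cell (c = 6) contributes the largest term
sd_ln_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
print(odds_ratio, round(sd_ln_or, 2))
```

Here 1/c = 0.167 is the largest of the four terms, illustrating the slide's point that the smallest cell dominates the variance.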

  46. Example: Cell phones and brain tumors (cross-sectional data)

                           Brain tumor   No brain tumor
  Own a cell phone         5             347               352
  Don’t own a cell phone   3             88                91
                           8             435               453

  47. Same data, but use a Chi-square test or Fisher’s exact test

                           Brain tumor   No brain tumor
  Own                      5             347               352
  Don’t own                3             88                91
                           8             435               453

  48. Same data, but use the Odds Ratio

                           Brain tumor   No brain tumor
  Own a cell phone         5             347               352
  Don’t own a cell phone   3             88                91
                           8             435               453
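The odds ratio for the cell-phone table uses the same ad/bc formula; a minimal Python sketch with the counts from the slide:

```python
# Cross-sectional cell-phone data from the slide
a, b = 5, 347   # Own a cell phone: brain tumor, no brain tumor
c, d = 3, 88    # Don't own a cell phone: brain tumor, no brain tumor

odds_ratio = (a * d) / (b * c)   # ad/bc = (5*88)/(347*3)
print(round(odds_ratio, 3))
```

The OR comes out below 1 (about 0.42), i.e., in these hypothetical data cell-phone owners have lower odds of brain tumor than non-owners; with cells as small as 3 and 5, an exact test would be needed to judge significance.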
